[jira] [Commented] (HADOOP-16299) [JDK 11] Build fails without specifying -Djavac.version=11

2019-05-08 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836101#comment-16836101
 ] 

Akira Ajisaka commented on HADOOP-16299:


Thank you, [~tasanuma0829]!

> [JDK 11] Build fails without specifying -Djavac.version=11
> --
>
> Key: HADOOP-16299
> URL: https://issues.apache.org/jira/browse/HADOOP-16299
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16299.001.patch, HADOOP-16299.002.patch
>
>
> {{mvn install -DskipTests}} fails on Java 11 without specifying 
> {{-Djavac.version=11}}.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Fatal error compiling: error: option 
> --add-exports not allowed with target 1.8 -> [Help 1]
> {noformat}
> HADOOP-15941 added the {{--add-exports}} option when the Java version is 11, 
> but the option is not allowed when the javac target version is 1.8.
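>
> Until this is fixed, the summary implies a workaround: pass the target
> explicitly.
> {noformat}
> mvn install -DskipTests -Djavac.version=11
> {noformat}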






[jira] [Updated] (HADOOP-16299) [JDK 11] Build fails without specifying -Djavac.version=11

2019-05-08 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-16299:
--
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks for your contribution, [~ajisakaa], and thanks for 
your review, [~ste...@apache.org].

> [JDK 11] Build fails without specifying -Djavac.version=11
> --
>
> Key: HADOOP-16299
> URL: https://issues.apache.org/jira/browse/HADOOP-16299
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16299.001.patch, HADOOP-16299.002.patch
>
>
> {{mvn install -DskipTests}} fails on Java 11 without specifying 
> {{-Djavac.version=11}}.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Fatal error compiling: error: option 
> --add-exports not allowed with target 1.8 -> [Help 1]
> {noformat}
> HADOOP-15941 added the {{--add-exports}} option when the Java version is 11, 
> but the option is not allowed when the javac target version is 1.8.






[jira] [Commented] (HADOOP-16299) [JDK 11] Build fails without specifying -Djavac.version=11

2019-05-08 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836097#comment-16836097
 ] 

Takanobu Asanuma commented on HADOOP-16299:
---

+1. Will commit it later.

> [JDK 11] Build fails without specifying -Djavac.version=11
> --
>
> Key: HADOOP-16299
> URL: https://issues.apache.org/jira/browse/HADOOP-16299
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16299.001.patch, HADOOP-16299.002.patch
>
>
> {{mvn install -DskipTests}} fails on Java 11 without specifying 
> {{-Djavac.version=11}}.
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) 
> on project hadoop-annotations: Fatal error compiling: error: option 
> --add-exports not allowed with target 1.8 -> [Help 1]
> {noformat}
> HADOOP-15941 added the {{--add-exports}} option when the Java version is 11, 
> but the option is not allowed when the javac target version is 1.8.






[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
bharatviswa504 commented on a change in pull request #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282338372
 
 

 ##
 File path: 
hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/db/TestTypedRDBTableStore.java
 ##
 @@ -236,4 +249,66 @@ public void forEachAndIterator() throws Exception {
       }
     }
   }
+
+  @Test
+  public void testTypedTableWithCache() throws Exception {
+    int iterCount = 10;
+    try (Table<String, String> testTable = createTypedTableWithCache(
+        "Seven", TableCache.CACHETYPE.FULLCACHE)) {
+
+      for (int x = 0; x < iterCount; x++) {
+        String key = Integer.toString(x);
+        String value = Integer.toString(x);
+        testTable.addCacheEntry(new CacheKey<>(key), new CacheValue<>(value,
+            CacheValue.OperationType.CREATED, x));
+      }
+
+      // As we have added the entries to the cache, get should return the
+      // value even if it does not exist in the DB.
+      for (int x = 0; x < iterCount; x++) {
+        Assert.assertEquals(Integer.toString(x),
+            testTable.get(Integer.toString(x)));
+      }
+
+    }
+  }
+
+  @Test
+  public void testTypedTableWithCacheWithFewDeletedOperationType()
+      throws Exception {
+    int iterCount = 10;
+    try (Table<String, String> testTable = createTypedTableWithCache(
+        "Seven", TableCache.CACHETYPE.PARTIALCACHE)) {
+
+      for (int x = 0; x < iterCount; x++) {
+        String key = Integer.toString(x);
+        String value = Integer.toString(x);
+        if (x % 2 == 0) {
+          testTable.addCacheEntry(new CacheKey<>(key),
+              new CacheValue<>(value,
+                  CacheValue.OperationType.CREATED, x));
+        } else {
+          testTable.addCacheEntry(new CacheKey<>(key), new CacheValue<>(value,
+              CacheValue.OperationType.DELETED, x));
+        }
+      }
+
+      // CREATED entries should be served from the cache even if they do not
+      // exist in the DB; DELETED entries should read back as null.
+      for (int x = 0; x < iterCount; x++) {
+        if (x % 2 == 0) {
+          Assert.assertEquals(Integer.toString(x),
+              testTable.get(Integer.toString(x)));
+        } else {
+          Assert.assertNull(testTable.get(Integer.toString(x)));
+        }
+      }
+
+      testTable.cleanupCache(5);
+
+      GenericTestUtils.waitFor(() ->
+          ((TypedTable<String, String>) testTable).getCache().size() == 4,
+          100, 5000);
+    }
 
 Review comment:
   Done





[jira] [Updated] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-08 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16287:
---
Attachment: HADOOP-16287-005.patch

> KerberosAuthenticationHandler Trusted Proxy Support for Knox
> 
>
> Key: HADOOP-16287
> URL: https://issues.apache.org/jira/browse/HADOOP-16287
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16287-001.patch, HADOOP-16287-002.patch, 
> HADOOP-16287-004.patch, HADOOP-16287-005.patch, HADOOP-16827-003.patch
>
>
> Knox passes doAs with the end user while accessing the RM and WebHDFS REST 
> APIs. Currently KerberosAuthenticationHandler sets the remote user to Knox. 
> Trusted proxy support is needed, by reading the doAs query parameter.
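>
> A minimal sketch of the idea using Hadoop's existing proxy-user APIs
> ({{authenticatedUgi}} is a placeholder for the Kerberos-authenticated
> caller; this is illustrative, not the attached patch):
> {noformat}
> // Read doAs and wrap the authenticated caller as a proxy user.
> String doAsUser = request.getParameter("doAs");
> UserGroupInformation remoteUgi = (doAsUser == null) ? authenticatedUgi
>     : UserGroupInformation.createProxyUser(doAsUser, authenticatedUgi);
> // Throws AuthorizationException if Knox may not impersonate this user.
> ProxyUsers.authorize(remoteUgi, request.getRemoteAddr());
> {noformat}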






[jira] [Commented] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-08 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836081#comment-16836081
 ] 

Prabhu Joseph commented on HADOOP-16287:


Patch 5 fixes checkstyle issues.

> KerberosAuthenticationHandler Trusted Proxy Support for Knox
> 
>
> Key: HADOOP-16287
> URL: https://issues.apache.org/jira/browse/HADOOP-16287
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16287-001.patch, HADOOP-16287-002.patch, 
> HADOOP-16287-004.patch, HADOOP-16287-005.patch, HADOOP-16827-003.patch
>
>
> Knox passes doAs with the end user while accessing the RM and WebHDFS REST 
> APIs. Currently KerberosAuthenticationHandler sets the remote user to Knox. 
> Trusted proxy support is needed, by reading the doAs query parameter.






[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
bharatviswa504 commented on a change in pull request #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282337940
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/Table.java
 ##
 @@ -60,6 +62,9 @@ void putWithBatch(BatchOperation batch, KEY key, VALUE value)
    * Returns the value mapped to the given key in byte array or returns null
    * if the key is not found.
    *
+   * First it will check the cache; if the cache has an entry for the key,
+   * return that value, otherwise get it from the RocksDB table.
+   *
 
 Review comment:
   Done





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
bharatviswa504 commented on a change in pull request #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282337376
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/PartialTableCache.java
 ##
 @@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.utils.db.cache;
+
+import java.util.Iterator;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Evolving;
+
+
+
+/**
+ * This is used for the tables where we don't want to cache the entire table
+ * in memory.
+ */
+@Private
+@Evolving
+public class PartialTableCache<CACHEKEY, CACHEVALUE>
+    implements TableCache<CACHEKEY, CACHEVALUE> {
+
+  private final ConcurrentHashMap<CACHEKEY, CACHEVALUE> cache;
+  private final TreeSet<EpochEntry<CACHEKEY>> epochEntries;
+  private ExecutorService executorService;
+
+
+
+  public PartialTableCache() {
+    cache = new ConcurrentHashMap<>();
+    epochEntries = new TreeSet<EpochEntry<CACHEKEY>>();
+    // Created a singleThreadExecutor, so only one cleanup will be running at
+    // a time.
+    executorService = Executors.newSingleThreadExecutor();
+  }
+
+  @Override
+  public CACHEVALUE get(CACHEKEY cachekey) {
+    return cache.get(cachekey);
+  }
+
+  @Override
+  public void put(CACHEKEY cacheKey, CACHEVALUE value) {
+    cache.put(cacheKey, value);
+    CacheValue cacheValue = (CacheValue) cache.get(cacheKey);
+    epochEntries.add(new EpochEntry<>(cacheValue.getEpoch(), cacheKey));
+  }
+
+  @Override
+  public void cleanup(long epoch) {
+    executorService.submit(() -> evictCache(epoch));
+  }
+
+  @Override
+  public int size() {
+    return cache.size();
+  }
+
+  private void evictCache(long epoch) {
+    EpochEntry<CACHEKEY> currentEntry = null;
+    for (Iterator<EpochEntry<CACHEKEY>> iterator = epochEntries.iterator();
+        iterator.hasNext();) {
+      currentEntry = iterator.next();
+      CACHEKEY cachekey = currentEntry.getCachekey();
+      CacheValue cacheValue = (CacheValue) cache.get(cachekey);
+      if (cacheValue.getEpoch() <= epoch) {
+        cache.remove(cachekey);
+        iterator.remove();
+      }
+
+      // If currentEntry's epoch is greater than epoch, we have deleted all
+      // entries less than the specified epoch, so we can break.
+      if (currentEntry.getEpoch() > epoch) {
 
 Review comment:
   Done





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
bharatviswa504 commented on a change in pull request #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282337065
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/PartialTableCache.java
 ##
 @@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.utils.db.cache;
+
+import java.util.Iterator;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Evolving;
+
+
+
+/**
+ * This is used for the tables where we don't want to cache the entire table
+ * in memory.
+ */
+@Private
+@Evolving
+public class PartialTableCache<CACHEKEY, CACHEVALUE>
+    implements TableCache<CACHEKEY, CACHEVALUE> {
 
 Review comment:
   The cache is maintained for correctness: it is used for reads and for 
validating subsequent requests. The cache will be cleared once we flush to DB.





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
bharatviswa504 commented on a change in pull request #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282336866
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TypedTable.java
 ##
 @@ -71,6 +96,27 @@ public boolean isEmpty() throws IOException {
 
   @Override
   public VALUE get(KEY key) throws IOException {
+    // Here the metadata lock will guarantee that the cache is not updated
+    // for the same key during a get.
+    if (cache != null) {
+      CacheValue<VALUE> cacheValue = cache.get(new CacheKey<>(key));
+      if (cacheValue == null) {
+        return getFromTable(key);
+      } else {
+        // If the last operation on the cache value is DELETED, the key will
+        // eventually be removed from the DB, so we should return null.
+        if (cacheValue.getLastOperation() != CacheValue.OperationType.DELETED) {
+          return cacheValue.getValue();
+        } else {
+          return null;
+        }
+      }
+    } else {
+      return getFromTable(key);
 Review comment:
   For tables where the cache is disabled, we need to behave as before: just 
read from the DB and return the data. 





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
bharatviswa504 commented on a change in pull request #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282336707
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/Table.java
 ##
 @@ -97,6 +102,28 @@ void putWithBatch(BatchOperation batch, KEY key, VALUE value)
    */
   String getName() throws IOException;
 
+  /**
+   * Add an entry to the table cache.
+   *
+   * If the cacheKey already exists, it will override the entry.
+   * @param cacheKey
+   * @param cacheValue
+   */
 
 Review comment:
   After the operation is executed in applyTransaction, just before releasing 
the lock and sending the response to the client, we need to add the response to 
the cache, so that validation of subsequent read/write requests can be done 
against the cache/DB data.
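   
   A rough sketch of that ordering (lock and request names are placeholders,
   not the PR's API):
   ```java
   // Add the result to the table cache before releasing the lock, so that
   // subsequent requests validate against cache + DB state.
   omLock.lock();
   try {
     OMResponse response = executeRequest(request);   // applyTransaction work
     table.addCacheEntry(new CacheKey<>(key),
         new CacheValue<>(value, CacheValue.OperationType.CREATED, trxLogIndex));
     return response;   // the caller sends it to the client after unlock
   } finally {
     omLock.unlock();
   }
   ```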





[GitHub] [hadoop] bharatviswa504 commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
bharatviswa504 commented on a change in pull request #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282336434
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStore.java
 ##
 @@ -44,17 +45,20 @@
    */
   Table<byte[], byte[]> getTable(String name) throws IOException;
 
+
   /**
    * Gets an existing TableStore with implicit key/value conversion.
    *
    * @param name - Name of the TableStore to get
    * @param keyType
    * @param valueType
+   * @param cachetype - Type of cache to be used for this table.
    * @return - TableStore.
    * @throws IOException on Failure
    */
   <KEY, VALUE> Table<KEY, VALUE> getTable(String name,
-      Class<KEY> keyType, Class<VALUE> valueType) throws IOException;
+      Class<KEY> keyType, Class<VALUE> valueType,
+      TableCache.CACHETYPE cachetype) throws IOException;
 
 Review comment:
   Added this because for a few tables, like the bucket and volume tables, the 
plan is to maintain the full table information; for other tables we maintain a 
partial cache; and for a few tables we don't want to maintain a cache at all. 
(This is a common interface for all tables in Ozone SCM/OM, so having this 
option helps to know which kind of cache should be used for each table.)
   
   As these tables are consulted to validate almost every operation in OM, this 
might speed up validations such as bucket/volume existence checks.
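   
   For illustration, table creation could then look like this (table and value
   types are examples, not the patch itself):
   ```java
   // Full cache for the small, hot volume table; partial cache for keys.
   Table<String, OmVolumeArgs> volumeTable = store.getTable("volumeTable",
       String.class, OmVolumeArgs.class, TableCache.CACHETYPE.FULLCACHE);
   Table<String, OmKeyInfo> keyTable = store.getTable("keyTable",
       String.class, OmKeyInfo.class, TableCache.CACHETYPE.PARTIALCACHE);
   ```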
   
   





[GitHub] [hadoop] ben-roling commented on issue #794: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite

2019-05-08 Thread GitBox
ben-roling commented on issue #794: HADOOP-16085: use object version or etags 
to protect against inconsistent read after replace/overwrite
URL: https://github.com/apache/hadoop/pull/794#issuecomment-490726593
 
 
   > I will run the full test suite again against a bucket with versioning 
disabled just in case there are somehow other failures to uncover there since 
previously I only ran the suite against a bucket with versioning enabled.
   
   I completed the test run (us-west-2, bucket with versioning disabled):
   
   ```
   mvn -T 1C verify -Dparallel-tests -DtestsThreadCount=8 -Ds3guard -Ddynamo
   ```
   
   ```
   [ERROR] Tests run: 896, Failures: 1, Errors: 3, Skipped: 189
   ```
   
   The 3 errors and 1 failure were spread across 
ITestS3AContractGetFileStatusV1List, ITestDirectoryCommitMRJob, and 
ITestS3GuardToolDynamoDB, all of which succeeded when run individually.
   
   ```
   mvn -T 1C verify -Dtest=skip -Dit.test=ITestS3AContractGetFileStatusV1List 
-Ds3guard -Ddynamo
   mvn -T 1C verify -Dtest=skip -Dit.test=ITestDirectoryCommitMRJob -Ds3guard 
-Ddynamo
   mvn -T 1C verify -Dtest=skip -Dit.test=ITestS3GuardToolDynamoDB -Ds3guard 
-Ddynamo
   ```





[jira] [Comment Edited] (HADOOP-16251) ABFS: add FSMainOperationsBaseTest

2019-05-08 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16831936#comment-16831936
 ] 

Aaron Fabbri edited comment on HADOOP-16251 at 5/9/19 1:04 AM:
---

Thanks for the patch [~DanielZhou]. We really appreciate you adding extra test 
coverage for cloud filesystems (ABFS).

Couple of questions about the patch:
{noformat}
@Ignore("There shouldn't be permission check for getFileInfo")
public void testListStatusThrowsExceptionForUnreadableDir() {{noformat}
Since this is a listing test, wouldn't the READ | EXECUTE checks still be valid?

*EDIT: Nevermind on the getFileInfo comment below.. I confused HA check with 
permission check there.*

Also, I'm surprised about getFileStatus / getFileInfo being listed as "N/A" for 
permission checks. It seems wrong from a security perspective and -also, looking 
at the code, it doesn't seem to be the case; see this 
[link|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L3202-L3204]:-
{noformat}
HdfsFileStatus getFileInfo(final String src, boolean resolveLink,
    boolean needLocation, boolean needBlockToken) throws IOException {
  // if the client requests block tokens, then it can read data blocks
  // and should appear in the audit log as if getBlockLocations had been
  // called
  final String operationName = needBlockToken ? "open" : "getfileinfo";
  checkOperation(OperationCategory.READ);
  HdfsFileStatus stat = null;
  final FSPermissionChecker pc = getPermissionChecker();
  readLock();
  try {
    checkOperation(OperationCategory.READ);
    stat = FSDirStatAndListingOp.getFileInfo({noformat}
-Looks like the HDFS Permissions doc is incorrect, no?-


was (Author: fabbri):
Thanks for the patch [~DanielZhou]. We really appreciate you adding extra test 
coverage for cloud filesystems (ABFS).

Couple of questions about the patch:
{noformat}
@Ignore("There shouldn't be permission check for getFileInfo")
public void testListStatusThrowsExceptionForUnreadableDir() {{noformat}
Since this is a listing test, wouldn't the READ | EXECUTE checks still be valid?

Also, I'm surprised about getFileStatus / getFileInfo being listed as "N/A" for 
permission checks. It seems wrong from a security perspective and also, looking 
at the code, it doesn't seem to be the case - see this 
[link|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L3202-L3204]:
{noformat}
HdfsFileStatus getFileInfo(final String src, boolean resolveLink,
    boolean needLocation, boolean needBlockToken) throws IOException {
  // if the client requests block tokens, then it can read data blocks
  // and should appear in the audit log as if getBlockLocations had been
  // called
  final String operationName = needBlockToken ? "open" : "getfileinfo";
  checkOperation(OperationCategory.READ);
  HdfsFileStatus stat = null;
  final FSPermissionChecker pc = getPermissionChecker();
  readLock();
  try {
    checkOperation(OperationCategory.READ);
    stat = FSDirStatAndListingOp.getFileInfo({noformat}
Looks like the HDFS Permissions doc is incorrect, no?

> ABFS: add FSMainOperationsBaseTest
> --
>
> Key: HADOOP-16251
> URL: https://issues.apache.org/jira/browse/HADOOP-16251
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
>
> Just happened to see 
> "hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java";
>  ABFS could inherit this test to increase its test coverage.






[jira] [Commented] (HADOOP-16251) ABFS: add FSMainOperationsBaseTest

2019-05-08 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835999#comment-16835999
 ] 

Aaron Fabbri commented on HADOOP-16251:
---

Sorry for the confusion [~DanielZhou]. I misread that code. I saw the READ check 
and the AccessControlException catch and assumed it was a permission check, but 
it is not; it is checking HA status. I'll edit my comment above.

> ABFS: add FSMainOperationsBaseTest
> --
>
> Key: HADOOP-16251
> URL: https://issues.apache.org/jira/browse/HADOOP-16251
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
>
> Just happened to see 
> "hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FSMainOperationsBaseTest.java";
>  ABFS could inherit this test to increase its test coverage.






[GitHub] [hadoop] hadoop-yetus commented on issue #792: HDDS-1474. ozone.scm.datanode.id config should take path for a dir

2019-05-08 Thread GitBox
hadoop-yetus commented on issue #792: HDDS-1474. ozone.scm.datanode.id config 
should take path for a dir 
URL: https://github.com/apache/hadoop/pull/792#issuecomment-490702131
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | 0 | yamllint | 1 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 78 | Maven dependency ordering for branch |
   | +1 | mvninstall | 415 | trunk passed |
   | +1 | compile | 202 | trunk passed |
   | +1 | checkstyle | 57 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 843 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 128 | trunk passed |
   | 0 | spotbugs | 246 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 433 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 406 | the patch passed |
   | +1 | compile | 210 | the patch passed |
   | +1 | javac | 210 | the patch passed |
   | +1 | checkstyle | 84 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 730 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 127 | the patch passed |
   | +1 | findbugs | 450 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 151 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1244 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 6644 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/792 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml shellcheck shelldocs yamllint 
|
   | uname | Linux e27c9a094061 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0c5fa2e |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/6/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/6/testReport/ |
   | Max. process+thread count | 4450 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service 
hadoop-hdds/docs hadoop-ozone/dist U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-792/6/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #804: HDDS-1496. Support partial chunk reads and checksum verification

2019-05-08 Thread GitBox
hadoop-yetus commented on issue #804: HDDS-1496. Support partial chunk reads 
and checksum verification
URL: https://github.com/apache/hadoop/pull/804#issuecomment-490694851
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 13 | Maven dependency ordering for branch |
   | +1 | mvninstall | 397 | trunk passed |
   | +1 | compile | 200 | trunk passed |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 792 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 118 | trunk passed |
   | 0 | spotbugs | 232 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 408 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for patch |
   | +1 | mvninstall | 396 | the patch passed |
   | +1 | compile | 206 | the patch passed |
   | +1 | javac | 206 | the patch passed |
   | -0 | checkstyle | 29 | hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 667 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 126 | the patch passed |
   | +1 | findbugs | 430 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 136 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1228 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 5461 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/804 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5e1a4a28f05e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0c5fa2e |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/1/testReport/ |
   | Max. process+thread count | 4687 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/client U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-804/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hanishakoneru commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
hanishakoneru commented on a change in pull request #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282297165
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/PartialTableCache.java
 ##
 @@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.utils.db.cache;
+
+import java.util.Iterator;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Evolving;
+
+
+
+/**
+ * This is used for the tables where we don't want to cache the entire table
+ * in memory.
+ */
+@Private
+@Evolving
+public class PartialTableCache<CACHEKEY, CACHEVALUE>
+    implements TableCache<CACHEKEY, CACHEVALUE> {
+
+  private final ConcurrentHashMap<CACHEKEY, CACHEVALUE> cache;
+  private final TreeSet<EpochEntry<CACHEKEY>> epochEntries;
+  private ExecutorService executorService;
+
+
+
+  public PartialTableCache() {
+    cache = new ConcurrentHashMap<>();
+    epochEntries = new TreeSet<EpochEntry<CACHEKEY>>();
+    // Created a singleThreadExecutor, so only one cleanup will be running at
+    // a time.
+    executorService = Executors.newSingleThreadExecutor();
+  }
+
+  @Override
+  public CACHEVALUE get(CACHEKEY cachekey) {
+    return cache.get(cachekey);
+  }
+
+  @Override
+  public void put(CACHEKEY cacheKey, CACHEVALUE value) {
+    cache.put(cacheKey, value);
+    CacheValue cacheValue = (CacheValue) cache.get(cacheKey);
 
 Review comment:
   Instead of casting the cache.get() result to CacheValue, I think CACHEVALUE 
itself should extend CacheValue, so that it is guaranteed that the value part of 
TableCache is an instance of CacheValue. The same applies to CACHEKEY.
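   
   A sketch of the suggested bound (not the final code):
   ```java
   public interface TableCache<CACHEKEY extends CacheKey,
       CACHEVALUE extends CacheValue> {
     CACHEVALUE get(CACHEKEY cacheKey);
     void put(CACHEKEY cacheKey, CACHEVALUE value);
     void cleanup(long epoch);
     int size();
   }
   ```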





[GitHub] [hadoop] hanishakoneru commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
hanishakoneru commented on a change in pull request #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282214231
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/PartialTableCache.java
 ##
 @@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.utils.db.cache;
+
+import java.util.Iterator;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Evolving;
+
+
+
+/**
+ * This is used for the tables where we don't want to cache the entire table
+ * in memory.
+ */
+@Private
+@Evolving
+public class PartialTableCache<CACHEKEY, CACHEVALUE>
+    implements TableCache<CACHEKEY, CACHEVALUE> {
+
+  private final ConcurrentHashMap<CACHEKEY, CACHEVALUE> cache;
+  private final TreeSet<EpochEntry<CACHEKEY>> epochEntries;
+  private ExecutorService executorService;
+
+
+
+  public PartialTableCache() {
+    cache = new ConcurrentHashMap<>();
+    epochEntries = new TreeSet<EpochEntry<CACHEKEY>>();
+    // Created a singleThreadExecutor, so only one cleanup will be running at
+    // a time.
+    executorService = Executors.newSingleThreadExecutor();
+  }
+
+  @Override
+  public CACHEVALUE get(CACHEKEY cachekey) {
+    return cache.get(cachekey);
+  }
+
+  @Override
+  public void put(CACHEKEY cacheKey, CACHEVALUE value) {
+    cache.put(cacheKey, value);
+    CacheValue cacheValue = (CacheValue) cache.get(cacheKey);
+    epochEntries.add(new EpochEntry<>(cacheValue.getEpoch(), cacheKey));
+  }
+
+  @Override
+  public void cleanup(long epoch) {
+    executorService.submit(() -> evictCache(epoch));
+  }
+
+  @Override
+  public int size() {
+    return cache.size();
+  }
+
+  private void evictCache(long epoch) {
+    EpochEntry<CACHEKEY> currentEntry = null;
+    for (Iterator<EpochEntry<CACHEKEY>> iterator = epochEntries.iterator();
+        iterator.hasNext();) {
+      currentEntry = iterator.next();
+      CACHEKEY cachekey = currentEntry.getCachekey();
+      CacheValue cacheValue = (CacheValue) cache.get(cachekey);
+      if (cacheValue.getEpoch() <= epoch) {
+        cache.remove(cachekey);
+        iterator.remove();
+      }
+
+      // If currentEntry's epoch is greater than epoch, we have deleted all
+      // entries less than the specified epoch, so we can break.
+      if (currentEntry.getEpoch() > epoch) {
 
 Review comment:
   We can avoid the 2nd if check by putting it in an else block.
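   
   i.e. something like:
   ```java
   if (cacheValue.getEpoch() <= epoch) {
     cache.remove(cachekey);
     iterator.remove();
   } else {
     // Entries are sorted by epoch, so everything left is newer; stop here.
     break;
   }
   ```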





[GitHub] [hadoop] hanishakoneru commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
hanishakoneru commented on a change in pull request #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282215862
 
 

 ##
 File path: 
hadoop-hdds/common/src/test/java/org/apache/hadoop/utils/db/TestTypedRDBTableStore.java
 ##
 @@ -236,4 +249,66 @@ public void forEachAndIterator() throws Exception {
       }
     }
   }
+
+  @Test
+  public void testTypedTableWithCache() throws Exception {
+    int iterCount = 10;
+    try (Table<String, String> testTable = createTypedTableWithCache(
+        "Seven", TableCache.CACHETYPE.FULLCACHE)) {
+
+      for (int x = 0; x < iterCount; x++) {
+        String key = Integer.toString(x);
+        String value = Integer.toString(x);
+        testTable.addCacheEntry(new CacheKey<>(key), new CacheValue<>(value,
+            CacheValue.OperationType.CREATED, x));
+      }
+
+      // As we have added the entries to the cache, get should return the
+      // value even if it does not exist in the DB.
+      for (int x = 0; x < iterCount; x++) {
+        Assert.assertEquals(Integer.toString(x),
+            testTable.get(Integer.toString(x)));
+      }
+
+    }
+  }
+
+  @Test
+  public void testTypedTableWithCacheWithFewDeletedOperationType()
+      throws Exception {
+    int iterCount = 10;
+    try (Table<String, String> testTable = createTypedTableWithCache(
+        "Seven", TableCache.CACHETYPE.PARTIALCACHE)) {
+
+      for (int x = 0; x < iterCount; x++) {
+        String key = Integer.toString(x);
+        String value = Integer.toString(x);
+        if (x % 2 == 0) {
+          testTable.addCacheEntry(new CacheKey<>(key),
+              new CacheValue<>(value,
+                  CacheValue.OperationType.CREATED, x));
+        } else {
+          testTable.addCacheEntry(new CacheKey<>(key), new CacheValue<>(value,
+              CacheValue.OperationType.DELETED, x));
+        }
+      }
+
+      // CREATED entries should be served from the cache even if they do not
+      // exist in the DB; DELETED entries should read back as null.
+      for (int x = 0; x < iterCount; x++) {
+        if (x % 2 == 0) {
+          Assert.assertEquals(Integer.toString(x),
+              testTable.get(Integer.toString(x)));
+        } else {
+          Assert.assertNull(testTable.get(Integer.toString(x)));
+        }
+      }
+
+      testTable.cleanupCache(5);
+
+      GenericTestUtils.waitFor(() ->
+          ((TypedTable<String, String>) testTable).getCache().size() == 4,
+          100, 5000);
+    }
 
 Review comment:
   Can we also check that the entries remaining in the cache are the 
expected ones?
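   
   For example (a sketch; the expected survivors follow from the loop above):
   ```java
   // After cleanupCache(5) only the entries with epochs 6..9 should remain.
   for (int x = 0; x < iterCount; x++) {
     CacheValue<String> cached = ((TypedTable<String, String>) testTable)
         .getCache().get(new CacheKey<>(Integer.toString(x)));
     if (x >= 6) {
       Assert.assertNotNull(cached);
     } else {
       Assert.assertNull(cached);
     }
   }
   ```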





[GitHub] [hadoop] hanishakoneru commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
hanishakoneru commented on a change in pull request #798: HDDS-1499. 
OzoneManager Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282189907
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/Table.java
 ##
 @@ -60,6 +62,9 @@ void putWithBatch(BatchOperation batch, KEY key, VALUE value)
    * Returns the value mapped to the given key in byte array or returns null
    * if the key is not found.
    *
+   * First it will check the cache; if the cache has an entry for the key,
+   * return that value, otherwise get it from the RocksDB table.
+   *
 
 Review comment:
   The RDBTable implementation of Table does not check the cache. We should 
probably move this statement to TypedTable, which implements the cache.





[jira] [Commented] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-08 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16835961#comment-16835961
 ] 

Eric Yang commented on HADOOP-16287:


[~daryn] Any concerns with patch 4? If not, I will give +1 on this patch and 
commit.

> KerberosAuthenticationHandler Trusted Proxy Support for Knox
> 
>
> Key: HADOOP-16287
> URL: https://issues.apache.org/jira/browse/HADOOP-16287
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16287-001.patch, HADOOP-16287-002.patch, 
> HADOOP-16287-004.patch, HADOOP-16827-003.patch
>
>
> Knox passes doAs with the end user while accessing the RM and WebHDFS REST 
> APIs. Currently KerberosAuthenticationHandler sets the remote user to Knox. 
> Trusted proxy support is needed, by reading the doAs query parameter.






[GitHub] [hadoop] hadoop-yetus commented on issue #794: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite

2019-05-08 Thread GitBox
hadoop-yetus commented on issue #794: HADOOP-16085: use object version or etags 
to protect against inconsistent read after replace/overwrite
URL: https://github.com/apache/hadoop/pull/794#issuecomment-490687271
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 30 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 25 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 66 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1039 | trunk passed |
   | +1 | compile | 1020 | trunk passed |
   | +1 | checkstyle | 141 | trunk passed |
   | +1 | mvnsite | 132 | trunk passed |
   | +1 | shadedclient | 1004 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 106 | trunk passed |
   | 0 | spotbugs | 67 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 185 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 76 | the patch passed |
   | +1 | compile | 958 | the patch passed |
   | +1 | javac | 958 | the patch passed |
   | -0 | checkstyle | 144 | root: The patch generated 29 new + 70 unchanged - 
4 fixed = 99 total (was 74) |
   | +1 | mvnsite | 125 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 685 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 105 | the patch passed |
   | +1 | findbugs | 203 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 528 | hadoop-common in the patch passed. |
   | +1 | unit | 285 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 6939 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-794/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/794 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux de5a731ca399 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0c5fa2e |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-794/4/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-794/4/testReport/ |
   | Max. process+thread count | 1448 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-794/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #794: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite

2019-05-08 Thread GitBox
hadoop-yetus commented on issue #794: HADOOP-16085: use object version or etags 
to protect against inconsistent read after replace/overwrite
URL: https://github.com/apache/hadoop/pull/794#issuecomment-490676538
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 57 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 25 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 78 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1254 | trunk passed |
   | +1 | compile | 1276 | trunk passed |
   | +1 | checkstyle | 162 | trunk passed |
   | +1 | mvnsite | 135 | trunk passed |
   | +1 | shadedclient | 1043 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 109 | trunk passed |
   | 0 | spotbugs | 67 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 193 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 79 | the patch passed |
   | +1 | compile | 1104 | the patch passed |
   | +1 | javac | 1104 | the patch passed |
   | -0 | checkstyle | 146 | root: The patch generated 29 new + 70 unchanged - 
4 fixed = 99 total (was 74) |
   | +1 | mvnsite | 123 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 671 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 89 | the patch passed |
   | +1 | findbugs | 207 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 562 | hadoop-common in the patch passed. |
   | +1 | unit | 270 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 7587 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-794/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/794 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux cad520a187b1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3418bbb |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-794/3/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-794/3/testReport/ |
   | Max. process+thread count | 1387 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-794/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hanishakoneru opened a new pull request #804: HDDS-1496. Support partial chunk reads and checksum verification

2019-05-08 Thread GitBox
hanishakoneru opened a new pull request #804: HDDS-1496. Support partial chunk 
reads and checksum verification
URL: https://github.com/apache/hadoop/pull/804
 
 
   Partial chunk reads and checksum verifications




[jira] [Commented] (HADOOP-16279) S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)

2019-05-08 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835944#comment-16835944
 ] 

Aaron Fabbri commented on HADOOP-16279:
---

Thanks for the work on this stuff [~gabor.bota]. I commented on the PR. The 
logic looks pretty good, but I think the design needs discussion here. The 
current patch conflates two different ideas:

1. "Authoritative TTL": how fresh a MetadataStore entry needs to be for S3A to 
skip S3 query.
2. "Max entry lifetime" in MetadataStore.

I think these concepts should be kept separate in the public APIs/configs at 
least.

There are a couple of cases when querying the MetadataStore (MS):
I. MetadataStore returns null (no information on that path).
II. MetadataStore returns something (has a metadata entry for that path).
  II.a. entry is newer than the authoritative TTL (S3A may short-circuit and skip the S3 query).
  II.b. entry is older than the authoritative TTL (there is data, but S3A needs to also query S3).

The patch combines II.b and I.

Sticking with the "general design, specific implementation" ideal, I'd keep the 
public interfaces and config params designed as above instead. That doesn't 
prevent you from doing a simpler implementation (e.g. for now, return null 
from S3Guard.getWithTtl() in case II.b, as you do in your patch; that works 
because it *does* cause S3A to query S3).

So the patch made sense except for the naming and description of the 
configuration parameter (I think it should apply specifically to "is 
authoritative", not to the existence of an entry in the MS). And I didn't 
understand why we need more prune() functions added to the MS interface. Also, 
I thought the LocalMetadataStore use of guava Cache meant the work was already 
done there?

My hope is that later on, we can replace this implementation of II.b (where 
getWithTtl() returns null) with smarter logic that allows you to set a policy 
for handling S3 versus MS conflicts. (In this case, get() returns a 
PathMetadata; S3A would check whether the auth TTL has expired, if so still 
query S3, and if the data in S3 and the MS conflict, take action depending on 
the configured conflict policy.)

Shout if I can clarify this at all.
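
To make the separation concrete, here is a minimal sketch of a getWithTtl() 
honouring both concepts. The authoritativeTtl and maxEntryLifetime parameters 
are hypothetical names for illustration only; ms.get(), 
ITtlTimeProvider.getNow() and PathMetadata.isExpired() come from the current 
patch:

    // Sketch only: keeps "authoritative TTL" (freshness needed to skip the
    // S3 query) separate from "max entry lifetime" (when an entry is dropped).
    public static PathMetadata getWithTtl(MetadataStore ms, Path path,
        ITtlTimeProvider timeProvider, long authoritativeTtl,
        long maxEntryLifetime) throws IOException {
      final long now = timeProvider.getNow();
      final PathMetadata md = ms.get(path);
      if (md == null) {
        return null;                                 // case I: no entry
      }
      if (md.isExpired(maxEntryLifetime, now)) {
        return null;                                 // entry too old to trust
      }
      if (!md.isExpired(authoritativeTtl, now)) {
        return md;                                   // case II.a: fresh
      }
      return null;  // case II.b: interim behaviour, forces an S3 query
    }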



> S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)
> ---
>
> Key: HADOOP-16279
> URL: https://issues.apache.org/jira/browse/HADOOP-16279
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
> added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}} 
> extends {{ExpirableMetadata}}, so all metadata entries in ddb can expire, but 
> the implementation is not done yet. 
> To complete this feature the following should be done:
> * Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
> * Implement metadata entry and tombstone expiry 
> I would like to start a debate on whether we need to use separate expiry 
> times for entries and tombstones. My +1 on not using separate settings - so 
> only one config name and value.
> 
> Notes:
> * In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, 
> using an existing feature in guava's cache implementation. Expiry is set with 
> {{fs.s3a.s3guard.local.ttl}}.
> * LocalMetadataStore's TTL and this TTL are different. That TTL uses the 
> guava cache's internal solution for the TTL of these entries. This is an 
> S3AFileSystem-level solution in S3Guard, a layer above all metadata stores.
> * This is not the same, and not using the [DDB's TTL 
> feature|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html].
>  We need a different behavior than what ddb promises: [cleaning once a day 
> with a background 
> job|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html]
>  is not usable for this feature - although it can be used as a general 
> cleanup solution separately and independently from S3Guard.
> * Use the same ttl for entries and authoritative directory listing
> * All entries can be expired. Then the returned metadata from the MS will be 
> null.
> * Add two new methods pruneExpiredTtl() and pruneExpiredTtl(String keyPrefix) 
> to MetadataStore interface. These methods will delete all expired metadata 
> from the ms.
> * Use last_updated field in ms for both file metadata and authoritative 
> directory expiry.





[GitHub] [hadoop] ajfabbri commented on a change in pull request #802: HADOOP-16279. S3Guard: Implement time-based (TTL) expiry for entries …

2019-05-08 Thread GitBox
ajfabbri commented on a change in pull request #802: HADOOP-16279. S3Guard: 
Implement time-based (TTL) expiry for entries …
URL: https://github.com/apache/hadoop/pull/802#discussion_r282255656
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java
 ##
 @@ -549,10 +539,43 @@ public static void putWithTtl(MetadataStore ms, 
DirListingMetadata dirMeta,
 ms.put(dirMeta);
   }
 
+  public static void putWithTtl(MetadataStore ms, PathMetadata fileMeta,
+  ITtlTimeProvider timeProvider) throws IOException {
+fileMeta.setLastUpdated(timeProvider.getNow());
+ms.put(fileMeta);
+  }
+
+  public static void putWithTtl(MetadataStore ms,
+  Collection<PathMetadata> fileMetas, ITtlTimeProvider timeProvider)
+  throws IOException {
+fileMetas.forEach(
+fileMeta -> fileMeta.setLastUpdated(timeProvider.getNow())
 
 Review comment:
   Small optimization: call getNow() once, save it in a local variable, and 
reuse it. Getting the system time can sometimes be a bit slow.
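   Roughly (based only on the hunk above):
   
       long now = timeProvider.getNow();   // fetch the system time once
       fileMetas.forEach(fileMeta -> fileMeta.setLastUpdated(now));
       ms.put(fileMetas);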




[GitHub] [hadoop] ajfabbri commented on a change in pull request #802: HADOOP-16279. S3Guard: Implement time-based (TTL) expiry for entries …

2019-05-08 Thread GitBox
ajfabbri commented on a change in pull request #802: HADOOP-16279. S3Guard: 
Implement time-based (TTL) expiry for entries …
URL: https://github.com/apache/hadoop/pull/802#discussion_r282267382
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStore.java
 ##
 @@ -237,6 +237,31 @@ void move(Collection<Path> pathsToDelete,
   void prune(long modTime, String keyPrefix)
   throws IOException, UnsupportedOperationException;
 
+  /**
+   * Clear any metadata which is expired with TTL.
+   * Implementations MUST clear expired file metadata, and expired directory
+   * metadata.
+   * (s3a itself does not track modification time for directories).
+   * Implementations may also choose to throw UnsupportedOperationException
+   * instead.
+   *
+   * @throws IOException if there is an error
+   * @throws UnsupportedOperationException if not implemented
+   */
+  void pruneExpiredTtl(ITtlTimeProvider timeProvider) throws IOException,
 
 Review comment:
   Why do we need new prune functions here?




[GitHub] [hadoop] ajfabbri commented on a change in pull request #802: HADOOP-16279. S3Guard: Implement time-based (TTL) expiry for entries …

2019-05-08 Thread GitBox
ajfabbri commented on a change in pull request #802: HADOOP-16279. S3Guard: 
Implement time-based (TTL) expiry for entries …
URL: https://github.com/apache/hadoop/pull/802#discussion_r282249645
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3Guard.java
 ##
 @@ -549,10 +539,43 @@ public static void putWithTtl(MetadataStore ms, 
DirListingMetadata dirMeta,
 ms.put(dirMeta);
   }
 
+  public static void putWithTtl(MetadataStore ms, PathMetadata fileMeta,
+  ITtlTimeProvider timeProvider) throws IOException {
+fileMeta.setLastUpdated(timeProvider.getNow());
+ms.put(fileMeta);
+  }
+
+  public static void putWithTtl(MetadataStore ms,
+  Collection<PathMetadata> fileMetas, ITtlTimeProvider timeProvider)
+  throws IOException {
+fileMetas.forEach(
+fileMeta -> fileMeta.setLastUpdated(timeProvider.getNow())
+);
+ms.put(fileMetas);
+  }
+
+  public static PathMetadata getWithTtl(MetadataStore ms, Path path,
+  ITtlTimeProvider timeProvider) throws IOException {
+long ttl = timeProvider.getMetadataTtl();
+
+final PathMetadata pathMetadata = ms.get(path);
+
+if(pathMetadata != null) {
+  if(!pathMetadata.isExpired(ttl, timeProvider.getNow())) {
+return pathMetadata;
+  } else {
+LOG.debug("PathMetadata TTl for {} is expired in metadata store.");
 
 Review comment:
   debug() is missing format arg here
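   
   i.e. the fix is just to pass the format argument, using the path parameter 
already in scope in getWithTtl():
   
       LOG.debug("PathMetadata TTL for {} is expired in metadata store.", path);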




[GitHub] [hadoop] ajfabbri commented on a change in pull request #802: HADOOP-16279. S3Guard: Implement time-based (TTL) expiry for entries …

2019-05-08 Thread GitBox
ajfabbri commented on a change in pull request #802: HADOOP-16279. S3Guard: 
Implement time-based (TTL) expiry for entries …
URL: https://github.com/apache/hadoop/pull/802#discussion_r282261248
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
 ##
 @@ -1502,11 +1502,11 @@
 
 
 
-<name>fs.s3a.metadatastore.authoritative.dir.ttl</name>
+<name>fs.s3a.metadatastore.metadata.ttl</name>
 
 Review comment:
   Two things: 1. We need to make sure this is OK with compatibility rules 
(changing a public API, essentially; not sure which releases have happened--if 
any--since this was added). 2. I think we want to separate (a) "is 
authoritative" from (b) "does metadata exist" in the metadata store. I think 
the ideal would be having something like 
`fs.s3a.metadatastore.authoritative.ttl`, which says how long S3A treats MS 
data as fresh enough to skip the S3 query, and then some other parameter (the 
existing "prune age" may be sufficient) saying when metadata should be 
deleted. I'll start a bigger discussion on the JIRA.
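   
   To illustrate the split I have in mind (both key names below are 
hypothetical, pending that discussion; Configuration.getTimeDuration() is the 
standard Hadoop API):
   
       // Sketch only: one knob for freshness, one for deletion.
       long authTtl = conf.getTimeDuration(
           "fs.s3a.metadatastore.authoritative.ttl",   // assumed name
           15, TimeUnit.MINUTES);   // how long MS data lets S3A skip S3
       long pruneAge = conf.getTimeDuration(
           "fs.s3a.metadatastore.prune.age",           // assumed name
           24, TimeUnit.HOURS);     // when MS entries should be deleted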




[GitHub] [hadoop] hadoop-yetus commented on issue #725: HDDS-1422. Exception during DataNode shutdown. Contributed by Arpit A…

2019-05-08 Thread GitBox
hadoop-yetus commented on issue #725: HDDS-1422. Exception during DataNode 
shutdown. Contributed by Arpit A…
URL: https://github.com/apache/hadoop/pull/725#issuecomment-490664627
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 51 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 435 | trunk passed |
   | +1 | compile | 212 | trunk passed |
   | +1 | checkstyle | 52 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 841 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 131 | trunk passed |
   | 0 | spotbugs | 325 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 567 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 464 | the patch passed |
   | +1 | compile | 251 | the patch passed |
   | +1 | javac | 251 | the patch passed |
   | +1 | checkstyle | 65 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 769 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 155 | the patch passed |
   | +1 | findbugs | 558 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 155 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1344 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 6961 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestHddsDatanodeService |
   |   | hadoop.ozone.container.common.volume.TestVolumeSet |
   |   | hadoop.ozone.container.common.volume.TestVolumeSetDiskChecks |
   |   | hadoop.ozone.container.keyvalue.TestKeyValueContainer |
   |   | hadoop.ozone.container.common.TestDatanodeStateMachine |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerCommandHandler
 |
   |   | hadoop.ozone.container.keyvalue.TestBlockManagerImpl |
   |   | hadoop.ozone.container.keyvalue.TestKeyValueBlockIterator |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.container.common.volume.TestHddsVolume |
   |   | hadoop.ozone.container.keyvalue.TestKeyValueContainerMarkUnhealthy |
   |   | 
hadoop.ozone.container.common.volume.TestRoundRobinVolumeChoosingPolicy |
   |   | hadoop.ozone.container.common.impl.TestHddsDispatcher |
   |   | hadoop.ozone.container.keyvalue.TestKeyValueContainerCheck |
   |   | hadoop.ozone.container.keyvalue.TestKeyValueHandler |
   |   | hadoop.ozone.container.keyvalue.TestChunkManagerImpl |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.scm.pipeline.TestPipelineManagerMXBean |
   |   | hadoop.ozone.web.TestOzoneWebAccess |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.scm.node.TestQueryNode |
   |   | hadoop.ozone.web.client.TestOzoneClient |
   |   | hadoop.ozone.om.TestOmInit |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.hdds.scm.pipeline.TestSCMPipelineManager |
   |   | hadoop.ozone.client.rpc.TestHybridPipelineOnDatanode |
   |   | hadoop.ozone.container.common.TestBlockDeletingService |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.TestContainerStateMachineIdempotency |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.container.metrics.TestContainerMetrics |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachine |
   |   | hadoop.ozone.scm.node.TestSCMNodeMetrics |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.TestMiniChaosOzoneCluster |
   |   | hadoop.ozone.om.TestContainerReportWithKeys |
   |   | hadoop.ozone.om.TestMultipleContainerReadWrite |
   |   | hadoop.ozone.container.common.impl.TestContainerPersistence |
   |   | hadoop.hdds.scm.container.TestContainerStateManagerIntegration |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.ozone.TestContainerOperations |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.scm.TestXceiverClientMetrics |
   |   | hadoop.ozone.scm.TestAllocateContainer |
   |   | hadoop.ozone.container.ozoneimpl.TestSecureOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.web.client.TestKeys |

[GitHub] [hadoop] ben-roling commented on issue #803: HADOOP-16085: S3Guard to use object version or etags (interim PR)

2019-05-08 Thread GitBox
ben-roling commented on issue #803: HADOOP-16085: S3Guard to use object version 
or etags (interim PR)
URL: https://github.com/apache/hadoop/pull/803#issuecomment-490660850
 
 
   The changes here looked good to me and I pulled this into #794 as mentioned 
here:
   https://github.com/apache/hadoop/pull/794#issuecomment-490642722




[GitHub] [hadoop] ben-roling commented on issue #794: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite

2019-05-08 Thread GitBox
ben-roling commented on issue #794: HADOOP-16085: use object version or etags 
to protect against inconsistent read after replace/overwrite
URL: https://github.com/apache/hadoop/pull/794#issuecomment-490659830
 
 
   The latest commit fixes the test failures I was seeing against a bucket 
with versioning disabled.  In the (etag, client) case for 
`testRenameEventuallyConsistentFile`, the stubbing of inconsistent responses 
from AmazonS3.copyObject() was incorrect.  For that case we should never see a 
"precondition failed" response, since we don't pass any eTag or versionId 
qualification on the request.
   
   There were a few other failures in test methods that require versioning, 
since I hadn't copied the code that executes the JUnit assumption checking 
that versioning is enabled.
   
   I ran the full ITestS3ARemoteFileChanged once each against a bucket with 
versioning enabled and a bucket with versioning disabled and all tests either 
succeeded or were skipped as expected.
   
   I will run the full test suite again against a bucket with versioning 
disabled just in case there are somehow other failures to uncover there since 
previously I only ran the suite against a bucket with versioning _enabled_.
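   
   For reference, the copy path only risks a 412 when the request actually 
carries a qualification. A hedged sketch with the AWS SDK for Java v1 (names 
like expectedEtag are placeholders):
   
       // Only a qualified copy can fail the server-side precondition check;
       // an unqualified CopyObjectRequest can never see "precondition failed".
       CopyObjectRequest req =
           new CopyObjectRequest(srcBucket, srcKey, dstBucket, dstKey)
               .withMatchingETagConstraint(expectedEtag);
       s3.copyObject(req);   // fails if the source etag no longer matches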




[GitHub] [hadoop] ben-roling commented on issue #794: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite

2019-05-08 Thread GitBox
ben-roling commented on issue #794: HADOOP-16085: use object version or etags 
to protect against inconsistent read after replace/overwrite
URL: https://github.com/apache/hadoop/pull/794#issuecomment-490642722
 
 
   I looked over your changes and they all made sense to me.  Thanks for 
cleaning up my mistakes :) . I've fast-forwarded this PR branch to include your 
commit.
   
   The change to add the annotation that labeled the parameters on the 
parameterized tests is especially nice.  Embarrassingly I hadn't learned about 
that one yet.
   
   After pulling your changes in I re-ran `testRenameEventuallyConsistentFile` 
again and all permutations succeeded against a bucket with object versioning 
enabled.  I did see a reproducible failure on (etag, client) against a bucket 
with versioning disabled.  I'll dig into that further.  Which permutations are 
failing for you?




[GitHub] [hadoop] eyanghwx commented on issue #800: HDDS-1458. Create a maven profile to run fault injection tests

2019-05-08 Thread GitBox
eyanghwx commented on issue #800: HDDS-1458. Create a maven profile to run 
fault injection tests
URL: https://github.com/apache/hadoop/pull/800#issuecomment-490642086
 
 
   Fault injection tests include disk tests and network tests.  Blockade is 
only a network test.  There are other scenarios missing that need to be 
included.




[GitHub] [hadoop] ben-roling commented on issue #794: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite

2019-05-08 Thread GitBox
ben-roling commented on issue #794: HADOOP-16085: use object version or etags 
to protect against inconsistent read after replace/overwrite
URL: https://github.com/apache/hadoop/pull/794#issuecomment-490634283
 
 
   Thanks Steve!  I'll have a look over it and see what's up with 
`testRenameEventuallyConsistentFile`.




[GitHub] [hadoop] arp7 commented on issue #725: HDDS-1422. Exception during DataNode shutdown. Contributed by Arpit A…

2019-05-08 Thread GitBox
arp7 commented on issue #725: HDDS-1422. Exception during DataNode shutdown. 
Contributed by Arpit A…
URL: https://github.com/apache/hadoop/pull/725#issuecomment-490627806
 
 
   Addressed issues flagged by CI, and rebased to current trunk.




[GitHub] [hadoop] hadoop-yetus commented on issue #654: HADOOP-15183 S3Guard store becomes inconsistent after partial failure of rename

2019-05-08 Thread GitBox
hadoop-yetus commented on issue #654: HADOOP-15183 S3Guard store becomes 
inconsistent after partial failure of rename
URL: https://github.com/apache/hadoop/pull/654#issuecomment-490609283
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 18 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1140 | trunk passed |
   | +1 | compile | 1121 | trunk passed |
   | +1 | checkstyle | 143 | trunk passed |
   | +1 | mvnsite | 120 | trunk passed |
   | +1 | shadedclient | 978 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 91 | trunk passed |
   | 0 | spotbugs | 64 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 191 | trunk passed |
   | -0 | patch | 95 | Used diff version of patch file. Binary files and 
potentially other changes not applied. Please rebase and squash commits if 
necessary. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 80 | the patch passed |
   | +1 | compile | 1048 | the patch passed |
   | +1 | javac | 1048 | the patch passed |
   | -0 | checkstyle | 141 | root: The patch generated 54 new + 69 unchanged - 
1 fixed = 123 total (was 70) |
   | +1 | mvnsite | 120 | the patch passed |
   | -1 | whitespace | 0 | The patch has 2 line(s) that end in whitespace. Use 
git apply --whitespace=fix <<patch_file>>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 666 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 30 | hadoop-tools_hadoop-aws generated 3 new + 1 unchanged 
- 0 fixed = 4 total (was 1) |
   | -1 | findbugs | 73 | hadoop-tools/hadoop-aws generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 511 | hadoop-common in the patch passed. |
   | -1 | unit | 3651 | hadoop-aws in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 10463 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  org.apache.hadoop.fs.s3a.s3guard.PathOrderComparators$TopmostFirst 
implements Comparator but not Serializable  At 
PathOrderComparators.java:Serializable  At PathOrderComparators.java:[lines 
69-89] |
   |  |  org.apache.hadoop.fs.s3a.s3guard.PathOrderComparators$TopmostLast 
implements Comparator but not Serializable  At 
PathOrderComparators.java:Serializable  At PathOrderComparators.java:[lines 
98-109] |
   | Failed junit tests | hadoop.fs.s3a.commit.staging.TestStagingCommitter |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/23/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/654 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 7a7d8342fcbb 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9b0aace |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/23/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/23/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/23/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/23/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/23/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/23/testReport/ |
   | Max. process+thread count | 1463 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-654/23/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


[GitHub] [hadoop] avijayanhwx commented on issue #799: HDDS-1451 : SCMBlockManager findPipeline and createPipeline are not lock protected.

2019-05-08 Thread GitBox
avijayanhwx commented on issue #799: HDDS-1451 : SCMBlockManager findPipeline 
and createPipeline are not lock protected.
URL: https://github.com/apache/hadoop/pull/799#issuecomment-490596428
 
 
   /label ozone




[GitHub] [hadoop] avijayanhwx commented on issue #801: HDDS-1500 : Allocate block failures in client should print exception trace.

2019-05-08 Thread GitBox
avijayanhwx commented on issue #801: HDDS-1500 : Allocate block failures in 
client should print exception trace.
URL: https://github.com/apache/hadoop/pull/801#issuecomment-490596249
 
 
   /label ozone




[GitHub] [hadoop] hadoop-yetus commented on issue #801: HDDS-1500 : Allocate block failures in client should print exception trace.

2019-05-08 Thread GitBox
hadoop-yetus commented on issue #801: HDDS-1500 : Allocate block failures in 
client should print exception trace.
URL: https://github.com/apache/hadoop/pull/801#issuecomment-490592920
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 395 | trunk passed |
   | +1 | compile | 205 | trunk passed |
   | +1 | checkstyle | 49 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 752 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 117 | trunk passed |
   | 0 | spotbugs | 232 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 411 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 379 | the patch passed |
   | +1 | compile | 202 | the patch passed |
   | +1 | javac | 202 | the patch passed |
   | +1 | checkstyle | 55 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 591 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 121 | the patch passed |
   | +1 | findbugs | 428 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 134 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1492 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 5535 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-801/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/801 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ee90bb429995 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3418bbb |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-801/3/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-801/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-801/3/testReport/ |
   | Max. process+thread count | 5405 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/client U: hadoop-ozone/client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-801/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] steveloughran commented on issue #794: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite

2019-05-08 Thread GitBox
steveloughran commented on issue #794: HADOOP-16085: use object version or 
etags to protect against inconsistent read after replace/overwrite
URL: https://github.com/apache/hadoop/pull/794#issuecomment-490587966
 
 
   See #803




[GitHub] [hadoop] steveloughran opened a new pull request #803: HADOOP-16085: S3Guard to use object version or etags (interim PR)

2019-05-08 Thread GitBox
steveloughran opened a new pull request #803: HADOOP-16085: S3Guard to use 
object version or etags (interim PR)
URL: https://github.com/apache/hadoop/pull/803
 
 
   This is #794 with my edits added.




[jira] [Commented] (HADOOP-16279) S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)

2019-05-08 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835785#comment-16835785
 ] 

Aaron Fabbri commented on HADOOP-16279:
---

[~ste...@apache.org] I'd argue LocalMetadataStore is still useful-- but if I'm 
the only one, we could consider cutting it. You should be able to use it as a 
metadata cache for read-only or single-writer operations to speed things up in 
real-world workloads (think setting it up as authoritative on a distcp, for 
example).

I'll take a peek at the PR here. Thanks for working on this [~gabor.bota]
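
For anyone wanting to try that, a hedged sketch of the wiring (these are the 
standard S3Guard configuration keys; treat the exact setup as an assumption to 
verify for your workload):

    // Sketch: LocalMetadataStore as a single-process metadata cache.
    Configuration conf = new Configuration();
    conf.set("fs.s3a.metadatastore.impl",
        "org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore");
    conf.setBoolean("fs.s3a.metadatastore.authoritative", true);
    FileSystem fs = FileSystem.get(URI.create("s3a://bucket/"), conf);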






[GitHub] [hadoop] steveloughran commented on issue #794: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite

2019-05-08 Thread GitBox
steveloughran commented on issue #794: HADOOP-16085: use object version or 
etags to protect against inconsistent read after replace/overwrite
URL: https://github.com/apache/hadoop/pull/794#issuecomment-490587418
 
 
   Right, I've done my edits and will put it up as a PR alongside that: if you 
cherry pick my patch in here, then I'll close/delete that one and this will 
have everything in.
   
   That patch is me just going through my review comments and doing them.
   
   I am seeing failures with `testRenameEventuallyConsistentFile` on some 
options, despite my efforts to understand it. Either the mocked number of 
times to fake a failure is wrong, my changed retry policy isn't (now) being 
overridden, or something else is up. Can you test and make sure it is still 
good for you?




[jira] [Commented] (HADOOP-16278) With S3A Filesystem, Long Running services End up Doing lot of GC and eventually die

2019-05-08 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835783#comment-16835783
 ] 

Aaron Fabbri commented on HADOOP-16278:
---

Agreed, +1 this simple patch stopping the quantiles on FS close.
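
For context, the shape of the fix is to cancel the scheduled rollover tasks 
when the instrumentation shuts down, along the lines of the sketch below 
(assuming MutableQuantiles.stop() cancels its scheduledTask; 
registeredQuantiles is a hypothetical field tracking the quantiles this 
instance created):

    // Sketch: stop each quantile on close so the shared scheduler's
    // task queue cannot grow without bound.
    public void close() {
      for (MutableQuantiles q : registeredQuantiles) {  // assumed field
        q.stop();   // cancels the per-second rollover task
      }
    }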

> With S3A Filesystem, Long Running services End up Doing lot of GC and 
> eventually die
> 
>
> Key: HADOOP-16278
> URL: https://issues.apache.org/jira/browse/HADOOP-16278
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, hadoop-aws, metrics
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Rajat Khandelwal
>Priority: Major
> Fix For: 3.1.3
>
> Attachments: HADOOP-16278.patch, Screenshot 2019-04-30 at 12.52.42 
> PM.png, Screenshot 2019-04-30 at 2.33.59 PM.png
>
>
> I'll start with the symptoms and eventually come to the cause. 
>  
> We are using HDP 3.1 and noticed that every couple of days the Hive Metastore 
> starts doing GC, sometimes with 30-minute-long pauses, although nothing is 
> collected and the heap remains fully used. 
>  
> Next, we looked at the Heap Dump and found that 99% of the memory is taken up 
> by one Executor Service for its task queue. 
>  
> !Screenshot 2019-04-30 at 12.52.42 PM.png!
> The Instance is Created like this:
> {{ private static final ScheduledExecutorService scheduler = Executors}}
>  {{ .newScheduledThreadPool(1, new ThreadFactoryBuilder().setDaemon(true)}}
>  {{ .setNameFormat("MutableQuantiles-%d").build());}}
>  
> So all the instances of MutableQuantiles are using a shared single-threaded 
> ExecutorService.
> The second thing to notice is this block of code in the Constructor of 
> MutableQuantiles:
> {{this.scheduledTask = scheduler.scheduleAtFixedRate(new 
> MutableQuantiles.RolloverSample(this), (long)interval, (long)interval, 
> TimeUnit.SECONDS);}}
> So as soon as a MutableQuantiles instance is created, one task is scheduled 
> at a fixed rate. Instead of that, it could schedule them at a fixed delay 
> (refer to HADOOP-16248). 
> Now coming to why it's related to S3. 
>  
> S3AFileSystem Creates an instance of S3AInstrumentation, which creates two 
> quantiles (related to S3Guard) with 1s(hardcoded) interval and leaves them 
> hanging. By hanging I mean perpetually scheduled. As and when new Instances 
> of S3AFileSystem are created, two new quantiles are created, which in turn 
> create two scheduled tasks and never cancel them. This way number of 
> scheduled tasks keeps on growing without ever getting cleaned up, leading to 
> GC/OOM/Crash. 
>  
> MutableQuantiles has a numInfo field which tells things like the name of the 
> metric. From the Heapdump, I found one numInfo and traced all objects 
> referencing that.
>  
> !Screenshot 2019-04-30 at 2.33.59 PM.png!
>  
> There seem to be 300K objects for the same metric 
> (S3Guard_metadatastore_throttle_rate). 
> As expected, there are other 300K objects for the other MutableQuantiles 
> created by S3AInstrumentation class. 
> Although the number of instances of S3AInstrumentation class is only 4. 
> Clearly, there is a leak. One S3AInstrumentation instance is creating two 
> scheduled tasks to be run every second. These tasks are left scheduled and 
> not cancelled when S3AInstrumentation.close() is called. Hence, they are 
> never cleaned up. GC is also not able to collect them since they are 
> referenced by the scheduler. 
> Who creates S3AInstrumentation instances? S3AFileSystem.initialize(), which 
> is called in FileSystem.get(URI, Configuration). Since the Hive metastore is 
> a service that deals with a lot of Path objects and hence needs to make a lot 
> of calls to FileSystem.get, it's the first to show these symptoms. 
> We're seeing similar symptoms in AM for long-running jobs (for both Tez AM 
> and MR AM). 
>  
>  





[jira] [Comment Edited] (HADOOP-16278) With S3A Filesystem, Long Running services End up Doing lot of GC and eventually die

2019-05-08 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835783#comment-16835783
 ] 

Aaron Fabbri edited comment on HADOOP-16278 at 5/8/19 5:47 PM:
---

Agreed, +1 this simple patch stopping the quantiles on FS close. Also wanted to 
say nice work on this Jira [~prongs].


was (Author: fabbri):
Agreed, +1 this simple patch stopping the quantiles on FS close.






[jira] [Commented] (HADOOP-16269) ABFS: add listFileStatus with StartFrom

2019-05-08 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835778#comment-16835778
 ] 

Aaron Fabbri commented on HADOOP-16269:
---

This was on my todo list today but [~ste...@apache.org] beat me to it. Thanks 
for the contribution [~DanielZhou] and the commit Steve.

> ABFS: add listFileStatus with StartFrom
> ---
>
> Key: HADOOP-16269
> URL: https://issues.apache.org/jira/browse/HADOOP-16269
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16269-001.patch, HADOOP-16269-002.patch, 
> HADOOP-16269-003.patch
>
>
> Adding a ListFileStatus in a path from a entry name in lexical order.
> This is added to AzureBlobFileSystemStore and won't be exposed to FS level 
> api.





[GitHub] [hadoop] hadoop-yetus commented on issue #801: HDDS-1500 : Allocate block failures in client should print exception trace.

2019-05-08 Thread GitBox
hadoop-yetus commented on issue #801: HDDS-1500 : Allocate block failures in 
client should print exception trace.
URL: https://github.com/apache/hadoop/pull/801#issuecomment-490581463
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 52 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 457 | trunk passed |
   | +1 | compile | 210 | trunk passed |
   | +1 | checkstyle | 51 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 829 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 124 | trunk passed |
   | 0 | spotbugs | 267 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 468 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 56 | hadoop-ozone in the patch failed. |
   | -1 | compile | 35 | hadoop-ozone in the patch failed. |
   | -1 | javac | 35 | hadoop-ozone in the patch failed. |
   | -0 | checkstyle | 16 | The patch fails to run checkstyle in hadoop-ozone |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 655 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 35 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 76 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 168 | hadoop-hdds in the patch failed. |
   | -1 | unit | 46 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 3929 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-801/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/801 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux dfd6d755f3f2 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3418bbb |
   | Default Java | 1.8.0_191 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-801/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-801/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-801/2/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-801/2/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-801/out/maven-patch-checkstyle-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-801/2/artifact/out/patch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-801/2/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-801/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-801/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-801/2/testReport/ |
   | Max. process+thread count | 412 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/client U: hadoop-ozone/client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-801/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] anuengineer commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
anuengineer commented on a change in pull request #798: HDDS-1499. OzoneManager 
Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282164337
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/DBStore.java
 ##
 @@ -44,17 +45,20 @@
*/
   Table<byte[], byte[]> getTable(String name) throws IOException;
 
+
   /**
* Gets an existing TableStore with implicit key/value conversion.
*
* @param name - Name of the TableStore to get
* @param keyType - Class of the key type
* @param valueType - Class of the value type
+   * @param cachetype - Type of cache to be used for this table.
* @return - TableStore.
* @throws IOException on Failure
*/
<KEY, VALUE> Table<KEY, VALUE> getTable(String name,
-  Class<KEY> keyType, Class<VALUE> valueType) throws IOException;
+  Class<KEY> keyType, Class<VALUE> valueType,
+  TableCache.CACHETYPE cachetype) throws IOException;
 
 Review comment:
   Why do we need an externally visible TableCache.CACHETYPE? Shouldn't this be 
an implementation detail of the tables that have a cache?
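   A minimal sketch of the alternative being hinted at, with stand-in stubs 
for the patch's cache types (the name-based heuristic is an assumption, not 
part of the patch):
   
   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   
   // Stand-ins for the patch's TableCache implementations.
   interface TableCache<K, V> {
     V get(K key);
     void put(K key, V value);
   }
   
   class FullTableCache<K, V> implements TableCache<K, V> {
     private final Map<K, V> map = new ConcurrentHashMap<>();
     public V get(K key) { return map.get(key); }
     public void put(K key, V value) { map.put(key, value); }
   }
   
   class PartialTableCache<K, V> extends FullTableCache<K, V> { }
   
   // The table derives its own cache type, e.g. from its name, so
   // getTable() never needs a CACHETYPE parameter.
   class CachedTable<K, V> {
     private final TableCache<K, V> cache;
   
     CachedTable(String name) {
       this.cache = name.startsWith("system.")      // assumed heuristic
           ? new FullTableCache<>()
           : new PartialTableCache<>();
     }
   
     TableCache<K, V> getCache() { return cache; }
   }
   ```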


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
anuengineer commented on a change in pull request #798: HDDS-1499. OzoneManager 
Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282164867
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/Table.java
 ##
 @@ -97,6 +102,28 @@ void putWithBatch(BatchOperation batch, KEY key, VALUE 
value)
*/
   String getName() throws IOException;
 
+  /**
+   * Add an entry to the table cache.
+   *
+   * If the cacheKey already exists, the new entry overrides it.
+   * @param cacheKey - key of the cache entry
+   * @param cacheValue - value of the cache entry
+   */
 
 Review comment:
   Well, I was really hoping that the fact that there is a cache would not be 
visible to the layer that is reading and writing.
   Is there a reason why it should be exposed to calling applications?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
anuengineer commented on a change in pull request #798: HDDS-1499. OzoneManager 
Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282167175
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TypedTable.java
 ##
 @@ -71,6 +96,27 @@ public boolean isEmpty() throws IOException {
 
   @Override
   public VALUE get(KEY key) throws IOException {
+// Here the metadata lock will guarantee that the cache is not updated for
+// the same key during a get.
+if (cache != null) {
+  CacheValue<VALUE> cacheValue = cache.get(new CacheKey<>(key));
+  if (cacheValue == null) {
+return getFromTable(key);
+  } else {
+// If the cache value's last operation is DELETED, the key will
+// eventually be removed from the DB, so we should return null.
+if (cacheValue.getLastOperation() != CacheValue.OperationType.DELETED) {
 
 Review comment:
   Why do we even cache the deleted operations? Delete is not in the 
performance-critical path at all. If you can instruct the system to make a 
full commit or flush the buffer when there is a delete op, you don't need to 
keep this extra state in the cache. Yes, repeated deletes will invoke the 
state machine callback. When do we actually flush / clear this entry?
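   For context, a self-contained sketch (assumed names, not the patch) of why 
a DELETE entry is kept as a tombstone until the flush commits: the row is 
still present in RocksDB, so a plain cache miss would resurrect it:
   
   ```java
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.function.Function;
   
   class TombstoneCache<K, V> {
     enum Op { CREATED, DELETED }
   
     static final class Entry<V> {
       final V value; final Op op; final long epoch;
       Entry(V value, Op op, long epoch) {
         this.value = value; this.op = op; this.epoch = epoch;
       }
     }
   
     private final ConcurrentHashMap<K, Entry<V>> cache = new ConcurrentHashMap<>();
   
     void markDeleted(K key, long epoch) {
       cache.put(key, new Entry<V>(null, Op.DELETED, epoch));
     }
   
     V get(K key, Function<K, V> dbLookup) {
       Entry<V> e = cache.get(key);
       if (e == null) {
         return dbLookup.apply(key);  // not cached: read through to the DB
       }
       // The tombstone masks the stale DB row until the delete is flushed.
       return e.op == Op.DELETED ? null : e.value;
     }
   
     // Once the flush for this epoch has committed, the tombstones can go.
     void evictUpTo(long epoch) {
       cache.entrySet().removeIf(en -> en.getValue().epoch <= epoch);
     }
   }
   ```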


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
anuengineer commented on a change in pull request #798: HDDS-1499. OzoneManager 
Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282165925
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TypedTable.java
 ##
 @@ -31,22 +38,40 @@
  */
 public class TypedTable<KEY, VALUE> implements Table<KEY, VALUE> {
 
-  private Table<byte[], byte[]> rawTable;
+  private final Table<byte[], byte[]> rawTable;
+
+  private final CodecRegistry codecRegistry;
 
-  private CodecRegistry codecRegistry;
+  private final Class<KEY> keyType;
 
-  private Class<KEY> keyType;
+  private final Class<VALUE> valueType;
 
-  private Class<VALUE> valueType;
+  private final TableCache<CacheKey<KEY>, CacheValue<VALUE>> cache;
 
   public TypedTable(
   Table<byte[], byte[]> rawTable,
   CodecRegistry codecRegistry, Class<KEY> keyType,
   Class<VALUE> valueType) {
+this(rawTable, codecRegistry, keyType, valueType,
+null);
+  }
+
+
+  public TypedTable(
+  Table<byte[], byte[]> rawTable,
+  CodecRegistry codecRegistry, Class<KEY> keyType,
+  Class<VALUE> valueType, TableCache.CACHETYPE cachetype) {
 this.rawTable = rawTable;
 this.codecRegistry = codecRegistry;
 this.keyType = keyType;
 this.valueType = valueType;
+if (cachetype == TableCache.CACHETYPE.FULLCACHE) {
 
 Review comment:
   It is impossible for the user to tell you a priori whether they want a full 
cache or a partial cache. When you start a cluster you always want a full 
cache. We should get a cache size -- or a percentage of memory from the OM 
cache size -- and use that if needed. Or, for the time being, rely on RocksDB 
doing the right thing.
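   One possible shape for that (a sketch assuming Guava, which is already on 
the classpath; the fraction and average-entry parameters are invented knobs):
   
   ```java
   import com.google.common.cache.Cache;
   import com.google.common.cache.CacheBuilder;
   
   public final class SizedCacheFactory {
     private SizedCacheFactory() { }
   
     // Size the cache from a fraction of heap rather than asking the
     // caller for FULL vs PARTIAL up front.
     static <K, V> Cache<K, V> fromHeapFraction(double fraction,
         long avgEntryBytes) {
       long maxBytes = (long) (Runtime.getRuntime().maxMemory() * fraction);
       long maxEntries = Math.max(1, maxBytes / avgEntryBytes);
       return CacheBuilder.newBuilder()
           .maximumSize(maxEntries)   // evicts in LRU-ish order beyond this
           .recordStats()
           .build();
     }
   }
   ```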


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
anuengineer commented on a change in pull request #798: HDDS-1499. OzoneManager 
Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282168861
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/PartialTableCache.java
 ##
 @@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.utils.db.cache;
+
+import java.util.Iterator;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Evolving;
+
+
+
+/**
+ * This is used for the tables where we don't want to cache the entire
+ * table in memory.
+ */
+@Private
+@Evolving
+public class PartialTableCache<CACHEKEY extends CacheKey, CACHEVALUE extends CacheValue>
+implements TableCache<CACHEKEY, CACHEVALUE> {
 
 Review comment:
   Not sure if you have seen this, 
https://github.com/facebook/rocksdb/wiki/Block-Cache
   
   We already do this cache control in RocksDB. I am not sure we should do 
this twice. Unless you have a lookup problem which cannot be solved by hashing 
or prefix lookup, we will make more efficient use of memory by relying on the 
underlying layer, and moreover, having a unified cache layer will lead to 
better cache utilization.
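   For reference, sizing the shared block cache through the rocksdbjni API of 
that era looks roughly like this (an assumption about the API version; sizes 
are illustrative):
   
   ```java
   import org.rocksdb.BlockBasedTableConfig;
   import org.rocksdb.Options;
   import org.rocksdb.RocksDB;
   
   public class RocksBlockCacheExample {
     public static Options withBlockCache(long cacheBytes) {
       RocksDB.loadLibrary();
       BlockBasedTableConfig tableConfig = new BlockBasedTableConfig()
           .setBlockCacheSize(cacheBytes)       // one shared block cache
           .setCacheIndexAndFilterBlocks(true); // keep index/filter blocks hot
       return new Options()
           .setCreateIfMissing(true)
           .setTableFormatConfig(tableConfig);
     }
   }
   ```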


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
anuengineer commented on a change in pull request #798: HDDS-1499. OzoneManager 
Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282167434
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/TypedTable.java
 ##
 @@ -71,6 +96,27 @@ public boolean isEmpty() throws IOException {
 
   @Override
   public VALUE get(KEY key) throws IOException {
+// Here the metadata lock will guarantee that the cache is not updated for
+// the same key during a get.
+if (cache != null) {
+  CacheValue<VALUE> cacheValue = cache.get(new CacheKey<>(key));
+  if (cacheValue == null) {
+return getFromTable(key);
+  } else {
+// If the cache value's last operation is DELETED, the key will
+// eventually be removed from the DB, so we should return null.
+if (cacheValue.getLastOperation() != CacheValue.OperationType.DELETED) {
+  return cacheValue.getValue();
+} else {
+  return null;
+}
+  }
+} else {
+  return getFromTable(key);
 
 Review comment:
   Not sure if you need this get again?
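   One way to read that: both the "no cache" case and the cache-miss case can 
fall through to a single call (a sketch against the quoted diff, not the 
actual patch):
   
   ```java
   @Override
   public VALUE get(KEY key) throws IOException {
     if (cache != null) {
       CacheValue<VALUE> cacheValue = cache.get(new CacheKey<>(key));
       if (cacheValue != null) {
         // A pending delete is a tombstone: report the key as absent.
         return cacheValue.getLastOperation()
             == CacheValue.OperationType.DELETED
             ? null : cacheValue.getValue();
       }
     }
     return getFromTable(key);  // single fallback for both branches
   }
   ```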


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on a change in pull request #798: HDDS-1499. OzoneManager Cache.

2019-05-08 Thread GitBox
anuengineer commented on a change in pull request #798: HDDS-1499. OzoneManager 
Cache.
URL: https://github.com/apache/hadoop/pull/798#discussion_r282169161
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/utils/db/cache/PartialTableCache.java
 ##
 @@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+
+package org.apache.hadoop.utils.db.cache;
+
+import java.util.Iterator;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+
+import org.apache.hadoop.classification.InterfaceAudience.Private;
+import org.apache.hadoop.classification.InterfaceStability.Evolving;
+
+
+
+/**
+ * This is used for the tables where we don't want to cache the entire
+ * table in memory.
+ */
+@Private
+@Evolving
+public class PartialTableCache<CACHEKEY extends CacheKey, CACHEVALUE extends CacheValue>
+implements TableCache<CACHEKEY, CACHEVALUE> {
+
+  private final ConcurrentHashMap<CACHEKEY, CACHEVALUE> cache;
+  private final TreeSet<EpochEntry<CACHEKEY>> epochEntries;
+  private ExecutorService executorService;
+
+
+
+  public PartialTableCache() {
+cache = new ConcurrentHashMap<>();
+epochEntries = new TreeSet<EpochEntry<CACHEKEY>>();
+// Created a singleThreadExecutor, so only one cleanup will be running
+// at a time.
+executorService = Executors.newSingleThreadExecutor();
+  }
+
+  @Override
+  public CACHEVALUE get(CACHEKEY cachekey) {
+return cache.get(cachekey);
+  }
+
+  @Override
+  public void put(CACHEKEY cacheKey, CACHEVALUE value) {
+cache.put(cacheKey, value);
+CacheValue cacheValue = (CacheValue) cache.get(cacheKey);
+epochEntries.add(new EpochEntry<>(cacheValue.getEpoch(), cacheKey));
+  }
+
+  @Override
+  public void cleanup(long epoch) {
+executorService.submit(() -> evictCache(epoch));
+  }
+
+  @Override
+  public int size() {
+return cache.size();
+  }
+
+  private void evictCache(long epoch) {
 
 Review comment:
   Shouldn't a key be evicted if it was a delete operation and the state 
machine commit has taken place?
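   For reference, an epoch-based eviction sketch over the quoted fields 
(EpochEntry accessor names are assumed): everything up to the flushed epoch 
is dropped, tombstones included, which is exactly the timing being questioned:
   
   ```java
   private void evictCache(long epoch) {
     Iterator<EpochEntry<CACHEKEY>> iterator = epochEntries.iterator();
     while (iterator.hasNext()) {
       EpochEntry<CACHEKEY> entry = iterator.next();
       if (entry.getEpoch() > epoch) {
         break;  // epochEntries is sorted by epoch; nothing newer to evict
       }
       cache.remove(entry.getCachekey());
       iterator.remove();
     }
   }
   ```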


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ben-roling commented on issue #794: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite

2019-05-08 Thread GitBox
ben-roling commented on issue #794: HADOOP-16085: use object version or etags 
to protect against inconsistent read after replace/overwrite
URL: https://github.com/apache/hadoop/pull/794#issuecomment-490560778
 
 
   Great, thanks Steve!
   
   > I might do some changes to the PR locally and push them up as a branch for 
you to cherry-pick in, as that is potentially easier than me just adding a 
large set of bits of homework for you to do. Would that be OK? it should save 
time all round
   
   Sure, that sounds good to me.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] avijayanhwx commented on issue #801: HDDS-1500 : Allocate block failures in client should print exception trace.

2019-05-08 Thread GitBox
avijayanhwx commented on issue #801: HDDS-1500 : Allocate block failures in 
client should print exception trace.
URL: https://github.com/apache/hadoop/pull/801#issuecomment-490560471
 
 
   Thank you for the suggestion @jiwq 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16269) ABFS: add listFileStatus with StartFrom

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835744#comment-16835744
 ] 

Hudson commented on HADOOP-16269:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16526 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16526/])
HADOOP-16269. ABFS: add listFileStatus with StartFrom. (stevel: rev 
3418597d354bf24cfd610c1ad3adb06d8eae)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
* (add) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/CRC64.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsCrc64.java
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/AbfsHttpConstants.java
* (edit) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java
* (add) 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemStoreListStatusWithRange.java


> ABFS: add listFileStatus with StartFrom
> ---
>
> Key: HADOOP-16269
> URL: https://issues.apache.org/jira/browse/HADOOP-16269
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16269-001.patch, HADOOP-16269-002.patch, 
> HADOOP-16269-003.patch
>
>
> Adding a ListFileStatus in a path from an entry name in lexical order.
> This is added to AzureBlobFileSystemStore and won't be exposed at the 
> FS-level API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #794: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite

2019-05-08 Thread GitBox
steveloughran commented on issue #794: HADOOP-16085: use object version or 
etags to protect against inconsistent read after replace/overwrite
URL: https://github.com/apache/hadoop/pull/794#issuecomment-490559096
 
 
   thanks, I'm checking this out and going to test/review it locally. with the 
goal of getting it in this week. I might do some changes to the PR locally and 
push them up as a branch for you to cherry-pick in, as that is potentially 
easier than me just adding a large set of bits of homework for you to do. Would 
that be OK? it should save time all round


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16269) ABFS: add listFileStatus with StartFrom

2019-05-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16269:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ABFS: add listFileStatus with StartFrom
> ---
>
> Key: HADOOP-16269
> URL: https://issues.apache.org/jira/browse/HADOOP-16269
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16269-001.patch, HADOOP-16269-002.patch, 
> HADOOP-16269-003.patch
>
>
> Adding a ListFileStatus in a path from an entry name in lexical order.
> This is added to AzureBlobFileSystemStore and won't be exposed at the 
> FS-level API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16269) ABFS: add listFileStatus with StartFrom

2019-05-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835736#comment-16835736
 ] 

Steve Loughran commented on HADOOP-16269:
-

+1, committed to trunk!

Now, one warning: adding new FS API calls is great for internal stuff and for 
writing custom code to work with your store, but it does have a few risks:

* people get sad when you take things away
* it makes it hard/impossible to put another layered FS on top of this (to 
measure performance, cache results, etc.)
* we diverge across stores
* apps don't use it, or if they do, they break when new versions ship.

Ideally there should be a stable API for this in the filesystem. We do actually 
have listFiles(Path, recursive), which returns a remote iterator, so it can be 
used to iterate through a directory in pages, or down an entire directory tree 
- which delivers fundamental performance gains for any store with a flat list 
operation. 

So now this is in, how about you use it or some other mechanism to implement 
{{FileSystem.listFiles()}} efficiently? The more stores that do, the more we 
can encourage people to switch to it in their code, for maximum speedup.
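For reference, {{FileSystem.listFiles(path, recursive)}} returns a 
{{RemoteIterator<LocatedFileStatus>}}; a minimal usage example:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListFilesExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Pages of results are fetched lazily as the iterator advances, so a
    // store with a flat list operation can serve this very efficiently.
    RemoteIterator<LocatedFileStatus> it =
        fs.listFiles(new Path(args[0]), true /* recursive */);
    while (it.hasNext()) {
      System.out.println(it.next().getPath());
    }
  }
}
{code}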



> ABFS: add listFileStatus with StartFrom
> ---
>
> Key: HADOOP-16269
> URL: https://issues.apache.org/jira/browse/HADOOP-16269
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Da Zhou
>Priority: Major
> Attachments: HADOOP-16269-001.patch, HADOOP-16269-002.patch, 
> HADOOP-16269-003.patch
>
>
> Adding a ListFileStatus in a path from an entry name in lexical order.
> This is added to AzureBlobFileSystemStore and won't be exposed at the 
> FS-level API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #768: HADOOP-16269. ABFS: add listFileStatus with StartFrom.

2019-05-08 Thread GitBox
steveloughran commented on issue #768: HADOOP-16269. ABFS: add listFileStatus 
with StartFrom.
URL: https://github.com/apache/hadoop/pull/768#issuecomment-490554557
 
 
   +1, committed.
   
   thanks
   
   I'm going to leave a warning note on the JIRA about how these internal 
things are brittle and may go away.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran closed pull request #768: HADOOP-16269. ABFS: add listFileStatus with StartFrom.

2019-05-08 Thread GitBox
steveloughran closed pull request #768: HADOOP-16269. ABFS: add listFileStatus 
with StartFrom.
URL: https://github.com/apache/hadoop/pull/768
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ben-roling commented on issue #794: HADOOP-16085: use object version or etags to protect against inconsistent read after replace/overwrite

2019-05-08 Thread GitBox
ben-roling commented on issue #794: HADOOP-16085: use object version or etags 
to protect against inconsistent read after replace/overwrite
URL: https://github.com/apache/hadoop/pull/794#issuecomment-490537876
 
 
   I've pushed a commit that adds retries as discussed in 
https://github.com/apache/hadoop/pull/675#issuecomment-488614814
   
   The retries happen in S3AInputStream if the version doesn't match on initial 
open.  There are no retries if the version doesn't match on re-open (during 
seek() backwards).
   
   Retries also happen for rename() and select().
   
   Testing was added in ITestS3ARemoteFileChanged.  I used Mockito.spy() on the 
s3 client to stub in inconsistent responses until a threshold of retries is met.
   
   I've run the full test suite (against a bucket with versioning enabled in 
us-west-2):
   
   ```
   mvn -T 1C verify -Dparallel-tests -DtestsThreadCount=8 -Ds3guard -Ddynamo
   ```
   
   ```
   [ERROR] Tests run: 896, Failures: 0, Errors: 2, Skipped: 145
   ```
   
   The two errors were in ITestDirectoryCommitMRJob and  
ITestS3GuardConcurrentOps, which succeeded when run individually:
   
   ```
   mvn -T 1C verify -Dtest=skip -Dit.test=ITestDirectoryCommitMRJob -Ds3guard 
-Ddynamo
   mvn -T 1C verify -Dtest=skip -Dit.test=ITestS3GuardConcurrentOps -Ds3guard 
-Ddynamo
   ```
   
   https://github.com/apache/hadoop/pull/675#issuecomment-488614814 suggests 
possibly different retry settings for these scenarios.  I haven't done that, at 
least yet.  Perhaps that can be carved off as another issue.  Similarly, I 
haven't implemented the HADOOP-13293 proposal.  I'm open to those things but 
would like to get the rest of this settled (merged) first if possible.
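   For readers unfamiliar with the technique, a rough illustration (invented 
names, not the actual test code) of stubbing inconsistent responses with 
Mockito.spy():
   
   ```java
   import static org.mockito.Mockito.any;
   import static org.mockito.Mockito.doAnswer;
   import static org.mockito.Mockito.spy;
   
   import com.amazonaws.services.s3.AmazonS3;
   import com.amazonaws.services.s3.model.GetObjectRequest;
   import com.amazonaws.services.s3.model.S3Object;
   import java.util.concurrent.atomic.AtomicInteger;
   
   public class InconsistentReadStub {
     // Serve a stale object for the first `failures` reads, then delegate
     // to the real client so the retry logic eventually succeeds.
     static AmazonS3 stubInconsistent(AmazonS3 realClient,
         S3Object staleObject, int failures) {
       AtomicInteger remaining = new AtomicInteger(failures);
       AmazonS3 spyClient = spy(realClient);
       doAnswer(invocation -> remaining.getAndDecrement() > 0
           ? staleObject                   // inconsistent (old) response
           : invocation.callRealMethod())  // then behave normally
           .when(spyClient).getObject(any(GetObjectRequest.class));
       return spyClient;
     }
   }
   ```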


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16263) Update BUILDING.txt with macOS native build instructions

2019-05-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835690#comment-16835690
 ] 

Hadoop QA commented on HADOOP-16263:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 34s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16263 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968203/HADOOP-16263.002.patch
 |
| Optional Tests |  dupname  asflicense  |
| uname | Linux 678f1b87f14d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9b0aace |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 447 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16238/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Update BUILDING.txt with macOS native build instructions
> 
>
> Key: HADOOP-16263
> URL: https://issues.apache.org/jira/browse/HADOOP-16263
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
> Attachments: HADOOP-16263.001.patch, HADOOP-16263.002.patch
>
>
> I recently tried to compile Hadoop native on a Mac and found a few catches, 
> which involved fixing some YARN native compilation issues (YARN-8622, YARN-9487).
> Also, we need to specify the OpenSSL (brewed) header include dir when 
> building native code with Maven on a Mac. Should update BUILDING.txt for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16278) With S3 Filesystem, Long Running services End up Doing lot of GC and eventually die

2019-05-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835672#comment-16835672
 ] 

Steve Loughran commented on HADOOP-16278:
-

Actually I'm +1 as is; let's worry about tuning it if/when more quantiles are 
added.

Rajat, is there an email address I can use to declare you as the author of the 
patch, so that github will wire up your contribution?

> With S3 Filesystem, Long Running services End up Doing lot of GC and 
> eventually die
> ---
>
> Key: HADOOP-16278
> URL: https://issues.apache.org/jira/browse/HADOOP-16278
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, hadoop-aws, metrics
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Rajat Khandelwal
>Priority: Major
> Fix For: 3.1.3
>
> Attachments: HADOOP-16278.patch, Screenshot 2019-04-30 at 12.52.42 
> PM.png, Screenshot 2019-04-30 at 2.33.59 PM.png
>
>
> I'll start with the symptoms and eventually come to the cause. 
>  
> We are using HDP 3.1 and noticed that every couple of days the Hive Metastore 
> starts doing GC, sometimes with 30-minute-long pauses, although nothing is 
> collected and the heap remains fully used. 
>  
> Next, we looked at the heap dump and found that 99% of the memory is taken up 
> by one executor service for its task queue. 
>  
> !Screenshot 2019-04-30 at 12.52.42 PM.png!
> The instance is created like this:
> {{ private static final ScheduledExecutorService scheduler = Executors}}
>  {{ .newScheduledThreadPool(1, new ThreadFactoryBuilder().setDaemon(true)}}
>  {{ .setNameFormat("MutableQuantiles-%d").build());}}
>  
> So all the instances of MutableQuantiles are using a shared single-threaded 
> ExecutorService.
> The second thing to notice is this block of code in the constructor of 
> MutableQuantiles:
> {{this.scheduledTask = scheduler.scheduleAtFixedRate(new 
> MutableQuantiles.RolloverSample(this), (long)interval, (long)interval, 
> TimeUnit.SECONDS);}}
> So as soon as a MutableQuantiles instance is created, one task is scheduled 
> at a fixed rate. Instead of that, they could be scheduled at a fixed delay 
> (refer to HADOOP-16248). 
> Now coming to why it's related to S3. 
>  
> S3AFileSystem creates an instance of S3AInstrumentation, which creates two 
> quantiles (related to S3Guard) with a 1s (hardcoded) interval and leaves them 
> hanging. By hanging I mean perpetually scheduled. As and when new instances 
> of S3AFileSystem are created, two new quantiles are created, which in turn 
> create two scheduled tasks and never cancel them. This way the number of 
> scheduled tasks keeps on growing without ever getting cleaned up, leading to 
> GC/OOM/crash. 
>  
> MutableQuantiles has a numInfo field which holds things like the name of the 
> metric. From the heap dump, I found one numInfo and traced all objects 
> referencing it.
>  
> !Screenshot 2019-04-30 at 2.33.59 PM.png!
>  
> There seem to be 300K objects for the same metric 
> (S3Guard_metadatastore_throttle_rate). 
> As expected, there are another 300K objects for the other MutableQuantiles 
> created by the S3AInstrumentation class, 
> although the number of instances of the S3AInstrumentation class is only 4. 
> Clearly, there is a leak. One S3AInstrumentation instance is creating two 
> scheduled tasks to be run every second. These tasks are left scheduled and 
> not cancelled when S3AInstrumentation.close() is called. Hence, they are 
> never cleaned up. GC is also not able to collect them since they are 
> referenced by the scheduler. 
> Who creates S3AInstrumentation instances? S3AFileSystem.initialize(), which 
> is called in FileSystem.get(URI, Configuration). Since the Hive Metastore is 
> a service that deals with a lot of Path objects and hence needs to make a lot 
> of calls to FileSystem.get, it's the first to show these symptoms. 
> We're seeing similar symptoms in the AM for long-running jobs (for both Tez 
> AM and MR AM). 
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16278) With S3A Filesystem, Long Running services End up Doing lot of GC and eventually die

2019-05-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16278:

Summary: With S3A Filesystem, Long Running services End up Doing lot of GC 
and eventually die  (was: With S3 Filesystem, Long Running services End up 
Doing lot of GC and eventually die)

> With S3A Filesystem, Long Running services End up Doing lot of GC and 
> eventually die
> 
>
> Key: HADOOP-16278
> URL: https://issues.apache.org/jira/browse/HADOOP-16278
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, hadoop-aws, metrics
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Rajat Khandelwal
>Priority: Major
> Fix For: 3.1.3
>
> Attachments: HADOOP-16278.patch, Screenshot 2019-04-30 at 12.52.42 
> PM.png, Screenshot 2019-04-30 at 2.33.59 PM.png
>
>
> I'll start with the symptoms and eventually come to the cause. 
>  
> We are using HDP 3.1 and noticed that every couple of days the Hive Metastore 
> starts doing GC, sometimes with 30-minute-long pauses, although nothing is 
> collected and the heap remains fully used. 
>  
> Next, we looked at the heap dump and found that 99% of the memory is taken up 
> by one executor service for its task queue. 
>  
> !Screenshot 2019-04-30 at 12.52.42 PM.png!
> The instance is created like this:
> {{ private static final ScheduledExecutorService scheduler = Executors}}
>  {{ .newScheduledThreadPool(1, new ThreadFactoryBuilder().setDaemon(true)}}
>  {{ .setNameFormat("MutableQuantiles-%d").build());}}
>  
> So all the instances of MutableQuantiles are using a shared single-threaded 
> ExecutorService.
> The second thing to notice is this block of code in the constructor of 
> MutableQuantiles:
> {{this.scheduledTask = scheduler.scheduleAtFixedRate(new 
> MutableQuantiles.RolloverSample(this), (long)interval, (long)interval, 
> TimeUnit.SECONDS);}}
> So as soon as a MutableQuantiles instance is created, one task is scheduled 
> at a fixed rate. Instead of that, they could be scheduled at a fixed delay 
> (refer to HADOOP-16248). 
> Now coming to why it's related to S3. 
>  
> S3AFileSystem creates an instance of S3AInstrumentation, which creates two 
> quantiles (related to S3Guard) with a 1s (hardcoded) interval and leaves them 
> hanging. By hanging I mean perpetually scheduled. As and when new instances 
> of S3AFileSystem are created, two new quantiles are created, which in turn 
> create two scheduled tasks and never cancel them. This way the number of 
> scheduled tasks keeps on growing without ever getting cleaned up, leading to 
> GC/OOM/crash. 
>  
> MutableQuantiles has a numInfo field which holds things like the name of the 
> metric. From the heap dump, I found one numInfo and traced all objects 
> referencing it.
>  
> !Screenshot 2019-04-30 at 2.33.59 PM.png!
>  
> There seem to be 300K objects for the same metric 
> (S3Guard_metadatastore_throttle_rate). 
> As expected, there are another 300K objects for the other MutableQuantiles 
> created by the S3AInstrumentation class, 
> although the number of instances of the S3AInstrumentation class is only 4. 
> Clearly, there is a leak. One S3AInstrumentation instance is creating two 
> scheduled tasks to be run every second. These tasks are left scheduled and 
> not cancelled when S3AInstrumentation.close() is called. Hence, they are 
> never cleaned up. GC is also not able to collect them since they are 
> referenced by the scheduler. 
> Who creates S3AInstrumentation instances? S3AFileSystem.initialize(), which 
> is called in FileSystem.get(URI, Configuration). Since the Hive Metastore is 
> a service that deals with a lot of Path objects and hence needs to make a lot 
> of calls to FileSystem.get, it's the first to show these symptoms. 
> We're seeing similar symptoms in the AM for long-running jobs (for both Tez 
> AM and MR AM). 
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jiwq commented on a change in pull request #801: HDDS-1500 : Allocate block failures in client should print exception trace.

2019-05-08 Thread GitBox
jiwq commented on a change in pull request #801: HDDS-1500 : Allocate block 
failures in client should print exception trace.
URL: https://github.com/apache/hadoop/pull/801#discussion_r282104663
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java
 ##
 @@ -297,7 +297,8 @@ BlockOutputStreamEntry allocateBlockIfNeeded() throws 
IOException {
 succeededAllocates += 1;
   } catch (IOException ioe) {
 LOG.error("Try to allocate more blocks for write failed, already "
-+ "allocated " + succeededAllocates + " blocks for this write.");
++ "allocated " + succeededAllocates + " blocks for this write.",
 
 Review comment:
   ```suggestion
   + "allocated {} blocks for this write.", succeededAllocates, 
ioe);
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jiwq commented on a change in pull request #801: HDDS-1500 : Allocate block failures in client should print exception trace.

2019-05-08 Thread GitBox
jiwq commented on a change in pull request #801: HDDS-1500 : Allocate block 
failures in client should print exception trace.
URL: https://github.com/apache/hadoop/pull/801#discussion_r282104793
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/io/BlockOutputStreamEntryPool.java
 ##
 @@ -297,7 +297,8 @@ BlockOutputStreamEntry allocateBlockIfNeeded() throws 
IOException {
 succeededAllocates += 1;
   } catch (IOException ioe) {
 LOG.error("Try to allocate more blocks for write failed, already "
-+ "allocated " + succeededAllocates + " blocks for this write.");
++ "allocated " + succeededAllocates + " blocks for this write.",
+ioe);
 
 Review comment:
   ```suggestion
   ```
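   For reference, the pattern behind these suggestions: SLF4J substitutes {} 
placeholders and still prints the stack trace when the exception is passed as 
the last argument, so no string concatenation is needed on the error path:
   
   ```java
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   public class AllocateLogExample {
     private static final Logger LOG =
         LoggerFactory.getLogger(AllocateLogExample.class);
   
     void logAllocateFailure(int succeededAllocates, Exception ioe) {
       // The trailing Throwable is detected and logged with its stack trace.
       LOG.error("Try to allocate more blocks for write failed, already "
           + "allocated {} blocks for this write.", succeededAllocates, ioe);
     }
   }
   ```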


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16263) Update BUILDING.txt with macOS native build instructions

2019-05-08 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16263:

Attachment: HADOOP-16263.002.patch
Status: Patch Available  (was: In Progress)

Rev 002: Updated 3.1.1/3.1.2 backport requirement for building native code. 
Thanks [~adam.antal].

> Update BUILDING.txt with macOS native build instructions
> 
>
> Key: HADOOP-16263
> URL: https://issues.apache.org/jira/browse/HADOOP-16263
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
> Attachments: HADOOP-16263.001.patch, HADOOP-16263.002.patch
>
>
> I recently tried to compile Hadoop native on a Mac and found a few catches, 
> which involved fixing some YARN native compilation issues (YARN-8622, YARN-9487).
> Also, we need to specify the OpenSSL (brewed) header include dir when 
> building native code with Maven on a Mac. Should update BUILDING.txt for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16263) Update BUILDING.txt with macOS native build instructions

2019-05-08 Thread Siyao Meng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HADOOP-16263:

Status: In Progress  (was: Patch Available)

> Update BUILDING.txt with macOS native build instructions
> 
>
> Key: HADOOP-16263
> URL: https://issues.apache.org/jira/browse/HADOOP-16263
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
> Attachments: HADOOP-16263.001.patch
>
>
> I recently tried to compile Hadoop native on a Mac and found a few catches, 
> which involved fixing some YARN native compilation issues (YARN-8622, YARN-9487).
> Also, we need to specify the OpenSSL (brewed) header include dir when 
> building native code with Maven on a Mac. Should update BUILDING.txt for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16263) Update BUILDING.txt with macOS native build instructions

2019-05-08 Thread Siyao Meng (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835641#comment-16835641
 ] 

Siyao Meng commented on HADOOP-16263:
-

Thanks [~adam.antal]. Have you tried it?

> Update BUILDING.txt with macOS native build instructions
> 
>
> Key: HADOOP-16263
> URL: https://issues.apache.org/jira/browse/HADOOP-16263
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Minor
> Attachments: HADOOP-16263.001.patch
>
>
> I recently tried to compile Hadoop native on a Mac and found a few catches, 
> which involved fixing some YARN native compilation issues (YARN-8622, YARN-9487).
> Also, we need to specify the OpenSSL (brewed) header include dir when 
> building native code with Maven on a Mac. Should update BUILDING.txt for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15604) Bulk commits of S3A MPUs place needless excessive load on S3 & S3Guard

2019-05-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835612#comment-16835612
 ] 

Steve Loughran commented on HADOOP-15604:
-

S3Guard.addAncestors() tries to efficiently walk up the tree and only call 
put() on entries which don't exist, so avoiding that excessive load.

But: {{metadataStore.put(newDirs)}} goes on to create all the ancestors in 
{{innerPut(Collection<DDBPathMetadata> metas)}}. That is: it doesn't bother 
looking for the parent entries, it just blindly tries to create them all. For 
HADOOP-15183 I'm minimising this across move operations by passing a context 
around for the {{move()}} calls. I think this same idea somehow needs to be 
preserved here, but it's a lot harder to join up given that it's 
S3AFileSystem.finishedWrite() where this stuff is done and the context is 
pretty minimal.
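A sketch of carrying that context (the names here are illustrative, not 
HADOOP-15183's actual classes): keep a set of ancestors already known to 
exist, so repeated put() calls can skip them:

{code:java}
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.fs.Path;

public class AncestorState {
  private final Set<Path> knownDirs = new HashSet<>();

  /** Returns only the ancestors of path that still need a put(). */
  public Collection<Path> missingAncestors(Path path) {
    Collection<Path> toCreate = new ArrayList<>();
    for (Path p = path.getParent(); p != null && !p.isRoot();
        p = p.getParent()) {
      if (!knownDirs.add(p)) {
        break;  // this ancestor and everything above it was already handled
      }
      toCreate.add(p);
    }
    return toCreate;
  }
}
{code}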

> Bulk commits of S3A MPUs place needless excessive load on S3 & S3Guard
> --
>
> Key: HADOOP-15604
> URL: https://issues.apache.org/jira/browse/HADOOP-15604
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Steve Loughran
>Priority: Major
>
> When there are ~50 files being committed, each in its own thread from the 
> commit pool, the DDB repo is probably being overloaded just from one single 
> process doing task commit. We should be backing off more, especially given 
> that failing on a write could potentially leave the store inconsistent with 
> the FS (renames, etc.)
> It would be nice to have some tests to prove that the I/O thresholds are the 
> reason for unprocessed items in DynamoDB metadata store



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #796: HADOOP-16294: Enable access to input options by DistCp subclasses

2019-05-08 Thread GitBox
steveloughran commented on issue #796: HADOOP-16294: Enable access to input 
options by DistCp subclasses
URL: https://github.com/apache/hadoop/pull/796#issuecomment-490487105
 
 
   yetus isn't reviewing this again, is it?
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #796: HADOOP-16294: Enable access to input options by DistCp subclasses

2019-05-08 Thread GitBox
steveloughran commented on issue #796: HADOOP-16294: Enable access to input 
options by DistCp subclasses
URL: https://github.com/apache/hadoop/pull/796#issuecomment-490487611
 
 
   ...even if yetus is silent, patch LGTM. @noslowerdna once you are happy with 
these changes are working for what you are doing with distcp, I'm happy to 
merge it in


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #802: HADOOP-16279. S3Guard: Implement time-based (TTL) expiry for entries …

2019-05-08 Thread GitBox
hadoop-yetus commented on issue #802: HADOOP-16279. S3Guard: Implement 
time-based (TTL) expiry for entries …
URL: https://github.com/apache/hadoop/pull/802#issuecomment-490484140
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 71 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1050 | trunk passed |
   | +1 | compile | 1017 | trunk passed |
   | +1 | checkstyle | 144 | trunk passed |
   | +1 | mvnsite | 131 | trunk passed |
   | +1 | shadedclient | 1003 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 93 | trunk passed |
   | 0 | spotbugs | 65 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 179 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 80 | the patch passed |
   | +1 | compile | 966 | the patch passed |
   | +1 | javac | 966 | the patch passed |
   | -0 | checkstyle | 139 | root: The patch generated 1 new + 18 unchanged - 2 
fixed = 19 total (was 20) |
   | -1 | mvnsite | 49 | hadoop-aws in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 739 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 89 | the patch passed |
   | +1 | findbugs | 193 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 512 | hadoop-common in the patch passed. |
   | -1 | unit | 279 | hadoop-aws in the patch failed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 6930 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.s3a.s3guard.TestNullMetadataStore |
   |   | hadoop.fs.s3a.s3guard.TestLocalMetadataStore |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-802/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/802 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 72ea7fc7de89 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 96dc5ce |
   | Default Java | 1.8.0_191 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-802/1/artifact/out/diff-checkstyle-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-802/1/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-802/1/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-802/1/testReport/ |
   | Max. process+thread count | 1381 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-802/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16278) With S3 Filesystem, Long Running services End up Doing lot of GC and eventually die

2019-05-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835590#comment-16835590
 ] 

Steve Loughran commented on HADOOP-16278:
-

Ok, checked out the patch. Looks good. I'd just like to make sure we stop this 
coming back:

# a {{List<MutableQuantiles> quantileList}} of quantiles to stop should be 
built up the way the later counters are collected; there's no need to 
define unique fields
# so teardown would be {{quantileList.forEach(MutableQuantiles::stop);}}
# and the {{quantiles()}} method could actually add it to the list after 
registration (see the sketch below)

Do you feel like extending your patch and testing locally to see if it works 
for you? Thanks
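A sketch of that shape ({{QuantileTracker}} is an illustrative name, not the 
committed fix):

{code:java}
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.metrics2.lib.MutableQuantiles;

public class QuantileTracker {
  private final List<MutableQuantiles> quantileList = new ArrayList<>();

  // Called wherever a quantile is registered, instead of a unique field.
  public MutableQuantiles track(MutableQuantiles q) {
    quantileList.add(q);
    return q;
  }

  // Teardown: stop every scheduled rollover task in one pass.
  public void stopAll() {
    quantileList.forEach(MutableQuantiles::stop);
  }
}
{code}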

> With S3 Filesystem, Long Running services End up Doing lot of GC and 
> eventually die
> ---
>
> Key: HADOOP-16278
> URL: https://issues.apache.org/jira/browse/HADOOP-16278
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, hadoop-aws, metrics
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Rajat Khandelwal
>Priority: Major
> Fix For: 3.1.3
>
> Attachments: HADOOP-16278.patch, Screenshot 2019-04-30 at 12.52.42 
> PM.png, Screenshot 2019-04-30 at 2.33.59 PM.png
>
>
> I'll start with the symptoms and eventually come to the cause. 
>  
> We are using HDP 3.1 and noticed that every couple of days the Hive Metastore 
> starts doing GC, sometimes with 30-minute-long pauses, although nothing is 
> collected and the heap remains fully used. 
>  
> Next, we looked at the heap dump and found that 99% of the memory is taken up 
> by one executor service for its task queue. 
>  
> !Screenshot 2019-04-30 at 12.52.42 PM.png!
> The instance is created like this:
> {{ private static final ScheduledExecutorService scheduler = Executors}}
>  {{ .newScheduledThreadPool(1, new ThreadFactoryBuilder().setDaemon(true)}}
>  {{ .setNameFormat("MutableQuantiles-%d").build());}}
>  
> So all the instances of MutableQuantiles are using a shared single-threaded 
> ExecutorService.
> The second thing to notice is this block of code in the constructor of 
> MutableQuantiles:
> {{this.scheduledTask = scheduler.scheduleAtFixedRate(new 
> MutableQuantiles.RolloverSample(this), (long)interval, (long)interval, 
> TimeUnit.SECONDS);}}
> So as soon as a MutableQuantiles instance is created, one task is scheduled 
> at a fixed rate. Instead of that, they could be scheduled at a fixed delay 
> (refer to HADOOP-16248). 
> Now coming to why it's related to S3. 
>  
> S3AFileSystem creates an instance of S3AInstrumentation, which creates two 
> quantiles (related to S3Guard) with a 1s (hardcoded) interval and leaves them 
> hanging. By hanging I mean perpetually scheduled. As and when new instances 
> of S3AFileSystem are created, two new quantiles are created, which in turn 
> create two scheduled tasks and never cancel them. This way the number of 
> scheduled tasks keeps on growing without ever getting cleaned up, leading to 
> GC/OOM/crash. 
>  
> MutableQuantiles has a numInfo field which holds things like the name of the 
> metric. From the heap dump, I found one numInfo and traced all objects 
> referencing it.
>  
> !Screenshot 2019-04-30 at 2.33.59 PM.png!
>  
> There seem to be 300K objects for the same metric 
> (S3Guard_metadatastore_throttle_rate). 
> As expected, there are another 300K objects for the other MutableQuantiles 
> created by the S3AInstrumentation class, 
> although the number of instances of the S3AInstrumentation class is only 4. 
> Clearly, there is a leak. One S3AInstrumentation instance is creating two 
> scheduled tasks to be run every second. These tasks are left scheduled and 
> not cancelled when S3AInstrumentation.close() is called. Hence, they are 
> never cleaned up. GC is also not able to collect them since they are 
> referenced by the scheduler. 
> Who creates S3AInstrumentation instances? S3AFileSystem.initialize(), which 
> is called in FileSystem.get(URI, Configuration). Since the Hive Metastore is 
> a service that deals with a lot of Path objects and hence needs to make a lot 
> of calls to FileSystem.get, it's the first to show these symptoms. 
> We're seeing similar symptoms in the AM for long-running jobs (for both Tez 
> AM and MR AM). 
>  
>  






[jira] [Commented] (HADOOP-16248) Fix MutableQuantiles memory leak

2019-05-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835586#comment-16835586
 ] 

Hadoop QA commented on HADOOP-16248:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HADOOP-16248 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16248 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16237/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Fix MutableQuantiles memory leak
> 
>
> Key: HADOOP-16248
> URL: https://issues.apache.org/jira/browse/HADOOP-16248
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.2
>Reporter: Alexis Daboville
>Priority: Major
> Attachments: mutable-quantiles-leak.png, mutable-quantiles.patch
>
>
> In some circumstances (high GC, high CPU usage, creating lots of
>  S3AFileSystem) it is possible for MutableQuantiles::scheduler [1] to fall
>  behind processing tasks that are submitted to it; because tasks are
>  submitted on a regular schedule, the unbounded queue backing the
>  {{ExecutorService}} might grow to several gigs [2]. By using
>  {{scheduleWithFixedDelay}} instead, we ensure that under pressure this leak 
> won't happen. To mitigate the growth, a simple fix [3] is proposed: 
> replacing {{scheduler.scheduleAtFixedRate}} with 
> {{scheduler.scheduleWithFixedDelay}}.
> [1] it is single threaded and shared across all instances of 
> {{MutableQuantiles}}: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableQuantiles.java#L66-L68]
> [2] see attached mutable-quantiles-leak.png.
> [3] mutable-quantiles.patch
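
For illustration, the whole fix amounts to a one-line swap in the 
MutableQuantiles constructor (sketched from the constructor referenced in [1]; 
not the exact patch):

{code:java}
// Current: fixed rate. When the shared single-threaded scheduler falls
// behind, the next run is already due, so pending work keeps accumulating.
scheduledTask = scheduler.scheduleAtFixedRate(
    new RolloverSample(this), interval, interval, TimeUnit.SECONDS);

// Proposed: fixed delay. The next run is scheduled only after the previous
// one finishes, so a slow run pushes the schedule back instead of piling up.
scheduledTask = scheduler.scheduleWithFixedDelay(
    new RolloverSample(this), interval, interval, TimeUnit.SECONDS);
{code}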






[jira] [Commented] (HADOOP-16248) Fix MutableQuantiles memory leak

2019-05-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835584#comment-16835584
 ] 

Steve Loughran commented on HADOOP-16248:
-

patch looks good. Hitting the "submit patch" button for an automated review

> Fix MutableQuantiles memory leak
> 
>
> Key: HADOOP-16248
> URL: https://issues.apache.org/jira/browse/HADOOP-16248
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.2
>Reporter: Alexis Daboville
>Priority: Major
> Attachments: mutable-quantiles-leak.png, mutable-quantiles.patch
>
>
> In some circumstances (high GC, high CPU usage, creating lots of
>  S3AFileSystem) it is possible for MutableQuantiles::scheduler [1] to fall
>  behind processing tasks that are submitted to it; because tasks are
>  submitted on a regular schedule, the unbounded queue backing the
>  {{ExecutorService}} might grow to several gigs [2]. By using
>  {{scheduleWithFixedDelay}} instead, we ensure that under pressure this leak 
> won't happen. To mitigate the growth, a simple fix [3] is proposed: 
> replacing {{scheduler.scheduleAtFixedRate}} with 
> {{scheduler.scheduleWithFixedDelay}}.
> [1] it is single threaded and shared across all instances of 
> {{MutableQuantiles}}: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableQuantiles.java#L66-L68]
> [2] see attached mutable-quantiles-leak.png.
> [3] mutable-quantiles.patch






[jira] [Updated] (HADOOP-16248) Fix MutableQuantiles memory leak

2019-05-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16248:

Status: Patch Available  (was: Open)

> Fix MutableQuantiles memory leak
> 
>
> Key: HADOOP-16248
> URL: https://issues.apache.org/jira/browse/HADOOP-16248
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.2
>Reporter: Alexis Daboville
>Priority: Major
> Attachments: mutable-quantiles-leak.png, mutable-quantiles.patch
>
>
> In some circumstances (high GC, high CPU usage, creating lots of
>  S3AFileSystem) it is possible for MutableQuantiles::scheduler [1] to fall
>  behind processing tasks that are submitted to it; because tasks are
>  submitted on a regular schedule, the unbounded queue backing the
>  {{ExecutorService}} might grow to several gigs [2]. By using
>  {{scheduleWithFixedDelay}} instead, we ensure that under pressure this leak 
> won't happen. To mitigate the growth, a simple fix [3] is proposed: 
> replacing {{scheduler.scheduleAtFixedRate}} with 
> {{scheduler.scheduleWithFixedDelay}}.
> [1] it is single threaded and shared across all instances of 
> {{MutableQuantiles}}: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/lib/MutableQuantiles.java#L66-L68]
> [2] see attached mutable-quantiles-leak.png.
> [3] mutable-quantiles.patch






[jira] [Commented] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835531#comment-16835531
 ] 

Hadoop QA commented on HADOOP-16287:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 7 new + 0 unchanged - 0 fixed = 7 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
5s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16287 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968179/HADOOP-16287-004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  findbugs  checkstyle  |
| uname | Linux a2952578474d 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 
18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 96dc5ce |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16236/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16236/testReport/ |
| Max. process+thread count | 1465 (vs. ulimit of 1) |
| modules | C: 

[GitHub] [hadoop] bgaborg opened a new pull request #802: HADOOP-16279. S3Guard: Implement time-based (TTL) expiry for entries …

2019-05-08 Thread GitBox
bgaborg opened a new pull request #802: HADOOP-16279. S3Guard: Implement 
time-based (TTL) expiry for entries …
URL: https://github.com/apache/hadoop/pull/802
 
 
   …(and tombstones)





[jira] [Commented] (HADOOP-16279) S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)

2019-05-08 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835512#comment-16835512
 ] 

Gabor Bota commented on HADOOP-16279:
-

PR is up, but some tests are failing. Maybe directly using 
{{S3Guard.getWithTtl}} in {{S3AFileSystem#innerGetFileStatus}} the way I do is 
not the best solution.
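
For context, a rough sketch of the kind of TTL check under discussion (a sketch 
only; {{isExpired}} and {{getNow}} follow the HADOOP-15621 additions, and the 
explicit TTL parameter is an assumption, not the actual patch):

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
import org.apache.hadoop.fs.s3a.s3guard.PathMetadata;
import org.apache.hadoop.fs.s3a.s3guard.S3Guard.ITtlTimeProvider;

// Rough sketch: treat a metadata entry as missing once it is older than the
// configured TTL, so the caller falls back to S3 for a fresh answer.
static PathMetadata getWithTtl(MetadataStore ms, Path path,
    ITtlTimeProvider timeProvider, long ttlMillis) throws IOException {
  PathMetadata pm = ms.get(path);
  if (pm == null) {
    return null;
  }
  if (pm.isExpired(ttlMillis, timeProvider.getNow())) {
    return null; // expired: don't serve stale metadata to the caller
  }
  return pm;
}
{code}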

> S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)
> ---
>
> Key: HADOOP-16279
> URL: https://issues.apache.org/jira/browse/HADOOP-16279
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
> added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}} 
> extends {{ExpirableMetadata}}, so all metadata entries in ddb can expire, but 
> the implementation is not done yet. 
> To complete this feature the following should be done:
> * Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
> * Implement metadata entry and tombstone expiry 
> I would like to start a debate on whether we need to use separate expiry 
> times for entries and tombstones. My +1 on not using separate settings - so 
> only one config name and value.
> 
> Notes:
> * In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, 
> using an existing feature in guava's cache implementation. Expiry is set with 
> {{fs.s3a.s3guard.local.ttl}}.
> * LocalMetadataStore's TTL and this TTL are different: that TTL uses the 
> guava cache's internal solution for expiring entries, while this is an 
> S3AFileSystem-level solution in S3Guard, a layer above all metadata stores.
> * This is not the same, and not using the [DDB's TTL 
> feature|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html].
>  We need a different behavior than what ddb promises: [cleaning once a day 
> with a background 
> job|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html]
>  is not usable for this feature - although it can be used as a general 
> cleanup solution separately and independently from S3Guard.
> * Use the same ttl for entries and authoritative directory listing
> * All entries can be expired. Then the returned metadata from the MS will be 
> null.
> * Add two new methods pruneExpiredTtl() and pruneExpiredTtl(String keyPrefix) 
> to MetadataStore interface. These methods will delete all expired metadata 
> from the ms.
> * Use last_updated field in ms for both file metadata and authoritative 
> directory expiry.
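
(For the two prune methods mentioned in the notes, the shape of the interface 
addition would be roughly the following; the signatures are assumptions based 
on the description, not the final API.)

{code:java}
import java.io.IOException;

// Sketch of the proposed MetadataStore additions; the checked exception
// mirrors the existing interface methods and is an assumption.
public interface TtlPruning {
  /** Delete all expired metadata from the metadata store. */
  void pruneExpiredTtl() throws IOException;

  /** Delete expired metadata under the given key prefix only. */
  void pruneExpiredTtl(String keyPrefix) throws IOException;
}
{code}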






[jira] [Updated] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-08 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-16287:
---
Attachment: HADOOP-16287-004.patch

> KerberosAuthenticationHandler Trusted Proxy Support for Knox
> 
>
> Key: HADOOP-16287
> URL: https://issues.apache.org/jira/browse/HADOOP-16287
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16287-001.patch, HADOOP-16287-002.patch, 
> HADOOP-16287-004.patch, HADOOP-16827-003.patch
>
>
> Knox passes doAs with the end user while accessing the RM and WebHDFS REST 
> APIs. Currently KerberosAuthenticationHandler sets the remote user to Knox. 
> Trusted proxy support is needed, reading the doAs query parameter.






[jira] [Commented] (HADOOP-16287) KerberosAuthenticationHandler Trusted Proxy Support for Knox

2019-05-08 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835487#comment-16835487
 ] 

Prabhu Joseph commented on HADOOP-16287:


[~eyang] Yes, using a request attribute to set and get doAsUser is not the right 
way. I have wrapped the request, overriding getRemoteUser to return doAsUser; 
see the sketch below. 
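
i.e. something along these lines (a minimal sketch; the class name is 
illustrative, not the exact patch):

{code:java}
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

// Wrap the authenticated request so downstream code sees the doAs user as
// the remote user. doAsUser is the already-authorized proxy target.
public class ProxyUserRequestWrapper extends HttpServletRequestWrapper {
  private final String doAsUser;

  public ProxyUserRequestWrapper(HttpServletRequest request, String doAsUser) {
    super(request);
    this.doAsUser = doAsUser;
  }

  @Override
  public String getRemoteUser() {
    return doAsUser;
  }
}
{code}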

> KerberosAuthenticationHandler Trusted Proxy Support for Knox
> 
>
> Key: HADOOP-16287
> URL: https://issues.apache.org/jira/browse/HADOOP-16287
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: auth
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: HADOOP-16287-001.patch, HADOOP-16287-002.patch, 
> HADOOP-16827-003.patch
>
>
> Knox passes doAs with the end user while accessing the RM and WebHDFS REST 
> APIs. Currently KerberosAuthenticationHandler sets the remote user to Knox. 
> Trusted proxy support is needed, reading the doAs query parameter.






[jira] [Updated] (HADOOP-16279) S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)

2019-05-08 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-16279:

Description: 
In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}} 
extends {{ExpirableMetadata}}, so all metadata entries in ddb can expire, but 
the implementation is not done yet. 

To complete this feature the following should be done:
* Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
* Implement metadata entry and tombstone expiry 

I would like to start a debate on whether we need to use separate expiry times 
for entries and tombstones. My +1 on not using separate settings - so only one 
config name and value.



Notes:
* In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, using 
an existing feature in guava's cache implementation. Expiry is set with 
{{fs.s3a.s3guard.local.ttl}}.
* LocalMetadataStore's TTL and this TTL are different: that TTL uses the 
guava cache's internal solution for expiring entries, while this is an 
S3AFileSystem-level solution in S3Guard, a layer above all metadata stores.
* This is not the same, and not using the [DDB's TTL 
feature|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html].
 We need a different behavior than what ddb promises: [cleaning once a day with 
a background 
job|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html]
 is not usable for this feature - although it can be used as a general cleanup 
solution separately and independently from S3Guard.
* Use the same ttl for entries and authoritative directory listing
* All entries can be expired. Then the returned metadata from the MS will be 
null.
* Add two new methods pruneExpiredTtl() and pruneExpiredTtl(String keyPrefix) 
to MetadataStore interface. These methods will delete all expired metadata from 
the ms.
* Use last_updated field in ms for both file metadata and authoritative 
directory expiry.

  was:
In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}} 
extends {{ExpirableMetadata}}, so all metadata entries in ddb can expire, but 
the implementation is not done yet. 

To complete this feature the following should be done:
* Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
* Implement metadata entry and tombstone expiry 

I would like to start a debate on whether we need to use separate expiry times 
for entries and tombstones. My +1 on not using separate settings - so only one 
config name and value.



Notes:
* In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, using 
an existing feature in guava's cache implementation. Expiry is set with 
{{fs.s3a.s3guard.local.ttl}}.
* This is not the same, and not using the [DDB's TTL 
feature|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html].
 We need a different behaviour than what ddb promises: [cleaning once a day 
with a background 
job|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/howitworks-ttl.html]
 is not usable for this feature - although it can be used as a general cleanup 
solution separately and independently from S3Guard.


> S3Guard: Implement time-based (TTL) expiry for entries (and tombstones)
> ---
>
> Key: HADOOP-16279
> URL: https://issues.apache.org/jira/browse/HADOOP-16279
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> In HADOOP-15621 we implemented TTL for Authoritative Directory Listings and 
> added {{ExpirableMetadata}}. {{DDBPathMetadata}} extends {{PathMetadata}} 
> extends {{ExpirableMetadata}}, so all metadata entries in ddb can expire, but 
> the implementation is not done yet. 
> To complete this feature the following should be done:
> * Add new tests for metadata entry and tombstone expiry to {{ITestS3GuardTtl}}
> * Implement metadata entry and tombstone expiry 
> I would like to start a debate on whether we need to use separate expiry 
> times for entries and tombstones. My +1 on not using separate settings - so 
> only one config name and value.
> 
> Notes:
> * In HADOOP-13649 the metadata TTL is implemented in LocalMetadataStore, 
> using an existing feature in guava's cache implementation. Expiry is set with 
> {{fs.s3a.s3guard.local.ttl}}.
> * LocalMetadataStore's TTL and this TTL are different: that TTL uses the 
> guava cache's internal solution for expiring entries, while this is an 
> S3AFileSystem-level solution in S3Guard, a layer above all metadata stores.
> * This is 

[jira] [Commented] (HADOOP-16293) AuthenticationFilterInitializer doc has speudo instead of pseudo

2019-05-08 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835468#comment-16835468
 ] 

Hudson commented on HADOOP-16293:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #16524 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16524/])
HADOOP-16293. AuthenticationFilterInitializer doc has speudo instead of 
(stevel: rev 96dc5cedfed7be1232b487b4994ebe9bae9a9f03)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationFilterInitializer.java
* (edit) 
hadoop-common-project/hadoop-common/dev-support/jdiff/Apache_Hadoop_Common_2.6.0.xml


> AuthenticationFilterInitializer doc has speudo instead of pseudo
> 
>
> Key: HADOOP-16293
> URL: https://issues.apache.org/jira/browse/HADOOP-16293
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, documentation
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.3.0
>
> Attachments: HADOOP-16293-001.patch
>
>
> AuthenticationFilterInitializer doc has speudo instead of pseudo.
> {code}
>  * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
> {code}






[jira] [Commented] (HADOOP-16293) AuthenticationFilterInitializer doc has speudo instead of pseudo

2019-05-08 Thread Prabhu Joseph (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835461#comment-16835461
 ] 

Prabhu Joseph commented on HADOOP-16293:


Thanks [~ste...@apache.org].

> AuthenticationFilterInitializer doc has speudo instead of pseudo
> 
>
> Key: HADOOP-16293
> URL: https://issues.apache.org/jira/browse/HADOOP-16293
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, documentation
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.3.0
>
> Attachments: HADOOP-16293-001.patch
>
>
> AuthenticationFilterInitializer doc has speudo instead of pseudo.
> {code}
>  * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
> {code}






[jira] [Updated] (HADOOP-16293) AuthenticationFilterInitializer doc has speudo instead of pseudo

2019-05-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16293:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> AuthenticationFilterInitializer doc has speudo instead of pseudo
> 
>
> Key: HADOOP-16293
> URL: https://issues.apache.org/jira/browse/HADOOP-16293
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, documentation
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.3.0
>
> Attachments: HADOOP-16293-001.patch
>
>
> AuthenticationFilterInitializer doc has speudo instead of pseudo.
> {code}
>  * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
> {code}






[jira] [Commented] (HADOOP-16293) AuthenticationFilterInitializer doc has speudo instead of pseudo

2019-05-08 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835456#comment-16835456
 ] 

Steve Loughran commented on HADOOP-16293:
-

+1
committed to trunk. There's some conflict with branch-3.2 (HADOOP-15785), so I 
left that alone.

thanks,

(ASF license error unrelated)

> AuthenticationFilterInitializer doc has speudo instead of pseudo
> 
>
> Key: HADOOP-16293
> URL: https://issues.apache.org/jira/browse/HADOOP-16293
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, documentation
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.3.0
>
> Attachments: HADOOP-16293-001.patch
>
>
> AuthenticationFilterInitializer doc has speudo instead of pseudo.
> {code}
>  * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
> {code}






[jira] [Updated] (HADOOP-16293) AuthenticationFilterInitializer doc has speudo instead of pseudo

2019-05-08 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16293:

Fix Version/s: 3.3.0

> AuthenticationFilterInitializer doc has speudo instead of pseudo
> 
>
> Key: HADOOP-16293
> URL: https://issues.apache.org/jira/browse/HADOOP-16293
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, documentation
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.3.0
>
> Attachments: HADOOP-16293-001.patch
>
>
> AuthenticationFilterInitializer doc has speudo instead of pseudo.
> {code}
>  * It enables anonymous access, simple/speudo and Kerberos HTTP SPNEGO
> {code}






[GitHub] [hadoop] hadoop-yetus commented on issue #800: HDDS-1458. Create a maven profile to run fault injection tests

2019-05-08 Thread GitBox
hadoop-yetus commented on issue #800: HDDS-1458. Create a maven profile to run 
fault injection tests
URL: https://github.com/apache/hadoop/pull/800#issuecomment-490388922
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 31 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 1 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 403 | trunk passed |
   | +1 | compile | 202 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1362 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 121 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 393 | the patch passed |
   | +1 | compile | 204 | the patch passed |
   | +1 | javac | 204 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 641 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 127 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 133 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1424 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 38 | The patch does not generate ASF License warnings. |
   | | | 4633 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-800/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/800 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 0b542b01a181 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3172f6c |
   | Default Java | 1.8.0_191 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-800/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-800/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-800/1/testReport/ |
   | Max. process+thread count | 4759 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-800/1/console |
   | versions | git=2.7.4 maven=3.3.9 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] avijayanhwx opened a new pull request #801: HDDS-1500 : Allocate block failures in client should print exception trace.

2019-05-08 Thread GitBox
avijayanhwx opened a new pull request #801: HDDS-1500 : Allocate block failures 
in client should print exception trace.
URL: https://github.com/apache/hadoop/pull/801
 
 
   Minor change. 





[jira] [Commented] (HADOOP-16293) AuthenticationFilterInitializer doc has speudo instead of pseudo

2019-05-08 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16835359#comment-16835359
 ] 

Hadoop QA commented on HADOOP-16293:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
55s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m 
22s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HADOOP-16293 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12968143/HADOOP-16293-001.patch
 |
| Optional Tests |  dupname  asflicense  xml  compile  javac  javadoc  
mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 88e22e35ec57 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon Mar 
18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c336af3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16235/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16235/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1710 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common 

[GitHub] [hadoop] elek opened a new pull request #800: HDDS-1458. Create a maven profile to run fault injection tests

2019-05-08 Thread GitBox
elek opened a new pull request #800: HDDS-1458. Create a maven profile to run 
fault injection tests
URL: https://github.com/apache/hadoop/pull/800
 
 
   Some fault injection tests have been written using blockade.  It would be 
nice to have the ability to start docker compose and exercise the blockade test 
cases against Ozone docker containers, and generate reports.  These are optional 
integration tests to catch race conditions and fault-tolerance defects. 
   
   We can introduce a profile with id: it (short for integration tests).  This 
will launch docker compose via the exec-maven-plugin and run blockade to simulate 
container failures and timeouts.
   
   Usage command:
   {code}
   mvn clean verify -Pit
   {code}
   
   See: https://issues.apache.org/jira/browse/HDDS-1458
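   
   A skeleton of such a profile might look like this (the plugin wiring and 
script name are illustrative, not the actual PR):
   
   {code:xml}
   <profile>
     <id>it</id>
     <build>
       <plugins>
         <plugin>
           <groupId>org.codehaus.mojo</groupId>
           <artifactId>exec-maven-plugin</artifactId>
           <executions>
             <execution>
               <id>blockade-fault-injection</id>
               <phase>integration-test</phase>
               <goals>
                 <goal>exec</goal>
               </goals>
               <configuration>
                 <!-- bring up the docker compose cluster and run blockade -->
                 <executable>src/test/blockade/run-blockade.sh</executable>
               </configuration>
             </execution>
           </executions>
         </plugin>
       </plugins>
     </build>
   </profile>
   {code}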

