[jira] [Commented] (HADOOP-16530) Update xercesImpl in branch-2

2019-08-26 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916397#comment-16916397
 ] 

Masatake Iwasaki commented on HADOOP-16530:
---

+1.

I found no problems in the unit tests of the possibly relevant modules listed 
below, with the patch applied locally.
{noformat}
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
 import javax.xml.parsers.DocumentBuilderFactory;
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/record/XmlRecordInput.java
 import javax.xml.parsers.SAXParserFactory;
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/lib/util/ConfigurationUtils.java
 import javax.xml.parsers.DocumentBuilderFactory;
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/QueueConfigurationParser.java
 import javax.xml.parsers.DocumentBuilderFactory;
hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/JobConfigurationParser.java
 import javax.xml.parsers.DocumentBuilderFactory;
hadoop-tools/hadoop-rumen/src/main/java/org/apache/hadoop/tools/rumen/ParsedConfigFile.java
 import javax.xml.parsers.DocumentBuilderFactory;
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/AllocationFileLoaderService.java
 import javax.xml.parsers.DocumentBuilderFactory;
{noformat}


> Update xercesImpl in branch-2
> -
>
> Key: HADOOP-16530
> URL: https://issues.apache.org/jira/browse/HADOOP-16530
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16530.branch-2.001.patch
>
>
> Hadoop 2 depends on xercesImpl 2.9.1, which is more than 10 years old. The 
> latest version is 2.12.0, released last year. Let's update this dependency.
> HDFS-12221 removed xercesImpl in Hadoop 3. Looking at HDFS-12221, the impact 
> of this dependency is very minimal: it is only used by the offlineimageviewer. 
> TestOfflineEditsViewer passed for me after the update. Not sure about the 
> impact on downstream applications though.






[jira] [Commented] (HADOOP-16530) Update xercesImpl in branch-2

2019-08-26 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916391#comment-16916391
 ] 

Masatake Iwasaki commented on HADOOP-16530:
---

Interestingly, the unit tests of hadoop-common use a different implementation of 
{{javax.xml.parsers.\*}}: hadoop-common uses 
{{com.sun.org.apache.xerces.internal.jaxp.\*}}, while hadoop-hdfs (and the 
projects depending on it) uses {{org.apache.xerces.jaxp.\*}} because of its 
dependency on xercesImpl.

I ran TestConfiguration and TestConfServlet after manually adding a dependency 
on xercesImpl 2.12.0 to hadoop-common/pom.xml and found no problems.
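
As a quick way to see which implementation a given module's test classpath 
resolves to, a throwaway check of my own (not part of the patch, class name is 
mine) along these lines can be run on that classpath:
{code:java}
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.SAXParserFactory;

public class WhichJaxpImpl {
  public static void main(String[] args) {
    // With xercesImpl on the classpath this should print org.apache.xerces.jaxp.*
    // classes; without it, the JDK built-in com.sun.org.apache.xerces.internal.jaxp.*.
    System.out.println(DocumentBuilderFactory.newInstance().getClass().getName());
    System.out.println(SAXParserFactory.newInstance().getClass().getName());
  }
}
{code}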

> Update xercesImpl in branch-2
> -
>
> Key: HADOOP-16530
> URL: https://issues.apache.org/jira/browse/HADOOP-16530
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16530.branch-2.001.patch
>
>
> Hadoop 2 depends on xercesImpl 2.9.1, which is more than 10 years old. The 
> latest version is 2.12.0, released last year. Let's update this dependency.
> HDFS-12221 removed xercesImpl in Hadoop 3. Looking at HDFS-12221, the impact 
> of this dependency is very minimal: it is only used by the offlineimageviewer. 
> TestOfflineEditsViewer passed for me after the update. Not sure about the 
> impact on downstream applications though.






[GitHub] [hadoop] aajisaka commented on issue #1350: YARN-9783. Remove low-level zookeeper test to be able to build Hadoop against zookeeper 3.5.5

2019-08-26 Thread GitBox
aajisaka commented on issue #1350: YARN-9783. Remove low-level zookeeper test 
to be able to build Hadoop against zookeeper 3.5.5
URL: https://github.com/apache/hadoop/pull/1350#issuecomment-525136857
 
 
   LGTM, +1 pending Jenkins.





[GitHub] [hadoop] lokeshj1703 commented on issue #1319: HDDS-1981: Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED state

2019-08-26 Thread GitBox
lokeshj1703 commented on issue #1319: HDDS-1981: Datanode should sync db when 
container is moved to CLOSED or QUASI_CLOSED state
URL: https://github.com/apache/hadoop/pull/1319#issuecomment-525136428
 
 
   @bshashikant @supratimdeka @nandakumar131 Thanks for reviewing the PR. I 
have merged it to trunk.





[GitHub] [hadoop] lokeshj1703 merged pull request #1319: HDDS-1981: Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED state

2019-08-26 Thread GitBox
lokeshj1703 merged pull request #1319: HDDS-1981: Datanode should sync db when 
container is moved to CLOSED or QUASI_CLOSED state
URL: https://github.com/apache/hadoop/pull/1319
 
 
   





[jira] [Updated] (HADOOP-15958) Revisiting LICENSE and NOTICE files

2019-08-26 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15958:
---
Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks [~iwasakims] for the reviews!

> Revisiting LICENSE and NOTICE files
> ---
>
> Key: HADOOP-15958
> URL: https://issues.apache.org/jira/browse/HADOOP-15958
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-15958-002.patch, HADOOP-15958-003.patch, 
> HADOOP-15958-004.patch, HADOOP-15958-wip.001.patch, HADOOP-15958.005.patch, 
> HADOOP-15958.006.patch, HADOOP-15958.007.patch
>
>
> Originally reported by [~jmclean]:
> * NOTICE file incorrectly lists copyrights that shouldn't be there and 
> mentions licenses such as MIT, BSD, and public domain that should be 
> mentioned in LICENSE only.
> * It's better to have a separate LICENSE and NOTICE for the source and binary 
> releases.
> http://www.apache.org/dev/licensing-howto.html






[GitHub] [hadoop] aajisaka commented on issue #1307: HADOOP-15958. Revisiting LICENSE and NOTICE files

2019-08-26 Thread GitBox
aajisaka commented on issue #1307: HADOOP-15958. Revisiting LICENSE and NOTICE 
files
URL: https://github.com/apache/hadoop/pull/1307#issuecomment-525135870
 
 
   Committed. Thank you, @iwasakims !





[GitHub] [hadoop] aajisaka closed pull request #1307: HADOOP-15958. Revisiting LICENSE and NOTICE files

2019-08-26 Thread GitBox
aajisaka closed pull request #1307: HADOOP-15958. Revisiting LICENSE and NOTICE 
files
URL: https://github.com/apache/hadoop/pull/1307
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1225: HDDS-1909. Use new HA code for Non-HA in OM.

2019-08-26 Thread GitBox
hadoop-yetus commented on issue #1225: HDDS-1909. Use new HA code for Non-HA in 
OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-525128821
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 48 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 24 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 499 | Maven dependency ordering for branch |
   | +1 | mvninstall | 946 | trunk passed |
   | +1 | compile | 471 | trunk passed |
   | +1 | checkstyle | 85 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 953 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 212 | trunk passed |
   | 0 | spotbugs | 444 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 665 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 48 | Maven dependency ordering for patch |
   | +1 | mvninstall | 575 | the patch passed |
   | +1 | compile | 402 | the patch passed |
   | +1 | javac | 402 | the patch passed |
   | +1 | checkstyle | 99 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 694 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 188 | the patch passed |
   | +1 | findbugs | 684 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 329 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2113 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 59 | The patch does not generate ASF License warnings. |
   | | | 9390 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.TestSecureOzoneCluster |
   |   | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/20/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle shellcheck shelldocs |
   | uname | Linux 8c8f60c8bcd6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 07e3cf9 |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/20/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/20/testReport/ |
   | Max. process+thread count | 5321 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/dist 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager 
hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/20/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] iwasakims commented on issue #1307: HADOOP-15958. Revisiting LICENSE and NOTICE files

2019-08-26 Thread GitBox
iwasakims commented on issue #1307: HADOOP-15958. Revisiting LICENSE and NOTICE 
files
URL: https://github.com/apache/hadoop/pull/1307#issuecomment-525128588
 
 
   +1. Thanks for the update. Further nits (in the original LICENSE.txt) should 
be fixed in follow-ups. Once the split is done, reviewing updates will be much 
easier.





[jira] [Updated] (HADOOP-16533) Update jackson-databind to 2.9.9.3

2019-08-26 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16533:
---
Target Version/s: 3.3.0

> Update jackson-databind to 2.9.9.3
> --
>
> Key: HADOOP-16533
> URL: https://issues.apache.org/jira/browse/HADOOP-16533
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Priority: Major
>
> The latest version is now 2.9.9.3, which fixes a regression in 2.9.9.2:
> https://github.com/FasterXML/jackson-databind/issues/2395






[jira] [Updated] (HADOOP-16533) Update jackson-databind to 2.9.9.3

2019-08-26 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16533:
---
Issue Type: Bug  (was: Improvement)

> Update jackson-databind to 2.9.9.3
> --
>
> Key: HADOOP-16533
> URL: https://issues.apache.org/jira/browse/HADOOP-16533
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Priority: Major
>
> The latest version is now 2.9.9.3, which fixes a regression in 2.9.9.2:
> https://github.com/FasterXML/jackson-databind/issues/2395






[jira] [Created] (HADOOP-16533) Update jackson-databind to 2.9.9.3

2019-08-26 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-16533:
--

 Summary: Update jackson-databind to 2.9.9.3
 Key: HADOOP-16533
 URL: https://issues.apache.org/jira/browse/HADOOP-16533
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Akira Ajisaka


The latest version is now 2.9.9.3, which fixes a regression in 2.9.9.2:
https://github.com/FasterXML/jackson-databind/issues/2395






[jira] [Commented] (HADOOP-16530) Update xercesImpl in branch-2

2019-08-26 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916293#comment-16916293
 ] 

Masatake Iwasaki commented on HADOOP-16530:
---

Thanks for working on this, [~jojochuang].
{quote}Looking at HDFS-12221, the impact of this dependency is very minimal: 
only used by offlineimageviewer
{quote}
xercesImpl seems to be used by Configuration too. If the jar is on the 
classpath, the Xerces classes are loaded via the ServiceLoader mechanism.
{noformat}
$ jar tvf hadoop-2.9.2/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar | grep 
META-INF/services
 0 Fri Sep 14 18:22:18 JST 2007 META-INF/services/
52 Fri Sep 14 18:22:18 JST 2007 
META-INF/services/javax.xml.datatype.DatatypeFactory
50 Fri Sep 14 18:22:18 JST 2007 
META-INF/services/javax.xml.parsers.DocumentBuilderFactory
44 Fri Sep 14 18:22:18 JST 2007 
META-INF/services/javax.xml.parsers.SAXParserFactory
51 Fri Sep 14 18:22:18 JST 2007 
META-INF/services/javax.xml.validation.SchemaFactory
51 Fri Sep 14 18:22:18 JST 2007 
META-INF/services/org.w3c.dom.DOMImplementationSourceList
37 Fri Sep 14 18:22:18 JST 2007 META-INF/services/org.xml.sax.driver
{noformat}
I can see the xercesImpl classes being loaded in the debug log emitted by the 
NameNode when the system property {{-Djaxp.debug=true}} is set.
{noformat}
JAXP: find factoryId =javax.xml.parsers.SAXParserFactory
JAXP: found jar resource=META-INF/services/javax.xml.parsers.SAXParserFactory 
using ClassLoader: sun.misc.Launcher$AppClassLoader@756095fc
JAXP: found in resource, value=org.apache.xerces.jaxp.SAXParserFactoryImpl
JAXP: created new instance of class org.apache.xerces.jaxp.SAXParserFactoryImpl 
using ClassLoader: sun.misc.Launcher$AppClassLoader@756095fc
JAXP: find factoryId =javax.xml.parsers.SAXParserFactory
JAXP: found jar resource=META-INF/services/javax.xml.parsers.SAXParserFactory 
using ClassLoader: sun.misc.Launcher$AppClassLoader@756095fc
JAXP: found in resource, value=org.apache.xerces.jaxp.SAXParserFactoryImpl
JAXP: created new instance of class org.apache.xerces.jaxp.SAXParserFactoryImpl 
using ClassLoader: sun.misc.Launcher$AppClassLoader@756095fc
JAXP: find factoryId =javax.xml.parsers.SAXParserFactory
JAXP: found jar resource=META-INF/services/javax.xml.parsers.SAXParserFactory 
using ClassLoader: ContextLoader@hdfs
JAXP: found in resource, value=org.apache.xerces.jaxp.SAXParserFactoryImpl
JAXP: created new instance of class org.apache.xerces.jaxp.SAXParserFactoryImpl 
using ClassLoader: ContextLoader@hdfs
JAXP: find factoryId =javax.xml.parsers.DocumentBuilderFactory
JAXP: found jar 
resource=META-INF/services/javax.xml.parsers.DocumentBuilderFactory using 
ClassLoader: ContextLoader@hdfs
JAXP: found in resource, value=org.apache.xerces.jaxp.DocumentBuilderFactoryImpl
JAXP: created new instance of class 
org.apache.xerces.jaxp.DocumentBuilderFactoryImpl using ClassLoader: 
ContextLoader@hdfs
JAXP: find factoryId =javax.xml.transform.TransformerFactory
JAXP: loaded from fallback value: 
com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl
JAXP: created new instance of class 
com.sun.org.apache.xalan.internal.xsltc.trax.TransformerFactoryImpl using 
ClassLoader: null
{noformat}
If the xercesImpl jar is not on the classpath, an alternative implementation, 
such as the JDK built-in one, is used.
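
The lookup can also be confirmed programmatically. A minimal sketch (illustrative 
only, class name is mine) that lists the same META-INF/services entries shown in 
the jar listing above and prints the factory that wins the lookup:
{code:java}
import java.net.URL;
import java.util.Enumeration;
import javax.xml.parsers.DocumentBuilderFactory;

public class JaxpLookupCheck {
  public static void main(String[] args) throws Exception {
    // JAXP consults, in order: the javax.xml.parsers.DocumentBuilderFactory
    // system property, jaxp.properties in the JRE, META-INF/services entries
    // on the classpath, and finally the JDK built-in default.
    Enumeration<URL> entries = Thread.currentThread().getContextClassLoader()
        .getResources("META-INF/services/javax.xml.parsers.DocumentBuilderFactory");
    while (entries.hasMoreElements()) {
      System.out.println("service entry: " + entries.nextElement());
    }
    System.out.println("resolved factory: "
        + DocumentBuilderFactory.newInstance().getClass().getName());
  }
}
{code}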

> Update xercesImpl in branch-2
> -
>
> Key: HADOOP-16530
> URL: https://issues.apache.org/jira/browse/HADOOP-16530
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16530.branch-2.001.patch
>
>
> Hadoop 2 depends on xercesImpl 2.9.1, which is more than 10 years old. The 
> latest version is 2.12.0, released last year. Let's update this dependency.
> HDFS-12221 removed xercesImpl in Hadoop 3. Looking at HDFS-12221, the impact 
> of this dependency is very minimal: it is only used by the offlineimageviewer. 
> TestOfflineEditsViewer passed for me after the update. Not sure about the 
> impact on downstream applications though.






[jira] [Commented] (HADOOP-16485) Remove dependency on jackson

2019-08-26 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916289#comment-16916289
 ] 

Akira Ajisaka commented on HADOOP-16485:


bq. So what is the rule?

IMHO, security is more important than compatibility.

> Remove dependency on jackson
> 
>
> Key: HADOOP-16485
> URL: https://issues.apache.org/jira/browse/HADOOP-16485
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
>
> Looking at the git history, there have been 5 commits related to updating 
> jackson versions due to various CVEs since 2018, and it seems to be getting 
> worse recently.
> Filing this jira to discuss the possibility of removing the jackson dependency 
> once and for all. Jackson is deeply integrated into the Hadoop codebase, so 
> this is not a trivial task. However, if Hadoop is forced to make a new set of 
> releases because of Jackson vulnerabilities, it may start to look not so costly.
> At the very least, consider stripping the jackson-databind code, since that's 
> where the majority of CVEs come from.






[jira] [Commented] (HADOOP-16485) Remove dependency on jackson

2019-08-26 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16916288#comment-16916288
 ] 

Akira Ajisaka commented on HADOOP-16485:


bq. We should get rid of org.codehaus.jackson too.

* jersey 1.x depends on jackson 1.9.13. We need to upgrade jersey 
(HADOOP-15984). This task is very difficult and help is wanted. Thanks.
* Avro < 1.9.0 depends on jackson 1.9.13. We need to upgrade Avro to 1.9.0 
(HADOOP-13386).

> Remove dependency on jackson
> 
>
> Key: HADOOP-16485
> URL: https://issues.apache.org/jira/browse/HADOOP-16485
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
>
> Looking at the git history, there have been 5 commits related to updating 
> jackson versions due to various CVEs since 2018, and it seems to be getting 
> worse recently.
> Filing this jira to discuss the possibility of removing the jackson dependency 
> once and for all. Jackson is deeply integrated into the Hadoop codebase, so 
> this is not a trivial task. However, if Hadoop is forced to make a new set of 
> releases because of Jackson vulnerabilities, it may start to look not so costly.
> At the very least, consider stripping the jackson-databind code, since that's 
> where the majority of CVEs come from.






[GitHub] [hadoop] aajisaka commented on issue #1307: HADOOP-15958. Revisiting LICENSE and NOTICE files

2019-08-26 Thread GitBox
aajisaka commented on issue #1307: HADOOP-15958. Revisiting LICENSE and NOTICE 
files
URL: https://github.com/apache/hadoop/pull/1307#issuecomment-525102811
 
 
   > CDDLs are not contained in licenses-binary.
   
   Added licenses-binary/LICENSE-cddl-gplv2-ce.txt





[GitHub] [hadoop] aajisaka commented on a change in pull request #1307: HADOOP-15958. Revisiting LICENSE and NOTICE files

2019-08-26 Thread GitBox
aajisaka commented on a change in pull request #1307: HADOOP-15958. Revisiting 
LICENSE and NOTICE files
URL: https://github.com/apache/hadoop/pull/1307#discussion_r317864952
 
 

 ##
 File path: LICENSE-binary
 ##
 @@ -0,0 +1,532 @@
+
+ Apache License
+   Version 2.0, January 2004
+http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+  "License" shall mean the terms and conditions for use, reproduction,
+  and distribution as defined by Sections 1 through 9 of this document.
+
+  "Licensor" shall mean the copyright owner or entity authorized by
+  the copyright owner that is granting the License.
+
+  "Legal Entity" shall mean the union of the acting entity and all
+  other entities that control, are controlled by, or are under common
+  control with that entity. For the purposes of this definition,
+  "control" means (i) the power, direct or indirect, to cause the
+  direction or management of such entity, whether by contract or
+  otherwise, or (ii) ownership of fifty percent (50%) or more of the
+  outstanding shares, or (iii) beneficial ownership of such entity.
+
+  "You" (or "Your") shall mean an individual or Legal Entity
+  exercising permissions granted by this License.
+
+  "Source" form shall mean the preferred form for making modifications,
+  including but not limited to software source code, documentation
+  source, and configuration files.
+
+  "Object" form shall mean any form resulting from mechanical
+  transformation or translation of a Source form, including but
+  not limited to compiled object code, generated documentation,
+  and conversions to other media types.
+
+  "Work" shall mean the work of authorship, whether in Source or
+  Object form, made available under the License, as indicated by a
+  copyright notice that is included in or attached to the work
+  (an example is provided in the Appendix below).
+
+  "Derivative Works" shall mean any work, whether in Source or Object
+  form, that is based on (or derived from) the Work and for which the
+  editorial revisions, annotations, elaborations, or other modifications
+  represent, as a whole, an original work of authorship. For the purposes
+  of this License, Derivative Works shall not include works that remain
+  separable from, or merely link (or bind by name) to the interfaces of,
+  the Work and Derivative Works thereof.
+
+  "Contribution" shall mean any work of authorship, including
+  the original version of the Work and any modifications or additions
+  to that Work or Derivative Works thereof, that is intentionally
+  submitted to Licensor for inclusion in the Work by the copyright owner
+  or by an individual or Legal Entity authorized to submit on behalf of
+  the copyright owner. For the purposes of this definition, "submitted"
+  means any form of electronic, verbal, or written communication sent
+  to the Licensor or its representatives, including but not limited to
+  communication on electronic mailing lists, source code control systems,
+  and issue tracking systems that are managed by, or on behalf of, the
+  Licensor for the purpose of discussing and improving the Work, but
+  excluding communication that is conspicuously marked or otherwise
+  designated in writing by the copyright owner as "Not a Contribution."
+
+  "Contributor" shall mean Licensor and any individual or Legal Entity
+  on behalf of whom a Contribution has been received by Licensor and
+  subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+  this License, each Contributor hereby grants to You a perpetual,
+  worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+  copyright license to reproduce, prepare Derivative Works of,
+  publicly display, publicly perform, sublicense, and distribute the
+  Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+  this License, each Contributor hereby grants to You a perpetual,
+  worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+  (except as stated in this section) patent license to make, have made,
+  use, offer to sell, sell, import, and otherwise transfer the Work,
+  where such license applies only to those patent claims licensable
+  by such Contributor that are necessarily infringed by their
+  Contribution(s) alone or by combination of their Contribution(s)
+  with the Work to which such Contribution(s) was submitted. If You
+  institute patent litigation against any entity (including a
+  cross-claim or counterclaim in a 

[GitHub] [hadoop] bharatviswa504 commented on issue #1225: HDDS-1909. Use new HA code for Non-HA in OM.

2019-08-26 Thread GitBox
bharatviswa504 commented on issue #1225: HDDS-1909. Use new HA code for Non-HA 
in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-525076456
 
 
   /retest





[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1225: HDDS-1909. Use new HA code for Non-HA in OM.

2019-08-26 Thread GitBox
hadoop-yetus removed a comment on issue #1225: HDDS-1909. Use new HA code for 
Non-HA in OM.
URL: https://github.com/apache/hadoop/pull/1225#issuecomment-524522115
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 49 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 21 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 683 | trunk passed |
   | +1 | compile | 375 | trunk passed |
   | +1 | checkstyle | 72 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 833 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 157 | trunk passed |
   | 0 | spotbugs | 432 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 623 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for patch |
   | +1 | mvninstall | 529 | the patch passed |
   | +1 | compile | 356 | the patch passed |
   | +1 | javac | 356 | the patch passed |
   | -0 | checkstyle | 34 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 630 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 155 | the patch passed |
   | +1 | findbugs | 632 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 275 | hadoop-hdds in the patch passed. |
   | -1 | unit | 333 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 6055 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.recon.recovery.TestReconOmMetadataManagerImpl |
   |   | hadoop.ozone.recon.spi.impl.TestOzoneManagerServiceProviderImpl |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/19/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1225 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 617da2d7d6af 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d2225c8 |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/19/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/19/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/19/testReport/ |
   | Max. process+thread count | 1338 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/integration-test 
hadoop-ozone/ozone-manager hadoop-ozone/ozone-recon U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1225/19/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] vivekratnavel commented on issue #1311: HDDS-1946. CertificateClient should not persist keys/certs to ozone.m…

2019-08-26 Thread GitBox
vivekratnavel commented on issue #1311: HDDS-1946. CertificateClient should not 
persist keys/certs to ozone.m…
URL: https://github.com/apache/hadoop/pull/1311#issuecomment-525066689
 
 
   Created https://issues.apache.org/jira/browse/HDDS-2040 to track the 
integration test failure.





[GitHub] [hadoop] vivekratnavel commented on issue #1311: HDDS-1946. CertificateClient should not persist keys/certs to ozone.m…

2019-08-26 Thread GitBox
vivekratnavel commented on issue #1311: HDDS-1946. CertificateClient should not 
persist keys/certs to ozone.m…
URL: https://github.com/apache/hadoop/pull/1311#issuecomment-525051986
 
 
   Hi @xiaoyuyao 
   
   I checked the cause of the failure in the integration test 
`TestSecureContainerServer.testClientServerRatisGrpc`, and it is not related to 
this change. It fails due to a block token verification failure, and it fails 
for the same reason on trunk.
   
   ```
   Caused by: org.apache.ratis.protocol.StateMachineException: 
   org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: 
   Block token verification failed. Fail to find any token (empty or null.)
   ```





[GitHub] [hadoop] hadoop-yetus commented on issue #1353: HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…

2019-08-26 Thread GitBox
hadoop-yetus commented on issue #1353: HDDS-1927. Consolidate add/remove Acl 
into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1353#issuecomment-525046724
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 75 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ ozone-0.4.1 Compile Tests _ |
   | 0 | mvndep | 96 | Maven dependency ordering for branch |
   | +1 | mvninstall | 843 | ozone-0.4.1 passed |
   | +1 | compile | 363 | ozone-0.4.1 passed |
   | +1 | checkstyle | 78 | ozone-0.4.1 passed |
   | +1 | mvnsite | 0 | ozone-0.4.1 passed |
   | +1 | shadedclient | 1055 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | ozone-0.4.1 passed |
   | 0 | spotbugs | 430 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 637 | ozone-0.4.1 passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 552 | the patch passed |
   | +1 | compile | 374 | the patch passed |
   | +1 | javac | 374 | the patch passed |
   | -0 | checkstyle | 39 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 741 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | the patch passed |
   | +1 | findbugs | 652 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 341 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2300 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 8744 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1353/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1353 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 770902329df7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | ozone-0.4.1 / ab7605b |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1353/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1353/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1353/1/testReport/ |
   | Max. process+thread count | 4619 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1353/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] xiaoyuyao commented on issue #1311: HDDS-1946. CertificateClient should not persist keys/certs to ozone.m…

2019-08-26 Thread GitBox
xiaoyuyao commented on issue #1311: HDDS-1946. CertificateClient should not 
persist keys/certs to ozone.m…
URL: https://github.com/apache/hadoop/pull/1311#issuecomment-525044434
 
 
   The test failure in TestSecureContainerServer seems related. Can you take a 
look, @vivekratnavel?





[GitHub] [hadoop] nandakumar131 commented on issue #1351: HDDS-2037. Fix hadoop version in pom.ozone.xml.

2019-08-26 Thread GitBox
nandakumar131 commented on issue #1351: HDDS-2037. Fix hadoop version in 
pom.ozone.xml.
URL: https://github.com/apache/hadoop/pull/1351#issuecomment-525027240
 
 
   @anuengineer Until now, all testing has been done with Hadoop 3.2.0, and the 
previous Ozone release (0.4.0-alpha) was also built with Hadoop 3.2.0 as its 
dependency.
   Is there any reason to go to 3.1.0 now?





[GitHub] [hadoop] hadoop-yetus commented on issue #1329: HDDS-738. Removing REST protocol support from OzoneClient

2019-08-26 Thread GitBox
hadoop-yetus commented on issue #1329: HDDS-738. Removing REST protocol support 
from OzoneClient
URL: https://github.com/apache/hadoop/pull/1329#issuecomment-525021740
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 70 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 5 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 48 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 81 | Maven dependency ordering for branch |
   | +1 | mvninstall | 629 | trunk passed |
   | +1 | compile | 380 | trunk passed |
   | +1 | checkstyle | 82 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 865 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | trunk passed |
   | 0 | spotbugs | 442 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 647 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 40 | Maven dependency ordering for patch |
   | +1 | mvninstall | 550 | the patch passed |
   | +1 | compile | 379 | the patch passed |
   | +1 | javac | 102 | hadoop-hdds in the patch passed. |
   | +1 | javac | 277 | hadoop-ozone generated 0 new + 5 unchanged - 3 fixed = 
5 total (was 8) |
   | +1 | checkstyle | 88 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 6 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 771 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 93 | hadoop-ozone generated 1 new + 26 unchanged - 1 fixed 
= 27 total (was 27) |
   | +1 | findbugs | 643 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 341 | hadoop-hdds in the patch passed. |
   | -1 | unit | 57 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 43 | The patch does not generate ASF License warnings. |
   | | | 6315 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestOmUtils |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1329 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml shellcheck shelldocs |
   | uname | Linux f9b0b83cb1bc 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d1aa859 |
   | Default Java | 1.8.0_222 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/6/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/6/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/6/testReport/ |
   | Max. process+thread count | 436 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone hadoop-ozone/client 
hadoop-ozone/common hadoop-ozone/datanode hadoop-ozone/dist 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager hadoop-ozone/ozonefs 
hadoop-ozone/s3gateway hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/6/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] dineshchitlangia commented on issue #1345: HDDS-2013. Add flag gdprEnabled for BucketInfo in OzoneManager proto

2019-08-26 Thread GitBox
dineshchitlangia commented on issue #1345: HDDS-2013. Add flag gdprEnabled for 
BucketInfo in OzoneManager proto
URL: https://github.com/apache/hadoop/pull/1345#issuecomment-525003014
 
 
   > Cool. But why do we add the gdprEnabled flag as a dedicated field in the 
protocol instead of adding a [gdprEnabled]=true metadata?
   
   @elek Thanks for your thoughts. It makes sense to use the metadata hashmap 
for all such properties, so we can ignore this PR. I will update the jira 
summary to reflect that.
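
   To make the trade-off concrete, here is a minimal sketch (illustrative only, 
not the actual Ozone BucketInfo API or proto) of carrying the flag in the 
metadata map instead of a dedicated field:
   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Illustrative sketch only -- contrasts the two designs discussed above.
   public class BucketInfoSketch {
     // Dedicated-field design (what this PR originally added):
     // optional bool gdprEnabled = n;  // would require a protocol change per property

     // Metadata-map design (what we agreed on): new properties need no proto change.
     private final Map<String, String> metadata = new HashMap<>();

     public void enableGdpr() {
       metadata.put("gdprEnabled", "true"); // key name is illustrative
     }

     public boolean isGdprEnabled() {
       return Boolean.parseBoolean(metadata.getOrDefault("gdprEnabled", "false"));
     }
   }
   ```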
   
   





[GitHub] [hadoop] elek commented on issue #1345: HDDS-2013. Add flag gdprEnabled for BucketInfo in OzoneManager proto

2019-08-26 Thread GitBox
elek commented on issue #1345: HDDS-2013. Add flag gdprEnabled for BucketInfo 
in OzoneManager proto
URL: https://github.com/apache/hadoop/pull/1345#issuecomment-524999802
 
 
   Cool. But why do we add the gdprEnabled flag as a dedicated field in the 
protocol instead of adding a [gdprEnabled]=true metadata? 
   
   (I am not against it, just trying to understand the motivation...)





[GitHub] [hadoop] xiaoyuyao opened a new pull request #1353: HDDS-1927. Consolidate add/remove Acl into OzoneAclUtil class. Contri…

2019-08-26 Thread GitBox
xiaoyuyao opened a new pull request #1353: HDDS-1927. Consolidate add/remove 
Acl into OzoneAclUtil class. Contri…
URL: https://github.com/apache/hadoop/pull/1353
 
 
   This is to verify the cherry-pick of HDDS-1927 from trunk to ozone-0.4.1.
   
   …buted by Xiaoyu Yao.
   
   Signed-off-by: Anu Engineer 
   (cherry picked from commit d58eba867234eaac0e229feb990e9dab3912e063)





[GitHub] [hadoop] anuengineer commented on issue #1351: HDDS-2037. Fix hadoop version in pom.ozone.xml.

2019-08-26 Thread GitBox
anuengineer commented on issue #1351: HDDS-2037. Fix hadoop version in 
pom.ozone.xml.
URL: https://github.com/apache/hadoop/pull/1351#issuecomment-524988938
 
 
   @elek Are there any known issues if we depend on 3.1.0, like with OzoneFS?





[GitHub] [hadoop] anuengineer commented on issue #1351: HDDS-2037. Fix hadoop version in pom.ozone.xml.

2019-08-26 Thread GitBox
anuengineer commented on issue #1351: HDDS-2037. Fix hadoop version in 
pom.ozone.xml.
URL: https://github.com/apache/hadoop/pull/1351#issuecomment-524988662
 
 
   Nanda, it might be a good idea to depend on version 3.1.0. @arp7, @jnp, can 
we do that in this pull request?





[GitHub] [hadoop] hadoop-yetus commented on issue #1329: HDDS-738. Removing REST protocol support from OzoneClient

2019-08-26 Thread GitBox
hadoop-yetus commented on issue #1329: HDDS-738. Removing REST protocol support 
from OzoneClient
URL: https://github.com/apache/hadoop/pull/1329#issuecomment-524983994
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 4 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 48 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 80 | Maven dependency ordering for branch |
   | +1 | mvninstall | 617 | trunk passed |
   | +1 | compile | 380 | trunk passed |
   | +1 | checkstyle | 81 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 857 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 176 | trunk passed |
   | 0 | spotbugs | 439 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 644 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 40 | Maven dependency ordering for patch |
   | -1 | mvninstall | 287 | hadoop-ozone in the patch failed. |
   | -1 | compile | 238 | hadoop-ozone in the patch failed. |
   | -1 | javac | 238 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 84 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 6 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 731 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 94 | hadoop-ozone generated 1 new + 27 unchanged - 1 fixed 
= 28 total (was 28) |
   | -1 | findbugs | 360 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 334 | hadoop-hdds in the patch passed. |
   | -1 | unit | 57 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 5944 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestOmUtils |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1329 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml shellcheck shelldocs |
   | uname | Linux 05d3a5ab9fa3 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6d7f01c |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/5/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/5/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/5/artifact/out/patch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/5/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/5/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/5/testReport/ |
   | Max. process+thread count | 428 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone hadoop-ozone/client 
hadoop-ozone/common hadoop-ozone/datanode hadoop-ozone/dist 
hadoop-ozone/integration-test hadoop-ozone/ozone-manager hadoop-ozone/ozonefs 
hadoop-ozone/s3gateway hadoop-ozone/tools U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1329/5/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[GitHub] [hadoop] hadoop-yetus commented on issue #1229: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-08-26 Thread GitBox
hadoop-yetus commented on issue #1229: HADOOP-16490. Improve S3Guard handling 
of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1229#issuecomment-524975812
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 627 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 1711 | Maven dependency ordering for branch |
   | +1 | mvninstall |  | trunk passed |
   | +1 | compile | 1008 | trunk passed |
   | +1 | checkstyle | 145 | trunk passed |
   | +1 | mvnsite | 133 | trunk passed |
   | +1 | shadedclient | 1006 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 114 | trunk passed |
   | 0 | spotbugs | 68 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 192 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 82 | the patch passed |
   | +1 | compile | 942 | the patch passed |
   | +1 | javac | 942 | the patch passed |
   | +1 | checkstyle | 146 | root: The patch generated 0 new + 97 unchanged - 2 
fixed = 97 total (was 99) |
   | +1 | mvnsite | 130 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 672 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 114 | the patch passed |
   | +1 | findbugs | 206 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 547 | hadoop-common in the patch passed. |
   | +1 | unit | 93 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 9096 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1229 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 2e49e7b6df2e 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 689d2e6 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/16/testReport/ |
   | Max. process+thread count | 1508 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1229/16/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] anuengineer commented on issue #1349: HDDS-2026. Overlapping chunk region cannot be read concurrently

2019-08-26 Thread GitBox
anuengineer commented on issue #1349: HDDS-2026. Overlapping chunk region 
cannot be read concurrently
URL: https://github.com/apache/hadoop/pull/1349#issuecomment-524973708
 
 
   LGTM, I will test it to make sure it works as expected. I will also wait for 
others to comment and then commit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on a change in pull request #1351: HDDS-2037. Fix hadoop version in pom.ozone.xml.

2019-08-26 Thread GitBox
adoroszlai commented on a change in pull request #1351: HDDS-2037. Fix hadoop 
version in pom.ozone.xml.
URL: https://github.com/apache/hadoop/pull/1351#discussion_r317728393
 
 

 ##
 File path: hadoop-hdds/server-scm/pom.xml
 ##
 @@ -100,10 +100,6 @@
       <groupId>org.bouncycastle</groupId>
       <artifactId>bcprov-jdk15on</artifactId>
     </dependency>
-    <dependency>
-      <groupId>io.dropwizard.metrics</groupId>
-      <artifactId>metrics-core</artifactId>
-    </dependency>
 
 Review comment:
   Thanks @nandakumar131 for fixing these Maven warnings, too.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 merged pull request #1315: HDDS-1975. Implement default acls for bucket/volume/key for OM HA code.

2019-08-26 Thread GitBox
bharatviswa504 merged pull request #1315: HDDS-1975. Implement default acls for 
bucket/volume/key for OM HA code.
URL: https://github.com/apache/hadoop/pull/1315
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bharatviswa504 commented on issue #1315: HDDS-1975. Implement default acls for bucket/volume/key for OM HA code.

2019-08-26 Thread GitBox
bharatviswa504 commented on issue #1315: HDDS-1975. Implement default acls for 
bucket/volume/key for OM HA code.
URL: https://github.com/apache/hadoop/pull/1315#issuecomment-524964926
 
 
   Thank You @xiaoyuyao for the review.
   I will commit this to the trunk.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dineshchitlangia edited a comment on issue #1340: HDDS-2021. Upgrade Guava library to v26 in hdds project

2019-08-26 Thread GitBox
dineshchitlangia edited a comment on issue #1340: HDDS-2021. Upgrade Guava 
library to v26 in hdds project
URL: https://github.com/apache/hadoop/pull/1340#issuecomment-524956616
 
 
   > @dineshchitlangia I think we have a build issue, can you please check?
   
   @anuengineer After rebasing to trunk, I see the problem is bigger than I 
originally presumed. The dependency convergence issue is now affecting multiple 
modules in hadoop, yarn, mr.
   I will spend some more time on this to see what is the best way to fix it. 
Right now, a no-brainer approach is to update the version across all modules; 
however, I am sure it will also lead to a lot of related code changes where the 
methods may have been changed or removed in the new version.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dineshchitlangia commented on issue #1340: HDDS-2021. Upgrade Guava library to v26 in hdds project

2019-08-26 Thread GitBox
dineshchitlangia commented on issue #1340: HDDS-2021. Upgrade Guava library to 
v26 in hdds project
URL: https://github.com/apache/hadoop/pull/1340#issuecomment-524956616
 
 
   > @dineshchitlangia I think we have a build issue, can you please check?
   
   @anuengineer After rebasing to trunk, I see the problem is bigger than I 
originally presumed. The dependency convergence issue is now affecting multiple 
modules in hadoop, yarn, mr.
   I will spend some more time on this to see what is the best way to fix it. 
Right now, a no-brainer approach is to update the version across all modules; 
however, I am sure it will also lead to a lot of related code changes where the 
methods may have been changed or removed in the new version.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dineshchitlangia edited a comment on issue #1345: HDDS-2013. Add flag gdprEnabled for BucketInfo in OzoneManager proto

2019-08-26 Thread GitBox
dineshchitlangia edited a comment on issue #1345: HDDS-2013. Add flag 
gdprEnabled for BucketInfo in OzoneManager proto
URL: https://github.com/apache/hadoop/pull/1345#issuecomment-524953767
 
 
   > We can also use the existing metadata hashmap to store this kind of data. 
It would make it easier to add more features without changing the protocol 
later...
   
   @elek That is absolutely correct. We intend to use the same for storing the 
symmetric encryption key info. Glad that we are on the same page even before it 
started!! 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dineshchitlangia commented on issue #1345: HDDS-2013. Add flag gdprEnabled for BucketInfo in OzoneManager proto

2019-08-26 Thread GitBox
dineshchitlangia commented on issue #1345: HDDS-2013. Add flag gdprEnabled for 
BucketInfo in OzoneManager proto
URL: https://github.com/apache/hadoop/pull/1345#issuecomment-524953767
 
 
   > We can also use the existing metadata hashmap to store this kind of data. 
It would make it easier to add more features without changing the protocol 
later...
   
   @elek That is absolutely correct. We intend to use the same for this. Glad 
that we are on the same page even before it started!! 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek commented on issue #1345: HDDS-2013. Add flag gdprEnabled for BucketInfo in OzoneManager proto

2019-08-26 Thread GitBox
elek commented on issue #1345: HDDS-2013. Add flag gdprEnabled for BucketInfo 
in OzoneManager proto
URL: https://github.com/apache/hadoop/pull/1345#issuecomment-524952097
 
 
   We can also use the existing metadata hashmap to store this kind of data. 
It would make it easier to add more features without changing the protocol 
later...
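
   A rough sketch of that idea, in plain Java rather than the actual Ozone API; the map and the key name below are stand-ins chosen for illustration only:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: "metadata" stands in for the existing per-bucket
// string-to-string metadata map; "gdprEnabled" is an assumed key name.
public class BucketMetadataFlagSketch {
  private static final String GDPR_ENABLED_KEY = "gdprEnabled";

  public static void main(String[] args) {
    Map<String, String> metadata = new HashMap<>();

    // Writer: store the feature flag as just another metadata entry
    // instead of adding a new protobuf field.
    metadata.put(GDPR_ENABLED_KEY, Boolean.TRUE.toString());

    // Reader: clients that predate the feature simply ignore the unknown key;
    // newer clients read it with a safe default.
    boolean gdprEnabled = Boolean.parseBoolean(
        metadata.getOrDefault(GDPR_ENABLED_KEY, "false"));
    System.out.println("gdprEnabled = " + gdprEnabled);
  }
}
```

   Keeping the flag in the metadata map means the wire protocol stays unchanged and future flags can be added the same way.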


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16439) Upgrade bundled Tomcat in branch-2

2019-08-26 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16439:
-
Labels: release-blocker  (was: )

> Upgrade bundled Tomcat in branch-2
> --
>
> Key: HADOOP-16439
> URL: https://issues.apache.org/jira/browse/HADOOP-16439
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: httpfs, kms
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16439-branch-2.000.patch, 
> HADOOP-16439-branch-2.001.patch
>
>
> proposed by [~jojochuang] on the mailing list:
> {quote}We migrated from Tomcat to Jetty in Hadoop3, because Tomcat 6 went EOL 
> in
>  2016. But we did not realize three years after Tomcat 6's EOL, a majority
>  of Hadoop users are still in Hadoop 2, and it looks like Hadoop 2 will stay
>  alive for another few years.
> Backporting Jetty to Hadoop2 is probably too big of an incompatibility.
>  How about migrating to Tomcat9?
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16485) Remove dependency on jackson

2019-08-26 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16485:
-
Labels: release-blocker  (was: )

> Remove dependency on jackson
> 
>
> Key: HADOOP-16485
> URL: https://issues.apache.org/jira/browse/HADOOP-16485
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
>
> Looking at the git history, there were 5 commits related to updating jackson 
> versions due to various CVEs since 2018, and it seems to be getting worse 
> recently.
> Filing this jira to discuss the possibility of removing the jackson dependency 
> once and for all. I see that jackson is deeply integrated into the Hadoop 
> codebase, so this is not a trivial task. However, if Hadoop is forced to make 
> a new set of releases because of Jackson vulnerabilities, it may start to look 
> not so costly.
> At the very least, consider stripping the jackson-databind code, since that's 
> where the majority of CVEs come from.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16530) Update xercesImpl in branch-2

2019-08-26 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16530:
-
Labels: release-blocker  (was: )

> Update xercesImpl in branch-2
> -
>
> Key: HADOOP-16530
> URL: https://issues.apache.org/jira/browse/HADOOP-16530
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
> Attachments: HADOOP-16530.branch-2.001.patch
>
>
> Hadoop 2 depends on xercesImpl 2.9.1, which is more than 10 years old. The 
> latest version is 2.12.0, released last year. Let's update this dependency.
> HDFS-12221 removed xercesImpl in Hadoop 3. Looking at HDFS-12221, the impact 
> of this dependency is very minimal: only used by offlineimageviewer. 
> TestOfflineEditsViewer passed for me after the update. Not sure about the 
> impact of downstream applications though.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on issue #1349: HDDS-2026. Overlapping chunk region cannot be read concurrently

2019-08-26 Thread GitBox
adoroszlai commented on issue #1349: HDDS-2026. Overlapping chunk region cannot 
be read concurrently
URL: https://github.com/apache/hadoop/pull/1349#issuecomment-524934246
 
 
   @anuengineer @bshashikant @elek please review


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on a change in pull request #1341: HDDS-2022. Add additional freon tests

2019-08-26 Thread GitBox
adoroszlai commented on a change in pull request #1341: HDDS-2022. Add 
additional freon tests
URL: https://github.com/apache/hadoop/pull/1341#discussion_r317673438
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java
 ##
 @@ -0,0 +1,329 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.freon;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.InetSocketAddress;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ipc.Client;
+import org.apache.hadoop.ipc.ProtobufRpcEngine;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes;
+import 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB;
+import org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolPB;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import com.codahale.metrics.ConsoleReporter;
+import com.codahale.metrics.MetricRegistry;
+import org.apache.commons.codec.digest.DigestUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.ratis.protocol.ClientId;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import picocli.CommandLine.Option;
+import picocli.CommandLine.ParentCommand;
+
+/**
+ * Base class for simplified performance tests.
+ */
+public class BaseFreonGenerator {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(BaseFreonGenerator.class);
+
+  private static final int CHECK_INTERVAL_MILLIS = 1000;
+
+  private static final String DIGEST_ALGORITHM = "MD5";
+
+  private static final Pattern ENV_VARIABLE_IN_PATTERN =
+  Pattern.compile("__(.+?)__");
+
+  @ParentCommand
+  private Freon freonCommand;
+
+  @Option(names = {"-n", "--number-of-tests"},
+  description = "Number of the generated objects.",
+  defaultValue = "1000")
+  private long testNo = 1000;
+
+  @Option(names = {"-t", "--threads", "--thread"},
+  description = "Number of threads used to execute",
+  defaultValue = "10")
+  private int threadNo;
+
+  @Option(names = {"-f", "--fail-at-end"},
+  description = "If turned on, all the tasks will be executed even if "
+  + "there are failures.")
+  private boolean failAtEnd;
+
+  @Option(names = {"-p", "--prefix"},
+  description = "Unique identifier of the test execution. Usually used as"
+  + " a prefix of the generated object names. If empty, a random name"
+  + " will be generated",
+  defaultValue = "")
+  private String prefix = "";
+
+  private MetricRegistry metrics = new MetricRegistry();
+
+  private ExecutorService executor;
+
+  private AtomicLong successCounter;
+
+  private AtomicLong failureCounter;
+
+  private long startTime;
+
+  private PathSchema pathSchema;
+
+  /**
+   * The main logic to execute a test generator.
+   *
+   * @param provider creates the new steps to execute.
+   */
+  public void runTests(TaskProvider provider) {
+
+executor = Executors.newFixedThreadPool(threadNo);
+
+ProgressBar progressBar =
+new ProgressBar(System.out, testNo, successCounter::get);
+progressBar.start();
+
+startTime = System.currentTimeMillis();
+//schedule the execution of all the tasks.
+
+for (long i = 0; i < testNo; i++) {
+
+  final long counter = i;
+
+  executor.execute(() -> {
+try {
+
+  //in case of an other failed test, we shouldn't execute more tasks.
+  if (!failAtEnd && failureCounter.get() > 0) {
+return;
+  }
+
+  provider.executeNextTask(counter);
+  

[GitHub] [hadoop] adoroszlai commented on a change in pull request #1341: HDDS-2022. Add additional freon tests

2019-08-26 Thread GitBox
adoroszlai commented on a change in pull request #1341: HDDS-2022. Add 
additional freon tests
URL: https://github.com/apache/hadoop/pull/1341#discussion_r317672618
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java
 ##
 @@ -0,0 +1,329 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.freon;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.InetSocketAddress;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ipc.Client;
+import org.apache.hadoop.ipc.ProtobufRpcEngine;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes;
+import 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB;
+import org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolPB;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import com.codahale.metrics.ConsoleReporter;
+import com.codahale.metrics.MetricRegistry;
+import org.apache.commons.codec.digest.DigestUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.ratis.protocol.ClientId;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import picocli.CommandLine.Option;
+import picocli.CommandLine.ParentCommand;
+
+/**
+ * Base class for simplified performance tests.
+ */
+public class BaseFreonGenerator {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(BaseFreonGenerator.class);
+
+  private static final int CHECK_INTERVAL_MILLIS = 1000;
+
+  private static final String DIGEST_ALGORITHM = "MD5";
+
+  private static final Pattern ENV_VARIABLE_IN_PATTERN =
+  Pattern.compile("__(.+?)__");
+
+  @ParentCommand
+  private Freon freonCommand;
+
+  @Option(names = {"-n", "--number-of-tests"},
+  description = "Number of the generated objects.",
+  defaultValue = "1000")
+  private long testNo = 1000;
+
+  @Option(names = {"-t", "--threads", "--thread"},
+  description = "Number of threads used to execute",
+  defaultValue = "10")
+  private int threadNo;
+
+  @Option(names = {"-f", "--fail-at-end"},
+  description = "If turned on, all the tasks will be executed even if "
+  + "there are failures.")
+  private boolean failAtEnd;
+
+  @Option(names = {"-p", "--prefix"},
+  description = "Unique identifier of the test execution. Usually used as"
+  + " a prefix of the generated object names. If empty, a random name"
+  + " will be generated",
+  defaultValue = "")
+  private String prefix = "";
+
+  private MetricRegistry metrics = new MetricRegistry();
+
+  private ExecutorService executor;
+
+  private AtomicLong successCounter;
+
+  private AtomicLong failureCounter;
+
+  private long startTime;
+
+  private PathSchema pathSchema;
+
+  /**
+   * The main logic to execute a test generator.
+   *
+   * @param provider creates the new steps to execute.
+   */
+  public void runTests(TaskProvider provider) {
+
+executor = Executors.newFixedThreadPool(threadNo);
+
+ProgressBar progressBar =
+new ProgressBar(System.out, testNo, successCounter::get);
+progressBar.start();
+
+startTime = System.currentTimeMillis();
+//schedule the execution of all the tasks.
+
+for (long i = 0; i < testNo; i++) {
+
+  final long counter = i;
+
+  executor.execute(() -> {
+try {
+
+  //in case of an other failed test, we shouldn't execute more tasks.
+  if (!failAtEnd && failureCounter.get() > 0) {
+return;
+  }
+
+  provider.executeNextTask(counter);
+  

[GitHub] [hadoop] adoroszlai commented on a change in pull request #1341: HDDS-2022. Add additional freon tests

2019-08-26 Thread GitBox
adoroszlai commented on a change in pull request #1341: HDDS-2022. Add 
additional freon tests
URL: https://github.com/apache/hadoop/pull/1341#discussion_r317563960
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/SameKeyReader.java
 ##
 @@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.freon;
+
+import java.io.InputStream;
+import java.security.MessageDigest;
+import java.util.concurrent.Callable;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+
+import com.codahale.metrics.Timer;
+import org.apache.commons.io.IOUtils;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+/**
+ * Data generator tool test om performance.
+ */
+@Command(name = "ocokr",
+aliases = "ozone-client-one-key-reader",
+description = "Read the same key from multiple threads.",
+versionProvider = HddsVersionProvider.class,
+mixinStandardHelpOptions = true,
+showDefaultValues = true)
+public class SameKeyReader extends BaseFreonGenerator
+implements Callable<Void> {
+
+  @Option(names = {"-v", "--volume"},
+  description = "Name of the bucket which contains the test data. Will be"
+  + " created if missing.",
+  defaultValue = "vol1")
+  private String volumeName;
+
+  @Option(names = {"-b", "--bucket"},
+  description = "Name of the bucket which contains the test data. Will be"
+  + " created if missing.",
+  defaultValue = "bucket1")
+  private String bucketName;
+
+  @Option(names = {"-k", "--key"},
+  description = "Name of the key read from multiple threads",
+  defaultValue = "bucket1")
+  private String keyName;
+
+  private Timer timer;
+
+  private OzoneBucket bucket;
+
+  private ContentGenerator contentGenerator;
+
+  private byte[] referenceDigest;
+
+  private OzoneClient rpcClient;
+
+  @Override
+  public Void call() throws Exception {
+
+init();
+
+OzoneConfiguration ozoneConfiguration = createOzoneConfiguration();
+
+rpcClient = OzoneClientFactory.getRpcClient(ozoneConfiguration);
+
+try (InputStream stream = rpcClient.getObjectStore().getVolume(volumeName)
+.getBucket(bucketName).readKey(keyName)) {
+  referenceDigest = getDigest(stream);
+}
+
+timer = getMetrics().timer("key-create");
 
 Review comment:
   ```suggestion
   timer = getMetrics().timer("key-validate");
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on a change in pull request #1341: HDDS-2022. Add additional freon tests

2019-08-26 Thread GitBox
adoroszlai commented on a change in pull request #1341: HDDS-2022. Add 
additional freon tests
URL: https://github.com/apache/hadoop/pull/1341#discussion_r317652114
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/HadoopFsValidator.java
 ##
 @@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.freon;
+
+import java.net.URI;
+import java.security.MessageDigest;
+import java.util.concurrent.Callable;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+
+import com.codahale.metrics.Timer;
+import org.apache.commons.io.IOUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+/**
+ * Data generator tool test om performance.
+ */
+@Command(name = "dfsv",
+aliases = "dfs-file-validator",
+description = "Validate if the generated files have the same hash.",
+versionProvider = HddsVersionProvider.class,
+mixinStandardHelpOptions = true,
+showDefaultValues = true)
+public class HadoopFsValidator extends BaseFreonGenerator
+implements Callable<Void> {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(HadoopFsValidator.class);
+
+  @Option(names = {"--path"},
+  description = "Hadoop FS file system path",
+  defaultValue = "o3fs://bucket1.vol1")
+  private String rootPath;
+
+  private ContentGenerator contentGenerator;
+
+  private Timer timer;
+
+  private FileSystem fileSystem;
+
+  private byte[] referenceDigest;
+
+  @Override
+  public Void call() throws Exception {
+
+init();
+
+OzoneConfiguration configuration = createOzoneConfiguration();
+
+fileSystem = FileSystem.get(URI.create(rootPath), configuration);
+
+Path file = new Path(rootPath + "/" + generateObjectName(0));
+try (FSDataInputStream stream = fileSystem.open(file)) {
+  referenceDigest = getDigest(stream);
+}
+
+timer = getMetrics().timer("gile-read");
 
 Review comment:
   ```suggestion
   timer = getMetrics().timer("file-read");
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on a change in pull request #1341: HDDS-2022. Add additional freon tests

2019-08-26 Thread GitBox
adoroszlai commented on a change in pull request #1341: HDDS-2022. Add 
additional freon tests
URL: https://github.com/apache/hadoop/pull/1341#discussion_r317648557
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/BaseFreonGenerator.java
 ##
 @@ -0,0 +1,329 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.freon;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.InetSocketAddress;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ipc.Client;
+import org.apache.hadoop.ipc.ProtobufRpcEngine;
+import org.apache.hadoop.ipc.RPC;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes;
+import 
org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB;
+import org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolPB;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import com.codahale.metrics.ConsoleReporter;
+import com.codahale.metrics.MetricRegistry;
+import org.apache.commons.codec.digest.DigestUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.ratis.protocol.ClientId;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import picocli.CommandLine.Option;
+import picocli.CommandLine.ParentCommand;
+
+/**
+ * Base class for simplified performance tests.
+ */
+public class BaseFreonGenerator {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(BaseFreonGenerator.class);
+
+  private static final int CHECK_INTERVAL_MILLIS = 1000;
+
+  private static final String DIGEST_ALGORITHM = "MD5";
+
+  private static final Pattern ENV_VARIABLE_IN_PATTERN =
+  Pattern.compile("__(.+?)__");
+
+  @ParentCommand
+  private Freon freonCommand;
+
+  @Option(names = {"-n", "--number-of-tests"},
+  description = "Number of the generated objects.",
+  defaultValue = "1000")
+  private long testNo = 1000;
+
+  @Option(names = {"-t", "--threads", "--thread"},
+  description = "Number of threads used to execute",
+  defaultValue = "10")
+  private int threadNo;
+
+  @Option(names = {"-f", "--fail-at-end"},
+  description = "If turned on, all the tasks will be executed even if "
+  + "there are failures.")
+  private boolean failAtEnd;
+
+  @Option(names = {"-p", "--prefix"},
+  description = "Unique identifier of the test execution. Usually used as"
+  + " a prefix of the generated object names. If empty, a random name"
+  + " will be generated",
+  defaultValue = "")
+  private String prefix = "";
+
+  private MetricRegistry metrics = new MetricRegistry();
+
+  private ExecutorService executor;
+
+  private AtomicLong successCounter;
+
+  private AtomicLong failureCounter;
+
+  private long startTime;
+
+  private PathSchema pathSchema;
+
+  /**
+   * The main logic to execute a test generator.
+   *
+   * @param provider creates the new steps to execute.
+   */
+  public void runTests(TaskProvider provider) {
+
+executor = Executors.newFixedThreadPool(threadNo);
+
+ProgressBar progressBar =
+new ProgressBar(System.out, testNo, successCounter::get);
+progressBar.start();
+
+startTime = System.currentTimeMillis();
+//schedule the execution of all the tasks.
+
+for (long i = 0; i < testNo; i++) {
+
+  final long counter = i;
+
+  executor.execute(() -> {
+try {
+
+  //in case of an other failed test, we shouldn't execute more tasks.
+  if (!failAtEnd && failureCounter.get() > 0) {
+return;
+  }
+
+  provider.executeNextTask(counter);
+  

[GitHub] [hadoop] adoroszlai commented on a change in pull request #1341: HDDS-2022. Add additional freon tests

2019-08-26 Thread GitBox
adoroszlai commented on a change in pull request #1341: HDDS-2022. Add 
additional freon tests
URL: https://github.com/apache/hadoop/pull/1341#discussion_r317121604
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/OzoneClientKeyValidator.java
 ##
 @@ -0,0 +1,105 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.freon;
+
+import java.io.InputStream;
+import java.security.MessageDigest;
+import java.util.concurrent.Callable;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+
+import com.codahale.metrics.Timer;
+import org.apache.commons.io.IOUtils;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+/**
+ * Data generator tool test om performance.
+ */
+@Command(name = "ockv",
+aliases = "ozone-client-key-validator",
+description = "Generate keys with the help of the ozone clients.",
+versionProvider = HddsVersionProvider.class,
+mixinStandardHelpOptions = true,
+showDefaultValues = true)
+public class OzoneClientKeyValidator extends BaseFreonGenerator
+implements Callable<Void> {
+
+  @Option(names = {"-v", "--volume"},
+  description = "Name of the bucket which contains the test data. Will be"
+  + " created if missing.",
+  defaultValue = "vol1")
+  private String volumeName;
+
+  @Option(names = {"-b", "--bucket"},
+  description = "Name of the bucket which contains the test data. Will be"
+  + " created if missing.",
+  defaultValue = "bucket1")
+  private String bucketName;
+
+  private Timer timer;
+
+  private OzoneBucket bucket;
+
+  private ContentGenerator contentGenerator;
+
+  private byte[] referenceDigest;
+
+  private OzoneClient rpcClient;
+
+  @Override
+  public Void call() throws Exception {
+
+init();
+
+OzoneConfiguration ozoneConfiguration = createOzoneConfiguration();
+
+rpcClient = OzoneClientFactory.getRpcClient(ozoneConfiguration);
+
+try (InputStream stream = rpcClient.getObjectStore().getVolume(volumeName)
+.getBucket(bucketName).readKey(generateObjectName(0))) {
+  referenceDigest = getDigest(stream);
+}
+
+timer = getMetrics().timer("key-create");
 
 Review comment:
   ```suggestion
   timer = getMetrics().timer("key-validate");
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on a change in pull request #1341: HDDS-2022. Add additional freon tests

2019-08-26 Thread GitBox
adoroszlai commented on a change in pull request #1341: HDDS-2022. Add 
additional freon tests
URL: https://github.com/apache/hadoop/pull/1341#discussion_r317121487
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/OzoneClientKeyValidator.java
 ##
 @@ -0,0 +1,105 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.freon;
+
+import java.io.InputStream;
+import java.security.MessageDigest;
+import java.util.concurrent.Callable;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+
+import com.codahale.metrics.Timer;
+import org.apache.commons.io.IOUtils;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+/**
+ * Data generator tool test om performance.
+ */
+@Command(name = "ockv",
+aliases = "ozone-client-key-validator",
+description = "Generate keys with the help of the ozone clients.",
+versionProvider = HddsVersionProvider.class,
+mixinStandardHelpOptions = true,
+showDefaultValues = true)
+public class OzoneClientKeyValidator extends BaseFreonGenerator
+implements Callable<Void> {
+
+  @Option(names = {"-v", "--volume"},
+  description = "Name of the bucket which contains the test data. Will be"
+  + " created if missing.",
 
 Review comment:
   The validator does not create missing volume/bucket (nor should it), so 
`Will be created if missing.` should be removed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on a change in pull request #1341: HDDS-2022. Add additional freon tests

2019-08-26 Thread GitBox
adoroszlai commented on a change in pull request #1341: HDDS-2022. Add 
additional freon tests
URL: https://github.com/apache/hadoop/pull/1341#discussion_r317563795
 
 

 ##
 File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/SameKeyReader.java
 ##
 @@ -0,0 +1,109 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.freon;
+
+import java.io.InputStream;
+import java.security.MessageDigest;
+import java.util.concurrent.Callable;
+
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+
+import com.codahale.metrics.Timer;
+import org.apache.commons.io.IOUtils;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+/**
+ * Data generator tool test om performance.
+ */
+@Command(name = "ocokr",
+aliases = "ozone-client-one-key-reader",
+description = "Read the same key from multiple threads.",
+versionProvider = HddsVersionProvider.class,
+mixinStandardHelpOptions = true,
+showDefaultValues = true)
+public class SameKeyReader extends BaseFreonGenerator
+implements Callable<Void> {
+
+  @Option(names = {"-v", "--volume"},
+  description = "Name of the bucket which contains the test data. Will be"
+  + " created if missing.",
+  defaultValue = "vol1")
+  private String volumeName;
+
+  @Option(names = {"-b", "--bucket"},
+  description = "Name of the bucket which contains the test data. Will be"
+  + " created if missing.",
+  defaultValue = "bucket1")
+  private String bucketName;
+
+  @Option(names = {"-k", "--key"},
+  description = "Name of the key read from multiple threads",
+  defaultValue = "bucket1")
 
 Review comment:
   `bucket1` doesn't seem to be a good default value for a key. :)  I think 
this should default to `""`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] pingsutw commented on issue #1328: HDDS-1998. TestSecureContainerServer#testClientServerRatisGrpc is fai…

2019-08-26 Thread GitBox
pingsutw commented on issue #1328: HDDS-1998. 
TestSecureContainerServer#testClientServerRatisGrpc is fai…
URL: https://github.com/apache/hadoop/pull/1328#issuecomment-524932029
 
 
   @adoroszlai Thanks for your help
   Updated the patch 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dineshchitlangia commented on issue #1352: HDDS-2037. Fix hadoop version in pom.ozone.xml.

2019-08-26 Thread GitBox
dineshchitlangia commented on issue #1352: HDDS-2037. Fix hadoop version in 
pom.ozone.xml.
URL: https://github.com/apache/hadoop/pull/1352#issuecomment-524924019
 
 
   LGTM, +1 non-binding, pending Jenkins


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2019-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915917#comment-16915917
 ] 

Hadoop QA commented on HADOOP-15361:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} HADOOP-15361 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15361 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920997/HADOOP-15361.04.patch 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16506/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RawLocalFileSystem should use Java nio framework for rename
> ---
>
> Key: HADOOP-15361
> URL: https://issues.apache.org/jira/browse/HADOOP-15361
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>  Labels: incompatibleChange
> Attachments: HADOOP-15361.01.patch, HADOOP-15361.02.patch, 
> HADOOP-15361.03.patch, HADOOP-15361.04.patch
>
>
> Currently RawLocalFileSystem uses a fallback logic for cross-volume renames. 
> The fallback logic is a copy-on-fail logic: when the rename fails it copies the 
> source and then deletes it.
>  An additional fallback logic was needed for Windows to provide POSIX rename 
> behavior.
> Due to the fallback logic RawLocalFileSystem does not pass the contract tests 
> (HADOOP-13082).
> By using the Java nio framework, both could be eliminated since it is not 
> platform dependent and provides cross-volume renames.
> In addition, the fallback logic for Windows is not correct, since Java io 
> overrides the destination only if the source is also a directory, but the 
> handleEmptyDstDirectoryOnWindows method checks only the destination. That 
> means rename allows overriding a directory with a file on Windows but not on 
> Unix.
> File#renameTo and Files#move are not 100% compatible:
>  If the source is a directory and the destination is an empty directory, 
> File#renameTo overrides the destination but Files#move does not. We have to use 
> {{StandardCopyOption.REPLACE_EXISTING}}, but it overrides the destination even 
> if the source or the destination is a file. So to make them compatible we have 
> to check that either the source or the destination is a directory before we 
> add the copy option.
> I think the correct strategy is
>  * Where the contract test passed so far, it should pass after this
>  * Where the contract test failed because of a Java-specific thing and not 
> because of the fallback logic, we should keep the original behavior.
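
A minimal, self-contained sketch of the File#renameTo vs Files#move gap described above. This is illustrative only, not the attached patch; applying REPLACE_EXISTING only when both paths are directories is one reading of the check mentioned in the description, and the class name is made up for the example.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class RenameCompatSketch {
  public static void main(String[] args) throws IOException {
    Path srcDir = Files.createTempDirectory("rename-src");
    Path dstDir = Files.createTempDirectory("rename-dst"); // empty destination directory
    Files.createFile(srcDir.resolve("data.txt"));

    // Old behavior: File#renameTo is platform dependent and may replace an
    // empty destination directory on POSIX; failures only show up as "false".
    boolean renamed = srcDir.toFile().renameTo(dstDir.toFile());
    System.out.println("File#renameTo succeeded: " + renamed);

    if (!renamed) {
      // Plain Files#move refuses to replace an existing destination
      // (FileAlreadyExistsException), so REPLACE_EXISTING is needed to match
      // the old semantics. Guarding it (here: only when both paths are
      // directories) keeps a file from silently overwriting another file.
      if (Files.isDirectory(srcDir) && Files.isDirectory(dstDir)) {
        Files.move(srcDir, dstDir, StandardCopyOption.REPLACE_EXISTING);
      } else {
        Files.move(srcDir, dstDir);
      }
      System.out.println("Files#move fallback succeeded");
    }
  }
}
```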



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on issue #1315: HDDS-1975. Implement default acls for bucket/volume/key for OM HA code.

2019-08-26 Thread GitBox
xiaoyuyao commented on issue #1315: HDDS-1975. Implement default acls for 
bucket/volume/key for OM HA code.
URL: https://github.com/apache/hadoop/pull/1315#issuecomment-524922722
 
 
   +1, Thanks @bharatviswa504 for the update.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1222: HADOOP-16488 Deprecated JsonSerialization and move it out of hadoop-c…

2019-08-26 Thread GitBox
steveloughran commented on a change in pull request #1222: HADOOP-16488 
Deprecated JsonSerialization and move it out of hadoop-c…
URL: https://github.com/apache/hadoop/pull/1222#discussion_r317676033
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
 ##
 @@ -62,13 +65,12 @@
 import java.util.WeakHashMap;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.CopyOnWriteArrayList;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-import java.util.regex.PatternSyntaxException;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicReference;
-
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+import java.util.regex.PatternSyntaxException;
 
 Review comment:
   and similarly, don't reorder stuff even when it's wrong... it makes backporting 
too hard


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1222: HADOOP-16488 Deprecated JsonSerialization and move it out of hadoop-c…

2019-08-26 Thread GitBox
steveloughran commented on a change in pull request #1222: HADOOP-16488 
Deprecated JsonSerialization and move it out of hadoop-c…
URL: https://github.com/apache/hadoop/pull/1222#discussion_r317675782
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
 ##
 @@ -18,14 +18,18 @@
 
 package org.apache.hadoop.conf;
 
+import static org.apache.commons.lang3.StringUtils.isBlank;
 
 Review comment:
   these should go at the bottom; imports should be as unchanged as possible


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1222: HADOOP-16488 Deprecated JsonSerialization and move it out of hadoop-c…

2019-08-26 Thread GitBox
hadoop-yetus removed a comment on issue #1222: HADOOP-16488 Deprecated 
JsonSerialization and move it out of hadoop-c…
URL: https://github.com/apache/hadoop/pull/1222#issuecomment-522086393
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 1 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1151 | trunk passed |
   | +1 | compile | 1191 | trunk passed |
   | +1 | checkstyle | 162 | trunk passed |
   | +1 | mvnsite | 335 | trunk passed |
   | +1 | shadedclient | 1231 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 225 | trunk passed |
   | 0 | spotbugs | 54 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | 0 | findbugs | 23 | branch/hadoop-project no findbugs output file 
(findbugsXml.xml) |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 218 | the patch passed |
   | +1 | compile | 1055 | the patch passed |
   | -1 | javac | 1055 | root generated 13 new + 1475 unchanged - 0 fixed = 
1488 total (was 1475) |
   | -0 | checkstyle | 130 | root: The patch generated 5 new + 358 unchanged - 
3 fixed = 363 total (was 361) |
   | -1 | mvnsite | 22 | hadoop-kms in the patch failed. |
   | -1 | mvnsite | 35 | hadoop-hdfs-client in the patch failed. |
   | -1 | mvnsite | 27 | hadoop-mapreduce-client-core in the patch failed. |
   | -1 | mvnsite | 26 | hadoop-azure in the patch failed. |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 643 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 27 | hadoop-common-project_hadoop-kms generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | -1 | javadoc | 36 | hadoop-hdfs-project_hadoop-hdfs-client generated 3 new 
+ 0 unchanged - 0 fixed = 3 total (was 0) |
   | -1 | javadoc | 25 | hadoop-tools_hadoop-azure generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | 0 | findbugs | 22 | hadoop-project has no data from findbugs |
   | -1 | findbugs | 22 | hadoop-kms in the patch failed. |
   | -1 | findbugs | 34 | hadoop-hdfs-client in the patch failed. |
   | -1 | findbugs | 26 | hadoop-mapreduce-client-core in the patch failed. |
   | -1 | findbugs | 22 | hadoop-azure in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 20 | hadoop-project in the patch passed. |
   | -1 | unit | 472 | hadoop-common in the patch failed. |
   | -1 | unit | 20 | hadoop-kms in the patch failed. |
   | -1 | unit | 36 | hadoop-hdfs-client in the patch failed. |
   | -1 | unit | 28 | hadoop-mapreduce-client-core in the patch failed. |
   | +1 | unit | 87 | hadoop-aws in the patch passed. |
   | -1 | unit | 23 | hadoop-azure in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 8187 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.security.token.delegation.web.TestWebDelegationToken |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1222 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux e23e6cc66404 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9a1d8cf |
   | Default Java | 1.8.0_222 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/4/artifact/out/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/4/artifact/out/diff-checkstyle-root.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/4/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-kms.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/4/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/4/artifact/out/patch-mvnsite-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
 |
  

[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1222: HADOOP-16488 Deprecated JsonSerialization and move it out of hadoop-c…

2019-08-26 Thread GitBox
hadoop-yetus removed a comment on issue #1222: HADOOP-16488 Deprecated 
JsonSerialization and move it out of hadoop-c…
URL: https://github.com/apache/hadoop/pull/1222#issuecomment-519469472
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1070 | trunk passed |
   | +1 | compile | 1112 | trunk passed |
   | +1 | checkstyle | 145 | trunk passed |
   | +1 | mvnsite | 290 | trunk passed |
   | +1 | shadedclient | 1136 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 227 | trunk passed |
   | 0 | spotbugs | 52 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | 0 | findbugs | 21 | branch/hadoop-project no findbugs output file 
(findbugsXml.xml) |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 208 | the patch passed |
   | +1 | compile | 1084 | the patch passed |
   | -1 | javac | 1084 | root generated 13 new + 1482 unchanged - 0 fixed = 
1495 total (was 1482) |
   | -0 | checkstyle | 146 | root: The patch generated 5 new + 358 unchanged - 
3 fixed = 363 total (was 361) |
   | +1 | mvnsite | 349 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 683 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 27 | hadoop-tools_hadoop-azure generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | 0 | findbugs | 20 | hadoop-project has no data from findbugs |
   | -1 | findbugs | 25 | hadoop-kms in the patch failed. |
   | -1 | findbugs | 38 | hadoop-hdfs-client in the patch failed. |
   | -1 | findbugs | 29 | hadoop-mapreduce-client-core in the patch failed. |
   | -1 | findbugs | 25 | hadoop-azure in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 18 | hadoop-project in the patch passed. |
   | -1 | unit | 521 | hadoop-common in the patch failed. |
   | -1 | unit | 23 | hadoop-kms in the patch failed. |
   | -1 | unit | 37 | hadoop-hdfs-client in the patch failed. |
   | -1 | unit | 28 | hadoop-mapreduce-client-core in the patch failed. |
   | +1 | unit | 285 | hadoop-aws in the patch passed. |
   | -1 | unit | 33 | hadoop-azure in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 8428 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.security.token.delegation.web.TestWebDelegationToken |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1222 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux a87432477dc3 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 
09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 00b5a27 |
   | Default Java | 1.8.0_212 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/3/artifact/out/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/3/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/3/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/3/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-azure.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/3/artifact/out/patch-findbugs-hadoop-common-project_hadoop-kms.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/3/artifact/out/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/3/artifact/out/patch-findbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/3/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt
 |
   | unit | 

[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1222: HADOOP-16488 Deprecated JsonSerialization and move it out of hadoop-c…

2019-08-26 Thread GitBox
hadoop-yetus removed a comment on issue #1222: HADOOP-16488 Deprecated 
JsonSerialization and move it out of hadoop-c…
URL: https://github.com/apache/hadoop/pull/1222#issuecomment-523063714
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1046 | trunk passed |
   | +1 | compile | 1085 | trunk passed |
   | +1 | checkstyle | 144 | trunk passed |
   | +1 | mvnsite | 314 | trunk passed |
   | +1 | shadedclient | 1124 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 243 | trunk passed |
   | 0 | spotbugs | 57 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | 0 | findbugs | 22 | branch/hadoop-project no findbugs output file 
(findbugsXml.xml) |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 225 | the patch passed |
   | +1 | compile | 1068 | the patch passed |
   | -1 | javac | 1068 | root generated 13 new + 1475 unchanged - 0 fixed = 
1488 total (was 1475) |
   | -0 | checkstyle | 144 | root: The patch generated 5 new + 358 unchanged - 
3 fixed = 363 total (was 361) |
   | +1 | mvnsite | 311 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | xml | 4 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 664 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 278 | the patch passed |
   | 0 | findbugs | 20 | hadoop-project has no data from findbugs |
   ||| _ Other Tests _ |
   | +1 | unit | 24 | hadoop-project in the patch passed. |
   | -1 | unit | 501 | hadoop-common in the patch failed. |
   | -1 | unit | 119 | hadoop-kms in the patch failed. |
   | +1 | unit | 59 | hadoop-registry in the patch passed. |
   | +1 | unit | 119 | hadoop-hdfs-client in the patch passed. |
   | +1 | unit | 326 | hadoop-mapreduce-client-core in the patch passed. |
   | +1 | unit | 76 | hadoop-aws in the patch passed. |
   | +1 | unit | 92 | hadoop-azure in the patch passed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 9055 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.security.token.delegation.web.TestWebDelegationToken |
   |   | hadoop.crypto.key.kms.server.TestKMS |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1222 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 4f17b1c58c12 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 6244502 |
   | Default Java | 1.8.0_222 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/6/artifact/out/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/6/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/6/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/6/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/6/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/6/testReport/ |
   | Max. process+thread count | 1542 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-kms hadoop-common-project/hadoop-registry 
hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/6/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1222: HADOOP-16488 Deprecated JsonSerialization and move it out of hadoop-c…

2019-08-26 Thread GitBox
hadoop-yetus removed a comment on issue #1222: HADOOP-16488 Deprecated 
JsonSerialization and move it out of hadoop-c…
URL: https://github.com/apache/hadoop/pull/1222#issuecomment-522253684
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 65 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1167 | trunk passed |
   | +1 | compile | 1155 | trunk passed |
   | +1 | checkstyle | 148 | trunk passed |
   | +1 | mvnsite | 340 | trunk passed |
   | +1 | shadedclient | 1199 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdfs-client in trunk failed. |
   | -1 | javadoc | 16 | hadoop-mapreduce-client-core in trunk failed. |
   | 0 | spotbugs | 62 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | 0 | findbugs | 23 | branch/hadoop-project no findbugs output file 
(findbugsXml.xml) |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 223 | the patch passed |
   | +1 | compile | 1028 | the patch passed |
   | -1 | javac | 1028 | root generated 13 new + 1475 unchanged - 0 fixed = 
1488 total (was 1475) |
   | -0 | checkstyle | 137 | root: The patch generated 5 new + 360 unchanged - 
3 fixed = 365 total (was 363) |
   | +1 | mvnsite | 326 | the patch passed |
   | -1 | whitespace | 0 | The patch has 48 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 0 | The patch has 600 line(s) with tabs. |
   | +1 | xml | 4 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 653 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 235 | the patch passed |
   | 0 | findbugs | 19 | hadoop-project has no data from findbugs |
   ||| _ Other Tests _ |
   | +1 | unit | 22 | hadoop-project in the patch passed. |
   | -1 | unit | 502 | hadoop-common in the patch failed. |
   | -1 | unit | 124 | hadoop-kms in the patch failed. |
   | +1 | unit | 58 | hadoop-registry in the patch passed. |
   | +1 | unit | 120 | hadoop-hdfs-client in the patch passed. |
   | +1 | unit | 331 | hadoop-mapreduce-client-core in the patch passed. |
   | +1 | unit | 74 | hadoop-aws in the patch passed. |
   | +1 | unit | 84 | hadoop-azure in the patch passed. |
   | -1 | asflicense | 43 | The patch generated 1 ASF License warnings. |
   | | | 9334 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.security.token.delegation.web.TestWebDelegationToken |
   |   | hadoop.crypto.key.kms.server.TestKMS |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1222 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 3f8d00ae4f76 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d873ddd |
   | Default Java | 1.8.0_212 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/5/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/5/artifact/out/branch-javadoc-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/5/artifact/out/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/5/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/5/artifact/out/whitespace-eol.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/5/artifact/out/whitespace-tabs.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/5/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/5/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/5/testReport/ |
   | asflicense | 

[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1222: HADOOP-16488 Deprecated JsonSerialization and move it out of hadoop-c…

2019-08-26 Thread GitBox
hadoop-yetus removed a comment on issue #1222: HADOOP-16488 Deprecated 
JsonSerialization and move it out of hadoop-c…
URL: https://github.com/apache/hadoop/pull/1222#issuecomment-519335098
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 42 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1074 | trunk passed |
   | +1 | compile | 1047 | trunk passed |
   | +1 | checkstyle | 145 | trunk passed |
   | +1 | mvnsite | 282 | trunk passed |
   | +1 | shadedclient | 1098 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 218 | trunk passed |
   | 0 | spotbugs | 52 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | 0 | findbugs | 19 | branch/hadoop-project no findbugs output file 
(findbugsXml.xml) |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 210 | the patch passed |
   | +1 | compile | 1068 | the patch passed |
   | -1 | javac | 1068 | root generated 13 new + 1482 unchanged - 0 fixed = 
1495 total (was 1482) |
   | -0 | checkstyle | 132 | root: The patch generated 5 new + 360 unchanged - 
3 fixed = 365 total (was 363) |
   | +1 | mvnsite | 281 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 626 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 217 | the patch passed |
   | 0 | findbugs | 22 | hadoop-project has no data from findbugs |
   ||| _ Other Tests _ |
   | +1 | unit | 26 | hadoop-project in the patch passed. |
   | -1 | unit | 538 | hadoop-common in the patch failed. |
   | -1 | unit | 21 | hadoop-kms in the patch failed. |
   | -1 | unit | 37 | hadoop-hdfs-client in the patch failed. |
   | -1 | unit | 31 | hadoop-mapreduce-client-core in the patch failed. |
   | +1 | unit | 288 | hadoop-aws in the patch passed. |
   | -1 | unit | 26 | hadoop-azure in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 8377 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.security.token.delegation.web.TestWebDelegationToken |
   |   | hadoop.http.TestAuthenticationSessionCookie |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1222 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 8e0e5fb92ce5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 3cc0ace |
   | Default Java | 1.8.0_212 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/2/artifact/out/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/2/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/2/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/2/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/2/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/2/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/2/testReport/ |
   | Max. process+thread count | 1361 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-kms hadoop-hdfs-project/hadoop-hdfs-client 

[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1222: HADOOP-16488 Deprecated JsonSerialization and move it out of hadoop-c…

2019-08-26 Thread GitBox
hadoop-yetus removed a comment on issue #1222: HADOOP-16488 Deprecated 
JsonSerialization and move it out of hadoop-c…
URL: https://github.com/apache/hadoop/pull/1222#issuecomment-517990909
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 83 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1068 | trunk passed |
   | +1 | compile | 1047 | trunk passed |
   | +1 | checkstyle | 148 | trunk passed |
   | +1 | mvnsite | 368 | trunk passed |
   | +1 | shadedclient | 1261 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 278 | trunk passed |
   | 0 | spotbugs | 55 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | 0 | findbugs | 24 | branch/hadoop-project no findbugs output file 
(findbugsXml.xml) |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 206 | the patch passed |
   | +1 | compile | 1015 | the patch passed |
   | -1 | javac | 1015 | root generated 13 new + 1482 unchanged - 0 fixed = 
1495 total (was 1482) |
   | -0 | checkstyle | 133 | root: The patch generated 5 new + 358 unchanged - 
3 fixed = 363 total (was 361) |
   | +1 | mvnsite | 287 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | xml | 3 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 614 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 236 | the patch passed |
   | 0 | findbugs | 23 | hadoop-project has no data from findbugs |
   ||| _ Other Tests _ |
   | +1 | unit | 24 | hadoop-project in the patch passed. |
   | -1 | unit | 478 | hadoop-common in the patch failed. |
   | -1 | unit | 122 | hadoop-kms in the patch failed. |
   | +1 | unit | 122 | hadoop-hdfs-client in the patch passed. |
   | +1 | unit | 331 | hadoop-mapreduce-client-core in the patch passed. |
   | +1 | unit | 301 | hadoop-aws in the patch passed. |
   | +1 | unit | 94 | hadoop-azure in the patch passed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 9178 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.security.token.delegation.web.TestWebDelegationToken |
   |   | hadoop.crypto.key.kms.server.TestKMS |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1222 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux d84e585ad60a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 065cbc6 |
   | Default Java | 1.8.0_212 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/1/artifact/out/diff-compile-javac-root.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/1/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/1/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/1/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/1/testReport/ |
   | Max. process+thread count | 1640 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-common-project/hadoop-common 
hadoop-common-project/hadoop-kms hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1222/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git 

[jira] [Commented] (HADOOP-16083) DistCp shouldn't always overwrite the target file when checksums match

2019-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915907#comment-16915907
 ] 

Hadoop QA commented on HADOOP-16083:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 45s{color} 
| {color:red} hadoop-distcp in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 77m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.tools.mapred.TestCopyMapper |
|   | hadoop.tools.mapred.TestCopyMapperCompositeCrc |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HADOOP-16083 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12956683/HADOOP-16083.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 216e7d011666 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 23e532d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16504/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16504/testReport/ |
| Max. process+thread count | 423 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-distcp U: 

[jira] [Commented] (HADOOP-15361) RawLocalFileSystem should use Java nio framework for rename

2019-08-26 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915908#comment-16915908
 ] 

Steve Loughran commented on HADOOP-15361:
-

[~boky01] - stick this up as a PR to see what Yetus says, then we can review and 
see how the object stores handle the new test. Thanks.

> RawLocalFileSystem should use Java nio framework for rename
> ---
>
> Key: HADOOP-15361
> URL: https://issues.apache.org/jira/browse/HADOOP-15361
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
>  Labels: incompatibleChange
> Attachments: HADOOP-15361.01.patch, HADOOP-15361.02.patch, 
> HADOOP-15361.03.patch, HADOOP-15361.04.patch
>
>
> Currently RawLocalFileSystem uses fallback logic for cross-volume renames. 
> The fallback is copy-on-fail: when rename fails it copies the source and 
> then deletes it.
>  An additional fallback was needed on Windows to provide POSIX rename 
> behavior.
> Due to this fallback logic RawLocalFileSystem does not pass the contract 
> tests (HADOOP-13082).
> By using the Java nio framework both fallbacks could be eliminated, since 
> nio is not platform dependent and supports cross-volume renames.
> In addition, the Windows fallback is not correct: Java io overrides the 
> destination only if the source is also a directory, but the 
> handleEmptyDstDirectoryOnWindows method checks only the destination. That 
> means rename allows a directory to be overridden by a file on Windows but 
> not on Unix.
> File#renameTo and Files#move are not 100% compatible:
>  If the source is a directory and the destination is an empty directory, 
> File#renameTo overrides the destination but Files#move does not. We have to 
> use {{StandardCopyOption.REPLACE_EXISTING}}, but that overrides the 
> destination even if the source or the destination is a file. So to make 
> them compatible we have to check whether the source or the destination is a 
> directory before adding the copy option.
> I think the correct strategy is
>  * Where the contract test passed so far, it should still pass after this.
>  * Where the contract test failed because of Java-specific behavior and not 
> because of the fallback logic, we should keep the original behavior.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16532) TestViewFsTrash uses home directory of real fs; brittle

2019-08-26 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915904#comment-16915904
 ] 

Steve Loughran commented on HADOOP-16532:
-

My local run:
{code}

java.lang.AssertionError: Ensure trash folder is empty 
Expected :0
Actual   :6



at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.apache.hadoop.fs.TestTrash.trashShell(TestTrash.java:459)
at 
org.apache.hadoop.fs.viewfs.TestViewFsTrash.testTrash(TestViewFsTrash.java:74)

{code}

> TestViewFsTrash uses home directory of real fs; brittle
> ---
>
> Key: HADOOP-16532
> URL: https://issues.apache.org/jira/browse/HADOOP-16532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> The test {{TestViewFsTrash}} uses the .Trash directory under the current 
> user's home dir, so it
> * fails in test setups which block writing to it (jenkins)
> * fails when users have real trash in there
> * may fail if there are parallel test runs.
> The home dir should be under some test path of the build.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16532) TestViewFsTrash uses home directory of real fs; brittle

2019-08-26 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915903#comment-16915903
 ] 

Steve Loughran commented on HADOOP-16532:
-

Yetus run:
{code}
Running org.apache.hadoop.fs.viewfs.TestViewfsFileStatus
[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.197 s 
<<< FAILURE! - in org.apache.hadoop.fs.viewfs.TestViewFsTrash
[ERROR] testTrash(org.apache.hadoop.fs.viewfs.TestViewFsTrash)  Time elapsed: 
1.095 s  <<< FAILURE!
java.lang.AssertionError: Expected TrashRoot 
(file:/home/jenkins/.Trash/Current) to exist in file system:file:///
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertFalse(Assert.java:64)
at org.apache.hadoop.fs.TestTrash.trashShell(TestTrash.java:313)
at 
org.apache.hadoop.fs.viewfs.TestViewFsTrash.testTrash(TestViewFsTrash.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418
{code}

> TestViewFsTrash uses home directory of real fs; brittle
> ---
>
> Key: HADOOP-16532
> URL: https://issues.apache.org/jira/browse/HADOOP-16532
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> The test {{TestViewFsTrash}} uses the .Trash directory under the current 
> user's home dir, so it
> * fails in test setups which block writing to it (jenkins)
> * fails when users have real trash in there
> * may fail if there are parallel test runs.
> The home dir should be under some test path of the build.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16532) TestViewFsTrash uses home directory of real fs; brittle

2019-08-26 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16532:
---

 Summary: TestViewFsTrash uses home directory of real fs; brittle
 Key: HADOOP-16532
 URL: https://issues.apache.org/jira/browse/HADOOP-16532
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, test
Affects Versions: 3.3.0
Reporter: Steve Loughran


The test {{TestViewFsTrash}} uses the .Trash directory under the current user's 
home dir, so it

* fails in test setups which block writing to it (jenkins)
* fails when users have real trash in there
* may fail if there are parallel test runs.

The home dir should be under some test path of the build.
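
A hedged sketch of one way to follow that suggestion (the wrapper class, the 
test.build.data property fallback and the "user-home" subdirectory are 
illustrative assumptions, not the actual fix):
{code}
import java.io.File;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FilterFileSystem;
import org.apache.hadoop.fs.Path;

/**
 * Wraps the file system used by the test so that its home directory -- and
 * therefore the .Trash location the trash code derives from it -- lives under
 * the build's test data directory instead of the real user's home.
 */
public class TestHomeDirFileSystem extends FilterFileSystem {

  private final Path testHome;

  public TestHomeDirFileSystem(FileSystem fs) {
    super(fs);
    File base = new File(
        System.getProperty("test.build.data", "target/test/data"), "user-home");
    this.testHome = new Path(base.toURI());
  }

  @Override
  public Path getHomeDirectory() {
    return makeQualified(testHome);
  }
}
{code}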



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 opened a new pull request #1352: HDDS-2037. Fix hadoop version in pom.ozone.xml.

2019-08-26 Thread GitBox
nandakumar131 opened a new pull request #1352: HDDS-2037. Fix hadoop version in 
pom.ozone.xml.
URL: https://github.com/apache/hadoop/pull/1352
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] nandakumar131 opened a new pull request #1351: HDDS-2037. Fix hadoop version in pom.ozone.xml.

2019-08-26 Thread GitBox
nandakumar131 opened a new pull request #1351: HDDS-2037. Fix hadoop version in 
pom.ozone.xml.
URL: https://github.com/apache/hadoop/pull/1351
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on issue #1349: HDDS-2026. Overlapping chunk region cannot be read concurrently

2019-08-26 Thread GitBox
adoroszlai commented on issue #1349: HDDS-2026. Overlapping chunk region cannot 
be read concurrently
URL: https://github.com/apache/hadoop/pull/1349#issuecomment-524892662
 
 
   /retest


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15602) Support SASL Rpc request handling in separate Handlers

2019-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915861#comment-16915861
 ] 

Hadoop QA commented on HADOOP-15602:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-15602 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15602 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949271/HADOOP-15602.04.patch 
|
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16505/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Support SASL Rpc request handling in separate Handlers 
> ---
>
> Key: HADOOP-15602
> URL: https://issues.apache.org/jira/browse/HADOOP-15602
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Attachments: HADOOP-15602.01.patch, HADOOP-15602.02.patch, 
> HADOOP-15602.04.patch
>
>
> Right now, during RPC connection establishment, all SASL requests are 
> considered out-of-band requests and handled within the same Reader thread.
> SASL handling involves authentication with Kerberos and SecretManagers (for 
> token validation). During this time the Reader thread is blocked, which 
> blocks all incoming RPC requests on other established connections. Some 
> SecretManager implementations need to communicate with external systems 
> (e.g. ZK) for verification.
> Handling SASL RPCs in separate dedicated handlers would enable Reader 
> threads to keep reading RPC requests from established connections without 
> blocking.
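
A hedged sketch of the idea only (the SaslConnection interface, method names 
and pool size below are illustrative assumptions, not the attached patch):
{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SaslHandlerSketch {

  /** Stand-in for the per-connection SASL state held by the server. */
  interface SaslConnection {
    void processSaslMessage(byte[] request) throws Exception;
    void close();
  }

  // Dedicated pool for SASL work; a real server would make the size
  // configurable.
  private final ExecutorService saslHandlers = Executors.newFixedThreadPool(4);

  /** Called by a Reader thread when it sees a SASL (out-of-band) request. */
  public void handleSaslRequest(SaslConnection conn, byte[] request) {
    saslHandlers.execute(() -> {
      try {
        // Kerberos / SecretManager validation (possibly talking to ZK)
        // happens here, off the Reader thread, so other connections keep
        // being read while this negotiation is in flight.
        conn.processSaslMessage(request);
      } catch (Exception e) {
        conn.close();
      }
    });
  }
}
{code}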



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1349: HDDS-2026. Overlapping chunk region cannot be read concurrently

2019-08-26 Thread GitBox
hadoop-yetus commented on issue #1349: HDDS-2026. Overlapping chunk region 
cannot be read concurrently
URL: https://github.com/apache/hadoop/pull/1349#issuecomment-524890575
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 48 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1160 | trunk passed |
   | +1 | compile | 441 | trunk passed |
   | +1 | checkstyle | 88 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1183 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 205 | trunk passed |
   | 0 | spotbugs | 528 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 774 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 673 | the patch passed |
   | +1 | compile | 454 | the patch passed |
   | +1 | javac | 454 | the patch passed |
   | +1 | checkstyle | 86 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 785 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 185 | the patch passed |
   | +1 | findbugs | 738 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 327 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1785 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
   | | | 9172 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1349/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1349 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux cbe1e1f743ac 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 23e532d |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1349/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1349/1/testReport/ |
   | Max. process+thread count | 5398 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1349/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bshashikant commented on issue #1319: HDDS-1981: Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED state

2019-08-26 Thread GitBox
bshashikant commented on issue #1319: HDDS-1981: Datanode should sync db when 
container is moved to CLOSED or QUASI_CLOSED state
URL: https://github.com/apache/hadoop/pull/1319#issuecomment-524886343
 
 
   Thanks @lokeshj1703 for updating the patch. The patch looks good to me. I am 
+1 on this change.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] symat opened a new pull request #1350: YARN-9783. Remove low-level zookeeper test to be able to build Hadoop against zookeeper 3.5.5

2019-08-26 Thread GitBox
symat opened a new pull request #1350: YARN-9783. Remove low-level zookeeper 
test to be able to build Hadoop against zookeeper 3.5.5
URL: https://github.com/apache/hadoop/pull/1350
 
 
   Currently the Hadoop build (with ZooKeeper 3.5.5) fails because of a YARN 
test case: TestSecureRegistry.testLowlevelZKSaslLogin(). This test case seems 
to use low-level ZooKeeper internal code, which changed in the new ZooKeeper 
version.
   
   Removing the test case will enable us to build and test Hadoop with the 
latest ZooKeeper version.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15747) warning about user:pass in URI to explicitly call out Hadoop 3.2 as removal

2019-08-26 Thread Zhankun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated HADOOP-15747:
--
Target Version/s: 2.10.0, 3.0.4, 3.1.4  (was: 2.10.0, 3.0.4, 3.1.3)

> warning about user:pass in URI to explicitly call out Hadoop 3.2 as removal
> ---
>
> Key: HADOOP-15747
> URL: https://issues.apache.org/jira/browse/HADOOP-15747
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.1, 3.1.1, 3.0.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Now that HADOOP-14833 has removed user:secret from URIs, change the warning 
> printed when people do that to explicitly declare Hadoop 3.3 as the release 
> when the removal will happen. Do the same in the docs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15747) warning about user:pass in URI to explicitly call out Hadoop 3.2 as removal

2019-08-26 Thread Zhankun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915822#comment-16915822
 ] 

Zhankun Tang commented on HADOOP-15747:
---

Bulk update: Preparing for the 3.1.3 release. Moved all 3.1.3 non-blocker issues to 
3.1.4; please move back if it is a blocker for you.

> warning about user:pass in URI to explicitly call out Hadoop 3.2 as removal
> ---
>
> Key: HADOOP-15747
> URL: https://issues.apache.org/jira/browse/HADOOP-15747
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.1, 3.1.1, 3.0.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Now that HADOOP-14833 has removed user:secret from URIs, change the warning 
> printed when people do that to explicitly declare Hadoop 3.3 as the release 
> when the removal will happen. Do the same in the docs.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16083) DistCp shouldn't always overwrite the target file when checksums match

2019-08-26 Thread Zhankun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915817#comment-16915817
 ] 

Zhankun Tang commented on HADOOP-16083:
---

Bulk update: Preparing for the 3.1.3 release. Moved all 3.1.3 non-blocker issues to 
3.1.4; please move back if it is a blocker for you.

> DistCp shouldn't always overwrite the target file when checksums match
> --
>
> Key: HADOOP-16083
> URL: https://issues.apache.org/jira/browse/HADOOP-16083
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16083.001.patch
>
>
> {code:java|title=CopyMapper#setup}
> ...
> try {
>   overWrite = overWrite || 
> targetFS.getFileStatus(targetFinalPath).isFile();
> } catch (FileNotFoundException ignored) {
> }
> ...
> {code}
> The above code forces the config key "overWrite" to "true" when the target 
> path is a file. Therefore, an unnecessary transfer happens even when the 
> source and target files have the same checksum.
> My suggestion is: remove the code above. If the user insists on overwriting, 
> just add -overwrite to the options:
> {code:bash|title=DistCp command with -overwrite option}
> hadoop distcp -overwrite hdfs://localhost:64464/source/5/6.txt 
> hdfs://localhost:64464/target/5/6.txt
> {code}
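
As a hedged illustration of the behaviour being proposed (a sketch, not the attached patch): once the forced overwrite is removed, the copy decision reduces to a checksum comparison along the following lines. The class name, the null-checksum fallback and the FileNotFoundException handling are assumptions for the example only.

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Sketch only: copy a file only when its checksum differs from the target's. */
public class ChecksumSkipSketch {
  // Returns true when the file actually needs to be transferred.
  static boolean needsCopy(FileSystem sourceFS, Path source,
                           FileSystem targetFS, Path target) throws IOException {
    try {
      FileChecksum src = sourceFS.getFileChecksum(source);
      FileChecksum dst = targetFS.getFileChecksum(target);
      // If either filesystem cannot produce a checksum, be conservative and copy.
      if (src == null || dst == null) {
        return true;
      }
      return !src.equals(dst);
    } catch (FileNotFoundException missingTarget) {
      return true; // target does not exist yet, so the copy is needed
    }
  }
}
{code}

With a helper like this, -overwrite stays an explicit user choice instead of being implied whenever the target path happens to be a file.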



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16082) FsShell ls: Add option -i to print inode id

2019-08-26 Thread Zhankun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915816#comment-16915816
 ] 

Zhankun Tang commented on HADOOP-16082:
---

Bulk update: Preparing for the 3.1.3 release. Moved all 3.1.3 non-blocker issues to 
3.1.4; please move back if it is a blocker for you.

> FsShell ls: Add option -i to print inode id
> ---
>
> Key: HADOOP-16082
> URL: https://issues.apache.org/jira/browse/HADOOP-16082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16082.001.patch
>
>
> When debugging FSImage corruption issues, I often need to know a file's or 
> directory's inode id. At the moment, the only way to do that is to use the OIV 
> tool to dump the FSImage and look up the filename, which is very inefficient.
> Here I propose adding an option "-i" to FsShell that prints files' or 
> directories' inode ids.
> h2. Implementation
> h3. For hdfs:// (HDFS)
> fileId exists in HdfsLocatedFileStatus, which is already returned to 
> hdfs-client. We just need to print it in Ls#processPath().
> h3. For file:// (Local FS)
> h4. Linux
> Use java.nio.
> h4. Windows
> Windows has the concept of "File ID" which is similar to inode id. It is 
> unique in NTFS and ReFS.
> h3. For other FS
> The fileId entry will be "0" in FileStatus if it is not set. We could either 
> ignore or throw an exception.
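
As a hedged sketch of the "Use java.nio" note for the local filesystem case (not the attached patch): on Linux the inode number is exposed through the "unix:ino" file attribute. The class name and the "return 0 when unavailable" fallback are assumptions chosen to match the FileStatus convention described above.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/** Sketch: read a local file's inode id via java.nio (Unix-like platforms only). */
public class LocalInodeSketch {
  // Returns the inode number, or 0 when the platform does not expose "unix:ino".
  static long inodeOf(Path path) {
    try {
      Object ino = Files.getAttribute(path, "unix:ino");
      return (ino instanceof Number) ? ((Number) ino).longValue() : 0L;
    } catch (UnsupportedOperationException | IOException e) {
      return 0L; // e.g. on Windows, where a different "file ID" mechanism applies
    }
  }

  public static void main(String[] args) {
    System.out.println(inodeOf(Paths.get(args.length > 0 ? args[0] : ".")));
  }
}
{code}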



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16083) DistCp shouldn't always overwrite the target file when checksums match

2019-08-26 Thread Zhankun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated HADOOP-16083:
--
Target Version/s: 3.3.0, 3.2.1, 3.1.4  (was: 3.3.0, 3.2.1, 3.1.3)

> DistCp shouldn't always overwrite the target file when checksums match
> --
>
> Key: HADOOP-16083
> URL: https://issues.apache.org/jira/browse/HADOOP-16083
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.2.0, 3.1.1, 3.3.0
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16083.001.patch
>
>
> {code:java|title=CopyMapper#setup}
> ...
> try {
>   overWrite = overWrite || 
> targetFS.getFileStatus(targetFinalPath).isFile();
> } catch (FileNotFoundException ignored) {
> }
> ...
> {code}
> The above code forces the config key "overWrite" to "true" when the target 
> path is a file. Therefore, an unnecessary transfer happens even when the 
> source and target files have the same checksum.
> My suggestion is: remove the code above. If the user insists on overwriting, 
> just add -overwrite to the options:
> {code:bash|title=DistCp command with -overwrite option}
> hadoop distcp -overwrite hdfs://localhost:64464/source/5/6.txt 
> hdfs://localhost:64464/target/5/6.txt
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16082) FsShell ls: Add option -i to print inode id

2019-08-26 Thread Zhankun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated HADOOP-16082:
--
Target Version/s: 3.3.0, 3.2.1, 3.1.4  (was: 3.3.0, 3.2.1, 3.1.3)

> FsShell ls: Add option -i to print inode id
> ---
>
> Key: HADOOP-16082
> URL: https://issues.apache.org/jira/browse/HADOOP-16082
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HADOOP-16082.001.patch
>
>
> When debugging FSImage corruption issues, I often need to know a file's or 
> directory's inode id. At the moment, the only way to do that is to use the OIV 
> tool to dump the FSImage and look up the filename, which is very inefficient.
> Here I propose adding an option "-i" to FsShell that prints files' or 
> directories' inode ids.
> h2. Implementation
> h3. For hdfs:// (HDFS)
> fileId exists in HdfsLocatedFileStatus, which is already returned to 
> hdfs-client. We just need to print it in Ls#processPath().
> h3. For file:// (Local FS)
> h4. Linux
> Use java.nio.
> h4. Windows
> Windows has the concept of "File ID" which is similar to inode id. It is 
> unique in NTFS and ReFS.
> h3. For other FS
> The fileId entry will be "0" in FileStatus if it is not set. We could either 
> ignore or throw an exception.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1229: HADOOP-16490. Improve S3Guard handling of FNFEs in copy

2019-08-26 Thread GitBox
steveloughran commented on a change in pull request #1229: HADOOP-16490. 
Improve S3Guard handling of FNFEs in copy
URL: https://github.com/apache/hadoop/pull/1229#discussion_r317621630
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java
 ##
 @@ -80,13 +82,18 @@
 @SuppressWarnings("visibilitymodifier")  // I want a struct of finals, for 
real.
 public class S3ARetryPolicy implements RetryPolicy {
 
+  private static final Logger LOG = LoggerFactory.getLogger(
+  S3ARetryPolicy.class);
+
+  private final Configuration configuration;
+
   /** Final retry policy we end up with. */
   private final RetryPolicy retryPolicy;
 
   // Retry policies for mapping exceptions to
 
   /** Base policy from configuration. */
-  protected final RetryPolicy fixedRetries;
+  protected final RetryPolicy defaultRetries;
 
 Review comment:
   now `baseExponentialRetry` 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16310) Log of a slow RPC request should contain the parameter of the request

2019-08-26 Thread Zhankun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated HADOOP-16310:
--
Target Version/s: 3.1.4  (was: 3.1.2)

> Log of a slow RPC request should contain the parameter of the request
> -
>
> Key: HADOOP-16310
> URL: https://issues.apache.org/jira/browse/HADOOP-16310
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Affects Versions: 3.1.1, 2.7.7, 3.1.2
>Reporter: lindongdong
>Priority: Minor
>
>  Now, the log of a slow RPC request just contains the 
> *methodName*, *processingTime* and *client*. Code is here:
> {code:java}
> if ((rpcMetrics.getProcessingSampleCount() > minSampleSize) &&
> (processingTime > threeSigma)) {
>   if(LOG.isWarnEnabled()) {
> String client = CurCall.get().toString();
> LOG.warn(
> "Slow RPC : " + methodName + " took " + processingTime +
> " milliseconds to process from client " + client);
>   }
>   rpcMetrics.incrSlowRpc();
> }{code}
>  
> This is not enough to analyze why the RPC request is slow. 
> The parameter of the request is very important and needs to be logged.
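
As a purely illustrative sketch of what the requested change could look like (not an actual patch on this issue): the warning can carry a bounded rendering of the call parameter so log lines stay a reasonable size. The method name, the 256-character cap and the use of SLF4J parameterized logging are assumptions for the example.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Sketch: include a truncated view of the request parameter in the slow-RPC warning. */
public class SlowRpcLogSketch {
  private static final Logger LOG = LoggerFactory.getLogger(SlowRpcLogSketch.class);
  private static final int MAX_PARAM_LENGTH = 256; // assumed cap on the logged parameter

  static void logSlowRpc(String methodName, long processingTime,
                         String client, Object param) {
    if (LOG.isWarnEnabled()) {
      String rendered = String.valueOf(param);
      if (rendered.length() > MAX_PARAM_LENGTH) {
        rendered = rendered.substring(0, MAX_PARAM_LENGTH) + "...";
      }
      LOG.warn("Slow RPC : {} took {} milliseconds to process from client {}, parameter: {}",
          methodName, processingTime, client, rendered);
    }
  }
}
{code}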



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16310) Log of a slow RPC request should contain the parameter of the request

2019-08-26 Thread Zhankun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915805#comment-16915805
 ] 

Zhankun Tang commented on HADOOP-16310:
---

Bulk update: Preparing for the 3.1.3 release. Moved all 3.1.2 non-blocker issues to 
3.1.4; please move back if it is a blocker for you.

> Log of a slow RPC request should contain the parameter of the request
> -
>
> Key: HADOOP-16310
> URL: https://issues.apache.org/jira/browse/HADOOP-16310
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Affects Versions: 3.1.1, 2.7.7, 3.1.2
>Reporter: lindongdong
>Priority: Minor
>
>  Now, the log of a slow RPC request just contains the 
> *methodName*, *processingTime* and *client*. Code is here:
> {code:java}
> if ((rpcMetrics.getProcessingSampleCount() > minSampleSize) &&
> (processingTime > threeSigma)) {
>   if(LOG.isWarnEnabled()) {
> String client = CurCall.get().toString();
> LOG.warn(
> "Slow RPC : " + methodName + " took " + processingTime +
> " milliseconds to process from client " + client);
>   }
>   rpcMetrics.incrSlowRpc();
> }{code}
>  
> This is not enough to analyze why the RPC request is slow. 
> The parameter of the request is very important and needs to be logged.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16310) Log of a slow RPC request should contain the parameter of the request

2019-08-26 Thread Zhankun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915805#comment-16915805
 ] 

Zhankun Tang edited comment on HADOOP-16310 at 8/26/19 2:03 PM:


Bulk update: Preparing for 3.1.3 release. moved all legacy 3.1.2 non-blocker 
issues to 3.1.4, please move back if it is a blocker for you.


was (Author: tangzhankun):
Bulk update: Preparing for 3.1.3 release. moved all 3.1.2 non-blocker issues to 
3.1.4, please move back if it is a blocker for you.

> Log of a slow RPC request should contain the parameter of the request
> -
>
> Key: HADOOP-16310
> URL: https://issues.apache.org/jira/browse/HADOOP-16310
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Affects Versions: 3.1.1, 2.7.7, 3.1.2
>Reporter: lindongdong
>Priority: Minor
>
>  Now, the log of a slow RPC request just contains the 
> *methodName*, *processingTime* and *client*. Code is here:
> {code:java}
> if ((rpcMetrics.getProcessingSampleCount() > minSampleSize) &&
> (processingTime > threeSigma)) {
>   if(LOG.isWarnEnabled()) {
> String client = CurCall.get().toString();
> LOG.warn(
> "Slow RPC : " + methodName + " took " + processingTime +
> " milliseconds to process from client " + client);
>   }
>   rpcMetrics.incrSlowRpc();
> }{code}
>  
> This is not enough to analyze why the RPC request is slow. 
> The parameter of the request is very important and needs to be logged.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15602) Support SASL Rpc request handling in separate Handlers

2019-08-26 Thread Zhankun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915802#comment-16915802
 ] 

Zhankun Tang commented on HADOOP-15602:
---

Bulk update: Preparing for the 3.1.3 release. Moved all 3.1.3 non-blocker issues to 
3.1.4; please move back if it is a blocker for you.

> Support SASL Rpc request handling in separate Handlers 
> ---
>
> Key: HADOOP-15602
> URL: https://issues.apache.org/jira/browse/HADOOP-15602
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Attachments: HADOOP-15602.01.patch, HADOOP-15602.02.patch, 
> HADOOP-15602.04.patch
>
>
> Right now, during RPC connection establishment, all SASL requests are 
> considered out-of-band requests and handled within the same Reader thread.
> SASL handling involves authentication with Kerberos and SecretManagers (for 
> token validation). During this time, the Reader thread is blocked, which 
> blocks all incoming RPC requests on other established connections. Some 
> SecretManager implementations need to communicate with external systems 
> (e.g. ZooKeeper) for verification.
> Handling SASL RPCs in separate dedicated handlers would enable Reader threads 
> to read RPC requests from established connections without blocking.
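
A minimal sketch of the idea being described, with all names hypothetical (this is not the attached patch): the Reader thread enqueues SASL negotiation work onto a dedicated pool instead of running it inline, so Kerberos and SecretManager round trips never stall the Reader loop.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Sketch: offload SASL negotiation to dedicated handler threads (all names hypothetical). */
public class SaslHandlerPoolSketch {
  // Assumed fixed pool size; a real implementation would make this configurable.
  private final ExecutorService saslHandlers = Executors.newFixedThreadPool(4);

  interface SaslWork {
    void negotiate() throws Exception;
  }

  // Called from the Reader thread: submit the SASL step instead of executing it inline.
  void submitSaslRequest(SaslWork work) {
    saslHandlers.execute(() -> {
      try {
        work.negotiate();
      } catch (Exception e) {
        // A real server would tear down the connection here; the sketch just reports it.
        System.err.println("SASL negotiation failed: " + e);
      }
    });
  }
}
{code}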



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15602) Support SASL Rpc request handling in separate Handlers

2019-08-26 Thread Zhankun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhankun Tang updated HADOOP-15602:
--
Target Version/s: 3.1.4  (was: 3.1.3)

> Support SASL Rpc request handling in separate Handlers 
> ---
>
> Key: HADOOP-15602
> URL: https://issues.apache.org/jira/browse/HADOOP-15602
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
> Attachments: HADOOP-15602.01.patch, HADOOP-15602.02.patch, 
> HADOOP-15602.04.patch
>
>
> Right now, during RPC connection establishment, all SASL requests are 
> considered out-of-band requests and handled within the same Reader thread.
> SASL handling involves authentication with Kerberos and SecretManagers (for 
> token validation). During this time, the Reader thread is blocked, which 
> blocks all incoming RPC requests on other established connections. Some 
> SecretManager implementations need to communicate with external systems 
> (e.g. ZooKeeper) for verification.
> Handling SASL RPCs in separate dedicated handlers would enable Reader threads 
> to read RPC requests from established connections without blocking.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1319: HDDS-1981: Datanode should sync db when container is moved to CLOSED or QUASI_CLOSED state

2019-08-26 Thread GitBox
hadoop-yetus commented on issue #1319: HDDS-1981: Datanode should sync db when 
container is moved to CLOSED or QUASI_CLOSED state
URL: https://github.com/apache/hadoop/pull/1319#issuecomment-524871271
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 44 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 131 | Maven dependency ordering for branch |
   | +1 | mvninstall | 710 | trunk passed |
   | +1 | compile | 404 | trunk passed |
   | +1 | checkstyle | 82 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 977 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 173 | trunk passed |
   | 0 | spotbugs | 486 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 697 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 609 | the patch passed |
   | +1 | compile | 386 | the patch passed |
   | +1 | cc | 386 | the patch passed |
   | +1 | javac | 386 | the patch passed |
   | +1 | checkstyle | 84 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 677 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 171 | the patch passed |
   | +1 | findbugs | 640 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 310 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1970 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 55 | The patch does not generate ASF License warnings. |
   | | | 8316 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestBlockDeletion |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.container.server.TestSecureContainerServer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1319/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1319 |
   | JIRA Issue | HDDS-1981 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux a9241bd1bc80 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 23e532d |
   | Default Java | 1.8.0_222 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1319/5/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1319/5/testReport/ |
   | Max. process+thread count | 4947 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/container-service U: 
hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1319/5/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2019-08-26 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915797#comment-16915797
 ] 

Sean Busbey commented on HADOOP-15998:
--

Thanks Rohith.

[~briangru] or [~giovanni.fumarola] either of y'all up for trying out the 
current patch?

> Jar validation bash scripts don't work on Windows due to platform differences 
> (colons in paths, \r\n)
> -
>
> Key: HADOOP-15998
> URL: https://issues.apache.org/jira/browse/HADOOP-15998
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.2.0, 3.3.0
> Environment: Windows 10
> Visual Studio 2017
>Reporter: Brian Grunkemeyer
>Assignee: Brian Grunkemeyer
>Priority: Blocker
>  Labels: build, windows
> Attachments: HADOOP-15998.5.patch, HADOOP-15998.v4.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid 
> assumptions:
> 1) Colon shouldn't be used to separate multiple paths in command line 
> parameters. Colons occur in Windows paths.
> 2) Shell scripts that rely on running external tools need to deal with 
> carriage return - line feed differences (lines ending in \r\n, not just \n)



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16531) Log more detail for slow RPC

2019-08-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915775#comment-16915775
 ] 

Hadoop QA commented on HADOOP-16531:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m 48s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 5s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestFixKerberosTicketOrder |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HADOOP-16531 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978586/HADOOP-16531.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5620fb355fa7 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 23e532d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16503/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16503/testReport/ |
| Max. process+thread count | 1360 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| 

[GitHub] [hadoop] adoroszlai opened a new pull request #1349: HDDS-2026. Overlapping chunk region cannot be read concurrently

2019-08-26 Thread GitBox
adoroszlai opened a new pull request #1349: HDDS-2026. Overlapping chunk region 
cannot be read concurrently
URL: https://github.com/apache/hadoop/pull/1349
 
 
   ## What changes were proposed in this pull request?
   
   Only allow a single read/write operation for the same path in `ChunkUtils`, 
to avoid `OverlappingFileLockException` due to concurrent reads.  This allows 
concurrent reads/writes of separate files (as opposed to simply synchronizing 
the methods).  It might be improved later by storing and reusing the file lock.
   
   Use plain `FileChannel` instead of `AsynchronousFileChannel` for reading, 
too, since it was used in synchronous fashion (by calling `.get()`) anyway.
   
   https://issues.apache.org/jira/browse/HDDS-2026
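   
   As a rough sketch of the per-path serialization described above (not the PR's actual `ChunkUtils` code; the lock map and method names are assumptions): operations on the same file path are serialized, while different files proceed in parallel.
   
   ```java
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.ConcurrentMap;
   import java.util.concurrent.locks.ReentrantLock;
   import java.util.function.Supplier;
   
   /** Sketch: one lock per file path, so only same-path reads/writes are serialized. */
   public class PerPathLockSketch {
     private final ConcurrentMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();
   
     // Runs the file operation while holding the lock for that path only, avoiding
     // OverlappingFileLockException between concurrent readers of the same chunk file.
     <T> T withLock(String path, Supplier<T> fileOperation) {
       ReentrantLock lock = locks.computeIfAbsent(path, p -> new ReentrantLock());
       lock.lock();
       try {
         return fileOperation.get();
       } finally {
         lock.unlock();
       }
     }
   }
   ```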
   
   ## How was this patch tested?
   
   Added unit test.
   
   Used improved Freon tool from #1341 to perform read of same key from 
multiple threads (which revealed the bug in the first place).
   
   ```
   $ ozone freon ockg -n 1 -p asdf
   $ ozone sh key list vol1/bucket1
   [ {
 "version" : 0,
 "size" : 10240,
 "keyName" : "asdf/0"
 ...
   } ]
   
   $ ozone freon ocokr -k 'asdf/0'
   ...
mean rate = 164.39 calls/second
 ...
 mean = 53.75 milliseconds
   stddev = 42.11 milliseconds
   median = 44.05 milliseconds
   ...
   Total execution time (sec): 6
   Failures: 0
   Successful executions: 1000
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on issue #1349: HDDS-2026. Overlapping chunk region cannot be read concurrently

2019-08-26 Thread GitBox
adoroszlai commented on issue #1349: HDDS-2026. Overlapping chunk region cannot 
be read concurrently
URL: https://github.com/apache/hadoop/pull/1349#issuecomment-524833276
 
 
   /label ozone


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16152) Upgrade Eclipse Jetty version to 9.4.x

2019-08-26 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-16152:
-
Description: 
Some big data projects have been upgraded Jetty to 9.4.x, which causes some 
compatibility issues.

Spark: 
[https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146]
Calcite: [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
Hive: https://issues.apache.org/jira/browse/HIVE-21211

  was:
Some big data projects have been upgraded Jetty to 9.4.x, which causes some 
compatibility issues.

Spark: 
[https://github.com/apache/spark/blob/5a92b5a47cdfaea96a9aeedaf80969d825a382f2/pom.xml#L141]
Calcite: [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
Hive: https://issues.apache.org/jira/browse/HIVE-21211


> Upgrade Eclipse Jetty version to 9.4.x
> --
>
> Key: HADOOP-16152
> URL: https://issues.apache.org/jira/browse/HADOOP-16152
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0
>Reporter: Yuming Wang
>Assignee: Yuming Wang
>Priority: Major
> Attachments: HADOOP-16152.v1.patch
>
>
> Some big data projects have been upgraded Jetty to 9.4.x, which causes some 
> compatibility issues.
> Spark: 
> [https://github.com/apache/spark/blob/02a0cdea13a5eebd27649a60d981de35156ba52c/pom.xml#L146]
> Calcite: 
> [https://github.com/apache/calcite/blob/avatica-1.13.0-rc0/pom.xml#L87]
> Hive: https://issues.apache.org/jira/browse/HIVE-21211



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16193) add extra S3A MPU test to see what happens if a file is created during the MPU

2019-08-26 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915714#comment-16915714
 ] 

Hudson commented on HADOOP-16193:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17183 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17183/])
Revert "HADOOP-16193. Add extra S3A MPU test to see what happens if a 
(ewan.higgs: rev 23e532d73983a17eae4f3baec56d402ec471f0c3)
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractMultipartUploader.java


> add extra S3A MPU test to see what happens if a file is created during the MPU
> --
>
> Key: HADOOP-16193
> URL: https://issues.apache.org/jira/browse/HADOOP-16193
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.1.3
>
>
> Proposed extra test for the S3A MPU: if you create and then delete a file 
> while an MPU is in progress, then when you finally complete the MPU, the new 
> data is present.
> This verifies that the other FS operations don't somehow cancel the 
> in-progress upload, and that eventual consistency brings the latest value out.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #533: HADOOP-14630 Contract Tests to verify create, mkdirs and rename under a file is forbidden

2019-08-26 Thread GitBox
hadoop-yetus removed a comment on issue #533: HADOOP-14630  Contract Tests to 
verify create, mkdirs and rename under a file is forbidden
URL: https://github.com/apache/hadoop/pull/533#issuecomment-523936173
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 16 | https://github.com/apache/hadoop/pull/533 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/533 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-533/10/console |
   | versions | git=2.7.4 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #533: HADOOP-14630 Contract Tests to verify create, mkdirs and rename under a file is forbidden

2019-08-26 Thread GitBox
hadoop-yetus removed a comment on issue #533: HADOOP-14630  Contract Tests to 
verify create, mkdirs and rename under a file is forbidden
URL: https://github.com/apache/hadoop/pull/533#issuecomment-522098790
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 12 | https://github.com/apache/hadoop/pull/533 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/533 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-533/8/console |
   | versions | git=2.7.4 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #533: HADOOP-14630 Contract Tests to verify create, mkdirs and rename under a file is forbidden

2019-08-26 Thread GitBox
hadoop-yetus removed a comment on issue #533: HADOOP-14630  Contract Tests to 
verify create, mkdirs and rename under a file is forbidden
URL: https://github.com/apache/hadoop/pull/533#issuecomment-523064629
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 12 | https://github.com/apache/hadoop/pull/533 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/533 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-533/9/console |
   | versions | git=2.7.4 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


