Re: [VOTE] Release Apache Hadoop 2.9.2 (RC0)

2018-11-14 Thread Takanobu Asanuma
Thanks for driving the release, Akira!
 
+1 (non-binding)
   - verified checksums
   - succeeded in building the package
   - started hadoop cluster with 1 master and 5 slaves
   - ran TeraGen/TeraSort
   - verified Web UI (NN, RM, JobHistory, Timeline)
   - verified some operations of Router-based Federation
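
For the TeraGen/TeraSort step, a minimal smoke test runs along these lines (jar path, row counts, and HDFS paths are illustrative):

  hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar teragen 1000000 /tera/in
  hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar terasort /tera/in /tera/out
  hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar teravalidate /tera/out /tera/validate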

Thanks,
-Takanobu

On 2018/11/14 10:02, "Akira Ajisaka" wrote:

Hi folks,

I have put together a release candidate (RC0) for Hadoop 2.9.2. It
includes 204 bug fixes and improvements since 2.9.1. [1]

The RC is available at http://home.apache.org/~aajisaka/hadoop-2.9.2-RC0/
Git signed tag is release-2.9.2-RC0 and the checksum is
826afbeae31ca687bc2f8471dc841b66ed2c6704
The maven artifacts are staged at
https://repository.apache.org/content/repositories/orgapachehadoop-1166/

You can find my public key at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
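
For reference, a typical verification sequence with this key looks roughly like the following (artifact and checksum file names are illustrative; use whatever is in the RC directory):

  curl -O https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
  gpg --import KEYS
  gpg --verify hadoop-2.9.2.tar.gz.asc hadoop-2.9.2.tar.gz
  git tag -v release-2.9.2-RC0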

Please try the release and vote. The vote will run for 5 days.

[1] https://s.apache.org/2.9.2-fixed-jiras

Thanks,
Akira






[jira] [Created] (HADOOP-15937) [JDK 11] Update maven-shade-plugin.version to 3.2.1

2018-11-14 Thread Devaraj K (JIRA)
Devaraj K created HADOOP-15937:
--

 Summary: [JDK 11] Update maven-shade-plugin.version to 3.2.1
 Key: HADOOP-15937
 URL: https://issues.apache.org/jira/browse/HADOOP-15937
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
 Environment: openjdk version "11" 2018-09-25
Reporter: Devaraj K


The build fails with the error below:
{code:xml}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-shade-plugin:3.2.0:shade (default) on project hadoop-yarn-csi: Error creating shaded jar: Problem shading JAR /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/hadoop-yarn-csi-3.3.0-SNAPSHOT.jar entry csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: org.apache.maven.plugin.MojoExecutionException: Error in ASM processing class csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: UnsupportedOperationException -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-shade-plugin:3.2.0:shade (default) on project hadoop-yarn-csi: Error creating shaded jar: Problem shading JAR /hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-csi/target/hadoop-yarn-csi-3.3.0-SNAPSHOT.jar entry csi/v0/Csi$GetPluginInfoRequestOrBuilder.class: org.apache.maven.plugin.MojoExecutionException: Error in ASM processing class csi/v0/Csi$GetPluginInfoRequestOrBuilder.class
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:213)
...
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-yarn-csi
{code}


Updating maven-shade-plugin.version to 3.2.1 fixes the issue.
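
A minimal sketch of the fix, assuming the plugin version is driven by the maven-shade-plugin.version property in hadoop-project/pom.xml (3.2.1 bundles an ASM release that can process JDK 11 class files):

{code:xml}
<!-- hadoop-project/pom.xml (sketch; exact location assumed) -->
<properties>
  <maven-shade-plugin.version>3.2.1</maven-shade-plugin.version>
</properties>
{code}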






[jira] [Created] (HADOOP-15936) [JDK 11] MiniDFSClusterManager & MiniHadoopClusterManager compilation fails due to the usage of '_' as identifier

2018-11-14 Thread Devaraj K (JIRA)
Devaraj K created HADOOP-15936:
--

 Summary: [JDK 11] MiniDFSClusterManager & MiniHadoopClusterManager 
compilation fails due to the usage of '_' as identifier
 Key: HADOOP-15936
 URL: https://issues.apache.org/jira/browse/HADOOP-15936
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
 Environment: openjdk version "11" 2018-09-25
Reporter: Devaraj K


{code:xml}
[ERROR] COMPILATION ERROR :
[INFO] -
[ERROR] 
/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/test/MiniDFSClusterManager.java:[130,37] as of release 9, '_' is a keyword, and may not be used as an identifier
[INFO] 1 error
{code}

{code:xml}
[ERROR] COMPILATION ERROR :
[INFO] -
[ERROR] 
/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/MiniHadoopClusterManager.java:[140,37] as of release 9, '_' is a keyword, and may not be used as an identifier
[INFO] 1 error
{code}
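
A minimal illustration of the incompatibility (hypothetical class and variable names, not the actual Hadoop code): renaming the underscore identifier is enough to fix the compilation.

{code:java}
public class UnderscoreIdentifier {
  public static void main(String[] args) {
    // Fails on JDK 9+ with: "as of release 9, '_' is a keyword,
    // and may not be used as an identifier":
    //   String _ = "no longer legal";

    // Fix: rename the identifier to anything else.
    String unused = "legal on every JDK";
    System.out.println(unused);
  }
}
{code}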







[jira] [Created] (HADOOP-15935) [JDK 11] Update maven.plugin-tools.version to 3.6.0

2018-11-14 Thread Devaraj K (JIRA)
Devaraj K created HADOOP-15935:
--

 Summary: [JDK 11] Update maven.plugin-tools.version to 3.6.0
 Key: HADOOP-15935
 URL: https://issues.apache.org/jira/browse/HADOOP-15935
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
 Environment: openjdk version "11" 2018-09-25
Reporter: Devaraj K


The build fails with the error below:

{code:xml}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-plugin-plugin:3.5.1:descriptor (default-descriptor) on project hadoop-maven-plugins: Execution default-descriptor of goal org.apache.maven.plugins:maven-plugin-plugin:3.5.1:descriptor failed.: IllegalArgumentException -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-plugin-plugin:3.5.1:descriptor (default-descriptor) on project hadoop-maven-plugins: Execution default-descriptor of goal org.apache.maven.plugins:maven-plugin-plugin:3.5.1:descriptor failed.
at org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:213)
...
at org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:356)
[ERROR]
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-maven-plugins

{code}


Updating maven.plugin-tools.version to 3.6.0 fixes this issue.
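
A minimal sketch of the fix, assuming the maven-plugin-plugin version is driven by the maven.plugin-tools.version property in the project POM (3.6.0 supports newer JDKs):

{code:xml}
<!-- hadoop-project/pom.xml (sketch; exact location assumed) -->
<properties>
  <maven.plugin-tools.version>3.6.0</maven.plugin-tools.version>
</properties>
{code}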






[jira] [Created] (HADOOP-15934) ABFS: make retry policy configurable

2018-11-14 Thread Da Zhou (JIRA)
Da Zhou created HADOOP-15934:


 Summary: ABFS: make retry policy configurable
 Key: HADOOP-15934
 URL: https://issues.apache.org/jira/browse/HADOOP-15934
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Da Zhou
Assignee: Da Zhou


Currently the retry policy parameters are hard-coded; they should be made configurable for users.
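
A sketch of what the configuration could look like in core-site.xml once the policy is configurable (property names and values are illustrative, not necessarily the final keys):

{code:xml}
<!-- core-site.xml (sketch; property names illustrative) -->
<property>
  <name>fs.azure.io.retry.max.retries</name>
  <value>30</value>
</property>
<property>
  <name>fs.azure.io.retry.backoff.interval</name>
  <value>3000</value>
</property>
{code}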






[jira] [Created] (HADOOP-15933) Need for more stats in DFSClient

2018-11-14 Thread Pranay Singh (JIRA)
Pranay Singh created HADOOP-15933:
-

 Summary: Need for more stats in DFSClient
 Key: HADOOP-15933
 URL: https://issues.apache.org/jira/browse/HADOOP-15933
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Pranay Singh


The usage of HDFS has changed: it started as a filesystem for MapReduce and is now becoming more of a general-purpose filesystem. Since most issues involve the Namenode, we already have metrics to gauge the workload and stress on the Namenode.

However, we need more statistics collected for the different operations/RPCs in DFSClient, to know which RPC operations take longer and how frequent each operation is. These statistics can be exposed to users of the DFS client, who can periodically log them or apply some form of flow control when responses are slow. They would also help isolate HDFS issues in a mixed environment where, say, HBase and Impala run together on a node: we could compare the throughput of different operations across clients and isolate problems caused by a noisy neighbor, network congestion, or a shared JVM.

We have dealt with several problems from the field for which there is no conclusive evidence as to what caused them. If we had metrics or stats in DFSClient, we would be better equipped to solve such complex problems.

JIRAs for reference: HADOOP-15538, HADOOP-15530 (client-side deadlock)
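
As a sketch of the kind of client-side visibility being asked for: the existing StorageStatistics API (FileSystem#getStorageStatistics) already exposes per-operation counts on the client; the per-RPC latency/frequency side is what this issue would add. Something along these lines (URI is illustrative):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.StorageStatistics;

// Sketch: poll per-operation counters from the client side.
// Today this yields operation counts only; per-RPC latency stats
// are the gap this issue proposes to close.
public class DfsClientStatsProbe {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020/"),
        new Configuration());
    StorageStatistics stats = fs.getStorageStatistics();
    stats.getLongStatistics().forEachRemaining(
        s -> System.out.println(s.getName() + " = " + s.getValue()));
  }
}
{code}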






[VOTE] Release Apache Hadoop Ozone 0.3.0-alpha (RC1)

2018-11-14 Thread Elek, Marton
Hi all,

I've created the second release candidate (RC1) for Apache Hadoop Ozone
0.3.0-alpha, including one more fix on top of the previous RC0 (HDDS-854).

This is the second release of Apache Hadoop Ozone. Notable changes since
the first release:

* A new S3-compatible REST server has been added; Ozone can now be used
from any S3-compatible tool (HDDS-434)
* Ozone Hadoop file system URL prefix is renamed from o3:// to o3fs://
(HDDS-651)
* Extensive testing and stability improvements of OzoneFs.
* Spark, YARN and Hive support and stability improvements.
* Improved Pipeline handling and recovery.
* Separated/dedicated classpath definitions for all the Ozone
components. (HDDS-447)

The RC artifacts are available from:
https://home.apache.org/~elek/ozone-0.3.0-alpha-rc1/

The RC tag in git is: ozone-0.3.0-alpha-RC1 (ebbf459e6a6)

Please try it out, vote, or just give us feedback.

The vote will run for 5 days, ending on November 19, 2018 18:00 UTC.


Thank you very much,
Marton

PS:

The easiest way to try it out is:

1. Download the binary artifact
2. Read the docs from ./docs/index.html
3. TLDR; cd compose/ozone && docker-compose up -d
4. open localhost:9874 or localhost:9876



The easiest way to try it out from the source:

1. mvn  install -DskipTests -Pdist -Dmaven.javadoc.skip=true -Phdds
-DskipShade -am -pl :hadoop-ozone-dist
2. cd hadoop-ozone/dist/target/ozone-0.3.0-alpha && docker-compose up -d



The easiest way to test basic functionality (with acceptance tests):

1. mvn  install -DskipTests -Pdist -Dmaven.javadoc.skip=true -Phdds
-DskipShade -am -pl :hadoop-ozone-dist
2. cd hadoop-ozone/dist/target/ozone-0.3.0-alpha/smoketest
3. ./test.sh




[CANCELED] [VOTE] Release Apache Hadoop Ozone 0.3.0-alpha (RC0)

2018-11-14 Thread Elek, Marton
Unfortunately, a memory issue was found with the default settings. It is
fixed in HDDS-834 (thanks Mukul and Shashikant).

I am canceling this vote and will start an RC1 soon.

Marton

On 11/13/18 1:53 PM, Elek, Marton wrote:
> Hi all,
> 
> I've created the first release candidate (RC0) for Apache Hadoop Ozone
> 0.3.0-alpha according to the plans shared here previously.
> 
> This is the second release of Apache Hadoop Ozone. Notable changes since
> the first release:
> 
> * A new S3 compatible rest server is added. Ozone can be used from any
> S3 compatible tools (HDDS-434)
> * Ozone Hadoop file system URL prefix is renamed from o3:// to o3fs://
> (HDDS-651)
> * Extensive testing and stability improvements of OzoneFs.
> * Spark, YARN and Hive support and stability improvements.
> * Improved Pipeline handling and recovery.
> * Separated/dedicated classpath definitions for all the Ozone
> components. (HDDS-447)
> 
> The RC artifacts are available from:
> https://home.apache.org/~elek/ozone-0.3.0-alpha-rc0/
> 
> The RC tag in git is: ozone-0.3.0-alpha-RC0 (dc661083683)
> 
> Please try it out, vote, or just give us feedback.
> 
> The vote will run for 5 days, ending on November 18, 2018 13:00 UTC.
> 
> 
> Thank you very much,
> Marton
> 
> PS:
> 
> The easiest way to try it out is:
> 
> 1. Download the binary artifact
> 2. Read the docs from ./docs/index.html
> 3. TLDR; cd compose/ozone && docker-compose up -d
> 4. open localhost:9874 or localhost:9876
> 
> 
> 
> The easiest way to try it out from the source:
> 
> 1. mvn  install -DskipTests -Pdist -Dmaven.javadoc.skip=true -Phdds
> -DskipShade -am -pl :hadoop-ozone-dist
> 2. cd hadoop-ozone/dist/target/ozone-0.3.0-alpha && docker-compose up -d
> 
> 
> 
> The easiest way to test basic functionality (with acceptance tests):
> 
> 1. mvn  install -DskipTests -Pdist -Dmaven.javadoc.skip=true -Phdds
> -DskipShade -am -pl :hadoop-ozone-dist
> 2. cd hadoop-ozone/dist/target/ozone-0.3.0-alpha/smoketest
> 3. ./test.sh
> 




[jira] [Created] (HADOOP-15932) Oozie unable to create sharelib in s3a filesystem

2018-11-14 Thread Soumitra Sulav (JIRA)
Soumitra Sulav created HADOOP-15932:
---

 Summary: Oozie unable to create sharelib in s3a filesystem
 Key: HADOOP-15932
 URL: https://issues.apache.org/jira/browse/HADOOP-15932
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Soumitra Sulav


The Oozie server is unable to start because of the exception below.
s3a expects a file when copying into the store, but sharelib is a folder containing
all the needed component jars, hence the exception:
_Not a file: /usr/hdp/current/oozie-server/share/lib_

{code:java}
[oozie@sg-hdp1 ~]$ /usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs s3a://hdp -locallib /usr/hdp/current/oozie-server/share
  setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
  setting CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
  setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
  setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
  setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
  setting JRE_HOME=${JAVA_HOME}
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
  setting OOZIE_LOG=/var/log/oozie
  setting CATALINA_PID=/var/run/oozie/oozie.pid
  setting OOZIE_DATA=/hadoop/oozie/data
  setting OOZIE_HTTP_PORT=11000
  setting OOZIE_ADMIN_PORT=11001
  setting JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
  setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} -Doozie.connection.retry.count=5 "
  setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-client/conf}
  setting CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-client/oozie-server}
  setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
  setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
  setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112
  setting JRE_HOME=${JAVA_HOME}
  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
  setting OOZIE_LOG=/var/log/oozie
  setting CATALINA_PID=/var/run/oozie/oozie.pid
  setting OOZIE_DATA=/hadoop/oozie/data
  setting OOZIE_HTTP_PORT=11000
  setting OOZIE_ADMIN_PORT=11001
  setting JAVA_LIBRARY_PATH=/usr/hdp/3.0.0.0-1634/hadoop/lib/native/Linux-amd64-64
  setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} -Doozie.connection.retry.count=5 "
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.0.0-1634/oozie/lib/slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/3.0.0.0-1634/oozie/libserver/slf4j-log4j12-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
518 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
605 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir
619 [main] INFO org.apache.hadoop.security.SecurityUtil - Updating Configuration
the destination path for sharelib is: /user/oozie/share/lib/lib_20181114154552
log4j:WARN No appenders could be found for logger (org.apache.htrace.core.Tracer).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
1118 [main] WARN org.apache.hadoop.metrics2.impl.MetricsConfig - Cannot locate configuration: tried hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - Scheduled Metric snapshot period at 10 second(s).
1172 [main] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - s3a-file-system metrics system started
2255 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - fs.s3a.server-side-encryption-key is deprecated. Instead, use fs.s3a.server-side-encryption.key

Error: Not a file: /usr/hdp/current/oozie-server/share/lib

Stack trace for the error was (for debug purposes):
--
java.io.FileNotFoundException: Not a file: /usr/hdp/current/oozie-server/share/lib
at org.apache.hadoop.fs.s3a.S3AFileSystem.innerCopyFromLocalFile(S3AFileSystem.java:2375)
at org.apache.hadoop.fs.s3a.S3AFileSystem.copyFromLocalFile(S3AFileSystem.java:2339)
at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:2386)
at org.apache.oozie.tools.OozieSharelibCLI.run(OozieSharelibCLI.java:182)
at org.apache.oozie.tools.OozieSharelibCLI.main(OozieSharelibCLI.java:67)
--

2268 [pool-2-thread-1] INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl - Stopping s3a-file-system metrics system...
2
{code}

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-11-14 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/

[Nov 13, 2018 2:38:27 AM] (yqlin) HDDS-831. TestOzoneShell in integration-test is flaky. Contributed by
[Nov 13, 2018 4:40:43 AM] (aajisaka) HADOOP-15923. create-release script should set max-cache-ttl as well as
[Nov 13, 2018 4:57:07 AM] (tasanuma) HADOOP-15912. start-build-env.sh still creates an invalid
[Nov 13, 2018 7:15:44 AM] (brahma) HDFS-14070. Refactor NameNodeWebHdfsMethods to allow better
[Nov 13, 2018 2:52:58 PM] (surendralilhore) HADOOP-15869. BlockDecompressorStream#decompress should not return -1 in
[Nov 13, 2018 6:09:14 PM] (shashikant) HDDS-675. Add blocking buffer and use watchApi for flush/close in
[Nov 13, 2018 7:24:15 PM] (wangda) YARN-8918. [Submarine] Correct method usage of str.subString in
[Nov 13, 2018 7:25:41 PM] (wangda) MAPREDUCE-7158. Inefficient Flush Logic in JobHistory EventWriter.
[Nov 13, 2018 8:44:25 PM] (xiao) Revert "HDFS-13732. ECAdmin should print the policy name when an EC
[Nov 13, 2018 9:13:27 PM] (wangda) YARN-9001. [Submarine] Use AppAdminClient instead of ServiceClient to
[Nov 13, 2018 9:46:18 PM] (stevel) HADOOP-15876. Use keySet().removeAll() to remove multiple keys from Map




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests:

   hadoop.hdfs.TestReconstructStripedFile
   hadoop.hdfs.web.TestWebHdfsTimeouts
   hadoop.hdfs.server.balancer.TestBalancer
   hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
   hadoop.mapreduce.jobhistory.TestEvents

   cc:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/diff-compile-javac-root.txt  [324K]

   checkstyle:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/diff-checkstyle-root.txt  [17M]

   hadolint:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/diff-patch-hadolint.txt  [4.0K]

   pathlen:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/pathlen.txt  [12K]

   pylint:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/diff-patch-pylint.txt  [40K]

   shellcheck:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/diff-patch-shellcheck.txt  [68K]

   shelldocs:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/diff-patch-shelldocs.txt  [12K]

   whitespace:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/whitespace-eol.txt  [9.3M]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/whitespace-tabs.txt  [1.1M]

   findbugs:

      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/branch-findbugs-hadoop-hdds_client.txt  [24K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt  [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/branch-findbugs-hadoop-hdds_framework.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt  [12K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/branch-findbugs-hadoop-hdds_tools.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/branch-findbugs-hadoop-ozone_client.txt  [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/branch-findbugs-hadoop-ozone_common.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt  [8.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt  [4.0K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt  [16K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt  [44K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/957/artifact/out/branch-findbugs

Re: YARN-8789

2018-11-14 Thread Steve Loughran
You'll have to ask the yarn-dev team there, I'm afraid

> On 13 Nov 2018, at 16:34, dam6923  wrote:
> 
> Is anyone able to assist in getting YARN-8789 reviewed and committed?
> 
> Thanks!





[jira] [Resolved] (HADOOP-8842) local file system behavior of mv into an empty directory is inconsistent with HDFS

2018-11-14 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-8842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-8842.

Resolution: Won't Fix

The semantics of rename() are complex, and in places the Hadoop FS APIs and the
"hadoop fs -mv" command are wrong. I don't think we can fix this, though if someone
were to add/extend the fs shell's mv command then we could change the UI.

> local file system behavior of mv into an empty directory is inconsistent with 
> HDFS
> --
>
> Key: HADOOP-8842
> URL: https://issues.apache.org/jira/browse/HADOOP-8842
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.2
>Reporter: Julien Le Dem
>Priority: Major
>
> moving into an empty directory replaces the directory instead.
> See output of attached script to reproduce :
> repro.sh
> {noformat}
> rm -rf local_fs_bug
> mkdir local_fs_bug
> hdfs -rmr local_fs_bug
> hdfs -mkdir local_fs_bug
> echo ">>> HDFS: normal behavior"
> touch part-
> hdfs -mkdir local_fs_bug/a
> hdfs -copyFromLocal part- local_fs_bug/a
> hdfs -mkdir local_fs_bug/b
> hdfs -mkdir local_fs_bug/b/c
> echo "content of a: 1 part"
> hdfs -ls local_fs_bug/a
> echo "content of b/c: empty"
> hdfs -ls local_fs_bug/b/c
> echo "mv a b/c"
> hdfs -mv local_fs_bug/a local_fs_bug/b/c
> echo "resulting content of b/c"
> hdfs -ls local_fs_bug/b/c
> echo "a is moved inside of c"
> echo
> echo ">>> local fs: bug"
> mkdir -p local_fs_bug/a
> touch local_fs_bug/a/part-
> mkdir -p local_fs_bug/b/c
> echo "content of a: 1 part"
> hdfs -fs local -ls local_fs_bug/a
> echo "content of b/c: empty"
> hdfs -fs local -ls local_fs_bug/b/c
> echo "mv a b/c"
> hdfs -fs local -mv local_fs_bug/a local_fs_bug/b/c
> echo "resulting content of b/c"
> hdfs -fs local -ls local_fs_bug/b/c
> echo "bug: a replaces c"
> echo
> echo ">>> but it works if the destination is not empty"
> mkdir local_fs_bug/a2
> touch local_fs_bug/a2/part-
> mkdir -p local_fs_bug/b2/c2
> touch local_fs_bug/b2/c2/dummy
> echo "content of a2: 1 part"
> hdfs -fs local -ls local_fs_bug/a2
> echo "content of b2/c2: 1 dummy file"
> hdfs -fs local -ls local_fs_bug/b2/c2
> echo "mv a2 b2/c2"
> hdfs -fs local -mv local_fs_bug/a2 local_fs_bug/b2/c2
> echo "resulting content of b/c"
> hdfs -fs local -ls local_fs_bug/b2/c2
> echo "a2 is moved inside of c2"
> {noformat}
> Output:
> {noformat}
> >>> HDFS: normal behavior
> content of a: 1 part
> Found 1 items
> -rw-r--r--   3 julien g  0 2012-09-25 17:16 
> /user/julien/local_fs_bug/a/part-
> content of b/c: empty
> mv a b/c
> resulting content of b/c
> Found 1 items
> drwxr-xr-x   - julien g  0 2012-09-25 17:16 
> /user/julien/local_fs_bug/b/c/a
> a is moved inside of c
> >>> local fs: bug
> content of a: 1 part
> 12/09/25 17:16:34 WARN fs.FileSystem: "local" is a deprecated filesystem 
> name. Use "file:///" instead.
> Found 1 items
> -rw-r--r--   1 julien g  0 2012-09-25 17:16 
> /home/julien/local_fs_bug/a/part-
> content of b/c: empty
> 12/09/25 17:16:34 WARN fs.FileSystem: "local" is a deprecated filesystem 
> name. Use "file:///" instead.
> mv a b/c
> 12/09/25 17:16:35 WARN fs.FileSystem: "local" is a deprecated filesystem 
> name. Use "file:///" instead.
> resulting content of b/c
> 12/09/25 17:16:35 WARN fs.FileSystem: "local" is a deprecated filesystem 
> name. Use "file:///" instead.
> Found 1 items
> -rw-r--r--   1 julien g  0 2012-09-25 17:16 
> /home/julien/local_fs_bug/b/c/part-
> bug: a replaces c
> >>> but it works if the destination is not empty
> content of a2: 1 part
> 12/09/25 17:16:36 WARN fs.FileSystem: "local" is a deprecated filesystem 
> name. Use "file:///" instead.
> Found 1 items
> -rw-r--r--   1 julien g  0 2012-09-25 17:16 
> /home/julien/local_fs_bug/a2/part-
> content of b2/c2: 1 dummy file
> 12/09/25 17:16:37 WARN fs.FileSystem: "local" is a deprecated filesystem 
> name. Use "file:///" instead.
> Found 1 items
> -rw-r--r--   1 julien g  0 2012-09-25 17:16 
> /home/julien/local_fs_bug/b2/c2/dummy
> mv a2 b2/c2
> 12/09/25 17:16:37 WARN fs.FileSystem: "local" is a deprecated filesystem 
> name. Use "file:///" instead.
> resulting content of b/c
> 12/09/25 17:16:38 WARN fs.FileSystem: "local" is a deprecated filesystem 
> name. Use "file:///" instead.
> Found 2 items
> drwxr-xr-x   - julien g   4096 2012-09-25 17:16 
> /home/julien/local_fs_bug/b2/c2/a2
> -rw-r--r--   1 julien g  0 2012-09-25 17:16 
> /home/julien/local_fs_bug/b2/c2/dummy
> a2 is moved inside of c2
> {noformat}


