[jira] [Resolved] (HADOOP-16007) Order of property settings is incorrect when includes are processed

2018-12-20 Thread Jason Lowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-16007.
-
Resolution: Duplicate

This was fixed by HADOOP-15554.

> Order of property settings is incorrect when includes are processed
> ---
>
> Key: HADOOP-16007
> URL: https://issues.apache.org/jira/browse/HADOOP-16007
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.2.0, 3.1.1, 3.0.4
>Reporter: Jason Lowe
>Assignee: Eric Payne
>Priority: Blocker
>
> If a configuration file sets a property and then includes another file that 
> sets the same property to a different value, the property is parsed 
> incorrectly. For example, consider the following configuration file:
> {noformat}
> <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
>   <property>
>     <name>myprop</name>
>     <value>val1</value>
>   </property>
>   <xi:include href="/some/other/file.xml"/>
> </configuration>
> {noformat}
> with the contents of /some/other/file.xml as:
> {noformat}
> <property>
>   <name>myprop</name>
>   <value>val2</value>
> </property>
> {noformat}
> Parsing this configuration should result in myprop=val2, but it actually 
> results in myprop=val1.
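The ordering expectation can be checked without Hadoop itself: the JDK's DOM parser supports XInclude, and after expansion the included property sits last in document order, so last-write-wins parsing should yield val2. A minimal standalone sketch (file handling and names here are illustrative, not Hadoop's actual Configuration code):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class XIncludeOrder {
    public static void main(String[] args) throws Exception {
        // The included file sets myprop=val2.
        Path inc = Files.createTempFile("file", ".xml");
        Files.writeString(inc,
            "<property><name>myprop</name><value>val2</value></property>");

        // The main file sets myprop=val1, then includes the other file.
        Path conf = Files.createTempFile("conf", ".xml");
        Files.writeString(conf,
            "<configuration xmlns:xi=\"http://www.w3.org/2001/XInclude\">"
          + "<property><name>myprop</name><value>val1</value></property>"
          + "<xi:include href=\"" + inc.toUri() + "\"/>"
          + "</configuration>");

        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        dbf.setXIncludeAware(true);   // expand xi:include during parsing
        Document doc = dbf.newDocumentBuilder().parse(conf.toFile());

        // After expansion the included <property> is last in document
        // order, so a last-write-wins parse must end up with val2.
        NodeList values = doc.getElementsByTagName("value");
        String last = values.item(values.getLength() - 1).getTextContent();
        System.out.println("effective myprop = " + last);
    }
}
```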



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16018) DistCp won't reassemble chunks when blocks per chunk > 0

2018-12-20 Thread Kai X (JIRA)
Kai X created HADOOP-16018:
--

 Summary: DistCp won't reassemble chunks when blocks per chunk > 0
 Key: HADOOP-16018
 URL: https://issues.apache.org/jira/browse/HADOOP-16018
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools/distcp
Affects Versions: 3.0.3, 2.9.2, 3.1.1
Reporter: Kai X


I was investigating why hadoop-distcp-2.9.2 won't reassemble chunks of the same 
file when blocks per chunk has been set > 0.

In CopyCommitter::commitJob, this check skips reassembling chunks when 
blocksPerChunk is 0:
{code:java}
if (blocksPerChunk > 0) {
  concatFileChunks(conf);
}
{code}
Then in CopyCommitter's constructor, blocksPerChunk is initialised from the config:

{code:java}
blocksPerChunk = context.getConfiguration().getInt(
    DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel(), 0);
{code}

But the config key DistCpOptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel() always 
returns an empty string, because the enum constant is constructed without a 
config label:

{code:java}
BLOCKS_PER_CHUNK("",
    new Option("blocksperchunk", true, "If set to a positive value, files"
        + "with more blocks than this value will be split into chunks of "
        + "<blocksperchunk> blocks to be transferred in parallel, and "
        + "reassembled on the destination. By default, <blocksperchunk> is "
        + "0 and the files will be transmitted in their entirety without "
        + "splitting. This switch is only applicable when the source file "
        + "system implements getBlockLocations method and the target file "
        + "system implements concat method"))
{code}
As a result blocksPerChunk always falls back to its default value of 0, and 
the chunks are never reassembled.
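The failure mode can be shown with a simplified model of the enum (not the real DistCp classes; the property name below is illustrative): whatever key the user actually sets, the lookup is done against the empty string and always misses.

```java
import java.util.Properties;

public class BlocksPerChunkBug {
    // Simplified stand-in for DistCpOptionSwitch: the constant is built
    // with "" as its config label, as described in the report.
    enum OptionSwitch {
        BLOCKS_PER_CHUNK("");

        private final String confLabel;
        OptionSwitch(String confLabel) { this.confLabel = confLabel; }
        String getConfigLabel() { return confLabel; }
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        // The user sets a real key (name illustrative), but the enum
        // looks up "" instead, so the value is never found.
        conf.setProperty("distcp.blocks.per.chunk", "10");

        String raw = conf.getProperty(
            OptionSwitch.BLOCKS_PER_CHUNK.getConfigLabel());
        int blocksPerChunk = (raw == null) ? 0 : Integer.parseInt(raw);

        // Always 0, so a guard like "if (blocksPerChunk > 0)" is never
        // taken and chunks are never concatenated.
        System.out.println("blocksPerChunk = " + blocksPerChunk);
    }
}
```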





Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-12-20 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/

[Dec 19, 2018 7:00:40 AM] (arp) HDDS-940. Remove dead store to local variable 
in OmMetadataManagerImpl.
[Dec 19, 2018 8:12:06 AM] (arp) HDDS-893. pipeline status is ALLOCATED in 
scmcli listPipelines command.
[Dec 19, 2018 7:55:56 PM] (eyang) YARN-9126.  Fix container clean up for 
reinitialization.
[Dec 20, 2018 12:45:23 AM] (billie) YARN-9129. Ensure flush after printing to 
log plus additional cleanup.
[Dec 20, 2018 1:03:33 AM] (tasanuma) HDFS-13661. Ls command with e option fails 
when the filesystem is not
[Dec 20, 2018 1:09:50 AM] (aajisaka) MAPREDUCE-7166. map-only job should ignore 
node lost event when task is




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
unit


Specific tests:

Failed junit tests:

   hadoop.util.TestReadWriteDiskValidator
   hadoop.registry.secure.TestSecureLogins
   hadoop.hdfs.web.TestWebHdfsTimeouts
   hadoop.yarn.sls.appmaster.TestAMSimulator

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/diff-compile-javac-root.txt [336K]

   checkstyle:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/diff-checkstyle-root.txt [17M]

   hadolint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/diff-patch-hadolint.txt [4.0K]

   pathlen:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/pathlen.txt [12K]

   pylint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/diff-patch-pylint.txt [60K]

   shellcheck:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/diff-patch-shelldocs.txt [12K]

   whitespace:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/whitespace-eol.txt [9.3M]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/whitespace-tabs.txt [1.1M]

   findbugs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/branch-findbugs-hadoop-hdds_client.txt [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/branch-findbugs-hadoop-hdds_framework.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/branch-findbugs-hadoop-hdds_tools.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/branch-findbugs-hadoop-ozone_client.txt [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/branch-findbugs-hadoop-ozone_common.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt [8.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt [12K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt [4.0K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/branch-findbugs-hadoop-ozone_tools.txt [8.0K]

   javadoc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/diff-javadoc-javadoc-root.txt [752K]

   unit:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [164K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/patch-unit-hadoop-common-project_hadoop-registry.txt [12K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/993/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [324K]
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/9

[jira] [Created] (HADOOP-16017) Add some S3A-specific create file options

2018-12-20 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16017:
---

 Summary: Add some S3A-specific create file options
 Key: HADOOP-16017
 URL: https://issues.apache.org/jira/browse/HADOOP-16017
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Steve Loughran


Add some options in createFile() for S3A specifically. 

I think we need something for "put-only", where neither the existence checks 
nor the cleanup are performed when s3guard == off. That is: no HEAD/LIST first, 
no DELETE of parents after. Only the PUT is done.

This is 

* faster
* lower cost
* avoids 404s being cached in load balancers and so creating inconsistency

It does rely on the caller knowing what they are doing, or else you end up in 
a mess; but since the s3a committers use WriteOperationsHelper for this exact 
operation, we should open it up to others who also know what they are doing.
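To make the saving concrete, a toy request-counting model of the two create paths (the client and method names below are hypothetical stand-ins, not the real S3A code):

```java
import java.util.ArrayList;
import java.util.List;

public class PutOnlySketch {
    // Toy client that just records which S3 requests would be issued.
    static class CountingClient {
        final List<String> requests = new ArrayList<>();
        void head(String key)   { requests.add("HEAD " + key); }
        void list(String key)   { requests.add("LIST " + key); }
        void put(String key)    { requests.add("PUT " + key); }
        void delete(String key) { requests.add("DELETE " + key); }
    }

    // Existing create path: probe for a file/dir first, clean up after.
    static void checkedCreate(CountingClient c, String key) {
        c.head(key);                 // does an object already exist?
        c.list(key + "/");           // is there a "directory" there?
        c.put(key);
        c.delete(parent(key) + "/"); // remove parent dir marker afterwards
    }

    // Proposed put-only path: no probes, no cleanup, one request.
    static void putOnly(CountingClient c, String key) {
        c.put(key);
    }

    static String parent(String key) {
        return key.substring(0, key.lastIndexOf('/'));
    }

    public static void main(String[] args) {
        CountingClient a = new CountingClient();
        checkedCreate(a, "bucket/data/part-0000");
        CountingClient b = new CountingClient();
        putOnly(b, "bucket/data/part-0000");
        System.out.println("checked create: " + a.requests.size() + " requests");
        System.out.println("put-only:       " + b.requests.size() + " request");
    }
}
```

Each skipped HEAD/LIST/DELETE is also one fewer chance to cache a 404 in a load balancer before the PUT lands.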



