Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-10-13 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/925/

[Oct 12, 2018 2:37:37 AM] (szetszwo) HDDS-625. putKey hangs for a long time after completion, sometimes
[Oct 12, 2018 3:21:00 AM] (vrushali) YARN-8834 Provide Java client for fetching Yarn specific entities from
[Oct 12, 2018 3:39:41 AM] (xiao) HDFS-13973. getErasureCodingPolicy should log path in audit event.
[Oct 12, 2018 4:03:01 AM] (bharat) HDDS-609. On restart, SCM does not exit chill mode as it expects DNs to
[Oct 12, 2018 4:14:06 AM] (vrushali) YARN-3879 [Storage implementation] Create HDFS backing storage
[Oct 12, 2018 6:00:13 AM] (yqlin) HDDS-628. Fix outdated names used in HDDS documentations.
[Oct 12, 2018 8:59:19 AM] (stevel) HADOOP-15831. Include modificationTime in the toString method of
[Oct 12, 2018 11:57:23 AM] (vinayakumarb) HDFS-13156. HDFS Block Placement Policy - Client Local Rack. Contributed
[Oct 12, 2018 12:04:10 PM] (vinayakumarb) HDFS-13945. TestDataNodeVolumeFailure is Flaky. Contributed by Ayush
[Oct 12, 2018 4:35:52 PM] (xiao) HADOOP-14445. Use DelegationTokenIssuer to create KMS delegation tokens
[Oct 12, 2018 4:40:34 PM] (rkanter) HADOOP-15832. Addendum: Upgrade BouncyCastle to 1.60. Contributed by
[Oct 12, 2018 4:58:16 PM] (bharat) HDDS-524. Implement PutBucket REST endpoint. Contributed by Bharat
[Oct 12, 2018 5:10:12 PM] (inigoiri) HDFS-13802. RBF: Remove FSCK from Router Web UI. Contributed by Fei Hui.
[Oct 12, 2018 5:36:57 PM] (xyao) HDDS-555. RandomKeyGenerator runs not closing the XceiverClient
[Oct 12, 2018 5:51:50 PM] (xyao) HDDS-555. RandomKeyGenerator runs not closing the XceiverClient
[Oct 12, 2018 6:30:57 PM] (arp) HDDS-641. Fix ozone filesystem robot test. Contributed by Mukul Kumar
[Oct 12, 2018 8:33:38 PM] (arp) HDDS-639. ChunkGroupInputStream gets into infinite loop after reading a
[Oct 12, 2018 8:58:53 PM] (bharat) HDDS-606. Create delete s3Bucket. Contributed by Bharat Viswanadham.
[Oct 12, 2018 9:22:46 PM] (aengineer) HDDS-624. PutBlock fails with Unexpected Storage Container Exception.
[Oct 12, 2018 9:44:51 PM] (arp) HDDS-644. Rename dfs.container.ratis.num.container.op.threads.
[Oct 12, 2018 9:46:04 PM] (aengineer) HDDS-645. Enable OzoneFS contract tests by default. Contributed by Arpit
[Oct 12, 2018 9:52:20 PM] (nanda) HDDS-587. Add new classes for pipeline management. Contributed by Lokesh
[Oct 12, 2018 10:06:42 PM] (arp) HDDS-646. TestChunkStreams.testErrorReadGroupInputStream fails.
[Oct 12, 2018 11:27:54 PM] (aengineer) HDDS-445. Create a logger to print out all of the incoming requests.
[Oct 12, 2018 11:59:11 PM] (bharat) HDDS-613. Update HeadBucket, DeleteBucket to not to have volume in path.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

Failed CTEST tests :

   test_test_libhdfs_threaded_hdfs_static 
   test_libhdfs_threaded_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.TestEncryptionZonesWithKMS 
   hadoop.hdfs.TestEncryptionZones 
   hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage 

   cc:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/925/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/925/artifact/out/diff-compile-javac-root.txt  [300K]

   checkstyle:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/925/artifact/out/diff-checkstyle-root.txt  [17M]

   hadolint:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/925/artifact/out/diff-patch-hadolint.txt  [4.0K]

   pathlen:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/925/artifact/out/pathlen.txt  [12K]

   pylint:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/925/artifact/out/diff-patch-pylint.txt  [40K]

   shellcheck:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/925/artifact/out/diff-patch-shellcheck.txt  [20K]

   shelldocs:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/925/artifact/out/diff-patch-shelldocs.txt  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-tr

[jira] [Created] (HADOOP-15850) Allow CopyCommitter to skip concatenating source files specified by DistCpConstants.CONF_LABEL_LISTING_FILE_PATH

2018-10-13 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-15850:
---

 Summary: Allow CopyCommitter to skip concatenating source files specified by DistCpConstants.CONF_LABEL_LISTING_FILE_PATH
 Key: HADOOP-15850
 URL: https://issues.apache.org/jira/browse/HADOOP-15850
 Project: Hadoop Common
  Issue Type: Task
Reporter: Ted Yu


I was investigating a test failure of TestIncrementalBackupWithBulkLoad from HBase against Hadoop 3.1.1.

HBase's MapReduceBackupCopyJob$BackupDistCp creates the listing file as follows:
{code}
LOG.debug("creating input listing " + listing + " , totalRecords=" + totalRecords);
cfg.set(DistCpConstants.CONF_LABEL_LISTING_FILE_PATH, listing);
cfg.setLong(DistCpConstants.CONF_LABEL_TOTAL_NUMBER_OF_RECORDS, totalRecords);
{code}
For this test case, two bulk-loaded hfiles are in the listing:
{code}
2018-10-13 14:09:24,123 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_
2018-10-13 14:09:24,125 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(195): BackupDistCp : hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_
2018-10-13 14:09:24,125 DEBUG [Time-limited test] mapreduce.MapReduceBackupCopyJob$BackupDistCp(197): BackupDistCp execute for 2 files of 10242
{code}
Later on, CopyCommitter#concatFileChunks would throw the following exception:
{code}
2018-10-13 14:09:25,351 WARN  [Thread-936] mapred.LocalJobRunner$Job(590): job_local1795473782_0004
java.io.IOException: Inconsistent sequence file: current chunk file org.apache.hadoop.tools.CopyListingFileStatus@bb8826ee{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/a7599081e835440eb7bf0dd3ef4fd7a5_SeqId_205_ length = 5100 aclEntries  = null, xAttrs = null} doesnt match prior entry org.apache.hadoop.tools.CopyListingFileStatus@243d544d{hdfs://localhost:42796/user/hbase/test-data/160aeab5-6bca-9f87-465e-2517a0c43119/data/default/test-1539439707496/96b5a3613d52f4df1ba87a1cef20684c/f/394e6d39a9b94b148b9089c4fb967aad_SeqId_205_ length = 5142 aclEntries = null, xAttrs = null}
  at org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276)
  at org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100)
  at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567)
{code}
The above warning shouldn't happen: the two bulk-loaded hfiles are independent files, not chunks of a single split file.
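
To illustrate why independent files can trip this, here is a rough sketch of the kind of adjacency check the exception message suggests. This is not the actual CopyCommitter source; the listingEntries collection and isContinuationOf helper are assumed names used only for illustration:
{code}
// Hedged sketch only; not the real CopyCommitter#concatFileChunks.
// "listingEntries" and "isContinuationOf" are hypothetical names.
CopyListingFileStatus prior = null;
for (CopyListingFileStatus current : listingEntries) {
  if (prior != null && !isContinuationOf(prior, current)) {
    // Two unrelated hfiles that sit back-to-back in the listing land here,
    // even though neither of them was ever split into chunks.
    throw new IOException("Inconsistent sequence file: current chunk file "
        + current + " doesnt match prior entry " + prior);
  }
  prior = current;
}
{code}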

From the contents of the two CopyListingFileStatus instances, we can see that their isSplit() returns false. Otherwise, the following from toString() would have been logged:
{code}
if (isSplit()) {
  sb.append(", chunkOffset = ").append(this.getChunkOffset());
  sb.append(", chunkLength = ").append(this.getChunkLength());
}
{code}
From the HBase side, we could specify one bulk-loaded hfile per job, but that defeats the purpose of using DistCp.

DistCp should provide a way to skip the concatenation of the source files specified by the listing.
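
One possible shape for this, purely as a hedged sketch and not an existing DistCp API: a job configuration switch that the commit path consults before concatenating chunks. The conf key, the SkipConcatExample class, and the concatIfNeeded placeholder are all hypothetical; in the real CopyCommitter, concatFileChunks is private and commitJob's internals differ.
{code}
// Hypothetical sketch only: neither the conf key nor this class exists in DistCp today.
// Idea: let callers such as BackupDistCp opt out of chunk concatenation entirely,
// since they copy whole (unsplit) files and have nothing to stitch back together.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.JobContext;

public class SkipConcatExample {
  // Hypothetical key; DistCpConstants does not define it.
  static final String SKIP_CONCAT_KEY = "distcp.copy.committer.skip.concat";

  // Stand-in for a commitJob-style hook; concatIfNeeded represents the
  // existing chunk-concatenation step (CopyCommitter#concatFileChunks).
  static void commitJob(JobContext context) throws IOException {
    Configuration conf = context.getConfiguration();
    if (conf.getBoolean(SKIP_CONCAT_KEY, false)) {
      return; // caller opted out: nothing was split, so there is nothing to concatenate
    }
    concatIfNeeded(conf);
  }

  static void concatIfNeeded(Configuration conf) throws IOException {
    // placeholder for the existing concatenation logic
  }
}
{code}
A caller like BackupDistCp could then set such a switch on the same Configuration where it already sets CONF_LABEL_LISTING_FILE_PATH.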


