Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2017-12-08 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/63/

[Dec 7, 2017 9:15:21 PM] (mackrorysd) HADOOP-15080.  Aliyun OSS: update oss sdk 
from 2.8.1 to 2.8.3 to remove
[Dec 8, 2017 5:20:33 AM] (xiao) HADOOP-15012. Add readahead, dropbehind, and 
unbuffer to
[Dec 8, 2017 5:20:33 AM] (xiao) HADOOP-15056. Fix 
TestUnbuffer#testUnbufferException failure.
[Dec 8, 2017 5:26:12 AM] (xiao) HADOOP-14872. CryptoInputStream should 
implement unbuffer. Contributed
[Dec 8, 2017 5:53:33 AM] (wwei) YARN-7607. Remove the trailing duplicated 
timestamp in container
[Dec 8, 2017 12:08:16 PM] (sammi.chen) HADOOP-14997. Add hadoop-aliyun as 
dependency of hadoop-cloud-storage.
[Dec 8, 2017 2:02:00 PM] (sammi.chen) HADOOP-15024. AliyunOSS: Support user 
agent configuration and include




-1 overall


The following subsystems voted -1:
asflicense unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Unreaped Processes :

   hadoop-common:1 
   hadoop-hdfs:21 
   bkjournal:5 
   hadoop-mapreduce-client-jobclient:4 
   hadoop-distcp:2 
   hadoop-extras:1 
   hadoop-yarn-applications-distributedshell:1 
   hadoop-yarn-client:9 
   hadoop-yarn-server-timelineservice:1 

Failed junit tests :

   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.hdfs.TestSetrepIncreasing 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.TestBlocksScheduledCounter 
   hadoop.hdfs.TestCrcCorruption 
   hadoop.hdfs.crypto.TestHdfsCryptoStreams 
   hadoop.hdfs.TestHDFSTrash 
   hadoop.mapreduce.v2.TestUberAM 
   hadoop.tools.TestIntegration 
   hadoop.tools.TestDistCpViewFs 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.yarn.applications.distributedshell.TestDistributedShellWithNodeLabels 
   hadoop.yarn.server.TestContainerManagerSecurity 

Timed out junit tests :

   org.apache.hadoop.log.TestLogLevel 
   org.apache.hadoop.hdfs.TestLeaseRecovery2 
   org.apache.hadoop.hdfs.TestMaintenanceState 
   org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade 
   org.apache.hadoop.hdfs.TestHDFSFileSystemContract 
   org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage 
   org.apache.hadoop.hdfs.web.TestWebHdfsTokens 
   org.apache.hadoop.hdfs.TestFileCreationDelete 
   org.apache.hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter 
   org.apache.hadoop.hdfs.TestBlockStoragePolicy 
   org.apache.hadoop.hdfs.TestDFSOutputStream 
   org.apache.hadoop.hdfs.web.TestWebHDFS 
   org.apache.hadoop.hdfs.TestAppendSnapshotTruncate 
   org.apache.hadoop.hdfs.web.TestWebHDFSXAttr 
   org.apache.hadoop.hdfs.TestRollingUpgradeRollback 
   org.apache.hadoop.hdfs.TestMiniDFSCluster 
   org.apache.hadoop.hdfs.TestDFSShell 
   org.apache.hadoop.hdfs.TestDataTransferProtocol 
   org.apache.hadoop.hdfs.web.TestWebHDFSAcl 
   org.apache.hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperAsHASharedDir 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperEditLogStreams 
   org.apache.hadoop.contrib.bkjournal.TestCurrentInprogress 
   org.apache.hadoop.mapred.TestClusterMapReduceTestCase 
   org.apache.hadoop.mapred.TestMRIntermediateDataEncryption 
   org.apache.hadoop.mapred.TestMRTimelineEventHandling 
   org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers 
   org.apache.hadoop.tools.TestDistCpSyncReverseFromTarget 
   org.apache.hadoop.tools.TestDistCpSyncReverseFromSource 
   org.apache.hadoop.tools.TestCopyFiles 
   org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell 
   org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy 
   org.apache.hadoop.yarn.client.TestRMFailover 
   org.apache.hadoop.yarn.client.cli.TestYarnCLI 
   org.apache.hadoop.yarn.client.TestApplicationMasterServiceProtocolOnHA 
   org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA 
   org.apache.hadoop.yarn.client.api.impl.TestYarnClientWithReservation 
   org.apache.hadoop.yarn.client.api.impl.TestYarnClient 
   org.apache.hadoop.yarn.client.api.impl.TestAMRMClient 
   org.apache.hadoop.yarn.client.api.impl.TestNMClient 
   org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServices 

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/63/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   

Re: [VOTE] Merge HDFS-9806 to trunk

2017-12-08 Thread Virajith Jalaparti
Hi,

We have tested the HDFS-9806 branch in two settings:

(i) 26-node bare-metal cluster, with PROVIDED storage configured to point
to another instance of HDFS (containing 468 files, total of ~400GB of
data). Half of the Datanodes are configured with only DISK volumes and
the other half have both DISK and PROVIDED volumes.
(ii) 8 VMs on Azure, with PROVIDED storage configured to point to a WASB
account (containing 26,074 files and ~1.3TB of data). All Datanodes are
configured with DISK and PROVIDED volumes.

(i) was tested using both the text-based alias map (TextFileRegionAliasMap)
and the in-memory leveldb-based alias map (InMemoryLevelDBAliasMapClient),
while (ii) was tested using the text-based alias map only.

Steps followed:
(0) Build from apache/HDFS-9806. (Note that for the leveldb-based alias
map, the patch posted to HDFS-12912 needs to be applied; we
will commit this to apache/HDFS-9806 after review.)
(1) Generate the FSImage using the image generation tool with the
appropriate remote location (hdfs:// in (i) and wasb:// in (ii)).
(2) Bring up the HDFS cluster.
(3) Verify that the remote namespace is reflected correctly and that data on
the remote store can be accessed. Commands run: ls, copyToLocal, fsck, getrep,
setrep, getStoragePolicy.
(4) Run Sort and Gridmix jobs on the data in the remote location with the
input paths pointing to the local HDFS.
(5) Increase replication of the PROVIDED files and verify, using fsck, that
local (DISK) replicas are created for the PROVIDED replicas.
(6) Verify that PROVIDED storage capacity is shown correctly on the NN and
Datanode Web-UI.
(7) Bring down Datanodes, one by one. When all are down, verify that the NN
reports all PROVIDED files as missing, and that bringing back up any one
Datanode makes all the data available again.
(8) Restart the NN and verify data is still accessible.
(9) Verify that writes to local HDFS continue to work.
(10) Bring down all Datanodes except one. Start decommissioning the
remaining Datanode. Verify that the data in the PROVIDED storage is still
accessible.

Apart from the above, we ported the changes in HDFS-9806 to branch-2.7 and
deployed it on a ~800 node cluster as one of the sub-clusters in a
Router-based Federated HDFS of nearly 4000 nodes (with help from Inigo
Goiri). We mounted about 1000 files, 650TB of remote data (~2.6 million
blocks with a 256MB block size) in this cluster using the text-based alias
map. We verified that the basic commands (ls, copyToLocal, setrep) work.
We also ran Spark jobs against this cluster.

-Virajith


On Fri, Dec 8, 2017 at 3:44 PM, Chris Douglas  wrote:

> Discussion thread: https://s.apache.org/kxT1
>
> We're down to the last few issues and are preparing the branch to
> merge to trunk. We'll post merge patches to HDFS-9806 [1]. Minor,
> "cleanup" tasks (checkstyle, findbugs, naming, etc.) will be tracked
> in HDFS-12712 [2].
>
> We've tried to ensure that when this feature is disabled, HDFS is
> unaffected. For those reviewing this, please look for places where
> this might add overheads and we'll address them before the merge. The
> site documentation [3] and design doc [4] should be up to date and
> sufficient to try this out. Again, please point out where it is
> unclear and we can address it.
>
> This has been a long effort and we're grateful for the support we've
> received from the community. In particular, thanks to Íñigo Goiri,
> Andrew Wang, Anu Engineer, Steve Loughran, Sean Mackrory, Lukas
> Majercak, Uma Gunuganti, Kai Zheng, Rakesh Radhakrishnan, Sriram Rao,
> Lei Xu, Zhe Zhang, Jing Zhao, Bharat Viswanadham, ATM, Chris Nauroth,
> Sanjay Radia, Atul Sikaria, and Peng Li for all your input into the
> design, testing, and review of this feature.
>
> The vote will close no earlier than one week from today, 12/15. -C
>
> [1]: https://issues.apache.org/jira/browse/HDFS-9806
> [2]: https://issues.apache.org/jira/browse/HDFS-12712
> [3]: https://github.com/apache/hadoop/blob/HDFS-9806/hadoop-
> hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
> [4]: https://issues.apache.org/jira/secure/attachment/
> 12875791/HDFS-9806-design.002.pdf
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-12-08 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/616/

[Dec 7, 2017 7:15:40 PM] (lei) HDFS-12840. Creating a file with non-default EC 
policy in a EC zone is
[Dec 7, 2017 7:30:58 PM] (mackrorysd) HADOOP-15098. 
TestClusterTopology#testChooseRandom fails intermittently.
[Dec 7, 2017 10:50:30 PM] (weichiu) HDFS-11915. Sync rbw dir on the first 
hsync() to avoid file lost on
[Dec 8, 2017 2:56:54 AM] (wangda) YARN-6471. Support to add min/max resource 
configuration for a queue.
[Dec 8, 2017 2:56:54 AM] (wangda) YARN-7254. UI and metrics changes related to 
absolute resource
[Dec 8, 2017 2:56:54 AM] (wangda) YARN-7332. Compute effectiveCapacity per each 
resource vector. (Sunil G
[Dec 8, 2017 2:56:54 AM] (wangda) YARN-7411. Inter-Queue preemption's 
computeFixpointAllocation need to
[Dec 8, 2017 2:56:54 AM] (wangda) YARN-7482. Max applications calculation per 
queue has to be retrospected
[Dec 8, 2017 2:56:54 AM] (wangda) YARN-7483. CapacityScheduler test cases 
cleanup post YARN-5881. (Sunil G
[Dec 8, 2017 2:56:54 AM] (wangda) YARN-7538. Fix performance regression 
introduced by Capacity Scheduler
[Dec 8, 2017 2:56:54 AM] (wangda) YARN-7544. Use 
queue-path.capacity/maximum-capacity to specify absolute
[Dec 8, 2017 2:56:54 AM] (wangda) YARN-7564. Cleanup to fix checkstyle issues 
of YARN-5881 branch.
[Dec 8, 2017 2:56:54 AM] (wangda) YARN-7575. NPE in scheduler UI when 
max-capacity is not configured.
[Dec 8, 2017 2:56:54 AM] (wangda) YARN-7533. Documentation for absolute 
resource support in Capacity
[Dec 8, 2017 5:05:55 AM] (xiao) HADOOP-15056. Fix 
TestUnbuffer#testUnbufferException failure.
[Dec 8, 2017 3:03:54 PM] (zhengkai.zk) HADOOP-15104. AliyunOSS: change the 
default value of max error retry.
[Dec 8, 2017 4:00:21 PM] (vinodkv) HADOOP-15059. Undoing the switch of 
Credentials to PB format as default




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Possible null pointer dereference of replication in 
org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
 Short, Byte) Dereferenced at INodeFile.java:replication in 
org.apache.hadoop.hdfs.server.namenode.INodeFile$HeaderFormat.getBlockLayoutRedundancy(BlockType,
 Short, Byte) Dereferenced at INodeFile.java:[line 210] 

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
internal representation by returning Resource.resources At Resource.java:by 
returning Resource.resources At Resource.java:[line 234] 

Failed junit tests :

   hadoop.hdfs.web.TestWebHDFSAcl 
   hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade 
   hadoop.hdfs.server.namenode.ha.TestHAStateTransitions 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 
   hadoop.hdfs.server.namenode.TestReconstructStripedBlocks 
   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS 
   hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock 
   hadoop.hdfs.server.namenode.ha.TestLossyRetryInvocationHandler 
   hadoop.hdfs.TestReadStripedFileWithDecodingCorruptData 
   hadoop.hdfs.server.mover.TestStorageMover 
   hadoop.hdfs.server.namenode.TestProcessCorruptBlocks 
   hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR 
   hadoop.hdfs.server.namenode.TestQuotaByStorageType 
   hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA 
   hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerWithMockMover 
   hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes 
   hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot 
   hadoop.hdfs.server.namenode.ha.TestHAMetrics 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 
   hadoop.hdfs.server.namenode.ha.TestQuotasWithHA 
   hadoop.hdfs.server.blockmanagement.TestSequentialBlockGroupId 
   hadoop.hdfs.TestDatanodeReport 
   hadoop.cli.TestCacheAdminCLI 
   hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir 
   hadoop.hdfs.server.namenode.TestQuotaWithStripedBlocks 
   hadoop.hdfs.server.namenode.TestMetadataVersionOutput 
   hadoop.hdfs.TestHdfsAdmin 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.web.TestWebHDFSXAttr 
   hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   hadoop.hdfs.server.namenode.TestSecureNameNode 
   hadoop.hdfs.server.namenode.TestINodeAttributeProvider 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 
   

[jira] [Created] (HDFS-12912) [READ] Fix configuration and implementation of LevelDB-based alias maps

2017-12-08 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-12912:
-

 Summary: [READ] Fix configuration and implementation of 
LevelDB-based alias maps
 Key: HDFS-12912
 URL: https://issues.apache.org/jira/browse/HDFS-12912
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti


{{LevelDBFileRegionAliasMap}} fails to create the leveldb store if the 
directory is absent.
{{InMemoryAliasMap}} does not support reading from a leveldb-based alias map 
created by {{LevelDBFileRegionAliasMap}} with the block id configured. 
Further, the configuration for these alias maps must be specified using local 
paths, not as URIs, as currently shown in the documentation 
({{HdfsProvidedStorage.md}}).

This JIRA is to fix these issues.
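A minimal, JDK-only sketch of the first fix described above: create the store directory (and missing parents) if it is absent before opening, instead of failing. The class and method names here are illustrative, not the actual LevelDBFileRegionAliasMap code.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class StoreDirExample {
    // Open (or create) a store directory, creating missing parents
    // instead of failing when the directory is absent.
    static Path openStoreDir(Path dir) throws IOException {
        if (!Files.exists(dir)) {
            Files.createDirectories(dir); // create the store dir if absent
        }
        if (!Files.isDirectory(dir)) {
            throw new IOException(dir + " exists but is not a directory");
        }
        return dir;
    }

    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("aliasmap");
        // A store path whose parent does not exist yet.
        Path store = base.resolve("missing").resolve("leveldb-store");
        openStoreDir(store); // previously this kind of path would fail
        System.out.println(Files.isDirectory(store)); // prints "true"
    }
}
```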



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12911) [SPS]: Fix review comments from discussions in HDFS-10285

2017-12-08 Thread Uma Maheswara Rao G (JIRA)
Uma Maheswara Rao G created HDFS-12911:
--

 Summary: [SPS]: Fix review comments from discussions in HDFS-10285
 Key: HDFS-12911
 URL: https://issues.apache.org/jira/browse/HDFS-12911
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Uma Maheswara Rao G
Assignee: Rakesh R


This is the JIRA for tracking the possible improvements and issues discussed in 
the main JIRA (HDFS-10285).

So far, from Daryn:
  1. The lock should not be kept while executing the placement policy.
  2. While starting up the NN, the SPS Xattr checks happen even if the feature 
is disabled. This could potentially impact startup speed.

I am adding one more possible improvement, to reduce Xattr objects 
significantly. The SPS Xattr is a constant object, so we can create one 
deduplicated Xattr object statically and reuse the same object reference 
whenever the SPS Xattr needs to be added to an Inode. The additional bytes 
required for storing the SPS Xattr would then be the same as a single object 
reference (i.e., 4 bytes on a 32-bit JVM), so the Xattr overhead should come 
down significantly, IMO. Let's explore the feasibility of this option.

The XAttr list Feature will not be specially created for SPS; that list would 
already have been created by SetStoragePolicy on the same directory. So there 
is no extra Feature creation because of SPS alone.
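The deduplication idea can be sketched outside HDFS with plain Java. The names here (XAttr, SPS_XATTR, addSpsXAttr) are illustrative stand-ins, not the actual INode/XAttr internals: because the xattr is constant, every inode shares one statically created instance, and the per-inode cost is a single reference.

```java
import java.util.ArrayList;
import java.util.List;

public class XAttrDedupExample {
    // Illustrative stand-in for an immutable xattr (name + value).
    static final class XAttr {
        final String name;
        final byte[] value;
        XAttr(String name, byte[] value) { this.name = name; this.value = value; }
    }

    // One shared, statically created instance: the deduplication object.
    static final XAttr SPS_XATTR = new XAttr("system.sps", new byte[0]);

    // Adding the SPS xattr to an inode's list stores only a reference,
    // not a freshly allocated object per inode.
    static List<XAttr> addSpsXAttr(List<XAttr> inodeXAttrs) {
        inodeXAttrs.add(SPS_XATTR);
        return inodeXAttrs;
    }

    public static void main(String[] args) {
        List<XAttr> inodeA = addSpsXAttr(new ArrayList<>());
        List<XAttr> inodeB = addSpsXAttr(new ArrayList<>());
        // Both "inodes" reference the same object, so the per-inode
        // overhead is one pointer rather than one object.
        System.out.println(inodeA.get(0) == inodeB.get(0)); // prints "true"
    }
}
```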






[VOTE] Merge HDFS-9806 to trunk

2017-12-08 Thread Chris Douglas
Discussion thread: https://s.apache.org/kxT1

We're down to the last few issues and are preparing the branch to
merge to trunk. We'll post merge patches to HDFS-9806 [1]. Minor,
"cleanup" tasks (checkstyle, findbugs, naming, etc.) will be tracked
in HDFS-12712 [2].

We've tried to ensure that when this feature is disabled, HDFS is
unaffected. For those reviewing this, please look for places where
this might add overheads and we'll address them before the merge. The
site documentation [3] and design doc [4] should be up to date and
sufficient to try this out. Again, please point out where it is
unclear and we can address it.

This has been a long effort and we're grateful for the support we've
received from the community. In particular, thanks to Íñigo Goiri,
Andrew Wang, Anu Engineer, Steve Loughran, Sean Mackrory, Lukas
Majercak, Uma Gunuganti, Kai Zheng, Rakesh Radhakrishnan, Sriram Rao,
Lei Xu, Zhe Zhang, Jing Zhao, Bharat Viswanadham, ATM, Chris Nauroth,
Sanjay Radia, Atul Sikaria, and Peng Li for all your input into the
design, testing, and review of this feature.

The vote will close no earlier than one week from today, 12/15. -C

[1]: https://issues.apache.org/jira/browse/HDFS-9806
[2]: https://issues.apache.org/jira/browse/HDFS-12712
[3]: 
https://github.com/apache/hadoop/blob/HDFS-9806/hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HdfsProvidedStorage.md
[4]: 
https://issues.apache.org/jira/secure/attachment/12875791/HDFS-9806-design.002.pdf




[jira] [Resolved] (HDFS-10262) Change HdfsFileStatus::fileId to an opaque identifier

2017-12-08 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas resolved HDFS-10262.
--
Resolution: Duplicate

Fixed in HDFS-7878

> Change HdfsFileStatus::fileId to an opaque identifier
> -
>
> Key: HDFS-10262
> URL: https://issues.apache.org/jira/browse/HDFS-10262
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, webhdfs
>Reporter: Chris Douglas
>
> HDFS exposes the INode ID as a long via HdfsFileStatus::getFileId. Since 
> equality is the only valid client operation (sequential/monotonically 
> increasing ids are not guaranteed in any spec; leases do not rely on any 
> other property), this identifier can be opaque instead of assigning it a 
> primitive type in HdfsFileStatus.
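The change described in the quoted text can be sketched as follows (illustrative only, not the actual HdfsFileStatus API): wrap the long in a final class that exposes only equality, so clients cannot depend on ordering, arithmetic, or monotonicity of the underlying value.

```java
public final class OpaqueFileId {
    private final long id; // hidden; never exposed as a primitive

    public OpaqueFileId(long id) { this.id = id; }

    // Equality is the only supported client operation.
    @Override public boolean equals(Object o) {
        return o instanceof OpaqueFileId && ((OpaqueFileId) o).id == id;
    }
    @Override public int hashCode() { return Long.hashCode(id); }
    @Override public String toString() { return "fileId-" + Long.toHexString(id); }

    public static void main(String[] args) {
        OpaqueFileId a = new OpaqueFileId(42L);
        OpaqueFileId b = new OpaqueFileId(42L);
        // Clients can compare identifiers, but cannot order or add them.
        System.out.println(a.equals(b)); // prints "true"
    }
}
```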






[VOTE] Release Apache Hadoop 3.0.0 RC1

2017-12-08 Thread Andrew Wang
Hi all,

Let me start, as always, by thanking the efforts of all the contributors
who contributed to this release, especially those who jumped on the issues
found in RC0.

I've prepared RC1 for Apache Hadoop 3.0.0. This release incorporates 302
fixed JIRAs since the previous 3.0.0-beta1 release.

You can find the artifacts here:

http://home.apache.org/~wang/3.0.0-RC1/

I've done the traditional testing of building from the source tarball and
running a Pi job on a single node cluster. I also verified that the shaded
jars are not empty.

I found one issue: create-release (probably due to the mvn deploy change)
didn't sign the artifacts, but I fixed that by calling mvn one more time.
The signed artifacts are available here:

https://repository.apache.org/content/repositories/orgapachehadoop-1075/

This vote will run the standard 5 days, closing on Dec 13th at 12:31pm
Pacific. My +1 to start.

Best,
Andrew


Re: [VOTE] Release Apache Hadoop 2.7.5 (RC1)

2017-12-08 Thread Erik Krogen
+1 (non-binding)
• Verified that the missed JIRAs now show up in the release notes
• Clicked around the documentation included in the bin tarball
• Verified hashes and signatures for the bin and src tarball
• Built from source on RHEL 6.6
• Ran small HDFS cluster, executed basic commands, poked around NN web UI
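The hash-verification step in the checklist above can be sketched with the JDK alone (file contents and names here are illustrative, not the actual release artifacts): compute the hex SHA-256 of the tarball and compare it with the published digest.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class VerifyDigest {
    // Compute the hex SHA-256 of a file, for comparison against the
    // digest published alongside the release tarball.
    static String sha256Hex(Path file) throws Exception {
        byte[] data = Files.readAllBytes(file);
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for a downloaded tarball.
        Path f = Files.createTempFile("release", ".tar.gz");
        Files.write(f, "hello".getBytes("UTF-8"));
        System.out.println(sha256Hex(f));
        // "hello" hashes to
        // 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
    }
}
```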

On 12/7/17, 7:22 PM, "Konstantin Shvachko"  wrote:

Hi everybody,

I updated CHANGES.txt and fixed documentation links.
Also committed MAPREDUCE-6165, which fixes a consistently failing test.

This is RC1 for the next dot release of the Apache Hadoop 2.7 line. The
previous one, 2.7.4, was released on August 4, 2017.
Release 2.7.5 includes critical bug fixes and optimizations. See more
details in the Release Note:
http://home.apache.org/~shv/hadoop-2.7.5-RC1/releasenotes.html

RC1 is available at: http://home.apache.org/~shv/hadoop-2.7.5-RC1/

Please give it a try and vote on this thread. The vote will run for 5 days
ending 12/13/2017.

My up to date public key is available from:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

Thanks,
--Konstantin




Re: [VOTE] Release Apache Hadoop 3.0.0 RC0

2017-12-08 Thread Andrew Wang
FYI that we got our last blocker in today, so I'm currently rolling RC1.
Stay tuned!

On Thu, Nov 30, 2017 at 8:32 AM, Allen Wittenauer 
wrote:

>
> > On Nov 30, 2017, at 1:07 AM, Rohith Sharma K S <
> rohithsharm...@apache.org> wrote:
> >
> >
> > > If ATSv1 isn’t replaced by ATSv2, then why is it marked deprecated?
> > Ideally it should not be. Can you point out where it is marked as
> > deprecated? If it is in the historyserver daemon start, that change was
> > made very long back when the timeline server was added.
>
>
> Ahh, I see where all the problems lie.  No one is paying attention to the
> deprecation message because it’s kind of oddly worded:
>
> * It really means “don’t use ‘yarn historyserver’ use ‘yarn
> timelineserver’ ”
> * ‘yarn historyserver’ was removed from the documentation in 2.7.0
> * ‘yarn historyserver’ doesn’t appear in the yarn usage output
> * ‘yarn timelineserver’ runs the exact same class
>
> There’s no reason for ‘yarn historyserver’ to exist in 3.x.  Just run
> ‘yarn timelineserver’ instead.
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


[jira] [Created] (HDFS-12910) Secure Datanode Starter should log the port when it

2017-12-08 Thread Stephen O'Donnell (JIRA)
Stephen O'Donnell created HDFS-12910:


 Summary: Secure Datanode Starter should log the port when it 
 Key: HDFS-12910
 URL: https://issues.apache.org/jira/browse/HDFS-12910
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 3.1.0
Reporter: Stephen O'Donnell
Priority: Minor


When running a secure Datanode, the default ports it uses are 1004 and 1006. 
Sometimes other OS services can start on these ports, causing the DN to fail to 
start (e.g., the NFS service can use random ports under 1024).

When this happens, an error is logged by jsvc, but it is confusing as it does 
not tell you which port it is having issues binding to. For example, when port 
1004 is used by another process:

{code}
Initializing secure datanode resources
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:105)
at 
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
Cannot load daemon
Service exit with a return value of 3
{code}

And when port 1006 is used:

{code}
Opened streaming server at /0.0.0.0:1004
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:67)
at 
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.getSecureResources(SecureDataNodeStarter.java:129)
at 
org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.init(SecureDataNodeStarter.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)
Cannot load daemon
Service exit with a return value of 3
{code}

We should catch the BindException, log the problem address:port, and then 
re-throw the exception to make the cause clearer.

I will upload a patch for this.
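A JDK-only sketch of the proposed fix (not the actual SecureDataNodeStarter code, and the helper name is illustrative): catch the BindException, wrap it with the address and port, and re-throw so the log names the conflicting port.

```java
import java.io.IOException;
import java.net.BindException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class BindWithContext {
    // Bind a server channel; on failure, re-throw a BindException that
    // names the address:port so the log points at the real conflict.
    static ServerSocketChannel bind(InetSocketAddress addr) throws IOException {
        ServerSocketChannel ch = ServerSocketChannel.open();
        try {
            ch.bind(addr);
            return ch;
        } catch (BindException e) {
            ch.close();
            BindException wrapped = new BindException(
                "Problem binding to " + addr + " : " + e.getMessage());
            wrapped.initCause(e); // keep the original for the stack trace
            throw wrapped;
        }
    }

    public static void main(String[] args) throws IOException {
        // Bind once on an ephemeral port, then reuse that port to force the error.
        ServerSocketChannel first = bind(new InetSocketAddress("127.0.0.1", 0));
        int port = ((InetSocketAddress) first.getLocalAddress()).getPort();
        try {
            bind(new InetSocketAddress("127.0.0.1", port));
        } catch (BindException e) {
            // The message now names the conflicting address:port.
            System.out.println(e.getMessage().contains(String.valueOf(port)));
        }
        first.close();
    }
}
```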






[jira] [Created] (HDFS-12908) Ozone: write chunk call fails because of Metrics registry exception

2017-12-08 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-12908:


 Summary: Ozone: write chunk call fails because of Metrics registry 
exception
 Key: HDFS-12908
 URL: https://issues.apache.org/jira/browse/HDFS-12908
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: HDFS-7240


The write chunk call fails because of a metrics registration exception.

{code}
2017-12-08 04:02:19,894 WARN org.apache.hadoop.metrics2.util.MBeans: Error 
creating MBean object name: 
Hadoop:service=Ozone,name=RocksDbStore,dbName=container.db
org.apache.hadoop.metrics2.MetricsException: 
org.apache.hadoop.metrics2.MetricsException: 
Hadoop:service=Ozone,name=RocksDbStore,dbName=container.db already exists!
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newObjectName(DefaultMetricsSystem.java:135)
at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newMBeanName(DefaultMetricsSystem.java:110)
at org.apache.hadoop.metrics2.util.MBeans.getMBeanName(MBeans.java:155)
at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:87)
at org.apache.hadoop.utils.RocksDBStore.(RocksDBStore.java:77)
at 
org.apache.hadoop.utils.MetadataStoreBuilder.build(MetadataStoreBuilder.java:115)
at 
org.apache.hadoop.ozone.container.common.utils.ContainerCache.getDB(ContainerCache.java:138)
at 
org.apache.hadoop.ozone.container.common.helpers.KeyUtils.getDB(KeyUtils.java:65)
at 
org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.readContainerInfo(ContainerManagerImpl.java:261)
at 
org.apache.hadoop.ozone.container.common.impl.ContainerManagerImpl.createContainer(ContainerManagerImpl.java:330)
at 
org.apache.hadoop.ozone.container.common.impl.Dispatcher.handleCreateContainer(Dispatcher.java:399)
at 
org.apache.hadoop.ozone.container.common.impl.Dispatcher.containerProcessHandler(Dispatcher.java:158)
at 
org.apache.hadoop.ozone.container.common.impl.Dispatcher.dispatch(Dispatcher.java:105)
at 
org.apache.hadoop.ozone.container.common.transport.server.XceiverServerHandler.channelRead0(XceiverServerHandler.java:61)
at 
org.apache.hadoop.ozone.container.common.transport.server.XceiverServerHandler.channelRead0(XceiverServerHandler.java:32)
at 
io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at 
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at 
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:312)
at 
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:286)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at 
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1302)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
at 
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:646)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:581)
at 
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:460)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at 
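A JDK-only sketch of one common defensive pattern for the "already exists" failure above (illustrative; not necessarily how Ozone resolves it, and the bean names are made up): unregister any stale MBean of the same name before registering, so repeated registration does not throw.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class SafeMBeanRegister {
    // Minimal MXBean used only to demonstrate registration.
    public interface CounterMXBean { long getCount(); }
    public static class Counter implements CounterMXBean {
        public long getCount() { return 0; }
    }

    // Register an MBean, replacing any existing registration of the same
    // name instead of throwing InstanceAlreadyExistsException.
    static void registerReplacing(ObjectName name, Object bean) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        if (server.isRegistered(name)) {
            server.unregisterMBean(name); // drop the stale registration
        }
        server.registerMBean(bean, name);
    }

    public static void main(String[] args) throws Exception {
        ObjectName name = new ObjectName(
            "Example:service=Ozone,name=RocksDbStore,dbName=container.db");
        registerReplacing(name, new Counter());
        registerReplacing(name, new Counter()); // second call no longer fails
        System.out.println(ManagementFactory.getPlatformMBeanServer()
            .isRegistered(name)); // prints "true"
    }
}
```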

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2017-12-08 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/62/

[Dec 7, 2017 4:24:45 AM] (aajisaka) HDFS-12889. Addendum patch to add missing 
file.
[Dec 7, 2017 8:43:21 AM] (cdouglas) HDFS-11576. Block recovery will fail 
indefinitely if recovery time >




-1 overall


The following subsystems voted -1:
asflicense unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Unreaped Processes :

   hadoop-common:1 
   hadoop-hdfs:16 
   bkjournal:5 
   hadoop-yarn-server-resourcemanager:1 
   hadoop-yarn-client:8 
   hadoop-yarn-applications-distributedshell:1 
   hadoop-mapreduce-client-jobclient:12 
   hadoop-distcp:3 

Failed junit tests :

   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
   hadoop.yarn.applications.distributedshell.TestDistributedShellWithNodeLabels 
   hadoop.mapreduce.security.ssl.TestEncryptedShuffle 
   hadoop.mapreduce.security.TestMRCredentials 
   hadoop.tools.TestIntegration 
   hadoop.tools.TestDistCpViewFs 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.resourceestimator.service.TestResourceEstimatorService 

Timed out junit tests :

   org.apache.hadoop.log.TestLogLevel 
   org.apache.hadoop.hdfs.TestLeaseRecovery2 
   org.apache.hadoop.security.TestPermission 
   org.apache.hadoop.hdfs.web.TestWebHdfsTokens 
   org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream 
   org.apache.hadoop.hdfs.TestDatanodeLayoutUpgrade 
   org.apache.hadoop.hdfs.TestFileAppendRestart 
   org.apache.hadoop.hdfs.security.TestDelegationToken 
   org.apache.hadoop.hdfs.TestDFSMkdirs 
   org.apache.hadoop.hdfs.TestDFSOutputStream 
   org.apache.hadoop.hdfs.web.TestWebHDFS 
   org.apache.hadoop.hdfs.web.TestWebHDFSXAttr 
   org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes 
   org.apache.hadoop.metrics2.sink.TestRollingFileSystemSinkWithHdfs 
   org.apache.hadoop.hdfs.TestDistributedFileSystem 
   org.apache.hadoop.hdfs.TestReplaceDatanodeFailureReplication 
   org.apache.hadoop.hdfs.TestDFSShell 
   org.apache.hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperAsHASharedDir 
   org.apache.hadoop.contrib.bkjournal.TestBookKeeperSpeculativeRead 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService 
   org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy 
   org.apache.hadoop.yarn.client.TestRMFailover 
   org.apache.hadoop.yarn.client.TestApplicationMasterServiceProtocolOnHA 
   org.apache.hadoop.yarn.client.TestApplicationClientProtocolOnHA 
   org.apache.hadoop.yarn.client.api.impl.TestYarnClientWithReservation 
   org.apache.hadoop.yarn.client.api.impl.TestYarnClient 
   org.apache.hadoop.yarn.client.api.impl.TestAMRMClient 
   org.apache.hadoop.yarn.client.api.impl.TestNMClient 
   org.apache.hadoop.yarn.applications.distributedshell.TestDistributedShell 
   org.apache.hadoop.fs.TestFileSystem 
   org.apache.hadoop.mapred.TestMiniMRClasspath 
   org.apache.hadoop.mapred.TestClusterMapReduceTestCase 
   org.apache.hadoop.mapred.TestMRIntermediateDataEncryption 
   org.apache.hadoop.mapred.TestJobSysDirWithDFS 
   org.apache.hadoop.mapreduce.security.TestBinaryTokenFile 
   org.apache.hadoop.mapred.TestMRTimelineEventHandling 
   org.apache.hadoop.mapred.join.TestDatamerge 
   org.apache.hadoop.mapred.TestReduceFetch 
   org.apache.hadoop.conf.TestNoDefaultsJobConf 
   org.apache.hadoop.tools.TestDistCpSync 
   org.apache.hadoop.tools.TestDistCpSyncReverseFromTarget 
   org.apache.hadoop.tools.TestDistCpSyncReverseFromSource 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/62/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/62/artifact/out/diff-compile-javac-root.txt
  [324K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/62/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/62/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/62/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs: