[jira] [Created] (HADOOP-15401) ConcurrentModificationException on Subject.getPrivateCredentials in UGI constructor
Xiao Chen created HADOOP-15401:
----------------------------------

             Summary: ConcurrentModificationException on Subject.getPrivateCredentials in UGI constructor
                 Key: HADOOP-15401
                 URL: https://issues.apache.org/jira/browse/HADOOP-15401
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Xiao Chen

Seen a recent exception from KMS client provider as follows:
{noformat}
java.io.IOException: java.util.ConcurrentModificationException
	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:488)
	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:776)
	at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:287)
	at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider$5.call(LoadBalancingKMSClientProvider.java:283)
	at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.doOp(LoadBalancingKMSClientProvider.java:123)
	at org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:283)
	at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
	at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:927)
	at org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:946)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:316)
	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:311)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:323)
Caused by: java.util.ConcurrentModificationException
	at java.util.LinkedList$ListItr.checkForComodification(LinkedList.java:966)
	at java.util.LinkedList$ListItr.next(LinkedList.java:888)
	at javax.security.auth.Subject$SecureSet$1.next(Subject.java:1070)
	at javax.security.auth.Subject$ClassSet$1.run(Subject.java:1401)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject$ClassSet.populateSet(Subject.java:1399)
	at javax.security.auth.Subject$ClassSet.<init>(Subject.java:1372)
	at javax.security.auth.Subject.getPrivateCredentials(Subject.java:767)
	at org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosKeyTab(KerberosUtil.java:267)
	at org.apache.hadoop.security.UserGroupInformation.<init>(UserGroupInformation.java:715)
	at org.apache.hadoop.security.UserGroupInformation.<init>(UserGroupInformation.java:701)
	at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:742)
	at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:141)
	at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:348)
	at org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:333)
	at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:477)
	at org.apache.hadoop.crypto.key.kms.KMSClientProvider$1.run(KMSClientProvider.java:472)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
	at org.apache.hadoop.crypto.key.kms.KMSClientProvider.createConnection(KMSClientProvider.java:471)
	... 12 more
{noformat}

It looks like we have run into a race inside the JDK's Subject class. Found https://bugs.openjdk.java.net/browse/JDK-4892913, but that jira was created before Hadoop. [~daryn], any thoughts on this?
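The "Caused by" frames show one thread iterating the Subject's private credential list while another thread mutates it. The comodification itself can be reproduced in miniature, outside Hadoop and even single-threaded: iterating a {{LinkedList}} (the collection type named in the trace) while it is structurally modified triggers the same fail-fast check. This is an illustrative sketch, not Hadoop or JDK code; the class and string values are made up for the demo.

```java
import java.util.ConcurrentModificationException;
import java.util.LinkedList;
import java.util.List;

public class ComodificationDemo {
    public static void main(String[] args) {
        // Stand-in for Subject's credential collection.
        List<String> creds = new LinkedList<>();
        creds.add("keytab-credential");
        creds.add("tgt-credential");

        boolean caught = false;
        try {
            for (String c : creds) {
                // Simulates another thread adding a credential mid-iteration:
                // the iterator's next() detects the modCount change and throws.
                creds.add("new-credential");
            }
        } catch (ConcurrentModificationException e) {
            caught = true;
        }
        System.out.println("caught=" + caught);
    }
}
```

In the UGI case the mutation comes from a concurrent thread rather than the iterating one, which is why the failure is intermittent.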
(With all due respect, we have not seen this in versions without HADOOP-9747 yet)

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64
For more details, see https://builds.apache.org/job/hadoop-trunk-win/442/

[Apr 18, 2018 3:23:45 PM] (Bharat) HDFS-13464. Fix javadoc in FsVolumeList#handleVolumeFailures.
[Apr 18, 2018 11:35:38 PM] (aajisaka) HADOOP-15396. Some java source files are executable
[Apr 19, 2018 7:07:14 AM] (aajisaka) YARN-8169. Cleanup RackResolver.java

-1 overall

The following subsystems voted -1:
    compile mvninstall unit

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
    unit

Specific tests:

    Failed junit tests :

       hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec
       hadoop.fs.contract.rawlocal.TestRawlocalContractAppend
       hadoop.fs.TestFileUtil
       hadoop.fs.TestFsShellCopy
       hadoop.fs.TestFsShellList
       hadoop.fs.TestLocalFileSystem
       hadoop.fs.TestRawLocalFileSystemContract
       hadoop.fs.TestTrash
       hadoop.http.TestHttpServer
       hadoop.http.TestHttpServerLogs
       hadoop.io.nativeio.TestNativeIO
       hadoop.ipc.TestIPC
       hadoop.ipc.TestSocketFactory
       hadoop.metrics2.impl.TestStatsDMetrics
       hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal
       hadoop.security.TestSecurityUtil
       hadoop.security.TestShellBasedUnixGroupsMapping
       hadoop.security.token.TestDtUtilShell
       hadoop.util.TestNativeCodeLoader
       hadoop.fs.TestResolveHdfsSymlink
       hadoop.hdfs.crypto.TestHdfsCryptoStreams
       hadoop.hdfs.qjournal.client.TestQuorumJournalManager
       hadoop.hdfs.qjournal.server.TestJournalNode
       hadoop.hdfs.qjournal.server.TestJournalNodeSync
       hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks
       hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages
       hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks
       hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
       hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
       hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles
       hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory
       hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy
       hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement
       hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
       hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter
       hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl
       hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation
       hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica
       hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage
       hadoop.hdfs.server.datanode.TestBlockRecovery
       hadoop.hdfs.server.datanode.TestBlockScanner
       hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics
       hadoop.hdfs.server.datanode.TestDataNodeFaultInjector
       hadoop.hdfs.server.datanode.TestDataNodeMetrics
       hadoop.hdfs.server.datanode.TestDataNodeUUID
       hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
       hadoop.hdfs.server.datanode.TestDirectoryScanner
       hadoop.hdfs.server.datanode.TestHSync
       hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame
       hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand
       hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC
       hadoop.hdfs.server.mover.TestMover
       hadoop.hdfs.server.mover.TestStorageMover
       hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
       hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
       hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics
       hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot
       hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot
       hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
       hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff
       hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport
       hadoop.hdfs.server.namenode.TestAddBlock
       hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands
       hadoop.hdfs.server.namenode.TestCheckpoint
       hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
       hadoop.hdfs.server.namenode.TestEditLogRace
       hadoop.hdfs.server.namenode.TestFileTruncate
       hadoop.hdfs.server.namenode.TestFsck
       hadoop.hdfs.server.namenode.TestFSImage
       hadoop.hdfs.server.namenode.TestFSImageWithSnapshot
       hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
       hadoop.hdfs.server.namenode.TestNameNodeMXBean
       hadoop.hdfs.server.namenode.TestNestedEncryptionZones
       hadoop.hdfs.server.namenode.TestQuotaByStorageType
       hadoop.hdfs.server.namenode.TestReencryptionHandler
       hadoop.hdfs.server.namenode.TestStartup
[jira] [Created] (HADOOP-15400) Improve S3Guard documentation on Authoritative Mode implementation
Aaron Fabbri created HADOOP-15400:
-------------------------------------

             Summary: Improve S3Guard documentation on Authoritative Mode implementation
                 Key: HADOOP-15400
                 URL: https://issues.apache.org/jira/browse/HADOOP-15400
             Project: Hadoop Common
          Issue Type: Improvement
          Components: fs/s3
    Affects Versions: 3.0.1
            Reporter: Aaron Fabbri

Part of the design of S3Guard is support for skipping the call to S3 listObjects and serving directory listings out of the MetadataStore under certain circumstances. This feature is called "authoritative" mode.

I've talked to many people about this feature and it seems to be universally confusing. I suggest we improve / add a section to the s3guard.md site docs elaborating on what Authoritative Mode is.

It is *not* treating the MetadataStore (e.g. dynamodb) as the source of truth in general.

It *is* the ability to short-circuit S3 list objects and serve listings from the MetadataStore in some circumstances.

For S3A to skip S3's list objects on some *path*, and serve it directly from the MetadataStore, the following things must all be true:
# The MetadataStore implementation persists the bit {{DirListingMetadata.isAuthoritative}} set when calling {{MetadataStore#put(DirListingMetadata)}}.
# The S3A client is configured to allow the MetadataStore to be the authoritative source of a directory listing ({{fs.s3a.metadatastore.authoritative=true}}).
# The MetadataStore has a full listing for *path* stored in it. This only happens if the FS client (s3a) has explicitly stored a full directory listing with {{DirListingMetadata.isAuthoritative=true}} before the said listing request happens.

Note that #1 currently only happens in LocalMetadataStore. Adding support to DynamoDBMetadataStore is covered in HADOOP-14154.

Also, the multiple uses of the word "authoritative" are confusing. Two meanings are used:
1. In the FS client configuration {{fs.s3a.metadatastore.authoritative}}: this governs the behavior of S3A code (not the MetadataStore) - "S3A is allowed to skip S3.list() when it has a full listing from the MetadataStore".
2. In the MetadataStore: when storing a dir listing, it can set the bit {{isAuthoritative}}; 1 means "full contents of directory", 0 means "may not be a full listing". Note that a MetadataStore *MAY* persist this bit (not *MUST*).

We should probably rename {{DirListingMetadata.isAuthoritative}} to {{.fullListing}}, or at least put a comment where it is used to clarify its meaning.
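The three conditions above combine with a simple AND. A minimal sketch of that decision, purely illustrative (this is not the actual S3A listing code; the class and method names here are hypothetical):

```java
// Hypothetical model of the "may S3A skip S3 listObjects?" decision.
// All names here are invented for illustration; only the three conditions
// come from the JIRA description above.
public class AuthoritativeModeSketch {

    static boolean canSkipS3List(boolean storePersistedAuthBit,     // condition 1
                                 boolean clientAllowsAuthoritative, // condition 2: fs.s3a.metadatastore.authoritative
                                 boolean fullListingStored) {       // condition 3
        // Only when every condition holds may the listing be served
        // from the MetadataStore without calling S3.
        return storePersistedAuthBit && clientAllowsAuthoritative && fullListingStored;
    }

    public static void main(String[] args) {
        System.out.println("all-conditions=" + canSkipS3List(true, true, true));
        // Client config off: S3 must still be consulted even if the store
        // has an authoritative full listing.
        System.out.println("client-off=" + canSkipS3List(true, false, true));
    }
}
```

This also illustrates why the two meanings of "authoritative" are separable: condition 2 is a client-side switch, while conditions 1 and 3 are properties of the store and the stored listing.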
Re: [VOTE] Release Apache Hadoop 2.9.1 (RC0)
Hi Chen,

I am so sorry to bring this up now, but there are 16 tests failing in the hadoop-distcp project. I have opened a ticket and cc'ed Junping since he is a branch-2.8 committer, but I forgot to ping you. IMHO we should fix the unit tests before we release, but I would leave it up to the other members to give their opinion.
Apache Hadoop 3.1.1 release plan
Hi All,

We released Apache Hadoop 3.1.0 on Apr 06. To further improve the quality of the release, we plan to release 3.1.1 on May 06. The focus of 3.1.1 will be fixing blockers / critical bugs and other enhancements. So far there are 100 JIRAs [1] that have their fix version set to 3.1.1.

We plan to cut branch-3.1.1 on May 01 and vote for an RC on the same day.

Please feel free to share your insights.

Thanks,
Wangda Tan

[1] project in (YARN, "Hadoop HDFS", "Hadoop Common", "Hadoop Map/Reduce") AND fixVersion = 3.1.1
[VOTE] Release Apache Hadoop 2.9.1 (RC0)
Hi all,

This is the first dot release of the Apache Hadoop 2.9 line since 2.9.0 was released on November 17, 2017. It includes 208 changes. Among them are 9 blockers and 15 critical issues; the rest are normal bug fixes and feature improvements. Thanks to the many who contributed to the 2.9.1 development.

The artifacts are available here:
https://dist.apache.org/repos/dist/dev/hadoop/2.9.1-RC0/

The RC tag in git is release-2.9.1-RC0. The last git commit SHA is e30710aea4e6e55e69372929106cf119af06fd0e.

The maven artifacts are available at:
https://repository.apache.org/content/repositories/orgapachehadoop-1115/

My public key is available from:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

Please try the release and vote; the vote will run for the usual 5 days, ending on 4/25/2018 PST.

I would also like to thank Lei (Eddy) Xu and Chris Douglas for their help during the RC preparation.

Bests,
Sammi Chen