Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/197/

[Jul 7, 2020 1:50:03 AM] (noreply) HDFS-15449. Optionally ignore port number in mount-table name when picking from initialized uri. Contributed by Uma Maheswara Rao G.
[Jul 7, 2020 2:01:46 AM] (noreply) HDFS-15312. Apply umask when creating directory by WebHDFS (#2096)
[Jul 7, 2020 11:40:59 AM] (pjoseph) YARN-10337. Fix failing testcase TestRMHATimelineCollectors.
[Jul 7, 2020 4:02:39 PM] (Xiaoqiao He) HDFS-15425. Review Logging of DFSClient. Contributed by Hongbing Wang.

[Error replacing 'FILE' - Workspace is not accessible]

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-15761) intermittent failure of TestAbfsClient.validateUserAgent
[ https://issues.apache.org/jira/browse/HADOOP-15761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bilahari T H resolved HADOOP-15761.
-----------------------------------
    Resolution: Won't Fix

Please see [HADOOP-16922|https://issues.apache.org/jira/browse/HADOOP-16922]

> intermittent failure of TestAbfsClient.validateUserAgent
> --------------------------------------------------------
>
>                 Key: HADOOP-15761
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15761
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure, test
>    Affects Versions: HADOOP-15407
>         Environment: test suites run from IntelliJ IDEA
>            Reporter: Steve Loughran
>            Assignee: Bilahari T H
>            Priority: Minor
>
> (seemingly intermittent) failure of the pattern matcher in
> {{TestAbfsClient.validateUserAgent}}
> {code}
> java.lang.AssertionError: User agent Azure Blob FS/1.0 (JavaJRE 1.8.0_121; MacOSX 10.13.6; openssl-1.0) Partner Service does not match regexp Azure Blob FS\/1.0 \(JavaJRE ([^\)]+) SunJSSE-1.8\) Partner Service
> {code}
> Using a regexp is probably too brittle here: safest just to look for some specific substring.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
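The substring approach suggested at the end of the issue can be sketched as follows. This is an illustrative example, not the actual test code: the class and method names below are invented, and only the regexp is taken from the error message above.

```java
import java.util.regex.Pattern;

// Sketch of why the regex assertion is brittle and what a substring check
// looks like instead. Class/method names here are illustrative only.
public class UserAgentCheck {

    // Brittle: hard-codes the JSSE provider ("SunJSSE-1.8"), so a JRE built
    // against openssl produces a user agent that fails the full match.
    static boolean matchesBrittleRegex(String userAgent) {
        Pattern p = Pattern.compile(
            "Azure Blob FS/1\\.0 \\(JavaJRE ([^)]+) SunJSSE-1\\.8\\) Partner Service");
        return p.matcher(userAgent).matches();
    }

    // Safer: assert only on the stable substrings the test actually cares about.
    static boolean containsExpectedParts(String userAgent) {
        return userAgent.contains("Azure Blob FS/1.0")
            && userAgent.contains("Partner Service");
    }

    public static void main(String[] args) {
        // The user agent from the failure message above (openssl, not SunJSSE).
        String ua = "Azure Blob FS/1.0 (JavaJRE 1.8.0_121; "
            + "MacOSX 10.13.6; openssl-1.0) Partner Service";
        System.out.println(matchesBrittleRegex(ua));   // false: regex rejects it
        System.out.println(containsExpectedParts(ua)); // true: substrings pass
    }
}
```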
Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/741/

No changes

-1 overall

The following subsystems voted -1:
    docker

Powered by Apache Yetus   https://yetus.apache.org
[jira] [Created] (HADOOP-17116) Skip Retry INFO logging on first failover from a proxy
Hanisha Koneru created HADOOP-17116:
---------------------------------------

             Summary: Skip Retry INFO logging on first failover from a proxy
                 Key: HADOOP-17116
                 URL: https://issues.apache.org/jira/browse/HADOOP-17116
             Project: Hadoop Common
          Issue Type: Task
            Reporter: Hanisha Koneru
            Assignee: Hanisha Koneru


RetryInvocationHandler logs an INFO-level message on every failover except the first. This was reasonable when there were only two proxies in the FailoverProxyProvider, but with more than two proxies (as is possible with three or more NameNodes in HA), more than one failover may be needed to find the currently active proxy. To avoid creating noise in client logs/consoles, RetryInvocationHandler should skip logging once for each proxy.
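One way the proposed behavior could work is to track which proxies have already had a failover logged. This is a minimal sketch of the idea, not the actual RetryInvocationHandler code; the class and method names are invented for illustration.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only (not Hadoop's RetryInvocationHandler): suppress the
// INFO message the first time we fail over from each proxy, instead of
// suppressing only the very first failover overall.
public class FailoverLogging {
    private final Set<String> proxiesFailedOverFrom = new HashSet<>();

    /** Returns true if an INFO message should be logged for this failover. */
    boolean shouldLogFailover(String proxyId) {
        // Set.add() returns false when the proxy was already seen, so the
        // first failover from each proxy is skipped and repeats are logged.
        return !proxiesFailedOverFrom.add(proxyId);
    }
}
```

With three NameNodes, the initial probe of nn1 and nn2 stays quiet, while a genuine second failover from the same proxy would still be logged.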
A more inclusive elephant...
Hello Folks,

I hope you are all doing well...

*The problem*

The recent protests made me realize that we are not just bystanders of the systemic racism that affects our society; we are active participants in it. Being "non-racist" is not enough. I strongly feel we should be actively "anti-racist" in our day-to-day lives, and continuously check our biases. I assume most of you will agree with the general sentiment, but depending on your exposure to the recent events and to US culture/history, you might have more or less strong feelings about your role in the problem and the potential solution.

*What can we do about it?*

I think a simple action we can take is to work on our code/comments/documentation/websites and remove racist terminology. Here is an IETF draft to fix up some of the most egregious examples (master/slave, whitelist/blacklist) with proposed alternatives:
https://tools.ietf.org/id/draft-knodel-terminology-00.html#rfc.section.1.1.1

As we go about this effort, we should also consider other non-inclusive terminology issues around gender (e.g., binary-gendered examples, "Alice" systematically doing the wrong security thing) and ableism (e.g., referring to misbehaving hardware as "lame" or "limping").

The easiest action item is to avoid such terminology going forward (ideally adding checks for it to checkstyle if possible); a more costly one is to start going back and refactoring away existing instances. I know this requires a bunch of work, as refactorings might break dev branches, non-committed patches, and possibly scripts, but I think this is something important and relatively simple we can do. The effect goes well beyond some text on GitHub: it signals what we believe in, and prompts hundreds of users and contributors to notice and think about it. Our force-multiplier is huge, and it matches our responsibility.

What do you folks think?

Thanks,
Carlo
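The "add it to checkstyle" idea could, for example, be sketched as a RegexpSingleline rule. The term list and message below are purely illustrative assumptions, not a vetted or agreed-upon configuration:

{code:xml}
<!-- Illustrative sketch only: a checkstyle module flagging non-inclusive
     terms in new code. The term list and message are example assumptions. -->
<module name="RegexpSingleline">
  <property name="format" value="(?i)\b(whitelist|blacklist)\b"/>
  <property name="message"
            value="Prefer inclusive terminology (e.g. allowlist/denylist); see draft-knodel-terminology"/>
  <property name="severity" value="warning"/>
</module>
{code}

Starting at "warning" severity would flag new occurrences without breaking existing builds while the back-catalog is refactored.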
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64
For more details, see https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/196/

[Jul 6, 2020 7:08:36 AM] (Akira Ajisaka) HADOOP-17111. Replace Guava Optional with Java8+ Optional. Contributed by Ahmed Hussein.
[Jul 6, 2020 3:25:42 PM] (noreply) HADOOP-17081. MetricsSystem doesn't start the sink adapters on restart (#2089)
[Jul 6, 2020 3:43:34 PM] (noreply) HDFS-15451. Do not discard non-initial block report for provided storage. (#2119). Contributed by Shanyu Zhao.
[Jul 6, 2020 11:17:09 PM] (noreply) HDFS-15417. RBF: Get the datanode report from cache for federation WebHDFS operations (#2080)

-1 overall

The following subsystems voted -1:
    asflicense findbugs pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

   XML :

      Parsing Error(s):
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
         hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

   findbugs :

      module:hadoop-yarn-project/hadoop-yarn
         Uncallable method org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance() defined in anonymous class At TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
         Dead store to entities in org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl) At TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl) At TestTimelineReaderHBaseDown.java:[line 190]

      module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server
         Uncallable method org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance() defined in anonymous class At TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
         Dead store to entities in org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl) At TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl) At TestTimelineReaderHBaseDown.java:[line 190]

      module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
         Uncallable method org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance() defined in anonymous class At TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
         Dead store to entities in org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl) At TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl) At TestTimelineReaderHBaseDown.java:[line 190]

      module:hadoop-yarn-project
         Uncallable method org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance() defined in anonymous class At TestTimelineReaderWebServicesHBaseStorage.java:anonymous class At TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
         Dead store to entities in org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl) At TestTimelineReaderHBaseDown.java:org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl) At TestTimelineReaderHBaseDown.java:[line 190]

      module:hadoop-cloud-storage-project/hadoop-cos
         org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal
[jira] [Resolved] (HADOOP-17058) Support for Appendblob in abfs driver
[ https://issues.apache.org/jira/browse/HADOOP-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ishani resolved HADOOP-17058.
-----------------------------
    Resolution: Fixed

> Support for Appendblob in abfs driver
> -------------------------------------
>
>                 Key: HADOOP-17058
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17058
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 3.3.0
>            Reporter: Ishani
>            Assignee: Ishani
>            Priority: Major
>
> add changes to support appendblob in the hadoop-azure abfs driver.
[jira] [Created] (HADOOP-17115) Replace Guava initialization of Sets.newHashSet
Ahmed Hussein created HADOOP-17115:
--------------------------------------

             Summary: Replace Guava initialization of Sets.newHashSet
                 Key: HADOOP-17115
                 URL: https://issues.apache.org/jira/browse/HADOOP-17115
             Project: Hadoop Common
          Issue Type: Sub-task
            Reporter: Ahmed Hussein


Unjustified usage of the Guava API to initialize a {{HashSet}}. This should be replaced by Java APIs.

{code:java}
Targets
    Occurrences of 'Sets.newHashSet' in project
Found Occurrences  (223 usages found)
    org.apache.hadoop.crypto.key  (2 usages found)
        TestValueQueue.java  (2 usages found)
            testWarmUp()  (2 usages found)
                106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
                107 Sets.newHashSet(fillInfos[0].key,
    org.apache.hadoop.crypto.key.kms  (6 usages found)
        TestLoadBalancingKMSClientProvider.java  (6 usages found)
            testCreation()  (6 usages found)
                86 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/"),
                87 Sets.newHashSet(providers[0].getKMSUrl()));
                95 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
                98 Sets.newHashSet(providers[0].getKMSUrl(),
                108 assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
                111 Sets.newHashSet(providers[0].getKMSUrl(),
    org.apache.hadoop.crypto.key.kms.server  (1 usage found)
        KMSAudit.java  (1 usage found)
            59 static final Set AGGREGATE_OPS_WHITELIST = Sets.newHashSet(
    org.apache.hadoop.fs.s3a  (1 usage found)
        TestS3AAWSCredentialsProvider.java  (1 usage found)
            testFallbackToDefaults()  (1 usage found)
                183 Sets.newHashSet());
    org.apache.hadoop.fs.s3a.auth  (1 usage found)
        AssumedRoleCredentialProvider.java  (1 usage found)
            AssumedRoleCredentialProvider(URI, Configuration)  (1 usage found)
                113 Sets.newHashSet(this.getClass()));
    org.apache.hadoop.fs.s3a.commit.integration  (1 usage found)
        ITestS3ACommitterMRJob.java  (1 usage found)
            test_200_execute()  (1 usage found)
                232 Set expectedKeys = Sets.newHashSet();
    org.apache.hadoop.fs.s3a.commit.staging  (5 usages found)
        TestStagingCommitter.java  (3 usages found)
            testSingleTaskMultiFileCommit()  (1 usage found)
                341 Set keys = Sets.newHashSet();
            runTasks(JobContext, int, int)  (1 usage found)
                603 Set uploads = Sets.newHashSet();
            commitTask(StagingCommitter, TaskAttemptContext, int)  (1 usage found)
                640 Set files = Sets.newHashSet();
        TestStagingPartitionedTaskCommit.java  (2 usages found)
            verifyFilesCreated(PartitionedStagingCommitter)  (1 usage found)
                148 Set files = Sets.newHashSet();
            buildExpectedList(StagingCommitter)  (1 usage found)
                188 Set expected = Sets.newHashSet();
    org.apache.hadoop.hdfs  (5 usages found)
        DFSUtil.java  (2 usages found)
            getNNServiceRpcAddressesForCluster(Configuration)  (1 usage found)
                615 Set availableNameServices = Sets.newHashSet(conf
            getNNLifelineRpcAddressesForCluster(Configuration)  (1 usage found)
                660 Set availableNameServices = Sets.newHashSet(conf
        MiniDFSCluster.java  (1 usage found)
            597 private Set fileSystems = Sets.newHashSet();
        TestDFSUtil.java  (2 usages found)
            testGetNNServiceRpcAddressesForNsIds()  (2 usages found)
                1046 assertEquals(Sets.newHashSet("nn1"), internal);
                1049 assertEquals(Sets.newHashSet("nn1", "nn2"), all);
    org.apache.hadoop.hdfs.net  (5 usages found)
        TestDFSNetworkTopology.java  (5 usages found)
            testChooseRandomWithStorageType()  (4 usages found)
                277 Sets.newHashSet("host2", "host4", "host5", "host6");
                278 Set archiveUnderL1 = Sets.newHashSet("host1", "host3");
                279 Set ramdiskUnderL1 = Sets.newHashSet("host7");
                280 Set ssdUnderL1 = Sets.newHashSet("host8");
            testChooseRandomWithStorageTypeWithExcluded()  (1 usage found)
                363 Set expectedSet = Sets.newHashSet("host4", "host5");
    org.apache.hadoop.hdfs.qjournal.server  (2 usages found)
        JournalNodeSyncer.java  (2 usages found)
            getOtherJournalNodeAddrs()  (1 usage found)
                276 HashSet sharedEditsUri = Sets.newHashSet();
            getJournalAddrList(String)  (1 usage found)
                318 Sets.newHashSet(jn.getBoundIpcAddress()));
    org.apache.hadoop.hdfs.server.datanode  (5 usages found)
        BlockPoolManager.java  (1 usage found)
            doRefreshNamenodes(Map>, Map>)  (1 usage found)
                198 toRemove = Sets.newHashSet(Sets.difference(
{code}
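The proposed replacement is largely mechanical. A minimal sketch of the Java-only equivalents (variable names are illustrative, not taken from the occurrences above):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the proposed migration: each Guava
// Sets.newHashSet(...) call site has a plain-Java equivalent.
public class SetsMigration {
    public static void main(String[] args) {
        // Guava: Set<String> keys = Sets.newHashSet("k1", "k2", "k3");
        Set<String> keys = new HashSet<>(Arrays.asList("k1", "k2", "k3"));

        // Guava: Set<String> empty = Sets.newHashSet();
        Set<String> empty = new HashSet<>();

        // Caveat: JDK 9+ Set.of(...) is NOT a drop-in replacement wherever
        // the code later mutates the set, because Set.of is immutable.
        System.out.println(keys.size());     // 3
        System.out.println(empty.isEmpty()); // true
    }
}
```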
[jira] [Created] (HADOOP-17114) Replace Guava initialization of Lists.newArrayList
Ahmed Hussein created HADOOP-17114:
--------------------------------------

             Summary: Replace Guava initialization of Lists.newArrayList
                 Key: HADOOP-17114
                 URL: https://issues.apache.org/jira/browse/HADOOP-17114
             Project: Hadoop Common
          Issue Type: Sub-task
            Reporter: Ahmed Hussein


There are unjustified uses of Guava APIs to initialize Lists. These could simply be replaced by Java APIs.

{code:java}
Targets
    Occurrences of 'Lists.newArrayList' in project
Found Occurrences  (787 usages found)
    org.apache.hadoop.conf  (2 usages found)
        TestReconfiguration.java  (2 usages found)
            testAsyncReconfigure()  (1 usage found)
                391 List changes = Lists.newArrayList();
            testStartReconfigurationFailureDueToExistingRunningTask()  (1 usage found)
                435 List changes = Lists.newArrayList(
    org.apache.hadoop.crypto  (1 usage found)
        CryptoCodec.java  (1 usage found)
            getCodecClasses(Configuration, CipherSuite)  (1 usage found)
                107 List> result = Lists.newArrayList();
    org.apache.hadoop.fs.azurebfs  (84 usages found)
        ITestAbfsIdentityTransformer.java  (7 usages found)
            transformAclEntriesForSetRequest()  (3 usages found)
                240 List aclEntriesToBeTransformed = Lists.newArrayList(
                253 List aclEntries = Lists.newArrayList(aclEntriesToBeTransformed);
                271 List expectedAclEntries = Lists.newArrayList(
            transformAclEntriesForGetRequest()  (4 usages found)
                291 List aclEntriesToBeTransformed = Lists.newArrayList(
                302 List aclEntries = Lists.newArrayList(aclEntriesToBeTransformed);
                318 aclEntries = Lists.newArrayList(aclEntriesToBeTransformed);
                322 List expectedAclEntries = Lists.newArrayList(
        ITestAzureBlobFilesystemAcl.java  (76 usages found)
            testModifyAclEntries()  (2 usages found)
                95 List aclSpec = Lists.newArrayList(
                103 aclSpec = Lists.newArrayList(
            testModifyAclEntriesOnlyAccess()  (2 usages found)
                128 List aclSpec = Lists.newArrayList(
                134 aclSpec = Lists.newArrayList(
            testModifyAclEntriesOnlyDefault()  (2 usages found)
                151 List aclSpec = Lists.newArrayList(
                154 aclSpec = Lists.newArrayList(
            testModifyAclEntriesMinimal()  (1 usage found)
                175 List aclSpec = Lists.newArrayList(
            testModifyAclEntriesMinimalDefault()  (1 usage found)
                192 List aclSpec = Lists.newArrayList(
            testModifyAclEntriesCustomMask()  (1 usage found)
                213 List aclSpec = Lists.newArrayList(
            testModifyAclEntriesStickyBit()  (2 usages found)
                231 List aclSpec = Lists.newArrayList(
                238 aclSpec = Lists.newArrayList(
            testModifyAclEntriesPathNotFound()  (1 usage found)
                261 List aclSpec = Lists.newArrayList(
            testModifyAclEntriesDefaultOnFile()  (1 usage found)
                276 List aclSpec = Lists.newArrayList(
            testModifyAclEntriesWithDefaultMask()  (2 usages found)
                287 List aclSpec = Lists.newArrayList(
                291 List modifyAclSpec = Lists.newArrayList(
            testModifyAclEntriesWithAccessMask()  (2 usages found)
                311 List aclSpec = Lists.newArrayList(
                315 List modifyAclSpec = Lists.newArrayList(
            testModifyAclEntriesWithDuplicateEntries()  (2 usages found)
                332 List aclSpec = Lists.newArrayList(
                336 List modifyAclSpec = Lists.newArrayList(
            testRemoveAclEntries()  (2 usages found)
                348 List aclSpec = Lists.newArrayList(
                355 aclSpec = Lists.newArrayList(
            testRemoveAclEntriesOnlyAccess()  (2 usages found)
                377 List aclSpec = Lists.newArrayList(
                384 aclSpec = Lists.newArrayList(
            testRemoveAclEntriesOnlyDefault()  (2 usages found)
                401 List aclSpec = Lists.newArrayList(
                408 aclSpec = Lists.newArrayList(
            testRemoveAclEntriesMinimal()  (2 usages found)
                429 List aclSpec = Lists.newArrayList(
                435 aclSpec = Lists.newArrayList(
            testRemoveAclEntriesMinimalDefault()  (2 usages found)
                451 List aclSpec = Lists.newArrayList(
                458 aclSpec = Lists.newArrayList(
            testRemoveAclEntriesStickyBit()  (2 usages found)
                479 List aclSpec = Lists.newArrayList(
                486 aclSpec = Lists.newArrayList(
            testRemoveAclEntriesPathNotFound()  (1 usage found)
                507 List aclSpec = Lists.newArrayList(
            testRemoveAclEntriesAccessMask()  (2 usages found)
                518 List
{code}
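As with the companion Sets issue, the replacement is mechanical. A minimal sketch of the plain-Java equivalents (variable names and values are illustrative, not taken from the occurrences above):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the proposed migration: Guava's
// Lists.newArrayList(...) maps directly onto java.util constructors.
public class ListsMigration {
    public static void main(String[] args) {
        // Guava: List<String> changes = Lists.newArrayList();
        List<String> changes = new ArrayList<>();

        // Guava: List<String> aclSpec = Lists.newArrayList("a", "b");
        List<String> aclSpec = new ArrayList<>(Arrays.asList("a", "b"));

        // Guava: List<String> copy = Lists.newArrayList(aclSpec);
        List<String> copy = new ArrayList<>(aclSpec);

        // Caveat: JDK 9+ List.of(...) is immutable, so it only fits call
        // sites that never mutate the list afterwards.
        System.out.println(changes.isEmpty());    // true
        System.out.println(copy.equals(aclSpec)); // true
    }
}
```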
[jira] [Created] (HADOOP-17113) Adding ReadAhead
Mehakmeet Singh created HADOOP-17113:
----------------------------------------

             Summary: Adding ReadAhead
                 Key: HADOOP-17113
                 URL: https://issues.apache.org/jira/browse/HADOOP-17113
             Project: Hadoop Common
          Issue Type: Improvement
            Reporter: Mehakmeet Singh