[jira] [Created] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem

2020-07-17 Thread Sahil Takiar (Jira)
Sahil Takiar created HADOOP-17139:
-

 Summary: Re-enable optimized copyFromLocal implementation in 
S3AFileSystem
 Key: HADOOP-17139
 URL: https://issues.apache.org/jira/browse/HADOOP-17139
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sahil Takiar


It looks like HADOOP-15932 disabled the optimized copyFromLocal implementation 
in S3A for correctness reasons. innerCopyFromLocalFile should be fixed and 
re-enabled. The current implementation uses FileSystem.copyFromLocal, which 
opens an input stream from the local fs and an output stream to the destination 
fs and then calls IOUtils.copyBytes. With default configs, this causes S3A to 
read the file into memory, write it back to a file on the local fs, and then, 
when the file is closed, upload it to S3.

The optimized version in innerCopyFromLocalFile directly creates a 
PutObjectRequest with the local file as the input.
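As a JDK-only analogy (no S3A or AWS SDK classes here; the class and method names below are illustrative, not Hadoop's), the difference between the generic stream-copy path and a direct whole-file transfer looks like:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CopyPaths {
    // What the generic copyFromLocal path effectively does: open both ends
    // and pump bytes through a buffer (IOUtils.copyBytes-style).
    static void streamCopy(Path src, Path dst) throws IOException {
        try (InputStream in = Files.newInputStream(src);
             OutputStream out = Files.newOutputStream(dst)) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }

    // Analogy for the optimized innerCopyFromLocalFile path: hand the whole
    // local file to a single call instead of streaming through buffers.
    static void directCopy(Path src, Path dst) throws IOException {
        Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("src", ".txt");
        Files.write(src, "hello s3a".getBytes());
        Path a = Files.createTempFile("streamed", ".txt");
        Path b = Files.createTempFile("direct", ".txt");
        streamCopy(src, a);
        directCopy(src, b);
        System.out.println(Files.readAllLines(a).equals(Files.readAllLines(b))); // true
    }
}
```

In the real optimized path, the single call is a PutObjectRequest built from the local java.io.File, so no intermediate client-side buffering is needed.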



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17104) Replace Guava Supplier with Java8+ Supplier in hdfs

2020-07-17 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein resolved HADOOP-17104.

Resolution: Implemented

After HADOOP-17090, it was possible to combine the change into a single patch.

HADOOP-17100 already includes the changes required for the HDFS module.

> Replace Guava Supplier with Java8+ Supplier in hdfs
> ---
>
> Key: HADOOP-17104
> URL: https://issues.apache.org/jira/browse/HADOOP-17104
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Replace usages of the Guava Supplier in unit tests using 
> {{GenericTestUtils.waitFor()}} in the hadoop-hdfs-project subdirectory.
> {code:java}
> Targets
> Occurrences of 'com.google.common.base.Supplier' in directory 
> hadoop-hdfs-project with mask '*.java'
> Found Occurrences  (99 usages found)
> org.apache.hadoop.fs  (1 usage found)
> TestEnhancedByteBufferAccess.java  (1 usage found)
> 75 import com.google.common.base.Supplier;
> org.apache.hadoop.fs.viewfs  (1 usage found)
> TestViewFileSystemWithTruncate.java  (1 usage found)
> 23 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs  (20 usages found)
> DFSTestUtil.java  (1 usage found)
> 79 import com.google.common.base.Supplier;
> MiniDFSCluster.java  (1 usage found)
> 78 import com.google.common.base.Supplier;
> TestBalancerBandwidth.java  (1 usage found)
> 29 import com.google.common.base.Supplier;
> TestClientProtocolForPipelineRecovery.java  (1 usage found)
> 30 import com.google.common.base.Supplier;
> TestDatanodeRegistration.java  (1 usage found)
> 44 import com.google.common.base.Supplier;
> TestDataTransferKeepalive.java  (1 usage found)
> 47 import com.google.common.base.Supplier;
> TestDeadNodeDetection.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> TestDecommission.java  (1 usage found)
> 41 import com.google.common.base.Supplier;
> TestDFSShell.java  (1 usage found)
> 37 import com.google.common.base.Supplier;
> TestEncryptedTransfer.java  (1 usage found)
> 35 import com.google.common.base.Supplier;
> TestEncryptionZonesWithKMS.java  (1 usage found)
> 22 import com.google.common.base.Supplier;
> TestFileCorruption.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> TestLeaseRecovery2.java  (1 usage found)
> 32 import com.google.common.base.Supplier;
> TestLeaseRecoveryStriped.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> TestMaintenanceState.java  (1 usage found)
> 63 import com.google.common.base.Supplier;
> TestPread.java  (1 usage found)
> 61 import com.google.common.base.Supplier;
> TestQuota.java  (1 usage found)
> 39 import com.google.common.base.Supplier;
> TestReplaceDatanodeOnFailure.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> TestReplication.java  (1 usage found)
> 27 import com.google.common.base.Supplier;
> TestSafeMode.java  (1 usage found)
> 62 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs.client.impl  (2 usages found)
> TestBlockReaderLocalMetrics.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> TestLeaseRenewer.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs.qjournal  (1 usage found)
> MiniJournalCluster.java  (1 usage found)
> 31 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs.qjournal.client  (1 usage found)
> TestIPCLoggerChannel.java  (1 usage found)
> 43 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs.qjournal.server  (1 usage found)
> TestJournalNodeSync.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> org.apache.hadoop.hdfs.server.blockmanagement  (7 usages found)
> TestBlockManagerSafeMode.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> TestBlockReportRateLimiting.java  (1 usage found)
> 25 import com.google.common.base.Supplier;
> TestNameNodePrunesMissingStorages.java  (1 usage found)
> 21 import com.google.common.base.Supplier;
> TestPendingInvalidateBlock.java  (1 usage found)
> 43 import com.google.common.base.Supplier;
> TestPendingReconstructio

[jira] [Resolved] (HADOOP-17103) Replace Guava Supplier with Java8+ Supplier in MAPREDUCE

2020-07-17 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein resolved HADOOP-17103.

Resolution: Implemented

After HADOOP-17090, it was possible to combine the change into a single patch.

HADOOP-17100 already includes the changes required for the MapReduce module.

> Replace Guava Supplier with Java8+ Supplier in MAPREDUCE
> 
>
> Key: HADOOP-17103
> URL: https://issues.apache.org/jira/browse/HADOOP-17103
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Replace usages of the Guava Supplier in unit tests using 
> {{GenericTestUtils.waitFor()}} in the hadoop-mapreduce-project subdirectory.
> {code:java}
> Targets
> hadoop-mapreduce-project with mask '*.java'
> Found Occurrences  (8 usages found)
> org.apache.hadoop.mapred  (2 usages found)
> TestTaskAttemptListenerImpl.java  (1 usage found)
> 20 import com.google.common.base.Supplier;
> UtilsForTests.java  (1 usage found)
> 64 import com.google.common.base.Supplier;
> org.apache.hadoop.mapreduce.v2.app  (4 usages found)
> TestFetchFailure.java  (1 usage found)
> 29 import com.google.common.base.Supplier;
> TestMRApp.java  (1 usage found)
> 31 import com.google.common.base.Supplier;
> TestRecovery.java  (1 usage found)
> 31 import com.google.common.base.Supplier;
> TestTaskHeartbeatHandler.java  (1 usage found)
> 28 import com.google.common.base.Supplier;
> org.apache.hadoop.mapreduce.v2.app.rm  (1 usage found)
> TestRMContainerAllocator.java  (1 usage found)
> 156 import com.google.common.base.Supplier;
> org.apache.hadoop.mapreduce.v2.hs  (1 usage found)
> TestJHSDelegationTokenSecretManager.java  (1 usage found)
> 30 import com.google.common.base.Supplier;
> {code}
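The mechanical change these sub-tasks describe is a one-line import swap: both Guava's and Java 8's Supplier declare a single T get() method, so call sites are unchanged. A minimal sketch (the waitFor signature here is a paraphrase of GenericTestUtils.waitFor, not the exact Hadoop code):

```java
import java.util.concurrent.atomic.AtomicBoolean;
// Before: import com.google.common.base.Supplier;
// After:
import java.util.function.Supplier;

public class SupplierSwap {
    // Paraphrase of GenericTestUtils.waitFor: poll until the check passes.
    static void waitFor(Supplier<Boolean> check, long intervalMs, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!check.get()) {
            if (System.currentTimeMillis() > deadline) {
                throw new IllegalStateException("timed out waiting for condition");
            }
            Thread.sleep(intervalMs);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean done = new AtomicBoolean(false);
        new Thread(() -> done.set(true)).start();
        // Lambdas and method references satisfy either Supplier interface,
        // which is why the change is just an import line.
        waitFor(done::get, 10, 1000);
        System.out.println("condition met");
    }
}
```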






[unguava] Heads up: patch affecting 147 files in HADOOP-17100

2020-07-17 Thread Ahmed Hussein
Hi folks,

In the process of replacing Guava APIs, I uploaded a patch
(HADOOP-17100.006.patch) that replaces guava.Supplier with java.Supplier.

The change is straightforward (changing an import line), but it touches 147
files (mostly unit tests).

I would appreciate it if someone could take a look at the patch and merge it
into trunk, branch-3.3, branch-3.2, and branch-3.1.

--
Best Regards,

*Ahmed Hussein, PhD*


Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-07-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/750/

[Jul 16, 2020 4:17:17 PM] (hexiaoqiao) HDFS-14498. LeaseManager can loop 
forever on the file for which create




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:[line 664] 
   
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:[line 741] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:[line 359] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
 is a mutable collection which should be package protected At 
ContainerMetrics.java:which should be package protected At 
ContainerMetrics.java:[line 134] 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
   
org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:[line 505] 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
   
org.apache.hadoop.yarn.state.StateMachineFactory.generateStateGraph(String) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:keySet iterator instead of entrySet iterator At 
StateMachineFactory.java:[line 505] 

findbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Useless object stored in variable removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:removedNullContainers of method 
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
 At NodeStatusUpdaterImpl.java:[line 664] 
   
org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:keySet iterator instead of entrySet iterator At 
NodeStatusUpdaterImpl.java:[line 741] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
 makes inefficient use of keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:keySet iterator instead of entrySet iterator At 
ContainerLocalizer.java:[line 359] 
   
org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
 is a mutable collection which should be package protected At 
ContainerMetrics.java:which should be package protected At 
ContainerMetrics.java:[line 134] 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean)
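Several of the findbugs items above flag the same pattern: iterating keySet() and then calling get(key) inside the loop, which performs a second hash lookup per key that entrySet() avoids. An illustrative shape of the warning and its fix (hypothetical code, not the flagged Hadoop methods):

```java
import java.util.HashMap;
import java.util.Map;

public class EntrySetIteration {
    // Flagged pattern (WMI_WRONG_MAP_ITERATOR): an extra hash lookup per key.
    static long sumViaKeySet(Map<String, Integer> m) {
        long sum = 0;
        for (String k : m.keySet()) {
            sum += m.get(k);  // the lookup findbugs complains about
        }
        return sum;
    }

    // Suggested fix: entrySet() hands back key and value together.
    static long sumViaEntrySet(Map<String, Integer> m) {
        long sum = 0;
        for (Map.Entry<String, Integer> e : m.entrySet()) {
            sum += e.getValue();
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("a", 1);
        m.put("b", 2);
        System.out.println(sumViaKeySet(m) == sumViaEntrySet(m)); // true
    }
}
```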

[jira] [Resolved] (HADOOP-16930) Add com.amazonaws.auth.profile.ProfileCredentialsProvider to hadoop-aws docs

2020-07-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16930.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> Add com.amazonaws.auth.profile.ProfileCredentialsProvider to hadoop-aws docs
> 
>
> Key: HADOOP-16930
> URL: https://issues.apache.org/jira/browse/HADOOP-16930
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, fs/s3
>Affects Versions: 3.2.1
>Reporter: Nicholas Chammas
>Priority: Minor
> Fix For: 3.4.0
>
>
> There is a very, very useful S3A authentication method that is not currently 
> documented: {{com.amazonaws.auth.profile.ProfileCredentialsProvider}}
> This provider lets you source your AWS credentials from a shared credentials 
> file, typically stored under {{~/.aws/credentials}}, using a [named 
> profile|https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html].
>  All you need is to set the {{AWS_PROFILE}} environment variable, and the 
> provider will get the appropriate credentials for you.
> I discovered this from my coworkers, but cannot find it in the docs for 
> hadoop-aws. I'd expect to see it at least mentioned in [this 
> section|https://hadoop.apache.org/docs/r2.9.2/hadoop-aws/tools/hadoop-aws/index.html#S3A_Authentication_methods].
>  It should probably be added to the docs for every minor release that 
> supports it, which I'd guess includes 2.8 on up.
> (This provider should probably also be added to the default list of 
> credential provider classes, but we can address that in another ticket. I can 
> say that at least in 2.9.2, it's not in the default list.)
> (This is not to be confused with 
> {{com.amazonaws.auth.InstanceProfileCredentialsProvider}}, which serves a 
> completely different purpose.)
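For reference, wiring this provider in uses the standard fs.s3a.aws.credentials.provider setting; the property name is the usual S3A one and the value is the class named in this issue (sketch, to be confirmed against the docs once written):

```xml
<!-- core-site.xml: add the profile provider to the S3A credential chain. -->
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>com.amazonaws.auth.profile.ProfileCredentialsProvider</value>
</property>
<!-- Then select a named profile from ~/.aws/credentials via the environment:
     export AWS_PROFILE=my-profile -->
```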






[jira] [Created] (HADOOP-17138) Fix spotbugs warnings surfaced after upgrade to 4.0.6

2020-07-17 Thread Masatake Iwasaki (Jira)
Masatake Iwasaki created HADOOP-17138:
-

 Summary: Fix spotbugs warnings surfaced after upgrade to 4.0.6
 Key: HADOOP-17138
 URL: https://issues.apache.org/jira/browse/HADOOP-17138
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki


Spotbugs 4.0.6 generated additional warnings.
{noformat}
$ mvn clean install findbugs:findbugs -DskipTests -DskipShade
$ find . -name findbugsXml.xml | xargs -n 1 
/opt/spotbugs-4.0.6/bin/convertXmlToText 
M D DLS: Dead store to $L5 in 
org.apache.hadoop.ipc.Server$ConnectionManager.decrUserConnections(String)  At 
Server.java:[line 3729]
M D DLS: Dead store to $L5 in 
org.apache.hadoop.ipc.Server$ConnectionManager.incrUserConnections(String)  At 
Server.java:[line 3717]
H D NP: Method 
org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(Object)
 overrides the nullness annotation of parameter $L1 in an incompatible way  At 
DatasetVolumeChecker.java:[line 322]
H D NP: Method 
org.apache.hadoop.hdfs.server.datanode.checker.DatasetVolumeChecker$ResultHandler.onSuccess(VolumeCheckResult)
 overrides the nullness annotation of parameter result in an incompatible way  
At DatasetVolumeChecker.java:[lines 358-376]
M D NP: Method 
org.apache.hadoop.hdfs.server.datanode.checker.ThrottledAsyncChecker$2.onSuccess(Object)
 overrides the nullness annotation of parameter result in an incompatible way  
At ThrottledAsyncChecker.java:[lines 170-175]
M D DLS: Dead store to $L8 in 
org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.incrOpCount(FSEditLogOpCodes,
 EnumMap, Step, StartupProgress$Counter)  At FSEditLogLoader.java:[line 1241]
M D NP: result must be non-null but is marked as nullable  At 
LocatedFileStatusFetcher.java:[lines 380-397]
M D NP: result must be non-null but is marked as nullable  At 
LocatedFileStatusFetcher.java:[lines 291-309]
M D DLS: Dead store to $L6 in 
org.apache.hadoop.yarn.sls.SLSRunner.increaseQueueAppNum(String)  At 
SLSRunner.java:[line 816]
H C UMAC: Uncallable method 
org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage$1.getInstance()
 defined in anonymous class  At 
TestTimelineReaderWebServicesHBaseStorage.java:[line 87]
M D DLS: Dead store to entities in 
org.apache.hadoop.yarn.server.timelineservice.storage.TestTimelineReaderHBaseDown.checkQuery(HBaseTimelineReaderImpl)
  At TestTimelineReaderHBaseDown.java:[line 190]
M V EI: org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer  
At CosNInputStream.java:[line 87]
{noformat}
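The DLS ("Dead store to $L…") entries flag a value assigned to a local variable that is never read. A typical shape of the warning and its fix (hypothetical code, not the flagged Hadoop methods):

```java
import java.util.HashMap;
import java.util.Map;

public class DeadStoreExample {
    private final Map<String, Integer> counts = new HashMap<>();

    // Flagged shape: compute() returns the new value, and assigning it to a
    // local that is never read is a dead store (DLS).
    void incrFlagged(String user) {
        Integer updated = counts.compute(user, (k, v) -> v == null ? 1 : v + 1);
        // 'updated' is never used -> "Dead store to updated"
    }

    // Fix: drop the unused local (or actually use the return value).
    void incrFixed(String user) {
        counts.compute(user, (k, v) -> v == null ? 1 : v + 1);
    }

    int get(String user) {
        return counts.getOrDefault(user, 0);
    }

    public static void main(String[] args) {
        DeadStoreExample d = new DeadStoreExample();
        d.incrFixed("alice");
        d.incrFixed("alice");
        System.out.println(d.get("alice")); // 2
    }
}
```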






[jira] [Created] (HADOOP-17137) ABFS: Tests ITestAbfsNetworkStatistics need to be config setting agnostic

2020-07-17 Thread Sneha Vijayarajan (Jira)
Sneha Vijayarajan created HADOOP-17137:
--

 Summary: ABFS: Tests ITestAbfsNetworkStatistics need to be config 
setting agnostic
 Key: HADOOP-17137
 URL: https://issues.apache.org/jira/browse/HADOOP-17137
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Reporter: Sneha Vijayarajan
Assignee: Sneha Vijayarajan


Tests in ITestAbfsNetworkStatistics assert a fixed number of network calls 
made from the start of filesystem instance creation. But this number of calls 
depends on certain config settings, such as whether container creation is 
enabled, or whether the account is HNS-enabled (which avoids the GetAcl call).

 

The tests need to be modified to ensure that count asserts are made for the 
requests made by the tests alone.
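The usual way to make such asserts setup-agnostic is to snapshot the counter after setup and assert only on the delta. A sketch of that pattern with a stand-in counter (the real tests would read the ABFS instrumentation, not this hypothetical AtomicLong):

```java
import java.util.concurrent.atomic.AtomicLong;

public class DeltaAssertExample {
    // Stand-in for a filesystem's "network calls made" statistic.
    static final AtomicLong networkCalls = new AtomicLong();

    static void doRequest() {              // stand-in for one ABFS request
        networkCalls.incrementAndGet();
    }

    public static void main(String[] args) {
        // Setup cost varies with config (container creation, GetAcl, ...):
        doRequest();
        doRequest();

        // Snapshot AFTER setup, so config-dependent calls don't leak in.
        long baseline = networkCalls.get();

        doRequest();                       // the operation under test
        long delta = networkCalls.get() - baseline;

        System.out.println(delta);         // 1, regardless of setup cost
    }
}
```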






Re: [RESULT][VOTE] Release Apache Hadoop-3.3.0

2020-07-17 Thread Ayush Saxena
Hi Brahma,
It seems the link to the changelog for Release 3.3.0 isn't correct at:
https://hadoop.apache.org/

It points to:
http://hadoop.apache.org/docs/r3.3.0/hadoop-project-dist/hadoop-common/release/3.3.0/CHANGES.3.3.0.html

CHANGES.3.3.0.html isn't there; it should instead point to:

http://hadoop.apache.org/docs/r3.3.0/hadoop-project-dist/hadoop-common/release/3.3.0/CHANGELOG.3.3.0.html

Please give it a check!

-Ayush




On Wed, 15 Jul 2020 at 19:18, Brahma Reddy Battula 
wrote:

> Hi Stephen,
>
> thanks for bringing this to my attention.
>
> It looks like it's too late. I pushed the release tag (which can't be
> reverted) and updated the release date in the JIRA.
>
> Can we plan this for the next release in the near future?
>
>
> On Wed, Jul 15, 2020 at 5:25 PM Stephen O'Donnell
>  wrote:
>
> > Hi All,
> >
> > Sorry for being a bit late to this, but I wonder if we have a potential
> > blocker to this release.
> >
> > In Cloudera we have recently encountered a serious dataloss issue in HDFS
> > surrounding snapshots. To hit the dataloss issue, you must have
> HDFS-13101
> > and HDFS-15012 on the build (which branch-3.3.0 does). To prevent it, you
> > must also have HDFS-15313 and unfortunately, this was only committed to
> > trunk, so we need to cherry-pick it down the active branches.
> >
> > With data loss being a serious issue, should we pull this Jira into
> > branch-3.3.0 and cut a new release candidate?
> >
> > Thanks,
> >
> > Stephen.
> >
> > On Tue, Jul 14, 2020 at 1:22 PM Brahma Reddy Battula 
> > wrote:
> >
> > > Hi All,
> > >
> > > With 8 binding and 11 non-binding +1s and no -1s the vote for Apache
> > > hadoop-3.3.0 Release
> > > passes.
> > >
> > > Thank you everybody for contributing to the release, testing, and
> voting.
> > >
> > > Special thanks to whoever verified the ARM binary, as this is the first
> > > release to support ARM in Hadoop.
> > >
> > >
> > > Binding +1s
> > >
> > > =
> > > Akira Ajisaka
> > > Vinayakumar B
> > > Inigo Goiri
> > > Surendra Singh Lilhore
> > > Masatake Iwasaki
> > > Rakesh Radhakrishnan
> > > Eric Badger
> > > Brahma Reddy Battula
> > >
> > > Non-binding +1s
> > >
> > > =
> > > Zhenyu Zheng
> > > Sheng Liu
> > > Yikun Jiang
> > > Tianhua huang
> > > Ayush Saxena
> > > Hemanth Boyina
> > > Bilwa S T
> > > Takanobu Asanuma
> > > Xiaoqiao He
> > > CR Hota
> > > Gergely Pollak
> > >
> > > I'm going to work on staging the release.
> > >
> > >
> > > The voting thread is:
> > >
> > >  https://s.apache.org/hadoop-3.3.0-Release-vote-thread
> > >
> > >
> > >
> > > --Brahma Reddy Battula
> > >
> >
>
>
> --
>
>
>
> --Brahma Reddy Battula
>


[jira] [Resolved] (HADOOP-17130) Configuration.getValByRegex() shouldn't update the results while fetching.

2020-07-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17130.
-
Fix Version/s: 3.1.5
   3.3.1
   3.2.2
   Resolution: Fixed

patch backported to all the active 3.x branches

> Configuration.getValByRegex() shouldn't update the results while fetching.
> --
>
> Key: HADOOP-17130
> URL: https://issues.apache.org/jira/browse/HADOOP-17130
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.1.3
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
> Fix For: 3.2.2, 3.3.1, 3.1.5
>
>
> We have seen this stack trace while using the ABFS file system. After 
> analysing it, we can see that getValByRegex() reads the properties and 
> substitutes values in the same call, which may cause a 
> ConcurrentModificationException. 
> {code:java}
> Caused by: java.util.concurrent.ExecutionException: 
> java.util.ConcurrentModificationException at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122) at 
> java.util.concurrent.FutureTask.get(FutureTask.java:192) at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1877)
>  ... 18 more Caused by: java.util.ConcurrentModificationException at 
> java.util.Hashtable$Enumerator.next(Hashtable.java:1387) at 
> org.apache.hadoop.conf.Configuration.getValByRegex(Configuration.java:3855) 
> at 
> org.apache.hadoop.fs.azurebfs.AbfsConfiguration.validateStorageAccountKeys(AbfsConfiguration.java:689)
>  at 
> org.apache.hadoop.fs.azurebfs.AbfsConfiguration.(AbfsConfiguration.java:237)
>  at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.(AzureBlobFileSystemStore.java:154)
>  at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:113)
>  at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3396) at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:158) at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3456) at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3424) at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:518) at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>  
> {code}
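The failure mode is easy to reproduce outside Hadoop: mutate a Hashtable while iterating it, and the fail-fast iterator throws. A minimal reproduction of the hazard (not the Configuration code itself):

```java
import java.util.ConcurrentModificationException;
import java.util.Hashtable;
import java.util.Map;

public class CmeDemo {
    public static void main(String[] args) {
        Hashtable<String, String> props = new Hashtable<>();
        props.put("a.key", "${sub}");
        props.put("b.key", "plain");

        boolean threw = false;
        try {
            for (Map.Entry<String, String> e : props.entrySet()) {
                // Writing to the table while still iterating it is the bug
                // pattern getValByRegex() hit (reading + substituting values
                // in the same pass).
                props.put("expanded." + e.getKey(), "value");
            }
        } catch (ConcurrentModificationException expected) {
            threw = true;
        }
        System.out.println(threw); // true: the iterator is fail-fast
    }
}
```

The fix described in the issue is to collect the matches first and perform any substitution after iteration completes.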






[jira] [Created] (HADOOP-17136) ITestS3ADirectoryPerformance.testListOperations failing

2020-07-17 Thread Mukund Thakur (Jira)
Mukund Thakur created HADOOP-17136:
--

 Summary: ITestS3ADirectoryPerformance.testListOperations failing
 Key: HADOOP-17136
 URL: https://issues.apache.org/jira/browse/HADOOP-17136
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.3
Reporter: Mukund Thakur
 Fix For: 3.1.4


This is failing because of HADOOP-17022:
[INFO] Running org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
[ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 670.029 
s <<< FAILURE! - in org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance
[ERROR] 
testListOperations(org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance) 
 Time elapsed: 44.089 s  <<< FAILURE!
java.lang.AssertionError: object_list_requests starting=166 current=167 diff=1 
expected:<2> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance.testListOperations(ITestS3ADirectoryPerformance.java:117)


