Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary

2020-03-16 Thread Masatake Iwasaki

This thread seems to be relevant.
https://lists.apache.org/thread.html/0d2a1b39f7e890c4f40be5fd92f107fbf048b936005901b7b53dd0f1%40%3Ccommon-dev.hadoop.apache.org%3E 



> Convenience binary artifacts are not official release artifacts and thus
> are not voted on. However, since they are distributed by Apache, they are
> still subject to the same distribution requirements as official release
> artifacts. This means they need to have a LICENSE and NOTICE file, follow
> ASF licensing rules, etc. The PMC needs to ensure that binary artifacts
> meet these requirements.
>
> However, being a "convenience" artifact doesn't mean it isn't important.
> The appropriate level of quality for binary artifacts is left up to the
> project. An OpenOffice person mentioned the quality of their binary
> artifacts is super important since very few of their users will compile
> their own office suite.
>
> I don't know if we've discussed the topic of binary artifact quality in
> Hadoop. My stance is that if we're going to publish something, it should be
> good, or we shouldn't publish it at all. I think we do want to publish
> binary tarballs (it's the easiest way for new users to get started with
> Hadoop), so it's fair to consider them when evaluating a release.

Just providing a build machine to the RM would not be enough if
the PMC needs to ensure that binary artifacts meet these requirements.

Thanks,
Masatake Iwasaki


On 3/17/20 14:11, 俊平堵 wrote:

Hi Brahma,
  I think most of us in the Hadoop community don't want to be biased
toward ARM or any other platform.
  The only thing I am trying to understand is how much complexity this adds
to our RM work. Could that potentially become a blocker for future
releases? And how can we get rid of this risk?
  If you can list the concrete extra work an RM needs to do for an ARM
release, that would help us understand better.

Thanks,

Junping

Akira Ajisaka wrote on Fri, Mar 13, 2020 at 12:34 AM:


If you can provide the ARM release for future releases, I'm fine with that.

Thanks,
Akira

On Thu, Mar 12, 2020 at 9:41 PM Brahma Reddy Battula wrote:


Thanks Akira.

Currently the only problem is a dedicated ARM machine for future RMs. I want
to sort this out as below; if you have other ideas, please let me know.

i) A single machine, sharing credentials with future RMs (we can delete the
keys once the release is over).
ii) Creating a Jenkins project (we may need to discuss this with the
board..)
iii) I can provide the ARM release for future releases.







On Thu, Mar 12, 2020 at 5:14 PM Akira Ajisaka wrote:

Hi Brahma,

I think we cannot do any of your proposed actions.

http://www.apache.org/legal/release-policy.html#owned-controlled-hardware
> Strictly speaking, releases must be verified on hardware owned and
> controlled by the committer. That means hardware the committer has physical
> possession and control of and exclusively full administrative/superuser
> access to. That's because only such hardware is qualified to hold a PGP
> private key, and the release should be verified on the machine the private
> key lives on or on a machine as trusted as that.

https://www.apache.org/dev/release-distribution.html#sigs-and-sums
> Private keys MUST NOT be stored on any ASF machine. Likewise, signatures
> for releases MUST NOT be created on ASF machines.

We would need a dedicated physical ARM machine for each release manager,
and that is not feasible now.
If you provide an unofficial ARM binary release in some repository, that's
okay.

-Akira

On Thu, Mar 12, 2020 at 7:57 PM Brahma Reddy Battula <bra...@apache.org> wrote:


Hello folks,

Since trunk now supports ARM-based compilation and the qbt (1) has been
running quite stably for several months, I am planning to propose an ARM
binary this time.

(Note: as we all know, voting is based on the source, so this will not be
an issue.)

*Proposed Change:*
Currently we keep only the x86 binary (2) in downloads. Can we keep the ARM
binary as well?

*Actions:*
a) *Dedicated Machine*:
   i) A dedicated ARM machine will be donated, which I have confirmed.
   ii) Or we can use the Jenkins ARM machine which is currently used for ARM.
b) *Automate Release:* How about having a release project in Jenkins, so
that future RMs just trigger the Jenkins project?

Please let me know your thoughts on this.

1. https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-qbt-linux-ARM-trunk/
2. https://hadoop.apache.org/releases.html






--Brahma Reddy Battula








Re: [DISCUSS] Hadoop 3.3.0 Release include ARM binary

2020-03-16 Thread 俊平堵
Hi Brahma,
 I think most of us in the Hadoop community don't want to be biased
toward ARM or any other platform.
 The only thing I am trying to understand is how much complexity this adds
to our RM work. Could that potentially become a blocker for future
releases? And how can we get rid of this risk?
  If you can list the concrete extra work an RM needs to do for an ARM
release, that would help us understand better.

Thanks,

Junping

Akira Ajisaka wrote on Fri, Mar 13, 2020 at 12:34 AM:

> If you can provide the ARM release for future releases, I'm fine with that.
>
> Thanks,
> Akira
>
> On Thu, Mar 12, 2020 at 9:41 PM Brahma Reddy Battula wrote:
>
> > Thanks Akira.
> >
> > Currently the only problem is a dedicated ARM machine for future RMs. I
> > want to sort this out as below; if you have other ideas, please let me know.
> >
> > i) A single machine, sharing credentials with future RMs (we can delete
> > the keys once the release is over).
> > ii) Creating a Jenkins project (we may need to discuss this with the
> > board..)
> > iii) I can provide the ARM release for future releases.
> >
> > On Thu, Mar 12, 2020 at 5:14 PM Akira Ajisaka wrote:
> >
> > > Hi Brahma,
> > >
> > > I think we cannot do any of your proposed actions.
> > >
> > > http://www.apache.org/legal/release-policy.html#owned-controlled-hardware
> > > > Strictly speaking, releases must be verified on hardware owned and
> > > > controlled by the committer. That means hardware the committer has
> > > > physical possession and control of and exclusively full
> > > > administrative/superuser access to. That's because only such hardware
> > > > is qualified to hold a PGP private key, and the release should be
> > > > verified on the machine the private key lives on or on a machine as
> > > > trusted as that.
> > >
> > > https://www.apache.org/dev/release-distribution.html#sigs-and-sums
> > > > Private keys MUST NOT be stored on any ASF machine. Likewise,
> > > > signatures for releases MUST NOT be created on ASF machines.
> > >
> > > We would need a dedicated physical ARM machine for each release
> > > manager, and that is not feasible now.
> > > If you provide an unofficial ARM binary release in some repository,
> > > that's okay.
> > >
> > > -Akira
> > >
> > > On Thu, Mar 12, 2020 at 7:57 PM Brahma Reddy Battula <bra...@apache.org>
> > > wrote:
> > >
> > >> Hello folks,
> > >>
> > >> Since trunk now supports ARM-based compilation and the qbt (1) has been
> > >> running quite stably for several months, I am planning to propose an
> > >> ARM binary this time.
> > >>
> > >> (Note: as we all know, voting is based on the source, so this will not
> > >> be an issue.)
> > >>
> > >> *Proposed Change:*
> > >> Currently we keep only the x86 binary (2) in downloads. Can we keep the
> > >> ARM binary as well?
> > >>
> > >> *Actions:*
> > >> a) *Dedicated Machine*:
> > >>    i) A dedicated ARM machine will be donated, which I have confirmed.
> > >>    ii) Or we can use the Jenkins ARM machine which is currently used
> > >>    for ARM.
> > >> b) *Automate Release:* How about having a release project in Jenkins,
> > >> so that future RMs just trigger the Jenkins project?
> > >>
> > >> Please let me know your thoughts on this.
> > >>
> > >> 1. https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-qbt-linux-ARM-trunk/
> > >> 2. https://hadoop.apache.org/releases.html
> > >>
> > >> --Brahma Reddy Battula
> > >
> >
> > --Brahma Reddy Battula
>


[jira] [Created] (HADOOP-16926) s3guard can't init table if caller doesn't have tag permissions

2020-03-16 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-16926:
---

 Summary: s3guard can't init table if caller doesn't have tag permissions
 Key: HADOOP-16926
 URL: https://issues.apache.org/jira/browse/HADOOP-16926
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Steve Loughran


If the user doesn't have the permissions to tag a DDB table, creation will fail.

Caused by HADOOP-16520; we downgrade the failure on other inits (HADOOP-16653), but not on actual creation.

We could downgrade creation the same way.
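
A hedged sketch of that downgrade pattern against the AWS SDK v1 DynamoDB client; the helper class and log wiring here are hypothetical illustrations, not the actual S3Guard code:
{code:java}
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException;
import com.amazonaws.services.dynamodbv2.model.Tag;
import com.amazonaws.services.dynamodbv2.model.TagResourceRequest;
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class TableTagger {
  private static final Logger LOG = LoggerFactory.getLogger(TableTagger.class);

  /** Best-effort tagging: warn on access-denied instead of failing creation. */
  static void tagTableBestEffort(AmazonDynamoDB ddb, String tableArn,
      List<Tag> tags) {
    try {
      ddb.tagResource(new TagResourceRequest()
          .withResourceArn(tableArn)
          .withTags(tags));
    } catch (AmazonDynamoDBException e) {
      if ("AccessDeniedException".equals(e.getErrorCode())) {
        // Downgrade: table creation proceeds untagged.
        LOG.warn("No permission to tag DDB table {}: {}", tableArn,
            e.toString());
      } else {
        throw e;
      }
    }
  }
}
{code}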







[jira] [Resolved] (HADOOP-16661) Support TLS 1.3

2020-03-16 Thread Wei-Chiu Chuang (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-16661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang resolved HADOOP-16661.
--------------------------------------
Fix Version/s: 3.3.0
   Resolution: Fixed

Thanks Akira for the review!

> Support TLS 1.3
> ---
>
> Key: HADOOP-16661
> URL: https://issues.apache.org/jira/browse/HADOOP-16661
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.3.0
>
>
> HADOOP-16152 is going to update Jetty from 9.3 to 9.4.20, which should allow
> us to support TLS 1.3: https://www.eclipse.org/lists/jetty-users/msg08569.html
> We should test and document the support of TLS 1.3. Assuming its support
> depends on the JDK, it is likely only supported on JDK 11 and above.
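
As a quick way to test the JDK side of this, a minimal probe; this is generic JSSE code, not Hadoop code, and assumes a TLS-1.3-capable runtime such as JDK 11+:
{code:java}
import java.util.Arrays;
import javax.net.ssl.SSLContext;

public class Tls13Probe {
  public static void main(String[] args) throws Exception {
    // Throws NoSuchAlgorithmException on runtimes without TLS 1.3 support.
    SSLContext ctx = SSLContext.getInstance("TLSv1.3");
    ctx.init(null, null, null);
    // Lists every protocol the context can negotiate, e.g. [TLSv1.3, TLSv1.2, ...]
    System.out.println(Arrays.toString(
        ctx.getSupportedSSLParameters().getProtocols()));
  }
}
{code}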







Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-03-16 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1440/

[Mar 15, 2020 10:46:27 AM] (ayushsaxena) HDFS-15159. Prevent adding same DN 
multiple times in
[Mar 15, 2020 11:00:39 AM] (ayushsaxena) HDFS-15197. [SBN read] Change 
ObserverRetryOnActiveException log to
[Mar 15, 2020 3:14:32 PM] (surendralilhore) HDFS-15211. EC: File write hangs 
during close in case of Exception




-1 overall


The following subsystems voted -1:
asflicense compile findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 

Failed junit tests :

   hadoop.hdfs.TestFileChecksum 
   hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy 
   hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots 
   hadoop.hdfs.TestStateAlignmentContextWithHA 
   hadoop.hdfs.server.datanode.TestBPOfferService 
   hadoop.hdfs.TestDecommissionWithStriped 
   hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem 
   hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1440/artifact/out/patch-compile-root.txt
  [576K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1440/artifact/out/patch-compile-root.txt
  [576K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1440/artifact/out/patch-compile-root.txt
  [576K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1440/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1440/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1440/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1440/artifact/out/diff-patch-shellche

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-03-16 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/626/

[Mar 15, 2020 11:02:29 AM] (ayushsaxena) HDFS-15197. [SBN read] Change 
ObserverRetryOnActiveException log to




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.server.namenode.TestDefaultBlockPlacementPolicy 
   hadoop.hdfs.server.namenode.TestAuditLogs 
   hadoop.hdfs.TestMultipleNNPortQOP 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication 
   hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   hadoop.hdfs.server.datanode.TestDeleteBlockPool 
   hadoop.hdfs.server.namenode.ha.TestFailoverWithBlockTokensEnabled 
   hadoop.hdfs.server.namenode.TestCheckpoint 
   hadoop.hdfs.server.namenode.TestFsck 
   hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits 
   hadoop.hdfs.server.namenode.TestLeaseManager 
   hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate 
   hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover 
   hadoop.hdfs.server.namenode.ha.TestHASafeMode 
   hadoop.hdfs.server.datanode.TestDataNodeReconfiguration 
   hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency 
   hadoop.hdfs.TestDistributedFileSystem 
   hadoop.hdfs.server.namenode.TestFSImageWithSnapshot 
   hadoop.hdfs.server.namenode.TestParallelImageWrite 
   hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN 
   hadoop.hdfs.server.namenode.TestLargeDirectoryDelete 
   hadoop.hdfs.server.namenode.TestFileTruncate 
   hadoop.hdfs.server.datanode.TestDataNodeECN 
   hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.TestEditLogJournalFailures 
   hadoop.hdfs.server.namenode.TestEditLogAutoroll 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.mapreduce.v2.TestUberAM 
   hadoop.mapreduce.v2.TestMRJobsWithProfiler 
   hadoop.mapreduce.v2.TestMiniMRProxyUser 
   hadoop.fs.azure.TestClientThrottlingAnalyzer 
   hadoop.fs.adl.TestListStatus 
   hadoop.fs.adl.TestConcurrentDataReadOperations 
   hadoop.contrib.utils.join.TestDataJoin 
   hadoop.registry.secure.TestSecureLogins 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/626/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/626/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/626/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/626/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/626/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/626/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/626/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/626/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

[jira] [Created] (HADOOP-16925) MetricsConfig incorrectly loads the configuration whose value is String list in the properties file

2020-03-16 Thread Jiayi Liu (Jira)
Jiayi Liu created HADOOP-16925:
--

 Summary: MetricsConfig incorrectly loads the configuration whose value is a String list in the properties file
 Key: HADOOP-16925
 URL: https://issues.apache.org/jira/browse/HADOOP-16925
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 3.1.3
 Environment: [HADOOP-15549|https://jira.apache.org/jira/browse/HADOOP-15549] modified the loadFirst function in MetricsConfig and forgot to set the DelimiterHandler. As a result, when the properties file is loaded, a configured value that is a String list is loaded as a single String instead of a String array. For example, if we set
{code:java}
*.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40
{code}
in hadoop-metrics2.properties and use conf.getStringArray("*.sink.ganglia.dmax") to get the value list, we get an array with a single element whose content is "jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40". This causes an error during loadGangliaConf, which assumes the value of jvm.metrics.threadsBlocked is "70,jvm.metrics.memHeapUsedM" and fails with "java.lang.NumberFormatException: For input string: "70,jvm.metrics.memHeapUsedM"".

Reporter: Jiayi Liu
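
For illustration, a minimal sketch of the delimiter-handler behavior against the commons-configuration2 API that HADOOP-15549 migrated to; the class and method names below are from that library, and the property value is the example above:
{code:java}
import org.apache.commons.configuration2.PropertiesConfiguration;
import org.apache.commons.configuration2.convert.DefaultListDelimiterHandler;

public class DelimiterDemo {
  public static void main(String[] args) {
    PropertiesConfiguration conf = new PropertiesConfiguration();
    // commons-configuration2 disables list splitting by default
    // (DisabledListDelimiterHandler); without the call below,
    // getStringArray() returns the whole comma-separated value as one element.
    conf.setListDelimiterHandler(new DefaultListDelimiterHandler(','));
    conf.addProperty("*.sink.ganglia.dmax",
        "jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40");
    for (String v : conf.getStringArray("*.sink.ganglia.dmax")) {
      System.out.println(v);  // two elements when the handler is set
    }
  }
}
{code}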





