Re: [VOTE] Unprotect HDFS-13891 (HDFS RBF Branch)

2019-05-15 Thread Aaron Fabbri
+1 to unprotect feature branch (in general) for rebasing against trunk.

On Tue, May 14, 2019 at 7:53 PM Dinesh Chitlangia wrote:

> +1 (non-binding) for the branch.
>
> -Dinesh
>
>
>
>
> On Tue, May 14, 2019 at 10:04 PM Brahma Reddy Battula wrote:
>
> > Yes Arpit, it’s not for trunk.
> >
> >
> > On Wed, May 15, 2019 at 2:11 AM, Arpit Agarwal wrote:
> >
> > > The request is specific to HDFS-13891, correct?
> > >
> > > We should not allow force push on trunk.
> > >
> > >
> > > > On May 14, 2019, at 8:07 AM, Anu Engineer wrote:
> > > >
> > > > Is it possible to unprotect the branches and not the trunk? Generally, a
> > > > force push to trunk indicates a mistake and we have had that in the past.
> > > > This is just a suggestion; even if this request is not met, I am still +1.
> > > >
> > > > Thanks
> > > > Anu
> > > >
> > > >
> > > >
> > > > On Tue, May 14, 2019 at 4:58 AM Takanobu Asanuma <tasan...@yahoo-corp.jp> wrote:
> > > >
> > > >> +1.
> > > >>
> > > >> Thanks!
> > > >> - Takanobu
> > > >>
> > > >> 
> > > >> From: Akira Ajisaka 
> > > >> Sent: Tuesday, May 14, 2019 4:26:30 PM
> > > >> To: Giovanni Matteo Fumarola
> > > >> Cc: Iñigo Goiri; Brahma Reddy Battula; Hadoop Common; Hdfs-dev
> > > >> Subject: Re: [VOTE] Unprotect HDFS-13891 (HDFS RBF Branch)
> > > >>
> > > >> +1 to unprotect the branch.
> > > >>
> > > >> Thanks,
> > > >> Akira
> > > >>
> > > >> On Tue, May 14, 2019 at 3:11 PM Giovanni Matteo Fumarola wrote:
> > > >>>
> > > >>> +1 to unprotect the branches for rebases.
> > > >>>
> > > >>> On Mon, May 13, 2019 at 11:01 PM Iñigo Goiri wrote:
> > > >>>
> > >  Syncing the branch to trunk should be a fairly standard task.
> > >  Is there a way to do this without rebasing and forcing the push?
> > >  As far as I know this has been the standard for other branches and I
> > >  don't know of any alternative.
> > >  We should clarify the process, as having to get PMC consensus to rebase
> > >  a branch seems a little overkill to me.
> > > 
> > >  +1 from my side to unprotect the branch to do the rebase.
> > > 
> > >  On Mon, May 13, 2019, 22:46 Brahma Reddy Battula <bra...@apache.org> wrote:
> > > 
> > > > Hi Folks,
> > > >
> > > > INFRA-18181 made all the Hadoop branches protected.
> > > > Unfortunately, the HDFS-13891 branch needs to be rebased as we
> > > > contribute core patches to trunk. So, currently we are stuck on the
> > > > rebase, as force pushes are not allowed. Hence I raised INFRA-18361.
> > > >
> > > > Can we have a quick vote for INFRA sign-off to proceed, as this is
> > > > blocking all branch commits?
> > > >
> > > > --
> > > >
> > > >
> > > >
> > > > --Brahma Reddy Battula
> > > >
> > > 
> > > >>
> > > >>
> > > >>
> > > >>
> > >
> > > --
> >
> >
> >
> > --Brahma Reddy Battula
> >
>
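
For context, the sync being voted on is the standard feature-branch rebase.
A minimal sketch of the workflow, assuming the feature branch is HDFS-13891
and the remote is origin:

    git fetch origin
    git checkout HDFS-13891
    git rebase origin/trunk        # replay the branch commits on top of trunk
    # resolve conflicts and rerun tests, then rewrite the remote branch:
    git push --force-with-lease origin HDFS-13891

Unlike plain --force, --force-with-lease refuses the push if someone else has
updated the branch in the meantime, which is exactly the accidental clobbering
that branch protection guards against.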


[jira] [Created] (HADOOP-16316) S3A delegation tests fail if you set fs.s3a.secret.key

2019-05-15 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16316:
---

 Summary: S3A delegation tests fail if you set fs.s3a.secret.key
 Key: HADOOP-16316
 URL: https://issues.apache.org/jira/browse/HADOOP-16316
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.0
Reporter: Steve Loughran


The ITests for Session and Role DTs in S3A set the encryption option (to verify 
its propagation). But if you have set an encryption key in the config, then test 
setup will fail.

Fix: when you set the encryption option, clear the fs.s3a.encryption.key options 
for the dest bucket.
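
A minimal sketch of that setup fix, assuming the standard per-bucket override
naming (fs.s3a.bucket.<bucket>.<option>); the helper name is illustrative, not
the actual patch:

    import org.apache.hadoop.conf.Configuration;

    /** Illustrative test-setup helper: drop any configured encryption key. */
    static void clearEncryptionKey(Configuration conf, String bucket) {
      String option = "fs.s3a.encryption.key";
      conf.unset(option);                                   // base option
      conf.unset("fs.s3a.bucket." + bucket + "." + option); // per-bucket override
    }

With both forms of the key unset, the tests can set their own encryption
options without colliding with whatever is in the user's config.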






[jira] [Created] (HADOOP-16315) ABFS: transform full UPN for named user in AclStatus

2019-05-15 Thread Da Zhou (JIRA)
Da Zhou created HADOOP-16315:


 Summary: ABFS: transform full UPN for named user in AclStatus
 Key: HADOOP-16315
 URL: https://issues.apache.org/jira/browse/HADOOP-16315
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Da Zhou
Assignee: Da Zhou


When converting the identities in AclStatus, only "owner" and "owning group" are 
transformed. We need to add the same conversion for the AclEntry list in 
AclStatus too.
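
A hedged sketch of the shape of that conversion; the idToUpn transformer is a
hypothetical stand-in for the ABFS identity-translation service:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Function;
    import org.apache.hadoop.fs.permission.AclEntry;

    /** Illustrative: transform the identity in every named ACL entry. */
    static List<AclEntry> transformAclEntries(List<AclEntry> entries,
        Function<String, String> idToUpn) {
      List<AclEntry> out = new ArrayList<>();
      for (AclEntry e : entries) {
        String name = e.getName();
        if (name == null || name.isEmpty()) {
          out.add(e);  // unnamed entries (owner/group/other/mask) pass through
        } else {
          out.add(new AclEntry.Builder()
              .setType(e.getType())
              .setScope(e.getScope())
              .setPermission(e.getPermission())
              .setName(idToUpn.apply(name))  // e.g. object id to full UPN
              .build());
        }
      }
      return out;
    }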






[jira] [Created] (HADOOP-16314) Make sure all end point URL is covered by the same AuthenticationFilter

2019-05-15 Thread Eric Yang (JIRA)
Eric Yang created HADOOP-16314:
--

 Summary: Make sure all end point URL is covered by the same 
AuthenticationFilter
 Key: HADOOP-16314
 URL: https://issues.apache.org/jira/browse/HADOOP-16314
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Eric Yang


The enclosed spreadsheet shows the list of web applications deployed by Hadoop 
and the filters applied to each entry point.

Hadoop web protocol impersonation has been inconsistent. Most entry points do 
not support the ?doAs parameter. This creates problems for a secure gateway 
like Knox when proxying the Hadoop web interfaces on behalf of the end user. 
When the receiving end does not check the ?doAs flag, the web interface is 
accessed using the proxy user's credentials. This can lead to all kinds of 
security holes, such as path traversal exploits against Hadoop.

In HADOOP-16287, ProxyUserAuthenticationFilter was proposed as a solution to 
the web impersonation problem. This task tracks the changes required in the 
Hadoop code base to apply the authentication filter globally to each web 
service port.
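
Illustrative only, not the HADOOP-16287 patch itself: a minimal sketch of the
doAs handling such a globally applied filter has to perform.

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletRequest;

    public class DoAsAwareAuthFilter implements Filter {
      @Override public void init(FilterConfig conf) {}
      @Override public void destroy() {}

      @Override
      public void doFilter(ServletRequest req, ServletResponse resp,
          FilterChain chain) throws IOException, ServletException {
        String doAs = ((HttpServletRequest) req).getParameter("doAs");
        if (doAs != null) {
          // The authenticated caller is a proxy (e.g. Knox): the request must
          // be checked against proxy-user ACLs and then executed as the doAs
          // user, never as the gateway's own identity.
        }
        chain.doFilter(req, resp);
      }
    }

The point of the task is that every web service port gets a filter with this
behaviour, instead of only some endpoints honouring doAs.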






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-05-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1137/

[May 14, 2019 3:49:52 AM] (github) HDDS-1491. Ozone KeyInputStream seek() 
should not read the chunk file.
[May 14, 2019 9:19:50 AM] (github) HDDS-1503. Reduce garbage generated by 
non-netty threads in datanode
[May 14, 2019 5:48:08 PM] (sunilg) YARN-9519. TFile log aggregation file format 
is not working for
[May 14, 2019 6:04:06 PM] (todd) HDFS-14482: Crash when using libhdfs with bad 
classpath
[May 14, 2019 9:05:39 PM] (wwei) HADOOP-16306. AliyunOSS: Remove temporary 
files when upload small files




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
 
   Unread field:TimelineEventSubDoc.java:[line 56] 
   Unread field:TimelineMetricSubDoc.java:[line 44] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 
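
   The last two warnings describe the standard defensive equals() shape; an
   illustrative fix follows (the compared field is hypothetical, not
   WorkerId's actual layout):

       @Override
       public boolean equals(Object obj) {
         if (this == obj) {
           return true;
         }
         if (!(obj instanceof WorkerId)) {  // false for null and other types
           return false;
         }
         WorkerId other = (WorkerId) obj;
         return this.getWorkerId().equals(other.getWorkerId());
       }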

Failed junit tests :

   hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap 
   hadoop.hdfs.server.datanode.TestDataNodeMetrics 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.webapp.TestNMWebServices 
   
hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService
 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.cli.TestLogsCLI 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.v2.app.TestRuntimeEstimators 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1137/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1137/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1137/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1137/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1137/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1137/artifact/out/diff-patch-pylint.txt
  [96K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1137/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1137/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1137/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1137/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1137/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-documentstore-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1137/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1137/artifact/out/branch-findbugs-hadoop-submarine_hadoop-submarine-tony-runtime.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1137/artifact/out/branch-findbugs-hadoop-submarine_hadoop-submarine-yarnservice-runtime.txt
  [4.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1137/artifact/out/diff-javadoc-javadoc-root.txt
  [752K]

   unit:

   

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-05-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/322/

[May 14, 2019 9:09:12 PM] (wwei) HADOOP-16306. AliyunOSS: Remove temporary 
files when upload small files




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
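
   The boxing warning corresponds to a pattern like this (illustrative, not
   the actual ColumnRWHelper code):

       import java.util.HashMap;
       import java.util.Map;

       Map<Long, Object> results = new HashMap<>();
       Long ts = Long.valueOf(1557878400000L);  // boxed timestamp
       Object value = new Object();
       results.put((long) ts, value);  // unbox, then immediate rebox: flagged
       results.put(ts, value);         // keeping the boxed value avoids it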

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandby 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/322/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/322/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/322/artifact/out/diff-compile-cc-root-jdk1.8.0_191.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/322/artifact/out/diff-compile-javac-root-jdk1.8.0_191.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/322/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/322/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/322/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/322/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/322/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/322/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/322/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/322/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/322/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/322/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/322/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/322/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   

[jira] [Created] (HADOOP-16313) multipart/huge file upload tests to look at checksums returned

2019-05-15 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16313:
---

 Summary: multipart/huge file upload tests to look at checksums 
returned
 Key: HADOOP-16313
 URL: https://issues.apache.org/jira/browse/HADOOP-16313
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.2.0
Reporter: Steve Loughran


Apparently some third party S3-API object stores have empty etags for multipart 
uploads. 

We don't have any tests to explore this (just small file uploads). 

Proposed:

* set {{fs.s3a.etag.checksum.enabled}} before the huge file, MPU and committer 
  tests
* warn (fail?) if nothing comes back.

If nothing comes back, it means that the store isn't going to be able to detect 
changes to a large file during copy/read.
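
A minimal sketch of the proposed check, assuming a JUnit-style S3A integration
test; getFileSystem() and the path are stand-ins for the existing fixtures:

    import org.apache.hadoop.fs.FileChecksum;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // fs.s3a.etag.checksum.enabled must be set before the FS is instantiated
    public void testChecksumReturned() throws Exception {
      FileSystem fs = getFileSystem();             // stand-in for test fixture
      Path hugeFile = new Path("/test/hugefile");  // illustrative path
      FileChecksum checksum = fs.getFileChecksum(hugeFile);
      if (checksum == null) {
        // warn (or fail) so stores returning empty multipart etags show up
        System.err.println("No checksum returned for " + hugeFile);
      }
    }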



