Re: [DISCUSS] Docker build process

2019-03-21 Thread Eric Yang
The flexibility of a date-appended release number is equivalent to the Maven 
snapshot or Docker latest image convention; a machine can apply timestamps 
better than a human.  By using the Jenkins release process, this can be done 
with little effort.  For an official release, it is best to use the Docker 
image digest ID to ensure uniqueness, e.g.:

FROM centos@sha256:67dad89757a55bfdfabec8abd0e22f8c7c12a1856514726470228063ed86593b

Developers who download the released source would then build with the same 
Docker image, without side effects.
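
For illustration, one way to look up a base image's digest before pinning it 
(a sketch only; the image name here is just an example):

    docker pull centos:7
    docker inspect --format '{{index .RepoDigests 0}}' centos:7
    # prints something like centos@sha256:..., which can then be used in the FROM line above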

A couple of years ago, Red Hat decided to fix an SSL vulnerability in RHEL 6/7 
by adding an extra parameter to disable certificate validation in the urllib2 
Python library and turning certificate signer validation on by default.  It 
completely broke the Ambari agent and its self-signed certificate.  Customers 
had to backtrack to a specific version of the Python SSL library to keep their 
production clusters operational.  Without doing the due diligence of certifying 
the Hadoop code together with the OS image, there is wiggle room for errors.  
That OS update is a perfect example of why we want the container OS image 
certified together with the Hadoop binary release to avoid that wiggle room.  A 
snapshot release can leave wiggle room for developers, but I don't think that 
flexibility is necessary for an official release.

Regards,
Eric

On 3/21/19, 2:44 PM, "Elek, Marton"  wrote:



> If versioning is done correctly, older branches can have the same docker 
> subproject, and Hadoop 2.7.8 can be released for older Hadoop branches.  We 
> don't create a timeline paradox by allowing the history of Hadoop 2.7.1 to 
> change.  That release has passed; let it stay that way.

I understand your point but I am afraid that my concerns were not
expressed clearly enough (sorry for that).

Let's say that we use centos as the base image. In case of a security
problem on the centos side (e.g. in libssl) or the jdk side, I would rebuild
all the hadoop:2.x / hadoop:3.x images and republish them: exactly the
same hadoop bytes, but updated centos/jdk libraries.

I understand your concern that in this case an image with the same
tag (e.g. hadoop:3.2.1) would change over time. But this can be
solved by adding date-specific postfixes (e.g. the hadoop:3.2.1-20190321 tag
would never change, but hadoop:3.2.1 could).

I know that it's not perfect, but this is widely used. For example, the
centos:7 tag is not fixed, but centos:7.6.1810 is (hopefully).

Without this flexibility, any centos/jdk security issue can invalidate
all of our images (and would require new releases from all the active lines).

Marton





[jira] [Created] (HADOOP-16205) Backporting ABFS driver from trunk to branch 2.0

2019-03-21 Thread Esfandiar Manii (JIRA)
Esfandiar Manii created HADOOP-16205:


 Summary: Backporting ABFS driver from trunk to branch 2.0
 Key: HADOOP-16205
 URL: https://issues.apache.org/jira/browse/HADOOP-16205
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.2.0
Reporter: Esfandiar Manii
Assignee: Da Zhou
 Fix For: HADOOP-15407


Commit the core code of the ABFS connector (HADOOP-15407) to its development 
branch







Re: [DISCUSS] Docker build process

2019-03-21 Thread Elek, Marton



> If versioning is done correctly, older branches can have the same docker 
> subproject, and Hadoop 2.7.8 can be released for older Hadoop branches.  We 
> don't create a timeline paradox by allowing the history of Hadoop 2.7.1 to 
> change.  That release has passed; let it stay that way.

I understand your point but I am afraid that my concerns were not
expressed clearly enough (sorry for that).

Let's say that we use centos as the base image. In case of a security
problem on the centos side (e.g. in libssl) or the jdk side, I would rebuild
all the hadoop:2.x / hadoop:3.x images and republish them: exactly the
same hadoop bytes, but updated centos/jdk libraries.

I understand your concern that in this case an image with the same
tag (e.g. hadoop:3.2.1) would change over time. But this can be
solved by adding date-specific postfixes (e.g. the hadoop:3.2.1-20190321 tag
would never change, but hadoop:3.2.1 could).

I know that it's not perfect, but this is widely used. For example, the
centos:7 tag is not fixed, but centos:7.6.1810 is (hopefully).
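
A minimal sketch of what that publishing flow could look like (the image name
and tags here are purely illustrative, not an agreed convention):

    docker build -t apache/hadoop:3.2.1-20190321 .
    docker tag apache/hadoop:3.2.1-20190321 apache/hadoop:3.2.1
    docker push apache/hadoop:3.2.1-20190321   # immutable, date-stamped tag
    docker push apache/hadoop:3.2.1            # floating tag, re-pushed after a base image rebuild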

Without this flexibility, any centos/jdk security issue can invalidate
all of our images (and would require new releases from all the active lines).

Marton




[jira] [Reopened] (HADOOP-16058) S3A tests to include Terasort

2019-03-21 Thread Steve Loughran (JIRA)


 [ https://issues.apache.org/jira/browse/HADOOP-16058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran reopened HADOOP-16058:
-

reopening for a backport to branch-3.2

> S3A tests to include Terasort
> -
>
> Key: HADOOP-16058
> URL: https://issues.apache.org/jira/browse/HADOOP-16058
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16058-001.patch, HADOOP-16058-002.patch, 
> HADOOP-16058-002.patch, HADOOP-16058-002.patch
>
>
> Add S3A tests to run terasort for the magic and directory committers.
> MAPREDUCE-7091 is a requirement for this
> Bonus feature: print the results to see which committers are faster in the 
> specific test setup. As that's a function of latency to the store, bandwidth 
> and size of jobs, it's not at all meaningful, just interesting.







Re: [DISCUSS] Kerberos credential cache location in UGI

2019-03-21 Thread Eric Yang
IBM JDK is one of the JVMs that defaults the ticket cache to a path other than 
/tmp/krb5*.  If I recall correctly, most of this logic was implemented in UGI by 
passing the ticketCachePath parameter in the 2012-13 time frame.  The new 
addition will follow the MIT Kerberos lookup order; this is a good improvement 
with low risk, and I think it's great to have.
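
For reference, a minimal illustration of that lookup order (the paths and the
config value shown are examples only):

    # 1. KRB5CCNAME wins when it is set:
    export KRB5CCNAME=FILE:/tmp/krb5cc_example
    # 2. Otherwise default_ccache_name from [libdefaults] in /etc/krb5.conf applies,
    #    e.g. default_ccache_name = KEYRING:persistent:%{uid}
    # 3. Otherwise the compiled-in default (typically /tmp/krb5cc_<uid>) is used.
    klist   # resolves the credential cache using the order above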
 
Regards,
Eric

On 3/21/19, 8:35 AM, "Vipin Rathor"  wrote:

Thank you Steve for your reply.

> If you haven't guessed, Kerberos is an eternal source of pain and suffering
Agreed. But it hurts us further when our utilities don't behave the way
they are expected to.

> Any change must be matched with clarifications to the hadoop security docs, 
and KDiag extended to provide extra information about the source of the cache.
Understood. I’ll keep this in mind.

> One big risk here is over regressions across versions of clients
Yes, agreed again. We can keep the current behavior intact and introduce 
this change as a configurable option. I believe more Kerberos admins would like 
to opt for this as this is how any Kerberos client is expected to work.

Suggestions/comments?

Regards,
Vipin

> On Mar 19, 2019, at 03:27, Steve Loughran  
wrote:
> 
> If you haven't guessed, Kerberos is an eternal source of pain and
> suffering
> 
> Any change must be matched with clarifications to the hadoop security docs,
> and KDiag extended to provide extra information about the source of the
> cache.
> 
> One big risk here is over regressions across versions of clients
> 
> 
>> On Mon, Mar 18, 2019 at 11:48 PM Vipin Rathor  wrote:
>> 
>> Hello Devs,
>> I'm Vipin, a long-time Apache Hadoop user, and I like to tinker around in my
>> free time. I've been an MIT Kerberos contributor in my past life.
>> 
>> While chasing the Kerberos credential cache usage in Hadoop, I found out
>> that the UGI code[1] makes use of the KRB5CCNAME environment variable to find
>> the credential cache name and defaults to /tmp/krb5cc_$uid when there is no
>> KRB5CCNAME defined, while completely ignoring the value defined in
>> /etc/krb5.conf.
>> 
>> As per MIT Kerberos doc[2], the correct credential cache location logic
>> should be:
>> 
>> Default ccache name
>> The default credential cache name is determined by the following, in
>> descending order of priority:
>>The KRB5CCNAME environment variable. For example,
>> KRB5CCNAME=DIR:/mydir/.
>>The default_ccache_name profile variable in [libdefaults].
>>The hardcoded default, DEFCCNAME.
>> 
>> 
>> I propose to include support for reading default_ccache_name from
>> /etc/krb5.conf while deciding the right Kerberos credential cache to use.
>> 
>> I am currently testing a patch but wanted to check what the community
>> thinks before submitting.
>> 
>> Thanks for reading; I'm open to discussing any suggestions.
>> 
>> Regards,
>> Vipin
>> 
>> [1] https://github.com/apache/hadoop/blob/ae3a2c3851cbf7f010f7ae5734ed9e2dbac5d50c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L2045
>> [2] https://web.mit.edu/kerberos/krb5-1.15/doc/basic/ccache_def.html#default-ccache-name
>> 






Re: [DISCUSS] Kerberos credential cache location in UGI

2019-03-21 Thread Vipin Rathor
Thank you Steve for your reply.

> If you haven't guessed, Kerberos is an eternal source of pain and suffering
Agreed. But it hurts us further when our utilities don't behave the way they
are expected to.

> Any change must be matched with clarifications to the hadoop security docs, and 
> KDiag extended to provide extra information about the source of the cache.
Understood. I’ll keep this in mind.

> One big risk here is over regressions across versions of clients
Yes, agreed again. We can keep the current behavior intact and introduce this 
change as a configurable option. I believe more Kerberos admins would like to 
opt for this as this is how any Kerberos client is expected to work.

Suggestions/comments?

Regards,
Vipin

> On Mar 19, 2019, at 03:27, Steve Loughran  wrote:
> 
> If you haven't guessed, Kerberos is an eternal source of pain and
> suffering
> 
> Any change must be matched with clarifications to the hadoop security docs,
> and KDiag extended to provide extra information about the source of the
> cache.
> 
> One big risk here is over regressions across versions of clients
> 
> 
>> On Mon, Mar 18, 2019 at 11:48 PM Vipin Rathor  wrote:
>> 
>> Hello Devs,
>> I'm Vipin, a long-time Apache Hadoop user, and I like to tinker around in my
>> free time. I've been an MIT Kerberos contributor in my past life.
>> 
>> While chasing the Kerberos credential cache usage in Hadoop, I found out
>> that the UGI code[1] makes use of the KRB5CCNAME environment variable to find
>> the credential cache name and defaults to /tmp/krb5cc_$uid when there is no
>> KRB5CCNAME defined, while completely ignoring the value defined in
>> /etc/krb5.conf.
>> 
>> As per MIT Kerberos doc[2], the correct credential cache location logic
>> should be:
>> 
>> Default ccache name
>> The default credential cache name is determined by the following, in
>> descending order of priority:
>>The KRB5CCNAME environment variable. For example,
>> KRB5CCNAME=DIR:/mydir/.
>>The default_ccache_name profile variable in [libdefaults].
>>The hardcoded default, DEFCCNAME.
>> 
>> 
>> I propose to include support for reading default_ccache_name from
>> /etc/krb5.conf while deciding the right Kerberos credential cache to use.
>> 
>> I am currently testing a patch but wanted to check what the community
>> thinks before submitting.
>> 
>> Thanks for reading; I'm open to discussing any suggestions.
>> 
>> Regards,
>> Vipin
>> 
>> [1] https://github.com/apache/hadoop/blob/ae3a2c3851cbf7f010f7ae5734ed9e2dbac5d50c/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L2045
>> [2] https://web.mit.edu/kerberos/krb5-1.15/doc/basic/ccache_def.html#default-ccache-name
>> 




Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-03-21 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1082/

[Mar 20, 2019 11:44:19 AM] (msingh) HDDS-1306. 
TestContainerStateManagerIntegration fails in Ratis shutdown.
[Mar 20, 2019 3:48:19 PM] (rohithsharmaks) YARN-9389. FlowActivity and FlowRun 
table prefix is wrong. Contributed
[Mar 20, 2019 3:50:50 PM] (rohithsharmaks) YARN-9387. Update document for ATS 
HBase Custom tablenames
[Mar 20, 2019 3:52:54 PM] (rohithsharmaks) YARN-9357. Modify HBase Liveness 
monitor log to debug. Contributed by
[Mar 20, 2019 3:54:31 PM] (rohithsharmaks) YARN-9299. 
TestTimelineReaderWhitelistAuthorizationFilter ignores Http
[Mar 20, 2019 6:20:45 PM] (ajay) HDFS-14176. Replace incorrect use of system 
property user.name.
[Mar 20, 2019 7:45:37 PM] (eyang) YARN-9398.  Fixed javadoc errors for FPGA 
related java files.   
[Mar 20, 2019 11:12:19 PM] (eyang) YARN-9370.  Added logging for recovering 
assigned GPU devices.  




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
 
   
org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.entity.TimelineEntityDocument.setEvents(Map)
 makes inefficient use of keySet iterator instead of entrySet iterator At 
TimelineEntityDocument.java:keySet iterator instead of entrySet iterator At 
TimelineEntityDocument.java:[line 159] 
   
org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.entity.TimelineEntityDocument.setMetrics(Map)
 makes inefficient use of keySet iterator instead of entrySet iterator At 
TimelineEntityDocument.java:keySet iterator instead of entrySet iterator At 
TimelineEntityDocument.java:[line 142] 
   Unread field:TimelineEventSubDoc.java:[line 56] 
   Unread field:TimelineMetricSubDoc.java:[line 44] 
   Switch statement found in 
org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.flowrun.FlowRunDocument.aggregate(TimelineMetric,
 TimelineMetric) where default case is missing At 
FlowRunDocument.java:TimelineMetric) where default case is missing At 
FlowRunDocument.java:[lines 121-136] 
   
org.apache.hadoop.yarn.server.timelineservice.documentstore.collection.document.flowrun.FlowRunDocument.aggregateMetrics(Map)
 makes inefficient use of keySet iterator instead of entrySet iterator At 
FlowRunDocument.java:keySet iterator instead of entrySet iterator At 
FlowRunDocument.java:[line 103] 
   Possible doublecheck on 
org.apache.hadoop.yarn.server.timelineservice.documentstore.reader.cosmosdb.CosmosDBDocumentStoreReader.client
 in new 
org.apache.hadoop.yarn.server.timelineservice.documentstore.reader.cosmosdb.CosmosDBDocumentStoreReader(Configuration)
 At CosmosDBDocumentStoreReader.java:new 
org.apache.hadoop.yarn.server.timelineservice.documentstore.reader.cosmosdb.CosmosDBDocumentStoreReader(Configuration)
 At CosmosDBDocumentStoreReader.java:[lines 73-75] 
   Possible doublecheck on 
org.apache.hadoop.yarn.server.timelineservice.documentstore.writer.cosmosdb.CosmosDBDocumentStoreWriter.client
 in new 
org.apache.hadoop.yarn.server.timelineservice.documentstore.writer.cosmosdb.CosmosDBDocumentStoreWriter(Configuration)
 At CosmosDBDocumentStoreWriter.java:new 
org.apache.hadoop.yarn.server.timelineservice.documentstore.writer.cosmosdb.CosmosDBDocumentStoreWriter(Configuration)
 At CosmosDBDocumentStoreWriter.java:[lines 66-68] 

Failed junit tests :

   hadoop.ha.TestZKFailoverController 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.ozone.client.rpc.TestContainerStateMachineFailures 
   hadoop.ozone.client.rpc.TestFailureHandlingByClient 
   hadoop.ozone.client.rpc.TestBCSID 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1082/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1082/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   

[jira] [Created] (HADOOP-16204) ABFS tests to include terasort

2019-03-21 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16204:
---

 Summary: ABFS tests to include terasort
 Key: HADOOP-16204
 URL: https://issues.apache.org/jira/browse/HADOOP-16204
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure, test
Affects Versions: 3.3.0
Reporter: Steve Loughran


With MAPREDUCE-7092 in, all the MR examples can be run against object stores, 
even when the cluster fs is just file://.

Running these against ABFS helps validate that the store works with these 
workflows.
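
For illustration, a terasort run against an object store might look roughly like
this (the container/account names and row count are placeholders):

    hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
        teragen 1000 abfs://container@account.dfs.core.windows.net/tera/in
    hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
        terasort abfs://container@account.dfs.core.windows.net/tera/in \
                 abfs://container@account.dfs.core.windows.net/tera/out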







Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-03-21 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/267/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Dead store to state in 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream,
 INodeSymlink) At 
FSImageFormatPBINode.java:org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode$Saver.save(OutputStream,
 INodeSymlink) At FSImageFormatPBINode.java:[line 623] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.fs.sftp.TestSFTPFileSystem 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.namenode.TestDecommissioningStatus 
   hadoop.hdfs.server.blockmanagement.TestPendingReplication 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/267/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/267/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/267/artifact/out/diff-compile-cc-root-jdk1.8.0_191.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/267/artifact/out/diff-compile-javac-root-jdk1.8.0_191.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/267/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/267/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/267/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/267/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/267/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/267/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/267/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/267/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/267/artifact/out/xml.txt
  [20K]

   findbugs:

   

[jira] [Resolved] (HADOOP-15890) Some S3A committer tests don't match ITest* pattern; don't run in maven

2019-03-21 Thread Steve Loughran (JIRA)


 [ https://issues.apache.org/jira/browse/HADOOP-15890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-15890.
-
   Resolution: Duplicate
Fix Version/s: 3.3.0

> Some S3A committer tests don't match ITest* pattern; don't run in maven
> ---
>
> Key: HADOOP-15890
> URL: https://issues.apache.org/jira/browse/HADOOP-15890
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> some of the s3A committer tests don't have the right prefix for the maven IT 
> test runs to pick up
> {code}
> ITMagicCommitMRJob.java
> ITStagingCommitMRJobBad
> ITDirectoryCommitMRJob
> ITStagingCommitMRJob
> {code}
> They all work when run by name or in the IDE (which is where I developed 
> them), but they don't run in maven builds.
> Fix: rename. There are some new tests in branch-3.2 from HADOOP-15107 which 
> aren't in 3.1; need patches for both.



