Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-07-05 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/519/

[Jul 4, 2018 8:41:10 PM] (nanda) HDDS-212. Introduce NodeStateManager to manage the state of Datanodes in




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestIPC 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestNativeCodeLoader 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.mover.TestMover 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency 
   hadoop.hdfs.server.namenode.TestReencryption 
   hadoop.hdfs.server.namenode.TestReencryptionWithKMS 
   hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs 
   hadoop.hdfs.TestDFSShell 
   hadoop.hdfs.TestDFSStripedInputStream 
   hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.TestDFSUpgradeFromImage 
   hadoop.hdfs.TestFetchImage 
   hadoop.hdfs.TestFileConcurrentReader 
   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.hdfs.TestPread 
   hadoop.hdfs.TestReconstructStripedFile 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.tools.TestDFSAdmin 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.hdfs.web.TestWebHdfsUrl 
   hadoop.fs.http.server.TestHttpFSServerWebServer
   hadoop.yarn.logaggregation.filecontroller.ifile.TestLogAggregationIndexFileController
   hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch
   hadoop.yarn.server.nodemanager.containermanager.TestAuxServices
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
   hadoop.yarn.server.nodemanager.TestContainerExecutor
   hadoop.yarn.server.nodemanager.TestNodeManagerResync
   hadoop.yarn.server.webproxy.amfilter.TestAmFilter
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
   hadoop.yarn.server.timeline.security.TestTimelineAuthenticationFilterForV1
   hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
   hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestFSSchedulerConfigurationStore
   hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestLeveldbConfigurationStore
   hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler
   hadoop.yarn.server.resourcemanager.scheduler.constraint.TestPlacementProcessor
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2
   hadoop.yarn.client.api.impl.TestNMClient
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
   hadoop.yarn.server.timelineserv

Reply: [VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-05 Thread Lin,Yiqun(vip.com)
+1.

-----Original Message-----
From: Subru Krishnan [mailto:su...@apache.org]
Sent: July 6, 2018 5:59
To: Wei-Chiu Chuang
Cc: Rohith Sharma K S; Subramaniam Krishnan; yarn-...@hadoop.apache.org; 
Hdfs-dev; Hadoop Common; mapreduce-...@hadoop.apache.org
Subject: Re: [VOTE] reset/force push to clean up inadvertent merge commit pushed to 
trunk

Unfortunately, since it was a merge commit, it's less straightforward to revert.
You can find the details in the original mail thread:
http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201807.mbox/%3CCAHqguubKBqwfUMwhtJuSD7X1Bgfro_P6FV%2BhhFhMMYRaxFsF9Q%40mail.gmail.com%3E

On Thu, Jul 5, 2018 at 2:49 PM, Wei-Chiu Chuang  wrote:

> I'm sorry I've come to this thread late.
> Anu commented on INFRA-16727 saying he reverted the commit. Do we
> still need the vote?
>
> Thanks
>
> On Thu, Jul 5, 2018 at 2:47 PM Rohith Sharma K S <
> rohithsharm...@apache.org> wrote:
>
>> +1
>>
>> On 5 July 2018 at 14:37, Subru Krishnan  wrote:
>>
>> > Folks,
>> >
>> > There was a merge commit accidentally pushed to trunk, you can find
>> > the details in the mail thread [1].
>> >
>> > I have raised an INFRA ticket [2] to reset/force push to clean up trunk.
>> >
>> > Can we have a quick vote for INFRA sign-off to proceed as this is
>> blocking
>> > all commits?
>> >
>> > Thanks,
>> > Subru
>> >
>> > [1]
>> > http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201807.mbo
>> > x/% 3CCAHqguubKBqwfUMwhtJuSD7X1Bgfro_P6FV%2BhhFhMMYRaxFsF9Q%
>> > 40mail.gmail.com%3E
>> > [2] https://issues.apache.org/jira/browse/INFRA-16727
>> >
>>
>> --
>> A very happy Hadoop contributor
>>
>


[jira] [Created] (HADOOP-15585) Fix spaces in HADOOP_OPTS arguments

2018-07-05 Thread Brandon Scheller (JIRA)
Brandon Scheller created HADOOP-15585:
-

 Summary: Fix spaces in HADOOP_OPTS arguments
 Key: HADOOP-15585
 URL: https://issues.apache.org/jira/browse/HADOOP-15585
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.3
Reporter: Brandon Scheller


Prefix exec by eval in Hadoop bin scripts
Prior to this change, if HADOOP_OPTS contains any arguments that include a
space, the command is not parsed correctly. For example, if
HADOOP_OPTS="... -XX:OnOutOfMemoryError=\"kill -9 %p\" ...", the bin/hadoop
script will fail with the error "Unrecognized option: -9". No amount of clever
escaping of the quotes or spaces in the "kill -9 %p" command will fix this.

The only alternative appears to be to use 'eval'. Switching to use 'eval'
*instead of* 'exec' also works, but it results in an intermediate bash process
being left alive throughout the entire lifetime of the Java process being
started. Using 'exec' prefixed by 'eval' as has been done in this commit gets
the best of both worlds, in that options with spaces are parsed correctly, and
you don't end up with an intermediate bash process as the parent of the Java
process.

This is the exact approach that has been taken with Tomcat as well. See:
http://tomcat.10.x6.nabble.com/Using-eval-vs-exec-in-shell-scripts-td2193116.html
https://github.com/apache/tomcat/commit/3445dc3dba7b15f2fff27faef77003215f62e49a
https://github.com/apache/tomcat/commit/83c0aea60f331eb632dcea8e9919d234903e06d1
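To illustrate the failure mode and the fix, a minimal bash sketch (the variable
values and the org.example.Main class are hypothetical, not the actual bin/hadoop
code):

    # Hypothetical HADOOP_OPTS carrying an argument with embedded spaces:
    HADOOP_OPTS='-Xmx1g -XX:OnOutOfMemoryError="kill -9 %p"'

    # Plain exec word-splits HADOOP_OPTS without honoring the embedded quotes,
    # so the JVM sees a bare "-9" argument:
    #   exec java $HADOOP_OPTS org.example.Main   # fails: "Unrecognized option: -9"

    # eval re-parses the expanded command line, so the embedded quotes group
    # "kill -9 %p" into a single -XX argument; exec still replaces the shell,
    # leaving no intermediate bash process behind:
    eval exec java "$HADOOP_OPTS" org.example.Main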



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




Re: [DISCUSS]: securing ASF Hadoop releases out of the box

2018-07-05 Thread Eric Yang
+1 on the Non-routable IP idea.  My preference is to start in Hadoop-common to 
minimize the scope and incrementally improve.  However, this will be an 
incompatible change for the initial user experience on public cloud.  What would be 
the right release vehicle for this work (3.2+ or 4.x)?

Regards,
Eric

On 7/5/18, 2:33 PM, "larry mccay"  wrote:

+1 from me as well.

On Thu, Jul 5, 2018 at 5:19 PM, Steve Loughran 
wrote:

>
>
> > On 5 Jul 2018, at 23:15, Anu Engineer  wrote:
> >
> > +1, on the Non-Routable Idea. We like it so much that we added it to the
> Ozone roadmap.
> > https://issues.apache.org/jira/browse/HDDS-231
> >
> > If there is consensus on bringing this to Hadoop in general, we can
> build this feature in common.
> >
> > --Anu
> >
>
>
> +1 to out the box, everywhere. Web UIs included
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>





Re: [VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-05 Thread Subru Krishnan
Unfortunately, since it was a merge commit, it's less straightforward to revert.
You can find the details in the original mail thread:
http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201807.mbox/%3CCAHqguubKBqwfUMwhtJuSD7X1Bgfro_P6FV%2BhhFhMMYRaxFsF9Q%40mail.gmail.com%3E

On Thu, Jul 5, 2018 at 2:49 PM, Wei-Chiu Chuang  wrote:

> I'm sorry I've come to this thread late.
> Anu commented on INFRA-16727 saying he reverted the commit. Do we still
> need the vote?
>
> Thanks
>
> On Thu, Jul 5, 2018 at 2:47 PM Rohith Sharma K S <
> rohithsharm...@apache.org> wrote:
>
>> +1
>>
>> On 5 July 2018 at 14:37, Subru Krishnan  wrote:
>>
>> > Folks,
>> >
>> > There was a merge commit accidentally pushed to trunk, you can find the
>> > details in the mail thread [1].
>> >
>> > I have raised an INFRA ticket [2] to reset/force push to clean up trunk.
>> >
>> > Can we have a quick vote for INFRA sign-off to proceed as this is
>> blocking
>> > all commits?
>> >
>> > Thanks,
>> > Subru
>> >
>> > [1]
>> > http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201807.mbox/%
>> > 3CCAHqguubKBqwfUMwhtJuSD7X1Bgfro_P6FV%2BhhFhMMYRaxFsF9Q%
>> > 40mail.gmail.com%3E
>> > [2] https://issues.apache.org/jira/browse/INFRA-16727
>> >
>>
>> --
>> A very happy Hadoop contributor
>>
>


Re: [VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-05 Thread Wei-Chiu Chuang
I'm sorry I've come to this thread late.
Anu commented on INFRA-16727 saying he reverted the commit. Do we still
need the vote?

Thanks

On Thu, Jul 5, 2018 at 2:47 PM Rohith Sharma K S 
wrote:

> +1
>
> On 5 July 2018 at 14:37, Subru Krishnan  wrote:
>
> > Folks,
> >
> > There was a merge commit accidentally pushed to trunk, you can find the
> > details in the mail thread [1].
> >
> > I have raised an INFRA ticket [2] to reset/force push to clean up trunk.
> >
> > Can we have a quick vote for INFRA sign-off to proceed as this is
> blocking
> > all commits?
> >
> > Thanks,
> > Subru
> >
> > [1]
> > http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201807.mbox/%
> > 3CCAHqguubKBqwfUMwhtJuSD7X1Bgfro_P6FV%2BhhFhMMYRaxFsF9Q%
> > 40mail.gmail.com%3E
> > [2] https://issues.apache.org/jira/browse/INFRA-16727
> >
>
> --
> A very happy Hadoop contributor
>


Re: [VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-05 Thread varunsax...@apache.org
+1

On Fri, 6 Jul 2018 at 3:07 AM, Subru Krishnan  wrote:

> Folks,
>
> There was a merge commit accidentally pushed to trunk, you can find the
> details in the mail thread [1].
>
> I have raised an INFRA ticket [2] to reset/force push to clean up trunk.
>
> Can we have a quick vote for INFRA sign-off to proceed as this is blocking
> all commits?
>
> Thanks,
> Subru
>
> [1]
>
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201807.mbox/%3CCAHqguubKBqwfUMwhtJuSD7X1Bgfro_P6FV%2BhhFhMMYRaxFsF9Q%40mail.gmail.com%3E
> [2] https://issues.apache.org/jira/browse/INFRA-16727
>


Re: [VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-05 Thread Rohith Sharma K S
+1

On 5 July 2018 at 14:37, Subru Krishnan  wrote:

> Folks,
>
> There was a merge commit accidentally pushed to trunk, you can find the
> details in the mail thread [1].
>
> I have raised an INFRA ticket [2] to reset/force push to clean up trunk.
>
> Can we have a quick vote for INFRA sign-off to proceed as this is blocking
> all commits?
>
> Thanks,
> Subru
>
> [1]
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201807.mbox/%
> 3CCAHqguubKBqwfUMwhtJuSD7X1Bgfro_P6FV%2BhhFhMMYRaxFsF9Q%
> 40mail.gmail.com%3E
> [2] https://issues.apache.org/jira/browse/INFRA-16727
>


Re: [VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-05 Thread Sunil G
+1 for this.

- Sunil


On Thu, Jul 5, 2018 at 2:37 PM Subru Krishnan  wrote:

> Folks,
>
> There was a merge commit accidentally pushed to trunk, you can find the
> details in the mail thread [1].
>
> I have raised an INFRA ticket [2] to reset/force push to clean up trunk.
>
> Can we have a quick vote for INFRA sign-off to proceed as this is blocking
> all commits?
>
> Thanks,
> Subru
>
> [1]
>
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201807.mbox/%3CCAHqguubKBqwfUMwhtJuSD7X1Bgfro_P6FV%2BhhFhMMYRaxFsF9Q%40mail.gmail.com%3E
> [2] https://issues.apache.org/jira/browse/INFRA-16727
>


Re: [VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-05 Thread Eric Yang
+1

On 7/5/18, 2:44 PM, "Giovanni Matteo Fumarola"  
wrote:

+1

On Thu, Jul 5, 2018 at 2:41 PM, Wangda Tan  wrote:

> +1
>
> On Thu, Jul 5, 2018 at 2:37 PM Subru Krishnan  wrote:
>
> > Folks,
> >
> > There was a merge commit accidentally pushed to trunk, you can find the
> > details in the mail thread [1].
> >
> > I have raised an INFRA ticket [2] to reset/force push to clean up trunk.
> >
> > Can we have a quick vote for INFRA sign-off to proceed as this is
> blocking
> > all commits?
> >
> > Thanks,
> > Subru
> >
> > [1]
> >
> > http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201807.mbox/%
> 3CCAHqguubKBqwfUMwhtJuSD7X1Bgfro_P6FV%2BhhFhMMYRaxFsF9Q%
> 40mail.gmail.com%3E
> > [2] https://issues.apache.org/jira/browse/INFRA-16727
> >
>




Re: [VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-05 Thread Giovanni Matteo Fumarola
+1

On Thu, Jul 5, 2018 at 2:41 PM, Wangda Tan  wrote:

> +1
>
> On Thu, Jul 5, 2018 at 2:37 PM Subru Krishnan  wrote:
>
> > Folks,
> >
> > There was a merge commit accidentally pushed to trunk, you can find the
> > details in the mail thread [1].
> >
> > I have raised an INFRA ticket [2] to reset/force push to clean up trunk.
> >
> > Can we have a quick vote for INFRA sign-off to proceed as this is
> blocking
> > all commits?
> >
> > Thanks,
> > Subru
> >
> > [1]
> >
> > http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201807.mbox/%
> 3CCAHqguubKBqwfUMwhtJuSD7X1Bgfro_P6FV%2BhhFhMMYRaxFsF9Q%
> 40mail.gmail.com%3E
> > [2] https://issues.apache.org/jira/browse/INFRA-16727
> >
>


Re: [VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-05 Thread Wangda Tan
+1

On Thu, Jul 5, 2018 at 2:37 PM Subru Krishnan  wrote:

> Folks,
>
> There was a merge commit accidentally pushed to trunk, you can find the
> details in the mail thread [1].
>
> I have raised an INFRA ticket [2] to reset/force push to clean up trunk.
>
> Can we have a quick vote for INFRA sign-off to proceed as this is blocking
> all commits?
>
> Thanks,
> Subru
>
> [1]
>
> http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201807.mbox/%3CCAHqguubKBqwfUMwhtJuSD7X1Bgfro_P6FV%2BhhFhMMYRaxFsF9Q%40mail.gmail.com%3E
> [2] https://issues.apache.org/jira/browse/INFRA-16727
>


[VOTE] reset/force push to clean up inadvertent merge commit pushed to trunk

2018-07-05 Thread Subru Krishnan
Folks,

There was a merge commit accidentally pushed to trunk, you can find the
details in the mail thread [1].

I have raised an INFRA ticket [2] to reset/force push to clean up trunk.

Can we have a quick vote for INFRA sign-off to proceed as this is blocking
all commits?

Thanks,
Subru

[1]
http://mail-archives.apache.org/mod_mbox/hadoop-yarn-dev/201807.mbox/%3CCAHqguubKBqwfUMwhtJuSD7X1Bgfro_P6FV%2BhhFhMMYRaxFsF9Q%40mail.gmail.com%3E
[2] https://issues.apache.org/jira/browse/INFRA-16727


Re: [DISCUSS]: securing ASF Hadoop releases out of the box

2018-07-05 Thread larry mccay
+1 from me as well.

On Thu, Jul 5, 2018 at 5:19 PM, Steve Loughran 
wrote:

>
>
> > On 5 Jul 2018, at 23:15, Anu Engineer  wrote:
> >
> > +1, on the Non-Routable Idea. We like it so much that we added it to the
> Ozone roadmap.
> > https://issues.apache.org/jira/browse/HDDS-231
> >
> > If there is consensus on bringing this to Hadoop in general, we can
> build this feature in common.
> >
> > --Anu
> >
>
>
> +1 to out the box, everywhere. Web UIs included
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Re: Merge branch commit in trunk by mistake

2018-07-05 Thread Anu Engineer
I ran "git revert c163d1797ade0f47d35b4a44381b8ef1dfec5b60 -m 1"

That will remove all changes from Giovanni's branch (there are 3 YARN commits). 
I am presuming that he can recommit the dropped changes directly into trunk.

I do not know of a better way than losing the changes from his branch. I am open 
to force pushing if that is needed.

--Anu
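For anyone following along, a sketch of the two options being weighed in this
thread (both commit hashes are taken from the revert shown later in this digest;
the reset path assumes INFRA temporarily unblocks force pushes):

    # Revert the merge commit while keeping history; -m 1 names the mainline
    # parent so only the merged-in side is undone:
    git revert -m 1 c163d1797ade0f47d35b4a44381b8ef1dfec5b60

    # Or reset trunk to the commit before the merge and force push, which is
    # what INFRA-16727 requests (this rewrites public history):
    git reset --hard 0d9804dcef2eab5ebf84667d9ca49bb035d9a731
    git push --force origin trunk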


On 7/5/18, 2:20 PM, "Wangda Tan"  wrote:

Adding back hdfs/common/mr-dev again to cc list.

Here's the last merge revert commit:

https://github.com/apache/hadoop/commit/39ad98903a5f042573b97a2e5438bc57af7cc7a1


On Thu, Jul 5, 2018 at 2:17 PM Wangda Tan  wrote:

> It looks like the latest revert is not correct; many commits got
> reverted.
>
> Dealing with merge commit revert is different from reverting a normal
> commit: https://www.christianengvall.se/undo-pushed-merge-git/
>
> We have to do a force reset; now it is a complete mess in trunk.
>
>
>
> On Thu, Jul 5, 2018 at 2:10 PM Vinod Kumar Vavilapalli 

> wrote:
>
>> What is broken due to this merge commit?
>>
>> +Vinod
>>
>> > On Jul 5, 2018, at 2:03 PM, Arun Suresh  wrote:
>> >
>> > I agree with Sean, to be honest.. it is disruptive.
>> > Also, we have to kind of lock down the repo till it is completed..
>> >
>> > I recommend we be careful and try not to get into this situation 
again..
>> >
>> > -1 on force pushing..
>> >
>> > Cheers
>> > -Arun
>> >
>> > On Thu, Jul 5, 2018, 1:55 PM Sean Busbey  wrote:
>> >
>> >> If we need a vote, please have a thread with either DISCUSS or
>> >> preferably VOTE in the subject so folks are more likely to see it.
>> >>
>> >> that said, I'm -1 (non-binding). force pushes are extremely
>> >> disruptive. there's no way to know who's updated their local git repo
>> >> to include these changes in the last few hours. if a merge commit is
>> >> so disruptive that we need to subject folks to the inconvenience of a
>> >> force push then we should have more tooling in place to avoid them
>> >> (like client side git hooks for all committers).
>> >>
>> >> On Thu, Jul 5, 2018 at 3:36 PM, Wangda Tan 
>> wrote:
>> >>> +1 for force reset the branch.
>> >>>
>> >>> On Thu, Jul 5, 2018 at 12:14 PM Subru Krishnan 
>> wrote:
>> >>>
>>  Looking at the merge commit, I feel it's better to reset/force push
>>  especially since this is still the latest commit on trunk.
>> 
>>  I have raised an INFRA ticket requesting the same:
>>  https://issues.apache.org/jira/browse/INFRA-16727
>> 
>>  -S
>> 
>>  On Thu, Jul 5, 2018 at 11:45 AM, Sean Busbey
>> >> 
>>  wrote:
>> 
>> > FYI, no images make it through ASF mailing lists. I presume the
>> image
>> >> was
>> > of the git history? If that's correct, here's what that looks like
>> in
>> >> a
>> > paste:
>> >
>> > https://paste.apache.org/eRix
>> >
>> > There are no force pushes on trunk, so backing the change out would
>>  require
>> > the PMC asking INFRA to unblock force pushes for a period of time.
>> >
>> > Probably the merge commit isn't a big enough deal to do that. There
>> >> was a
>> > merge commit ~5 months ago for when YARN-6592 merged into trunk.
>> >
>> > So I'd say just try to avoid doing it in the future?
>> >
>> > -busbey
>> >
>> > On Thu, Jul 5, 2018 at 1:31 PM, Giovanni Matteo Fumarola <
>> > giovanni.fumar...@gmail.com> wrote:
>> >
>> >> Hi folks,
>> >>
>> >> After I pushed something on trunk a merge commit showed up in the
>> > history. *My
>> >> bad*.
>> >>
>> >>
>> >>
>> >> Since it was one of my first patches, I ran a few tests on my
>> >> machine
>> >> before checking in.
>> >> While I was running all the tests, someone else checked in. I
>> >> correctly
>> >> pulled all the new changes.
>> >>
>> >> Even before I did the "git push" there was no merge commit in my
>>  history.
>> >>
>> >> Can someone help me reverting this change?
>> >>
>> >> Thanks
>> >> Giovanni
>> >>
>> >>
>> >>
>> >
>> >
>> > --
>> > busbey
>> >
>> 
>> >>
>> >>
>> >>
>> >> --
>> >> busbey
>> >>
>>
>>





Re: [DISCUSS]: securing ASF Hadoop releases out of the box

2018-07-05 Thread Steve Loughran



> On 5 Jul 2018, at 23:15, Anu Engineer  wrote:
> 
> +1, on the Non-Routable Idea. We like it so much that we added it to the 
> Ozone roadmap.
> https://issues.apache.org/jira/browse/HDDS-231
> 
> If there is consensus on bringing this to Hadoop in general, we can build 
> this feature in common.
> 
> --Anu
> 


+1 to out the box, everywhere. Web UIs included





Re: Merge branch commit in trunk by mistake

2018-07-05 Thread Wangda Tan
Adding back hdfs/common/mr-dev again to cc list.

Here's the last merge revert commit:
https://github.com/apache/hadoop/commit/39ad98903a5f042573b97a2e5438bc57af7cc7a1


On Thu, Jul 5, 2018 at 2:17 PM Wangda Tan  wrote:

> It looks like the latest revert is not correct; many commits got
> reverted.
>
> Dealing with merge commit revert is different from reverting a normal
> commit: https://www.christianengvall.se/undo-pushed-merge-git/
>
> We have to do a force reset; now it is a complete mess in trunk.
>
>
>
> On Thu, Jul 5, 2018 at 2:10 PM Vinod Kumar Vavilapalli 
> wrote:
>
>> What is broken due to this merge commit?
>>
>> +Vinod
>>
>> > On Jul 5, 2018, at 2:03 PM, Arun Suresh  wrote:
>> >
>> > I agree with Sean, to be honest.. it is disruptive.
>> > Also, we have to kind of lock down the repo till it is completed..
>> >
>> > I recommend we be careful and try not to get into this situation again..
>> >
>> > -1 on force pushing..
>> >
>> > Cheers
>> > -Arun
>> >
>> > On Thu, Jul 5, 2018, 1:55 PM Sean Busbey  wrote:
>> >
>> >> If we need a vote, please have a thread with either DISCUSS or
>> >> preferably VOTE in the subject so folks are more likely to see it.
>> >>
>> >> that said, I'm -1 (non-binding). force pushes are extremely
>> >> disruptive. there's no way to know who's updated their local git repo
>> >> to include these changes in the last few hours. if a merge commit is
>> >> so disruptive that we need to subject folks to the inconvenience of a
>> >> force push then we should have more tooling in place to avoid them
>> >> (like client side git hooks for all committers).
>> >>
>> >> On Thu, Jul 5, 2018 at 3:36 PM, Wangda Tan 
>> wrote:
>> >>> +1 for force reset the branch.
>> >>>
>> >>> On Thu, Jul 5, 2018 at 12:14 PM Subru Krishnan 
>> wrote:
>> >>>
>>  Looking at the merge commit, I feel it's better to reset/force push
>>  especially since this is still the latest commit on trunk.
>> 
>>  I have raised an INFRA ticket requesting the same:
>>  https://issues.apache.org/jira/browse/INFRA-16727
>> 
>>  -S
>> 
>>  On Thu, Jul 5, 2018 at 11:45 AM, Sean Busbey
>> >> 
>>  wrote:
>> 
>> > FYI, no images make it through ASF mailing lists. I presume the
>> image
>> >> was
>> > of the git history? If that's correct, here's what that looks like
>> in
>> >> a
>> > paste:
>> >
>> > https://paste.apache.org/eRix
>> >
>> > There are no force pushes on trunk, so backing the change out would
>>  require
>> > the PMC asking INFRA to unblock force pushes for a period of time.
>> >
>> > Probably the merge commit isn't a big enough deal to do that. There
>> >> was a
>> > merge commit ~5 months ago for when YARN-6592 merged into trunk.
>> >
>> > So I'd say just try to avoid doing it in the future?
>> >
>> > -busbey
>> >
>> > On Thu, Jul 5, 2018 at 1:31 PM, Giovanni Matteo Fumarola <
>> > giovanni.fumar...@gmail.com> wrote:
>> >
>> >> Hi folks,
>> >>
>> >> After I pushed something on trunk a merge commit showed up in the
>> > history. *My
>> >> bad*.
>> >>
>> >>
>> >>
>> >> Since it was one of my first patches, I ran a few tests on my
>> >> machine
>> >> before checking in.
>> >> While I was running all the tests, someone else checked in. I
>> >> correctly
>> >> pulled all the new changes.
>> >>
>> >> Even before I did the "git push" there was no merge commit in my
>>  history.
>> >>
>> >> Can someone help me reverting this change?
>> >>
>> >> Thanks
>> >> Giovanni
>> >>
>> >>
>> >>
>> >
>> >
>> > --
>> > busbey
>> >
>> 
>> >>
>> >>
>> >>
>> >> --
>> >> busbey
>> >>
>>
>>


Re: [DISCUSS]: securing ASF Hadoop releases out of the box

2018-07-05 Thread Anu Engineer
+1 on the Non-Routable idea. We like it so much that we added it to the Ozone 
roadmap.
https://issues.apache.org/jira/browse/HDDS-231

If there is consensus on bringing this to Hadoop in general, we can build this 
feature in common.

--Anu


On 7/5/18, 1:09 PM, "Sean Busbey"  wrote:

I really, really like the approach of defaulting to only non-routeable
IPs allowed. It seems like a good tradeoff for complexity of
implementation, pain to reconfigure, and level of protection.

On Thu, Jul 5, 2018 at 2:25 PM, Todd Lipcon  
wrote:
> The approach we took in Apache Kudu is that, if Kerberos hasn't been
> enabled, we default to a whitelist of subnets. The default whitelist is
> 127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,169.254.0.0/16 which
> matches the IANA "non-routeable IP" subnet list.
>
> In other words, out-of-the-box, you get a deployment that works fine 
within
> a typical LAN environment, but won't allow some remote hacker to locate
> your cluster and access your data. We thought this was a nice balance
> between "works out of the box without lots of configuration" and "decent
> security". In my opinion a "localhost-only by default" would be overly
> restrictive since I'd usually be deploying on some datacenter or EC2
> machine and then trying to access it from a client on my laptop.
>
> We released this first a bit over a year ago if my memory serves me, and
> we've had relatively few complaints or questions about it. We also made
> sure that the error message that comes back to clients is pretty
> reasonable, indicating the specific configuration that is disallowing
> access, so if people hit the issue on upgrade they had a clear idea what 
is
> going on.
>
> Of course it's not foolproof, since as Eric says, you're still likely open
> to the entirety of your corporation, and you may not want that, but as he
> also pointed out, that might be true even if you enable Kerberos
> authentication.
>
> -Todd
>
> On Thu, Jul 5, 2018 at 11:38 AM, Eric Yang  wrote:
>
>> Hadoop default configuration aimed for user friendliness to increase
>> adoption, and security can be enabled one by one.  This approach is most
>> problematic to security because system can be compromised before all
>> security features are turned on.
>> Larry's proposal will add some safety to remind system admin if security
>> is disabled.  However, reducing the number of knobs on security configs 
are
>> likely required to make the system secure for the banner idea to work
>> without writing too much guessing logic to determine if UI is secured.
>> Penetration test can provide better insights of what hasn't been secured 
to
>> improve the next release.  Thankfully most Hadoop vendors have done this
>> work periodically to help the community secure Hadoop.
>>
>> There are plenty of company advertised if you want security, use
>> Kerberos.  This statement is not entirely true.  Kerberos makes security
>> more difficult to crack for external parties, but it shouldn't be the 
only
>> method to secure Hadoop.  When the Kerberos environment is larger than
>> Hadoop cluster, anyone within Kerberos environment can access Hadoop
>> cluster freely without restriction.  In large scale enterprises or some
>> cloud vendors that sublet their resources, this might not be acceptable.
>>
>> From my point of view, a secure Hadoop release must default all settings
>> to localhost only and allow users to add more hosts through authorized
>> white list of servers.  This will keep security perimeter in check.  All
>> wild card ACLs will need to be removed or default to current user/current
>> host only.  Proxy user/host ACL list must be enforced on http channels.
>> This is basically realigning the default configuration to single node
>> cluster or firewalled configuration.
>>
>> Regards,
>> Eric
>>
>> On 7/5/18, 8:24 AM, "larry mccay"  wrote:
>>
>> Hi Steve -
>>
>> This is a long overdue DISCUSS thread!
>>
>> Perhaps the UIs can very visibly state (in red) "WARNING: UNSECURED 
UI
>> ACCESS - OPEN TO COMPROMISE" - maybe even force a click through the
>> warning
>> to get to the page like SSL exceptions in the browser do?
>> Similar tactic for UI access without SSL?
>> A new AuthenticationFilter can be added to the filter chains that
>> blocks
>> API calls unless explicitly configured to be open and obvious log a
>> similar
>> message?
>>
>> thanks,
>>
>> --larry
>>
>>
>>
>>
>> On Wed, Jul 4, 2018 at 11:58 AM, Steve Loughran <
>> ste...@hortonworks.com>
>> wrote:
>>
>> > Bitcoins are profitable enough to justify writing malwar

[jira] [Created] (HADOOP-15584) move httpcomponents version in pom.xml

2018-07-05 Thread Brandon Scheller (JIRA)
Brandon Scheller created HADOOP-15584:
-

 Summary: move httpcomponents version in pom.xml
 Key: HADOOP-15584
 URL: https://issues.apache.org/jira/browse/HADOOP-15584
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.8.3
Reporter: Brandon Scheller


Move the httpcomponents versions to their own config properties.

By moving the httpcomponents versions in pom.xml into their own variables, this 
will allow them to be easily overridden.
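A sketch of the idea, with hypothetical property names and example versions (not
the actual hadoop-project/pom.xml entries):

    <properties>
      <!-- hypothetical property names; versions are examples only -->
      <httpclient.version>4.5.2</httpclient.version>
      <httpcore.version>4.4.4</httpcore.version>
    </properties>
    ...
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>${httpclient.version}</version>
    </dependency>

With the version behind a property, a build could override it from the command
line, e.g. mvn package -Dhttpclient.version=4.5.13.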



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




Re: Merge branch commit in trunk by mistake

2018-07-05 Thread Wangda Tan
+ hdfs-dev/common-dev/mapreduce-dev

On Thu, Jul 5, 2018 at 2:09 PM Sunil G  wrote:

> I just see that this is reverted.
>
> commit 39ad98903a5f042573b97a2e5438bc57af7cc7a1 (origin/trunk, origin/HEAD)
> Author: Anu Engineer 
> Date:   Thu Jul 5 12:22:18 2018 -0700
>
> Revert "Merge branch 'trunk' of
> https://git-wip-us.apache.org/repos/asf/hadoop into trunk"
>
> This reverts commit c163d1797ade0f47d35b4a44381b8ef1dfec5b60, reversing
> changes made to 0d9804dcef2eab5ebf84667d9ca49bb035d9a731.
>
> commit c163d1797ade0f47d35b4a44381b8ef1dfec5b60
> Merge: 0d9804dcef2 99febe7fd50
> Author: Giovanni Matteo Fumarola 
> Date:   Thu Jul 5 10:55:05 2018 -0700
>
> Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/hadoop
> into trunk
>
>
> - Sunil
>
> On Thu, Jul 5, 2018 at 2:04 PM Arun Suresh  wrote:
>
> > I agree with Sean, to be honest.. it is disruptive.
> > Also, we have to kind of lock down the repo till it is completed..
> >
> > I recommend we be careful and try not to get into this situation again..
> >
> > -1 on force pushing..
> >
> > Cheers
> > -Arun
> >
> > On Thu, Jul 5, 2018, 1:55 PM Sean Busbey  wrote:
> >
> > > If we need a vote, please have a thread with either DISCUSS or
> > > preferably VOTE in the subject so folks are more likely to see it.
> > >
> > > that said, I'm -1 (non-binding). force pushes are extremely
> > > disruptive. there's no way to know who's updated their local git repo
> > > to include these changes in the last few hours. if a merge commit is
> > > so disruptive that we need to subject folks to the inconvenience of a
> > > force push then we should have more tooling in place to avoid them
> > > (like client side git hooks for all committers).
> > >
> > > On Thu, Jul 5, 2018 at 3:36 PM, Wangda Tan 
> wrote:
> > > > +1 for force reset the branch.
> > > >
> > > > On Thu, Jul 5, 2018 at 12:14 PM Subru Krishnan 
> > wrote:
> > > >
> > > >> Looking at the merge commit, I feel it's better to reset/force push
> > > >> especially since this is still the latest commit on trunk.
> > > >>
> > > >> I have raised an INFRA ticket requesting the same:
> > > >> https://issues.apache.org/jira/browse/INFRA-16727
> > > >>
> > > >> -S
> > > >>
> > > >> On Thu, Jul 5, 2018 at 11:45 AM, Sean Busbey
> > > 
> > > >> wrote:
> > > >>
> > > >> > FYI, no images make it through ASF mailing lists. I presume the
> > image
> > > was
> > > >> > of the git history? If that's correct, here's what that looks like
> > in
> > > a
> > > >> > paste:
> > > >> >
> > > >> > https://paste.apache.org/eRix
> > > >> >
> > > >> > There are no force pushes on trunk, so backing the change out
> would
> > > >> require
> > > >> > the PMC asking INFRA to unblock force pushes for a period of time.
> > > >> >
> > > >> > Probably the merge commit isn't a big enough deal to do that.
> There
> > > was a
> > > >> > merge commit ~5 months ago for when YARN-6592 merged into trunk.
> > > >> >
> > > >> > So I'd say just try to avoid doing it in the future?
> > > >> >
> > > >> > -busbey
> > > >> >
> > > >> > On Thu, Jul 5, 2018 at 1:31 PM, Giovanni Matteo Fumarola <
> > > >> > giovanni.fumar...@gmail.com> wrote:
> > > >> >
> > > >> > > Hi folks,
> > > >> > >
> > > >> > > After I pushed something on trunk a merge commit showed up in
> the
> > > >> > history. *My
> > > >> > > bad*.
> > > >> > >
> > > >> > >
> > > >> > >
> > > >> > > Since it was one of my first patches, I ran a few tests on my
> > > machine
> > > >> > > before checking in.
> > > >> > > While I was running all the tests, someone else checked in. I
> > > correctly
> > > >> > > pulled all the new changes.
> > > >> > >
> > > >> > > Even before I did the "git push" there was no merge commit in my
> > > >> history.
> > > >> > >
> > > >> > > Can someone help me reverting this change?
> > > >> > >
> > > >> > > Thanks
> > > >> > > Giovanni
> > > >> > >
> > > >> > >
> > > >> > >
> > > >> >
> > > >> >
> > > >> > --
> > > >> > busbey
> > > >> >
> > > >>
> > >
> > >
> > >
> > > --
> > > busbey
> > >
> >
>


Re: Merge branch commit in trunk by mistake

2018-07-05 Thread Anu Engineer
Based on conversations with Giovanni and Subru, I have pushed a revert for this 
merge.

Thanks
Anu


On 7/5/18, 12:55 PM, "Giovanni Matteo Fumarola"  
wrote:

+ common-dev and hdfs-dev as fyi.

Thanks Subru and Sean for the answer.

On Thu, Jul 5, 2018 at 12:14 PM, Subru Krishnan  wrote:

> Looking at the merge commit, I feel it's better to reset/force push
> especially since this is still the latest commit on trunk.
>
> I have raised an INFRA ticket requesting the same:
> https://issues.apache.org/jira/browse/INFRA-16727
>
> -S
>
> On Thu, Jul 5, 2018 at 11:45 AM, Sean Busbey 
> wrote:
>
>> FYI, no images make it through ASF mailing lists. I presume the image was
>> of the git history? If that's correct, here's what that looks like in a
>> paste:
>>
>> https://paste.apache.org/eRix
>>
>> There are no force pushes on trunk, so backing the change out would
>> require
>> the PMC asking INFRA to unblock force pushes for a period of time.
>>
>> Probably the merge commit isn't a big enough deal to do that. There was a
>> merge commit ~5 months ago for when YARN-6592 merged into trunk.
>>
>> So I'd say just try to avoid doing it in the future?
>>
>> -busbey
>>
>> On Thu, Jul 5, 2018 at 1:31 PM, Giovanni Matteo Fumarola <
>> giovanni.fumar...@gmail.com> wrote:
>>
>> > Hi folks,
>> >
>> > After I pushed something on trunk a merge commit showed up in the
>> history. *My
>> > bad*.
>> >
>> >
>> >
>> > Since it was one of my first patches, I ran a few tests on my machine
>> > before checking in.
>> > While I was running all the tests, someone else checked in. I correctly
>> > pulled all the new changes.
>> >
>> > Even before I did the "git push" there was no merge commit in my
>> history.
>> >
>> > Can someone help me reverting this change?
>> >
>> > Thanks
>> > Giovanni
>> >
>> >
>> >
>>
>>
>> --
>> busbey
>>
>
>




[jira] [Created] (HADOOP-15583) S3Guard to get AWS Credential chain from S3AFS

2018-07-05 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15583:
---

 Summary: S3Guard to get AWS Credential chain from S3AFS
 Key: HADOOP-15583
 URL: https://issues.apache.org/jira/browse/HADOOP-15583
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.0
Reporter: Steve Loughran


S3Guard builds its DDB auth chain itself, which stops it having to worry about 
being created standalone vs part of an S3AFS, but it means its authenticators 
are in a separate chain.

When you are using short-lived assumed roles or other session credentials 
updated in the S3A FS authentication chain, you need that same set of 
credentials picked up by DDB. Otherwise, at best you are doubling load; at 
worst, the DDB connector may not get refreshed credentials.

Proposed: {{DynamoDBClientFactory.createDynamoDBClient()}} to take an optional 
ref to aws credentials. If set: don't create a new set. 

There's one little complication here: our {{AWSCredentialProviderList}} list is 
autocloseable; its close() will go through all children and close them. 
Apparently the AWS S3 client (and hopefully the DDB client) will close this 
when they are closed themselves. If DDB  has the same set of credentials as the 
FS, then there could be trouble if they are closed in one place when the other 
still wants to use them.

Solution: keep a use count on the credentials list, starting at one: 
every close() call decrements, and when this hits zero the cleanup is kicked off.
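A minimal sketch of the proposed use count, assuming a hypothetical wrapper class
(this is not the actual AWSCredentialProviderList API):

    // Shared credentials wrapped with a use count that starts at one; each
    // close() decrements, and the last close performs the real cleanup.
    import java.util.concurrent.atomic.AtomicInteger;

    class RefCountedCredentials implements AutoCloseable {
      private final AtomicInteger useCount = new AtomicInteger(1);

      /** Called when another client (e.g. the DDB connector) shares this list. */
      RefCountedCredentials retain() {
        useCount.incrementAndGet();
        return this;
      }

      @Override
      public void close() {
        if (useCount.decrementAndGet() == 0) {
          // last user gone: close the child credential providers here
        }
      }
    }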





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




Re: [DISCUSS]: securing ASF Hadoop releases out of the box

2018-07-05 Thread Sean Busbey
I really, really like the approach of defaulting to only non-routeable
IPs allowed. It seems like a good tradeoff for complexity of
implementation, pain to reconfigure, and level of protection.

On Thu, Jul 5, 2018 at 2:25 PM, Todd Lipcon  wrote:
> The approach we took in Apache Kudu is that, if Kerberos hasn't been
> enabled, we default to a whitelist of subnets. The default whitelist is
> 127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,169.254.0.0/16 which
> matches the IANA "non-routeable IP" subnet list.
>
> In other words, out-of-the-box, you get a deployment that works fine within
> a typical LAN environment, but won't allow some remote hacker to locate
> your cluster and access your data. We thought this was a nice balance
> between "works out of the box without lots of configuration" and "decent
> security". In my opinion a "localhost-only by default" would be overly
> restrictive since I'd usually be deploying on some datacenter or EC2
> machine and then trying to access it from a client on my laptop.
>
> We released this first a bit over a year ago if my memory serves me, and
> we've had relatively few complaints or questions about it. We also made
> sure that the error message that comes back to clients is pretty
> reasonable, indicating the specific configuration that is disallowing
> access, so if people hit the issue on upgrade they had a clear idea what is
> going on.
>
> Of course it's not foolproof, since as Eric says, you're still likely open
> to the entirety of your corporation, and you may not want that, but as he
> also pointed out, that might be true even if you enable Kerberos
> authentication.
>
> -Todd
>
> On Thu, Jul 5, 2018 at 11:38 AM, Eric Yang  wrote:
>
>> Hadoop default configuration aimed for user friendliness to increase
>> adoption, and security can be enabled one by one.  This approach is most
>> problematic to security because system can be compromised before all
>> security features are turned on.
>> Larry's proposal will add some safety to remind system admin if security
>> is disabled.  However, reducing the number of knobs on security configs are
>> likely required to make the system secure for the banner idea to work
>> without writing too much guessing logic to determine if UI is secured.
>> Penetration test can provide better insights of what hasn't been secured to
>> improve the next release.  Thankfully most Hadoop vendors have done this
>> work periodically to help the community secure Hadoop.
>>
>> There are plenty of company advertised if you want security, use
>> Kerberos.  This statement is not entirely true.  Kerberos makes security
>> more difficult to crack for external parties, but it shouldn't be the only
>> method to secure Hadoop.  When the Kerberos environment is larger than
>> Hadoop cluster, anyone within Kerberos environment can access Hadoop
>> cluster freely without restriction.  In large scale enterprises or some
>> cloud vendors that sublet their resources, this might not be acceptable.
>>
>> From my point of view, a secure Hadoop release must default all settings
>> to localhost only and allow users to add more hosts through authorized
>> white list of servers.  This will keep security perimeter in check.  All
>> wild card ACLs will need to be removed or default to current user/current
>> host only.  Proxy user/host ACL list must be enforced on http channels.
>> This is basically realigning the default configuration to single node
>> cluster or firewalled configuration.
>>
>> Regards,
>> Eric
>>
>> On 7/5/18, 8:24 AM, "larry mccay"  wrote:
>>
>> Hi Steve -
>>
>> This is a long overdue DISCUSS thread!
>>
>> Perhaps the UIs can very visibly state (in red) "WARNING: UNSECURED UI
>> ACCESS - OPEN TO COMPROMISE" - maybe even force a click through the
>> warning
>> to get to the page like SSL exceptions in the browser do?
>> Similar tactic for UI access without SSL?
>> A new AuthenticationFilter can be added to the filter chains that
>> blocks
>> API calls unless explicitly configured to be open and obvious log a
>> similar
>> message?
>>
>> thanks,
>>
>> --larry
>>
>>
>>
>>
>> On Wed, Jul 4, 2018 at 11:58 AM, Steve Loughran <
>> ste...@hortonworks.com>
>> wrote:
>>
>> > Bitcoins are profitable enough to justify writing malware to run on
>> Hadoop
>> > clusters & schedule mining jobs: there have been a couple of
>> incidents of
>> > this in the wild, generally going in through no security, well known
>> > passwords, open ports.
>> >
>> > Vendors of Hadoop-related products get to deal with their lockdown
>> > themselves, which they often do by installing kerberos from the
>> outset,
>> > making users make up their own password for admin accounts, etc.
>> >
>> > The ASF releases though: we just provide something insecure out the
>> box
>> > and some docs saying "use kerberos if you want security"
>> >
>> > What we can do here?
>> >
>>

Re: Why are some Jenkins builds failing in Yarn-UI

2018-07-05 Thread Steve Loughran
thanks.

This is "unfortunate". I know nobody wanted this to happen, but it is still 
pretty serious.

For the specific HADOOP-15407 branch I'd just rebase onto trunk and force-push 
it out, but apache git seems to be blocking me from doing that...

I will keep an eye on YARN-8387; unrepeatable builds are the enemy of us all.

-steve
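For anyone hitting this on an older branch, the registry switch Sunil describes
below amounts to a one-line .bowerrc change (a sketch; the exact file location
within the YARN UI module may differ):

    {
      "registry": "https://registry.bower.io"
    }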

On 5 Jul 2018, at 19:11, Sunil G <sun...@apache.org> wrote:

Hi Steve

I was recently looking into this failure and this is due to a repo hosting
problem. It seems the Bower team has deprecated the old registry url
https://bower.herokuapp.com and the registry url needs to change to
https://registry.bower.io. YARN-8457 addressed this problem and is backported
to branch-2.9.
I think the branch also needs a rebase to get away from this problem.

 1.  What is Bower?
YARN ui2 is a single page web application using the ember js framework.
ember-2 uses bower as its package manager. Bibin is doing some work in
YARN-8387 to move away from bower to resolve such issues in the future.

 2.  Why is it breaking Jenkins builds
The Bower team has deprecated the old registry url
https://bower.herokuapp.com and the registry url needs to change to
https://registry.bower.io.

 3.  Can you nominate someone to provide the patch to fix this?
YARN-8457 addressed this problem. Could you please backport this patch alone to
the branch and help verify?

 4.  Will every active branch need a similar patch?
Yes. This is a bit unfortunate. If these branches are rebased on top of their
respective main branches, or this patch alone is backported, we could avoid this
build error.

 5.  Have any releases stopped building (3.1.0?)
Since this patch is already in 3.1.0, this is unblocked.

 6.  Do we have any stability guarantees about Bower's build process in
future?
YARN-8387 will help with this. Bibin Chundatt has some offline patches
which he is still working on with the help of other communities to resolve
bower dependencies and upgrade to ember 3, which will help with offline
compilation.


- Sunil

On Thu, Jul 5, 2018 at 4:45 AM Steve Loughran <ste...@hortonworks.com>
wrote:


Hi

The HADOOP-15407 "abfs" branch has started failing


https://builds.apache.org/job/PreCommit-HADOOP-Build/14861/artifact/out/patch-compile-root.txt

[INFO] --- frontend-maven-plugin:1.5:bower (bower install) @
hadoop-yarn-ui ---
[INFO] Running 'bower install' in
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/webapp
[ERROR] bower ember-cli-test-loader#0.2.1  EINVRES Request to
https://bower.herokuapp.com/packages/ember-cli-test-loader failed with 502
[INFO]

[INFO] Reactor Summary:

And the named web page returns: This Bower version is deprecated. Please
update it: npm install -g bower. The new registry address is
https://registry.bower.io


We haven't gone near the YARN code, and yet a branch which is only a few
weeks old is now failing to build on Jenkins systems

I have a few questions for the Yarn team


 1.  What is Bower?
 2.  Why is it breaking Jenkins builds
 3.  Can you nominate someone to provide the patch to fix this?
 4.  Will every active branch need a similar patch?
 5.  Have any releases stopped building (3.1.0?)
 6.  Do we have any stability guarantees about Bower's build process in
future?

Given a fork I'm worried about the long-term reproducibility of builds. Is
this just a one off move of some build artifact repository, or is this a
moving target which will stop source code releases from working. I can deal
with cherry picking patches to WiP branches, tuning Jenkins, etc, but if we
lose the ability to rebuild releases, the whole notion of "stable release"
is dead. I know Maven exposes us to similar risks, but the maven central
repo is stable and available, so hasn't forced us into building an
SCM-managed artifact tree the way I've done elsewhere (Ivy makes that
straightforward; maven less so as you need to be online to boot). And we
are now relying on docker for release builds. So we are already
vulnerable...I'm just worried that Bower is making things worse.

-Steve

(ps: Slides from Olaf Flebbe on the bigtop team on attacking builds through
maven https://oflebbe.de/presentations/2018/attackingiotdev.pdf)



Re: Merge branch commit in trunk by mistake

2018-07-05 Thread Giovanni Matteo Fumarola
+ common-dev and hdfs-dev as fyi.

Thanks Subru and Sean for the answer.

On Thu, Jul 5, 2018 at 12:14 PM, Subru Krishnan  wrote:

> Looking at the merge commit, I feel it's better to reset/force push
> especially since this is still the latest commit on trunk.
>
> I have raised an INFRA ticket requesting the same:
> https://issues.apache.org/jira/browse/INFRA-16727
>
> -S
>
> On Thu, Jul 5, 2018 at 11:45 AM, Sean Busbey 
> wrote:
>
>> FYI, no images make it through ASF mailing lists. I presume the image was
>> of the git history? If that's correct, here's what that looks like in a
>> paste:
>>
>> https://paste.apache.org/eRix
>>
>> There are no force pushes on trunk, so backing the change out would
>> require
>> the PMC asking INFRA to unblock force pushes for a period of time.
>>
>> Probably the merge commit isn't a big enough deal to do that. There was a
>> merge commit ~5 months ago for when YARN-6592 merged into trunk.
>>
>> So I'd say just try to avoid doing it in the future?
>>
>> -busbey
>>
>> On Thu, Jul 5, 2018 at 1:31 PM, Giovanni Matteo Fumarola <
>> giovanni.fumar...@gmail.com> wrote:
>>
>> > Hi folks,
>> >
>> > After I pushed something on trunk a merge commit showed up in the
>> history. *My
>> > bad*.
>> >
>> >
>> >
>> > Since it was one of my first patches, I ran a few tests on my machine
>> > before checking in.
>> > While I was running all the tests, someone else checked in. I correctly
>> > pulled all the new changes.
>> >
>> > Even before I did the "git push" there was no merge commit in my
>> history.
>> >
>> > Can someone help me reverting this change?
>> >
>> > Thanks
>> > Giovanni
>> >
>> >
>> >
>>
>>
>> --
>> busbey
>>
>
>


Re: [DISCUSS]: securing ASF Hadoop releases out of the box

2018-07-05 Thread Todd Lipcon
The approach we took in Apache Kudu is that, if Kerberos hasn't been
enabled, we default to a whitelist of subnets. The default whitelist is
127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,169.254.0.0/16 which
matches the IANA "non-routeable IP" subnet list.

In other words, out-of-the-box, you get a deployment that works fine within
a typical LAN environment, but won't allow some remote hacker to locate
your cluster and access your data. We thought this was a nice balance
between "works out of the box without lots of configuration" and "decent
security". In my opinion a "localhost-only by default" would be overly
restrictive since I'd usually be deploying on some datacenter or EC2
machine and then trying to access it from a client on my laptop.

We released this first a bit over a year ago if my memory serves me, and
we've had relatively few complaints or questions about it. We also made
sure that the error message that comes back to clients is pretty
reasonable, indicating the specific configuration that is disallowing
access, so if people hit the issue on upgrade they had a clear idea what is
going on.

Of course it's not foolproof, since as Eric says, you're still likely open
to the entirety of your corporation, and you may not want that, but as he
also pointed out, that might be true even if you enable Kerberos
authentication.

-Todd
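A sketch of the Kudu-style check Todd describes above, using Apache commons-net's
SubnetUtils (the subnet list is the one from this mail; the class itself is
hypothetical, not Kudu's or Hadoop's actual implementation):

    import org.apache.commons.net.util.SubnetUtils;

    class NonRoutableWhitelist {
      // The IANA non-routable subnets listed above.
      private static final String[] DEFAULT_WHITELIST = {
          "127.0.0.0/8", "10.0.0.0/8", "172.16.0.0/12",
          "192.168.0.0/16", "169.254.0.0/16"};

      static boolean isAllowed(String clientIp) {
        for (String cidr : DEFAULT_WHITELIST) {
          SubnetUtils subnet = new SubnetUtils(cidr);
          subnet.setInclusiveHostCount(true);  // include network/broadcast addrs
          if (subnet.getInfo().isInRange(clientIp)) {
            return true;
          }
        }
        return false;  // routable address: reject unless Kerberos is enabled
      }
    }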

On Thu, Jul 5, 2018 at 11:38 AM, Eric Yang  wrote:

> Hadoop default configuration aimed for user friendliness to increase
> adoption, and security can be enabled one by one.  This approach is most
> problematic to security because system can be compromised before all
> security features are turned on.
> Larry's proposal will add some safety to remind system admin if security
> is disabled.  However, reducing the number of knobs on security configs are
> likely required to make the system secure for the banner idea to work
> without writing too much guessing logic to determine if UI is secured.
> Penetration test can provide better insights of what hasn't been secured to
> improve the next release.  Thankfully most Hadoop vendors have done this
> work periodically to help the community secure Hadoop.
>
> There are plenty of company advertised if you want security, use
> Kerberos.  This statement is not entirely true.  Kerberos makes security
> more difficult to crack for external parties, but it shouldn't be the only
> method to secure Hadoop.  When the Kerberos environment is larger than
> Hadoop cluster, anyone within Kerberos environment can access Hadoop
> cluster freely without restriction.  In large scale enterprises or some
> cloud vendors that sublet their resources, this might not be acceptable.
>
> From my point of view, a secure Hadoop release must default all settings
> to localhost only and allow users to add more hosts through authorized
> white list of servers.  This will keep security perimeter in check.  All
> wild card ACLs will need to be removed or default to current user/current
> host only.  Proxy user/host ACL list must be enforced on http channels.
> This is basically realigning the default configuration to single node
> cluster or firewalled configuration.
>
> Regards,
> Eric
>
> On 7/5/18, 8:24 AM, "larry mccay"  wrote:
>
> Hi Steve -
>
> This is a long overdue DISCUSS thread!
>
> Perhaps the UIs can very visibly state (in red) "WARNING: UNSECURED UI
> ACCESS - OPEN TO COMPROMISE" - maybe even force a click through the
> warning
> to get to the page like SSL exceptions in the browser do?
> Similar tactic for UI access without SSL?
> A new AuthenticationFilter can be added to the filter chains that
> blocks
> API calls unless explicitly configured to be open and obvious log a
> similar
> message?
>
> thanks,
>
> --larry
>
>
>
>
> On Wed, Jul 4, 2018 at 11:58 AM, Steve Loughran <
> ste...@hortonworks.com>
> wrote:
>
> > Bitcoins are profitable enough to justify writing malware to run on
> Hadoop
> > clusters & schedule mining jobs: there have been a couple of
> incidents of
> > this in the wild, generally going in through no security, well known
> > passwords, open ports.
> >
> > Vendors of Hadoop-related products get to deal with their lockdown
> > themselves, which they often do by installing kerberos from the
> outset,
> > making users make up their own password for admin accounts, etc.
> >
> > The ASF releases though: we just provide something insecure out the
> box
> > and some docs saying "use kerberos if you want security"
> >
> > What we can do here?
> >
> > Some things to think about
> >
> > * docs explaining IN CAPITAL LETTERS why you need to lock down your
> > cluster to a private subnet or use Kerberos
> > * Anything which can be done to make Kerberos easier (?). I see
> there are
> > some oustanding patches for HADOOP-12649 which need review, but what
> else?
> >
> > Could we have Hadoop d

Re: [DISCUSS]: securing ASF Hadoop releases out of the box

2018-07-05 Thread Eric Yang
Hadoop's default configuration aims for user friendliness to increase adoption, 
with security features enabled one by one.  This approach is most problematic for 
security because the system can be compromised before all security features are 
turned on.
Larry's proposal will add some safety by reminding the system admin if security is 
disabled.  However, reducing the number of knobs on security configs is likely 
required to make the system secure, and for the banner idea to work without 
writing too much guessing logic to determine whether the UI is secured.  
Penetration testing can provide better insight into what hasn't been secured to 
improve the next release.  Thankfully most Hadoop vendors have done this work 
periodically to help the community secure Hadoop.

Plenty of companies advertise that if you want security, you should use Kerberos.  
This statement is not entirely true.  Kerberos makes security more difficult to 
crack for external parties, but it shouldn't be the only method of securing 
Hadoop.  When the Kerberos environment is larger than the Hadoop cluster, anyone 
within the Kerberos environment can access the Hadoop cluster freely without 
restriction.  In large scale enterprises, or for cloud vendors that sublet 
their resources, this might not be acceptable.
 
From my point of view, a secure Hadoop release must default all settings to 
localhost only and allow users to add more hosts through authorized white list 
of servers.  This will keep security perimeter in check.  All wild card ACLs 
will need to be removed or default to current user/current host only.  Proxy 
user/host ACL list must be enforced on http channels.  This is basically 
realigning the default configuration to single node cluster or firewalled 
configuration.  
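
As a concrete sketch of what "localhost only by default" could look like, using 
real Hadoop property names but illustrative values (the proxy account name 
"gateway" is hypothetical, and this is an example, not a worked-out proposal):

    <!-- hdfs-site.xml: bind the NameNode RPC endpoint to loopback by default,
         so admins widen exposure deliberately instead of getting an open
         network listener for free. -->
    <property>
      <name>dfs.namenode.rpc-bind-host</name>
      <value>127.0.0.1</value>
    </property>

    <!-- core-site.xml: replace wildcard proxy-user ACLs ("*") with explicit
         host/user lists, enforced on HTTP channels as well as RPC. -->
    <property>
      <name>hadoop.proxyuser.gateway.hosts</name>
      <value>localhost</value>
    </property>
    <property>
      <name>hadoop.proxyuser.gateway.users</name>
      <value>gateway</value>
    </property>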

Regards,
Eric

On 7/5/18, 8:24 AM, "larry mccay"  wrote:

Hi Steve -

This is a long overdue DISCUSS thread!

Perhaps the UIs can very visibly state (in red) "WARNING: UNSECURED UI
ACCESS - OPEN TO COMPROMISE" - maybe even force a click-through on the
warning
to get to the page, like SSL exceptions in the browser do?
A similar tactic for UI access without SSL?
A new AuthenticationFilter could be added to the filter chains that
blocks
API calls unless explicitly configured to be open, and obviously logs a
similar
message?

thanks,

--larry




On Wed, Jul 4, 2018 at 11:58 AM, Steve Loughran 
wrote:

> Bitcoins are profitable enough to justify writing malware to run on Hadoop
> clusters & schedule mining jobs: there have been a couple of incidents of
> this in the wild, generally going in through no security, well-known
> passwords, open ports.
>
> Vendors of Hadoop-related products get to deal with their lockdown
> themselves, which they often do by installing kerberos from the outset,
> making users make up their own password for admin accounts, etc.
>
> The ASF releases, though: we just provide something insecure out of the box
> and some docs saying "use kerberos if you want security".
>
> What can we do here?
>
> Some things to think about
>
> * docs explaining IN CAPITAL LETTERS why you need to lock down your
> cluster to a private subnet or use Kerberos
> * Anything which can be done to make Kerberos easier (?). I see there are
> some outstanding patches for HADOOP-12649 which need review, but what else?
>
> Could we have Hadoop determine when it's coming up on an open network and
> start warning? And how?
>
> At the very least, single node hadoop should be locked down. You shouldn't
> have to bring up kerberos to run it like that. And for more sophisticated
> multinode deployments, should the scripts refuse to work without kerberos
> unless you pass in some argument like "--Dinsecure-clusters-permitted"
>
> Any other ideas?
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>




Re: Why are some Jenkins builds failing in Yarn-UI

2018-07-05 Thread Sunil G
Hi Steve

I was recently looking into this failure, and it is due to a repo-hosting
problem. It seems the Bower team has deprecated the old registry URL
https://bower.herokuapp.com, and the registry URL needs to change to
https://registry.bower.io. YARN-8457 addressed this problem and has been
backported down to branch-2.9.
I think the branch also needs a rebase to get away from this problem.
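
For anyone unblocking a local branch before the backport lands, the generic
Bower-level workaround is to point the client at the new registry, e.g. via a
.bowerrc next to the UI module's bower.json (this is the general Bower fix,
not necessarily the exact change YARN-8457 made):

    {
      "registry": "https://registry.bower.io"
    }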

  1.  What is Bower?
YARN ui2 is a single-page web application using the Ember.js framework.
Ember 2 uses Bower as its package manager. Bibin is doing some work in
YARN-8387 to move away from Bower, to avoid such issues in the future.

  2.  Why is it breaking Jenkins builds
The Bower team has deprecated the old registry URL https://bower.herokuapp.com,
so the registry URL needs to change to https://registry.bower.io.

  3.  Can you nominate someone to provide the patch to fix this?
YARN-8457 addressed this problem. Could you please backport this patch alone
to the branch and help verify?

  4.  Will every active branch need a similar patch?
Yes. This is a bit unfortunate. If these branches are rebased on top of their
respective main branches, or this patch is backported alone, we can avoid this
build error.

  5.  Have any releases stopped building (3.1.0?)
Since this patch is already in 3.1.0, that release is unblocked.

  6.  Do we have any stability guarantees about Bower's build process in
future?
YARN-8387 will help with this. Bibin Chundatt has some offline patches, which
he is still working on with the help of other communities, to resolve the
Bower dependencies and upgrade to Ember 3, which will enable offline
compilation.


- Sunil

On Thu, Jul 5, 2018 at 4:45 AM Steve Loughran 
wrote:

>
> Hi
>
> The HADOOP-15407 "abfs" branch has started failing
>
>
> https://builds.apache.org/job/PreCommit-HADOOP-Build/14861/artifact/out/patch-compile-root.txt
>
> [INFO] --- frontend-maven-plugin:1.5:bower (bower install) @
> hadoop-yarn-ui ---
> [INFO] Running 'bower install' in
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/webapp
> [ERROR] bower ember-cli-test-loader#0.2.1  EINVRES Request to
> https://bower.herokuapp.com/packages/ember-cli-test-loader failed with 502
> [INFO]
> 
> [INFO] Reactor Summary:
>
> And the named web page returns: "This Bower version is deprecated. Please
> update it: npm install -g bower. The new registry address is
> https://registry.bower.io"
>
>
> We haven't gone near the YARN code, and yet a branch which is only a few
> weeks old is now failing to build on Jenkins systems.
>
> I have a few questions for the Yarn team
>
>
>   1.  What is Bower?
>   2.  Why is it breaking Jenkins builds
>   3.  Can you nominate someone to provide the patch to fix this?
>   4.  Will every active branch need a similar patch?
>   5.  Have any releases stopped building (3.1.0?)
>   6.  Do we have any stability guarantees about Bower's build process in
> future?
>
> Given a fork, I'm worried about the long-term reproducibility of builds. Is
> this just a one-off move of some build artifact repository, or is this a
> moving target which will stop source code releases from working? I can deal
> with cherry-picking patches to WiP branches, tuning Jenkins, etc., but if we
> lose the ability to rebuild releases, the whole notion of "stable release"
> is dead. I know Maven exposes us to similar risks, but the Maven central
> repo is stable and available, so it hasn't forced us into building an
> SCM-managed artifact tree the way I've done elsewhere (Ivy makes that
> straightforward; Maven less so, as you need to be online to boot). And we
> are now relying on Docker for release builds. So we are already
> vulnerable... I'm just worried that Bower is making things worse.
>
> -Steve
>
> (PS: slides from Olaf Flebbe of the Bigtop team on attacking builds through
> Maven: https://oflebbe.de/presentations/2018/attackingiotdev.pdf)
>
>
>
>
>


Re: [DISCUSS]: securing ASF Hadoop releases out of the box

2018-07-05 Thread larry mccay
Hi Steve -

This is a long overdue DISCUSS thread!

Perhaps the UIs can very visibly state (in red) "WARNING: UNSECURED UI
ACCESS - OPEN TO COMPROMISE" - maybe even force a click-through on the warning
to get to the page, like SSL exceptions in the browser do?
A similar tactic for UI access without SSL?
A new AuthenticationFilter could be added to the filter chains that blocks
API calls unless explicitly configured to be open, and obviously logs a similar
message?
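
Roughly, a sketch of that filter idea (the init parameter name and message
text are made up for illustration; this is not an existing Hadoop filter):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    // Sketch: refuse API calls unless the deployment has explicitly opted in
    // to insecure mode, and make the refusal obvious in the response (and so
    // in the logs). "insecure.access.allowed" is a hypothetical parameter.
    public class InsecureAccessFilter implements Filter {
      private boolean insecureAccessAllowed;

      @Override
      public void init(FilterConfig config) {
        insecureAccessAllowed = Boolean.parseBoolean(
            config.getInitParameter("insecure.access.allowed"));
      }

      @Override
      public void doFilter(ServletRequest req, ServletResponse resp,
          FilterChain chain) throws IOException, ServletException {
        if (!insecureAccessAllowed) {
          ((HttpServletResponse) resp).sendError(
              HttpServletResponse.SC_FORBIDDEN,
              "WARNING: UNSECURED API ACCESS IS BLOCKED - enable security or "
                  + "explicitly configure this endpoint to be open");
          return;
        }
        chain.doFilter(req, resp);
      }

      @Override
      public void destroy() {
      }
    }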

thanks,

--larry




On Wed, Jul 4, 2018 at 11:58 AM, Steve Loughran 
wrote:

> Bitcoins are profitable enough to justify writing malware to run on Hadoop
> clusters & schedule mining jobs: there have been a couple of incidents of
> this in the wild, generally going in through no security, well-known
> passwords, open ports.
>
> Vendors of Hadoop-related products get to deal with their lockdown
> themselves, which they often do by installing kerberos from the outset,
> making users make up their own password for admin accounts, etc.
>
> The ASF releases, though: we just provide something insecure out of the box
> and some docs saying "use kerberos if you want security".
>
> What can we do here?
>
> Some things to think about
>
> * docs explaining IN CAPITAL LETTERS why you need to lock down your
> cluster to a private subnet or use Kerberos
> * Anything which can be done to make Kerberos easier (?). I see there are
> some outstanding patches for HADOOP-12649 which need review, but what else?
>
> Could we have Hadoop determine when it's coming up on an open network and
> start warning? And how?
>
> At the very least, single node hadoop should be locked down. You shouldn't
> have to bring up kerberos to run it like that. And for more sophisticated
> multinode deployments, should the scripts refuse to work without kerberos
> unless you pass in some argument like "--Dinsecure-clusters-permitted"
>
> Any other ideas?
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>


Why are some Jenkins builds failing in Yarn-UI

2018-07-05 Thread Steve Loughran

Hi

The HADOOP-15407 "abfs" branch has started failing

https://builds.apache.org/job/PreCommit-HADOOP-Build/14861/artifact/out/patch-compile-root.txt

[INFO] --- frontend-maven-plugin:1.5:bower (bower install) @ hadoop-yarn-ui ---
[INFO] Running 'bower install' in 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/target/webapp
[ERROR] bower ember-cli-test-loader#0.2.1  EINVRES Request to 
https://bower.herokuapp.com/packages/ember-cli-test-loader failed with 502
[INFO] 
[INFO] Reactor Summary:

And the named web page returns: "This Bower version is deprecated. Please 
update it: npm install -g bower. The new registry address is 
https://registry.bower.io"


We haven't gone near the YARN code, and yet a branch which is only a few weeks 
old is now failing to build on Jenkins systems.

I have a few questions for the Yarn team


  1.  What is Bower?
  2.  Why is it breaking Jenkins builds
  3.  Can you nominate someone to provide the patch to fix this?
  4.  Will every active branch need a similar patch?
  5.  Have any releases stopped building (3.1.0?)
  6.  Do we have any stability guarantees about Bower's build process in future?

Given a fork, I'm worried about the long-term reproducibility of builds. Is this 
just a one-off move of some build artifact repository, or is this a moving 
target which will stop source code releases from working? I can deal with 
cherry-picking patches to WiP branches, tuning Jenkins, etc., but if we lose the 
ability to rebuild releases, the whole notion of "stable release" is dead. I 
know Maven exposes us to similar risks, but the Maven central repo is stable 
and available, so it hasn't forced us into building an SCM-managed artifact tree 
the way I've done elsewhere (Ivy makes that straightforward; Maven less so, as 
you need to be online to boot). And we are now relying on Docker for release 
builds. So we are already vulnerable... I'm just worried that Bower is making 
things worse.

-Steve

(PS: slides from Olaf Flebbe of the Bigtop team on attacking builds through 
Maven: https://oflebbe.de/presentations/2018/attackingiotdev.pdf)






[jira] [Created] (HADOOP-15582) Document ABFS

2018-07-05 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-15582:
---

 Summary: Document ABFS
 Key: HADOOP-15582
 URL: https://issues.apache.org/jira/browse/HADOOP-15582
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation, fs/azure
Affects Versions: 3.2.0
Reporter: Steve Loughran


Add documentation for abfs under {{hadoop-tools/hadoop-azure/src/site/markdown}}

Possible topics include
* intro to scheme
* why abfs (link to MSDN, etc)
* config options
* switching from wasb/interop
* troubleshooting

testing.md should add a section on testing this stuff too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15545) ABFS initialize() throws string out of bounds exception if the URI isn't fully qualified

2018-07-05 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15545.
-
   Resolution: Duplicate
 Assignee: Steve Loughran
Fix Version/s: 3.2.0

Fixed this as part of HADOOP-15546 so I could debug why the tests weren't 
working. 

> ABFS initialize() throws string out of bounds exception if the URI isn't 
> fully qualified
> 
>
> Key: HADOOP-15545
> URL: https://issues.apache.org/jira/browse/HADOOP-15545
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.2.0
>
>
> if you try to connect to a store like {{abfs://user@something/}} you'll see a 
> StringIndexOutOfBoundsException... better to have something useful like "not 
> a FQDN"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-07-05 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/518/

[Jul 3, 2018 8:50:11 PM] (fabbri) HADOOP-15215 s3guard set-capacity command to 
fail on read/write of 0
[Jul 3, 2018 9:11:52 PM] (aengineer) HDDS-175. Refactor ContainerInfo to remove 
Pipeline object from it.
[Jul 4, 2018 7:03:24 AM] (yqlin) HDFS-13528. RBF: If a directory exceeds quota 
limit then quota usage is




-1 overall


The following subsystems voted -1:
compile mvninstall pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestIPC 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestNativeCodeLoader 
   hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.mover.TestMover 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.hdfs.server.namenode.TestLargeDirectoryDelete 
   hadoop.hdfs.TestBlocksScheduledCounter 
   hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs 
   hadoop.hdfs.TestDFSClientRetries 
   hadoop.hdfs.TestDFSShell 
   hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy 
   hadoop.hdfs.TestDFSUpgradeFromImage 
   hadoop.hdfs.TestFetchImage 
   hadoop.hdfs.TestFileConcurrentReader 
   hadoop.hdfs.TestHDFSFileSystemContract 
   hadoop.hdfs.TestLeaseRecovery 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.TestPread 
   hadoop.hdfs.TestSecureEncryptionZoneWithKMS 
   hadoop.hdfs.TestTrashWithSecureEncryptionZones 
   hadoop.hdfs.tools.TestDFSAdmin 
   hadoop.hdfs.web.TestWebHDFS 
   hadoop.hdfs.web.TestWebHdfsUrl 
   hadoop.fs.http.server.TestHttpFSServerWebServer 
   
hadoop.yarn.logaggregation.filecontroller.ifile.TestLogAggregationIndexFileController
 
   
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch 
   hadoop.yarn.server.nodemanager.containermanager.TestAuxServices 
   hadoop.yarn.server.nodemanager.containermanager.TestContainerManager 
   hadoop.yarn.server.nodemanager.TestContainerExecutor 
   hadoop.yarn.server.nodemanager.TestNodeManagerResync 
   hadoop.yarn.server.webproxy.amfilter.TestAmFilter 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   
hadoop.yarn.server.timeline.security.TestTimelineAuthenticationFilterForV1 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestFSSchedulerConfigurationStore
 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestLeveldbConfigurationStore
 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler 
   
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing 
   
hadoop.yarn.server.resourcemanager.scheduler.constraint.TestPlacementProcessor 
   
hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService
 
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler 
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.yarn.client.api.impl.TestNMClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseSto