[jira] [Created] (HADOOP-13470) GenericTestUtils$LogCapturer is flaky

2016-08-04 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-13470:
--

 Summary: GenericTestUtils$LogCapturer is flaky
 Key: HADOOP-13470
 URL: https://issues.apache.org/jira/browse/HADOOP-13470
 Project: Hadoop Common
  Issue Type: Bug
  Components: test, util
Affects Versions: 2.8.0
Reporter: Mingliang Liu
Assignee: Mingliang Liu


{{GenericTestUtils$LogCapturer}} is useful for assertions against service logs. 
However, it should be fixed in the following aspects:
# In the constructor, it uses the stdout appender's layout.
{code}
Layout layout = Logger.getRootLogger().getAppender("stdout").getLayout();
{code}
However, the stdout appender may be named "console" or similar, which makes the 
constructor throw an NPE. Actually, the layout does not matter; we can use a 
default pattern layout that only captures application logs.
# The {{stopCapturing()}} method does not work. The main reason is that the 
internal {{appender}} variable is never assigned, so removing it to stop 
capturing has no effect.
# It does not support {{org.slf4j.Logger}}, which is preferred to log4j in 
many modules.
# There is no unit test for it.

This jira is to address these issues.
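The stop-capturing bug boils down to never keeping a reference to the appender that was added. Below is a minimal sketch of the intended pattern, written against the JDK's java.util.logging purely for illustration — the class and method names here are illustrative, not Hadoop's actual LogCapturer API:

```java
import java.io.ByteArrayOutputStream;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;
import java.util.logging.StreamHandler;

// Two points this sketch demonstrates:
//   1. build our own formatter instead of borrowing the layout from an
//      appender named "stdout" (which may not exist -- the NPE above);
//   2. keep a reference to the added handler so stopCapturing() can
//      remove exactly that instance.
public class LogCapturerSketch {
  private final Logger logger;
  private final ByteArrayOutputStream captured = new ByteArrayOutputStream();
  private final Handler handler; // assigned, so stopCapturing() works

  public LogCapturerSketch(Logger logger) {
    this.logger = logger;
    this.handler = new StreamHandler(captured, new SimpleFormatter());
    handler.setLevel(Level.ALL);
    logger.addHandler(handler);
  }

  public String getOutput() {
    handler.flush(); // StreamHandler buffers; flush before reading
    return captured.toString();
  }

  public void stopCapturing() {
    logger.removeHandler(handler); // removes the instance we added
  }

  public static void main(String[] args) {
    Logger log = Logger.getLogger("capture-demo");
    log.setUseParentHandlers(false); // keep the demo output quiet
    LogCapturerSketch capturer = new LogCapturerSketch(log);
    log.info("first-message");
    capturer.stopCapturing();
    log.info("second-message");
    String out = capturer.getOutput();
    System.out.println(out.contains("first-message"));  // true
    System.out.println(out.contains("second-message")); // false
  }
}
```

The same shape applies to log4j: construct a WriterAppender with a default PatternLayout, store it in the {{appender}} field, and remove that exact instance in {{stopCapturing()}}.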





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13469) Fix TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration test failure

2016-08-04 Thread Rakesh R (JIRA)
Rakesh R created HADOOP-13469:
-

 Summary: Fix 
TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration test failure
 Key: HADOOP-13469
 URL: https://issues.apache.org/jira/browse/HADOOP-13469
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Rakesh R
Assignee: Rakesh R


This jira is to analyse and fix the test failure, which occurs very 
frequently in Jenkins builds, e.g. 
[Build_16326|https://issues.apache.org/jira/browse/HADOOP-13469] at 
https://builds.apache.org/job/PreCommit-HDFS-Build/16326/testReport/org.apache.hadoop.security/TestRefreshUserMappings/testRefreshSuperUserGroupsConfiguration/

{code}
Error Message

first auth for user2 should've succeeded: User: super_userL is not allowed to 
impersonate userL2
Stacktrace

java.lang.AssertionError: first auth for user2 should've succeeded: User: 
super_userL is not allowed to impersonate userL2
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.security.TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration(TestRefreshUserMappings.java:200)
{code}






Re: [DISCUSS] Release numbering semantics with concurrent (>2) releases [Was Setting JIRA fix versions for 3.0.0 releases]

2016-08-04 Thread Konstantin Shvachko
On Thu, Aug 4, 2016 at 11:20 AM, Andrew Wang wrote:

> Hi Konst, thanks for commenting,
>
> On Wed, Aug 3, 2016 at 11:29 PM, Konstantin Shvachko wrote:
>
>> 1. I probably missed something but I didn't get it how "alpha"s made
>> their way into release numbers again. This was discussed on several
>> occasions and I thought the common perception was to use just three level
>> numbers for release versioning and avoid branding them.
>> It is particularly confusing to have 3.0.0-alpha1 and 3.0.0-alpha2. What
>> is alphaX - fourth level? I think releasing 3.0.0 and setting trunk to
>> 3.1.0 would be perfectly in line with our current release practices.
>>
>
> We discussed release numbering a while ago when discussing the release
> plan for 3.0.0, and agreed on this scheme. "-alphaX" is essentially a
> fourth level as you say, but the intent is to only use it (and "-betaX") in
> the leadup to 3.0.0.
>
> The goal here is clarity for end users, since most other enterprise
> software uses an a.0.0 version to denote the GA of a new major version. Same
> for a.b.0 for a new minor version, though we haven't talked about that yet.
> The alphaX and betaX scheme also shares similarity to release versioning of
> other enterprise software.
>

As you remember, we did this (alpha, beta) for Hadoop-2, and I don't think it
went well with user perception.
For example, release 2.0.5-alpha turned out to be quite good even though it was
still branded "alpha", while 2.2 was not as good yet carried no such branding.
We should move a release to stable only when people have run it and agree it
is GA-worthy. Otherwise you never know.


>
>> 2. I do not see any confusions with releasing 2.8.0 after 3.0.0.
>> The release number is not intended to reflect historical release
>> sequence, but rather the point in the source tree, which it was branched
>> off. So one can release 2.8, 2.9, etc. after or before 3.0.
>>
>
> As described earlier in this thread, the issue here is setting the fix
> versions such that the changelog is a useful diff from a previous version,
> and also clear about what changes are present in each branch. If we do not
> order a specific 2.x before 3.0, then we don't know what 2.x to diff from.
>

So the problem is determining the latest commits that were not present
in the last release, when the last release bears a higher number than the one
being released.
Interesting problem. I don't have a strong opinion on that. I guess it's OK
to have overlap in changelogs,
as long as we keep following the rule that commits are made to trunk
first and then propagated to lower branches until the target branch is
reached.


>
>> 3. I agree that current 3.0.0 branch can be dropped and re-cut. We may
>> think of another rule that if a release branch is not released in 3 month
>> it should be abandoned. Which is applicable to branch 2.8.0 and it is too
>> much work syncing it with branch-2.
>>
>> Time-based rules are tough here. I'd prefer we continue to leave this up
> to release managers. If you think we should recut branch-2.8, recommend
> pinging Vinod and discussing on a new thread.
>

Not recut, but abandon 2.8.0. And Vinod (or anybody who volunteers to RM)
can recut from the desired point.
People were committing to branch-2 and branch-2.8 for months, and they are
out of sync anyway. So what's the point of the extra commit?
Probably still a different thread.

Thanks,
--Konst


Re: HDFS NFS Gateway - Exporting multiple Directories

2016-08-04 Thread Senthil Kumar
Hi Team, please check this and let me know your comment(s).

--Senthil

On Aug 4, 2016 7:22 PM, "Senthil Kumar"  wrote:

> Hi Team ,
>
>
> The current HDFS NFS gateway supports exporting only one directory.
>
> Example :
> <property>
>   <name>nfs.export.point</name>
>   <value>/user</value>
> </property>
>
> This property lets us export a particular directory.
>
> Code Block :
>
> public RpcProgramMountd(NfsConfiguration config,
>     DatagramSocket registrationSocket, boolean allowInsecurePorts)
>     throws IOException {
>   // Note that RPC cache is not enabled
>   super("mountd", "localhost",
>       config.getInt(NfsConfigKeys.DFS_NFS_MOUNTD_PORT_KEY,
>           NfsConfigKeys.DFS_NFS_MOUNTD_PORT_DEFAULT),
>       PROGRAM, VERSION_1, VERSION_3, registrationSocket, allowInsecurePorts);
>   exports = new ArrayList<String>();
>   exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>       NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
>   this.hostsMatcher = NfsExports.getInstance(config);
>   this.mounts = Collections.synchronizedList(new ArrayList<MountEntry>());
>   UserGroupInformation.setConfiguration(config);
>   SecurityUtil.login(config, NfsConfigKeys.DFS_NFS_KEYTAB_FILE_KEY,
>       NfsConfigKeys.DFS_NFS_KERBEROS_PRINCIPAL_KEY);
>   this.dfsClient = new DFSClient(NameNode.getAddress(config), config);
> }
>
> Export List:
> exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
> NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
>
> The current code supports exposing only one directory; based on
> our example, only /user can be exported.
>
> Most production environments expect multiple directories to be
> exported, which can then be mounted by different clients.
>
> Example:
>
> <property>
>   <name>nfs.export.point</name>
>   <value>/user,/data/web_crawler,/app-logs</value>
> </property>
>
> Here I have three directories to be exposed:
>
> 1) /user
> 2) /data/web_crawler
> 3) /app-logs
>
> This would let us mount specific directories for particular clients (say
> client A wants to write data to /app-logs; the Hadoop admin can mount it
> and hand it over to that client).
>
> Please advise here..
>
>
> Have created JIRA for this issue : https://issues.apache.org/
> jira/browse/HDFS-10721.
>
>
> --Senthil
>
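The multi-directory support Senthil proposes is essentially a matter of splitting the configured value instead of adding it verbatim. A hedged sketch — parseExports is a hypothetical helper, not the actual HDFS-10721 patch:

```java
import java.util.ArrayList;
import java.util.List;

public class ExportPointParser {
  // Split the comma-separated nfs.export.point value into individual
  // export directories, trimming whitespace and skipping empty entries.
  static List<String> parseExports(String configValue) {
    List<String> exports = new ArrayList<String>();
    for (String dir : configValue.split(",")) {
      String trimmed = dir.trim();
      if (!trimmed.isEmpty()) {
        exports.add(trimmed);
      }
    }
    return exports;
  }

  public static void main(String[] args) {
    System.out.println(parseExports("/user, /data/web_crawler,/app-logs"));
    // prints [/user, /data/web_crawler, /app-logs]
  }
}
```

In RpcProgramMountd this would amount to replacing the single exports.add(...) call with something like exports.addAll(parseExports(config.get(...))), assuming the rest of the mount handling is taught to check each export point.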


Re: [DISCUSS] Release numbering semantics with concurrent (>2) releases [Was Setting JIRA fix versions for 3.0.0 releases]

2016-08-04 Thread Chris Douglas
> I'm certainly open to alternate proposals for versioning and fix versions,
> but to reiterate, I like this versioning since it imitates other enterprise
> software. RHEL has versions like 6.2 Beta 2 and 7.0 Beta, so versions like
> 3.0.0-alpha1 will be immediately familiar to end users. Conversely, I've
> met people who were confused by the 2.0/2.1/2.2 progression. As far as I
> know, we were unique in using this style of versioning.

Yes, but even after a stable release, we haven't blocked new (or
incompatible) features in minor releases. Some minor releases- 0.16,
0.21, 2.6.0- included complicated features that took a while to
stabilize. Most of our x.y.0 releases are not stable, because
downstream reports come back only after we cut a release. Signaling
stability is useful, but this identifier is not reliable. At least,
it's less reliable than a section on the website recommending the
latest stable release beside alpha/beta versions.

Anyway- if we do use this, then we can use Maven's schema:
<major>.<minor>.<incremental>-<qualifier>-<build>

Though I hope we won't need it, starting with 3.0.0-alpha-01 will
avoid -alpha10 sorting before -alpha2. We've discussed it enough; I'll
let it go.
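The -alpha10/-alpha2 concern above is plain lexicographic ordering; a quick illustration (a sketch for this thread, not code from any Hadoop module):

```java
public class AlphaOrdering {
  public static void main(String[] args) {
    // Plain string comparison, as many tools (and humans scanning a
    // directory listing) effectively do: "alpha10" sorts before "alpha2"
    // because '1' < '2' at the first differing character.
    System.out.println("3.0.0-alpha10".compareTo("3.0.0-alpha2") < 0);    // true
    // Zero-padding the counter keeps string order aligned with release
    // order, which is the motivation for starting at -alpha-01.
    System.out.println("3.0.0-alpha-02".compareTo("3.0.0-alpha-10") < 0); // true
  }
}
```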

> I also think what we're doing now *is* considered releasing from trunk.
> Maybe I'm missing something, but we can't literally release from trunk
> without a release branch (e.g. branch-3.0.0-alpha1) unless we hold commits
> until the release or change fix versions afterwards.

We're not constrained on where we cut releases. If subsequent releases
will be off of trunk- not the -alpha branch- and we aren't
removing/holding any change until stabilizing at -beta, then we don't
need to maintain a separate release branch concurrent with development
on trunk. Bumping the release version, etc. might be useful, but a
living branch just mixes up the lineage and adds steps to commit
(trunk > 3.0.0-alpha > branch-2 > branch-2.8 > ...). If it's easier
for you to create the RC then OK. -C




Re: [VOTE] Release Apache Hadoop 2.7.3 RC0

2016-08-04 Thread Andrew Wang
Could a YARN person please comment on these two issues, one of which Vinay
also hit? If someone already triaged or filed JIRAs, I missed it.

On Mon, Jul 25, 2016 at 11:52 AM, Andrew Wang wrote:

> I'll also add that, as a YARN newbie, I did hit two usability issues.
> These are very unlikely to be regressions, and I can file JIRAs if they
> seem fixable.
>
> * I didn't have SSH to localhost set up (new laptop), and when I tried to
> run the Pi job, it'd exit my window manager session. I feel there must be a
> more developer-friendly solution here.
> * If you start the NodeManager and not the RM, the NM has a handler for
> SIGTERM and SIGINT that blocked my Ctrl-C and kill attempts during startup.
> I had to kill -9 it.
>
> On Mon, Jul 25, 2016 at 11:44 AM, Andrew Wang wrote:
>
>> I got asked this off-list, so as a reminder, only PMC votes are binding
>> on releases. Everyone is encouraged to vote on releases though!
>>
>> +1 (binding)
>>
>> * Downloaded source, built
>> * Started up HDFS and YARN
>> * Ran Pi job which as usual returned 4, and a little teragen
>>
>> On Mon, Jul 25, 2016 at 11:08 AM, Karthik Kambatla wrote:
>>
>>> +1 (binding)
>>>
>>> * Downloaded and build from source
>>> * Checked LICENSE and NOTICE
>>> * Pseudo-distributed cluster with FairScheduler
>>> * Ran MR and HDFS tests
>>> * Verified basic UI
>>>
>>> On Sun, Jul 24, 2016 at 1:07 PM, larry mccay  wrote:
>>>
>>> > +1 binding
>>> >
>>> > * downloaded and built from source
>>> > * checked LICENSE and NOTICE files
>>> > * verified signatures
>>> > * ran standalone tests
>>> > * installed pseudo-distributed instance on my mac
>>> > * ran through HDFS and mapreduce tests
>>> > * tested credential command
>>> > * tested webhdfs access through Apache Knox
>>> >
>>> >
>>> > On Fri, Jul 22, 2016 at 10:15 PM, Vinod Kumar Vavilapalli <
>>> > vino...@apache.org> wrote:
>>> >
>>> > > Hi all,
>>> > >
>>> > > I've created a release candidate RC0 for Apache Hadoop 2.7.3.
>>> > >
>>> > > As discussed before, this is the next maintenance release to follow
>>> up
>>> > > 2.7.2.
>>> > >
>>> > > The RC is available for validation at:
>>> > > http://home.apache.org/~vinodkv/hadoop-2.7.3-RC0/
>>> > >
>>> > > The RC tag in git is: release-2.7.3-RC0
>>> > >
>>> > > The maven artifacts are available via repository.apache.org at
>>> > > https://repository.apache.org/content/repositories/orgapachehadoop-1040/
>>> > >
>>> > > The release-notes are inside the tar-balls at location
>>> > > hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html. I
>>> > > hosted this at
>>> > > http://home.apache.org/~vinodkv/hadoop-2.7.3-RC0/releasenotes.html for
>>> > > your quick perusal.
>>> > >
>>> > > As you may have noted, a very long fix-cycle for the License & Notice
>>> > > issues (HADOOP-12893) caused 2.7.3 (along with every other Hadoop
>>> > release)
>>> > > to slip by quite a bit. This release's related discussion thread is
>>> > linked
>>> > > below: [1].
>>> > >
>>> > > Please try the release and vote; the vote will run for the usual 5
>>> days.
>>> > >
>>> > > Thanks,
>>> > > Vinod
>>> > >
>>> > > [1]: 2.7.3 release plan:
>>> > > https://www.mail-archive.com/hdfs-dev%40hadoop.apache.org/msg24439.html
>>> >
>>>
>>
>>
>


Re: [DISCUSS] Release numbering semantics with concurrent (>2) releases [Was Setting JIRA fix versions for 3.0.0 releases]

2016-08-04 Thread Andrew Wang
On Thu, Aug 4, 2016 at 12:41 PM, Chris Douglas  wrote:

> I agree with Konst. The virtues of branching (instead of releasing
> from trunk) and using the version suffix for the 3.x releases are lost
> on me. Both introduce opportunities for error, in commits, in
> consistent JIRA tagging, in packaging...
>
> I'm certainly open to alternate proposals for versioning and fix versions,
but to reiterate, I like this versioning since it imitates other enterprise
software. RHEL has versions like 6.2 Beta 2 and 7.0 Beta, so versions like
3.0.0-alpha1 will be immediately familiar to end users. Conversely, I've
met people who were confused by the 2.0/2.1/2.2 progression. As far as I
know, we were unique in using this style of versioning.

I also think what we're doing now *is* considered releasing from trunk.
Maybe I'm missing something, but we can't literally release from trunk
without a release branch (e.g. branch-3.0.0-alpha1) unless we hold commits
until the release or change fix versions afterwards.

Best,
Andrew


Re: Jenkins Node Labelling Documentation

2016-08-04 Thread Gav
On Fri, Aug 5, 2016 at 7:52 AM, Sean Busbey  wrote:

> On Thu, Aug 4, 2016 at 4:16 PM, Gav  wrote:
> >
> >
> > On Fri, Aug 5, 2016 at 3:14 AM, Sean Busbey  wrote:
> >>
> >> > Why? yahoo-not-h2 is really not required since H2 is the same as all
> the
> >> > other H* nodes.
> >>
> >> The yahoo-not-h2 label exists because the H2 node was misconfigured
> >> for a long time and would fail builds as a result.
> >
> >
> > Yes I know, but now it's not, so it is no longer needed.
> >
> >>
> >> What label will
> >> jobs that are currently configured to avoid H2 be migrated to? Will
> >> they be migrated automatically?
> >
> >
> > Currently I'm asking that projects make the move themselves. Most jobs
> would
> > be fine as they have
> > multiple labels, so just need to drop the yahoo-not-h2 label to give them
> > access to H2. If, when I drop the label I
> > see jobs with it in use, I'll remove it.
> >
>
> I don't see a label I can move to that covers the same machines as the
> current yahoo-not-h2 nodes and H2. It looks like a union of "hadoop"
> and "docker" would do it, but "docker" is going away. Also I have to
> have a single label for use in multi-configuration builds or jenkins
> will treat the two labels as an axis for test selection rather than as
> just a restriction for where the jobs can run. I could try to go back
> to using an expression, but IIRC that gave us things like *s in the
> path used for tests, which was not great.
>
> Can we maybe expand the Hadoop label? (or would "beefy" cover the set?)
>
> If the H* nodes are all the same, why do we need the labels HDFS,
> MapReduce, Pig, Falcon, Tez, and ZooKeeper in addition to the Hadoop
> label?
>

I was thinking the same, yep; those could all go too IMHO, but I wanted to
discuss that one separately.

Gav...


>
>
> --
> busbey
>


[jira] [Resolved] (HADOOP-13119) Web UI authorization error accessing /logs/ when Kerberos

2016-08-04 Thread Jeffrey E Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey E  Rodriguez resolved HADOOP-13119.
---
  Resolution: Invalid
Release Note: This Jira should have been an HDFS Jira. I am closing it since 
the solution is to set the property dfs.cluster.administrators, which allows 
a group or user access to /logs.

> Web UI authorization error accessing /logs/ when Kerberos
> -
>
> Key: HADOOP-13119
> URL: https://issues.apache.org/jira/browse/HADOOP-13119
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.4
>Reporter: Jeffrey E  Rodriguez
>
> Using Hadoop in secure mode:
> log in as a KDC user, kinit,
> start Firefox and enable Kerberos,
> access http://localhost:50070/logs/,
> and get 403 authorization errors.
> Only the hdfs user could access the logs.
> As a user, I would expect to be able to use the logs link in the web interface.
> Same results if using curl:
> curl -v  --negotiate -u tester:  http://localhost:50070/logs/
>  HTTP/1.1 403 User tester is unauthorized to access this page.
> So:
> 1. Either don't show the links if only the hdfs user is able to access them.
> 2. Provide a mechanism to add users to the web application realm.
> 3. Note that we pass authentication, so the issue is authorization to 
> /logs/.
> I suspect that the /logs/ path is secured in the web descriptor, so users by 
> default don't have access to secure paths.
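For reference, the resolution described in the release note amounts to an entry like the following in hdfs-site.xml. The value shown is only an example; the property takes the standard Hadoop ACL form of comma-separated users, a space, then comma-separated groups:

```xml
<property>
  <name>dfs.cluster.administrators</name>
  <!-- example ACL: user "tester" plus everyone in group "hdfsadmins" -->
  <value>tester hdfsadmins</value>
</property>
```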






Re: Jenkins Node Labelling Documentation

2016-08-04 Thread Sean Busbey
On Thu, Aug 4, 2016 at 4:16 PM, Gav  wrote:
>
>
> On Fri, Aug 5, 2016 at 3:14 AM, Sean Busbey  wrote:
>>
>> > Why? yahoo-not-h2 is really not required since H2 is the same as all the
>> > other H* nodes.
>>
>> The yahoo-not-h2 label exists because the H2 node was misconfigured
>> for a long time and would fail builds as a result.
>
>
> Yes I know, but now it's not, so it is no longer needed.
>
>>
>> What label will
>> jobs that are currently configured to avoid H2 be migrated to? Will
>> they be migrated automatically?
>
>
> Currently I'm asking that projects make the move themselves. Most jobs would
> be fine as they have
> multiple labels, so just need to drop the yahoo-not-h2 label to give them
> access to H2. If, when I drop the label I
> see jobs with it in use, I'll remove it.
>

I don't see a label I can move to that covers the same machines as the
current yahoo-not-h2 nodes and H2. It looks like a union of "hadoop"
and "docker" would do it, but "docker" is going away. Also I have to
have a single label for use in multi-configuration builds or jenkins
will treat the two labels as an axis for test selection rather than as
just a restriction for where the jobs can run. I could try to go back
to using an expression, but IIRC that gave us things like *s in the
path used for tests, which was not great.

Can we maybe expand the Hadoop label? (or would "beefy" cover the set?)

If the H* nodes are all the same, why do we need the labels HDFS,
MapReduce, Pig, Falcon, Tez, and ZooKeeper in addition to the Hadoop
label?


-- 
busbey




Re: Jenkins Node Labelling Documentation

2016-08-04 Thread Gav
On Fri, Aug 5, 2016 at 7:28 AM, Andrew Bayer  wrote:

> fwiw, I think the docker label should remain - the Rackspace dynamically
> provisioned agents, for example, are too small to really be a good option
> for most, if not all, jobs that use Docker. *shrug*
>

Too small how? Disk space, RAM, CPU, other?


>
> Alternatively, a label that distinguishes between
> the-same-in-configuration physical vs non-physical (or beefy vs non-beefy)
> agents might be worthwhile.
>

It might be worth looking into, I guess. I'm trying to make things easier, and
I'm not certain that removing one label to replace it with another will do
that, though it would drop the 'software naming' aspect.

At the end of the day, we are discussing a proposal, just like the recent
JDK, Maven and Ant discussions.
If consensus is that the 'docker' label is useful and we are better with it
than without, I'm fine with that.

Gav...


>
> A.
>
> On Thu, Aug 4, 2016 at 2:16 PM, Gav  wrote:
>
>> On Fri, Aug 5, 2016 at 3:14 AM, Sean Busbey  wrote:
>>
>> > > Why? yahoo-not-h2 is really not required since H2 is the same as all
>> the
>> > other H* nodes.
>> >
>> > The yahoo-not-h2 label exists because the H2 node was misconfigured
>> > for a long time and would fail builds as a result.
>>
>>
>> Yes I know, but now it's not, so it is no longer needed.
>>
>>
>> > What label will
>> > jobs that are currently configured to avoid H2 be migrated to? Will
>> > they be migrated automatically?
>> >
>>
>> Currently I'm asking that projects make the move themselves. Most jobs
>> would be fine as they have
>> multiple labels, so just need to drop the yahoo-not-h2 label to give them
>> access to H2. If, when I drop the label I
>> see jobs with it in use, I'll remove it.
>>
>>
>> >
>> > > The 'docker' label references installed software and should be
>> dropped.
>> > We have and will continue to install docker wherever it is required.
>> >
>> > How do we determine where it's required? If I have a job that relies
>> > on docker being installed, do I just get to have it run unlabeled?
>> >
>>
>> You are reading too much into it; what if you have a job that relies on
>> Ant, or Tomcat, or Gradle, or ...? Where are the
>> labels for those? There aren't any, and there shouldn't be, just like
>> Docker should never have been a label.
>> Docker is already installed on most nodes. If you find it missing, report
>> it.
>>
>> HTH
>>
>> Gav...
>>
>>
>> > On Thu, Aug 4, 2016 at 4:18 AM, Gav  wrote:
>> > > Hi All,
>> > >
>> > > Following on from my earlier mails regarding Java, Maven and Ant
>> > > consolidations, I thought
>> > > you might like a page detailing the Jenkins Labels and which nodes
>> they
>> > > belong to.
>> > >
>> > > I've put it up here :-
>> > >
>> > > https://cwiki.apache.org/confluence/display/INFRA/Jenkins+node+labels
>> > >
>> > > I hope you find it useful.
>> > >
>> > > In addition I propose to remove a couple of redundant labels to make
>> > > choosing a label
>> > > easier.
>> > >
>> > > Proposal is to remove labels yahoo-not-h2, ubuntu and docker. Why?
>> > > yahoo-not-h2 is really not required since H2 is the same as all the
>> other
>> > > H* nodes. ubuntu is a copy of Ubuntu and both are identical.
>> > > The 'docker' label references installed software and should be
>> dropped.
>> > We
>> > > have and will continue to install docker wherever it is required.
>> > >
>> > > If no objections I'll remove these labels in ~2 weeks time on 19th
>> August
>> > >
>> > > HTH
>> > >
>> > > Gav... (ASF Infrastructure Team)
>> >
>> >
>> >
>> > --
>> > busbey
>> >
>>
>
>


Re: Jenkins Node Labelling Documentation

2016-08-04 Thread Andrew Bayer
fwiw, I think the docker label should remain - the Rackspace dynamically
provisioned agents, for example, are too small to really be a good option
for most, if not all, jobs that use Docker. *shrug*

Alternatively, a label that distinguishes between the-same-in-configuration
physical vs non-physical (or beefy vs non-beefy) agents might be worthwhile.

A.

On Thu, Aug 4, 2016 at 2:16 PM, Gav  wrote:

> On Fri, Aug 5, 2016 at 3:14 AM, Sean Busbey  wrote:
>
> > > Why? yahoo-not-h2 is really not required since H2 is the same as all
> the
> > other H* nodes.
> >
> > The yahoo-not-h2 label exists because the H2 node was misconfigured
> > for a long time and would fail builds as a result.
>
>
> Yes I know, but now it's not, so it is no longer needed.
>
>
> > What label will
> > jobs that are currently configured to avoid H2 be migrated to? Will
> > they be migrated automatically?
> >
>
> Currently I'm asking that projects make the move themselves. Most jobs
> would be fine as they have
> multiple labels, so just need to drop the yahoo-not-h2 label to give them
> access to H2. If, when I drop the label I
> see jobs with it in use, I'll remove it.
>
>
> >
> > > The 'docker' label references installed software and should be dropped.
> > We have and will continue to install docker wherever it is required.
> >
> > How do we determine where it's required? If I have a job that relies
> > on docker being installed, do I just get to have it run unlabeled?
> >
>
> You are reading too much into it; what if you have a job that relies on
> Ant, or Tomcat, or Gradle, or ...? Where are the
> labels for those? There aren't any, and there shouldn't be, just like
> Docker should never have been a label.
> Docker is already installed on most nodes. If you find it missing, report
> it.
>
> HTH
>
> Gav...
>
>
> > On Thu, Aug 4, 2016 at 4:18 AM, Gav  wrote:
> > > Hi All,
> > >
> > > Following on from my earlier mails regarding Java, Maven and Ant
> > > consolidations, I thought
> > > you might like a page detailing the Jenkins Labels and which nodes they
> > > belong to.
> > >
> > > I've put it up here :-
> > >
> > > https://cwiki.apache.org/confluence/display/INFRA/Jenkins+node+labels
> > >
> > > I hope you find it useful.
> > >
> > > In addition I propose to remove a couple of redundant labels to make
> > > choosing a label
> > > easier.
> > >
> > > Proposal is to remove labels yahoo-not-h2, ubuntu and docker. Why?
> > > yahoo-not-h2 is really not required since H2 is the same as all the
> other
> > > H* nodes. ubuntu is a copy of Ubuntu and both are identical.
> > > The 'docker' label references installed software and should be dropped.
> > We
> > > have and will continue to install docker wherever it is required.
> > >
> > > If no objections I'll remove these labels in ~2 weeks time on 19th
> August
> > >
> > > HTH
> > >
> > > Gav... (ASF Infrastructure Team)
> >
> >
> >
> > --
> > busbey
> >
>


Re: Jenkins Node Labelling Documentation

2016-08-04 Thread Gav
On Fri, Aug 5, 2016 at 3:14 AM, Sean Busbey  wrote:

> > Why? yahoo-not-h2 is really not required since H2 is the same as all the
> other H* nodes.
>
> The yahoo-not-h2 label exists because the H2 node was misconfigured
> for a long time and would fail builds as a result.


Yes I know, but now it's not, so it is no longer needed.


> What label will
> jobs that are currently configured to avoid H2 be migrated to? Will
> they be migrated automatically?
>

Currently I'm asking that projects make the move themselves. Most jobs
would be fine as they have
multiple labels, so just need to drop the yahoo-not-h2 label to give them
access to H2. If, when I drop the label I
see jobs with it in use, I'll remove it.


>
> > The 'docker' label references installed software and should be dropped.
> We have and will continue to install docker wherever it is required.
>
> How do we determine where it's required? If I have a job that relies
> on docker being installed, do I just get to have it run unlabeled?
>

You are reading too much into it; what if you have a job that relies on
Ant, or Tomcat, or Gradle, or ...? Where are the
labels for those? There aren't any, and there shouldn't be, just like
Docker should never have been a label.
Docker is already installed on most nodes. If you find it missing, report
it.

HTH

Gav...


> On Thu, Aug 4, 2016 at 4:18 AM, Gav  wrote:
> > Hi All,
> >
> > Following on from my earlier mails regarding Java, Maven and Ant
> > consolidations, I thought
> > you might like a page detailing the Jenkins Labels and which nodes they
> > belong to.
> >
> > I've put it up here :-
> >
> > https://cwiki.apache.org/confluence/display/INFRA/Jenkins+node+labels
> >
> > I hope you find it useful.
> >
> > In addition I propose to remove a couple of redundant labels to make
> > choosing a label
> > easier.
> >
> > Proposal is to remove labels yahoo-not-h2, ubuntu and docker. Why?
> > yahoo-not-h2 is really not required since H2 is the same as all the other
> > H* nodes. ubuntu is a copy of Ubuntu and both are identical.
> > The 'docker' label references installed software and should be dropped.
> We
> > have and will continue to install docker wherever it is required.
> >
> > If no objections I'll remove these labels in ~2 weeks time on 19th August
> >
> > HTH
> >
> > Gav... (ASF Infrastructure Team)
>
>
>
> --
> busbey
>


Re: [DISCUSS] Release numbering semantics with concurrent (>2) releases [Was Setting JIRA fix versions for 3.0.0 releases]

2016-08-04 Thread Chris Douglas
I agree with Konst. The virtues of branching (instead of releasing
from trunk) and using the version suffix for the 3.x releases are lost
on me. Both introduce opportunities for error, in commits, in
consistent JIRA tagging, in packaging...

We can mark stability on the website. If someone builds a cluster
without doing this basic research, marking stability in the version
number saves them from the least of the problems they'll have. This
adds complexity for clarity that's redundant, at best. -C


On Thu, Aug 4, 2016 at 11:20 AM, Andrew Wang  wrote:
> Hi Konst, thanks for commenting,
>
> On Wed, Aug 3, 2016 at 11:29 PM, Konstantin Shvachko wrote:
>
>> 1. I probably missed something but I didn't get it how "alpha"s made their
>> way into release numbers again. This was discussed on several occasions and
>> I thought the common perception was to use just three level numbers for
>> release versioning and avoid branding them.
>> It is particularly confusing to have 3.0.0-alpha1 and 3.0.0-alpha2. What
>> is alphaX - fourth level? I think releasing 3.0.0 and setting trunk to
>> 3.1.0 would be perfectly in line with our current release practices.
>>
>
> We discussed release numbering a while ago when discussing the release plan
> for 3.0.0, and agreed on this scheme. "-alphaX" is essentially a fourth
> level as you say, but the intent is to only use it (and "-betaX") in the
> leadup to 3.0.0.
>
> The goal here is clarity for end users, since most other enterprise
> software uses an a.0.0 version to denote the GA of a new major version. Same
> for a.b.0 for a new minor version, though we haven't talked about that yet.
> The alphaX and betaX scheme also shares similarity to release versioning of
> other enterprise software.
>
>>
>> 2. I do not see any confusions with releasing 2.8.0 after 3.0.0.
>> The release number is not intended to reflect historical release sequence,
>> but rather the point in the source tree, which it was branched off. So one
>> can release 2.8, 2.9, etc. after or before 3.0.
>>
>
> As described earlier in this thread, the issue here is setting the fix
> versions such that the changelog is a useful diff from a previous version,
> and also clear about what changes are present in each branch. If we do not
> order a specific 2.x before 3.0, then we don't know what 2.x to diff from.
>
>>
>> 3. I agree that current 3.0.0 branch can be dropped and re-cut. We may
>> think of another rule that if a release branch is not released in 3 months
>> it should be abandoned. Which is applicable to branch 2.8.0 and it is too
>> much work syncing it with branch-2.
>>
>> Time-based rules are tough here. I'd prefer we continue to leave this up
> to release managers. If you think we should recut branch-2.8, recommend
> pinging Vinod and discussing on a new thread.
>
> Best,
> Andrew

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Release numbering semantics with concurrent (>2) releases [Was Setting JIRA fix versions for 3.0.0 releases]

2016-08-04 Thread Andrew Wang
Hi Konst, thanks for commenting,

On Wed, Aug 3, 2016 at 11:29 PM, Konstantin Shvachko 
wrote:

> 1. I probably missed something but I didn't get it how "alpha"s made their
> way into release numbers again. This was discussed on several occasions and
> I thought the common perception was to use just three level numbers for
> release versioning and avoid branding them.
> It is particularly confusing to have 3.0.0-alpha1 and 3.0.0-alpha2. What
> is alphaX - fourth level? I think releasing 3.0.0 and setting trunk to
> 3.1.0 would be perfectly in line with our current release practices.
>

We discussed release numbering a while ago when discussing the release plan
for 3.0.0, and agreed on this scheme. "-alphaX" is essentially a fourth
level as you say, but the intent is to only use it (and "-betaX") in the
leadup to 3.0.0.

The goal here is clarity for end users, since most other enterprise
software uses an a.0.0 version to denote the GA of a new major version. Same
for a.b.0 for a new minor version, though we haven't talked about that yet.
The alphaX and betaX scheme also shares similarity to release versioning of
other enterprise software.

>
> 2. I do not see any confusions with releasing 2.8.0 after 3.0.0.
> The release number is not intended to reflect historical release sequence,
> but rather the point in the source tree, which it was branched off. So one
> can release 2.8, 2.9, etc. after or before 3.0.
>

As described earlier in this thread, the issue here is setting the fix
versions such that the changelog is a useful diff from a previous version,
and also clear about what changes are present in each branch. If we do not
order a specific 2.x before 3.0, then we don't know what 2.x to diff from.

>
> 3. I agree that current 3.0.0 branch can be dropped and re-cut. We may
> think of another rule that if a release branch is not released in 3 months
> it should be abandoned. Which is applicable to branch 2.8.0 and it is too
> much work syncing it with branch-2.
>
> Time-based rules are tough here. I'd prefer we continue to leave this up
to release managers. If you think we should recut branch-2.8, recommend
pinging Vinod and discussing on a new thread.

Best,
Andrew


[jira] [Created] (HADOOP-13468) In HA, NameNode fails to start if any Quorum hostname is unresolved

2016-08-04 Thread Karthik Palanisamy (JIRA)
Karthik Palanisamy created HADOOP-13468:
---

 Summary: In HA, NameNode fails to start if any Quorum hostname is 
unresolved
 Key: HADOOP-13468
 URL: https://issues.apache.org/jira/browse/HADOOP-13468
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0, 2.8.0
 Environment: HDP-2.4.0
Reporter: Karthik Palanisamy


2016-08-03 02:53:53,760 ERROR namenode.NameNode (NameNode.java:main(1712)) - 
Failed to start namenode.
java.lang.IllegalArgumentException: Unable to construct journal, 
qjournal://1:8485;2:8485;3:8485/shva
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1637)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:282)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.initSharedJournalsForRead(FSEditLog.java:260)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.initEditLog(FSImage.java:789)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:634)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:294)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:983)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:688)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:662)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:726)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:951)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:935)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1641)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1707)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.createJournal(FSEditLog.java:1635)
... 13 more
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.getName(IPCLoggerChannelMetrics.java:107)
at 
org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannelMetrics.create(IPCLoggerChannelMetrics.java:91)
at 
org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.<init>(IPCLoggerChannel.java:178)
at 
org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel$1.createLogger(IPCLoggerChannel.java:156)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:367)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createLoggers(QuorumJournalManager.java:149)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.<init>(QuorumJournalManager.java:116)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.<init>(QuorumJournalManager.java:105)
... 18 more
2016-08-03 02:53:53,765 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - 
Exiting with status 1
2016-08-03 02:53:53,768 INFO  namenode.NameNode (LogAdapter.java:info(47)) - 
SHUTDOWN_MSG:

*and the failover is not successful*

I have attached a patch. It allows the NameNode to start if a majority of the 
quorum hosts are resolvable. It logs a warning for each unresolvable quorum 
host, and throws an UnknownHostException if a majority of the journal nodes 
are unresolvable.
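The majority-resolvability rule described above can be sketched as follows. This is a minimal, hypothetical illustration only: the class `QuorumResolveCheck` and its method names are invented for this sketch and do not correspond to the actual patch attached to HADOOP-13468.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.List;

// Hypothetical sketch: allow startup only if a strict majority of the
// journal quorum hosts can be resolved via DNS.
public class QuorumResolveCheck {

    /** Counts hosts that resolve; logs a warning for each unresolvable host. */
    static int countResolvable(List<String> hosts) {
        int resolved = 0;
        for (String host : hosts) {
            try {
                InetAddress.getByName(host);
                resolved++;
            } catch (UnknownHostException e) {
                System.err.println("WARN: cannot resolve journal node " + host);
            }
        }
        return resolved;
    }

    /** Throws if fewer than a strict majority of hosts resolve. */
    static void checkMajorityResolvable(List<String> hosts) throws UnknownHostException {
        int resolved = countResolvable(hosts);
        if (resolved <= hosts.size() / 2) {
            throw new UnknownHostException(
                "Only " + resolved + " of " + hosts.size()
                + " journal nodes are resolvable");
        }
    }

    public static void main(String[] args) throws Exception {
        // "localhost" is used here because it is guaranteed to resolve.
        checkMajorityResolvable(List.of("localhost", "localhost"));
        System.out.println("majority resolvable");
    }
}
```

With this approach a single stale DNS entry no longer aborts NameNode startup, while a genuinely unreachable quorum still fails fast with a clear exception.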



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Jenkins Node Labelling Documentation

2016-08-04 Thread Sean Busbey
> Why? yahoo-not-h2 is really not required since H2 is the same as all the 
> other H* nodes.

The yahoo-not-h2 label exists because the H2 node was misconfigured
for a long time and would fail builds as a result. What label will
jobs that are currently configured to avoid H2 be migrated to? Will
they be migrated automatically?

> The 'docker' label references installed software and should be dropped. We 
> have and will continue to install docker wherever it is required.

How do we determine where it's required? If I have a job that relies
on docker being installed, do I just get to have it run unlabeled?

On Thu, Aug 4, 2016 at 4:18 AM, Gav  wrote:
> Hi All,
>
> Following on from my earlier mails regarding Java, Maven and Ant
> consolidations, I thought
> you might like a page detailing the Jenkins Labels and which nodes they
> belong to.
>
> I've put it up here :-
>
> https://cwiki.apache.org/confluence/display/INFRA/Jenkins+node+labels
>
> I hope you find it useful.
>
> In addition I propose to remove a couple of redundant labels to make
> choosing a label
> easier.
>
> Proposal is to remove labels yahoo-not-h2, ubuntu and docker. Why?
> yahoo-not-h2 is really not required since H2 is the same as all the other
> H* nodes. ubuntu is a copy of Ubuntu and both are identical.
> The 'docker' label references installed software and should be dropped. We
> have and will continue to install docker wherever it is required.
>
> If no objections I'll remove these labels in ~2 weeks time on 19th August
>
> HTH
>
> Gav... (ASF Infrastructure Team)



-- 
busbey

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-08-04 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/

[Aug 3, 2016 2:31:49 PM] (kihwal) HADOOP-13426. More efficiently build IPC 
responses. Contributed by Daryn
[Aug 3, 2016 4:53:41 PM] (kihwal) HDFS-10656. Optimize conversion of byte 
arrays back to path string.
[Aug 3, 2016 5:14:59 PM] (kihwal) HDFS-742. A down DataNode makes Balancer to 
hang on repeatingly asking
[Aug 3, 2016 6:12:43 PM] (kihwal) HDFS-10674. Optimize creating a full path 
from an inode. Contributed by
[Aug 3, 2016 6:22:22 PM] (kihwal) HADOOP-13483. Optimize IPC server protobuf 
decoding. Contributed by
[Aug 3, 2016 6:53:14 PM] (jlowe) YARN-4280. CapacityScheduler reservations may 
not prevent indefinite
[Aug 3, 2016 7:17:25 PM] (jlowe) YARN-5462. 
TestNodeStatusUpdater.testNodeStatusUpdaterRetryAndNMShutdown
[Aug 3, 2016 7:42:05 PM] (jing9) HDFS-10710. In 
BlockManager#rescanPostponedMisreplicatedBlocks(),
[Aug 3, 2016 7:51:44 PM] (jlowe) YARN-5469. Increase timeout of 
TestAmFilter.testFilter. Contributed by
[Aug 3, 2016 8:17:30 PM] (jlowe) HADOOP-10980. TestActiveStandbyElector fails 
occasionally in trunk.
[Aug 3, 2016 8:20:20 PM] (kihwal) HDFS-10569. A bug causes OutOfIndex error in 
BlockListAsLongs.
[Aug 3, 2016 9:19:47 PM] (weichiu) HADOOP-13458. 
LoadBalancingKMSClientProvider#doOp should log IOException
[Aug 4, 2016 5:51:47 AM] (brahma) MAPREDUCE-6682. TestMRCJCFileOutputCommitter 
fails intermittently




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.ha.TestZKFailoverController 
   hadoop.tracing.TestTracing 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.yarn.logaggregation.TestAggregatedLogFormat 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.client.api.impl.TestYarnClient 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/artifact/out/diff-compile-javac-root.txt
  [172K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/artifact/out/diff-javadoc-javadoc-root.txt
  [2.3M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [120K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [524K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [24K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [124K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/123/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org



-
To unsubscribe, e-mail: common-dev-unsubscr.

Jenkins Node Labelling Documentation

2016-08-04 Thread Gav
Hi All,

Following on from my earlier mails regarding Java, Maven and Ant
consolidations, I thought
you might like a page detailing the Jenkins Labels and which nodes they
belong to.

I've put it up here :-

https://cwiki.apache.org/confluence/display/INFRA/Jenkins+node+labels

I hope you find it useful.

In addition I propose to remove a couple of redundant labels to make
choosing a label
easier.

Proposal is to remove labels yahoo-not-h2, ubuntu and docker. Why?
yahoo-not-h2 is really not required since H2 is the same as all the other
H* nodes. ubuntu is a copy of Ubuntu and both are identical.
The 'docker' label references installed software and should be dropped. We
have and will continue to install docker wherever it is required.

If no objections I'll remove these labels in ~2 weeks time on 19th August

HTH

Gav... (ASF Infrastructure Team)