Re: [OpenStack-Infra] [infra] confusion on project/jobs between zuul and jenkins job builder

2017-06-12 Thread Lenny Verkhovsky
Hi Xinliang,
Can you provide more details? It would help if you could share the relevant
files via [1].
In general, JJB YAML files are translated into Jenkins jobs,
and the Zuul YAML (layout.yaml) is used by Zuul to trigger a specific job
according to the patchset.
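
As a rough illustration of how the two fit together (hypothetical job and
project names, not taken from any real configuration):

# JJB YAML: defines the job itself, i.e. what Jenkins actually runs
- job:
    name: gate-myproject-unit-tests
    builders:
      - shell: 'tox -e py27'

# Zuul layout.yaml: decides when that Jenkins job is triggered
# (assumes a pipeline named "check" is defined elsewhere in the layout)
projects:
  - name: openstack/myproject
    check:
      - gate-myproject-unit-tests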

[1] paste.openstack.org

Best Regards
Lenny

From: Xinliang Liu [mailto:xinliang@linaro.org]
Sent: Tuesday, June 13, 2017 6:00 AM
To: openstack-infra 
Subject: [OpenStack-Infra] [infra] confusion on project/jobs between zuul and
jenkins job builder

Hi,
I am a bit confused about Zuul and Jenkins Job Builder.
I see that both Zuul and Jenkins Job Builder describe projects/jobs in their
YAML config files.
What is the difference, and how do these two kinds of projects/jobs work
together?

Best,
-xinliang

[OpenStack-Infra] [infra] confusion on project/jobs between zuul and jenkins job builder

2017-06-12 Thread Xinliang Liu
Hi,
I am a bit confused about Zuul and Jenkins Job Builder.
I see that both Zuul and Jenkins Job Builder describe projects/jobs in their
YAML config files.
What is the difference, and how do these two kinds of projects/jobs work
together?

Best,
-xinliang

Re: [OpenStack-Infra] About aarch64 third party CI

2017-06-12 Thread Xinliang Liu
On 13 June 2017 at 01:00, Ricardo Carrillo Cruz <
ricardo.carrillo.c...@gmail.com> wrote:

>
>
> 2017-06-09 22:18 GMT+02:00 Paul Belanger :
>
>> On Fri, Jun 09, 2017 at 07:58:44PM +, Jeremy Stanley wrote:
>> > On 2017-06-07 14:26:10 +0800 (+0800), Xinliang Liu wrote:
>> > [...]
>> > > we already have our own pre-built debian cloud image, could I just
>> > > use it and not use the one built by diskimage-builder?
>> > [...]
>> >
>> > The short answer is that nodepool doesn't currently have support for
>> > directly using an image provided independent of its own image build
>> > process. Clark was suggesting[*] in IRC today that it might be
>> > possible to inject records into Zookeeper (acting as a "fake"
>> > nodepool-builder daemon basically) to accomplish this, but nobody
>> > has yet implemented such a solution to our knowledge.
>> >
>> > Longer term, I think we do want a feature in nodepool to be able to
>> > specify the ID of a prebuilt image for a label/provider (at least we
>> > discussed that we wouldn't reject the idea if someone proposed a
>> > suitable implementation). Just be aware that nodepool's use of
>> > diskimage-builder to regularly rebuild images is intentional and
>> > useful since it ensures images are updated with the latest packages,
>> > kernels, warm caches and whatever else you specify in your elements
>> > so reducing job runtimes as they spend less effort updating these
>> > things on every run.
>> >
>> > [*] http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%
>> 23openstack-infra.2017-06-09.log.html#t2017-06-09T15:32:27-2 >
>> > --
>> > Jeremy Stanley
>>
>> Actually, I think 458073[1] aims to fix this use case.  I haven't tried it
>> myself but it adds support for using images which are not built and
>> managed by
>> nodepool.
>>
>> This is currently only on feature/zuulv3 branch.
>>
>> [1] https://review.openstack.org/#/c/458073/
>>
>>
> That's right, support for cloud-images on feature/zuulv3 is now merged and
> working.
> I just set up a Nodepool using this new feature over the weekend.
>
> This is a nodepool.yaml that can help you get going:
>
> http://paste.openstack.org/show/612191/
>

Great! I will give it a try and see what happens.

Thanks,
-xinliang

Re: [OpenStack-Infra] About aarch64 third party CI

2017-06-12 Thread Xinliang Liu
Hi Jeremy,
Thanks for the reply ;-)

On 10 June 2017 at 03:58, Jeremy Stanley  wrote:

> On 2017-06-07 14:26:10 +0800 (+0800), Xinliang Liu wrote:
> [...]
> > we already have our own pre-built debian cloud image, could I just
> > use it and not use the one built by diskimage-builder?
> [...]
>
> The short answer is that nodepool doesn't currently have support for
> directly using an image provided independent of its own image build
> process. Clark was suggesting[*] in IRC today that it might be
> possible to inject records into Zookeeper (acting as a "fake"
> nodepool-builder daemon basically) to accomplish this, but nobody
> has yet implemented such a solution to our knowledge.
>

Got it, thanks.


>
> Longer term, I think we do want a feature in nodepool to be able to
> specify the ID of a prebuilt image for a label/provider (at least we
> discussed that we wouldn't reject the idea if someone proposed a
> suitable implementation). Just be aware that nodepool's use of
> diskimage-builder to regularly rebuild images is intentional and
> useful since it ensures images are updated with the latest packages,
>

I think this is the key point: it ensures the tests run on a truly
up-to-date gating image.

> kernels, warm caches and whatever else you specify in your elements
> so reducing job runtimes as they spend less effort updating these
> things on every run.
>

How often will nodepool rebuild the images?
If it is as frequently as every day, shall we instead set up a job that
publishes pre-built gating images daily, so that other CI tests can simply
consume them (similar to pulling a published Docker image, although here it
would be a VM image rather than a container image)?
Building a gating image needs to include lots of elements, and even with warm
caches diskimage-builder still has to rebuild it step by step. What I mean is
that building the image takes quite some time.
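
(As an aside: if the nodepool version in use supports a per-image
rebuild-age setting, the rebuild cadence should be tunable roughly as in the
hypothetical sketch below; this is not taken from any actual config.)

diskimages:
  - name: debian-gating
    # rebuild at most once per day (value in seconds); whether this option
    # exists depends on the nodepool branch/version in use
    rebuild-age: 86400
    elements:
      - debian-minimal
      - nodepool-base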

Thanks,
-xinliang



>
> [*] http://eavesdrop.openstack.org/irclogs/%23openstack-
> infra/%23openstack-infra.2017-06-09.log.html#t2017-06-09T15:32:27-2 >
> --
> Jeremy Stanley
>

Re: [OpenStack-Infra] On the subject of HTTP interfaces and Zuul

2017-06-12 Thread James E. Blair
Clint Byrum  writes:

> Excerpts from corvus's message of 2017-06-09 13:11:00 -0700:
>> Clark Boylan  writes:
>> 
>> > I'm wary of this simply because it looks a lot like repeating
>> > OpenStack's (now failed) decision to stick web servers in a bunch of
>> > python processes then do cooperative multithreading with them along with
>> > all your application logic. It just gets complicated. I also think this
>> > underestimates the value of using tools people are familiar with (wsgi
>> > and flask) particularly if making it easy to jump in and building
>> > community is a goal.
>> 
>> I agree that mixing an asyncio based httpserver with application logic
>> using cooperative multithreading is not a good idea.  Happily that is
>> not the proposal.  The proposal is that the webserver be a separate
>> process from the rest of Zuul, it would be an independently scaleable
>> component, and *only* the webserver would use asyncio.
>> 
>
> I'm not totally convinced that having an HTTP service in the scheduler
> that gets proxied to when appropriate is the worst idea in the short term,
> since we already have one and it already works reasonably well with paste,
> we just want to get rid of paste faster than we can refactor it out by
> making a ZK backend.
>
> Even if we remove paste and create a web tier aiohttp thing, we end up
> writing most of what would be complex about doing it in-process in the
> scheduler. So, to tack gearman on top of that, versus just letting the
> reverse proxy do its job, seems like extra work.

What I'd like to get out of this conversation is a shared understanding
of what the web tier for Zuul should look like in the future, so that we
can know where we want to end up eventually, but *not* a set of
additional requirements for Zuul v3.0.  In other words, I think this is
a long-term, rather than short-term conversation.

The way I see it is that we're adding a bunch of new functionality to an
area of Zuul that we've traditionally kept very simple.  We're growing
from a simple JSON endpoint to support websockets, event injection via
hooks, and a full-blown API for historic data.

That last item in particular calls out for a real web framework.  Since
it is new work and has substantial interaction with the web framework,
it would be good to know what our end state is, so that folks working on
it can go ahead and head in that direction.

The other aspects, which are largely already implemented, can be ported
over in the fullness of time.

We do not need to change how we are doing webhooks or log streaming for
Zuul v3.0.

In fact, I imagine that at least initially, we would implement something
in openstack-infra like what you describe, Clint.  We will have an
Apache server which proxies status.json requests and webhooks to
zuul-scheduler, and proxies websocket requests to the streaming server.

As time permits, we can incorporate those into a comprehensive web
server with the framework we choose.

Does that sound like a good plan?

Does aiohttp alone fit the bill as Monty suggests, or do we need to
consider something else?

-Jim


Re: [OpenStack-Infra] About aarch64 third party CI

2017-06-12 Thread James E. Blair
Ricardo Carrillo Cruz  writes:

> This is a nodepool.yaml that can help you get going:
>
> http://paste.openstack.org/show/612191/

Glad it worked!

You can drop 'zmq-publishers' from the config entirely.

If 'images-dir' and 'diskimages' are required, then I would consider
that a bug; we should have default values for those so you don't need to
provide them in this case.

That config snippet also illustrates something I didn't quite realize at
the time I reviewed https://review.openstack.org/472959.  I don't think
we should be using UUIDs as keys in nodepool because they are hard for
humans to distinguish from each other.  It could make for somewhat
error-prone configuration.

So instead of:

cloud-images:
  - name: 9e884aab-a46e-46de-b57c-a044da0f45cd
pools:
  - name: main
    labels:
      - name: xenial
        cloud-image: 9e884aab-a46e-46de-b57c-a044da0f45cd

If someone wants to specify an image by id, we should have:

cloud-images:
  - name: mycloudimagename
    id: 9e884aab-a46e-46de-b57c-a044da0f45cd
pools:
  - name: main
    labels:
      - name: xenial
        cloud-image: mycloudimagename

And then if you omit the 'id' field, we should just implicitly use
'name' as before.  This way it's easy to see which of several
cloud-images a label uses, and, when it's time to update the UUID for
that cloud image, that only needs to happen in one place.

-Jim


[OpenStack-Infra] Opendaylight Internship (Jenkins job builder) Week 2 status update June 5th-June 11th 2017

2017-06-12 Thread Yolande Amate
Hi,

This past week I have done the following:

- Updated GitLab Plugin to use convert_mapping_to_xml[1]
- Updated artifactory_repository function to use convert_mapping_to_xml[2]
- Updated artifactory_common_details function to use convert_mapping_to_xml[3]
- Updated shell and python plugins to use convert_mapping_to_xml[4]
- Updated convert_mapping_to_xml function to support hard-coded tag values[5]

One main issue I ran into while working this week was figuring out how
to handle tags whose values are hard-coded, like
"XML.SubElement(shell, 'command').text = data", without having to
depend on the YAML optname or the value returned by data.get(...),
given that so many functions set tag values this way.

This week, I plan on updating more plugins to make use of
convert_mapping_to_xml and doing code review.

Attached below is a link to my Jenkins Job Builder project proposal
document, with a table of the plugins I have worked on so far (under
activity details)[6].

Cheers,
Yolande

[1]https://review.openstack.org/#/c/471867/
[2]https://review.openstack.org/#/c/471894/
[3]https://review.openstack.org/#/c/472860/
[4]https://review.openstack.org/#/c/473051/
[5]https://review.openstack.org/#/c/473156/
[6]https://docs.google.com/document/d/13YGxDwt76K5mhRUlmC_bzXK_mCRSiVGStp-CJL4VYtA/edit#


Re: [OpenStack-Infra] About aarch64 third party CI

2017-06-12 Thread Ricardo Carrillo Cruz
2017-06-09 22:18 GMT+02:00 Paul Belanger :

> On Fri, Jun 09, 2017 at 07:58:44PM +, Jeremy Stanley wrote:
> > On 2017-06-07 14:26:10 +0800 (+0800), Xinliang Liu wrote:
> > [...]
> > > we already have our own pre-built debian cloud image, could I just
> > > use it and not use the one built by diskimage-builder?
> > [...]
> >
> > The short answer is that nodepool doesn't currently have support for
> > directly using an image provided independent of its own image build
> > process. Clark was suggesting[*] in IRC today that it might be
> > possible to inject records into Zookeeper (acting as a "fake"
> > nodepool-builder daemon basically) to accomplish this, but nobody
> > has yet implemented such a solution to our knowledge.
> >
> > Longer term, I think we do want a feature in nodepool to be able to
> > specify the ID of a prebuilt image for a label/provider (at least we
> > discussed that we wouldn't reject the idea if someone proposed a
> > suitable implementation). Just be aware that nodepool's use of
> > diskimage-builder to regularly rebuild images is intentional and
> > useful since it ensures images are updated with the latest packages,
> > kernels, warm caches and whatever else you specify in your elements
> > so reducing job runtimes as they spend less effort updating these
> > things on every run.
> >
> > [*] http://eavesdrop.openstack.org/irclogs/%23openstack-
> infra/%23openstack-infra.2017-06-09.log.html#t2017-06-09T15:32:27-2 >
> > --
> > Jeremy Stanley
>
> Actually, I think 458073[1] aims to fix this use case.  I haven't tried it
> myself but it adds support for using images which are not built and
> managed by
> nodepool.
>
> This is currently only on feature/zuulv3 branch.
>
> [1] https://review.openstack.org/#/c/458073/
>
>
That's right, support for cloud-images on feature/zuulv3 is now merged and
working.
I just set up a Nodepool using this new feature over the weekend.

This is a nodepool.yaml that can help you get going:

http://paste.openstack.org/show/612191/
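
In case the paste expires, the relevant parts look roughly like the sketch
below (hypothetical cloud/image/label names; exact option names may differ
between nodepool versions, and this is not the actual paste contents):

zookeeper-servers:
  - host: localhost

labels:
  - name: debian-node
    min-ready: 1

providers:
  - name: mycloud
    cloud: mycloud
    cloud-images:
      # must refer to an image that already exists in the cloud
      - name: my-debian-image
    pools:
      - name: main
        max-servers: 4
        labels:
          - name: debian-node
            cloud-image: my-debian-image
            min-ram: 8192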

HTH

