Re: [openstack-dev] [horizon][all] How to properly depend on Horizon

2016-04-03 Thread Akihiro Motoki
Hi Serg,

I proposed another approach to sahara-dashboard [1].
It is the approach used by the neutron sub-projects,
and it also lets us use the constrained environments used in OpenStack gate jobs.
I believe it works for all horizon-related projects.

I agree the simplest way would be to publish horizon to PyPI,
but OpenStack does not release server projects to PyPI yet.
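
For reference, the relevant part of [1] is roughly of this shape (a sketch of
the pattern, not a verbatim copy):

    [testenv]
    install_command =
        pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
    deps = -r{toxinidir}/requirements.txt
           -r{toxinidir}/test-requirements.txt
           http://tarballs.openstack.org/horizon/horizon-master.tar.gz

This pulls horizon from the branch tarball while still honoring the
upper-constraints file used by the gate jobs.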

Akihiro

[1] https://github.com/openstack/sahara-dashboard/blob/master/tox.ini#L8

2016-04-04 10:37 GMT+09:00 Serg Melikyan :
> Hi folks,
>
> while I was working on bug [0] with an incorrect dependency on horizon in
> stable/mitaka, I discovered at least three different ways in which people
> add such a dependency:
>
> 1. tarball dependency in tox.ini [1]
> 2. tarball dependency in test-requirements.txt [2]
> 3. git repo dependency in test-requirements.txt [3]
>
> Question: How to properly depend on horizon?
>
> P.S. Looks like update.py in openstack/requirements simply ignores #2
> and #3 and doesn't count them as extra dependencies.
>
> P.P.S. Why can't we publish horizon to pypi.openstack.org?
>
> Reference:
> [0] https://bugs.launchpad.net/bugs/1565577
> [1] 
> https://github.com/openstack/designate-dashboard/blob/dfa2fc6660467da2f1c53e12aeb7d7aab5d7531e/tox.ini#L20
> [2] 
> https://github.com/openstack/monasca-ui/blob/8861bede7e06d19b265d3425208b4865c480eb69/test-requirements.txt#L25
> [3] 
> https://github.com/openstack/manila-ui/blob/bf382083b281a77f77df9e0bd51376df49d53b2e/test-requirements.txt#L5
>
> --
> Serg Melikyan, Development Manager at Mirantis, Inc.
> http://mirantis.com | smelik...@mirantis.com


Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-04-03 Thread Andrew Beekhof
On Tue, Mar 29, 2016 at 6:02 AM, Dan Prince  wrote:

[...]

> That said, regardless of what we eventually do with Pacemaker or Puppet,
> it should be feasible for them both to co-exist.

The key thing to keep in mind if you're using Puppet to build a
cluster is that if you're doing something to a service that is or will
be managed by the cluster, then one of the following must hold:

- the service is not part of the cluster at that time, or
- the cluster is told to ignore the service temporarily, or
- the act of taking the service down or bringing it up is done via the
cluster tools

NOT doing one of those, i.e. telling the cluster "here's a service, make
sure it's available" and then screwing around with it anyway, puts the
cluster and Puppet into conflict (essentially an internal split-brain)
that rarely ends well.
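
With pcs, for example, that usually looks something like the following (the
resource name is illustrative):

    # take the resource out of Pacemaker's control while Puppet reconfigures it
    pcs resource unmanage openstack-keystone
    # ... make the changes ...
    pcs resource manage openstack-keystone

    # or stop/start it through the cluster rather than behind its back
    pcs resource disable openstack-keystone
    pcs resource enable openstack-keystone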



Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-04-03 Thread Steven Dake (stdake)


On 4/3/16, 6:38 PM, "Dan Prince"  wrote:

>
>
>
>On Mon, 2016-03-21 at 16:14 -0400, Zane Bitter wrote:
>> As of the Liberty release, Magnum now supports provisioning Mesos
>> clusters, so TripleO wouldn't have to maintain the installer for
>> that 
>> either. (The choice of Mesos is somewhat unfortunate in our case,
>> because Magnum's Kubernetes support is much more mature than its
>> Mesos 
>> support, and because the reasons for the decision are about to be or
>> have already been overtaken by events - I've heard reports that the
>> features that Kubernetes was missing to allow it to be used for
>> controller nodes, and maybe even compute nodes, are now available.
>> Nonetheless, I expect the level of Magnum support for Mesos is
>> likely 
>> workable.) This is where the TripleO strategy of using OpenStack to
>> deploy OpenStack can really pay dividends: because we use Ironic all
>> of 
>> our servers are accessible through the Nova API, so in theory we can
>> just run Magnum out of the box.
>> 
>> 
>> The chances of me personally having time to prototype this are
>> slim-to-zero, but I think this is a path worth investigating.
>
>Looking at Magnum more closely... At a high level I like the idea of
>Magnum. And interestingly, it could make the TripleO paving machine
>(instack-undercloud) a surprisingly good fit for someone wanting
>containers on baremetal.

Dan,

When I originally got involved in Magnum and submitted the first 100 or so
patches to the repository to kick off development, my thinking was to use
Magnum as an integration point between Kubernetes and Kolla (which at the
time had no Ansible code, just Kubernetes pod files) running an Atomic
distro.

It looked good on paper, but in practice all those layers and dependencies
introduced unnecessary complexity, making the system I had envisioned
unwieldy and more complex than the U.S. Space Shuttle.

When I finally took off my architecture-astronaut helmet, I went back to
basics and dismissed the idea of a Magnum and TripleO integration.

Remember, that was my idea - and I gave up on it - for a reason.

Magnum standalone, however, is still very viable, and I like where the core
reviewer team has taken Magnum since I stopped participating in that
project.

I keep telling people that underlays for OpenStack deployment are much more
complex than they look and are 5-10 years down the road.  Yet people keep
trying - good for them ;)

Regards
-steve



>
>We would need to add a few services, I think, to instack to supply the
>Magnum Heat templates with the required APIs. Specifically:
>
> -Barbican
> -Neutron L3 agent
> -Neutron LBaaS
> -Magnum (API and conductor)
>
>This isn't hard and would be a cool thing to have supported within
>instack (although I wouldn't enable these services by default I
>think... at least not for now).
>
>So again, at a high level things look good. Taking a closer look at how
>Magnum architects its network, though, things start to fall apart a bit I
>think. From what I can tell, with the Magnum network architecture and its
>use of the L3 agent and LBaaS, the undercloud itself would become much more
>important. Depending on the networking vendor, we would possibly need to
>make the Undercloud itself HA in order to ensure anything built on top
>was also HA. Contrast this with the fact that you can deploy an
>Overcloud today that will continue to function should the undercloud
>(momentarily) go down.
>
>Then there is the fact that Magnum would be calling Heat to create our
>baremetal servers (Magnum creates the OS::Nova::Server resources... not
>our own Heat templates). This is fine but we have a lot of value add in
>our own templates. We could actually write our own Heat templates and
>plug them into magnum.conf via k8s_atomic_template_path= or
>mesos_fedora_template_path= (doesn't exist yet but it could?). What
>this means for our workflow and how end users would configure
>underlying parameters would need to be discussed. Would we still have
>our own Heat templates that created OS::Magnum::Bay resources? Or would
>we use totally separate stacks to generate these things? The former
>causes a bit of a "Yo Dawg: I hear you like Heat, so I'm calling Heat
>to call Magnum to call Heat to spin up your cloud". Perhaps I'm off
>here but we'd still want to expose many of the service level parameters
>to end users via our workflows... and then use them to deploy
>containers into the bays so something like this would need to happen I
>think.
>
>Aside from creating the bays we likely wouldn't use the /containers API
>to spin up containers but would go directly at Mesos or Kubernetes
>instead. The Magnum API just isn't leaky enough yet for us to get
>access to all the container bits we'd need at the moment. Over time it
>could get there... but I don't think it is there yet.
>
>So all that to say maybe we should integrate it into instack-undercloud
>as a baremetal containers side project. This would also make it easier

Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-04-03 Thread Steven Dake (stdake)


From: Dan Prince
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Sunday, April 3, 2016 at 4:54 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat,
containers, and the future of TripleO

On Thu, 2016-03-31 at 08:22 +, Steven Dake (stdake) wrote:
Kevin,

I am not directly answering your question, but from the perspective of Kolla, 
our upgrades are super simple because we don't make a big mess in the first 
place to upgrade from.  In my experience, this is the number one problem with 
upgrades – everyone makes a mess of the first deployment, so upgrading from 
there is a minefield.  Better to walk straight through that minefield by not 
making a mess of the system in the first place using my favorite deployment 
tool: Kolla ;-)

I think any containers-based solution (Kolla or not) would be naturally "less
messy" than a baremetal deployment that isn't containerized. So I think TripleO
would achieve much of the same by switching to any containerized deployment
architecture, right? Is there something special about the Kolla/Ansible approach
that I'm missing here?

Dan,

Yes, there is something you're missing.

What I'd ask of you, and all of the TripleO developers in fact, is to spend 4 
hours and deploy Kolla on a single bare metal node (AIO).  Get a feel for the 
workflow.  Get a feel for the nearly-dependency-free deployment model.  Get a 
feel for the simplicity.

I know 4 hours is a lot to ask, but it won't be a waste of your time.

If you run into trouble, which can happen if the quick start guide (QSG) isn't
followed, come hit us up in #openstack-kolla on IRC.  Our community is very
inviting and helpful, and I can guarantee we will get you a working deployment
of Kolla.

Kolla may have gaps compared to TripleO, but for the moment I'd ask you to put
those aside, since that seems to be the main objection raised when a
TripleO+Kolla integration is proposed.  Any gap can be fixed easily in Kolla if
you tell us what you want, or if you do the work yourself.

Once you have a working AIO deployment, you will have the answer to your 
question and more...

Regards,
-steve



Kolla upgrades rock.  I have no doubt we will have some minor issues in the
field, but we have tested one-month-old master to master upgrades, with database
migrations of the services we deploy, and it takes approximately 10 minutes on
a 64-node (3 control, the rest compute) cluster without VM downtime or loss of
networking service to the virtual machines.  This is because our upgrades,
while not totally atomic across the cluster, are pretty darn close: they upgrade
the entire filesystem runtime in one atomic action per service while rolling
the upgrade across the controller nodes.

During the upgrade process there may be some transient failures of API service
calls, but they are typically retried by clients and no real harm is done.
Note we follow each project's best practices for handling upgrades, without the
mess of dealing with packaging or configuration on the filesystem and the
migration thereof.

Regards
-steve


From: "Fox, Kevin M"
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, March 30, 2016 at 9:12 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat,
containers, and the future of TripleO

The main issue is one of upgradability, not stability. We all know TripleO is
stable. TripleO can't do upgrades today. We're looking for ways to get there. So
"upgrading" to Ansible isn't necessary for sure, since folks deploying TripleO
today must assume they can't upgrade anyway.

Honestly, I have doubts that any config management system, from Puppet to Heat
software deployments, can be coerced into doing a cloud upgrade without downtime
and without a huge amount of workarounds. You really either need a
workflow-oriented system with global knowledge, like Ansible, or a container
orchestration system, like Kubernetes, to ensure you don't change too many
things at once and break things. You need to be able to run some old things and
some new, all at the same time. And in some cases different versions/configs of
the same service on different machines.

Thoughts on how this may be made to work with puppet/heat?

Thanks,
Kevin


From: Dan Prince
Sent: Monday, March 28, 2016 12:07:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] 

Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-04-03 Thread Dan Prince
On Wed, 2016-03-23 at 19:11 +, Fox, Kevin M wrote:
> If Heat convergence worked (is that a thing yet?), it could
> potentially be used instead of a COE like Kubernetes.
> 
> The thing Ansible buys us today would be upgradeability. Ansible is
> config management, but it's also a workflow-like tool. Heat's bad at
> workflow.
> 
> I think between Heat with convergence, Kolla containers, and some
> kind of Mistral workflow for upgrades, you could replace Ansible.

There is nothing I've seen that Kolla/Ansible does that we can't do
with Heat, Mistral, and Containers. We just need to decide how we want
to architect it and do it. We've got a proof of concept we can iterate
on with the Compute role... I'd like to see more details about
comparing that approach to using something like Kubernetes/Mesos in the
middle.

Understood that everyone has their opinions on tooling, but from my
perspective there seems to be quite a bit of divergence between Kolla
and what TripleO is striving for.

We are trying to build a deployment workflow that converges the UI and
CLI. Having a set of APIs around which to build these things is highly
beneficial. Think any UI, not just "a" UI which has hard-coded
parameters to work with a given implementation of the Kolla Ansible
scripts. I'm thinking of something like 'heat stack-validate', which
gives you a list of parameters, types, and descriptions up front based on
the roles and services that your nested stacks configure. This allows
the UI to dynamically discover parameters, and to evolve over time to work
with new scripts or plugins with little to no UI coding changes. With
Kolla Ansible I'm not sure how you'd get the equivalent of something
that can dynamically build out your parameters (including types and
descriptions) without rolling your own tool to manage these things. You
could probably build a tool to do this, but I don't see it yet. This is
just one example... we could go deeper, but I'm not seeing a "let's
replace the TripleO Heat templates with Kolla" drop-in.
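
To be concrete, the parameter metadata being discovered here is just what the
HOT templates already declare, e.g. (a trimmed sketch with illustrative
parameter names):

    heat_template_version: 2015-04-30

    parameters:
      ControllerCount:
        type: number
        description: Number of controller nodes to deploy
        default: 1
      NtpServer:
        type: string
        description: NTP server to configure on the deployed nodes
        default: ''

A validate call simply hands that list of parameters, types and descriptions
back to the UI.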

Our goal was never to "replace Ansible". The architecture of our
approach to deployment is just different. The architecture of our HA is
different. The services we support are different. How we configure
these services is different.

We do, however, want to switch to containers. So rather than argue about
tooling, I'd rather focus on things we have in common. We are
interested, at least for now I think, in consuming Kolla containers.
And the conversation about whether a COE like Kubernetes or Mesos might
be more beneficial to OpenStack deployment is actually worth
collaborating on. In short, I think the two projects could have much in
common with regard to the containers and the architecture that manages
them. Let's start there and leave the workflow tooling opinions to the
side for now.

Dan

> 
> Then there's the nova instance user thing again
> (https://review.openstack.org/93)... How do you get secrets to the instances
> securely... Kubernetes has a secure store we could use... OpenStack
> still hasn't really gotten this one figured out. :/ Barbican is a
> piece of that puzzle, but there's no really good way to hook it and nova
> together.
> 
> Thanks,
> Kevin
> 
> From: Michał Jastrzębski [inc...@gmail.com]
> Sent: Wednesday, March 23, 2016 8:42 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen
> of Heat, containers, and the future of TripleO
> 
> Hello,
> 
> So Ryan, I think you can make use of Heat all the way. The architecture of
> Kolla doesn't require you to use Ansible at all (in fact, we keep the
> Ansible code in a separate repo). The truth is that ansible-kolla is
> developed by most people and considered "the way to deploy Kolla" by
> most of us, but we make sure that we won't cut other deployment
> engines out of our potential.
> 
> So bottom line, Heat may very well replace the Ansible code if you can
> duplicate the logic we have in the playbooks in Heat templates. That may
> require a Docker resource with a pretty complete feature set of Docker
> itself (named volumes being the most important). Bootstrapping is usually
> done inside the container, so that would be possible too.
> 
> To be honest, for TripleO, doing just bare metal deployment would
> defeat the idea of TripleO. We have bare metal deployment tools already
> (Cobbler, which is used widely, and Bifrost, which uses Ansible the same
> as Kolla, so integration would be easier), and these come with a
> significantly smaller footprint than the whole TripleO infrastructure.
> The strength of TripleO comes from its rich configuration of OpenStack
> itself, and I think that should be portable to Kolla.
> 
> 
> 
> On 23 March 2016 at 06:54, Ryan Hallisey  wrote:
> > 
> > *Snip*
> > 
> > > 
> > > Indeed, this has literally none of the benefits of the ideal Heat
> > > deployment enumerated above save one: it may be entirely the
> > > wrong tool
> > > in every way 

[openstack-dev] [horizon][all] How to properly depend on Horizon

2016-04-03 Thread Serg Melikyan
Hi folks,

while I was working on bug [0] with an incorrect dependency on horizon in
stable/mitaka, I discovered at least three different ways in which people
add such a dependency:

1. tarball dependency in tox.ini [1]
2. tarball dependency in test-requirements.txt [2]
3. git repo dependency in test-requirements.txt [3]
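
For concreteness, the three variants look roughly like this (illustrative
forms, not exact copies of [1]-[3]):

    # 1. tarball dependency in the deps list of tox.ini
    http://tarballs.openstack.org/horizon/horizon-master.tar.gz

    # 2. the same tarball URL, but listed in test-requirements.txt
    http://tarballs.openstack.org/horizon/horizon-master.tar.gz

    # 3. git repo dependency in test-requirements.txt
    -e git+https://git.openstack.org/openstack/horizon.git@master#egg=horizon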

Question: How to properly depend on horizon?

P.S. Looks like update.py in openstack/requirements simply ignores #2
and #3 and doesn't count them as extra dependencies.

P.P.S. Why can't we publish horizon to pypi.openstack.org?

Reference:
[0] https://bugs.launchpad.net/bugs/1565577
[1] 
https://github.com/openstack/designate-dashboard/blob/dfa2fc6660467da2f1c53e12aeb7d7aab5d7531e/tox.ini#L20
[2] 
https://github.com/openstack/monasca-ui/blob/8861bede7e06d19b265d3425208b4865c480eb69/test-requirements.txt#L25
[3] 
https://github.com/openstack/manila-ui/blob/bf382083b281a77f77df9e0bd51376df49d53b2e/test-requirements.txt#L5

-- 
Serg Melikyan, Development Manager at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com



Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-04-03 Thread Dan Prince



On Mon, 2016-03-21 at 16:14 -0400, Zane Bitter wrote:
> As of the Liberty release, Magnum now supports provisioning Mesos 
> clusters, so TripleO wouldn't have to maintain the installer for
> that 
> either. (The choice of Mesos is somewhat unfortunate in our case, 
> because Magnum's Kubernetes support is much more mature than its
> Mesos 
> support, and because the reasons for the decision are about to be or 
> have already been overtaken by events - I've heard reports that the 
> features that Kubernetes was missing to allow it to be used for 
> controller nodes, and maybe even compute nodes, are now available. 
> Nonetheless, I expect the level of Magnum support for Mesos is
> likely 
> workable.) This is where the TripleO strategy of using OpenStack to 
> deploy OpenStack can really pay dividends: because we use Ironic all
> of 
> our servers are accessible through the Nova API, so in theory we can 
> just run Magnum out of the box.
> 
> 
> The chances of me personally having time to prototype this are 
> slim-to-zero, but I think this is a path worth investigating.

Looking at Magnum more closely... At a high level I like the idea of
Magnum. And interestingly, it could make the TripleO paving machine
(instack-undercloud) a surprisingly good fit for someone wanting
containers on baremetal.

We would need to add a few services, I think, to instack to supply the
Magnum Heat templates with the required APIs. Specifically:

 - Barbican
 - Neutron L3 agent
 - Neutron LBaaS
 - Magnum (API and conductor)

This isn't hard and would be a cool thing to have supported within
instack (although I wouldn't enable these services by default I
think... at least not for now).

So again, at a high level things look good. Taking a closer look at how
Magnum architects its network, though, things start to fall apart a bit I
think. From what I can tell, with the Magnum network architecture and its
use of the L3 agent and LBaaS, the undercloud itself would become much more
important. Depending on the networking vendor, we would possibly need to
make the Undercloud itself HA in order to ensure anything built on top
was also HA. Contrast this with the fact that you can deploy an
Overcloud today that will continue to function should the undercloud
(momentarily) go down.

Then there is the fact that Magnum would be calling Heat to create our
baremetal servers (Magnum creates the OS::Nova::Server resources... not
our own Heat templates). This is fine but we have a lot of value add in
our own templates. We could actually write our own Heat templates and
plug them into magnum.conf via k8s_atomic_template_path= or
mesos_fedora_template_path= (doesn't exist yet but it could?). What
this means for our workflow and how end users would configure
underlying parameters would need to be discussed. Would we still have
our own Heat templates that created OS::Magnum::Bay resources? Or would
we use totally separate stacks to generate these things? The former
causes a bit of a "Yo Dawg: I hear you like Heat, so I'm calling Heat
to call Magnum to call Heat to spin up your cloud". Perhaps I'm off
here but we'd still want to expose many of the service level parameters
to end users via our workflows... and then use them to deploy
containers into the bays so something like this would need to happen I
think.
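
Roughly, plugging our own templates in would just be a magnum.conf override
along these lines (the path is illustrative, and I'm omitting the option
group, which may differ):

    # magnum.conf - sketch only; option name as mentioned above
    k8s_atomic_template_path = /usr/share/openstack-tripleo-heat-templates/magnum/kubecluster.yaml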

Aside from creating the bays we likely wouldn't use the /containers API
to spin up containers but would go directly at Mesos or Kubernetes
instead. The Magnum API just isn't leaky enough yet for us to get
access to all the container bits we'd need at the moment. Over time it
could get there... but I don't think it is there yet.

So all that to say maybe we should integrate it into instack-undercloud 
as a baremetal containers side project. This would also make it easier
to develop and evolve Magnum baremetal capabilities if we really want
to pursue them. But I think we'd have an easier go of implementing our
containers architecture (with all the network isolation, HA
architecture, and underpinnings we desire) by managing our own
deployment of these things in the immediate future.

Dan



Re: [openstack-dev] [TripleO] tripleo-quickstart import

2016-04-03 Thread Steve Baker

On 30/03/16 13:37, John Trowbridge wrote:


On 03/29/2016 08:30 PM, John Trowbridge wrote:

Hola,

With the approval of the tripleo-quickstart spec [1], it is time to
actually start doing the work. The first work item is moving it to the
openstack git. The spec talks about moving it as is, and this would
still be fine.

However, there are roles in the tripleo-quickstart tree that are not
directly related to the instack-virt-setup replacement aspect that is
approved in the spec (image building, deployment). I think these should
be split into their own ansible-role-* repos, so that they can be
consumed using ansible-galaxy. It would actually even make sense to do
that with the libvirt role responsible for setting up the virtual
environment. tripleo-quickstart would then be just an integration
layer that makes it easy to consume these roles for virtual deployments.

This way, if someone wanted to make a different role for, say, OVB
deployments, it would be easy to use the other roles on top of a
differently provisioned undercloud.

I'm maintaining my own OVB playbooks and have been pondering how to make
them more broadly consumable, so I'm +1 on a role structure which allows
this.
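
Consuming the split-out roles would then just be an ansible-galaxy
requirements file along these lines (the repo and role names below are
hypothetical):

    # requirements.yml
    - src: https://git.openstack.org/openstack/ansible-role-tripleo-libvirt-setup
      scm: git
      version: master
      name: libvirt-setup
    - src: https://git.openstack.org/openstack/ansible-role-tripleo-image-build
      scm: git
      version: master
      name: image-build

    # installed with: ansible-galaxy install -r requirements.yml -p roles/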

Similarly, if we wanted to adopt ansible to drive tripleo-ci, it would
be very easy to only consume the roles that make sense for the tripleo
cloud.

So the first question is, should we split the roles out of
tripleo-quickstart?

If so, should we do that before importing it to the openstack git?

Also, should the split out roles also be on the openstack git?

Maybe this all deserves its own spec and we tackle it after completing
all of the work for the first spec. I put this on the meeting agenda for
today, but we didn't get to it.

- trown


whoops
[1] https://github.com/openstack/tripleo-specs/blob/master/specs/mitaka/tripleo-quickstart.rst



[openstack-dev] [horizon][mitaka][all] Update dependency to horizon for stable/mitaka

2016-04-03 Thread Serg Melikyan
Hi folks,

I've noticed that several projects whose dashboards depend on horizon
reference an incorrect version of it [0]:

* designate-dashboard
* mistral-dashboard
* murano-dashboard (fix proposed)
* manila-ui
* ironic-ui
* searchlight-ui
* magnum-ui
* monasca-ui

All these projects, and some others for sure, still depend on horizon
from master in their stable/mitaka branches. This may lead to a broken
stable/mitaka once horizon merges something incompatible into its
master branch.

Please update your dependency on horizon accordingly, ASAP.
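
In practice that means pointing whatever mechanism you use (tox.ini tarball,
test-requirements.txt entry) at the stable branch instead of master, roughly
(URL pattern illustrative):

    # tox.ini deps / test-requirements.txt
    http://tarballs.openstack.org/horizon/horizon-stable-mitaka.tar.gz

    # or, as a git dependency:
    -e git+https://git.openstack.org/openstack/horizon.git@stable/mitaka#egg=horizon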

Reference:
[0] https://bugs.launchpad.net/bugs/1565577

-- 
Serg Melikyan, Development Manager at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com



Re: [openstack-dev] [tempest] Implementing tempest test for Keystone federation functional tests

2016-04-03 Thread Jamie Lennox
On 2 April 2016 at 09:21, Rodrigo Duarte  wrote:

>
>
> On Thu, Mar 31, 2016 at 1:11 PM, Matthew Treinish 
> wrote:
>
>> On Thu, Mar 31, 2016 at 11:38:55AM -0400, Minying Lu wrote:
>> > Hi all,
>> >
>> > I'm working on resource federation at the Massachusetts Open Cloud. We
>> > want to implement functional tests for K2K federation, which requires
>> > authentication with both a local keystone and a remote keystone (in a
>> > different cloud installation). It also requires a K2K/SAML assertion
>> > exchange with the local and remote keystones. These functions are not
>> > implemented in the current tempest.lib.service library, so I'm adding
>> > code to the service library.
>> >
>> > My question is: is it possible to adapt the keystoneauth python clients?
>> > Or do you prefer implementing it with raw HTTP requests?
>>
>> So tempest's clients have to be completely independent. That's part of
>> tempest's
>> design points about testing APIs, not client implementations. If you need
>> to add
>> additional functionality to the tempest clients that's fine, but pulling
>> in
>> keystoneauth isn't really an option.
>>
>
> ++
>
>
>>
>> >
>> > And since this test requires a lot of environment set up including: 2
>> > separate cloud installations, shibboleth, creating mapping and
>> protocols on
>> > remote cloud, etc. Would it be within the scope of tempest's mission?
>>
>> From the tempest perspective it expects the environment to be setup and
>> already
>> exist by the time you run the test. If it's a valid use of the API, which
>> I'd
>> say this is and an important one too, then I feel it's fair game to have
>> tests
>> for this live in tempest. We'll just have to make the configuration
>> options
>> around how tempest will do this very explicit to make sure the necessary
>> environment exists before the tests are executed.
>>
>
> Another option is to add those tests to keystone itself (if you are not
> including tests that triggers other components APIs). See
> https://blueprints.launchpad.net/keystone/+spec/keystone-tempest-plugin-tests
>
>

Again though, the problem is not where the tests live but where we run
them. To practically run these tests we need to either add K2K testing
support to devstack (not sure this is appropriate) or come up with a new
test environment that deploys two keystones plus federation support that we
can CI against in the gate. This is doable, but I think it is something we
need support with from infra before worrying about tempest.



>
>> The fly in the ointment for this case will be CI though. For tests to
>> live in
>> tempest they need to be verified by a CI system before they can land. So
>> to
>> land the additional testing in tempest you'll have to also ensure there
>> is a
>> CI job setup in infra to configure the necessary environment. While I
>> think
>> this is a good thing to have in the long run, it's not necessarily a small
>> undertaking.
>>
>
>> -Matt Treinish
>
>
> --
> Rodrigo Duarte Sousa
> Senior Quality Engineer @ Red Hat
> MSc in Computer Science
> http://rodrigods.com


Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

2016-04-03 Thread Dan Prince
On Thu, 2016-03-31 at 08:22 +, Steven Dake (stdake) wrote:
> Kevin,
> 
> I am not directly answering your question, but from the perspective
> of Kolla, our upgrades are super simple because we don't make a big
> mess in the first place to upgrade from.  In my experience, this is
> the number one problem with upgrades – everyone makes a mess of the
> first deployment, so upgrading from there is a minefield.  Better to
> walk straight through that minefield by not making a mess of the
> system in the first place using my favorite deployment tool: Kolla ;-
> )
I think any containers-based solution (Kolla or not) would be naturally
"less messy" than a baremetal deployment that isn't containerized. So I
think TripleO would achieve much of the same by switching to any
containerized deployment architecture, right? Is there something special
about the Kolla/Ansible approach that I'm missing here?
> 
> Kolla upgrades rock.  I have no doubt we will have some minor issues in
> the field, but we have tested one-month-old master to master upgrades,
> with database migrations of the services we deploy, and it takes
> approximately 10 minutes on a 64-node (3 control, the rest compute)
> cluster without VM downtime or loss of networking service to the virtual
> machines.  This is because our upgrades, while not totally atomic across
> the cluster, are pretty darn close: they upgrade the entire filesystem
> runtime in one atomic action per service while rolling the upgrade across
> the controller nodes.
> 
> During the upgrade process there may be some transient failures of API
> service calls, but they are typically retried by clients and no real harm
> is done.  Note we follow each project's best practices for handling
> upgrades, without the mess of dealing with packaging or configuration on
> the filesystem and the migration thereof.
> 
> Regards
> -steve
> 
> From: "Fox, Kevin M"
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> Date: Wednesday, March 30, 2016 at 9:12 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of
> Heat, containers, and the future of TripleO
> 
> The main issue is one of upgradability, not stability. We all know TripleO
> is stable. TripleO can't do upgrades today. We're looking for ways to get
> there. So "upgrading" to Ansible isn't necessary for sure, since folks
> deploying TripleO today must assume they can't upgrade anyway.
> 
> Honestly, I have doubts that any config management system, from Puppet to
> Heat software deployments, can be coerced into doing a cloud upgrade
> without downtime and without a huge amount of workarounds. You really
> either need a workflow-oriented system with global knowledge, like
> Ansible, or a container orchestration system, like Kubernetes, to ensure
> you don't change too many things at once and break things. You need to be
> able to run some old things and some new, all at the same time. And in
> some cases different versions/configs of the same service on different
> machines.
> 
> Thoughts on how this may be made to work with puppet/heat?
> 
> Thanks,
> Kevin
> 
> From: Dan Prince
> Sent: Monday, March 28, 2016 12:07:22 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of
> Heat, containers, and the future of TripleO
> 
> On Wed, 2016-03-23 at 07:54 -0400, Ryan Hallisey wrote:
> > *Snip*
> > 
> > > Indeed, this has literally none of the benefits of the ideal Heat
> > > deployment enumerated above save one: it may be entirely the wrong
> > > tool in every way for the job it's being asked to do, but at least
> > > it is still well-integrated with the rest of the infrastructure.
> > > 
> > > Now, at the Mitaka summit we discussed the idea of a 'split stack',
> > > where we have one stack for the infrastructure and a separate one
> > > for the software deployments, so that there is no longer any tight
> > > integration between infrastructure and software. Although it makes
> > > me a bit sad in some ways, I can certainly appreciate the merits of
> > > the idea as well. However, from the argument above we can deduce
> > > that if this is the *only* thing we do then we will end up in the
> > > very worst of all possible worlds: the wrong tool for the job,
> > > poorly integrated. Every single advantage of using Heat to deploy
> > > software will have evaporated, leaving only disadvantages.
> > I think Heat is a very powerful tool having done the container

Re: [openstack-dev] [murano][release] missing build artifacts

2016-04-03 Thread Serg Melikyan
Hi Doug,

I +1'd your commit removing build artifacts for the
openstack/murano-apps repo; we indeed don't have build artifacts for
murano-apps.

>It would be good to understand what your intent is for builds. Can you follow 
>up here on this thread with some details?

openstack/murano-apps contains a set of working applications for Murano,
intended to be used as examples as well as production-ready apps.
Using stable branches (stable/liberty, stable/mitaka) we separate
applications working on the corresponding version of OpenStack.
Hopefully, at some point, our build artifact is going to be each
application published to apps.openstack.org.

I believe we don't plan to have artifacts in this repo that are part
of the OpenStack release itself. Are there other steps that we need to
take regarding this?

On Fri, Apr 1, 2016 at 12:16 PM, Doug Hellmann  wrote:
> Murano team,
>
> We noticed in our audit of the links on
> http://releases.openstack.org/mitaka/index.html that the links to the
> build artifacts for murano-apps point to missing files. The murano-apps
> repository doesn't seem to have any real build jobs configured in
> openstack-infra/project-config/zuul/layout.yaml, so it's not clear how
> tagging is producing a release for you.
>
> For now, we are disabling links to the artifacts for that repo via
> https://review.openstack.org/300457 but we're also planning to remove
> murano-apps from the official Mitaka page since there don't appear to be
> any actual related deliverables (https://review.openstack.org/300473).
>
> It would be good to understand what your intent is for builds. Can
> you follow up here on this thread with some details?
>
> Thanks,
> Doug



-- 
Serg Melikyan, Development Manager at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com



[openstack-dev] [OpenStack-Ansible] Mitaka release

2016-04-03 Thread Jesse Pretorius
The OpenStack-Ansible project is pleased to announce the availability of
its Mitaka release, v13.0.0, which brings the following features:

Increased modularity:
The Ansible roles we provide deploy OpenStack services directly from a git
source into python venvs, and deploy their key infrastructure dependencies.
For Mitaka we have broken the roles out into their own repositories in
order to allow deployers to make use of them with their own Ansible
playbooks. This further increases the options available to deployers,
giving even more choice for how to use the tools we provide to suit the
needs of the target environment.

Improved Usability:
We have made great strides in improving documentation for both developers
and deployers in order to improve the usability of OpenStack-Ansible
overall.

Additional services:
OpenStack-Ansible can now deploy Neutron LBaaSv2.
OpenStack-Ansible can now deploy Neutron FWaaS.
OpenStack-Ansible has new experimental roles for the deployment of
Ironic, Designate, Zaqar, Magnum and Barbican. Each of these roles is still
in the early stages of development and in a varying state of functional
completion. Anyone interested in joining the development process
is welcome to contact us through the ML or on IRC in
#openstack-ansible.

Increased test coverage:
While we still have full integration testing on every commit to ensure that
the deployment of OpenStack by OpenStack-Ansible's playbooks really works,
we increased test coverage for the dynamic inventory and individual roles
in order to improve code quality, reduce regressions and cover more
difficult test cases (e.g. the major version upgrade of MariaDB).

Some of the work intended for inclusion in the Mitaka release unfortunately
missed the deadline, so we expect that it will be completed and backported
to Mitaka early in the Newton cycle. This work includes:
 - The inclusion of Ironic in the integrated build.
 - Support for Nuage as a networking provider for Neutron.
 - Support for OVS as an ML2 provider for Neutron.

Generally speaking, it has been exciting to see how our community has grown
in the Mitaka cycle. The activity in the IRC channel has shown that we now
have even more organisations making use of OSA to deploy both private and
public OpenStack clouds.

Looking forward into the Newton cycle we'll be continuing work on Multi-OS
enablement, adding support for Ubuntu 16.04 LTS, taking advantage of
Ansible 2.x, revisiting the dynamic inventory with a view on how
environments are deployed across regions and with Cells v2, and of course
adding support for Liberty->Mitaka upgrades.

-- 
Jesse Pretorius
IRC: odyssey4me


[openstack-dev] [fuel]

2016-04-03 Thread Kyrylo Galanov
Hi team,

An issue with rsync was recently reported [0].
We have a symlink 'liberty-9.0/modules/osnailyfacter/modular/master' pointing
to the examples directory on the Fuel master node. However, rsync does not
copy the symlink to slave nodes. While this is easy to fix (by adding the -l
flag), the issue raises a question: do we need the symlink in production at
all? Can we just remove it from the package?
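
For reference, -l is the option that makes rsync copy symlinks as symlinks
instead of skipping them, e.g. (paths illustrative):

    rsync -r -v -l /etc/puppet/liberty-9.0/ node-1:/etc/puppet/liberty-9.0/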


[0] https://bugs.launchpad.net/fuel/+bug/1538624

Best regards,
Kyrylo