Re: [openstack-dev] [all] Cross-Project track topic proposals

2015-09-24 Thread Flavio Percoco

On 23/09/15 18:07 -0700, Jim Rollenhagen wrote:

On Wed, Sep 23, 2015 at 10:45:56AM +0200, Flavio Percoco wrote:

Greetings,

The community is in the process of collecting topics for the
cross-project track that we'll have at the Mitaka summit.

The good ol' OSDREG has been set up[0] to help collect these topics
and we'd like to encourage the community to propose sessions there.


As a note, ODSREG appears to require a valid OAuth login, even for read
access. Though all developers should have a valid login, this isn't
awesome in the spirit of openness. Is this intentional / should that be
fixed?


I keep calling it "osdreg" (facepalm).

It would be great to have odsreg not require an OAuth login for
reading. However, if I recall correctly, it has always been like this
(my memory isn't clear here). If it doesn't take much time to fix,
that would be great. Otherwise, considering that using odsreg was a
late decision and we are all stuck with RCs, I'd recommend changing
this for the next cycle.

Thanks for noticing, Jim.
Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?

2015-09-24 Thread Sylvain Bauza



Le 24/09/2015 09:04, Duncan Thomas a écrit :

Hi

I thought I was late on this thread, but looking at the time stamps, 
it is just something that escalated very quickly. I am honestly 
surprised a cross-project interaction option went from 'we don't seem 
to understand this' to 'deprecation merged' in 4 hours, with only a 12 
hour discussion on the mailing list, right at the end of a cycle when 
we're supposed to be stabilising features.




So, I agree it was maybe a bit too quick, hence the revert. That said, 
Nova master is now Mitaka, which means that the deprecation change was 
provided for the next cycle, not the one currently stabilising.


Anyway, I'm really all up for discussing why Cinder needs to know the 
Nova AZs.


I proposed a session at the Tokyo summit for a discussion of Cinder 
AZs, since there was clear confusion about what they are intended for 
and how they should be configured.


Cool, count me in from the Nova standpoint.

Since then I've reached out to, and gotten good feedback from, a number 
of operators. There are two distinct configurations for AZ behaviour 
in cinder, and both sort-of worked until very recently.


1) No AZs in cinder
This is the config where there is a single 'blob' of storage (most of 
the operators who responded so far are using Ceph, though that isn't 
required). The storage takes care of availability concerns, and any AZ 
info from nova should just be ignored.


2) Cinder AZs map to Nova AZs
In this case, some combination of storage / networking / etc couples 
storage to nova AZs. It may be that an AZ is used as a unit of 
scaling, or it could be a real storage failure domain. Either way, 
there are a number of operators who have this configuration and want 
to keep it. Storage can certainly have a failure domain, and limiting 
the scalability problem of storage to a single compute AZ can have 
definite advantages in failure scenarios. These people do not want 
cross-az attach.




Ahem, Nova AZs are not failure domains, at least not in the current 
implementation, in the sense that many people understand a failure 
domain, i.e. a physical unit of machines (a bay, a room, a floor, a 
datacenter).
All the AZs in Nova share the same control plane, with the same message 
queue and database, which means that one failure can propagate to the 
other AZs.


To be honest, there is one very specific use case where AZs *are* failure 
domains: when cells exactly match AZs (i.e. one AZ grouping all the 
hosts behind one cell). That's the very specific use case that Sam is 
mentioning in his email, and I certainly understand we need to keep that.


What AZs are in Nova is explained pretty well in a fairly old blog post: 
http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/


We also added a few comments in our developer doc here 
http://docs.openstack.org/developer/nova/aggregates.html#availability-zones-azs


tl;dr: AZs are aggregate metadata that make those aggregates of compute 
nodes visible to the users. Nothing more than that, no magic sauce. 
It's just a logical abstraction that can map to your physical 
deployment, but, like I said, one that still shares the same bus and DB.
Of course, you could still provide distinct networks between AZs, but 
that just gives you L2 isolation, not a real failure domain in a 
Business Continuity Plan sense.
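A much-simplified sketch of that "AZs are just aggregate metadata" lookup, with illustrative names only (this is not Nova's actual code, and the default AZ name is just the usual default):

```python
# An AZ in Nova is derived from aggregate metadata; a host with no
# AZ-carrying aggregate falls back to a default zone.
DEFAULT_AZ = 'nova'

aggregates = [
    {'hosts': {'compute-01', 'compute-02'},
     'metadata': {'availability_zone': 'az-east'}},
    {'hosts': {'compute-03'}, 'metadata': {}},  # plain aggregate, no AZ
]

def availability_zone(host):
    """Return the AZ of a host, per its aggregates' metadata."""
    for agg in aggregates:
        az = agg['metadata'].get('availability_zone')
        if host in agg['hosts'] and az:
            return az
    return DEFAULT_AZ
```

Nothing here knows about failure domains: the AZ is purely a label derived from aggregate membership.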


What puzzles me is how Cinder manages a datacenter level of 
isolation given there is no cells concept AFAIK. I assume that 
cinder-volumes belong to a specific datacenter, but how is its 
control plane managed? I can certainly understand the need for affinity 
placement between physical units, but I'm missing that piece, and 
consequently I wonder why Nova needs to provide AZs to Cinder in the 
general case.




My hope for the summit session was to agree on these two configurations, 
discuss any scenarios not covered by them, and nail down the changes we 
need to get them to work properly. There's definitely been interest and 
activity in the operator community in making nova and cinder AZs 
interact, and every desired interaction I've gotten details about so 
far matches one of the above models.




I'm all with you about providing a way for users to get volume affinity 
in Nova. That's a long story I'm trying to consider, and we are 
constantly trying to improve the nova scheduler interfaces so that other 
projects can provide resources to the nova scheduler for decision 
making. I just want to consider whether AZs are the best concept for 
that, or whether we should do things another way (again, because AZs 
are not what people expect).


Again, count me in for the Cinder session; just let me know when the 
session is planned so I can attend it.


-Sylvain





Re: [openstack-dev] [fuel] PTL & Component Leads elections

2015-09-24 Thread Vladimir Kuklin
Dmitry

Thank you for the clarification, but unfortunately my questions remain
unanswered. It seems I did not phrase them correctly.

1) For each of the positions, which set of git repositories should I run
this command against? E.g., contributors to which stackforge/fuel-*
projects are electing the PTL or CLs?
2) Who votes for component leads? Mike's email says these are core
reviewers. Our previous IRC meeting mentioned all the contributors to
particular components. The documentation link you sent mentions all
contributors to Fuel projects. Which should I trust? What is the final
version? Is it fine that a documentation contributor is eligible to
nominate himself and vote for the Library Component Lead?

Until there is a clear and sealed answer to these questions, we do not
have a list of people who can vote and who can nominate. Let's get this
clear at least before the PTL elections start.

On Thu, Sep 24, 2015 at 4:49 AM, Dmitry Borodaenko  wrote:

> Vladimir,
>
> Sergey's initial email from this thread has a link to the Fuel elections
> wiki page that describes the exact procedure to determine the electorate
> and the candidates [0]:
>
> The electorate for a given PTL and Component Leads election are the
> Foundation individual members that are also committers for one of
> the Fuel team's repositories over the last year timeframe (September
> 18, 2014 06:00 UTC to September 18, 2015 05:59 UTC).
>
> ...
>
> Any member of an election electorate can propose their candidacy for
> the same election.
>
> [0] https://wiki.openstack.org/wiki/Fuel/Elections_Fall_2015#Electorate
>
> If you follow more links from that page, you will find the Governance
> page [1] and from there the Election Officiating Guidelines [2] that
> provide a specific shell one-liner to generate that list:
>
> git log --pretty=%aE --since '1 year ago' | sort -u
>
> [1] https://wiki.openstack.org/wiki/Governance
> [2] https://wiki.openstack.org/wiki/Election_Officiating_Guidelines
>
> As I have specified in the proposed Team Structure policy document [3],
> this is the same process that is used by other OpenStack projects.
>
> [3] https://review.openstack.org/225376
>
> Having a different release schedule is not a sufficient reason for Fuel
> to reinvent the wheel, for example OpenStack Infrastructure project
> doesn't even have a release schedule for many of its deliverables, and
> still follows the same elections schedule as the rest of OpenStack:
>
> [4] http://governance.openstack.org/reference/projects/infrastructure.html
>
> Lets keep things simple.
>
> --
> Dmitry Borodaenko
>
>
> On Wed, Sep 23, 2015 at 01:27:07PM +0300, Vladimir Kuklin wrote:
> > Dmitry, Mike
> >
> > Thank you for the list of usable links.
> >
> > But still - we do not have a clearly defined procedure for determining
> > who is eligible to nominate and vote for PTL and Component Leads.
> > Remember that Fuel still has a different release cycle, and the
> > Kilo+Liberty contributors list is not exactly the same as the
> > "365 days" contributors list.
> >
> > Can we finally come up with the list of people eligible to nominate and
> > vote?
> >
> > On Sun, Sep 20, 2015 at 2:37 AM, Mike Scherbakov <
> mscherba...@mirantis.com>
> > wrote:
> >
> > > Let's move on.
> > > I started work on MAINTAINERS files, proposed two patches:
> > > https://review.openstack.org/#/c/225457/1
> > > https://review.openstack.org/#/c/225458/1
> > >
> > > These can be used as templates for other repos / folders.
> > >
> > > Thanks,
> > >
> > > On Fri, Sep 18, 2015 at 7:45 PM Davanum Srinivas 
> > > wrote:
> > >
> > >> +1 Dmitry
> > >>
> > >> -- Dims
> > >>
> > >> On Fri, Sep 18, 2015 at 9:07 PM, Dmitry Borodaenko <
> > >> dborodae...@mirantis.com> wrote:
> > >>
> > >>> Dims,
> > >>>
> > >>> Thanks for the reminder!
> > >>>
> > >>> I've summarized the uncontroversial parts of that thread in a policy
> > >>> proposal as per your suggestion [0], please review and comment. I've
> > >>> renamed SMEs to maintainers since Mike has agreed with that part,
> and I
> > >>> omitted code review SLAs from the policy since that's the part that
> has
> > >>> generated the most discussion.
> > >>>
> > >>> [0] https://review.openstack.org/225376
> > >>>
> > >>> I don't think we should postpone the election: the PTL election
> follows
> > >>> the same rules as OpenStack so we don't need a Fuel-specific policy
> for
> > >>> that, and the component leads election doesn't start until October 9,
> > >>> which gives us 3 weeks to confirm consensus on that aspect of the
> > >>> policy.
> > >>>
> > >>> --
> > >>> Dmitry Borodaenko
> > >>>
> > >>>
> > >>> On Fri, Sep 18, 2015 at 07:30:39AM -0400, Davanum Srinivas wrote:
> > >>> > Sergey,
> > >>> >
> > >>> > Please see [1]. Did we codify some of these roles and
> responsibilities
> > >>> as a
> > >>> > community in a spec? There was also a request to use terminology
> like
> > >>> say
> > >>> > MAINTAINERS in that 

[openstack-dev] [Nova] [Trove] Liberty RC1 available

2015-09-24 Thread Thierry Carrez
Hello everyone,

Nova and Trove just produced their first release candidate for the end
of the Liberty cycle. The RC1 tarballs, as well as a list of last-minute
features and fixed bugs since liberty-1 are available at:

https://launchpad.net/nova/liberty/liberty-rc1
https://launchpad.net/trove/liberty/liberty-rc1

Unless release-critical issues are found that warrant a release
candidate respin, these RC1s will be formally released as final versions
on October 15. You are therefore strongly encouraged to test and
validate these tarballs!

Alternatively, you can directly test the stable/liberty release branch at:

http://git.openstack.org/cgit/openstack/nova/log/?h=stable/liberty
http://git.openstack.org/cgit/openstack/trove/log/?h=stable/liberty

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/nova/+filebug
or
https://bugs.launchpad.net/trove/+filebug

and tag it *liberty-rc-potential* to bring it to the release crew's
attention.

Note that the "master" branches of Nova and Trove are now officially
open for Mitaka development, so feature freeze restrictions no longer
apply there.

Regards,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [puppet] use zuul-cloner when running rspec

2015-09-24 Thread Sofer Athlan-Guyot
Emilien Macchi  writes:

> Background
> ==
>
> Current rspec tests are tested with modules mentioned in .fixtures.yaml
> file of each module.
>
> * the file is not consistent across all modules
> * it hardcodes module names & versions

IMHO, this alone justifies it.

> * this way does not allow using the "Depends-On" feature, which would
> allow testing cross-module patches
>
> Proposal
> 
>
> * Like we do in beaker & integration jobs, use zuul-cloner to clone
> modules in our CI jobs.
> * Use r10k to prepare fixtures modules.
> * Use Puppetfile hosted by openstack/puppet-openstack-integration
>
> In that way:
> * we will have modules name + versions testing consistency across all
> modules
> * the same Puppetfile would be used by unit/beaker/integration testing.
> * the patch that pass tests on your laptop would pass tests in upstream CI
> * if you don't have zuul-cloner on your laptop, don't worry it will use
> git clone. Though you won't have Depends-On feature working on your
> laptop (technically not possible).
> * Though your patch will support Depends-On in OpenStack Infra for unit
> tests. If you submit a patch in puppet-openstacklib that drop something
> wrong, you can send a patch in puppet-nova that will test it, and unit
> tests will fail.

+1
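The zuul-cloner-or-git-clone fallback described in the proposal could look roughly like this sketch (the real job scripts live in puppet-openstack-integration; the URLs, repo naming, and helper name here are assumptions for illustration):

```python
# Build the clone command for a puppet module, preferring zuul-cloner
# when it is on PATH (CI / Depends-On support) and falling back to a
# plain git clone (laptop use, no Depends-On).
import shutil

def clone_command(module, dest, have_zuul_cloner=None):
    """Return the command line as a list; does not execute anything."""
    if have_zuul_cloner is None:
        have_zuul_cloner = shutil.which('zuul-cloner') is not None
    repo = 'openstack/puppet-%s' % module
    if have_zuul_cloner:
        return ['zuul-cloner', '--workspace', dest,
                'https://git.openstack.org', repo]
    return ['git', 'clone',
            'https://git.openstack.org/' + repo,
            '%s/puppet-%s' % (dest, module)]
```

Either branch leaves the module checked out under the same destination layout, so rspec fixtures preparation does not need to care which path was taken.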

>
> Drawbacks
> =
> * cloning from .fixtures.yaml takes ~10 seconds
> * using r10k + zuul-cloner takes ~50 seconds (more modules to clone).
>
> I think 40 seconds is acceptable given the benefit.

Especially if one is using this workflow: 
 1. rake spec_prep and then:
- rake spec_standalone;
- rake spec_standalone;
- rake spec_standalone;
- ...

So it's a one-time 40 seconds.

>
> Next steps
> ==
>
> * PoC in puppet-nova: https://review.openstack.org/#/c/226830/
> * Patch openstack/puppet-modulesync-config to be consistent across all
> our modules.
>
> Bonus
> =
> we might need (asap) a canary job for puppet-openstack-integration
> repository, that would run tests on a puppet-* module (since we're using
> install_modules.sh & Puppetfile files in puppet-* modules).
> Nothing has been done yet for this work.
>
>
> Thoughts?

-- 
Sofer Athlan-Guyot



Re: [openstack-dev] [murano] Fix order of arguments in assertEqual

2015-09-24 Thread Andrey Kurilin
Hi everyone!

I agree that the wrong order of arguments is misleading when debugging
errors, BUT how can we prevent regressions? IMO, it is not a good idea
to make patches like https://review.openstack.org/#/c/64415/ in each
release (without a check in CI, such patches are redundant).


PS: This question applies not only to murano.
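To make the confusion concrete, here is a tiny hand-rolled stand-in for the testtools failure-message convention (illustration only, not testtools' actual code):

```python
def assert_equal(expected, observed):
    """Mimic testtools' convention: the first argument is the reference."""
    if expected != observed:
        raise AssertionError(
            "reference = %r\nactual    = %r" % (expected, observed))

def failure_message(expected, observed):
    """Capture the failure message for a mismatching pair."""
    try:
        assert_equal(expected, observed)
    except AssertionError as exc:
        return str(exc)

# Correct order: the message reports 5 as the reference (expected) value.
print(failure_message(5, 2 + 2))
# Swapped order: the message now claims the reference was 4, i.e. the
# value the code actually produced -- exactly the misleading output
# this thread is about.
print(failure_message(2 + 2, 5))
```

A CI check (e.g. a project-specific hacking rule) is what would prevent the swapped form from creeping back in, which is the regression-prevention question raised above.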

On Thu, Sep 24, 2015 at 11:48 AM, Kekane, Abhishek <
abhishek.kek...@nttdata.com> wrote:

> Hi,
>
> There is a bug for this, you can add murano projects to this bug.
>
> https://bugs.launchpad.net/heat/+bug/1259292
>
> Thanks,
>
> Abhishek Kekane
>
> -Original Message-
> From: Bulat Gaifullin [mailto:bgaiful...@mirantis.com]
> Sent: 24 September 2015 13:29
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [murano] Fix order of arguments in assertEqual
>
> +1
>
> > On 24 Sep 2015, at 10:45, Tetiana Lashchova 
> wrote:
> >
> > Hi folks!
> >
> > Some tests in murano code use incorrect order of arguments in
> assertEqual.
> > The correct order expected by the testtools is
> >
> > def assertEqual(self, expected, observed, message=''):
> > """Assert that 'expected' is equal to 'observed'.
> >
> > :param expected: The expected value.
> > :param observed: The observed value.
> > :param message: An optional message to include in the error.
> > """
> >
> > Error message has the following format:
> >
> > raise mismatch_error
> > testtools.matchers._impl.MismatchError: !=:
> > reference = 
> > actual= 
> >
> > Use of arguments in incorrect order could make debug output very
> confusing.
> > Let's fix it to make debugging easier.
> >
> > Best regards,
> > Tetiana Lashchova
> > __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> Disclaimer: This email and any attachments are sent in strictest confidence
> for the sole use of the addressee and may contain legally privileged,
> confidential, and proprietary data. If you are not the intended recipient,
> please advise the sender by replying promptly to this email and then delete
> and destroy this email and any attachments without any further use, copying
> or forwarding.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,
Andrey Kurilin.


Re: [openstack-dev] [all] Consistent support for SSL termination proxies across all API services

2015-09-24 Thread Sean Dague
On 09/24/2015 03:40 AM, Julien Danjou wrote:
> On Thu, Sep 24 2015, Jamie Lennox wrote:
> 
> Hi Jamie,
> 
>> So this is a long thread and I may have missed something in it,
>> however this exact topic came up as a blocker on a devstack patch to
>> get TLS testing in the gate with HAProxy.
>>
>> The long term solution we had come up with (but granted not proposed
>> anywhere public) is that we should transition services to use relative
>> links.
> 
> This would be a good solution too indeed, but I'm not sure it's *always*
> doable.
> 
>> As far as i'm aware this is only a problem within the services
>> themselves as the URL they receive is not what was actually requested
>> if it went via HAproxy. It is not a problem with interservice requests
>> because they should get URLs from the service catalog (or otherwise
>> not display them to the user). Which means that this generally affects
>> the version discovery page, and "links" from resources to like a next,
>> prev, and base url.
> 
> Yes, but what we were saying is that this is fixable by using HTTP
> headers that the proxy sets, and translating them into a correct WSGI
> environment. Basically, that will make WSGI think it's the front-end,
> so it'll build URLs correctly for the outer world.
> 
>> Is there a reason we can't transition this to use a relative URL
>> possibly with a django style WEBROOT so that a discovery response
>> returned /v2.0 and /v3 rather than the fully qualified URL and the
>> clients be smart enough to figure this out?
> 
> We definitely can do that, but there is still a use case that would not
> be covered without a configuration somewhere which is:
>   e.g. http://foobar/myservice/v3 -> http://myservice/v3
> 
> If you return an absolute /v3, it won't work. :)

It's also a pretty serious change in document content. We've been
returning absolute URLs forever, so assuming that all the client code
out there would work with relative URLs is a really big assumption.
That's a major API bump for sure.

And it seems like we have enough pieces here to get something better
with the proxy headers (which could happen early in Mitaka) and to fill
in the remaining bits if we clean up the service catalogue use.
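The proxy-header approach discussed above (translating X-Forwarded-* headers into the WSGI environment so URL-building code emits the outer URL) can be sketched as a small middleware. This is a simplified illustration under common header-name conventions, not any project's actual implementation:

```python
# Middleware that rewrites the WSGI environment from the headers a
# TLS-terminating proxy sets, so helpers like wsgiref's application_uri
# build the externally visible URL instead of the backend's.
from wsgiref.util import application_uri

class ProxyHeadersMiddleware(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        proto = environ.get('HTTP_X_FORWARDED_PROTO')
        host = environ.get('HTTP_X_FORWARDED_HOST')
        if proto:
            environ['wsgi.url_scheme'] = proto   # e.g. https
        if host:
            environ['HTTP_HOST'] = host          # outer hostname
        return self.app(environ, start_response)

def app(environ, start_response):
    # A trivial app that returns the base URL it believes it serves.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [application_uri(environ).encode('utf-8')]

wrapped = ProxyHeadersMiddleware(app)
```

With the middleware in place, a request forwarded from `https://api.example.com` to an internal HTTP backend reports the outer HTTPS URL in its responses.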

-Sean


-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [TripleO] Remove Tuskar from tripleo-common and python-tripleoclient

2015-09-24 Thread Dougal Matthews
On 15 September 2015 at 17:33, Dougal Matthews  wrote:

>  [snip]
>
>
[2]: https://review.openstack.org/223527
> [3]: https://review.openstack.org/223535
> [4]: https://review.openstack.org/223605
>

For anyone interested in reviewing this work, the above reviews are now
ready for feedback and should be otherwise complete.


Re: [openstack-dev] [oslo][doc] Oslo doc sprint 9/24-9/25

2015-09-24 Thread Davanum Srinivas
Thanks Anita, +1 to switching to the #openstack-sprint channel. I've
updated the wiki page.

-- Dims

On Wed, Sep 23, 2015 at 7:26 PM, Anita Kuno  wrote:

> On 09/23/2015 07:18 PM, Davanum Srinivas wrote:
> > Reminder, we are doing the Doc Sprint tomorrow. Please help out with what
> > ever item or items you can.
> >
> > Thanks,
> > Dims
> >
> > On Wed, Sep 16, 2015 at 5:40 PM, James Carey <
> bellerop...@flyinghorsie.com>
> > wrote:
> >
> >> In order to improve the Oslo libraries documentation, the Oslo team is
> >> having a documentation sprint from 9/24 to 9/25.
> >>
> >> We'll kick things off at 14:00 UTC on 9/24 in the
> >> #openstack-oslo-docsprint IRC channel and we'll use an etherpad [0].
>
> Have you considered using the #openstack-sprint channel, which can be
> booked here: https://wiki.openstack.org/wiki/VirtualSprints
>
> and was created for just this kind of occasion. Also it has channel
> logging, helpful for those trying to co-ordinate across timezones.
>
> May you have a good sprint,
> Anita.
>
> >>
> >> All help is appreciated.   If you can help or have suggestions for
> >> areas of focus, please update the etherpad.
> >>
> >> [0] https://etherpad.openstack.org/p/oslo-liberty-virtual-doc-sprint
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims


[openstack-dev] [neutron][nova][qos] network QoS support driven by VM flavor/image requirements

2015-09-24 Thread Irena Berezovsky
I would like to start a discussion regarding the user experience when a
certain level of network QoS is expected to be applied to VM ports. As
you may know, basic networking QoS support was introduced during the
Liberty release, following the spec in Ref [1].
As discussed during the last networking-QoS meeting, Ref [2], the nova
team is driving toward an approach where the neutron port is created
with all required settings, and the VM is then created with the
pre-created port, not with the requested network. While this approach
serves the decoupling and separation of compute and networking concerns,
it will require smarter client orchestration, and we may lose some
functionality we have today. One of the usage scenarios currently
supported is that a Cloud Provider may associate certain requirements
with nova flavors. Once a Tenant requests a VM with this flavor, nova
(nova-scheduler) will make sure to fulfill the requirements. A possible
way to make this work for networking-qos is to set:
 nova-manage flavor set_key --name m1.small --key quota:vif_qos_policy

With the current VM creation workflow, this would require nova to ask
neutron to create the port and apply the QoS policy with the specified
policy_id. This would require changes on the nova side.
I am not sure how to support the above user scenario with the
pre-created port approach.
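For illustration, the orchestration step that nova would perform (and that a client would have to replicate under the pre-created-port model) boils down to copying the flavor's extra spec into the port-create request. The names below are illustrative only, not an agreed API:

```python
# Map a flavor-level QoS requirement onto a neutron port-create request.
# The extra-spec key follows the nova-manage example above; the request
# shape is a simplified stand-in for the neutron port API.
flavor = {'name': 'm1.small',
          'extra_specs': {'quota:vif_qos_policy': 'policy-1234'}}

def build_port_request(flavor, network_id):
    """Build a port-create body carrying the flavor's QoS policy, if any."""
    req = {'network_id': network_id}
    policy = flavor['extra_specs'].get('quota:vif_qos_policy')
    if policy:
        req['qos_policy_id'] = policy
    return req
```

Whoever creates the port (nova today, or the client/orchestrator under the decoupled model) has to run this mapping, which is exactly the question of where the responsibility should live.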

I would like to ask your opinion regarding the direction for QoS in
particular, but the question is a general one for nova-neutron
integration: should an explicitly decoupled networking/compute approach
replace the current way, in which nova delegates networking requirements
to neutron?

BR,
Irena


[1]
http://specs.openstack.org/openstack/neutron-specs/specs/liberty/qos-api-extension.html
[2]
http://eavesdrop.openstack.org/meetings/neutron_qos/2015/neutron_qos.2015-09-16-14.02.log.html


Re: [openstack-dev] [all][stable][release] 2015.1.2

2015-09-24 Thread Matthias Runge
On Wed, Sep 23, 2015 at 08:47:31PM -0400, Chuck Short wrote:
> Hi,
> 
> We would like to do a stable/kilo branch release next Thursday. In order
> to do that, I would like to freeze the branches on Friday, cut some test
> tarballs on Tuesday, and release on Thursday. Does anyone have an opinion
> on this?

For Horizon, it would make sense to push this back a week. We discovered
a few issues in Liberty which are present in current kilo, too. I'd
love to cherry-pick a few of the fixes to kilo.

Unfortunately, it takes a while until Kilo (or, in general, stable)
reviews get done.
-- 
Matthias Runge 



Re: [openstack-dev] [murano] Fix order of arguments in assertEqual

2015-09-24 Thread Bulat Gaifullin
+1

> On 24 Sep 2015, at 10:45, Tetiana Lashchova  wrote:
> 
> Hi folks!
> 
> Some tests in murano code use incorrect order of arguments in assertEqual.
> The correct order expected by the testtools is
> 
> def assertEqual(self, expected, observed, message=''):
> """Assert that 'expected' is equal to 'observed'.
> 
> :param expected: The expected value.
> :param observed: The observed value.
> :param message: An optional message to include in the error.
> """
> 
> Error message has the following format:
> 
> raise mismatch_error
> testtools.matchers._impl.MismatchError: !=:
> reference = 
> actual= 
> 
> Use of arguments in incorrect order could make debug output very confusing.
> Let's fix it to make debugging easier.
> 
> Best regards,
> Tetiana Lashchova
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] Mitaka travel tips ?

2015-09-24 Thread Thierry Carrez
David Moreau Simard wrote:
> There was a travel tips document for the Kilo summit in Paris [1].
> Lots of great, helpful information in there not covered on the OpenStack
> Summit page [2], like where to get SIM cards and stuff.
> 
> Is there one for Mitaka yet? I can't find it.

There isn't one yet (that I know of). In Paris (and Hong Kong) it was
created by the local OpenStack user group, so hopefully the Japanese
user group will set up something :)

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [murano] Fix order of arguments in assertEqual

2015-09-24 Thread Kekane, Abhishek
Hi,

There is a bug for this, you can add murano projects to this bug.

https://bugs.launchpad.net/heat/+bug/1259292

Thanks,

Abhishek Kekane

-Original Message-
From: Bulat Gaifullin [mailto:bgaiful...@mirantis.com] 
Sent: 24 September 2015 13:29
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [murano] Fix order of arguments in assertEqual

+1

> On 24 Sep 2015, at 10:45, Tetiana Lashchova  wrote:
> 
> Hi folks!
> 
> Some tests in murano code use incorrect order of arguments in assertEqual.
> The correct order expected by the testtools is
> 
> def assertEqual(self, expected, observed, message=''):
> """Assert that 'expected' is equal to 'observed'.
> 
> :param expected: The expected value.
> :param observed: The observed value.
> :param message: An optional message to include in the error.
> """
> 
> Error message has the following format:
> 
> raise mismatch_error
> testtools.matchers._impl.MismatchError: !=:
> reference = 
> actual= 
> 
> Use of arguments in incorrect order could make debug output very confusing.
> Let's fix it to make debugging easier.
> 
> Best regards,
> Tetiana Lashchova
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [mistral] Etherpad for Tokyo design summit topics

2015-09-24 Thread Renat Akhmerov
Hi,

I created an etherpad where you can suggest summit topics for Mistral: 
https://etherpad.openstack.org/p/mistral-tokyo-summit-2015 



Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?

2015-09-24 Thread Duncan Thomas
Hi

I thought I was late on this thread, but looking at the time stamps, it is
just something that escalated very quickly. I am honestly surprised a
cross-project interaction option went from 'we don't seem to understand
this' to 'deprecation merged' in 4 hours, with only a 12 hour discussion on
the mailing list, right at the end of a cycle when we're supposed to be
stabilising features.

I proposed a session at the Tokyo summit for a discussion of Cinder AZs,
since there was clear confusion about what they are intended for and how
they should be configured. Since then I've reached out to, and gotten good
feedback from, a number of operators. There are two distinct configurations
for AZ behaviour in cinder, and both sort-of worked until very recently.

1) No AZs in cinder
This is the config with a single 'blob' of storage (most of the operators
who responded so far are using Ceph, though that isn't required). The
storage takes care of availability concerns, and any AZ info from nova
should just be ignored.

2) Cinder AZs map to Nova AZs
In this case, some combination of storage / networking / etc couples
storage to nova AZs. It may be that an AZ is used as a unit of scaling,
or it could be a real storage failure domain. Either way, there are a
number of operators who have this configuration and want to keep it.
Storage can certainly have a failure domain, and limiting the scalability
problem of storage to a single compute AZ can have definite advantages in
failure scenarios. These people do not want cross-az attach.

My hope at the summit session was to agree on these two configurations,
discuss any scenarios not covered by them, and nail down
the changes we need to get these to work properly. There's definitely been
interest and activity in the operator community in making nova and cinder
AZs interact, and every desired interaction I've gotten details about so
far matches one of the above models.
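For illustration, configuration 2 roughly corresponds to settings along these
lines. This is a hedged sketch only: the option names and groups have varied
across releases, so check the documentation for your release before relying
on any of them.

```ini
# Sketch of configuration 2: cinder AZs mirroring nova AZs.
# Option names/groups vary by release; treat as illustrative.

# cinder.conf on the cinder-volume host serving "az1"
[DEFAULT]
storage_availability_zone = az1

# nova.conf on the compute hosts in "az1"
[DEFAULT]
default_availability_zone = az1

# Keep volume attaches within an AZ (the behaviour the second group of
# operators relies on); in some releases this option lives in [DEFAULT].
[cinder]
cross_az_attach = False
```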


Re: [openstack-dev] [murano] Fix order of arguments in assertEqual

2015-09-24 Thread Ekaterina Chernova
Hi!

Good catch. I have no objections to fixing it right now.

Regards,
Kate.


On Thu, Sep 24, 2015 at 10:58 AM, Bulat Gaifullin 
wrote:

> +1
>
> > On 24 Sep 2015, at 10:45, Tetiana Lashchova 
> wrote:
> >
> > Hi folks!
> >
> > Some tests in murano code use incorrect order of arguments in
> assertEqual.
> > The correct order expected by the testtools is
> >
> > def assertEqual(self, expected, observed, message=''):
> > """Assert that 'expected' is equal to 'observed'.
> >
> > :param expected: The expected value.
> > :param observed: The observed value.
> > :param message: An optional message to include in the error.
> > """
> >
> > Error message has the following format:
> >
> > raise mismatch_error
> > testtools.matchers._impl.MismatchError: !=:
> > reference = 
> > actual= 
> >
> > Use of arguments in incorrect order could make debug output very
> confusing.
> > Let's fix it to make debugging easier.
> >
> > Best regards,
> > Tetiana Lashchova
> >


Re: [openstack-dev] Mitaka travel tips ?

2015-09-24 Thread Tom Fifield

On 24/09/15 16:43, Thierry Carrez wrote:

David Moreau Simard wrote:

There was a travel tips document for the Kilo summit in Paris [1].
Lots of great helpful information in there not covered on the Openstack
Summit page [2] like where to get SIM cards and stuff.

Is there one for Mitaka yet ? I can't find it.


There isn't one yet (that I know of). In Paris (and Hong-Kong) it was
created by the local OpenStack user group, so hopefully the Japanese
user group will set up something :)



I found some! buried in the FAQ!

https://www.openstack.org/summit/tokyo-2015/faq/#Category-5

but, maybe we need a wiki page to collect more. I suggest:

https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Travel_Tips


Regards,


Tom



[openstack-dev] [murano] Fix order of arguments in assertEqual

2015-09-24 Thread Tetiana Lashchova
Hi folks!

Some tests in murano code use incorrect order of arguments in assertEqual.
The correct order expected by the testtools is

def assertEqual(self, expected, observed, message=''):
"""Assert that 'expected' is equal to 'observed'.

:param expected: The expected value.
:param observed: The observed value.
:param message: An optional message to include in the error.
"""

Error message has the following format:

raise mismatch_error
testtools.matchers._impl.MismatchError: !=:
reference = 
actual= 

Use of arguments in incorrect order could make debug output very confusing.
Let's fix it to make debugging easier.

Best regards,
Tetiana Lashchova
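A minimal illustration of the confusion (this mimics the shape of the
testtools mismatch message with a toy helper; it is not the real testtools
implementation, and `get_status` is a made-up function under test):

```python
# Sketch of why argument order matters in assertEqual-style checks.
def assert_equal(expected, observed):
    # Mirrors the reference/actual labelling testtools uses in its output.
    if expected != observed:
        raise AssertionError(
            "!=:\nreference = %r\nactual    = %r" % (expected, observed))

def get_status():
    # Hypothetical code under test that returns the wrong value.
    return "ERROR"

# Correct order: the message blames the observed value, as intended.
try:
    assert_equal("ACTIVE", get_status())
except AssertionError as e:
    correct_msg = str(e)    # reference = 'ACTIVE', actual = 'ERROR'

# Swapped order: the message now claims 'ERROR' was the reference value,
# which is exactly the confusing debug output described above.
try:
    assert_equal(get_status(), "ACTIVE")
except AssertionError as e:
    swapped_msg = str(e)    # reference = 'ERROR', actual = 'ACTIVE'
```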


Re: [openstack-dev] [all] Consistent support for SSL termination proxies across all API services

2015-09-24 Thread Julien Danjou
On Thu, Sep 24 2015, Jamie Lennox wrote:

Hi Jamie,

> So this is a long thread and i may have missed something in it,
> however this exact topic came up as a blocker on a devstack patch to
> get TLS testing in the gate with HAproxy.
>
> The long term solution we had come up with (but granted not proposed
> anywhere public) is that we should transition services to use relative
> links.

This would be a good solution too indeed, but I'm not sure it's *always*
doable.

> As far as i'm aware this is only a problem within the services
> themselves as the URL they receive is not what was actually requested
> if it went via HAproxy. It is not a problem with interservice requests
> because they should get URLs from the service catalog (or otherwise
> not display them to the user). Which means that this generally affects
> the version discovery page, and "links" from resources to like a next,
> prev, and base url.

Yes, but what we were saying is that this is fixable by using HTTP
headers that the proxy sets, and translating them into a correct WSGI
environment. Basically, that will make WSGI think it's at the front end,
so it'll build URLs correctly for the outer world.

> Is there a reason we can't transition this to use a relative URL
> possibly with a django style WEBROOT so that a discovery response
> returned /v2.0 and /v3 rather than the fully qualified URL and the
> clients be smart enough to figure this out?

We definitely can do that, but there is still a use case that would not
be covered without a configuration somewhere, which is:
  e.g. http://foobar/myservice/v3 -> http://myservice/v3

If you return an absolute /v3, it won't work. :)
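A rough sketch of the header-translation idea. The `X-Forwarded-*` header
names are the conventional ones and are an assumption here (a real deployment
would make them configurable, and a prefix header is needed for the
`/myservice` case above):

```python
from wsgiref.util import request_uri

def apply_proxy_headers(environ):
    # Translate reverse-proxy headers into the WSGI environment so
    # URL-building helpers reconstruct the external URL, not the backend one.
    proto = environ.get('HTTP_X_FORWARDED_PROTO')
    if proto:
        environ['wsgi.url_scheme'] = proto
    host = environ.get('HTTP_X_FORWARDED_HOST')
    if host:
        environ['HTTP_HOST'] = host
    # A prefix header covers the http://foobar/myservice/v3 case above.
    prefix = environ.get('HTTP_X_FORWARDED_PREFIX')
    if prefix:
        environ['SCRIPT_NAME'] = prefix + environ.get('SCRIPT_NAME', '')
    return environ

# Backend sees a plain request for /v3, proxied from https://foobar/myservice/v3
environ = {
    'wsgi.url_scheme': 'http',
    'HTTP_HOST': 'myservice',
    'SCRIPT_NAME': '',
    'PATH_INFO': '/v3',
    'HTTP_X_FORWARDED_PROTO': 'https',
    'HTTP_X_FORWARDED_HOST': 'foobar',
    'HTTP_X_FORWARDED_PREFIX': '/myservice',
}
external_url = request_uri(apply_proxy_headers(environ))
print(external_url)  # https://foobar/myservice/v3
```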

-- 
Julien Danjou
// Free Software hacker
// http://julien.danjou.info




[openstack-dev] [Cinder] [Designate] Liberty RC1 available

2015-09-24 Thread Thierry Carrez
Hello everyone,

Cinder and Designate just produced their first release candidate for the
end of the Liberty cycle. The RC1 tarballs, as well as a list of
last-minute features and fixed bugs since liberty-1 are available at:

https://launchpad.net/cinder/liberty/liberty-rc1
https://launchpad.net/designate/liberty/liberty-rc1

Unless release-critical issues are found that warrant a release
candidate respin, these RC1s will be formally released as final versions
on October 15. You are therefore strongly encouraged to test and
validate these tarballs!

Alternatively, you can directly test the stable/liberty release branch at:

http://git.openstack.org/cgit/openstack/cinder/log/?h=stable/liberty
http://git.openstack.org/cgit/openstack/designate/log/?h=stable/liberty

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/cinder/+filebug
or
https://bugs.launchpad.net/designate/+filebug

and tag it *liberty-rc-potential* to bring it to the release crew's
attention.

Note that the "master" branches of Cinder and Designate are now
officially open for Mitaka development, so feature freeze restrictions
no longer apply there.

Regards,

-- 
Thierry Carrez (ttx)



[openstack-dev] [Manila] CephFS native driver

2015-09-24 Thread John Spray
Hi all,

I've recently started work on a CephFS driver for Manila.  The (early)
code is here:
https://github.com/openstack/manila/compare/master...jcsp:ceph

It requires a special branch of ceph which is here:
https://github.com/ceph/ceph/compare/master...jcsp:wip-manila

This isn't done yet (hence this email rather than a gerrit review),
but I wanted to give everyone a heads up that this work is going on,
and a brief status update.

This is the 'native' driver in the sense that clients use the CephFS
client to access the share, rather than re-exporting it over NFS.  The
idea is that this driver will be useful for anyone who has such
clients, as well as acting as the basis for a later NFS-enabled
driver.

The export location returned by the driver gives the client the Ceph
mon IP addresses, the share path, and an authentication token.  This
authentication token is what permits the clients access (Ceph does not
do access control based on IP addresses).

It's just capable of the minimal functionality of creating and
deleting shares so far, but I will shortly be looking into hooking up
snapshots/consistency groups, albeit for read-only snapshots only
(cephfs does not have writeable snapshots).  Currently deletion is
just a move into a 'trash' directory, the idea is to add something
later that cleans this up in the background: the downside to the
"shares are just directories" approach is that clearing them up has a
"rm -rf" cost!

A note on the implementation: cephfs recently got the ability (not yet
in master) to restrict client metadata access based on path, so this
driver is simply creating shares by creating directories within a
cluster-wide filesystem, and issuing credentials to clients that
restrict them to their own directory.  They then mount that subpath,
so that from the client's point of view it's like having their own
filesystem.  We also have a quota mechanism that I'll hook in later to
enforce the share size.

Currently the security here requires clients (i.e. the ceph-fuse code
on client hosts, not the userspace applications) to be trusted, as
quotas are enforced on the client side.  The OSD access control
operates on a per-pool basis, and creating a separate pool for each
share is inefficient.  In the future it is expected that CephFS will
be extended to support file layouts that use RADOS namespaces, which
are cheap, such that we can issue a new namespace to each share and
enforce the separation between shares on the OSD side.

However, for many people the ultimate access control solution will be
to use a NFS gateway in front of their CephFS filesystem: it is
expected that an NFS-enabled cephfs driver will follow this native
driver in the not-too-distant future.

This will be my first openstack contribution, so please bear with me
while I come up to speed with the submission process.  I'll also be in
Tokyo for the summit next month, so I hope to meet other interested
parties there.

All the best,
John



Re: [openstack-dev] [neutron][networking-ovn][vtep] Proposal: support for vtep-gateway in ovn

2015-09-24 Thread Russell Bryant
On 09/24/2015 01:17 AM, Amitabha Biswas wrote:
> Hi everyone,
> 
> I want to open up the discussion regarding how to support OVN
> VTEP gateway deployment and its lifecycle in Neutron. 

Thanks a lot for looking into this!

> In the "Life Cycle of a VTEP gateway" part in the OVN architecture
> document (http://www.russellbryant.net/ovs-docs/ovn-architecture.7.pdf),
> step 3 is where the Neutron OVN plugin is involved. At a minimum, the
> Neutron OVN plugin will enable setting the type as "vtep" and the
> vtep-logical-switch and vtep-physical-switch options in the
> OVN_Northbound database.

I have the docs published there just to make it easier to read the
rendered version.  The source of that document is:

https://github.com/openvswitch/ovs/blob/master/ovn/ovn-architecture.7.xml

> There are 2 parts to the proposal/discussion - a short term solution and
> a long term one:
> 
> A short term solution (proposed by Russell Bryant) is similar to the
> work that was done for container support in OVN - using a binding
> profile http://networking-ovn.readthedocs.org/en/latest/containers.html.
> An OVN logical network/switch can be mapped to a vtep logical gateway by
> creating a port in that logical network and creating a binding profile
> for that port in the following manner:
> 
> neutron port-create --binding-profile
> '{"vtep-logical-switch":"vtep_lswitch_key",
> "vtep-physical-switch":"vtep_pswitch_key"}' private.
> 
> Where vtep-logical-switch and vtep-physical-switch should have been
> defined in the OVN_Southbound database by the previous steps (1,2) in
> the life cycle. 

Yes, this sounds great to me.  Since there's not a clear well accepted
API to use, we should go this route to get the functionality exposed
more quickly.  We should also include in our documentation that this is
not expected to be how this is done long term.
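For completeness, the same stopgap expressed through python-neutronclient
would look roughly like this. The client call is sketched and left commented
out; the vtep-* values come from the proposal above, and the network UUID is
a placeholder:

```python
# Sketch: building the port-create request for the binding:profile stopgap.
# The vtep-* values must already exist in the OVN_Southbound database
# (steps 1-2 of the gateway life cycle).
binding_profile = {
    "vtep-logical-switch": "vtep_lswitch_key",
    "vtep-physical-switch": "vtep_pswitch_key",
}
port_body = {
    "port": {
        "network_id": "<private-network-uuid>",
        "binding:profile": binding_profile,
    }
}

# With a configured client this would be submitted as:
# from neutronclient.v2_0 import client
# neutron = client.Client(username=..., password=..., ...)
# neutron.create_port(body=port_body)
```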

The comparison to the containers-in-VMs support is a good one.  In that
case we used binding:profile as a quick way to expose it, but we're
aiming to support a proper API.  For that feature, we've identified the
"VLAN aware VMs" API as the way forward, which will hopefully be
available next cycle.

> For the longer term solution, there needs to be a discussion:
> 
> Should the knowledge about the physical and logical vtep gateway
> be exposed to Neutron - if yes, how? This would allow a Neutron NB
> API/extension to bind a “known” vtep gateway to the neutron logical
> network. This would be similar to the workflow done in the
> networking-l2gw extension
> https://review.openstack.org/#/c/144173/3/specs/kilo/l2-gateway-api.rst
> 
> 1. Allow the admin to define and manage the vtep gateway through Neutron
> REST API.
> 
> 2. Define connections between Neutron networks and gateways. This is
> conceptually similar to Step 3 of the vtep gateway life cycle performed by the OVN
> Plugin in the short term solution.

networking-l2gw does seem to be the closest thing to what's needed, but
it's not a small amount of work.  I think the API might need to be
extended a bit for our needs.  A bigger concern for me is actually with
some of the current implementation details.

One particular issue is that the project implements the ovsdb protocol
from scratch.  The ovs project provides a Python library for this.  Both
Neutron and networking-ovn use it, at least.  From some discussion, I've
gathered that the ovs Python library lacked one feature that was needed,
but has since been added because we wanted the same thing in networking-ovn.

The networking-l2gw route will require some pretty significant work.
It's still the closest existing effort, so I think we should explore it
until it's absolutely clear that it *can't* work for what we need.

> OR
> 
> Should OVN pursue its own Neutron extension (including vtep gateway
> support).

I don't think this option provides a lot of value over the short term
binding:profile solution.  Both are OVN specific.  I think I'd rather
just stick to binding:profile as the OVN specific stopgap because it's a
*lot* less work.

Thanks again,

-- 
Russell Bryant



Re: [openstack-dev] [all] gerrit performance

2015-09-24 Thread Vikram Choudhary
+1 for me as well :(

On Thu, Sep 24, 2015 at 5:58 PM, Neil Jerram 
wrote:

> Yes, a 'git review' just took around 20s for me.
>
> On 24/09/15 13:21, Miguel Angel Ajo wrote:
> >I am experiencing it, yes :/
> >>Gary Kotton wrote:
> >>Hi,
> >>Anyone else experiencing bad performance with Gerrit at the moment?
> Accessing files in a review takes ages. So now the review cycle will be months
> instead of weeks.
> >>Thanks
> >>Gary
>


Re: [openstack-dev] [all][stable][release] 2015.1.2

2015-09-24 Thread Tony Breeds
On Wed, Sep 23, 2015 at 08:47:31PM -0400, Chuck Short wrote:
> Hi,
> 
> We would like to do a stable/kilo branch release, next Thursday. In order
> to do that I would like to freeze the branches on Friday. Cut some test
> tarballs on Tuesday and release on Thursday. Does anyone have an opinion on
> this?

I'm trying to fix a series of issues in Juno and it's resulting in
global-requirements changes for kilo.  I hope to have them settled by this time
next week.

I think it'd be good, but not essential, for them to be in 2015.1.2.

Yours Tony.




[openstack-dev] [all] gerrit performance

2015-09-24 Thread Gary Kotton
Hi,
Anyone else experiencing bad performance with Gerrit at the moment? Accessing files
in a review takes ages. So now the review cycle will be months instead of weeks.

Thanks
Gary


Re: [openstack-dev] [all] gerrit performance

2015-09-24 Thread Miguel Angel Ajo

I am experiencing it, yes :/

Gary Kotton wrote:

Hi,
Anyone else experiencing bad performance with Gerrit at the moment? Accessing files
in a review takes ages. So now the review cycle will be months instead of weeks.

Thanks
Gary



Re: [openstack-dev] [all] gerrit performance

2015-09-24 Thread Neil Jerram
Yes, a 'git review' just took around 20s for me.

On 24/09/15 13:21, Miguel Angel Ajo wrote:
>I am experiencing it, yes :/
>>Gary Kotton wrote:
>>Hi,
>>Anyone else experiencing bad performance with Gerrit at the moment? Accessing
>>files in a review takes ages. So now the review cycle will be months instead
>>of weeks.
>>Thanks
>>Gary


Re: [openstack-dev] [puppet] use zuul-cloner when running rspec

2015-09-24 Thread Alex Schultz
On Wed, Sep 23, 2015 at 4:56 PM, Emilien Macchi  wrote:
> Background
> ==
>
> Current rspec tests are tested with modules mentioned in .fixtures.yaml
> file of each module.
>
> * the file is not consistent across all modules
> * it hardcodes module names & versions
> * this way does not allow use of the "Depends-On" feature, which would allow
> testing of cross-module patches
>
> Proposal
> 
>
> * Like we do in beaker & integration jobs, use zuul-cloner to clone
> modules in our CI jobs.
> * Use r10k to prepare fixtures modules.
> * Use Puppetfile hosted by openstack/puppet-openstack-integration
>
> In that way:
> * we will have modules name + versions testing consistency across all
> modules
> * the same Puppetfile would be used by unit/beaker/integration testing.
> * the patch that passes tests on your laptop would pass tests in upstream CI
> * if you don't have zuul-cloner on your laptop, don't worry, it will use
> git clone. Though you won't have the Depends-On feature working on your
> laptop (technically not possible).
> * Though your patch will support Depends-On in OpenStack Infra for unit
> tests. If you submit a patch in puppet-openstacklib that drops something
> wrong, you can send a patch in puppet-nova that will test it, and unit
> tests will fail.
>
> Drawbacks
> =
> * cloning from .fixtures.yaml takes ~ 10 seconds
> * using r10k + zuul-cloner takes ~50 seconds (more modules to clone).
>
> I think the extra 40 seconds is acceptable given the benefit.
>

As someone who consumes these modules downstream and has our own CI
setup to run the rspec items, this ties it too closely to the
openstack infrastructure. If we replace the .fixtures.yml with
zuul-cloner, it assumes I always want the openstack version of the
modules. This is not necessarily true. I like being able to replace
items within .fixtures.yml when doing dev work. For example, if I want
to test upgrading another module not related to openstack, like
inifile, how does that work with the proposed solution?  This is also
moving away from general puppet module conventions for testing. My
preference would be that this be a different task and we have both
.fixtures.yml (for general use/development) and the zuul method of
cloning (for CI).  You have to also think about this from a consumer
standpoint and this is adding an external dependency on the OpenStack
infrastructure for anyone trying to run rspec or trying to consume the
published versions from the forge.  Would I be able to run these tests
in an offline mode with this change? With the .fixtures.yml it's a
minor edit to switch to local versions. Is the same true for the
zuul-cloner version?
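For reference, the kind of local override being described is a one-line
change in .fixtures.yml, along these lines (the module names, URLs, and
paths are illustrative, not taken from any particular module):

```yaml
# Illustrative .fixtures.yml: switch a fixture between an upstream repo
# and a local checkout while doing dev work.
fixtures:
  repositories:
    inifile: "https://github.com/puppetlabs/puppetlabs-inifile"
    # local dev override:
    # inifile: "file:///home/dev/puppetlabs-inifile"
  symlinks:
    nova: "#{source_dir}"
```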

>
> Next steps
> ==
>
> * PoC in puppet-nova: https://review.openstack.org/#/c/226830/
> * Patch openstack/puppet-modulesync-config to be consistent across all
> our modules.
>
> Bonus
> =
> we might need (asap) a canary job for puppet-openstack-integration
> repository, that would run tests on a puppet-* module (since we're using
> install_modules.sh & Puppetfile files in puppet-* modules).
> Nothing has been done yet for this work.
>
>
> Thoughts?
> --
> Emilien Macchi
>
>

I think we need this functionality, I just don't think it's a
replacement for .fixtures.yml.

Thanks,
-Alex



Re: [openstack-dev] [neutron][networking-ovn][vtep] Proposal: support for vtep-gateway in ovn

2015-09-24 Thread Salvatore Orlando
Random comments inline.

Salvatore

On 24 September 2015 at 14:05, Russell Bryant  wrote:

> On 09/24/2015 01:17 AM, Amitabha Biswas wrote:
> > Hi everyone,
> >
> > I want to open up the discussion regarding how to support OVN
> > VTEP gateway deployment and its lifecycle in Neutron.
>
> Thanks a lot for looking into this!
>
> > In the "Life Cycle of a VTEP gateway" part in the OVN architecture
> > document (http://www.russellbryant.net/ovs-docs/ovn-architecture.7.pdf),
> > step 3 is where the Neutron OVN plugin is involved. At a minimum, the
> > Neutron OVN plugin will enable setting the type as "vtep" and the
> > vtep-logical-switch and vtep-physical-switch options in the
> > OVN_Northbound database.
>
> I have the docs published there just to make it easier to read the
> rendered version.  The source of that document is:
>
> https://github.com/openvswitch/ovs/blob/master/ovn/ovn-architecture.7.xml
>
> > There are 2 parts to the proposal/discussion - a short term solution and
> > a long term one:
> >
> > A short term solution (proposed by Russell Bryant) is similar to the
> > work that was done for container support in OVN - using a binding
> > profile http://networking-ovn.readthedocs.org/en/latest/containers.html.
> > An OVN logical network/switch can be mapped to a vtep logical gateway by
> > creating a port in that logical network and creating a binding profile
> > for that port in the following manner:
> >
> > neutron port-create --binding-profile
> > '{"vtep-logical-switch":"vtep_lswitch_key",
> > "vtep-physical-switch":"vtep_pswitch_key"}' private.
> >
> > Where vtep-logical-switch and vtep-physical-switch should have been
> > defined in the OVN_Southbound database by the previous steps (1,2) in
> > the life cycle.
>
> Yes, this sounds great to me.  Since there's not a clear well accepted
> API to use, we should go this route to get the functionality exposed
> more quickly.  We should also include in our documentation that this is
> not expected to be how this is done long term.
>
> The comparison to the containers-in-VMs support is a good one.  In that
> case we used binding:profile as a quick way to expose it, but we're
> aiming to support a proper API.  For that feature, we've identified the
> "VLAN aware VMs" API as the way forward, which will hopefully be
> available next cycle.
>
> > For the longer term solution, there needs to be a discussion:
> >
> > Should the knowledge about the physical and logical vtep gateway
> > be exposed to Neutron - if yes, how? This would allow a Neutron NB
> > API/extension to bind a “known” vtep gateway to the neutron logical
> > network. This would be similar to the workflow done in the
> > networking-l2gw extension
> > https://review.openstack.org/#/c/144173/3/specs/kilo/l2-gateway-api.rst
> >
> > 1. Allow the admin to define and manage the vtep gateway through Neutron
> > REST API.
> >
> > 2. Define connections between Neutron networks and gateways. This is
> > conceptually similar to Step 3 of the vtep gateway life cycle performed by the OVN
> > Plugin in the short term solution.
>
> networking-l2gw does seem to be the closest thing to what's needed, but
> it's not a small amount of work.  I think the API might need to be
> extended a bit for our needs.  A bigger concern for me is actually with
> some of the current implementation details.
>

It is indeed. While I very much like the solution based on binding profiles,
it does not work very well from a UX perspective in environments where
operators control the whole cloud with openstack tools.


>
> One particular issue is that the project implements the ovsdb protocol
> from scratch.  The ovs project provides a Python library for this.  Both
> Neutron and networking-ovn use it, at least.  From some discussion, I've
> gathered that the ovs Python library lacked one feature that was needed,
> but has since been added because we wanted the same thing in
> networking-ovn.
>

My take here is that we don't need to use the whole implementation of
networking-l2gw, but only the APIs and the DB management layer it exposes.
Networking-l2gw provides a VTEP network gateway solution that, if you want,
will eventually be part of Neutron's "reference" control plane.
OVN provides its implementation; I think it should be possible to leverage
networking-l2gw either by pushing an OVN driver there, or implementing the
same driver in openstack/networking-ovn.


>
> The networking-l2gw route will require some pretty significant work.
> It's still the closest existing effort, so I think we should explore it
> until it's absolutely clear that it *can't* work for what we need.
>

I would say that it is definitely not trivial but probably a bit less than
"significant". abhraut from my team has done something quite similar for
openstack/vmware-nsx [1]


> > OR
> >
> > Should OVN pursue its own Neutron extension (including vtep gateway
> > support).
>
> I don't think this option provides a lot of value over the short 

[openstack-dev] repairing so many OpenStack components writing configuration files in /usr/etc

2015-09-24 Thread Thomas Goirand
Hi,

It's about the 3rd time just this week that I'm repairing an OpenStack
component which is trying to write config files in /usr/etc. Could this
nonsense stop, please?

FYI, this time, it's with os-brick... but it happened with so many
components already:
- bandit (with an awesome reply from upstream to my launchpad bug,
basically saying he doesn't care about downstream distros...)
- neutron
- neutron-fwaas
- tempest
- lots of Neutron drivers (ie: networking-FOO)
- pycadf
- and probably more which I forgot.

Yes, I can repair things at the packaging level, but I just hope I won't
have to do this for each and every OpenStack component, and I suppose
everyone understands how frustrating it is...

I also wonder where this /usr/etc is coming from. If it was
/usr/local/etc, I could somehow get it. But here... ?!?
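One likely mechanism (an assumption on my part, not confirmed in this thread)
is code building the config path from the install prefix: on a distro Python
the prefix is '/usr', so joining 'etc' onto it lands in /usr/etc. A sketch,
with made-up helper names rather than the actual os-brick code:

```python
import os

def naive_config_dir(project, prefix):
    # Joining 'etc' onto the install prefix: with prefix '/usr' this
    # produces the /usr/etc paths being complained about.
    return os.path.join(prefix, 'etc', project)

def distro_friendly_config_dir(project, prefix):
    # What packagers expect: a system prefix means config lives in /etc.
    if prefix == '/usr':
        return os.path.join('/etc', project)
    return os.path.join(prefix, 'etc', project)

print(naive_config_dir('os-brick', '/usr'))            # /usr/etc/os-brick
print(distro_friendly_config_dir('os-brick', '/usr'))  # /etc/os-brick
```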

Cheers,

Thomas Goirand (zigo)



Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

2015-09-24 Thread WANG, Ming Hao (Tony T)
Russell,

Thanks for your detailed explanation and kind help!
I now understand how a container in a VM can acquire network interfaces in
different neutron networks.
For the connections between compute nodes, I think I need to study the Geneve
protocol and VTEP first.
If any further questions come up, I may need to continue consulting you. :-)

Thanks for your help again, 
Tony

-Original Message-
From: Russell Bryant [mailto:rbry...@redhat.com] 
Sent: Wednesday, September 23, 2015 10:22 PM
To: OpenStack Development Mailing List (not for usage questions); WANG, Ming 
Hao (Tony T)
Subject: Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to 
setup multiple neutron networks for one container?

I'll reply to each of your 3 messages here:

On 09/23/2015 05:57 AM, WANG, Ming Hao (Tony T) wrote:
> Hi Russell,
> 
> I just realized OVN plugin is an independent plugin of OVS plugin.

Yes, it's a plugin developed in the "networking-ovn" project.

http://git.openstack.org/cgit/openstack/networking-ovn/

> In this case, how do we handle the provider network connections between 
> compute nodes? Is it handled by OVN actually?

I'm going to start by explaining the status of OVN itself, and then I'll come 
back and address the Neutron integration:

 -- OVN --

OVN implements logical networks as overlays using the Geneve protocol.
Connecting from logical to physical networks is done by one of two ways.

The first is using VTEP gateways.  This could be hardware or software gateways 
that implement the hardware_vtep schema.  This is typically a TOR switch that 
supports the vtep schema, but I believe someone is going to build a software 
version based on ovs and dpdk.  OVN includes a daemon called 
"ovn-controller-vtep" that is run for each vtep gateway to manage connectivity 
between OVN networks and the gateway.  It could run on the switch itself, or 
some other management host.  The last set of patches to get this working 
initially were merged just 8 days ago.

The ovn-architecture document describes "Life Cycle of a VTEP gateway":


https://github.com/openvswitch/ovs/blob/master/ovn/ovn-architecture.7.xml#L820

or you can find a temporary copy of a rendered version here:

  http://www.russellbryant.net/ovs-docs/ovn-architecture.7.pdf

The second is what Neutron refers to as "provider networks".  OVN does support 
this, as well.  It was merged just a couple of weeks ago.  The commit message for
OVN "localnet" ports goes into quite a bit of detail about how this works in 
OVN:


https://github.com/openvswitch/ovs/commit/c02819293d52f7ea7b714242d871b2b01f57f905

 -- Neutron --

Both of these things are freshly implemented in OVN so the Neutron integration 
is a WIP.

For vtep gateways, there's not an established API.  networking-l2gw is the 
closest thing, but I've got some concerns with both the API and implementation. 
 As a first baby step, we're just going to provide a hack that lets an admin 
create a connection between a network and gateway using a neutron port with a 
special binding:profile.  We'll also be continuing to look at providing a 
proper API.

For provider networks, working with them in Neutron will be no different than 
it is today with the current OVS support.  I just have to finish the Neutron 
plugin integration, which I just started on yesterday.

> 
> Thanks,
> Tony
> 
> -Original Message-
> From: WANG, Ming Hao (Tony T)
> Sent: Wednesday, September 23, 2015 1:58 PM
> To: WANG, Ming Hao (Tony T); 'OpenStack Development Mailing List (not for 
> usage questions)'
> Subject: RE: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support 
> to setup multiple neutron networks for one container?
> 
> Hi Russell,
> 
> Is there any material to explain how OVN parent port work?

Note that while this uses a binding:profile hack for now, we're going to update 
the plugin to support the vlan-aware-vms API for this use case once that is 
completed.

http://docs.openstack.org/developer/networking-ovn/containers.html

http://specs.openstack.org/openstack/neutron-specs/specs/liberty/vlan-aware-vms.html

https://github.com/openvswitch/ovs/blob/master/ovn/CONTAINERS.OpenStack.md

https://github.com/shettyg/ovn-docker

> Thanks,
> Tony
> 
> -Original Message-
> From: WANG, Ming Hao (Tony T)
> Sent: Wednesday, September 23, 2015 10:02 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: RE: [openstack-dev] [neutron] Does neutron ovn plugin support to 
> setup multiple neutron networks for one container?
> 
> Russell,
> 
> Thanks for your info.
> If I want to assign multiple interfaces to a container on different
> neutron networks (for example, netA and netB), is it mandatory for
> the VM hosting the containers to have network interfaces in netA and netB,
> with OVN directing the container traffic to the corresponding
> VM network interfaces?
> 
> from 
> https://github.com/openvswitch/ovs/blob/master/ovn/CONTAINERS.OpenStack.md :
> "This VLAN tag is 

Re: [openstack-dev] repairing so many OpenStack components writing configuration files in /usr/etc

2015-09-24 Thread Julien Danjou
On Thu, Sep 24 2015, Thomas Goirand wrote:

Hi Thomas,

> I also wonder where this /usr/etc is coming from. If it was
> /usr/local/etc, I could somehow get it. But here... ?!?

Do you have a way to reproduce that, or a backtrace maybe?

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info




Re: [openstack-dev] repairing so many OpenStack components writing configuration files in /usr/etc

2015-09-24 Thread Matthew Treinish
On Thu, Sep 24, 2015 at 04:25:31PM +0200, Thomas Goirand wrote:
> Hi,
> 
> It's about the 3rd time just this week, that I'm repairing an OpenStack
> component which is trying to write config files in /usr/etc. Could this
> non-sense stop please?

So I'm almost 100% sure that the intent for everyone doing this is for the files
to be written to /etc when system-installing the package. It's being caused by
data_files lines in the setup.cfg putting things in etc/foo. Like in neutron:

http://git.openstack.org/cgit/openstack/neutron/tree/setup.cfg#n23

The PBR docs [1] say this will go to /etc if installing it in the system python,
which obviously isn't the case. The files are instead being installed to
sys.prefix/etc, which works well for the venv case but not so much for
system-installing a package.

The issue is with the use of data_files. I'm sure dstufft can elaborate on all
the prickly bits, but IIRC it's the use of setuptools or distutils depending on
how the package is being installed (either via a wheel or sdist). I think the
distutils behavior is to install relative to sys.prefix and setuptools puts it
relative to site-packages. But, neither of those are really the desired
behavior...
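To make the failure mode concrete, here is a minimal sketch of the kind of stanza involved (the project and file names are illustrative, not taken from any real setup.cfg):

```ini
# Hypothetical pbr setup.cfg fragment; "foo" is an illustrative name.
# The relative destination "etc/foo" resolves against sys.prefix:
# fine in a venv (venv/etc/foo), surprising for a system install
# (e.g. /usr/etc/foo). An absolute "/etc/foo" would instead write to
# the real /etc, breaking venv and non-root installs.
[files]
data_files =
    etc/foo =
        etc/foo/foo.conf.sample
```

Neither spelling gives both the venv and the system-install case what they want, which is why this looks like something to fix in the tooling rather than in each project's setup.cfg.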

> 
> FYI, this time, it's with os-brick... but it happened with so many
> components already:
> - bandit (with an awesome reply from upstream to my launchpad bug,
> basically saying he doesn't care about downstream distros...)
> - neutron
> - neutron-fwaas
> - tempest
> - lots of Neutron drivers (ie: networking-FOO)
> - pycadf
> - and probably more which I forgot.
> 
> Yes, I can repair things at the packaging level, but I just hope I won't
> have to do this for each and every OpenStack component, and I suppose
> everyone understands how frustrating it is...

It's an issue with python packaging that we need to fix, likely in PBR first.
But, I doubt this is isolated to PBR, we'll probably have to work on fixes to
distutils and/or setuptools too.

> 
> I also wonder where this /usr/etc is coming from. If it was
> /usr/local/etc, I could somehow get it. But here... ?!?

IIRC if you set the python sys.prefix to /usr/local it'll put the etc files
in /usr/local.

-Matt Treinish

[1] http://docs.openstack.org/developer/pbr/#files




Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?

2015-09-24 Thread Matt Riedemann



On 9/24/2015 9:06 AM, Matt Riedemann wrote:



On 9/24/2015 3:19 AM, Sylvain Bauza wrote:



Le 24/09/2015 09:04, Duncan Thomas a écrit :

Hi

I thought I was late on this thread, but looking at the time stamps,
it is just something that escalated very quickly. I am honestly
surprised a cross-project interaction option went from 'we don't seem
to understand this' to 'deprecation merged' in 4 hours, with only a 12
hour discussion on the mailing list, right at the end of a cycle when
we're supposed to be stabilising features.



So, I agree it was maybe a bit too quick hence the revert. That said,
Nova master is now Mitaka, which means that the deprecation change was
provided for the next cycle, not the one currently stabilising.

Anyway, I'm really all up with discussing why Cinder needs to know the
Nova AZs.


I proposed a session at the Tokyo summit for a discussion of Cinder
AZs, since there was clear confusion about what they are intended for
and how they should be configured.


Cool, count me in from the Nova standpoint.


Since then I've reached out to, and gotten good feedback from, a number
of operators. There are two distinct configurations for AZ behaviour
in cinder, and both sort-of worked until very recently.

1) No AZs in cinder
This is the config with a single 'blob' of storage (most of the
operators who responded so far are using Ceph, though that isn't
required). The storage takes care of availability concerns, and any AZ
info from nova should just be ignored.

2) Cinder AZs map to Nova AZs
In this case, some combination of storage / networking / etc couples
storage to nova AZs. It may be that an AZ is used as a unit of
scaling, or it could be a real storage failure domain. Either way,
there are a number of operators who have this configuration and want
to keep it. Storage can certainly have a failure domain, and limiting
the scalability problem of storage to a single compute AZ can have
definite advantages in failure scenarios. These people do not want
cross-az attach.



Ahem, Nova AZs are not failure domains - I mean the current
implementation, in the sense that many people understand a failure
domain, ie. a physical unit of machines (a bay, a room, a floor, a
datacenter).
All the AZs in Nova share the same controlplane with the same message
queue and database, which means that one failure can be propagated to
the other AZ.

To be honest, there is one very specific usecase where AZs *are* failure
domains : when cells exact match with AZs (ie. one AZ grouping all the
hosts behind one cell). That's the very specific usecase that Sam is
mentioning in his email, and I certainly understand we need to keep that.

What are AZs in Nova is pretty well explained in a quite old blogpost :
http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/


We also added a few comments in our developer doc here
http://docs.openstack.org/developer/nova/aggregates.html#availability-zones-azs


tl;dr: AZs are aggregate metadata that makes those aggregates of compute
nodes visible to the users. Nothing more than that, no magic sauce.
That's just a logical abstraction that can map to your physical
deployment, but like I said, it would share the same bus and DB.
Of course, you could still provide networks distinct between AZs but
that just gives you the L2 isolation, not the real failure domain in a
Business Continuity Plan way.

What puzzles me is how Cinder is managing datacenter-level
isolation given there is no cells concept AFAIK. I assume that
cinder-volumes belong to a specific datacenter, but how is its
control plane managed? I can certainly understand the need for affinity
placement between physical units, but I'm missing that piece, and
consequently I wonder why Nova needs to provide AZs to Cinder in the
general case.




My hope at the summit session was to agree these two configurations,
discuss any scenarios not covered by these two configurations, and nail
down the changes we need to get these to work properly. There's
definitely been interest and activity in the operator community in
making nova and cinder AZs interact, and every desired interaction
I've gotten details about so far matches one of the above models.



I'm all with you about providing a way for users to get volume affinity
for Nova. That's a long story I'm trying to consider and we are
constantly trying to improve the nova scheduler interfaces so that other
projects could provide resources to the nova scheduler for decision
making. I just want to consider whether AZs are the best concept for
that or whether we should do things another way (again, because AZs are not
what people expect).

Again, count me in for the Cinder session, and just lemme know when the
session is planned so I could attend it.

-Sylvain





Re: [openstack-dev] repairing so many OpenStack components writing configuration files in /usr/etc

2015-09-24 Thread Steve Martinelli

just a general FYI - so for pyCADF (and I'm guessing others), it was a very
subtle error:

https://github.com/openstack/pycadf/commit/4e70ff2e6204f74767c5cab13f118d72c2594760

Essentially the entry points in setup.cfg were missing a leading slash.

Thanks,

Steve Martinelli
OpenStack Keystone Core



From:   Julien Danjou 
To: Thomas Goirand 
Cc: "openstack-dev@lists.openstack.org"

Date:   2015/09/24 10:55 AM
Subject:Re: [openstack-dev] repairing so many OpenStack components
writing configuration files in /usr/etc



On Thu, Sep 24 2015, Thomas Goirand wrote:

Hi Thomas,

> I also wonder where this /usr/etc is coming from. If it was
> /usr/local/etc, I could somehow get it. But here... ?!?

Do you have a way to reproduce that, or a backtrace maybe?

--
Julien Danjou
# Free Software hacker
# http://julien.danjou.info
[attachment "signature.asc" deleted by Steve Martinelli/Toronto/IBM]




Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?

2015-09-24 Thread Matt Riedemann



On 9/24/2015 3:19 AM, Sylvain Bauza wrote:



Le 24/09/2015 09:04, Duncan Thomas a écrit :

Hi

I thought I was late on this thread, but looking at the time stamps,
it is just something that escalated very quickly. I am honestly
surprised a cross-project interaction option went from 'we don't seem
to understand this' to 'deprecation merged' in 4 hours, with only a 12
hour discussion on the mailing list, right at the end of a cycle when
we're supposed to be stabilising features.



So, I agree it was maybe a bit too quick hence the revert. That said,
Nova master is now Mitaka, which means that the deprecation change was
provided for the next cycle, not the one currently stabilising.

Anyway, I'm really all up with discussing why Cinder needs to know the
Nova AZs.


I proposed a session at the Tokyo summit for a discussion of Cinder
AZs, since there was clear confusion about what they are intended for
and how they should be configured.


Cool, count me in from the Nova standpoint.


Since then I've reached out to, and gotten good feedback from, a number
of operators. There are two distinct configurations for AZ behaviour
in cinder, and both sort-of worked until very recently.

1) No AZs in cinder
This is the config with a single 'blob' of storage (most of the
operators who responded so far are using Ceph, though that isn't
required). The storage takes care of availability concerns, and any AZ
info from nova should just be ignored.

2) Cinder AZs map to Nova AZs
In this case, some combination of storage / networking / etc couples
storage to nova AZs. It may be that an AZ is used as a unit of
scaling, or it could be a real storage failure domain. Either way,
there are a number of operators who have this configuration and want
to keep it. Storage can certainly have a failure domain, and limiting
the scalability problem of storage to a single compute AZ can have
definite advantages in failure scenarios. These people do not want
cross-az attach.



Ahem, Nova AZs are not failure domains - I mean the current
implementation, in the sense that many people understand a failure
domain, ie. a physical unit of machines (a bay, a room, a floor, a
datacenter).
All the AZs in Nova share the same controlplane with the same message
queue and database, which means that one failure can be propagated to
the other AZ.

To be honest, there is one very specific usecase where AZs *are* failure
domains : when cells exact match with AZs (ie. one AZ grouping all the
hosts behind one cell). That's the very specific usecase that Sam is
mentioning in his email, and I certainly understand we need to keep that.

What are AZs in Nova is pretty well explained in a quite old blogpost :
http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/

We also added a few comments in our developer doc here
http://docs.openstack.org/developer/nova/aggregates.html#availability-zones-azs

tl;dr: AZs are aggregate metadata that makes those aggregates of compute
nodes visible to the users. Nothing more than that, no magic sauce.
That's just a logical abstraction that can map to your physical
deployment, but like I said, it would share the same bus and DB.
Of course, you could still provide networks distinct between AZs but
that just gives you the L2 isolation, not the real failure domain in a
Business Continuity Plan way.

What puzzles me is how Cinder is managing datacenter-level
isolation given there is no cells concept AFAIK. I assume that
cinder-volumes belong to a specific datacenter, but how is its
control plane managed? I can certainly understand the need for affinity
placement between physical units, but I'm missing that piece, and
consequently I wonder why Nova needs to provide AZs to Cinder in the
general case.




My hope at the summit session was to agree these two configurations,
discuss any scenarios not covered by these two configurations, and nail
down the changes we need to get these to work properly. There's
definitely been interest and activity in the operator community in
making nova and cinder AZs interact, and every desired interaction
I've gotten details about so far matches one of the above models.



I'm all with you about providing a way for users to get volume affinity
for Nova. That's a long story I'm trying to consider and we are
constantly trying to improve the nova scheduler interfaces so that other
projects could provide resources to the nova scheduler for decision
making. I just want to consider whether AZs are the best concept for
that or whether we should do things another way (again, because AZs are not
what people expect).

Again, count me in for the Cinder session, and just lemme know when the
session is planned so I could attend it.

-Sylvain





Re: [openstack-dev] [cinder] How to make a mock effective for all methods of a testclass

2015-09-24 Thread Gorka Eguileor
On 23/09, Eric Harney wrote:
> On 09/23/2015 04:06 AM, liuxinguo wrote:
> > Hi,
> > 
> > In a.py we have a function:
> > def _change_file_mode(filepath):
> > utils.execute('chmod', '600', filepath, run_as_root=True)
> > 
> > In test_xxx.py, there is a testclass:
> > class DriverTestCase(test.TestCase):
> > def test_a(self)
> > ...
> > Call a. _change_file_mode
> > ...
> > 
> > def test_b(self)
> > ...
> > Call a. _change_file_mode
> > ...
> > 
> > I have tried to mock out the function _change_file_mode like this:
> > @mock.patch.object(a, '_change_file_mode', return_value=None)
> > class DriverTestCase(test.TestCase):
> > def test_a(self)
> > ...
> > Call a. _change_file_mode
> > ...
> > 
> > def test_b(self)
> > ...
> > Call a. _change_file_mode
> > ...
> > 
> > But the mock has no effect; the real function _change_file_mode is still
> > executed.
> > So how do I make a mock effective for all methods of a test class?
> > Thanks for any input!
> > 
> > Wilson Liu
> 
> The simplest way I found to do this was to use mock.patch in the test
> class's setUp() method, and tear it down again in tearDown().
> 
> There may be cleaner ways to do this with tools in oslotest etc. (I'm
> not sure), but this is fairly straightforward.
> 
> See here -- self._clear_patch stores the mock:
> http://git.openstack.org/cgit/openstack/cinder/tree/cinder/tests/unit/test_volume.py?id=8de60a8b#n257
> 

When doing the mock in setUp() it is recommended to register the stop as
a cleanup instead of doing it in tearDown(); in that code it would
be: self.addCleanup(self._clear_patch.stop)
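A minimal self-contained sketch of that pattern (the driver class here is an illustrative stand-in, not the real cinder module):

```python
import unittest
from unittest import mock


class FakeDriver:
    """Illustrative stand-in for the module under test."""

    @staticmethod
    def _change_file_mode(filepath):
        raise RuntimeError("the real chmod should never run in tests")


class DriverTestCase(unittest.TestCase):
    def setUp(self):
        super().setUp()
        # Start the patch once; it stays active for every test method.
        patcher = mock.patch.object(
            FakeDriver, '_change_file_mode', return_value=None)
        self.mock_chmod = patcher.start()
        # addCleanup is safer than tearDown: it runs even when setUp
        # fails partway through after this point.
        self.addCleanup(patcher.stop)

    def test_a(self):
        FakeDriver._change_file_mode('/tmp/foo')
        self.mock_chmod.assert_called_once_with('/tmp/foo')

    def test_b(self):
        FakeDriver._change_file_mode('/tmp/bar')
        self.mock_chmod.assert_called_once_with('/tmp/bar')
```

A class-level @mock.patch.object decorator can also work, but then every test method must accept the extra mock argument the decorator injects; the setUp()/addCleanup form avoids that.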



Re: [openstack-dev] [all] gerrit performance

2015-09-24 Thread Jeremy Stanley
On 2015-09-24 12:10:56 + (+), Gary Kotton wrote:
> Anyone else experiencing bad performance with gerrit at the moment?
> Accessing files in a review takes ages. So now the review cycle will
> be months instead of weeks. Thanks

The version we're currently running seems to get into a state where
it's continually garbage collecting within the JVM, system load
spikes up into the teens and performance suffers. So far we've been
restarting Gerrit when it gets itself into this state, which is of
course a terrible non-solution to the problem, but I'll see if I can
find any upstream bug reports and confirm whether this is solved in
the version for which we've been preparing to upgrade.
-- 
Jeremy Stanley



[openstack-dev] [Barbican][Security] Automatic Certificate Management Environment

2015-09-24 Thread Clark, Robert Graham
Hi All,

So I did a bit of tyre kicking with Letsencrypt today; one of the things I
thought was interesting was its adherence to the burgeoning Automatic
Certificate Management Environment (ACME) standard.

https://letsencrypt.github.io/acme-spec/

It’s one of the more readable crypto related standards drafts out there, 
reading it has me wondering how this might be useful for Anchor, or indeed for 
Barbican where things get quite interesting, both at the front end (enabling 
ACME clients to engage with Barbican) or at the back end (enabling Barbican to 
talk to any number of ACME enabled CA endpoints.

I was wondering if there's been any discussion/review here; I'm new to ACME but
I'm not sure if I'm late to the party…

Cheers
-Rob



Re: [openstack-dev] repairing so many OpenStack components writing configuration files in /usr/etc

2015-09-24 Thread Matthew Treinish
On Thu, Sep 24, 2015 at 10:57:48AM -0400, Steve Martinelli wrote:
> 
> just a general FYI - so for pyCADF (and I'm guessing others), it was a very
> subtle error:
> 
> https://github.com/openstack/pycadf/commit/4e70ff2e6204f74767c5cab13f118d72c2594760
> 
> Essentially the entry points in setup.cfg were missing a leading slash.

I would actually view adding the leading slash as a bug in the setup.cfg. You
don't want your package trying to write to /etc when you're installing it inside
a venv or as a user that doesn't have write access to /etc.

Which is exactly why that commit was reverted over a year ago:

https://github.com/openstack/pycadf/commit/39a99398ce79067b1ae98e7273a8b47eb576bb54

-Matt Treinish

> 
> 
> 
> From: Julien Danjou 
> To:   Thomas Goirand 
> Cc:   "openstack-dev@lists.openstack.org"
> 
> Date: 2015/09/24 10:55 AM
> Subject:  Re: [openstack-dev] repairing so many OpenStack components
> writing   configuration files in /usr/etc
> 
> 
> 
> On Thu, Sep 24 2015, Thomas Goirand wrote:
> 
> Hi Thomas,
> 
> > I also wonder where this /usr/etc is coming from. If it was
> > /usr/local/etc, I could somehow get it. But here... ?!?
> 
> Do you have a way to reproduce that, or a backtrace maybe?
> 
> --
> Julien Danjou
> # Free Software hacker
> # http://julien.danjou.info
> [attachment "signature.asc" deleted by Steve Martinelli/Toronto/IBM]








Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?

2015-09-24 Thread Walter A. Boring IV

>> To be honest this is probably my fault, AZs were pulled in as part of
>> the nova-volume migration to Cinder and just sort of died.  Quite
>> frankly I wasn't sure "what" to do with them but brought over the
>> concept and the zones that existed in Nova-Volume.  It's been an issue
>> since day 1 of Cinder, and as you note there are little hacks here and
>> there over the years to do different things.
>>
>> I think your question about whether they should be there at all or not
>> is a good one.  We have had some interest from folks lately that want to
>> couple Nova and Cinder AZ's (I'm really not sure of any details or
>> use-cases here).
>>
>> My opinion would be until somebody proposes a clear use case and need
>> that actually works that we consider deprecating it.
>>
>> While we're on the subject (kinda) I've never been a very fond of having
>> Nova create the volume during boot process either; there's a number of
>> things that go wrong here (timeouts almost guaranteed for a "real"
>> image) and some things that are missing last I looked like type
>> selection etc.
>>
>> We do have a proposal to talk about this at the Summit, so maybe we'll
>> have a descent primer before we get there :)
>>
>> Thanks,
>>
>> John
>>
>>
>
> Heh, so when I just asked in the cinder channel if we can just
> deprecate nova boot from volume with source=(image|snapshot|blank)
> (which automatically creates the volume and polls for it to be
> available) and then add a microversion that doesn't allow it, I was
> half joking, but I see we're on the same page.  This scenario seems to
> introduce a lot of orchestration work that nova shouldn't necessarily
> be in the business of handling.
I tend to agree with this.   I believe the ability to boot from a volume
with source=image was just a convenience and a shortcut for users.
As John stated, we know that we have issues with large images and/or
volumes here with timeouts.  If we want to continue to support this,
then the only way to make sure we don't run into timeout issues is to
look into a callback mechanism from Cinder to Nova, but that seems
awfully heavy-handed just to continue to support Nova orchestrating
this.   The good thing about the Nova and Cinder clients/APIs is that
anyone can write a quick python script to do the orchestration
themselves, if we want to deprecate this.  I'm all for deprecating this.
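As a sketch of that point, the orchestration a deprecation would push out of Nova is small; the client objects below are illustrative stand-ins, not the real novaclient/cinderclient APIs:

```python
import time


def wait_for(check, timeout=300, interval=2):
    """Poll check() until it returns a truthy value or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = check()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("resource did not become available in time")


def boot_from_image(cinder, nova, image_id, size_gb, flavor):
    # 1) Create the volume from the image ourselves...
    volume = cinder.create_volume(size=size_gb, image=image_id)
    # 2) ...wait until it is usable, with a timeout we control...
    wait_for(lambda: cinder.get_status(volume) == 'available')
    # 3) ...then boot the server from the ready volume.
    return nova.boot_server(flavor=flavor, boot_volume=volume)
```

The point is that the poll/timeout policy lives in the caller's script, where it can be tuned for large images, instead of being baked into the Nova boot path.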

Walt




[openstack-dev] Do not modify (or read) ERROR_ON_CLONE in devstack gate jobs

2015-09-24 Thread James E. Blair
Hi,

Recently we noted some projects modifying the ERROR_ON_CLONE environment
variable in devstack gate jobs.  It is never acceptable to do that.  It
is also not acceptable to read its value and alter a program's behavior.

Devstack is used by developers and users to set up a simple OpenStack
environment.  It does this by cloning all of the projects' git repos and
installing them.

It is also used by our CI system to test changes.  Because the logic
regarding what state each of the repositories should be in is
complicated, that is offloaded to Zuul and the devstack-gate project.
They ensure that all of the repositories involved in a change are set up
correctly before devstack runs.  However, they need to be identified in
advance, and to ensure that we don't accidentally miss one, the
ERROR_ON_CLONE variable is checked by devstack and if it is asked to
clone a repository because it does not already exist (i.e., because it
was not set up in advance by devstack-gate), it fails with an error
message.

If you encounter this, simply add the missing project to the $PROJECTS
variable in your job definition.  There is no need to detect whether
your program is being tested and alter its behavior (a practice which I
gather may be popular but is falling out of favor).
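As a sketch (the project name is illustrative), the fix is a one-line change in the job definition rather than any ERROR_ON_CLONE tweaking:

```shell
# Hypothetical job-definition snippet: list the repo in PROJECTS so
# devstack-gate clones and prepares it before devstack runs.
export PROJECTS="openstack/networking-foo $PROJECTS"
```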

-Jim



[openstack-dev] [Ironic] New dhcp provider using isc-dhcp-server

2015-09-24 Thread Ionut Balutoiu
Hello, guys!

I'm starting a new implementation of a dhcp provider,
mainly to be used for standalone Ironic. I'm planning to
push it upstream. I'm using the isc-dhcp-server service from
Linux. So, when an Ironic node is started, the ironic-conductor
writes the MAC-IP reservation for that node into the config file and
reloads the dhcp service. I'm using a SQL database as a backend to store
the dhcp reservations (I think it is cleaner and it should allow us
to have more than one DHCP server). What do you think about my
implementation?
Also, I'm not sure how I can scale this out to provide HA/failover.
Do you guys have any ideas?
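As a sketch of the conductor-side piece (the helper is hypothetical, not an Ironic API), the per-node reservation isc-dhcp-server expects is just a host block:

```python
def render_host_entry(node_name, mac, ip):
    """Render an isc-dhcp-server host reservation for one Ironic node.

    Hypothetical helper for illustration; the conductor would append
    this to dhcpd.conf (or an included file) and then reload/restart
    the service so the reservation takes effect.
    """
    return (
        "host %s {\n"
        "  hardware ethernet %s;\n"
        "  fixed-address %s;\n"
        "}\n" % (node_name, mac, ip)
    )
```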

Regards,
Ionut Balutoiu



Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

2015-09-24 Thread Russell Bryant
On 09/24/2015 10:37 AM, WANG, Ming Hao (Tony T) wrote:
> Russell,
> 
> Thanks for your detailed explanation and kind help!
> I now understand how a container in a VM can acquire network interfaces in
> different neutron networks.
> For the connections between compute nodes, I think I need to study the Geneve
> protocol and VTEP first.
> If I have further questions, I may need to consult you again. :-)

OVN uses Geneve in conceptually the same way as to how the Neutron
reference implementation (ML2+OVS) uses VxLAN to create overlay networks
among the compute nodes for tenant overlay networks.

VTEP gateways or provider networks come into play when you want to
connect these overlay networks to physical, or "underlay" networks.

Hope that helps,

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] New dhcp provider using isc-dhcp-server

2015-09-24 Thread Jay Faulkner
Hi Ionut,

I like the idea -- I think there's only going to be one potential hiccup with 
getting this upstream: the use of an additional external database.

My suggestion is to go ahead and post what you have up to Gerrit -- even if 
there's no spec and it's not ready to merge, everyone will be able to see what 
you're working on. If it's important for you to merge this upstream, I'd 
suggest starting on a spec for Ironic 
(https://wiki.openstack.org/wiki/Ironic/Specs_Process). 

Also as always, feel free to drop by #openstack-ironic on Freenode and chat 
about this as well. It sounds like you have a big use case for Ironic and we'd 
love to have you in the IRC community.

Thanks,
Jay Faulkner


From: Ionut Balutoiu 
Sent: Thursday, September 24, 2015 8:38 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Ironic] New dhcp provider using isc-dhcp-server

Hello, guys!

I'm starting a new implementation for a DHCP provider,
mainly to be used for Ironic standalone. I'm planning to
push it upstream. I'm using the isc-dhcp-server service from
Linux. So, when an Ironic node is started, the ironic-conductor
writes the MAC-IP reservation for that node in the config file and
reloads the DHCP service. I'm using a SQL database as a backend to store
the DHCP reservations (I think it is cleaner and it should allow us
to have more than one DHCP server). What do you think about my
implementation?
Also, I'm not sure how I can scale this out to provide HA/failover.
Do you guys have any idea?

Regards,
Ionut Balutoiu


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?

2015-09-24 Thread Mathieu Gagné
On 2015-09-24 11:53 AM, Walter A. Boring IV wrote:
> The good thing about the Nova and Cinder clients/APIs is that
> anyone can write a quick python script to do the orchestration
> themselves, if we want to deprecate this.  I'm all for deprecating this.

I don't like this kind of reasoning, which can justify almost anything.
It's easy to make those suggestions when you know Python. Please
consider non-technical/non-developers users when suggesting deprecating
features or proposing alternative solutions.

I could also say (in bad faith, I know): why have Heat when you can
write your own Python script. And yet, I don't think we would appreciate
anyone making such a controversial statement.

Our users don't know Python, use 3rd party tools (which don't often
perform/support orchestration) or the Horizon dashboard. They don't want
to have to learn Heat or Python so they can orchestrate volume creation
in place of Nova for a single instance. You don't write CloudFormation
templates on AWS just to boot an instance on volume. That's not the UX I
want to offer to my users.

-- 
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] New dhcp provider using isc-dhcp-server

2015-09-24 Thread Dmitry Tantsur
2015-09-24 17:38 GMT+02:00 Ionut Balutoiu:

> Hello, guys!
>
> I'm starting a new implementation for a DHCP provider,
> mainly to be used for Ironic standalone. I'm planning to
> push it upstream. I'm using the isc-dhcp-server service from
> Linux. So, when an Ironic node is started, the ironic-conductor
> writes the MAC-IP reservation for that node in the config file and
> reloads the DHCP service. I'm using a SQL database as a backend to store
> the DHCP reservations (I think it is cleaner and it should allow us
> to have more than one DHCP server). What do you think about my
> implementation?
>

What you describe slightly resembles how ironic-inspector works. It needs
to serve DHCP to nodes that are NOT known to Ironic, so it manages iptables
rules giving (or not giving) access to the dnsmasq instance. I wonder if we
may find some common code between these two, but I definitely don't want to
reinvent Neutron :) I'll think about it after seeing your spec and/or code,
I'm already looking forward to them!


> Also, I'm not sure how I can scale this out to provide HA/failover.
> Do you guys have any idea?
>
> Regards,
> Ionut Balutoiu
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Dmitry Tantsur
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?

2015-09-24 Thread Tim Bell
> -Original Message-
> From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
> Sent: 24 September 2015 16:59
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?
> 
> 
> 
> On 9/24/2015 9:06 AM, Matt Riedemann wrote:
> >
> >
> > On 9/24/2015 3:19 AM, Sylvain Bauza wrote:
> >>
> >>
> >> Le 24/09/2015 09:04, Duncan Thomas a écrit :
> >>> Hi
> >>>
> >>> I thought I was late on this thread, but looking at the time stamps,
> >>> it is just something that escalated very quickly. I am honestly
> >>> surprised a cross-project interaction option went from 'we don't
> >>> seem to understand this' to 'deprecation merged' in 4 hours, with
> >>> only a 12 hour discussion on the mailing list, right at the end of a
> >>> cycle when we're supposed to be stabilising features.
> >>>
> >>
> >> So, I agree it was maybe a bit too quick hence the revert. That said,
> >> Nova master is now Mitaka, which means that the deprecation change
> >> was provided for the next cycle, not the one currently stabilising.
> >>
> >> Anyway, I'm really all up with discussing why Cinder needs to know
> >> the Nova AZs.
> >>
> >>> I proposed a session at the Tokyo summit for a discussion of Cinder
> >>> AZs, since there was clear confusion about what they are intended
> >>> for and how they should be configured.
> >>
> >> Cool, count me in from the Nova standpoint.
> >>
> >>> Since then I've reached out to, and gotten good feedback from, a
> >>> number of operators. There are two distinct configurations for AZ
> >>> behaviour in cinder, and both sort-of worked until very recently.
> >>>
> >>> 1) No AZs in cinder
> >>> This is the config with a single 'blob' of storage (most of the
> >>> operators who responded so far are using Ceph, though that isn't
> >>> required). The storage takes care of availability concerns, and any
> >>> AZ info from nova should just be ignored.
> >>>
> >>> 2) Cinder AZs map to Nova AZs
> >>> In this case, some combination of storage / networking / etc couples
> >>> storage to nova AZs. It may be that an AZ is used as a unit of
> >>> scaling, or it could be a real storage failure domain. Either way,
> >>> there are a number of operators who have this configuration and want
> >>> to keep it. Storage can certainly have a failure domain, and
> >>> limiting the scalability problem of storage to a single compute AZ
> >>> can have definite advantages in failure scenarios. These people do
> >>> not want cross-az attach.
> >>>
> >>
> >> Ahem, Nova AZs are not failure domains - I mean the current
> >> implementation, in the sense of many people understand what is a
> >> failure domain, ie. a physical unit of machines (a bay, a room, a
> >> floor, a datacenter).
> >> All the AZs in Nova share the same controlplane with the same message
> >> queue and database, which means that one failure can be propagated to
> >> the other AZ.
> >>
> >> To be honest, there is one very specific usecase where AZs *are*
> >> failure domains : when cells exact match with AZs (ie. one AZ
> >> grouping all the hosts behind one cell). That's the very specific
> >> usecase that Sam is mentioning in his email, and I certainly understand
> >> we need to keep that.
> >>
> >> What are AZs in Nova is pretty well explained in a quite old blogpost :
> >> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-
> >> aggregates-in-openstack-compute-nova/
> >>
> >>
> >> We also added a few comments in our developer doc here
> >> http://docs.openstack.org/developer/nova/aggregates.html#availability
> >> -zones-azs
> >>
> >>
> >> tl;dr: AZs are aggregate metadata that makes those aggregates of
> >> compute nodes visible to the users. Nothing more than that, no magic
> sauce.
> >> That's just a logical abstraction that can be mapping your physical
> >> deployment, but like I said, which would share the same bus and DB.
> >> Of course, you could still provide networks distinct between AZs but
> >> that just gives you the L2 isolation, not the real failure domain in
> >> a Business Continuity Plan way.
> >>
> >> What puzzles me is how Cinder manages a datacenter level of
> >> isolation given there is no cells concept AFAIK. I assume that
> >> cinder-volumes belong to a specific datacenter, but how is its
> >> control plane managed? I can certainly understand the need
> >> for affinity placement between physical units, but I'm missing that
> >> piece, and consequently I wonder why Nova needs to provide AZs to
> >> Cinder in the general case.
> >>
> >>
> >>
> >>> My hope at the summit session was to agree these two configurations,
> >>> discuss any scenarios not covered by these two configuration, and
> >>> nail down the changes we need to get these to work properly. There's
> >>> definitely been interest and activity in the operator community in
> >>> making nova and cinder AZs interact, and every desired interaction
> >>> I've gotten details about so far matches one of 

Re: [openstack-dev] [neutron][networking-ovn][vtep] Proposal: support for vtep-gateway in ovn

2015-09-24 Thread Russell Bryant
On 09/24/2015 10:18 AM, Salvatore Orlando wrote:
> One particular issue is that the project implements the ovsdb protocol
> from scratch.  The ovs project provides a Python library for this.  Both
> Neutron and networking-ovn use it, at least.  From some discussion, I've
> gathered that the ovs Python library lacked one feature that was needed,
> but has since been added because we wanted the same thing in
> networking-ovn.
> 
> 
> My take here is that we don't need to use the whole implementation of
> networking-l2gw, but only the APIs and the DB management layer it exposes.
> Networking-l2gw provides a VTEP network gateway solution that, if you
> want, will eventually be part of Neutron's "reference" control plane.
> OVN provides its implementation; I think it should be possible to
> leverage networking-l2gw either by pushing an OVN driver there, or
> implementing the same driver in openstack/networking-ovn.

From a quick look, it seemed like networking-l2gw was doing two things.

  1) Management of vtep switches themselves

  2) Management of connectivity between Neutron networks and VTEP
 gateways

I figured the implementation of #1 would be the same whether you were
using ML2+OVS, OVN, (or whatever else).  This part is not addressed in
OVN.  You point OVN at VTEP gateways, but it's expected you manage the
gateway provisioning some other way.

It's #2 that has a very different implementation.  For OVN, it's just
creating a row in OVN's northbound database.

Or did I misinterpret what networking-l2gw is doing?

> The networking-l2gw route will require some pretty significant work.
> It's still the closest existing effort, so I think we should explore it
> until it's absolutely clear that it *can't* work for what we need.
> 
> 
> I would say that it is definitely not trivial but probably a bit less
> than "significant". abhraut from my team has done something quite
> similar for openstack/vmware-nsx [1]

but specific to nsx.  :(

Does it look like networking-l2gw could be a common API for what's
needed for NSX?

> 
> 
> > OR
> >
> > Should OVN pursue it’s own Neutron extension (including vtep gateway
> > support).
> 
> I don't think this option provides a lot of value over the short term
> binding:profile solution.  Both are OVN specific.  I think I'd rather
> just stick to binding:profile as the OVN specific stopgap because it's a
> *lot* less work.
> 
> 
> I totally agree. The solution based on the binding profile is indeed a
> decent one in my opinion.
> If OVN cannot converge on the extension proposed by networking-l2gw then
> I'd keep using the binding profile for specifying gateway ports.

Great, thanks for the feedback!

> [1] https://review.openstack.org/#/c/210623/

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Congress Usecases VM

2015-09-24 Thread Shiv Haris
Hi Congress folks,

I am looking for ideas. We want OpenStack to be running when the user 
instantiates the Usecase-VM. However, creating an OVA file is possible only when 
the VM is halted, which means OpenStack is not running and the user will have to 
run devstack again (which is time-consuming) when the VM is restarted.

The option is to take a snapshot. It appears that taking a snapshot of the VM 
and using it in another setup is not very straightforward. It involves 
modifying the .vbox file and seems to be prone to user errors. I am 
leaning towards halting the machine and generating an OVA file.

I am looking for suggestions ….

Thanks,

-Shiv


From: Shiv Haris [mailto:sha...@brocade.com]
Sent: Thursday, September 24, 2015 9:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

First of all I apologize for not making it at the meeting yesterday, could not 
cut short another overlapping meeting.

Also, Tim thanks for the feedback. I have addressed some of the issues you 
posed however I am still working on some of the subtle issues raised. Once I 
have addressed all I will post another VM by end of the week.

-Shiv


From: Tim Hinrichs [mailto:t...@styra.com]
Sent: Friday, September 18, 2015 5:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

It's great to have this available!  I think it'll help people understand what's 
going on MUCH more quickly.

Some thoughts.
- The image is 3GB, which took me 30 minutes to download.  Are all VMs this 
big?  I think we should finish this as a VM but then look into doing it with 
containers to make it EVEN easier for people to get started.

[shivharis] Yes, unfortunately that is the case. The disk size I set is 20 GB – 
but the OVA compresses the image and disk to 3 GB. I will be looking at other 
options.


- It gave me an error about a missing shared directory when I started up.
[shivharis] will fix this

- I expected devstack to be running when I launched the VM.  devstack startup 
time is substantial, and if there's a problem, it's good to assume the user 
won't know how to fix it.  Is it possible to have devstack up and running when 
we start the VM?  That said, it started up fine for me.
[shivharis] OVA files can be created only when the VM is halted, so devstack 
will be down when you bring up the VM. I agree a snapshot will be a better 
choice.

- It'd be good to have a README to explain how to use the use-case structure. 
It wasn't obvious to me.
[shivharis] added.

- The top-level dir of the Congress_Usecases folder has a Congress_Usecases 
folder within it.  I assume the inner one shouldn't be there?
[shivharis] my automation issues, fixed.

- When I ran the 10_install_policy.sh, it gave me a bunch of authorization 
problems.
[shivharis] fixed

But otherwise I think the setup looks reasonable.  Will there be an undo script 
so that we can run the use cases one after another without worrying about 
interactions?
[shivharis] tricky, will find some way out.

Tim

[shivharis] Thanks

On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris wrote:
Hi Congress folks,

BTW the login/password for the VM is vagrant/vagrant

-Shiv


From: Shiv Haris [mailto:sha...@brocade.com]
Sent: Thursday, September 17, 2015 5:03 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Congress] Congress Usecases VM

Hi All,

I have put my VM (virtualbox) at:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova

I usually run this on a MacBook Air – but it should work on other platforms as 
well. I chose VirtualBox since it is free.

Please send me your usecases – I can incorporate them in the VM and send you an 
updated image. Please take a look at the structure I have in place for the 
first usecase; I would prefer it to be the same for the other usecases. (However, 
I am still open to suggestions for changes.)

Thanks,

-Shiv

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?

2015-09-24 Thread Sylvain Bauza



Le 24/09/2015 18:16, Mathieu Gagné a écrit :

On 2015-09-24 11:53 AM, Walter A. Boring IV wrote:

The good thing about the Nova and Cinder clients/APIs is that
anyone can write a quick python script to do the orchestration
themselves, if we want to deprecate this.  I'm all for deprecating this.

I don't like this kind of reasoning which can justify close to anything.
It's easy to make those suggestions when you know Python. Please
consider non-technical/non-developers users when suggesting deprecating
features or proposing alternative solutions.

I could also say (in bad faith, I know): why have Heat when you can
write your own Python script. And yet, I don't think we would appreciate
anyone making such a controversial statement.

Our users don't know Python, use 3rd party tools (which don't often
perform/support orchestration) or the Horizon dashboard. They don't want
to have to learn Heat or Python so they can orchestrate volume creation
in place of Nova for a single instance. You don't write CloudFormation
templates on AWS just to boot an instance on volume. That's not the UX I
want to offer to my users.



I'd tend to answer that if it's a user problem, then I would prefer to 
see the orchestration done by a CLI wrapper module in python-novaclient, 
like we have for host-evacuate (for example), and deprecate the REST and 
novaclient APIs. It would still be possible for the users to get the 
orchestration done by the same CLI, but the API would no longer support it.


-Sylvain


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?

2015-09-24 Thread Mathieu Gagné
Hi Matt,

On 2015-09-24 1:45 PM, Matt Riedemann wrote:
> 
> 
> On 9/24/2015 11:50 AM, Mathieu Gagné wrote:
>>
>> May I suggest the following solutions:
>>
>> 1) Add ability to disable this whole AZ concept in Cinder so it doesn't
>> fail to create volumes when Nova asks for a specific AZ. This could
>> result in the same behavior as cinder.cross_az_attach config.
> 
> That's essentially what this does:
> 
> https://review.openstack.org/#/c/217857/
> 
> It defaults to False though so you have to be aware and set it if you're
> hitting this problem.
> 
> The nova block_device code that tries to create the volume and passes
> the nova AZ should have probably been taking into account the
> cinder.cross_az_attach config option, because just blindly passing it
> was the reason why cinder added that option.  There is now a change up
> for review to consider cinder.cross_az_attach in block_device:
> 
> https://review.openstack.org/#/c/225119/
> 
> But that's still making the assumption that we should be passing the AZ
> on the volume create request and will still fail if the AZ isn't in
> cinder (and allow_availability_zone_fallback=False in cinder.conf).
> 
> In talking with Duncan this morning he's going to propose a spec for an
> attempt to clean some of this up and decouple nova from handling this
> logic.  Basically a new Cinder API where you give it an AZ and it tells
> you if that's OK.  We could then use this on the nova side before we
> ever get to the compute node and fail.

IMO, the confusion comes from what I consider a wrong usage of AZ. To
quote Sylvain Bauza from a recent review [1][2]:

"because Nova AZs and Cinder AZs are very different failure domains"

This is not the concept of AZ I learned to know from cloud providers
where an AZ is global to the region, not per-service.

Google Cloud Platform:
- Persistent disks are per-zone resources. [3]
- Resources that are specific to a zone or a region can only be used by
other resources in the same zone or region. For example, disks and
instances are both zonal resources. To attach a disk to an instance,
both resources must be in the same zone. [4]

Amazon Web Services:
- Instances and disks are per-zone resources. [5]

So now we are stuck with AZs not being consistent across services and
confusing people.


[1] https://review.openstack.org/#/c/225119/2
[2] https://review.openstack.org/#/c/225119/2/nova/virt/block_device.py
[3] https://cloud.google.com/compute/docs/disks/persistent-disks
[4] https://cloud.google.com/compute/docs/zones
[5] http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/resources.html

-- 
Mathieu
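For reference, the two knobs discussed in this thread look roughly like this in the respective config files (the groups and option names are taken from the reviews cited above; verify them against your release before relying on them):

```ini
# nova.conf -- set False to disallow attaching a volume
# whose AZ differs from the instance's AZ
[cinder]
cross_az_attach = False

# cinder.conf -- fall back to the default AZ instead of failing
# volume creation when the requested (Nova) AZ does not exist in Cinder
[DEFAULT]
allow_availability_zone_fallback = True
```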

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] PTL & Component Leads elections

2015-09-24 Thread Dmitry Borodaenko
I've updated the policy document to explicitly spell out which
repositories' committers vote for the PTL and for the CLs:

https://review.openstack.org/#/c/225376/3..4/policy/team-structure.rst

This policy document is going to become the primary source of truth on
our governance process, I encourage all Fuel contributors, especially
core reviewers, to read it carefully, provide comments, and vote. So far
only Mike and Alexey have done that.

-- 
Dmitry Borodaenko
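The electorate one-liner from the guidelines, `git log --pretty=%aE --since '1 year ago' | sort -u`, just collects committer emails and deduplicates them; the `sort -u` step, sketched in Python with illustrative sample data:

```python
def electorate(author_emails):
    # Equivalent of piping `git log --pretty=%aE` through `sort -u`:
    # the unique, sorted list of committer emails for the window.
    return sorted(set(author_emails))


sample_log = ["alice@example.org", "bob@example.org", "alice@example.org"]
print(electorate(sample_log))  # ['alice@example.org', 'bob@example.org']
```

Running it against the repositories in question, per position, is exactly the open point Vladimir raises below.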

On Thu, Sep 24, 2015 at 01:17:58PM +0300, Vladimir Kuklin wrote:
> Dmitry
> 
> Thank you for the clarification, but my questions still remain unanswered,
> unfortunately. It seems I did not phrase them correctly.
> 
> 1) For each of the positions, which set of git repositories should I run
> this command against? E.g., contributors to which stackforge/fuel-* projects
> are electing the PTL or CLs?
> 2) Who is voting for component leads? Mike's email says these are core
> reviewers. Our previous IRC meeting mentioned all the contributors to
> particular components. Documentation link you sent is mentioning all
> contributors to Fuel projects. Whom should I trust? What is the final
> version? Is it fine that a documentation contributor is eligible to nominate
> himself and vote for the Library Component Lead?
> 
> Until there is a clear and sealed answer to these questions we do not have
> a list of people who can vote and who can nominate. Let's get it clear at
> least before PTL elections start.
> 
> On Thu, Sep 24, 2015 at 4:49 AM, Dmitry Borodaenko wrote:
> 
> > Vladimir,
> >
> > Sergey's initial email from this thread has a link to the Fuel elections
> > wiki page that describes the exact procedure to determine the electorate
> > and the candidates [0]:
> >
> > The electorate for a given PTL and Component Leads election are the
> > Foundation individual members that are also committers for one of
> > the Fuel team's repositories over the last year timeframe (September
> > 18, 2014 06:00 UTC to September 18, 2015 05:59 UTC).
> >
> > ...
> >
> > Any member of an election electorate can propose their candidacy for
> > the same election.
> >
> > [0] https://wiki.openstack.org/wiki/Fuel/Elections_Fall_2015#Electorate
> >
> > If you follow more links from that page, you will find the Governance
> > page [1] and from there the Election Officiating Guidelines [2] that
> > provide a specific shell one-liner to generate that list:
> >
> > git log --pretty=%aE --since '1 year ago' | sort -u
> >
> > [1] https://wiki.openstack.org/wiki/Governance
> > [2] https://wiki.openstack.org/wiki/Election_Officiating_Guidelines
> >
> > As I have specified in the proposed Team Structure policy document [3],
> > this is the same process that is used by other OpenStack projects.
> >
> > [3] https://review.openstack.org/225376
> >
> > Having a different release schedule is not a sufficient reason for Fuel
> > to reinvent the wheel, for example OpenStack Infrastructure project
> > doesn't even have a release schedule for many of its deliverables, and
> > still follows the same elections schedule as the rest of OpenStack:
> >
> > [4] http://governance.openstack.org/reference/projects/infrastructure.html
> >
> > Let's keep things simple.
> >
> > --
> > Dmitry Borodaenko
> >
> >
> > On Wed, Sep 23, 2015 at 01:27:07PM +0300, Vladimir Kuklin wrote:
> > > Dmitry, Mike
> > >
> > > Thank you for the list of usable links.
> > >
> > > But still - we do not have a clearly defined procedure for determining
> > > who is eligible to nominate and vote for PTL and Component Leads.
> > > Remember that Fuel still has a different release cycle, and the
> > > Kilo+Liberty contributors list is not exactly the same as the
> > > "365 days" contributors list.
> > >
> > > Can we finally come up with the list of people eligible to nominate and
> > > vote?
> > >
> > > On Sun, Sep 20, 2015 at 2:37 AM, Mike Scherbakov <
> > mscherba...@mirantis.com>
> > > wrote:
> > >
> > > > Let's move on.
> > > > I started work on MAINTAINERS files, proposed two patches:
> > > > https://review.openstack.org/#/c/225457/1
> > > > https://review.openstack.org/#/c/225458/1
> > > >
> > > > These can be used as templates for other repos / folders.
> > > >
> > > > Thanks,
> > > >
> > > > On Fri, Sep 18, 2015 at 7:45 PM Davanum Srinivas 
> > > > wrote:
> > > >
> > > >> +1 Dmitry
> > > >>
> > > >> -- Dims
> > > >>
> > > >> On Fri, Sep 18, 2015 at 9:07 PM, Dmitry Borodaenko <
> > > >> dborodae...@mirantis.com> wrote:
> > > >>
> > > >>> Dims,
> > > >>>
> > > >>> Thanks for the reminder!
> > > >>>
> > > >>> I've summarized the uncontroversial parts of that thread in a policy
> > > >>> proposal as per you suggestion [0], please review and comment. I've
> > > >>> renamed SMEs to maintainers since Mike has agreed with that part,
> > and I
> > > >>> omitted code review SLAs from the policy since that's the part that
> > has
> > > >>> generated the most discussion.
> > > >>>
> > > >>> 

Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?

2015-09-24 Thread Andrew Laski

On 09/24/15 at 12:16pm, Mathieu Gagné wrote:

On 2015-09-24 11:53 AM, Walter A. Boring IV wrote:

The good thing about the Nova and Cinder clients/APIs is that
anyone can write a quick python script to do the orchestration
themselves, if we want to deprecate this.  I'm all for deprecating this.


I don't like this kind of reasoning which can justify close to anything.
It's easy to make those suggestions when you know Python. Please
consider non-technical/non-developers users when suggesting deprecating
features or proposing alternative solutions.

I could also say (in bad faith, I know): why have Heat when you can
write your own Python script. And yet, I don't think we would appreciate
anyone making such a controversial statement.

Our users don't know Python, use 3rd party tools (which don't often
perform/support orchestration) or the Horizon dashboard. They don't want
to have to learn Heat or Python so they can orchestrate volume creation
in place of Nova for a single instance. You don't write CloudFormation
templates on AWS just to boot an instance on volume. That's not the UX I
want to offer to my users.


The issues that I've seen with having this happen in Nova are that there 
are many different ways for this process to fail and the user is 
provided no control or visibility.


As an example we have some images that should convert to volumes quickly 
so failure would be defined as taking longer than x amount of time, but 
for another set of images that are expected to take longer, failure would 
be 3x that amount of time.  Nova shouldn't be the place to decide how long 
volume creation should take, and I wouldn't expect to ask users to pass 
this in during an API request.


When volume creation does take a decent amount of time there is no 
indication of progress in the Nova API.  When monitoring it via the 
Cinder API you can get a rough approximation of progress.  I don't 
expect Nova to expose volume creation progress as part of the feedback 
during an instance boot request.


At the moment the volume creation request happens from the computes 
themselves.  This means that a failure presents itself as a build 
failure, leading to a reschedule, and ultimately the user is given a 
NoValidHost.  This is unhelpful, and as an operator, tracking down the 
root cause is time-consuming.


When there is a failure to build an instance while Cinder is creating a 
volume it's possible to end up with the volume left around while the 
instance is deleted.  This is not at all made visible to users in the 
Nova API unless they query the list of volumes and see one they don't 
expect, though it's often immediately clear in the DELETE request sent 
to Cinder.


In short, it ends up being much nicer for users to control the process 
themselves.  Alternatively it would be nice if there was an 
orchestration system that could handle it for them.  But Nova is not 
designed to do that very well.
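To make that concrete, the quick user-side script mentioned earlier in the thread boils down to "create the volume, poll until it's available, then boot from it" — with the user, not Nova, choosing the timeout per image. A sketch of the polling step (all names are illustrative, not any client's real API):

```python
import time


def wait_for_status(get_status, wanted="available",
                    timeout=300, interval=2, sleep=time.sleep):
    # Poll a resource (e.g. a Cinder volume) until it reaches the wanted
    # status; the caller picks the timeout per image/workload.
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if status == wanted:
            return status
        if status == "error":
            raise RuntimeError("resource went to error state")
        sleep(interval)
    raise RuntimeError("resource not %r after %s seconds" % (wanted, timeout))
```

Here `get_status` would wrap a volume-show call; on success the script would go on to boot the instance from the volume, with full visibility into where and why a failure happened.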





--
Mathieu



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Stumped...need help with neutronclient job failure

2015-09-24 Thread Matthew Treinish
On Thu, Sep 24, 2015 at 10:52:45AM -0700, Kevin Benton wrote:
> Can you look to see what process tempest_lib is trying to execute?
> 
> On Wed, Sep 23, 2015 at 4:02 AM, Paul Michali  wrote:
> 
> > Hi,
> >
> > I created a pair of experimental jobs for python-neutronclient that will
> > run functional tests on core and advanced services, respectively. In the
> > python-neutronclient repo, I have a commit [1] that splits the tests into
> > two directories for core/adv-svcs, enables the VPN devstack plugin for the
> > advanced services tests, and removes the skip decorator for the VPN tests.
> >
> > When these two jobs run, the core job pass (as expected). The advanced
> > services job shows all four advanced services tests (testing REST LIST
> > requests for IKE policy, IPSec policy, IPSec site-to-site connection, and
> > VPN service resources) failing, with this T/B:
> >
> > ft1.1: 
> > neutronclient.tests.functional.adv-svcs.test_readonly_neutron_vpn.SimpleReadOnlyNeutronVpnClientTest.test_neutron_vpn_*ipsecpolicy_list*_StringException:
> >  Empty attachments:
> >   pythonlogging:''
> >   stderr
> >   stdout
> >
> > Traceback (most recent call last):
> >   File 
> > "neutronclient/tests/functional/adv-svcs/test_readonly_neutron_vpn.py", 
> > line 37, in test_neutron_vpn_ipsecpolicy_list
> > ipsecpolicy = self.parser.listing(self.neutron('vpn-ipsecpolicy-list'))
> >   File "neutronclient/tests/functional/base.py", line 78, in neutron
> > **kwargs)
> >   File 
> > "/opt/stack/new/python-neutronclient/.tox/functional-adv-svcs/local/lib/python2.7/site-packages/tempest_lib/cli/base.py",
> >  line 292, in neutron
> > 'neutron', action, flags, params, fail_ok, merge_stderr)
> >   File 
> > "/opt/stack/new/python-neutronclient/.tox/functional-adv-svcs/local/lib/python2.7/site-packages/tempest_lib/cli/base.py",
> >  line 361, in cmd_with_auth
> > self.cli_dir)
> >   File 
> > "/opt/stack/new/python-neutronclient/.tox/functional-adv-svcs/local/lib/python2.7/site-packages/tempest_lib/cli/base.py",
> >  line 61, in execute
> > proc = subprocess.Popen(cmd, stdout=stdout, stderr=stderr)
> >   File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
> > errread, errwrite)
> >   File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
> > raise child_exception
> > OSError: [Errno 2] No such file or directory

So, taking a blind guess without actually looking at anything besides this email,
my thinking is that you aren't installing neutronclient in /usr/bin in that job.
Either it's being installed in the tox venv only, or it's going into /usr/local/bin or
something like that. There is a parameter to give tempest-lib the bin dir where
the CLI commands live. You need to make sure that's set to wherever you're
installing the CLI commands.
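A small sketch of that failure mode and the fix. The `run_cli` helper and its error message below are my own illustration, not tempest-lib's actual API: the point is that resolving the binary explicitly (optionally restricted to a given bin dir) turns the bare `OSError: [Errno 2] No such file or directory` into an actionable error.

```python
import shutil
import subprocess


def run_cli(cmd, args, cli_dir=None):
    """Run a CLI command, resolving the binary first so a missing or
    misplaced install fails with a clear message instead of a bare
    "OSError: [Errno 2]" out of subprocess.Popen."""
    # With cli_dir set, look only in that directory; otherwise search PATH.
    binary = shutil.which(cmd, path=cli_dir) if cli_dir else shutil.which(cmd)
    if binary is None:
        raise RuntimeError(
            "%r not found (cli_dir=%r); is the client installed where "
            "the job expects it?" % (cmd, cli_dir))
    return subprocess.run([binary] + list(args),
                          capture_output=True, text=True)
```

The same principle applies to the job above: pointing tempest-lib's bin-dir parameter at wherever the job actually installs `neutron` (the tox venv's bin dir, /usr/local/bin, ...) should either make the failure disappear or make its cause obvious.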

> >
> >
> > When I look at the other logs on this run [2], I see these things:
> > - The VPN agent is running (so the DevStack plugin started up VPN)
> > - screen-q-svc.log shows only two of the four REST GET requests
> > - Initially there was no testr results, but I modified post test hook
> > script similar to what Neutron does (so it shows results now)
> > - No other errors seen, including nothing on the StringException
> >
> > When I run this locally, all four tests pass, and I see four REST requests
> > in the screen-q-svc.log.
> >
> > I tried a hack to enable NEUTRONCLIENT_DEBUG environment variable, but no
> > additional information was shown.
> >
> > Does anyone have any thoughts on what may be going wrong here?
> > Any ideas on how to troubleshoot this issue?
> >
> > Thanks in advance!
> >
> > Paul Michali (pc_m)
> >
> > Refs
> > [1] https://review.openstack.org/#/c/214587/
> > [2]
> > http://logs.openstack.org/87/214587/8/experimental/gate-neutronclient-test-dsvm-functional-adv-svcs/5dfa152/
> >




Re: [openstack-dev] [neutron][networking-ovn][vtep] Proposal: support for vtep-gateway in ovn

2015-09-24 Thread Armando M.
On 24 September 2015 at 09:12, Russell Bryant  wrote:

> On 09/24/2015 10:18 AM, Salvatore Orlando wrote:
> > One particular issue is that the project implements the ovsdb
> protocol
> > from scratch.  The ovs project provides a Python library for this.
> Both
> > Neutron and networking-ovn use it, at least.  From some discussion,
> I've
> > gathered that the ovs Python library lacked one feature that was
> needed,
> > but has since been added because we wanted the same thing in
> > networking-ovn.
> >
> >
> > My take here is that we don't need to use the whole implementation of
> > networking-l2gw, but only the APIs and the DB management layer it
> exposes.
> > Networking-l2gw provides a VTEP network gateway solution that, if you
> > want, will eventually be part of Neutron's "reference" control plane.
> > OVN provides its implementation; I think it should be possible to
> > leverage networking-l2gw either by pushing an OVN driver there, or
> > implementing the same driver in openstack/networking-ovn.
>
> From a quick look, it seemed like networking-l2gw was doing 2 things.
>
>   1) Management of vtep switches themselves
>
>   2) Management of connectivity between Neutron networks and VTEP
>  gateways
>
> I figured the implementation of #1 would be the same whether you were
> using ML2+OVS, OVN, (or whatever else).  This part is not addressed in
> OVN.  You point OVN at VTEP gateways, but it's expected you manage the
> gateway provisioning some other way.
>
> It's #2 that has a very different implementation.  For OVN, it's just
> creating a row in OVN's northbound database.
>
> or did I mis-interpret what networking-l2gw is doing?
>

No, you did not misinterpret what the objectives of the project were (which
I restate here):

* Provide an API to OpenStack admins to extend neutron logical networks
into unmanaged pre-existing VLANs. Bear in mind that things like address
collision prevention are left in the hands of the operator. Other aspects,
like L2/L3 interoperability, should instead be taken care of, at least from
an implementation point of view.

* Provide a pluggable framework for multiple drivers of the API.

* Provide a PoC implementation on top of the ovsdb vtep schema. This can
be implemented both in hardware (ToR switches) and software (software L2
gateways).



>
> > The networking-l2gw route will require some pretty significant work.
> > It's still the closest existing effort, so I think we should explore
> it
> > until it's absolutely clear that it *can't* work for what we need.
>

We may have fallen short of some/all expectations, but I would like to
believe that it is nothing that can't be fixed by iterating, especially
if active project participation increases.

I don't think there's a procedural mandate to make OVN abide by the l2gw
proposed API. As you said, it is not a clear, well-accepted API, but that's
only because we live in a brand new world, where people should be allowed
to experiment and reconcile later as community forces play out.

That said, should the conclusion that "it (the API) *can't* work for what
OVN needs" be reached, I would like to understand/document why, for the sake
of all of us involved, so that lessons can be drawn from our mistakes.

>
> >
> > I would say that it is definitely not trivial but probably a bit less
> > than "significant". abhraut from my team has done something quite
> > similar for openstack/vmware-nsx [1]
>
> but specific to nsx.  :(
>
> Does it look like networking-l2gw could be a common API for what's
> needed for NSX?
>
> >
> >
> > > OR
> > >
> > > Should OVN pursue it’s own Neutron extension (including vtep
> gateway
> > > support).
> >
> > I don't think this option provides a lot of value over the short term
> > binding:profile solution.  Both are OVN specific.  I think I'd rather
> > just stick to binding:profile as the OVN specific stopgap because
> it's a
> > *lot* less work.
> >
> >
> > I totally agree. The solution based on the binding profile is indeed a
> > decent one in my opinion.
> > If OVN cannot converge on the extension proposed by networking-l2gw then
> > I'd keep using the binding profile for specifying gateway ports.
>
> Great, thanks for the feedback!
>
> > [1] https://review.openstack.org/#/c/210623/
>
> --
> Russell Bryant
>


Re: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core

2015-09-24 Thread Joshua Harlow

+1 from me, welcome aboard.

Please tar the deck and clean up the rigging, thanks :-P

ozamiatin wrote:

+1 from me

9/24/15 20:12, Doug Hellmann пишет:

Oslo team,

I am nominating Brant Knudson for Oslo core.

As liaison from the Keystone team Brant has participated in meetings,
summit sessions, and other discussions at a level higher than some
of our own core team members. He is already core on oslo.policy
and oslo.cache, and given his track record I am confident that he would
make a good addition to the team.

Please indicate your opinion by responding with +1/-1 as usual.

Doug








Re: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core

2015-09-24 Thread Steve Martinelli

Though I'm not Oslo Core, big +1 from me, Brant is a great benefit to any
project.

Thanks,

Steve Martinelli
OpenStack Keystone Core



From:   Doug Hellmann 
To: openstack-dev 
Date:   2015/09/24 01:13 PM
Subject:[openstack-dev] [oslo] nominating Brant Knudson for Oslo core



Oslo team,

I am nominating Brant Knudson for Oslo core.

As liaison from the Keystone team Brant has participated in meetings,
summit sessions, and other discussions at a level higher than some
of our own core team members.  He is already core on oslo.policy
and oslo.cache, and given his track record I am confident that he would
make a good addition to the team.

Please indicate your opinion by responding with +1/-1 as usual.

Doug



Re: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core

2015-09-24 Thread ozamiatin

+1 from me

9/24/15 20:12, Doug Hellmann пишет:

Oslo team,

I am nominating Brant Knudson for Oslo core.

As liaison from the Keystone team Brant has participated in meetings,
summit sessions, and other discussions at a level higher than some
of our own core team members.  He is already core on oslo.policy
and oslo.cache, and given his track record I am confident that he would
make a good addition to the team.

Please indicate your opinion by responding with +1/-1 as usual.

Doug



Re: [openstack-dev] [neutron] Stumped...need help with neutronclient job failure

2015-09-24 Thread Kevin Benton
Can you look to see what process tempest_lib is trying to execute?

On Wed, Sep 23, 2015 at 4:02 AM, Paul Michali  wrote:

> Hi,
>
> I created a pair of experimental jobs for python-neutronclient that will
> run functional tests on core and advanced services, respectively. In the
> python-neutronclient repo, I have a commit [1] that splits the tests into
> two directories for core/adv-svcs, enables the VPN devstack plugin for the
> advanced services tests, and removes the skip decorator for the VPN tests.
>
> When these two jobs run, the core job passes (as expected). The advanced
> services job shows all four advanced services tests (testing REST LIST
> requests for IKE policy, IPSec policy, IPSec site-to-site connection, and
> VPN service resources) failing, with this T/B:
>
> ft1.1: 
> neutronclient.tests.functional.adv-svcs.test_readonly_neutron_vpn.SimpleReadOnlyNeutronVpnClientTest.test_neutron_vpn_*ipsecpolicy_list*_StringException:
>  Empty attachments:
>   pythonlogging:''
>   stderr
>   stdout
>
> Traceback (most recent call last):
>   File 
> "neutronclient/tests/functional/adv-svcs/test_readonly_neutron_vpn.py", line 
> 37, in test_neutron_vpn_ipsecpolicy_list
> ipsecpolicy = self.parser.listing(self.neutron('vpn-ipsecpolicy-list'))
>   File "neutronclient/tests/functional/base.py", line 78, in neutron
> **kwargs)
>   File 
> "/opt/stack/new/python-neutronclient/.tox/functional-adv-svcs/local/lib/python2.7/site-packages/tempest_lib/cli/base.py",
>  line 292, in neutron
> 'neutron', action, flags, params, fail_ok, merge_stderr)
>   File 
> "/opt/stack/new/python-neutronclient/.tox/functional-adv-svcs/local/lib/python2.7/site-packages/tempest_lib/cli/base.py",
>  line 361, in cmd_with_auth
> self.cli_dir)
>   File 
> "/opt/stack/new/python-neutronclient/.tox/functional-adv-svcs/local/lib/python2.7/site-packages/tempest_lib/cli/base.py",
>  line 61, in execute
> proc = subprocess.Popen(cmd, stdout=stdout, stderr=stderr)
>   File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
> errread, errwrite)
>   File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
> raise child_exception
> OSError: [Errno 2] No such file or directory
>
>
> When I look at the other logs on this run [2], I see these things:
> - The VPN agent is running (so the DevStack plugin started up VPN)
> - screen-q-svc.log shows only two of the four REST GET requests
> - Initially there was no testr results, but I modified post test hook
> script similar to what Neutron does (so it shows results now)
> - No other errors seen, including nothing on the StringException
>
> When I run this locally, all four tests pass, and I see four REST requests
> in the screen-q-svc.log.
>
> I tried a hack to enable NEUTRONCLIENT_DEBUG environment variable, but no
> additional information was shown.
>
> Does anyone have any thoughts on what may be going wrong here?
> Any ideas on how to troubleshoot this issue?
>
> Thanks in advance!
>
> Paul Michali (pc_m)
>
> Refs
> [1] https://review.openstack.org/#/c/214587/
> [2]
> http://logs.openstack.org/87/214587/8/experimental/gate-neutronclient-test-dsvm-functional-adv-svcs/5dfa152/
>
>
>


-- 
Kevin Benton


Re: [openstack-dev] [cinder] should we use fsync when writing iscsi config file?

2015-09-24 Thread Chris Friesen

On 09/24/2015 10:54 AM, Chris Friesen wrote:


I took another look at the code and realized that the file *should* get rebuilt
on restart after a power outage--if the file already exists it will print a
warning message in the logs but it should still overwrite the contents of the
file with the desired contents.  However, that didn't happen in my case.

That left me confused about how I ever ended up with an empty persistence
file. I went back to my logs and found this:

File "./usr/lib64/python2.7/site-packages/cinder/volume/manager.py", line 334,
in init_host
File "/usr/lib64/python2.7/site-packages/osprofiler/profiler.py", line 105, in
wrapper
File "./usr/lib64/python2.7/site-packages/cinder/volume/drivers/lvm.py", line
603, in ensure_export
File "./usr/lib64/python2.7/site-packages/cinder/volume/targets/iscsi.py", line
296, in ensure_export
File "./usr/lib64/python2.7/site-packages/cinder/volume/targets/tgt.py", line
185, in create_iscsi_target
TypeError: not enough arguments for format string


So it seems like we might have a bug in the handling of an empty file.
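For what it's worth, that exact TypeError is what Python's %-formatting raises when it receives fewer values than the format string expects. The snippet below is only a minimal reproduction of the error class — the empty tuple stands in for whatever an empty persistence file parses to; the actual format string and arguments in cinder's tgt.py are not shown in this thread.

```python
def render(template, values):
    # Apply old-style %-formatting; with too few values this raises
    # "TypeError: not enough arguments for format string".
    return template % values


try:
    render("tgt-admin --update %s", ())  # empty tuple: nothing parsed from the file
except TypeError as exc:
    print(exc)  # not enough arguments for format string
```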


And I think I know how we got the empty file in the first place, and it wasn't 
the original file creation but rather the file re-creation.


I have logs from shortly before the above logs showing cinder-volume receiving a 
SIGTERM while it was processing the volume in question:



2015-09-21 19:23:59.123 12429 WARNING cinder.volume.targets.tgt 
[req-7d092503-198a-4f59-97e9-d4d520d38379 - - - - -] Persistence file already 
exists for volume, found file at: 
/opt/cgcs/cinder/data/volumes/volume-76c5f285-a15e-474e-b59e-fd609a624090
2015-09-21 19:24:01.252 12429 WARNING cinder.volume.targets.tgt 
[req-7d092503-198a-4f59-97e9-d4d520d38379 - - - - -] Persistence file already 
exists for volume, found file at: 
/opt/cgcs/cinder/data/volumes/volume-993c94b2-e256-4baf-ab55-805a8e28f547
2015-09-21 19:24:01.951 8201 INFO cinder.openstack.common.service 
[req-904f88a8-8e6f-425e-8df7-5cbb9baae0c5 - - - - -] Caught SIGTERM, stopping 
children



I think what happened is that we took the SIGTERM after the open() call in 
create_iscsi_target(), but before writing anything to the file.


f = open(volume_path, 'w+')
f.write(volume_conf)
f.close()

The 'w+' causes the file to be immediately truncated on opening, leading to an 
empty file.


To work around this, I think we need to do the classic "write to a temporary 
file and then rename it to the desired filename" trick.  The atomicity of the 
rename ensures that either the old contents or the new contents are present.
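A minimal sketch of that trick (the helper name is mine, not cinder's): write to a temporary file in the same directory — rename is only atomic within one filesystem — fsync it, then rename it over the target.

```python
import os
import tempfile


def atomic_write(path, data):
    """Replace `path` with `data` so that readers (and a process killed
    mid-write) see either the old contents or the new contents, never a
    truncated file."""
    dirname = os.path.dirname(path) or "."
    # The temp file must live on the same filesystem as the target,
    # otherwise the final rename degrades to a copy and is not atomic.
    fd, tmp = tempfile.mkstemp(dir=dirname, prefix=".tmp-")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # push the data to disk before the rename
        os.replace(tmp, path)     # atomic rename on POSIX filesystems
    except Exception:
        os.unlink(tmp)
        raise
```

With this, a SIGTERM taken between `mkstemp` and the rename leaves at worst a stale temp file behind — not an empty persistence file.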


Chris



Re: [openstack-dev] [puppet] use zuul-cloner when running rspec

2015-09-24 Thread Emilien Macchi


On 09/24/2015 02:19 PM, Alex Schultz wrote:
> On Thu, Sep 24, 2015 at 11:54 AM, Emilien Macchi  wrote:
>>
>>
>> On 09/24/2015 10:14 AM, Alex Schultz wrote:
>>> On Wed, Sep 23, 2015 at 4:56 PM, Emilien Macchi  wrote:
 Background
 ==

 Current rspec tests are tested with modules mentioned in .fixtures.yaml
 file of each module.

 * the file is not consistent across all modules
 * it hardcodes module names & versions
 * this way does not allow to use "Depend-On" feature, that would allow
 to test cross-modules patches

 Proposal
 

 * Like we do in beaker & integration jobs, use zuul-cloner to clone
 modules in our CI jobs.
 * Use r10k to prepare fixtures modules.
 * Use Puppetfile hosted by openstack/puppet-openstack-integration

 In that way:
 * we will have modules name + versions testing consistency across all
 modules
 * the same Puppetfile would be used by unit/beaker/integration testing.
 * the patch that pass tests on your laptop would pass tests in upstream CI
 * if you don't have zuul-cloner on your laptop, don't worry it will use
 git clone. Though you won't have Depends-On feature working on your
 laptop (technically not possible).
 * Though your patch will support Depends-On in OpenStack Infra for unit
 tests. If you submit a patch in puppet-openstacklib that drop something
 wrong, you can send a patch in puppet-nova that will test it, and unit
 tests will fail.

 Drawbacks
 =
 * cloning from .fixtures.yaml takes ~ 10 seconds
 * using r10k + zuul-cloner takes ~50 seconds (more modules to clone).

 I think 40 seconds is an acceptable cost, given the benefit.

>>>
>>> As someone who consumes these modules downstream and has our own CI
>>> setup to run the rspec items, this ties it too closely to the
>>> openstack infrastructure. If we replace the .fixtures.yml with
>>> zuul-cloner, it assumes I always want the openstack version of the
>>> modules. This is not necessarily true. I like being able to replace
>>> items within fixtures.yml when doing dev work. For example If i want
>>> to test upgrading another module not related to openstack, like
>>> inifile, how does that work with the proposed solution?  This is also
>>> moving away from general puppet module conventions for testing. My
>>> preference would be that this be a different task and we have both
>>> .fixtures.yml (for general use/development) and the zuul method of
>>> cloning (for CI).  You have to also think about this from a consumer
>>> standpoint and this is adding an external dependency on the OpenStack
>>> infrastructure for anyone trying to run rspec or trying to consume the
>>> published versions from the forge.  Would I be able to run these tests
>>> in an offline mode with this change? With the .fixures.yml it's a
>>> minor edit to switch to local versions. Is the same true for the
>>> zuul-cloner version?
>>
>> What you did before:
>> * Edit .fixtures.yaml and put the version you like.
>>
>> What you would do with the current proposal:
>> * Edit openstack/puppet-openstack-integration/Puppetfile and put the
>> version you like.
>>
> 
> So I have to edit a file in another module to test changes in
> puppet-neutron, puppet-nova, etc? With the zuul-cloner version, for
> local testing what does that workflow look like?

If you need to test your code with cross-project dependencies, having
current .fixtures.yaml or the proposal won't change anything regarding
that; you'll still have to trick the YAML file that defines the module
names/versions.

> 
>> What you're suggesting has a huge downside:
>> People will still use fixtures by default and not test what is actually
>> tested by our CI.
>> A few people will know about the specific Rake task so a few people will
>> test exactly what upstream does. That will cause frustration for most
>> people, who will see tests failing in our CI and not on their laptops.
>> I'm not sure we want that.
> 
> You're right that the specific rake task may not be ideal. But that
> was one option, another option could be use fixtures first then
> replace with zuul-cloner provided versions but provide me the ability
> to turn of the zuul cloner part? I'm just saying as it is today, this
> change adds more complexity and hard ties into the OpenStack
> infrastructure with non-trival work arounds. I would love to solve the
> Depends-On issue, but I don't think that should include a deviation
> from generally accepted testing practices of puppet modules.

I agree it's not best practice in Puppet, but I don't see that as a huge
blocker. Our Puppet modules are approved by Puppetlabs and respect most
best practices AFAIK. Is that fixtures thing a big deal?
I would like to hear from *cough*Hunner/Cody*cough* Puppetlabs about that.
Another proposal is welcome though, please go ahead.

>>

Re: [openstack-dev] [puppet] use zuul-cloner when running rspec

2015-09-24 Thread Alex Schultz
On Thu, Sep 24, 2015 at 1:58 PM, Emilien Macchi  wrote:
>
>
> On 09/24/2015 02:19 PM, Alex Schultz wrote:
>> On Thu, Sep 24, 2015 at 11:54 AM, Emilien Macchi  wrote:
>>>
>>>
>>> On 09/24/2015 10:14 AM, Alex Schultz wrote:
 On Wed, Sep 23, 2015 at 4:56 PM, Emilien Macchi  wrote:
> Background
> ==
>
> Current rspec tests are tested with modules mentioned in .fixtures.yaml
> file of each module.
>
> * the file is not consistent across all modules
> * it hardcodes module names & versions
> * this way does not allow to use "Depend-On" feature, that would allow
> to test cross-modules patches
>
> Proposal
> 
>
> * Like we do in beaker & integration jobs, use zuul-cloner to clone
> modules in our CI jobs.
> * Use r10k to prepare fixtures modules.
> * Use Puppetfile hosted by openstack/puppet-openstack-integration
>
> In that way:
> * we will have modules name + versions testing consistency across all
> modules
> * the same Puppetfile would be used by unit/beaker/integration testing.
> * the patch that pass tests on your laptop would pass tests in upstream CI
> * if you don't have zuul-cloner on your laptop, don't worry it will use
> git clone. Though you won't have Depends-On feature working on your
> laptop (technically not possible).
> * Though your patch will support Depends-On in OpenStack Infra for unit
> tests. If you submit a patch in puppet-openstacklib that drop something
> wrong, you can send a patch in puppet-nova that will test it, and unit
> tests will fail.
>
> Drawbacks
> =
> * cloning from .fixtures.yaml takes ~ 10 seconds
> * using r10k + zuul-cloner takes ~50 seconds (more modules to clone).
>
> I think 40 seconds is an acceptable cost, given the benefit.
>

 As someone who consumes these modules downstream and has our own CI
 setup to run the rspec items, this ties it too closely to the
 openstack infrastructure. If we replace the .fixtures.yml with
 zuul-cloner, it assumes I always want the openstack version of the
 modules. This is not necessarily true. I like being able to replace
 items within fixtures.yml when doing dev work. For example If i want
 to test upgrading another module not related to openstack, like
 inifile, how does that work with the proposed solution?  This is also
 moving away from general puppet module conventions for testing. My
 preference would be that this be a different task and we have both
 .fixtures.yml (for general use/development) and the zuul method of
 cloning (for CI).  You have to also think about this from a consumer
 standpoint and this is adding an external dependency on the OpenStack
 infrastructure for anyone trying to run rspec or trying to consume the
 published versions from the forge.  Would I be able to run these tests
 in an offline mode with this change? With the .fixures.yml it's a
 minor edit to switch to local versions. Is the same true for the
 zuul-cloner version?
>>>
>>> What you did before:
>>> * Edit .fixtures.yaml and put the version you like.
>>>
>>> What you would do with the current proposal:
>>> * Edit openstack/puppet-openstack-integration/Puppetfile and put the
>>> version you like.
>>>
>>
>> So I have to edit a file in another module to test changes in
>> puppet-neutron, puppet-nova, etc? With the zuul-cloner version, for
>> local testing what does that workflow look like?
>
> If you need to test your code with cross-project dependencies, having
> current .fixtures.yaml or the proposal won't change anything regarding
> that; you'll still have to trick the YAML file that defines the module
> names/versions.
>
>>
>>> What you're suggesting has a huge downside:
>>> People will still use fixtures by default and not test what is actually
>>> tested by our CI.
>>> A few people will know about the specific Rake task so a few people will
>>> test exactly what upstream does. That will cause frustration for most
>>> people, who will see tests failing in our CI and not on their laptops.
>>> I'm not sure we want that.
>>
>> You're right that the specific rake task may not be ideal. But that
>> was one option; another option could be to use fixtures first, then
>> replace them with zuul-cloner-provided versions, but provide me the ability
>> to turn off the zuul-cloner part? I'm just saying, as it is today, this
>> change adds more complexity and hard ties into the OpenStack
>> infrastructure with non-trival work arounds. I would love to solve the
>> Depends-On issue, but I don't think that should include a deviation
>> from generally accepted testing practices of puppet modules.
>
> I agree it's not best practice in Puppet, but I don't see that as a huge
> blocker. Our Puppet modules are approved by Puppetlabs and respect most
> of best 

Re: [openstack-dev] [puppet][swift] Applying security recommendations within puppet-swift

2015-09-24 Thread Gui Maluf
I think we should follow the bug 1458915 principles and remove any POSIX
user/group control, so all modules are consistent with each other.
These hardening actions should be reported to the specific package maintainers.

On Wed, Sep 23, 2015 at 6:10 PM, Alex Schultz  wrote:

> On Wed, Sep 23, 2015 at 2:32 PM, Alex Schultz 
> wrote:
> > Hey all,
> >
> > So as part of the Puppet mid-cycle, we did bug triage.  One of the
> > bugs that was looked into was bug 1289631[0].  This bug is about
> > applying the recommendations from the security guide[1] within the
> > puppet-swift module.  So I'm sending a note out to get other feedback
> > on if this is a good idea or not.  Should we be applying this type of
> > security items within the puppet modules by default? Should we make
> > this optional?  Thoughts?
> >
> >
> > Thanks,
> > -Alex
> >
> >
> > [0] https://bugs.launchpad.net/puppet-swift/+bug/1289631
> > [1]
> http://docs.openstack.org/security-guide/object-storage.html#securing-services-general
>
> Also for the puppet side of this conversation, the change for the
> security items[0] also seems to conflict with bug 1458915[1] which is
> about removing the posix users/groups/file modes.  So which direction
> should we go?
>
> [0] https://review.openstack.org/#/c/219883/
> [1] https://bugs.launchpad.net/puppet-swift/+bug/1458915
>



-- 
*guilherme* \n
\t *maluf*


Re: [openstack-dev] [Congress] Congress Usecases VM

2015-09-24 Thread Alex Yip
I have been using images, rather than snapshots.


It doesn't take that long to start up.  First, I boot the VM, which takes a 
minute or so.  Then I run rejoin-stack.sh, which takes just another minute or 
so.  It's really not that bad, and rejoin-stack.sh restores the VMs and 
OpenStack state that were running before.


- Alex




From: Shiv Haris 
Sent: Thursday, September 24, 2015 10:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Hi Congress folks,

I am looking for ideas. We want OpenStack to be running when the user 
instantiates the Usecase-VM. However, creating an OVA file is possible only 
when the VM is halted, which means OpenStack is not running and the user will 
have to run devstack again (which is time-consuming) when the VM is restarted.

The option is to take a snapshot. It appears that taking a snapshot of the VM 
and using it in another setup is not very straightforward. It involves 
modifying the .vbox file and seems prone to user error. I am 
leaning towards halting the machine and generating an OVA file.

I am looking for suggestions ….

Thanks,

-Shiv


From: Shiv Haris [mailto:sha...@brocade.com]
Sent: Thursday, September 24, 2015 9:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

First of all, I apologize for not making it to the meeting yesterday; I could not 
cut short another overlapping meeting.

Also, Tim, thanks for the feedback. I have addressed some of the issues you 
posed; however, I am still working on some of the subtler issues raised. Once I 
have addressed them all, I will post another VM by the end of the week.

-Shiv


From: Tim Hinrichs [mailto:t...@styra.com]
Sent: Friday, September 18, 2015 5:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

It's great to have this available!  I think it'll help people understand what's 
going on MUCH more quickly.

Some thoughts.
- The image is 3GB, which took me 30 minutes to download.  Are all VMs this 
big?  I think we should finish this as a VM but then look into doing it with 
containers to make it EVEN easier for people to get started.

[shivharis] Yes, unfortunately that is the case. The disk size I set is 20GB – 
but the OVA compresses the image and disk to 3 GB. I will be looking at other 
options.


- It gave me an error about a missing shared directory when I started up.
[shivharis] will fix this

- I expected devstack to be running when I launched the VM.  devstack startup 
time is substantial, and if there's a problem, it's good to assume the user 
won't know how to fix it.  Is it possible to have devstack up and running when 
we start the VM?  That said, it started up fine for me.
[shivharis] OVA files can be created only when the VM is halted, so devstack 
will be down when you bring up  the VM. I agree a snapshot will be a better 
choice.

- It'd be good to have a README to explain how to use the use-case structure. 
It wasn't obvious to me.
[shivharis] added.

- The top-level dir of the Congress_Usecases folder has a Congress_Usecases 
folder within it.  I assume the inner one shouldn't be there?
[shivharis] my automation issues, fixed.

- When I ran the 10_install_policy.sh, it gave me a bunch of authorization 
problems.
[shivharis] fixed

But otherwise I think the setup looks reasonable.  Will there be an undo script 
so that we can run the use cases one after another without worrying about 
interactions?
[shivharis] tricky, will find some way out.

Tim

[shivharis] Thanks

On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris wrote:
Hi Congress folks,

BTW the login/password for the VM is vagrant/vagrant

-Shiv


From: Shiv Haris [mailto:sha...@brocade.com]
Sent: Thursday, September 17, 2015 5:03 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Congress] Congress Usecases VM

Hi All,

I have put my VM (virtualbox) at:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova

I usually run this on a macbook air – but it should work on other platforms as 
well. I chose virtualbox since it is free.

Please send me your usecases – I can incorporate in the VM and send you an 
updated image. Please take a look at the structure I have in place for the 
first usecase; would prefer it be the same for other usecases. (However I am 
still open to suggestions for changes)

Thanks,

-Shiv


Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?

2015-09-24 Thread Matt Riedemann



On 9/24/2015 11:50 AM, Mathieu Gagné wrote:

On 2015-09-24 3:04 AM, Duncan Thomas wrote:


I proposed a session at the Tokyo summit for a discussion of Cinder AZs,
since there was clear confusion about what they are intended for and how
they should be configured. Since then I've reached out to, and gotten
good feedback from, a number of operators.


Thanks for your proposition. I will make sure to attend this session.



There are two distinct
configurations for AZ behaviour in cinder, and both sort-of worked until
very recently.

1) No AZs in cinder
This is the config where a single 'blob' of storage (most of the
operators who responded so far are using Ceph, though that isn't
required). The storage takes care of availability concerns, and any AZ
info from nova should just be ignored.


Unless I'm very mistaken, I think it's the main "feature" missing from
OpenStack itself. The concept of AZ isn't global and anyone can still
make it so Nova AZ != Cinder AZ.

In my opinion, AZ should be a global concept where they are available
and the same for all services so Nova AZ == Cinder AZ. This could result
in a behavior similar to "regions within regions".

We should survey and ask how AZ are actually used by operators and
users. Some might create an AZ for each server racks, others for each
power segments in their datacenter or even business units so they can
segregate to specific physical servers. Some AZ use cases might just be
a "perverted" way of bypassing shortcomings in OpenStack itself. We
should find out those use cases and see if we should still support them
or offer them an existing or new alternatives.

(I don't run Ceph yet, only SolidFire but I guess the same could apply)

For people running Ceph (or other big clustered block storage), they
will have one big Cinder backend. For resources or business reasons,
they can't afford to create as many clusters (and Cinder AZ) as there
are AZ in Nova. So they end up with one big Cinder AZ (let's call it
az-1) in Cinder. Nova won't be able to create volumes in Cinder az-2 if
an instance is created in Nova az-2.

May I suggest the following solutions:

1) Add ability to disable this whole AZ concept in Cinder so it doesn't
fail to create volumes when Nova asks for a specific AZ. This could
result in the same behavior as cinder.cross_az_attach config.


That's essentially what this does:

https://review.openstack.org/#/c/217857/

It defaults to False though so you have to be aware and set it if you're 
hitting this problem.


The nova block_device code that tries to create the volume and passes 
the nova AZ should have probably been taking into account the 
cinder.cross_az_attach config option, because just blindly passing it 
was the reason why cinder added that option.  There is now a change up 
for review to consider cinder.cross_az_attach in block_device:


https://review.openstack.org/#/c/225119/

But that's still making the assumption that we should be passing the AZ 
on the volume create request and will still fail if the AZ isn't in 
cinder (and allow_availability_zone_fallback=False in cinder.conf).
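Concretely, the two knobs discussed here look roughly like this (a sketch; 
option names are as referenced in this thread, and the section placement may 
vary by release):

```ini
# nova.conf -- allow attaching a volume whose AZ differs from the instance's
[cinder]
cross_az_attach = True
```

```ini
# cinder.conf -- fall back to the default AZ instead of failing the
# volume create when the requested AZ does not exist
[DEFAULT]
allow_availability_zone_fallback = True
```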


In talking with Duncan this morning he's going to propose a spec for an 
attempt to clean some of this up and decouple nova from handling this 
logic.  Basically a new Cinder API where you give it an AZ and it tells 
you if that's OK.  We could then use this on the nova side before we 
ever get to the compute node and fail.




2) Add ability for a volume backend to be in multiple AZ. Of course,
this would defeat the whole AZ concept. This could however be something
our operators/users might accept.


I'd nix this on the point about it defeating the purpose of AZs.





2) Cinder AZs map to Nova AZs
In this case, some combination of storage / networking / etc couples
storage to nova AZs. It may be that an AZ is used as a unit of
scaling, or it could be a real storage failure domain. Either way, there
are a number of operators who have this configuration and want to keep
it. Storage can certainly have a failure domain, and limiting the
scalability problem of storage to a single compute AZ can have definite
advantages in failure scenarios. These people do not want cross-az attach.

My hope at the summit session was to agree these two configurations,
discuss any scenarios not covered by these two configuration, and nail
down the changes we need to get these to work properly. There's
definitely been interest and activity in the operator community in
making nova and cinder AZs interact, and every desired interaction I've
gotten details about so far matches one of the above models.





--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-ovn][vtep] Proposal: support for vtep-gateway in ovn

2015-09-24 Thread Russell Bryant
On 09/24/2015 01:25 PM, Armando M. wrote:
> 
> 
> 
> On 24 September 2015 at 09:12, Russell Bryant wrote:
> 
> On 09/24/2015 10:18 AM, Salvatore Orlando wrote:
> > One particular issue is that the project implements the ovsdb 
> protocol
> > from scratch.  The ovs project provides a Python library for this.  
> Both
> > Neutron and networking-ovn use it, at least.  From some discussion, 
> I've
> > gathered that the ovs Python library lacked one feature that was 
> needed,
> > but has since been added because we wanted the same thing in
> > networking-ovn.
> >
> >
> > My take here is that we don't need to use the whole implementation of
> > networking-l2gw, but only the APIs and the DB management layer it 
> exposes.
> > Networking-l2gw provides a VTEP network gateway solution that, if you
> > want, will eventually be part of Neutron's "reference" control plane.
> > OVN provides its implementation; I think it should be possible to
> > leverage networking-l2gw either by pushing an OVN driver there, or
> > implementing the same driver in openstack/networking-ovn.
> 
> From a quick look, it seemed like networking-l2gw was doing 2 things.
> 
>   1) Management of vtep switches themselves
> 
>   2) Management of connectivity between Neutron networks and VTEP
>  gateways
> 
> I figured the implementation of #1 would be the same whether you were
> using ML2+OVS, OVN, (or whatever else).  This part is not addressed in
> OVN.  You point OVN at VTEP gateways, but it's expected you manage the
> gateway provisioning some other way.
> 
> It's #2 that has a very different implementation.  For OVN, it's just
> creating a row in OVN's northbound database.
> 
> or did I mis-interpret what networking-l2gw is doing?
> 
> 
> No, you did not misinterpret what the objective of the project were
> (which I reinstate here):
> 
> * Provide an API to OpenStack admins to extend neutron logical networks
> into unmanaged pre-existing vlans. Bear in mind that things like address
> collision prevention is left in the hands of the operator. Other aspects
> like L2/L3 interoperability instead should be taken care of, at least
> from an implementation point of view.
> 
> * Provide a pluggable framework for multiple drivers of the API.
> 
> * Provide an PoC implementation on top of the ovsdb vtep schema. This
> can be implemented both in hardware (ToR switches) and software
> (software L2 gateways). 

Thanks for clarifying the project's goals!

> > The networking-l2gw route will require some pretty significant work.
> > It's still the closest existing effort, so I think we should 
> explore it
> > until it's absolutely clear that it *can't* work for what we need.
> 
> 
> We may have fallen short of some/all expectations, but I would like to
> believe that it is nothing that can't be fixed by iterating on,
> especially if active project participation increases.
> 
> I don't think there's a procedural mandate to make OVN abide by the l2gw
> proposed API. As you said, it is not a clear well accepted API, but
> that's only because we live in a brand new world, where people should be
> allowed to experiment and reconcile later as community forces play out.
> 
> That said, should the conclusion that "it (the API) *can't* work for
> what OVN needs" be reached, I would like to understand/document why for
> the sake of all us involved so that lessons will yield from our mistakes.

My gut says we should be able to work together and make it work.  I
expect we'll talk in more detail in the next cycle.  :-)

-- 
Russell Bryant



Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?

2015-09-24 Thread Sylvain Bauza



Le 24/09/2015 19:45, Matt Riedemann a écrit :



On 9/24/2015 11:50 AM, Mathieu Gagné wrote:

On 2015-09-24 3:04 AM, Duncan Thomas wrote:


I proposed a session at the Tokyo summit for a discussion of Cinder 
AZs,
since there was clear confusion about what they are intended for and 
how

they should be configured. Since then I've reached out to, and gotten
good feedback from, a number of operators.


Thanks for your proposition. I will make sure to attend this session.



There are two distinct
configurations for AZ behaviour in cinder, and both sort-of worked 
until

very recently.

1) No AZs in cinder
This is the config where a single 'blob' of storage (most of the
operators who responded so far are using Ceph, though that isn't
required). The storage takes care of availability concerns, and any AZ
info from nova should just be ignored.


Unless I'm very mistaken, I think it's the main "feature" missing from
OpenStack itself. The concept of AZ isn't global and anyone can still
make it so Nova AZ != Cinder AZ.

In my opinion, AZ should be a global concept where they are available
and the same for all services so Nova AZ == Cinder AZ. This could result
in a behavior similar to "regions within regions".

We should survey and ask how AZ are actually used by operators and
users. Some might create an AZ for each server racks, others for each
power segments in their datacenter or even business units so they can
segregate to specific physical servers. Some AZ use cases might just be
a "perverted" way of bypassing shortcomings in OpenStack itself. We
should find out those use cases and see if we should still support them
or offer them an existing or new alternatives.

(I don't run Ceph yet, only SolidFire but I guess the same could apply)

For people running Ceph (or other big clustered block storage), they
will have one big Cinder backend. For resources or business reasons,
they can't afford to create as many clusters (and Cinder AZ) as there
are AZ in Nova. So they end up with one big Cinder AZ (let's call it
az-1) in Cinder. Nova won't be able to create volumes in Cinder az-2 if
an instance is created in Nova az-2.

May I suggest the following solutions:

1) Add ability to disable this whole AZ concept in Cinder so it doesn't
fail to create volumes when Nova asks for a specific AZ. This could
result in the same behavior as cinder.cross_az_attach config.


That's essentially what this does:

https://review.openstack.org/#/c/217857/

It defaults to False though so you have to be aware and set it if 
you're hitting this problem.


The nova block_device code that tries to create the volume and passes 
the nova AZ should have probably been taking into account the 
cinder.cross_az_attach config option, because just blindly passing it 
was the reason why cinder added that option.  There is now a change up 
for review to consider cinder.cross_az_attach in block_device:


https://review.openstack.org/#/c/225119/

But that's still making the assumption that we should be passing the 
AZ on the volume create request and will still fail if the AZ isn't in 
cinder (and allow_availability_zone_fallback=False in cinder.conf).


In talking with Duncan this morning he's going to propose a spec for 
an attempt to clean some of this up and decouple nova from handling 
this logic.  Basically a new Cinder API where you give it an AZ and it 
tells you if that's OK.  We could then use this on the nova side 
before we ever get to the compute node and fail.


MHO is like you, we should decouple Nova AZs from Cinder AZs and just 
have a lazy relationship between those by getting a way to call Cinder 
to know which AZ before calling the scheduler.







2) Add ability for a volume backend to be in multiple AZ. Of course,
this would defeat the whole AZ concept. This could however be something
our operators/users might accept.


I'd nix this on the point about it defeating the purpose of AZs.


Well, if we rename Cinder AZs to something else, then I'm honestly not 
really opiniated,since it's already always confusing, because Nova AZs 
are groups of hosts, not anything else.


If we keep the naming as AZs, then I'm not OK since it creates more 
confusion.


-Sylvain








2) Cinder AZs map to Nova AZs
In this case, some combination of storage / networking / etc couples
storage to nova AZs. It may be that an AZ is used as a unit of
scaling, or it could be a real storage failure domain. Either way, 
there

are a number of operators who have this configuration and want to keep
it. Storage can certainly have a failure domain, and limiting the
scalability problem of storage to a single compute AZ can have definite
advantages in failure scenarios. These people do not want cross-az 
attach.


My hope at the summit session was to agree these two configurations,
discuss any scenarios not covered by these two configuration, and nail
down the changes we need to get these to work properly. There's
definitely been interest and activity in the 

Re: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core

2015-09-24 Thread Flavio Percoco

On 24/09/15 13:12 -0400, Doug Hellmann wrote:

Oslo team,

I am nominating Brant Knudson for Oslo core.

As liaison from the Keystone team Brant has participated in meetings,
summit sessions, and other discussions at a level higher than some
of our own core team members.  He is already core on oslo.policy
and oslo.cache, and given his track record I am confident that he would
make a good addition to the team.

Please indicate your opinion by responding with +1/-1 as usual.


+1

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [oslo] nominating Brant Knudson for Oslo core

2015-09-24 Thread Robert Collins
+1

On 25 September 2015 at 05:12, Doug Hellmann  wrote:
> Oslo team,
>
> I am nominating Brant Knudson for Oslo core.
>
> As liaison from the Keystone team Brant has participated in meetings,
> summit sessions, and other discussions at a level higher than some
> of our own core team members.  He is already core on oslo.policy
> and oslo.cache, and given his track record I am confident that he would
> make a good addition to the team.
>
> Please indicate your opinion by responding with +1/-1 as usual.
>
> Doug
>



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [puppet] use zuul-cloner when running rspec

2015-09-24 Thread Alex Schultz
On Thu, Sep 24, 2015 at 11:54 AM, Emilien Macchi  wrote:
>
>
> On 09/24/2015 10:14 AM, Alex Schultz wrote:
>> On Wed, Sep 23, 2015 at 4:56 PM, Emilien Macchi  wrote:
>>> Background
>>> ==
>>>
>>> Current rspec tests are tested with modules mentioned in .fixtures.yaml
>>> file of each module.
>>>
>>> * the file is not consistent across all modules
>>> * it hardcodes module names & versions
>>> * this way does not allow to use "Depend-On" feature, that would allow
>>> to test cross-modules patches
>>>
>>> Proposal
>>> 
>>>
>>> * Like we do in beaker & integration jobs, use zuul-cloner to clone
>>> modules in our CI jobs.
>>> * Use r10k to prepare fixtures modules.
>>> * Use Puppetfile hosted by openstack/puppet-openstack-integration
>>>
>>> In that way:
>>> * we will have modules name + versions testing consistency across all
>>> modules
>>> * the same Puppetfile would be used by unit/beaker/integration testing.
>>> * the patch that pass tests on your laptop would pass tests in upstream CI
>>> * if you don't have zuul-cloner on your laptop, don't worry it will use
>>> git clone. Though you won't have Depends-On feature working on your
>>> laptop (technically not possible).
>>> * Though your patch will support Depends-On in OpenStack Infra for unit
>>> tests. If you submit a patch in puppet-openstacklib that drop something
>>> wrong, you can send a patch in puppet-nova that will test it, and unit
>>> tests will fail.
>>>
>>> Drawbacks
>>> =
>>> * cloning from .fixtures.yaml takes ~ 10 seconds
>>> * using r10k + zuul-clone takes ~50 seconds (more modules to clone).
>>>
>>> I think 40 seconds is acceptable given the benefit.
>>>
>>
>> As someone who consumes these modules downstream and has our own CI
>> setup to run the rspec items, this ties it too closely to the
>> openstack infrastructure. If we replace the .fixtures.yml with
>> zuul-cloner, it assumes I always want the openstack version of the
>> modules. This is not necessarily true. I like being able to replace
>> items within fixtures.yml when doing dev work. For example If i want
>> to test upgrading another module not related to openstack, like
>> inifile, how does that work with the proposed solution?  This is also
>> moving away from general puppet module conventions for testing. My
>> preference would be that this be a different task and we have both
>> .fixtures.yml (for general use/development) and the zuul method of
>> cloning (for CI).  You have to also think about this from a consumer
>> standpoint and this is adding an external dependency on the OpenStack
>> infrastructure for anyone trying to run rspec or trying to consume the
>> published versions from the forge.  Would I be able to run these tests
>> in an offline mode with this change? With the .fixtures.yml it's a
>> minor edit to switch to local versions. Is the same true for the
>> zuul-cloner version?
>
> What you did before:
> * Edit .fixtures.yaml and put the version you like.
>
> What you would do this the current proposal:
> * Edit openstack/puppet-openstack-integration/Puppetfile and put the
> version you like.
>

So I have to edit a file in another module to test changes in
puppet-neutron, puppet-nova, etc? With the zuul-cloner version, for
local testing what does that workflow look like?

> What you're suggesting has a huge downside:
> People will still use fixtures by default and not test what is actually
> tested by our CI.
> A few people will know about the specific Rake task so a few people will
> test exactly what upstream does. That will cause frustration to the most
> of people who will see tests failing in our CI and not on their laptop.
> I'm not sure we want that.

You're right that the specific rake task may not be ideal. But that
was one option, another option could be use fixtures first then
replace with zuul-cloner provided versions but provide me the ability
to turn off the zuul-cloner part? I'm just saying, as it is today, this
change adds more complexity and hard ties into the OpenStack
infrastructure with non-trival work arounds. I would love to solve the
Depends-On issue, but I don't think that should include a deviation
from generally accepted testing practices of puppet modules.
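For reference, the .fixtures.yml convention being discussed looks roughly like 
this (a sketch; the repository URL, pinned ref, and module names are 
illustrative, not taken from any particular module in the thread):

```yaml
# .fixtures.yml -- pin a dependency to a version, or symlink the module
# under test; during dev work a repo line can be pointed at a local path
fixtures:
  repositories:
    inifile:
      repo: 'https://github.com/puppetlabs/puppetlabs-inifile.git'
      ref: '1.4.0'
  symlinks:
    nova: '#{source_dir}'
```

This is the file a developer edits today to swap in a local checkout, which is 
the workflow the zuul-cloner proposal would change.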

>
> I think more than most of people that run tests on their laptops want to
> see them passing in upstream CI.
> The few people that want to trick versions & modules, will have to run
> Rake, trick the Puppetfile and run Rake again. It's not a big deal and
> I'm sure this few people can deal with that.
>

So for me the zuul-cloner task seems more of a CI specific job that
solves the Depends-On issues we currently have. Much like the beaker
and acceptance tests that's not something I run locally. I usually run
the local rspec tests first before shipping off to CI to see how that
plays out but I would manage the .fixtures.yml if necessary to test
cross-module dependencies. I don't expect to replicate an entire 

Re: [openstack-dev] [cinder] should we use fsync when writing iscsi config file?

2015-09-24 Thread Chris Friesen

On 09/24/2015 12:18 PM, Chris Friesen wrote:



I think what happened is that we took the SIGTERM after the open() call in
create_iscsi_target(), but before writing anything to the file.

 f = open(volume_path, 'w+')
 f.write(volume_conf)
 f.close()

The 'w+' causes the file to be immediately truncated on opening, leading to an
empty file.

To work around this, I think we need to do the classic "write to a temporary
file and then rename it to the desired filename" trick.  The atomicity of the
rename ensures that either the old contents or the new contents are present.
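That pattern can be sketched as follows (a minimal illustration; the helper 
name and error handling are ours, not the cinder code's):

```python
import os
import tempfile


def atomic_write(path, data):
    """Atomically replace the contents of ``path`` with ``data``.

    The data is written to a temporary file in the same directory,
    fsync'd, and then rename()'d over the target.  POSIX rename() is
    atomic, so a reader (or a SIGTERM mid-write) sees either the old
    contents or the new contents, never a truncated file.
    """
    dirname = os.path.dirname(path) or '.'
    fd, tmp_path = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, 'w') as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # ensure data is on disk before the rename
        os.rename(tmp_path, path)
    except Exception:
        os.unlink(tmp_path)
        raise
```

The temp file must live in the same directory as the target, since rename() is 
only atomic within a filesystem.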


I'm pretty sure that upstream code is still susceptible to zeroing out the file 
in the above scenario.  However, it doesn't raise an exception--that's due to a 
local change on our part that attempted to fix the issue below.


The stable/kilo code *does* have a problem in that when it regenerates the file 
it's missing the CHAP authentication line (beginning with "incominguser").


Chris



[openstack-dev] Nova request to Neutron - can instance ID be added?

2015-09-24 Thread Gregory Golberg
Hi All,

When launching a new VM, Nova sends a request to Neutron which contains
various items but does not contain instance ID. Would it be a problem to
add that to the request?

Gregory Golberg
about.me/gregorygolberg



Re: [openstack-dev] [puppet] use zuul-cloner when running rspec

2015-09-24 Thread Clark Boylan
On Thu, Sep 24, 2015, at 12:39 PM, Alex Schultz wrote:
> On Thu, Sep 24, 2015 at 1:58 PM, Emilien Macchi wrote:
> >
> >
> > On 09/24/2015 02:19 PM, Alex Schultz wrote:
> >> On Thu, Sep 24, 2015 at 11:54 AM, Emilien Macchi wrote:
> >>>
> >>>
> >>> On 09/24/2015 10:14 AM, Alex Schultz wrote:
>  On Wed, Sep 23, 2015 at 4:56 PM, Emilien Macchi wrote:
> > Background
> > ==
> >
> > Current rspec tests are tested with modules mentioned in .fixtures.yaml
> > file of each module.
> >
> > * the file is not consistent across all modules
> > * it hardcodes module names & versions
> > * this way does not allow to use "Depend-On" feature, that would allow
> > to test cross-modules patches
> >
> > Proposal
> > 
> >
> > * Like we do in beaker & integration jobs, use zuul-cloner to clone
> > modules in our CI jobs.
> > * Use r10k to prepare fixtures modules.
> > * Use Puppetfile hosted by openstack/puppet-openstack-integration
> >
> > In that way:
> > * we will have modules name + versions testing consistency across all
> > modules
> > * the same Puppetfile would be used by unit/beaker/integration testing.
> > * the patch that pass tests on your laptop would pass tests in upstream 
> > CI
> > * if you don't have zuul-cloner on your laptop, don't worry it will use
> > git clone. Though you won't have Depends-On feature working on your
> > laptop (technically not possible).
> > * Though your patch will support Depends-On in OpenStack Infra for unit
> > tests. If you submit a patch in puppet-openstacklib that drop something
> > wrong, you can send a patch in puppet-nova that will test it, and unit
> > tests will fail.
> >
> > Drawbacks
> > =
> > * cloning from .fixtures.yaml takes ~ 10 seconds
> > * using r10k + zuul-clone takes ~50 seconds (more modules to clone).
> >
> > I think 40 seconds is acceptable given the benefit.
> >
> 
>  As someone who consumes these modules downstream and has our own CI
>  setup to run the rspec items, this ties it too closely to the
>  openstack infrastructure. If we replace the .fixtures.yml with
>  zuul-cloner, it assumes I always want the openstack version of the
>  modules. This is not necessarily true. I like being able to replace
>  items within fixtures.yml when doing dev work. For example If i want
>  to test upgrading another module not related to openstack, like
>  inifile, how does that work with the proposed solution?  This is also
>  moving away from general puppet module conventions for testing. My
>  preference would be that this be a different task and we have both
>  .fixtures.yml (for general use/development) and the zuul method of
>  cloning (for CI).  You have to also think about this from a consumer
>  standpoint and this is adding an external dependency on the OpenStack
>  infrastructure for anyone trying to run rspec or trying to consume the
>  published versions from the forge.  Would I be able to run these tests
>  in an offline mode with this change? With the .fixtures.yml it's a
>  minor edit to switch to local versions. Is the same true for the
>  zuul-cloner version?
> >>>
> >>> What you did before:
> >>> * Edit .fixtures.yaml and put the version you like.
> >>>
> >>> What you would do this the current proposal:
> >>> * Edit openstack/puppet-openstack-integration/Puppetfile and put the
> >>> version you like.
> >>>
> >>
> >> So I have to edit a file in another module to test changes in
> >> puppet-neutron, puppet-nova, etc? With the zuul-cloner version, for
> >> local testing what does that workflow look like?
> >
> > If you need to test your code with cross-project dependencies, having
> > current .fixtures.yaml or the proposal won't change anything regarding
> > that, you'll still have to trick the YAML file that define the modules
> > name/versions.
> >
> >>
> >>> What you're suggesting has a huge downside:
> >>> People will still use fixtures by default and not test what is actually
> >>> tested by our CI.
> >>> A few people will know about the specific Rake task so a few people will
> >>> test exactly what upstream does. That will cause frustration to the most
> >>> of people who will see tests failing in our CI and not on their laptop.
> >>> I'm not sure we want that.
> >>
> >> You're right that the specific rake task may not be ideal. But that
> >> was one option, another option could be use fixtures first then
> >> replace with zuul-cloner provided versions but provide me the ability
> >> to turn off the zuul-cloner part? I'm just saying, as it is today, this
> >> change adds more complexity and hard ties into the OpenStack
> >> infrastructure with non-trival work arounds. I would love to solve the
> >> Depends-On 

Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-24 Thread Fei Long Wang
Thanks for raising this topic. I don't know why you think the 
images/snapshots created by v1 can't be accessed by v2, but I would say 
it's not true; see http://paste.openstack.org/show/473956/
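For example, the same image store can be listed through either API version with 
the glance CLI (a sketch; assumes credentials are already in the environment):

```shell
# List images through each API version against the same backend store
glance --os-image-api-version 1 image-list
glance --os-image-api-version 2 image-list
```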


Personally, I don't think there is any big blocker for this. What we 
need to do is work out a plan at the summit (the Glance team has planned 
a design session for this). And meanwhile I'm going to split 
https://review.openstack.org/#/c/144875/ to make it easier to review.



On 25/09/15 09:17, melanie witt wrote:

Hi All,

I have been looking and haven't yet located documentation about how to upgrade 
from glance v1 to glance v2.

 From what I understand, images and snapshots created with v1 can't be 
listed/accessed through the v2 api. Are there instructions about how to migrate 
images and snapshots from v1 to v2? Are there other incompatibilities between 
v1 and v2?

I'm asking because I have read that glance v1 isn't defcore compliant and so we 
need all projects to move to v2, but the incompatibility from v1 to v2 is 
preventing that in nova. Is there anything else preventing v2 adoption? Could 
we move to glance v2 if there's a migration path from v1 to v2 that operators 
can run through before upgrading to a version that uses v2 as the default?

Thanks,
-melanie (irc: melwitt)









--
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--



Re: [openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-24 Thread Sabari Murugesan
Hi Melanie

In general, images created by the glance v1 API should be accessible using v2
and vice-versa. We fixed some bugs [1] [2] [3] where metadata associated with
an image was causing incompatibility. These fixes were back-ported to
stable/kilo.

Thanks
Sabari

[1] - https://bugs.launchpad.net/glance/+bug/1447215
[2] - https://bugs.launchpad.net/bugs/1419823
[3] - https://bugs.launchpad.net/python-glanceclient/+bug/1447193
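For readers wondering what that metadata incompatibility looks like, the main
representational difference between the two APIs is that v1 nests custom
metadata under a `properties` dict and uses an `is_public` boolean, while v2
flattens custom properties onto the image record and uses a `visibility`
string. A rough sketch of that mapping (an illustrative helper, not actual
glance code):

```python
def v1_to_v2_image(v1_image):
    """Map a glance v1-style image dict to its v2-style shape.

    Illustrative only: v1 uses 'is_public' and nests custom metadata
    under 'properties'; v2 uses 'visibility' and flattens properties
    onto the image record itself.
    """
    v2_image = {k: v for k, v in v1_image.items()
                if k not in ('is_public', 'properties')}
    v2_image['visibility'] = 'public' if v1_image.get('is_public') else 'private'
    # v2 exposes custom properties as top-level image attributes
    v2_image.update(v1_image.get('properties', {}))
    return v2_image

v1 = {'id': 'abc123', 'name': 'cirros', 'is_public': True,
      'properties': {'kernel_id': 'k1'}}
print(v1_to_v2_image(v1))
```

The bugs referenced above were essentially places where this kind of
translation went wrong for particular metadata keys.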


On Thu, Sep 24, 2015 at 2:17 PM, melanie witt  wrote:

> Hi All,
>
> I have been looking and haven't yet located documentation about how to
> upgrade from glance v1 to glance v2.
>
> From what I understand, images and snapshots created with v1 can't be
> listed/accessed through the v2 api. Are there instructions about how to
> migrate images and snapshots from v1 to v2? Are there other
> incompatibilities between v1 and v2?
>
> I'm asking because I have read that glance v1 isn't defcore compliant and
> so we need all projects to move to v2, but the incompatibility from v1 to
> v2 is preventing that in nova. Is there anything else preventing v2
> adoption? Could we move to glance v2 if there's a migration path from v1 to
> v2 that operators can run through before upgrading to a version that uses
> v2 as the default?
>
> Thanks,
> -melanie (irc: melwitt)
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][stable][release] 2015.1.2

2015-09-24 Thread Alan Pevec
> For Horizon, it would make sense to move this a week back. We discovered
> a few issues in Liberty, which are present in current kilo, too. I'd
> love to cherry-pick a few of them to kilo.

What are the LP bug #s? Are you saying that the fixes are still works in
progress on master and not ready for backporting yet?

> Unfortunately, it takes a while until Kilo (or in general: stable)
> reviews get done.

One suggestion for speeding up reviews of such a patch series:
put them all under the same topic for an easy gerrit URL, then ping on
#openstack-stable.
Stable backport reviews are supposed to be done primarily by the
per-project stable-maint teams, but stable-maint-core can also help.

Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Large Deployments Team][Performance Team] New informal working group suggestion

2015-09-24 Thread Clint Byrum
Excerpts from Dina Belova's message of 2015-09-22 05:57:19 -0700:
> Hey, OpenStackers!
> 
> I'm writing to propose organising a new informal team to work specifically
> on OpenStack performance issues. This will be a sub-team of the already
> existing Large Deployments Team, and I suppose it will be a good idea to
> gather people interested in OpenStack performance in one room, identify
> which issues are worrying contributors and what can be done, and share the
> results of performance research :)
> 
> So please volunteer to take part in this initiative. I hope many people
> will be interested, and we'll be able to use a cross-project session slot
> to meet in Tokyo and hold a kick-off meeting.
> 
> I would like to apologise for writing to two mailing lists at the same
> time, but I want to make sure that all possibly interested people will
> notice the email.
> 

Dina, this is great. Count me in, and see you in Tokyo!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] New dhcp provider using isc-dhcp-server

2015-09-24 Thread Shraddha Pandhe
Hi Ionut,

I am working on a similar effort: adding a driver for neutron-dhcp-agent [1]
& [2]. Is it similar to what you are trying to do? My approach doesn't need
any extra database. There are two ways to achieve HA in my case:

1. Run multiple neutron-dhcp-agents and set agents_per_network > 1, so more
than one DHCP server will have the config needed to serve the DHCP request.
2. ISC DHCPD itself has some HA support where you can set up failover peers,
but I haven't tried that yet.

I have this driver fully implemented and working here at Yahoo!, and I am
working on making it more generic and upstreaming it. Please let me know if
your effort is similar, so that we can consider working together on a single
effort.



[1] https://review.openstack.org/#/c/212836/
[2] https://bugs.launchpad.net/neutron/+bug/1464793
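For context, the per-node state either approach ultimately has to produce is
just a standard ISC dhcpd host reservation. A minimal sketch of rendering one
(a hypothetical helper, not the actual driver code; a real driver would also
rewrite the config atomically and reload the service):

```python
def render_reservation(node_name, mac, ip):
    """Render an ISC dhcpd host stanza reserving `ip` for `mac`."""
    return ("host %(name)s {\n"
            "  hardware ethernet %(mac)s;\n"
            "  fixed-address %(ip)s;\n"
            "}\n" % {'name': node_name, 'mac': mac, 'ip': ip})

print(render_reservation('node-1', '52:54:00:12:34:56', '10.0.0.5'))
```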

On Thu, Sep 24, 2015 at 9:40 AM, Dmitry Tantsur 
wrote:

> 2015-09-24 17:38 GMT+02:00 Ionut Balutoiu <
> ibalut...@cloudbasesolutions.com>:
>
>> Hello, guys!
>>
>> I'm starting a new implementation for a dhcp provider,
>> mainly to be used for Ironic standalone. I'm planning to
>> push it upstream. I'm using isc-dhcp-server service from
>> Linux. So, when an Ironic node is started, the ironic-conductor
>> writes in the config file the MAC-IP reservation for that node and
>> reloads dhcp service. I'm using a SQL database as a backend to store
>> the dhcp reservations (I think is cleaner and it should allow us
>> to have more than one DHCP server). What do you think about my
>> implementation ?
>>
>
> What you describe slightly resembles how ironic-inspector works. It needs
> to serve DHCP to nodes that are NOT known to Ironic, so it manages iptables
> rules giving (or not giving) access to the dnsmasq instance. I wonder if we
> may find some common code between these 2, but I definitely don't want to
> reinvent Neutron :) I'll think about it after seeing your spec and/or code,
> I'm already looking forward to them!
>
>
>> Also, I'm not sure how can I scale this out to provide HA/failover.
>> Do you guys have any idea ?
>>
>> Regards,
>> Ionut Balutoiu
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> --
> -- Dmitry Tantsur
> --
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?

2015-09-24 Thread Sam Morrison

> On 24 Sep 2015, at 6:19 pm, Sylvain Bauza  wrote:
> 
> Ahem, Nova AZs are not failure domains - I mean in the current
> implementation, in the sense in which many people understand a failure
> domain, i.e. a physical unit of machines (a bay, a room, a floor, a
> datacenter).
> All the AZs in Nova share the same controlplane with the same message queue 
> and database, which means that one failure can be propagated to the other AZ.
> 
> To be honest, there is one very specific use case where AZs *are* failure 
> domains: when cells exactly match AZs (i.e. one AZ grouping all the hosts 
> behind one cell). That's the very specific use case that Sam is mentioning 
> in his email, and I certainly understand we need to keep that.
> 
> What AZs in Nova are is pretty well explained in a quite old blog post: 
> http://blog.russellbryant.net/2013/05/21/availability-zones-and-host-aggregates-in-openstack-compute-nova/
>  
> 
Yes an AZ may not be considered a failure domain in terms of control 
infrastructure, I think all operators understand this. If you want control 
infrastructure failure domains use regions.

However, from a resource level (e.g. a running instance or a running volume) I
would consider them some kind of failure domain. It's a way of telling a user
that running resources in 2 AZs gives a more available service.

Every cloud will have a different definition of what an AZ is: a rack, a
collection of racks, a DC, etc. OpenStack doesn't need to decide what that is.

Sam

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

2015-09-24 Thread Stephen Balukoff
Sergey--

When is the Heat IRC meeting? Would it be helpful to have an LBaaS person
there to help explain things?

Also yes, Kevin is right: LBaaS v1 and LBaaS v2 are very incompatible (both
the API and the underlying object model). They are different enough that when
we looked for a way to make LBaaS v2 backward compatible with v1, we
eventually gave up after a couple of months of trying to figure out how to
make it work, and decided people would have to live with the fact that v1
will eventually be deprecated and go away entirely, while in the meantime we
maintain what are effectively two different major code paths in the same
source tree. Nobody claims it's pretty, eh.

I also agree with Doug's suggestion that a namespace change seems like the
right way to approach this.

Stephen

On Wed, Sep 23, 2015 at 11:39 AM, Fox, Kevin M  wrote:

> One of the weird things about the LBaaS v1 vs v2 situation, which is
> different from just about every other v1->v2 change I've seen, is that v1
> and v2 LBs are totally separate things. Unlike, say, Cinder, where listing
> volumes would show the same volumes in both APIs, so upgrading is smooth.
>
> Thanks,
> Kevin
> --
> *From:* Sergey Kraynev [skray...@mirantis.com]
> *Sent:* Wednesday, September 23, 2015 11:09 AM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
>
> Guys, I'm happy that you already discussed it here :)
> However, I'd like to raise the same question at our Heat IRC meeting.
> Probably we should define some common concepts, because I think that
> LBaaS is not the only example of a service with several APIs.
> I will post update in this thread later (after meeting).
>
> Regards,
> Sergey.
>
> On 23 September 2015 at 14:37, Fox, Kevin M  wrote:
>
>> Seperate ns would work great.
>>
>> Thanks,
>> Kevin
>>
>> --
>> *From:* Banashankar KV
>> *Sent:* Tuesday, September 22, 2015 9:14:35 PM
>>
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [neutron][lbaas] - Heat support for
>> LbaasV2
>>
>> What do you think about separating both of them with the names as Doug
>> mentioned? In future if we want to get rid of the v1 we can just remove
>> that namespace. Everything will be clean.
>>
>> Thanks
>> Banashankar
>>
>>
>> On Tue, Sep 22, 2015 at 6:01 PM, Fox, Kevin M  wrote:
>>
>>> As I understand it, loadbalancer in v2 is more like pool was in v1. Can
>>> we make it such that if you are using the loadbalancer resource and have
>>> the mandatory v2 properties that it tries to use v2 api, otherwise its a v1
>>> resource? PoolMember should be ok being the same. It just needs to call v1
>>> or v2 depending on if the lb its pointing at is v1 or v2. Is monitor's api
>>> different between them? Can it be like pool member?
>>>
>>> Thanks,
>>> Kevin
>>>
>>> --
>>> *From:* Brandon Logan
>>> *Sent:* Tuesday, September 22, 2015 5:39:03 PM
>>>
>>> *To:* openstack-dev@lists.openstack.org
>>> *Subject:* Re: [openstack-dev] [neutron][lbaas] - Heat support for
>>> LbaasV2
>>>
>>> So for the API v1s api is of the structure:
>>>
>>> /lb/(vip|pool|member|health_monitor)
>>>
>>> V2s is:
>>> /lbaas/(loadbalancer|listener|pool|healthmonitor)
>>>
>>> member is a child of pool, so it would go down one level.
>>>
>>> The only difference is the lb for v1 and lbaas for v2.  Not sure if that
>>> is enough of a difference.
>>>
>>> Thanks,
>>> Brandon
>>> On Tue, 2015-09-22 at 23:48 +, Fox, Kevin M wrote:
>>> > Thats the problem. :/
>>> >
>>> > I can't think of a way to have them coexist without: breaking old
>>> > templates, including v2 in the name, or having a flag on the resource
>>> > saying the version is v2. And as an app developer I'd rather not have
>>> > my existing templates break.
>>> >
>>> > I haven't compared the api's at all, but is there a required field of
>>> > v2 that is different enough from v1 that by its simple existence in
>>> > the resource you can tell a v2 from a v1 object? Would something like
>>> > that work? PoolMember wouldn't have to change, the same resource could
>>> > probably work for whatever lb it was pointing at I'm guessing.
>>> >
>>> > Thanks,
>>> > Kevin
>>> >
>>> >
>>> >
>>> > __
>>> > From: Banashankar KV [banvee...@gmail.com]
>>> > Sent: Tuesday, September 22, 2015 4:40 PM
>>> > To: OpenStack Development Mailing List (not for usage questions)
>>> > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for
>>> > LbaasV2
>>> >
>>> >
>>> >
>>> > Ok, sounds good. So now the question is how should we name the new V2
>>> > resources ?
>>> >
>>> >
>>> >
>>> > Thanks
>>> > Banashankar
>>> >
>>> >
>>> >
>>> > On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M 
>>> > wrote:
>>> > Yes, hence the need 

Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?

2015-09-24 Thread James Penick
On Thu, Sep 24, 2015 at 2:22 PM, Sam Morrison  wrote:

>
> Yes an AZ may not be considered a failure domain in terms of control
> infrastructure, I think all operators understand this. If you want control
> infrastructure failure domains use regions.
>
> However from a resource level (eg. running instance/ running volume) I
> would consider them some kind of failure domain. It’s a way of saying to a
> user if you have resources running in 2 AZs you have a more available
> service.
>
> Every cloud will have a different definition of what an AZ is, a
> rack/collection of racks/DC etc. openstack doesn’t need to decide what that
> is.
>
> Sam
>

This seems to map more closely to how we use AZs.

Turning it around to the user perspective:
 My users want to be sure that when they boot compute resources, they can
do so in such a way that their application will be immune to a certain
amount of physical infrastructure failure.

Use cases I get from my users:
1. "I want to boot 10 instances, and be sure that if a single leg of power
goes down, I won't lose more than 2 instances"
2. "My instances move a lot of network traffic. I want to ensure that I
don't have more than 3 of my instances per rack, or else they'll saturate
the ToR"
3. "Compute room #1 has been overrun by crazed ferrets. I need to boot new
instances in compute room #2."
4. "I want to boot 10 instances, striped across at least two power domains,
under no less than 5 top of rack switches, with access to network security
zone X."
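Use case 1 above boils down to striping instances across failure domains with
a per-domain cap. A toy round-robin placement sketch (illustrative only;
Nova's real filter scheduler works nothing like this):

```python
def stripe(instances, domains, max_per_domain):
    """Place instances round-robin across failure domains.

    Toy version of use case 1: refuse the request outright if the
    per-domain cap cannot be honoured; otherwise round-robin, which
    puts at most ceil(n / len(domains)) instances in any one domain.
    """
    if len(domains) * max_per_domain < len(instances):
        raise ValueError('cap of %d per domain is unsatisfiable'
                         % max_per_domain)
    placement = {d: [] for d in domains}
    for i, inst in enumerate(instances):
        placement[domains[i % len(domains)]].append(inst)
    return placement

vms = ['vm%d' % i for i in range(10)]
print(stripe(vms, ['power-a', 'power-b', 'power-c', 'power-d', 'power-e'], 2))
```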

For my users, abstractions for availability and scale of the control plane
should be hidden from their view. I've almost never been asked by my users
whether or not the control plane is resilient. They assume that my team, as
the deployers, have taken adequate steps to ensure that the control plane
is deployed in a resilient and highly available fashion.

I think it would be good for the operator community to come to an agreement
on what an AZ should be from the perspective of those who deploy both
public and private clouds and bring that back to the dev teams.

-James
:)=
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova request to Neutron - can instance ID be added?

2015-09-24 Thread Kevin Benton
The device_id field should be populated with the instance ID.

On Thu, Sep 24, 2015 at 4:01 PM, Gregory Golberg 
wrote:

> Hi All,
>
> When launching a new VM, Nova sends a request to Neutron which contains
> various items but does not contain instance ID. Would it be a problem to
> add that to the request?
>
> Gregory Golberg
> about.me/gregorygolberg
> 
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] [app-catalog] versions for murano assets in the catalog

2015-09-24 Thread Christopher Aedo
On Tue, Sep 22, 2015 at 12:06 PM, Serg Melikyan  wrote:
> Hi Chris,
>
> concern regarding assets versioning in Community App Catalog indeed
> affects Murano because we are constantly improving our language and
> adding new features, e.g. we added ability to select existing Neutron
> network for particular application in Liberty and if user wants to use
> this feature - his application will be incompatible with Kilo. I think
> this is also valid for Heat because the HOT language is also improving
> with each release.
>
> Thank you for proposing a workaround; I think this is a good way to
> solve the immediate blocker while the Community App Catalog team works on
> handling versions elegantly on their side. Kirill proposed two changes
> in Murano to follow this approach that I've already +2'd:
>
> * https://review.openstack.org/225251 - openstack/murano-dashboard
> * https://review.openstack.org/225249 - openstack/python-muranoclient
>
> Looks like corresponding commit to Community App Catalog is already
> merged [0] and our next step is to prepare new version of applications
> from openstack/murano-apps and then figure out how to publish them
> properly.

Yep, thanks, this looks like a step in the right direction to give us
some wiggle room to handle different versions of assets in the App
Catalog for the next few months.

Down the road we want to make sure that the App Catalog is not closely
tied to any other projects, or how those projects handle versions.  We
will clearly communicate our intentions around versions of assets (and
how to specify which version is desired when retrieving an asset) here
on the mailing list, during the weekly meetings, and during the weekly
cross-project meeting as well.

> P.S. I've also talked with Alexander and Kirill regarding better ways
> to handle versioning for assets in the Community App Catalog, and they
> shared that they are starting work on a PoC using the Glance Artifact
> Repository; perhaps they can share more details regarding this work
> here. We would be happy to work on this together, given that in Liberty
> we implemented experimental support for package versioning inside
> Murano (e.g. having two versions of the same app working side by side)
> [1]
>
> References:
> [0] https://review.openstack.org/224869
> [1] 
> http://murano-specs.readthedocs.org/en/latest/specs/liberty/murano-versioning.html

Thanks, looking forward to the PoC.  We have discussed whether or not
using Glance Artifact Repository makes sense for the App Catalog and
so far the consensus has been that it is not a great fit for what we
need.  Though it brings a lot of great stuff to the table, all we
really need is a place to drop large (and small) binaries.  Swift as a
storage component is the obvious choice for that - the metadata around
the asset itself (when it was added, by whom, rating, version, etc.)
will have to live in a DB anyway.  Given that, seems like Glance is
not an obvious great fit, but like I said I'm looking forward to
hearing/seeing more on this front :)

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] use zuul-cloner when running rspec

2015-09-24 Thread Jeremy Stanley
On 2015-09-24 14:39:49 -0500 (-0500), Alex Schultz wrote:
[...]
> Being able to run tests without internet connectivity is important to
> some people so I want to make sure that can continue without having to
> break the process mid-cycle to try and inject a workaround. It would be
> better if we could have a workaround upfront. For example, make the
> Puppetfile location an environment variable and, if it's not defined, pull
> down the puppet-openstack-integration one?  I wish there were a better
> dependency resolution method than just pulling everything down from
> the internet.  I just know that doesn't work everywhere.

To build on Clark's response, THIS is basically why we use tools
like zuul-cloner. In our CI we're attempting to minimize or even
eliminate network use during tests, and so zuul-cloner leverages
local caches and is sufficiently flexible to obtain the repositories
in question from anywhere you want (which could also just be to
always use your locally cached/modified copy and never hit the
network at all). Pass it whatever Git repository URLs you want,
including file:///some/thing.git if that's your preference.
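The underlying mechanism is plain git: a file:// URL clones entirely from the
local filesystem, no network involved. A throwaway demonstration (all paths
invented for the example):

```shell
# clean slate for the demo
rm -rf /tmp/git-cache /tmp/demo-clone

# build a tiny local "cache" repository
mkdir -p /tmp/git-cache/openstack/demo
cd /tmp/git-cache/openstack/demo
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "seed commit"

# clone it with no network access at all
git clone -q file:///tmp/git-cache/openstack/demo /tmp/demo-clone
```

zuul-cloner layers its branch-selection logic on top of exactly this kind of
local-path cloning.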
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance][nova] how to upgrade from v1 to v2?

2015-09-24 Thread melanie witt
Hi All,

I have been looking and haven't yet located documentation about how to upgrade 
from glance v1 to glance v2.

From what I understand, images and snapshots created with v1 can't be 
listed/accessed through the v2 api. Are there instructions about how to migrate 
images and snapshots from v1 to v2? Are there other incompatibilities between 
v1 and v2?

I'm asking because I have read that glance v1 isn't defcore compliant and so we 
need all projects to move to v2, but the incompatibility from v1 to v2 is 
preventing that in nova. Is there anything else preventing v2 adoption? Could 
we move to glance v2 if there's a migration path from v1 to v2 that operators 
can run through before upgrading to a version that uses v2 as the default?

Thanks,
-melanie (irc: melwitt)







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] use zuul-cloner when running rspec

2015-09-24 Thread Emilien Macchi


On 09/24/2015 05:16 PM, Jeremy Stanley wrote:
> On 2015-09-24 14:39:49 -0500 (-0500), Alex Schultz wrote:
> [...]
>> Being able to run tests without internet connectivity is important to
>> some people so I want to make sure that can continue without having to
>> break the process mid-cycle to try and inject a workaround. It would be
>> better if we could have a workaround upfront. For example, make the
>> Puppetfile location an environment variable and, if it's not defined, pull
>> down the puppet-openstack-integration one?  I wish there were a better
>> dependency resolution method than just pulling everything down from
>> the internet.  I just know that doesn't work everywhere.
> 
> To build on Clark's response, THIS is basically why we use tools
> like zuul-cloner. In our CI we're attempting to minimize or even
> eliminate network use during tests, and so zuul-cloner leverages
> local caches and is sufficiently flexible to obtain the repositories
> in question from anywhere you want (which could also just be to
> always use your locally cached/modified copy and never hit the
> network at all). Pass it whatever Git repository URLs you want,
> including file:///some/thing.git if that's your preference.
> 

So we had a discussion on the #puppet-openstack channel, and Alex's main
concern was that people should still be able to edit a file
(it was .fixtures.yaml; it will be a Puppetfile now) to run tests
against custom dependencies (modules + versions can be whatever you like).

It has been addressed in the last patchset:
https://review.openstack.org/#/c/226830/21..22/Rakefile,cm

So in your case, you'll have to:

1/ Build your Puppetfile that contains your custom deps (instead of
editing the .fixtures.yaml)
2/ Export PUPPETFILE=/path/Puppetfile
3/ Run `rake rspec` like before.
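Concretely, steps 1 and 2 might look like this (the module name, mirror path,
and ref below are invented for the example):

```shell
# 1/ build a Puppetfile that pins your custom dependencies
cat > /tmp/Puppetfile <<'EOF'
mod 'openstacklib',
  :git => 'file:///opt/mirrors/puppet-openstacklib',
  :ref => 'stable/liberty'
EOF

# 2/ point the rake tasks at it
export PUPPETFILE=/tmp/Puppetfile

# 3/ `rake rspec` would then resolve dependencies from $PUPPETFILE
```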

This solution should make everyone happy:

* Default usage (on my laptop) will test the same things as Puppet
OpenStack CI
* Allows using Depends-On in OpenStack CI
* Allows customizing the dependencies for a downstream CI by creating a
Puppetfile and exporting its path.

Deal?
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Compute API (Was Re: [nova][cinder] how to handle AZ bug 1496235?)

2015-09-24 Thread James Penick
>
>
> At risk of getting too offtopic I think there's an alternate solution to
> doing this in Nova or on the client side.  I think we're missing some sort
> of OpenStack API and service that can handle this.  Nova is a low level
> infrastructure API and service, it is not designed to handle these
> orchestrations.  I haven't checked in on Heat in a while but perhaps this
> is a role that it could fill.
>
> I think that too many people consider Nova to be *the* OpenStack API when
> considering instances/volumes/networking/images and that's not something I
> would like to see continue.  Or at the very least I would like to see a
> split between the orchestration/proxy pieces and the "manage my
> VM/container/baremetal" bits


(new thread)
 You've hit on one of my biggest issues right now: As far as many deployers
and consumers are concerned (and definitely what I tell my users within
Yahoo): The value of an OpenStack value-stream (compute, network, storage)
is to provide a single consistent API for abstracting and managing those
infrastructure resources.

 Take networking: I can manage firewalls, switches, IP selection, SDN, etc.
through Neutron. But for compute, if I want a VM I go through Nova, for
baremetal I can -mostly- go through Nova, and for containers I would talk
to Magnum or use something like the nova-docker driver.

 This means that, by default, Nova -is- the closest thing to a top level
abstraction layer for compute. But if that is explicitly against Nova's
charter, and Nova isn't going to be the top level abstraction for all
things Compute, then something else needs to fill that space. When that
happens, all things common to compute provisioning should come out of Nova
and move into that new API: availability zones, quotas, etc.

-James
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2

2015-09-24 Thread Doug Wiegley
Hi Sergey,

I agree with the previous comments here. While supporting several APIs at once, 
with one set of objects, is a noble goal, in this case, the object 
relationships are *completely* different. Unless you want to get into the 
business of redefining your own higher-level API abstractions in all cases, 
that general strategy for all things will be awkward and difficult.

Some API changes lend themselves well to object reuse abstractions. Some don’t. 
Lbaas v2 is definitely the latter, IMO.

What was the result of your meeting discussion?  (*goes to grub around in 
eavesdrop logs after typing this.*)

Thanks,
doug



> On Sep 23, 2015, at 12:09 PM, Sergey Kraynev  wrote:
> 
> Guys, I'm happy that you already discussed it here :)
> However, I'd like to raise the same question at our Heat IRC meeting.
> Probably we should define some common concepts, because I think that LBaaS
> is not the only example of a service with several APIs.
> I will post update in this thread later (after meeting).
> 
> Regards,
> Sergey.
> 
> On 23 September 2015 at 14:37, Fox, Kevin M  > wrote:
> Seperate ns would work great.
> 
> Thanks,
> Kevin
>  
> From: Banashankar KV
> Sent: Tuesday, September 22, 2015 9:14:35 PM
> 
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
> 
> What do you think about separating both of them with the names as Doug mentioned? 
> In future if we want to get rid of the v1 we can just remove that namespace. 
> Everything will be clean. 
> 
> Thanks 
> Banashankar
> 
> 
> On Tue, Sep 22, 2015 at 6:01 PM, Fox, Kevin M  > wrote:
> As I understand it, loadbalancer in v2 is more like pool was in v1. Can we 
> make it such that if you are using the loadbalancer resource and have the 
> mandatory v2 properties that it tries to use v2 api, otherwise its a v1 
> resource? PoolMember should be ok being the same. It just needs to call v1 or 
> v2 depending on if the lb its pointing at is v1 or v2. Is monitor's api 
> different between them? Can it be like pool member?
> 
> Thanks,
> Kevin
>  
> From: Brandon Logan
> Sent: Tuesday, September 22, 2015 5:39:03 PM
> 
> To: openstack-dev@lists.openstack.org 
> 
> Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for LbaasV2
> 
> So for the API v1s api is of the structure:
> 
> /lb/(vip|pool|member|health_monitor)
> 
> V2s is:
> /lbaas/(loadbalancer|listener|pool|healthmonitor)
> 
> member is a child of pool, so it would go down one level.
> 
> The only difference is the lb for v1 and lbaas for v2.  Not sure if that
> is enough of a difference.
> 
> Thanks,
> Brandon
> On Tue, 2015-09-22 at 23:48 +, Fox, Kevin M wrote:
> > Thats the problem. :/
> > 
> > I can't think of a way to have them coexist without: breaking old
> > templates, including v2 in the name, or having a flag on the resource
> > saying the version is v2. And as an app developer I'd rather not have
> > my existing templates break.
> > 
> > I haven't compared the api's at all, but is there a required field of
> > v2 that is different enough from v1 that by its simple existence in
> > the resource you can tell a v2 from a v1 object? Would something like
> > that work? PoolMember wouldn't have to change, the same resource could
> > probably work for whatever lb it was pointing at I'm guessing.
> > 
> > Thanks,
> > Kevin
> > 
> > 
> > 
> > __
> > From: Banashankar KV [banvee...@gmail.com ]
> > Sent: Tuesday, September 22, 2015 4:40 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [neutron][lbaas] - Heat support for
> > LbaasV2
> > 
> > 
> > 
> > Ok, sounds good. So now the question is how should we name the new V2
> > resources ? 
> > 
> > 
> > 
> > Thanks  
> > Banashankar
> > 
> > 
> > 
> > On Tue, Sep 22, 2015 at 4:33 PM, Fox, Kevin M  > >
> > wrote:
> > Yes, hence the need to support the v2 resources as seperate
> > things. Then I can rewrite the templates to include the new
> > resources rather then the old resources as appropriate. IE, it
> > will be a porting effort to rewrite them. Then do a heat
> > update on the stack to migrate it from lbv1 to lbv2. Since
> > they are different resources, it should create the new and
> > delete the old.
> > 
> > Thanks,
> > Kevin
> > 
> > 
> > __
> > From: Banashankar KV [banvee...@gmail.com 
> > ]
> > Sent: Tuesday, September 22, 2015 4:16 PM 
> > 
> > To: OpenStack Development 

[openstack-dev] [Glance][Solum] Using os-auth-token and os-image-url with glance client

2015-09-24 Thread Devdatta Kulkarni
Hi, Glance team,


In Solum, we use Glance to store the Docker images that we create for 
applications. We use the Glance client internally to upload these images. Until 
recently, 'glance image-create' with only a token had been working for us (in 
devstack). Today, I started noticing that glance image-create with just a token 
is not working anymore. It is also not working when os-auth-token and 
os-image-url are passed in. According to the documentation 
(http://docs.openstack.org/developer/python-glanceclient/), passing a token and 
an image-url should work. The client, which I have installed from master, asks 
for a username (and a password, if a username is specified).


Solum does not have access to end-user's password. So we need the ability to 
interact with Glance without providing password, as it has been working till 
recently.
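As a sketch of what token-only auth amounts to at the HTTP level (the endpoint and token values below are made up): the client only needs to send image API requests carrying an X-Auth-Token header, with no username or password involved, which is what --os-auth-token/--os-image-url are supposed to enable:

```python
import urllib.request

# Hypothetical values; in Solum these would come from the request context.
OS_IMAGE_URL = "http://192.0.2.10:9292"
OS_AUTH_TOKEN = "example-token"

# Token-only auth: the image API request carries just X-Auth-Token, no
# username/password, so the client should not prompt for credentials.
req = urllib.request.Request(
    OS_IMAGE_URL + "/v2/images",
    headers={"X-Auth-Token": OS_AUTH_TOKEN},
)
print(req.get_header("X-auth-token"))  # -> example-token
```

Anything that forces a username/password prompt when these two values are supplied breaks this mode of operation.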


I investigated the issue a bit and have filed a bug with my findings.

https://bugs.launchpad.net/python-glanceclient/+bug/1499540


Can someone help with resolving this issue?


Regards,

Devdatta
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] suggestion on commit message title format for the murano-apps repository

2015-09-24 Thread Alexey Khivin
Hello everyone

Almost every commit message in the murano-apps repository contains the
name of the application it relates to.

I suggest specifying the application in the commit message title using a
strict and uniform format.


For example, something like this:

[ApacheHTTPServer] Utilize Custom Network selector

[Docker/Kubernetes] Fix typo


instead of this:

Utilize Custom Network selector in Apache App
Fix typo in Kubernetes Cluster app 


I think it would improve the readability of the message list.

-- 
Regards,
Alexey Khivin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-operators][Rally] Rally plugins reference is available

2015-09-24 Thread Boris Pavlovic
Hi stackers,

As you may know, Rally test cases are created as a mix of plugins.

At this point in time we have more than 200 plugins for almost all
OpenStack projects.
Previously, you had to analyze plugin code or use the "rally plugin find/list"
commands to find the plugins you needed, which was a pain in the neck.

So finally we have auto generated plugin reference:
https://rally.readthedocs.org/en/latest/plugin/plugin_reference.html
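As an example of what these plugins combine into, a minimal task file pairs a scenario plugin with a runner plugin; the scenario name and arguments below are illustrative, not tied to any particular deployment:

```yaml
NovaServers.boot_and_delete_server:
  - args:
      flavor:
        name: "m1.tiny"
      image:
        name: "cirros"
    runner:
      type: "constant"
      times: 10
      concurrency: 2
```

The new reference lets you look up each of these plugin names and their parameters in one place instead of reading the source.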


Best regards,
Boris Pavlovic
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] LnP tests for oslo.messaging

2015-09-24 Thread Huang, Oscar
Hi, 
We would like to set up an LnP test environment for oslo.messaging and 
RabbitMQ so that we can continuously track the stability and performance 
impact of oslo/kombu/amqp library changes and MQ upgrades. 
I wonder whether there are existing packs of test cases that can be 
used as the workload. 
Basically, we want to emulate the running state of a nova cell with a large 
number of computes (>1000), but focus only on the messaging subsystem. 
Thanks.


Best wishes, 

Oscar (黄析伟)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Congress Usecases VM

2015-09-24 Thread Alex Yip
I was able to make devstack run without a network connection by disabling 
tempest.  So, I think it uses the loopback IP address, and that does not 
change, so rejoin-stack.sh works without a network at all.


- Alex




From: Zhou, Zhenzan 
Sent: Thursday, September 24, 2015 4:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Rejoin-stack.sh works only if the VM's IP has not changed, so using a NAT
network and a fixed IP inside the VM helps.

BR
Zhou Zhenzan

From: Alex Yip [mailto:a...@vmware.com]
Sent: Friday, September 25, 2015 01:37
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I have been using images, rather than snapshots.



It doesn't take that long to start up.  First, I boot the VM which takes a 
minute or so.  Then I run rejoin-stack.sh which takes just another minute or 
so.  It's really not that bad, and rejoin-stack.sh restores vms and openstack 
state that was running before.



- Alex






From: Shiv Haris
Sent: Thursday, September 24, 2015 10:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Hi Congress folks,

I am looking for ideas. We want OpenStack to be running when the user 
instantiates the Usecase-VM. However, creating an OVA file is possible only when 
the VM is halted, which means OpenStack is not running and the user will have to 
run devstack again (which is time consuming) when the VM is restarted.

The option is to take a snapshot. It appears that taking a snapshot of the VM 
and using it in another setup is not very straightforward. It involves 
modifying the .vbox file and seems prone to user errors. I am 
leaning towards halting the machine and generating an OVA file.

I am looking for suggestions ….

Thanks,

-Shiv


From: Shiv Haris [mailto:sha...@brocade.com]
Sent: Thursday, September 24, 2015 9:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

First of all I apologize for not making it at the meeting yesterday, could not 
cut short another overlapping meeting.

Also, Tim thanks for the feedback. I have addressed some of the issues you 
posed however I am still working on some of the subtle issues raised. Once I 
have addressed all I will post another VM by end of the week.

-Shiv


From: Tim Hinrichs [mailto:t...@styra.com]
Sent: Friday, September 18, 2015 5:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

It's great to have this available!  I think it'll help people understand what's 
going on MUCH more quickly.

Some thoughts.
- The image is 3GB, which took me 30 minutes to download.  Are all VMs this 
big?  I think we should finish this as a VM but then look into doing it with 
containers to make it EVEN easier for people to get started.

[shivharis] Yes, unfortunately that is the case. The disk size I set is 20GB – 
but the OVA compresses the image and disk to 3 GB. I will be looking at other 
options.


- It gave me an error about a missing shared directory when I started up.
[shivharis] will fix this

- I expected devstack to be running when I launched the VM.  devstack startup 
time is substantial, and if there's a problem, it's good to assume the user 
won't know how to fix it.  Is it possible to have devstack up and running when 
we start the VM?  That said, it started up fine for me.
[shivharis] OVA files can be created only when the VM is halted, so devstack 
will be down when you bring up  the VM. I agree a snapshot will be a better 
choice.

- It'd be good to have a README to explain how to use the use-case structure. 
It wasn't obvious to me.
[shivharis] added.

- The top-level dir of the Congress_Usecases folder has a Congress_Usecases 
folder within it.  I assume the inner one shouldn't be there?
[shivharis] my automation issues, fixed.

- When I ran the 10_install_policy.sh, it gave me a bunch of authorization 
problems.
[shivharis] fixed

But otherwise I think the setup looks reasonable.  Will there be an undo script 
so that we can run the use cases one after another without worrying about 
interactions?
[shivharis] tricky, will find some way out.

Tim

[shivharis] Thanks

On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris wrote:
Hi Congress folks,

BTW the login/password for the VM is vagrant/vagrant

-Shiv


From: Shiv Haris [mailto:sha...@brocade.com]
Sent: Thursday, September 17, 2015 5:03 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Congress] Congress Usecases VM

Hi 

Re: [openstack-dev] [nova][cinder] how to handle AZ bug 1496235?

2015-09-24 Thread Mathieu Gagné
On 2015-09-24 3:04 AM, Duncan Thomas wrote:
> 
> I proposed a session at the Tokyo summit for a discussion of Cinder AZs,
> since there was clear confusion about what they are intended for and how
> they should be configured. Since then I've reached out to and gotten
> good feedback from, a number of operators.

Thanks for your proposition. I will make sure to attend this session.


> There are two distinct
> configurations for AZ behaviour in cinder, and both sort-of worked until
> very recently.
> 
> 1) No AZs in cinder
> This is the config where there is a single 'blob' of storage (most of the
> operators who responded so far are using Ceph, though that isn't
> required). The storage takes care of availability concerns, and any AZ
> info from nova should just be ignored.

Unless I'm very mistaken, I think it's the main "feature" missing from
OpenStack itself. The concept of AZ isn't global and anyone can still
make it so Nova AZ != Cinder AZ.

In my opinion, AZ should be a global concept where they are available
and the same for all services so Nova AZ == Cinder AZ. This could result
in a behavior similar to "regions within regions".

We should survey and ask how AZ are actually used by operators and
users. Some might create an AZ for each server racks, others for each
power segments in their datacenter or even business units so they can
segregate to specific physical servers. Some AZ use cases might just be
a "perverted" way of bypassing shortcomings in OpenStack itself. We
should find out those use cases and see if we should still support them
or offer them an existing or new alternatives.

(I don't run Ceph yet, only SolidFire but I guess the same could apply)

For people running Ceph (or other big clustered block storage), they
will have one big Cinder backend. For resources or business reasons,
they can't afford to create as many clusters (and Cinder AZ) as there
are AZ in Nova. So they end up with one big Cinder AZ (lets call it
az-1) in Cinder. Nova won't be able to create volumes in Cinder az-2 if
an instance is created in Nova az-2.

May I suggest the following solutions:

1) Add ability to disable this whole AZ concept in Cinder so it doesn't
fail to create volumes when Nova asks for a specific AZ. This could
result in the same behavior as cinder.cross_az_attach config.
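For reference, the existing knob mentioned here lives on the Nova side; a hedged sketch of what disabling the coupling looks like (the option's exact section and name have varied between releases):

```ini
# nova.conf — illustrative; check your release's config reference
[cinder]
cross_az_attach = False
```

Suggestion 1 above would effectively be the Cinder-side counterpart: let Cinder ignore the requested AZ instead of failing the volume create.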

2) Add ability for a volume backend to be in multiple AZ. Of course,
this would defeat the whole AZ concept. This could however be something
our operators/users might accept.


> 2) Cinder AZs map to Nova AZs
> In this case, some combination of storage / networking / etc couples
> storage to nova AZs. It is may be that an AZ is used as a unit of
> scaling, or it could be a real storage failure domain. Eitehr way, there
> are a number of operators who have this configuration and want to keep
> it. Storage can certainly have a failure domain, and limiting the
> scalability problem of storage to a single cmpute AZ can have definite
> advantages in failure scenarios. These people do not want cross-az attach.
> 
> My hope at the summit session was to agree these two configurations,
> discuss any scenarios not covered by these two configuration, and nail
> down the changes we need to get these to work properly. There's
> definitely been interest and activity in the operator community in
> making nova and cinder AZs interact, and every desired interaction I've
> gotten details about so far matches one of the above models.


-- 
Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Congress Usecases VM

2015-09-24 Thread Shiv Haris
First of all I apologize for not making it at the meeting yesterday, could not 
cut short another overlapping meeting.

Also, Tim thanks for the feedback. I have addressed some of the issues you 
posed however I am still working on some of the subtle issues raised. Once I 
have addressed all I will post another VM by end of the week.

-Shiv


From: Tim Hinrichs [mailto:t...@styra.com]
Sent: Friday, September 18, 2015 5:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

It's great to have this available!  I think it'll help people understand what's 
going on MUCH more quickly.

Some thoughts.
- The image is 3GB, which took me 30 minutes to download.  Are all VMs this 
big?  I think we should finish this as a VM but then look into doing it with 
containers to make it EVEN easier for people to get started.

[shivharis] Yes, unfortunately that is the case. The disk size I set is 20GB – 
but the OVA compresses the image and disk to 3 GB. I will be looking at other 
options.


- It gave me an error about a missing shared directory when I started up.
[shivharis] will fix this

- I expected devstack to be running when I launched the VM.  devstack startup 
time is substantial, and if there's a problem, it's good to assume the user 
won't know how to fix it.  Is it possible to have devstack up and running when 
we start the VM?  That said, it started up fine for me.
[shivharis] OVA files can be created only when the VM is halted, so devstack 
will be down when you bring up  the VM. I agree a snapshot will be a better 
choice.

- It'd be good to have a README to explain how to use the use-case structure. 
It wasn't obvious to me.
[shivharis] added.

- The top-level dir of the Congress_Usecases folder has a Congress_Usecases 
folder within it.  I assume the inner one shouldn't be there?
[shivharis] my automation issues, fixed.

- When I ran the 10_install_policy.sh, it gave me a bunch of authorization 
problems.
[shivharis] fixed

But otherwise I think the setup looks reasonable.  Will there be an undo script 
so that we can run the use cases one after another without worrying about 
interactions?
[shivharis] tricky, will find some way out.

Tim

[shivharis] Thanks

On Fri, Sep 18, 2015 at 11:03 AM Shiv Haris wrote:
Hi Congress folks,

BTW the login/password for the VM is vagrant/vagrant

-Shiv


From: Shiv Haris [mailto:sha...@brocade.com]
Sent: Thursday, September 17, 2015 5:03 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Congress] Congress Usecases VM

Hi All,

I have put my VM (virtualbox) at:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_17_2015.ova

I usually run this on a macbook air – but it should work on other platforms as 
well. I chose virtualbox since it is free.

Please send me your usecases – I can incorporate them in the VM and send you an 
updated image. Please take a look at the structure I have in place for the 
first usecase; I would prefer it be the same for other usecases. (However, I am 
still open to suggestions for changes.)

Thanks,

-Shiv

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] should we use fsync when writing iscsi config file?

2015-09-24 Thread Chris Friesen

On 09/22/2015 06:19 PM, John Griffith wrote:

On Tue, Sep 22, 2015 at 6:17 PM, John Griffith 

Re: [openstack-dev] [puppet] use zuul-cloner when running rspec

2015-09-24 Thread Emilien Macchi


On 09/24/2015 10:14 AM, Alex Schultz wrote:
> On Wed, Sep 23, 2015 at 4:56 PM, Emilien Macchi  wrote:
>> Background
>> ==
>>
>> Current rspec tests are tested with modules mentioned in .fixtures.yaml
>> file of each module.
>>
>> * the file is not consistent across all modules
>> * it hardcodes module names & versions
>> * this way does not allow to use "Depend-On" feature, that would allow
>> to test cross-modules patches
>>
>> Proposal
>> 
>>
>> * Like we do in beaker & integration jobs, use zuul-cloner to clone
>> modules in our CI jobs.
>> * Use r10k to prepare fixtures modules.
>> * Use Puppetfile hosted by openstack/puppet-openstack-integration
>>
>> In that way:
>> * we will have modules name + versions testing consistency across all
>> modules
>> * the same Puppetfile would be used by unit/beaker/integration testing.
>> * the patch that passes tests on your laptop would pass tests in upstream CI
>> * if you don't have zuul-cloner on your laptop, don't worry it will use
>> git clone. Though you won't have Depends-On feature working on your
>> laptop (technically not possible).
>> * Though your patch will support Depends-On in OpenStack Infra for unit
>> tests. If you submit a patch in puppet-openstacklib that drop something
>> wrong, you can send a patch in puppet-nova that will test it, and unit
>> tests will fail.
>>
>> Drawbacks
>> =
>> * cloning from .fixtures.yaml takes ~ 10 seconds
>> * using r10k + zuul-clone takes ~50 seconds (more modules to clone).
>>
>> I think 40 seconds is an acceptable cost given the benefit.
>>
> 
> As someone who consumes these modules downstream and has our own CI
> setup to run the rspec items, this ties it too closely to the
> openstack infrastructure. If we replace the .fixtures.yml with
> zuul-cloner, it assumes I always want the openstack version of the
> modules. This is not necessarily true. I like being able to replace
> items within fixtures.yml when doing dev work. For example If i want
> to test upgrading another module not related to openstack, like
> inifile, how does that work with the proposed solution?  This is also
> moving away from general puppet module conventions for testing. My
> preference would be that this be a different task and we have both
> .fixtures.yml (for general use/development) and the zuul method of
> cloning (for CI).  You have to also think about this from a consumer
> standpoint and this is adding an external dependency on the OpenStack
> infrastructure for anyone trying to run rspec or trying to consume the
> published versions from the forge.  Would I be able to run these tests
> in an offline mode with this change? With the .fixtures.yml it's a
> minor edit to switch to local versions. Is the same true for the
> zuul-cloner version?

What you did before:
* Edit .fixtures.yaml and put the version you like.

What you would do this the current proposal:
* Edit openstack/puppet-openstack-integration/Puppetfile and put the
version you like.
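For example, pinning a module in that Puppetfile is a one-line edit; the URL and ref below are illustrative of the r10k syntax:

```ruby
# Puppetfile (r10k) — pin puppet-nova to the branch/ref you want to test
mod 'nova',
  :git => 'https://git.openstack.org/openstack/puppet-nova',
  :ref => 'master'
```

So the local-override workflow still exists, it just moves from .fixtures.yaml to the shared Puppetfile.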

What you're suggesting has a huge downside:
People will still use fixtures by default and not test what is actually
tested by our CI.
A few people will know about the specific Rake task so a few people will
test exactly what upstream does. That will cause frustration for most of
the people who will see tests failing in our CI but not on their laptops.
I'm not sure we want that.

I think most of the people that run tests on their laptops want to
see them passing in upstream CI.
The few people that want to tweak versions & modules will have to run
Rake, tweak the Puppetfile and run Rake again. It's not a big deal and
I'm sure those few people can deal with that.

>>
>> Next steps
>> ==
>>
>> * PoC in puppet-nova: https://review.openstack.org/#/c/226830/
>> * Patch openstack/puppet-modulesync-config to be consistent across all
>> our modules.
>>
>> Bonus
>> =
>> we might need (asap) a canary job for puppet-openstack-integration
>> repository, that would run tests on a puppet-* module (since we're using
>> install_modules.sh & Puppetfile files in puppet-* modules).
>> Nothing has been done yet for this work.
>>
>>
>> Thoughts?
>> --
>> Emilien Macchi
>>
>>
> 
> I think we need this functionality, I just don't think it's a
> replacement for the .fixures.yml.
> 
> Thanks,
> -Alex
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [third-party] Nodepool: OpenStackCloudException: Image creation failed: 403 Forbidden: Attribute 'is_public' is reserved. (HTTP 403)

2015-09-24 Thread Asselin, Ramy
If anyone is getting the following stack trace in nodepool trying to upload 
images to their providers:

2015-09-24 09:16:53,639 ERROR nodepool.DiskImageUpdater: Exception updating 
image dpc in p222fc:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/nodepool/nodepool.py", line 979, 
in _run
self.updateImage(session)
  File "/usr/local/lib/python2.7/dist-packages/nodepool/nodepool.py", line 
1023, in updateImage
self.image.meta)
  File "/usr/local/lib/python2.7/dist-packages/nodepool/provider_manager.py", 
line 531, in uploadImage
**meta)
  File "/usr/local/lib/python2.7/dist-packages/shade/__init__.py", line 1401, 
in create_image
"Image creation failed: {message}".format(message=str(e)))
OpenStackCloudException: Image creation failed: 403 Forbidden: Attribute 
'is_public' is reserved. (HTTP 403)


This was worked-around / fixed in this patch of shade which merged a few hours 
ago: https://review.openstack.org/#/c/226492/

It will be released to PyPI "soon", but I don't know when. In the meantime you 
can pip install it from openstack-infra/shade master.

Ramy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

