Re: [openstack-dev] [tripleo] plans on testing minor updates?

2017-09-28 Thread Marios Andreou
On Thu, Sep 28, 2017 at 9:50 AM, mathieu bultel  wrote:

> Hi,
>
>
> On 09/28/2017 05:05 AM, Emilien Macchi wrote:
> > I was reviewing https://review.openstack.org/#/c/487496/ and
> > https://review.openstack.org/#/c/487488/ when I realized that we still
> > didn't have any test coverage for minor updates.
> > We never had this coverage AFAICT, but this is not a reason not to push
> > forward with it.
> Thank you for the review and the -2! :)
> So I agree with you, we need CI coverage for that part, and I was
> wondering how I can quickly put a test in CI for the minor update.
> But before that, just a few things to take into account regarding those
> reviews:
>
>
agree on the need for the CI coverage, but disagree on blocking this. By
the same logic we should not have landed anything minor-update related
during the previous cycle. This is the very last part for
https://bugs.launchpad.net/tripleo/+bug/1715557 - wiring up the mechanism
into the client, and what's more matbu has managed to do it 'properly' with a
tripleo-common mistral action wired up to the tripleoclient cli.

I don't think it's right that we don't have coverage, but I also don't think
it's right to block these last patches.

thanks





> 1/ Those patches are needed for Pike and we are pretty (pretty, pretty)
> late.
> The reviews were implemented 2 or 3 weeks ago, but we have made a lot of
> tests with both dev and QE environments (QE more complex and realistic
> than a dev env or even a CI env: Ceph nodes, multiple computes and controllers)
> to be sure to have something clearly working with as few bugs as possible.
> I think it is.
>
> 2/ All those patches are touching code which is not (and never was) tested
> by CI at all... which is bad, but Rome was not built in one day, right?
> ;)  No job for config download, no job for minor update, no job for ...
> Yes, I can go on; there are a lot of features in TripleO without CI
> coverage.
>
> 3/ The config download code has no CI tests at all, except unit tests of
> course, and the minor update "core" feature has already been implemented
> and merged. Those reviews are "only" CLI implementations.
>
> 4/ I tried to push unit tests on all parts of the reviews; I think it's
> an acceptable test status for now to get this landed. Unit tests can
> be, in some cases, more relevant than big CI (integration) tests.
>
> 5/ Why not, instead of blocking the reviews, make a follow-up review
> with the CI coverage? I know it would be better to do it now, or even
> earlier, but I think it can be sane to just create a blocker LP and
> implement the workflow for master.
>
> 6/ In the meantime, I think we need to work on a workflow for future
> features to implement regarding CI.
> Can someone from the CI squad help to implement new features? Or do
> new features only belong to the DFG which creates them? (If so, I would
> say for Upgrades: hey guys, we don't care about upgrading your stuff, do
> it yourself and fix your bugs ;))
>
> So I understand the concerns, but my worry here is that this feature is
> needed for Pike, and implementing a new job now will take a very long time
> and add more delay to the workflow to have it in P.
> If the target was Queens, I would say "yes, let's push for great CI
> coverage of this feature".
>
> Can we make a consensus ?
>
> > During Ocata and Pike, we saw that having upgrade jobs was extremely
> > useful to actually test the workflow that our users are supposed to run
> > in production; I see zero reason not to do the same for minor
> > updates.
> > I don't want to be the bad guy here, but I've -2'd the 2 patches until we
> > find some consensus here (sorry matbu, it's not against you or your
> > code specifically, but more generally speaking about implementing
> > features without CI coverage).
> >
> > I'm really willing to help and start to work on tripleo-quickstart
> > roles this week, if someone agrees to pair with me - so we could make
> > progress and have that coverage. Even if the new job would fail,
> > that's OK; we know the process might work (or not, TBH, I haven't tried
> > it, probably shardy and some other folks know more about it). Once we
> > have the workflow in place, we then iterate on matbu's patches and make
> > them work in CI so we can ship the feature and be proud to have it
> > tested.
> > That's IMHO how we should write our software.
> >
> > If there is any feedback on this, please let us know here, otherwise
> > I'll keep my -2 until we've got this coverage in place. Also please
> > someone (maybe matbu?) raise your hand if you want to pair up and do
> > this quickly.
> >
> > Thanks,
>
>
>

Re: [openstack-dev] [Openstack-operators] [tc][nova][ironic][mogan] Evaluate Mogan project

2017-09-28 Thread Zhenguo Niu
Thanks Sean for raising the concerns. We don't really fork Nova, only some
parts of its "ABI". For the two API surfaces we have different
strategies; please see the explanations below:

On Wed, Sep 27, 2017 at 10:34 PM, Sean Dague  wrote:

> On 09/27/2017 09:31 AM, Julia Kreger wrote:
> > [...]
> >>> The short explanation which clicked for me (granted it's probably an
> >>> oversimplification, but still) was this: Ironic provides an admin
> >>> API for managing bare metal resources, while Mogan gives you a user
> >>> API (suitable for public cloud use cases) to your Ironic backend. I
> >>> suppose it could have been implemented in Ironic, but implementing
> >>> it separately allows Ironic to be agnostic to multiple user
> >>> frontends and also frees the Ironic team up from having to take on
> >>> yet more work directly.
> >>
> >>
> >> ditto!
> >>
> >> I had a similar question at the PTG and this was the answer that
> >> convinced me it may be worth the effort.
> >>
> >> Flavio
> >>
> >
> > For Ironic, the question of tenant-aware scheduling of owned hardware
> > did come up at the PTG, as in: Customers A and B are managed by
> > the same Ironic, but only Customer A's users should be able to schedule
> > onto Customer A's hardware, with API access control restrictions such
> > that a specific customer can take action on their own hardware.
> >
> > If we go down the path of supporting such views/logic, it could become
> > a massive undertaking for Ironic, so there is absolutely a plus to
> > something doing much of that for Ironic. Personally, I think Mogan is
> > a good direction to continue to explore. That being said, we should
> > improve our communication of plans/directions/perceptions between the
> > teams so we don't adversely impact each other and see where we can
> > help each other moving forward.
>
> My biggest concern with Mogan is that it forks Nova, then starts
> changing interfaces. Nova's got 2 really big API surfaces.
>
> 1) The user facing API, which is reasonably well documented, and under
> tight control. Mogan has taken key things at 95% similarity and changed
> bits. So 'servers' includes things like a 'partitions' parameter.
> https://github.com/openstack/mogan/blob/master/api-ref/source/v1/servers.inc#request-4
>
> This being nearly the same but slightly different ends up being really
> weird, especially as Nova evolves its code with microversions for
> things like embedded flavor info.
>
>
For user facing API, We defined a new set of API instead of following Nova,
which is more specific for bare metals. The similarity of key things is
because virtual machines and bare metals key attributes are similar
naturally. Mogan is relatively new project, with more features introduced,
things will become different in future.


> 2) The guest facing API of metadata/config drive. This is far less
> documented or tested, and while we try to be strict about adding in
> information here in a versioned way, it's never seen the same attention
> as the user API on either documentation or version rigor.
>
> That's presumably getting changed and going to drift as well, which means
> discovering multiple implementations that are nearly, but not exactly,
> the same.
>
>
Regarding the guest facing API, we only support config drive now, which is
copied from Nova, and we don't want to diverge from it. For this part, we
will try to sync with Nova periodically, or maybe refactoring these files into
a shared library is the best way; we will try to figure that out.


>
> The point of licensing things under an Apache 2 license was to enable
> folks to do all kinds of experiments like this. And experiments are good.
> But part of the point of experiments is to learn lessons to bring back
> into the fold. Digging out of the multi-year hole of "close but not
> exactly the same" API differences between nova-net and neutron really
> makes me want to make sure we never intentionally inflict that confusion
> on folks again.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>



-- 
Best Regards,
Zhenguo Niu


Re: [openstack-dev] [tripleo] plans on testing minor updates?

2017-09-28 Thread Steven Hardy
On Thu, Sep 28, 2017 at 8:04 AM, Marios Andreou  wrote:
>
>
> On Thu, Sep 28, 2017 at 9:50 AM, mathieu bultel  wrote:
>>
>> Hi,
>>
>>
>> On 09/28/2017 05:05 AM, Emilien Macchi wrote:
>> > I was reviewing https://review.openstack.org/#/c/487496/ and
>> > https://review.openstack.org/#/c/487488/ when I realized that we still
>> > didn't have any test coverage for minor updates.
>> > We never had this coverage AFAICT, but this is not a reason not to push
>> > forward with it.
>> Thank you for the review and the -2! :)
>> So I agree with you, we need CI coverage for that part, and I was
>> wondering how I can quickly put a test in CI for the minor update.
>> But before that, just a few things to take into account regarding those
>> reviews:
>>
>
> agree on the need for the CI coverage, but disagree on blocking this. By the
> same logic we should not have landed anything minor-update related during
> the previous cycle. This is the very last part for
> https://bugs.launchpad.net/tripleo/+bug/1715557 - wiring up the mechanism
> into the client, and what's more matbu has managed to do it 'properly' with a
> tripleo-common mistral action wired up to the tripleoclient cli.
>
> I don't think it's right that we don't have coverage, but I also don't think
> it's right to block these last patches.

Yeah I agree - FWIW we have discussed this before, and AIUI the plan was:

1 - Get multinode coverage of an HA deployment with more than one
controller (e.g. the 3nodes job) but with containers enabled
2 - Implement a rolling minor update test based on that
multi-controller HA-with-containers test

AFAIK we're only starting to get containers+pacemaker CI scenarios
working with one controller, so it's not really reasonable to block
this, since that is a prerequisite to the multi-controller test, which
is a prerequisite to the rolling update test.

Personally I think we'd be best to aim directly for the rolling update
test in CI, as doing a single node minor update doesn't really test
the most important aspect (e.g. zero downtime).

The other challenge here is the walltime relative to the CI timeout -
we've been running into that for the containers upgrade job, and I
think we need to figure out optimizations there which may also be
required for minor update testing (maybe we can work around that by
only updating a very small number of containers, but that will reduce
the test coverage considerably?)

I completely agree we need this coverage, and honestly we should have
had it a long time ago, but we need to make progress on this last
critical blocker for pike, while continuing to make progress on the CI
coverage (which should certainly be a top priority for the Lifecycle
squad, as soon as we have this completely new-for-pike minor updates
workflow fully implemented and debugged).

Thanks,

Steve



[openstack-dev] Reminder (1 day left) -- Forum Topic Submission

2017-09-28 Thread Flavio Percoco

Hello Everyone,

This is a friendly reminder that the submission period ends tomorrow (Sep 29th).
Take some time to think about the topics you would like to talk about and submit
them at:

http://forumtopics.openstack.org/cfp/create

Submit your topic before 11:59PM UTC on Friday September 29th!

Regards,

UC/TC



Re: [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-28 Thread Lee Yarwood
On 20-09-17 14:56:20, arkady.kanev...@dell.com wrote:
> Lee,
> I can chair meeting in Sydney.
> Thanks,
> Arkady

Thanks Arkady!

FYI I see that emccormickva has created the following Forum session to
discuss FF upgrades:

http://forumtopics.openstack.org/cfp/details/19

You might want to reach out to him to help craft the agenda for the
session based on our discussions in Denver.

Thanks again,

Lee
-- 
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76



[openstack-dev] [kuryr] vPTG schedule

2017-09-28 Thread Antoni Segura Puimedon
Hi fellow Kuryrs!

It's that time of the cycle again where we hold our virtual project team
gathering[0]. The dates this time are:

October 2nd, 3rd and 4th

The proposed sessions are:

October 2nd 13:00utc: Scale discussion
In this session we'll talk about the recent scale testing we have performed
on a 112-node cluster. From this starting point, we'll work on identifying
and prioritizing several initiatives to improve the performance of the
pod-in-VM and the baremetal scenarios.

October 2nd 14:00utc: Scenario testing
The September 27th release of zuulv3 opens the gates for better scenario
testing, especially regarding multinode scenarios. We'll discuss the tasks
and outstanding challenges to achieve good scenario test coverage and
document well how to write these tests in our tempest plugin.

October 3rd 13:00utc: Multi networks
As the Kubernetes community Network SIG draws near to having a consensus on
multi network implementations, we must elaborate a plan on a PoC that takes
the upstream Kubernetes consensus and implements it with Kuryr-Kubernetes
in a way that we can serve normal overlay and accelerated networking.

October 4th 14:00utc: Network Policy
Each cycle we aim to narrow the gap between Kubernetes networking entities
and our translations. In this cycle, apart from the Loadbalancer service
type support, we'll be tackling how we map Network Policy to Neutron
networking. This session will first lay out Network Policy and its use and
then discuss one or more mappings.

October 5th 13:00utc: Kuryr-libnetwork
We'll do the cycle planning for Kuryr-libnetwork: blueprints, bugs, and
general discussion.

October 6th 14:00utc: Fuxi
In this session we'll discuss everything related to storage, both in the
Docker and in the Kubernetes worlds.


I'll put the links to the bluejeans sessions in the etherpad[0].


[0] https://etherpad.openstack.org/p/kuryr-queens-vPTG



Re: [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-28 Thread Lee Yarwood
On 21-09-17 15:10:52, Thierry Carrez wrote:
> Sean Dague wrote:
> > Agreed. We're already at 5 upgrade tags now?
> > 
> > I think honestly we're going to need a picture to explain the
> > differences between them. Based on the confusion that kept seeming to
> > come up during discussions at the PTG, I think we need to circle around and
> > figure out if there are different ways to explain this to have greater
> > clarity.
> 
> In the TC/SWG room we reviewed the tags, and someone suggested that any
> tag that doesn't even have one project to apply it to should probably be
> removed.
> 
> That would get rid of 3 of them: supports-accessible-upgrade,
> supports-zero-downtime-upgrade, and supports-zero-impact-upgrade (+
> supports-api-interoperability which has had little support so far).
> 
> They can always be resurrected when a project reaches new heights?

I've added some brief comments to the following change looking to remove
the `supports-accessible-upgrade` tag:

Remove assert:supports-accessible-upgrade tag
https://review.openstack.org/#/c/506263/

Grenade already verifies that some resources are accessible
once services are offline at the start of an upgrade[1][2] for a number
of projects such as nova[3] and cinder[4]. I think that's enough to keep
the tag around and to also associate any such project with this tag.

[1] https://github.com/openstack-dev/grenade#basic-flow
[2] 
https://github.com/openstack-dev/grenade/blob/03de9e0fc7f4fc50a00db5d547413e26cf0780dd/grenade.sh#L315-L317
[3] 
https://github.com/openstack-dev/grenade/blob/master/projects/60_nova/resources.sh#L134-L137
[4] 
https://github.com/openstack-dev/grenade/blob/master/projects/70_cinder/resources.sh#L230-L243

Lee
-- 
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76




Re: [openstack-dev] [nova] Running large instances with CPU pinning and OOM

2017-09-28 Thread Sahid Orentino Ferdjaoui
On Wed, Sep 27, 2017 at 11:10:40PM +0200, Premysl Kouril wrote:
> > Lastly, qemu has overhead that varies depending on what you're doing in the
> > guest.  In particular, there are various IO queues that can consume
> > significant amounts of memory.  The company that I work for put in a good
> > bit of effort engineering things so that they work more reliably, and part
> > of that was determining how much memory to reserve for the host.
> >
> > Chris
> 
> Hi, I work with Jakub (the OP of this thread) and here are my two
> cents: I think what is critical to realize is that KVM virtual
> machines can have a substantial memory overhead of up to 25% of the memory
> allocated to the KVM virtual machine itself. This overhead memory is not
> considered in nova's code when calculating whether the instance being
> provisioned actually fits into the host's available resources (only the
> memory configured in the instance's flavor is considered). And this is
> especially a problem when CPU pinning is used, as the memory
> allocation is bounded by the limits of the specific NUMA node (due to the
> strict memory allocation mode). This renders the global reservation
> parameter reserved_host_memory_mb useless, as it doesn't take NUMA into
> account.
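As a rough numeric illustration of the gap being described (the flavor size
and the 25% worst-case overhead figure are assumptions, not measurements):

    # Sketch of the accounting gap: nova only accounts the flavor RAM,
    # while the pinned guest can consume flavor RAM plus QEMU overhead
    # on the same NUMA node.
    flavor_ram_mb = 65536          # RAM configured in the instance's flavor
    overhead_ratio = 0.25          # assumed worst-case per-guest overhead

    actual_mb = int(flavor_ram_mb * (1 + overhead_ratio))
    print("accounted: %d MB, possible usage: %d MB, unaccounted: %d MB"
          % (flavor_ram_mb, actual_mb, actual_mb - flavor_ram_mb))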

Only the memory mapped for the guest is strictly allocated from the
NUMA node selected. The QEMU overhead should float on the host NUMA
nodes. So it seems that the "reserved_host_memory_mb" is enough.

> This KVM virtual machine overhead is what is causing the OOMs in our
> infrastructure and that's what we need to fix.
> 
> Regards,
> Prema
> 


Re: [openstack-dev] [Oslo][oslo.messaging][all] Notice: upcoming change to oslo.messaging RPC server

2017-09-28 Thread ChangBo Guo
Ken, thanks for raising this. The Oslo team will send notice earlier when we
have major changes like this.

2017-09-27 4:17 GMT+08:00 Ken Giusti :

> Hi Folks,
>
> Just a head's up:
>
> In Queens the default access policy for RPC Endpoints will change from
> LegacyRPCAccessPolicy to DefaultRPCAccessPolicy.  RPC calls to private
> ('_' prefix) methods will no longer be possible.  If you want to allow
> RPC Clients to invoke private methods, you must explicitly set the
> access_policy to LegacyRPCAccessPolicy when you call get_rpc_server()
> or instantiate an RPCDispatcher.  This change [0] has been merged to
> oslo.messaging master and will appear in the next release of
> oslo.messaging.
>
> "Umm What?"
>
> Good question! Here are the TL;DR details:
>
> Since forever it's been possible for a client to make an RPC call
> against _any_ method defined in the RPC Endpoint object.  And by "any"
> we mean "all methods including private ones (method names prefixed by
> '_' )"
>
> Naturally this ability came as a surprise to many folks [1], including
> yours truly and others on the oslo team [2].  It was agreed that
> having this be the default behavior was indeed A Bad Thing.
>
> So starting in Ocata oslo.messaging has provided a means for
> controlling access to Endpoint methods [3].  Oslo.messaging now
> defines three different "access control policies" that can be applied
> to an RPC Server:
>
> LegacyRPCAccessPolicy: original behavior - any method can be invoked
> by an RPC client
> DefaultRPCAccessPolicy: prevent RPC access to private '_' methods, all
> others may be invoked
> ExplicitRPCAccessPolicy: only allow access to those methods that have
> been decorated with @expose decorator
>
> See [4] for more details.
>
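For illustration, a minimal sketch of the opt-in ExplicitRPCAccessPolicy
mentioned above (assuming the policy and the @expose decorator are importable
from oslo_messaging.rpc, like LegacyRPCAccessPolicy in the example below):

    from oslo_config import cfg
    import oslo_messaging
    from oslo_messaging.rpc import ExplicitRPCAccessPolicy, expose

    class DemoEndpoint(object):
        @expose
        def pingable(self, ctxt):
            # Decorated, so RPC clients may invoke it under the
            # explicit policy.
            return True

        def hidden(self, ctxt):
            # Not decorated, so the dispatcher rejects RPC calls to it.
            return False

    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='demo', server='server-1')
    server = oslo_messaging.get_rpc_server(
        transport, target, [DemoEndpoint()],
        access_policy=ExplicitRPCAccessPolicy)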
> In order not to break anything at the time, the default access policy
> was set to 'LegacyRPCAccessPolicy'.  This has been the default for
> Ocata and Pike.
>
> Starting in Queens this will no longer be the case.
> DefaultRPCAccessPolicy will become the default if no access policy is
> specified when calling get_rpc_server() or directly instantiating an
> RPCDispatcher.  To keep the old behavior you must explicitly set the
> access policy to LegacyRPCAccessPolicy:
>
> from oslo_messaging.rpc import LegacyRPCAccessPolicy
> ...
> server = get_rpc_server(transport, target, endpoints,
>                         access_policy=LegacyRPCAccessPolicy)
>
>
>
> Reply here if you have any questions or hit any issues, thanks!
>
> -K
>
> [0] https://review.openstack.org/#/c/500456/
> [1] https://bugs.launchpad.net/oslo.messaging/+bug/1194279
> [2] https://bugs.launchpad.net/oslo.messaging/+bug/1555845
> [3] https://review.openstack.org/#/c/358359/
> [4] https://docs.openstack.org/oslo.messaging/latest/reference/server.html
> --
> Ken Giusti  (kgiu...@gmail.com)
>



-- 
ChangBo Guo(gcb)
Community Director @EasyStack


Re: [openstack-dev] l2gw

2017-09-28 Thread Ricardo Noriega De Soto
I see the exception now Lajos:

class L2GatewayInUse(exceptions.InUse):
    message = _("L2 Gateway '%(gateway_id)s' still has active mappings "
                "with one or more neutron networks.")

:-)

On Wed, Sep 27, 2017 at 6:40 PM, Ricardo Noriega De Soto <
rnori...@redhat.com> wrote:

> Hey Lajos,
>
> Is this the exception you are encountering?
>
> (neutron) l2-gateway-update --device name=hwvtep,interface_names=eth0,eth1
> gw1
> L2 Gateway 'b8ef7f98-e901-4ef5-b159-df53364ca996' still has active
> mappings with one or more neutron networks.
> Neutron server returns request_ids: ['req-f231dc53-cb7d-4221-ab74-
> fa8715f85869']
>
> I don't see the L2GatewayInUse exception you're talking about, but I guess
> it's the same situation.
>
> We should discuss in which cases the l2gw instance could be updated, and in
> which it shouldn't be.
>
> Please, let me know!
>
>
>
> On Wed, Aug 16, 2017 at 11:14 AM, Lajos Katona 
> wrote:
>
>> Hi,
>>
>> We faced an issue with l2-gw-update: if there
>> are connections for a gw, the update will throw an exception
>> (L2GatewayInUse), and the update is only possible after first deleting the
>> connections, doing the update, and adding the connections back.
>>
>> It is not exactly clear why this restriction is there in the code (at
>> least I can't find it in docs or comments in the code, or review).
>> As I see the check for network connections was introduced in this patch:
>> https://review.openstack.org/#/c/144097 (https://review.openstack.org/
>> #/c/144097/21..22/networking_l2gw/db/l2gateway/l2gateway_db.py)
>>
>> Could you please give me a little background why the update operation is
>> not allowed on an l2gw with network connections?
>>
>> Thanks in advance for the help.
>>
>> Regards
>> Lajos
>>
>
>
>
> --
> Ricardo Noriega
>
> Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
> Red Hat
> irc: rnoriega @freenode
>
>


-- 
Ricardo Noriega

Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
Red Hat
irc: rnoriega @freenode


Re: [openstack-dev] [Oslo][oslo.messaging][all] Notice: upcoming change to oslo.messaging RPC server

2017-09-28 Thread ChangBo Guo
BTW, we plan to release 5.33 with the patch
https://review.openstack.org/#/c/500456/ - please let me know if you need to
hold the release.

[ Unreleased changes in openstack/oslo.messaging (master) ]

Changes between 5.32.0 and a9d10d3

* 3a9c01f 2017-09-24 20:25:38 -0700 Fix default value of RPC dispatcher
access_policy
| * 6efa86a 2017-09-22 17:13:26 -0700 Fix wrong transport warnings in
functional tests
|/
* c2338ee 2017-09-20 16:23:04 + Updated from global requirements


2017-09-28 20:11 GMT+08:00 ChangBo Guo :

> Ken, thanks for raising this , Oslo team will send notice early  when we
> have major changes like this .
>
> 2017-09-27 4:17 GMT+08:00 Ken Giusti :
>
>> Hi Folks,
>>
>> Just a head's up:
>>
>> In Queens the default access policy for RPC Endpoints will change from
>> LegacyRPCAccessPolicy to DefaultRPCAccessPolicy.  RPC calls to private
>> ('_' prefix) methods will no longer be possible.  If you want to allow
>> RPC Clients to invoke private methods, you must explicitly set the
>> access_policy to LegacyRPCAccessPolicy when you call get_rpc_server()
>> or instantiate an RPCDispatcher.  This change [0] has been merged to
>> oslo.messaging master and will appear in the next release of
>> oslo.messaging.
>>
>> "Umm What?"
>>
>> Good question!  Here's the TL;DR details:
>>
>> Since forever it's been possible for a client to make an RPC call
>> against _any_ method defined in the RPC Endpoint object.  And by "any"
>> we mean "all methods including private ones (method names prefixed by
>> '_' )"
>>
>> Naturally this ability came as a surprise many folk [1], including
>> yours truly and others on the oslo team [2].  It was agreed that
>> having this be the default behavior was indeed A Bad Thing.
>>
>> So starting in Ocata oslo.messaging has provided a means for
>> controlling access to Endpoint methods [3].  Oslo.messaging now
>> defines three different "access control policies" that can be applied
>> to an RPC Server:
>>
>> LegacyRPCAccessPolicy: original behavior - any method can be invoked
>> by an RPC client
>> DefaultRPCAccessPolicy: prevent RPC access to private '_' methods, all
>> others may be invoked
>> ExplicitRPCAccessPolicy: only allow access to those methods that have
>> been decorated with @expose decorator
>>
>> See [4] for more details.
>>
>> In order not to break anything at the time the default access policy
>> was set to 'LegacyRPCAccessPolicy'.  This has been the default for
>> Ocata and Pike.
>>
>> Starting in Queens this will no longer be the case.
>> DefaultRPCAccessPolicy will become the default if no access policy is
>> specified when calling get_rpc_server() or directly instantiating an
>> RPCDispatcher.  To keep the old behavior you must explicitly set the
>> access policy to LegacyRPCAccessPolicy:
>>
>> from oslo_messaging.rpc import LegacyRPCAccessPolicy
>> ...
>> server = get_rpc_server(transport, target, endpoints,
>>  access_policy=LegacyRPCAccessPolicy)
>>
>>
>>
>> Reply here if you have any questions or hit any issues, thanks!
>>
>> -K
>>
>> [0] https://review.openstack.org/#/c/500456/
>> [1] https://bugs.launchpad.net/oslo.messaging/+bug/1194279
>> [2] https://bugs.launchpad.net/oslo.messaging/+bug/1555845
>> [3] https://review.openstack.org/#/c/358359/
>> [4] https://docs.openstack.org/oslo.messaging/latest/reference/
>> server.html
>> --
>> Ken Giusti  (kgiu...@gmail.com)
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> ChangBo Guo(gcb)
> Community Director @EasyStack
>



-- 
ChangBo Guo(gcb)
Community Director @EasyStack


Re: [openstack-dev] [Openstack-operators] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-28 Thread Erik McCormick
On Sep 28, 2017 4:31 AM, "Lee Yarwood"  wrote:

On 20-09-17 14:56:20, arkady.kanev...@dell.com wrote:
> Lee,
> I can chair meeting in Sydney.
> Thanks,
> Arkady

Thanks Arkady!

FYI I see that emccormickva has created the following Forum session to
discuss FF upgrades:

http://forumtopics.openstack.org/cfp/details/19

You might want to reach out to him to help craft the agenda for the
session based on our discussions in Denver.

I just didn't want to risk it not getting in, and it was on our etherpad as
well. I'm happy to help, but would love for you guys to lead.

Thanks,
Erik


Thanks again,

Lee
--
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672
2D76



Re: [openstack-dev] [infra][mogan] Need help for replacing the current master

2017-09-28 Thread Jeremy Stanley
On 2017-09-27 20:02:25 -0400 (-0400), Davanum Srinivas wrote:
> I'd like to avoid the ACL update which will make it different from
> other projects. Since we don't expect to do this again, can you please
> help do this?
[...]

He (probably accidentally) left out the word "temporary." The ACL
only needs to allow merge commits to be pushed long enough for that
merge commit to get pushed for review, and then the ACL can be
reverted to its earlier state.
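For illustration, the temporary allowance could look roughly like this in the
project's Gerrit ACL (project.config); the group name here is only a
placeholder:

    [access "refs/for/refs/heads/*"]
      pushMerge = group mogan-release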
-- 
Jeremy Stanley




Re: [openstack-dev] [infra][mogan] Need help for replacing the current master

2017-09-28 Thread Davanum Srinivas
Jeremy, Clark,

Filed a change :)
https://review.openstack.org/508151

Thanks,
Dims

On Thu, Sep 28, 2017 at 8:55 AM, Jeremy Stanley  wrote:
> On 2017-09-27 20:02:25 -0400 (-0400), Davanum Srinivas wrote:
>> I'd like to avoid the ACL update which will make it different from
>> other projects. Since we don't expect to do this again, can you please
>> help do this?
> [...]
>
> He (probably accidentally) left out the word "temporary." The ACL
> only needs to allow merge commits to be pushed long enough for that
> merge commit to get pushed for review, and then the ACL can be
> reverted to its earlier state.
> --
> Jeremy Stanley
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [tripleo] Newton End-Of-Life (EOL) next month (reminder #1)

2017-09-28 Thread Jeremy Stanley
On 2017-09-28 11:13:56 +1000 (+1000), Tony Breeds wrote:
[...]
> I can see a policy looking more like:
>
> Phase  Time frame         Summary             Changes Supported
> I      0-12 months        Maintained release  All bugfixes (that meet the
>        after release                          criteria described below)
>                                               are appropriate
> II     more than 12       Legacy release      Only security patches are
>        months after                           acceptable
>        release
> 
> The 12 month mark is really only there to line up with our current EOL
> plans; if they changed then we'd need to match them.
[...]

And to be clear, the main reason to only allow very minimal changes
in the last phase is because at that point you no longer have
working CI for the previous release and so cannot test that your
patches don't break the upgrade path from that previous release.
-- 
Jeremy Stanley




Re: [openstack-dev] [Openstack-operators] [tc][nova][ironic][mogan] Evaluate Mogan project

2017-09-28 Thread Thierry Carrez
Erik McCormick wrote:
> [...]
> Also, if you'd like to discuss this in detail with a room full of
> bodies, I suggest proposing a session for the Forum in Sydney. If some
> of the contributors will be there, it would be a good opportunity for
> you to get feedback.

Yes, "Bare metal as a service: Ironic vs. Mogan vs. Nova" would make a
great topic for discussion in Sydney, assuming Zhenguo is able to make
the trip... Discussing the user need on one side, and how to best
integrate with the existing pieces on the other side would really help
starting this on the right foot.

Zhenguo: if you plan to be present, could you suggest this topic for
discussion at: http://forumtopics.openstack.org/

Deadline is tomorrow :)

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [Openstack-operators] [tc][nova][ironic][mogan] Evaluate Mogan project

2017-09-28 Thread Zhenguo Niu
Thanks Thierry, not sure if I can make the trip, but I will give it a try :)

On Thu, Sep 28, 2017 at 9:48 PM, Thierry Carrez 
wrote:

> Erik McCormick wrote:
> > [...]
> > Also, if you'd like to discuss this in detail with a room full of
> > bodies, I suggest proposing a session for the Forum in Sydney. If some
> > of the contributors will be there, it would be a good opportunity for
> > you to get feedback.
>
> Yes, "Bare metal as a service: Ironic vs. Mogan vs. Nova" would make a
> great topic for discussion in Sydney, assuming Zhenguo is able to make
> the trip... Discussing the user need on one side, and how to best
> integrate with the existing pieces on the other side would really help
> starting this on the right foot.
>
> Zhenguo: if you plan to be present, could you suggest this topic for
> discussion at: http://forumtopics.openstack.org/
>
> Deadline is tomorrow :)
>
> --
> Thierry Carrez (ttx)
>



-- 
Best Regards,
Zhenguo Niu


Re: [openstack-dev] [tripleo] plans on testing minor updates?

2017-09-28 Thread Emilien Macchi
On Thu, Sep 28, 2017 at 12:23 AM, Steven Hardy  wrote:
[...]
>
> I completely agree we need this coverage, and honestly we should have
> had it a long time ago, but we need to make progress on this last
> critical blocker for pike, while continuing to make progress on the CI
> coverage (which should certainly be a top priority for the Lifecycle
> squad, as soon as we have this completely new-for-pike minor updates
> workflow fully implemented and debugged).
>
> Thanks,
>
> Steve

I guess my -2 was more to highlight the problem and make sure we take
some action. I removed it this morning and you're free to merge the
code if you're happy with it.

Several things:

1) I created https://bugs.launchpad.net/tripleo/+bug/1720153 to track
work that will be done for this CI coverage, please use it when doing
the work.
2) I'll allocate some time to work on it with the upgrade team.
3) Since we'll need a new job, I think we might remove some jobs that
don't bring much value. For example, the multinode baremetal
jobs in Queens could be replaced by this container minor update
testing - what do you think?
4) I wanted to point out (and repeat from what we said at the PTG
and even before): we should get the CI framework ready before implementing
features like this. Every time we bring this up, I hear "now it's too
late" or "we had no time to work on it". I understand the gap and the
fast pace on the upgrade front, but I really think more
investment in CI will help in the long term. If the upgrade folks need
help on CI, bring it to the TripleO CI squad so they can maybe help,
etc...

Thanks,
-- 
Emilien Macchi



Re: [openstack-dev] [octavia] haproxy fails to receive datagram

2017-09-28 Thread Yipei Niu
Hi, Michael,

Thanks a lot. I look forward to your further tests. I am trying to deploy a
new environment, too. Hope it can work well this time.

Best regards,
Yipei

On Wed, Sep 27, 2017 at 10:27 AM, Yipei Niu  wrote:

> Hi, Michael,
>
> The instructions are listed as follows.
>
> First, create a net1.
> $ neutron net-create net1
> $ neutron subnet-create net1 10.0.1.0/24 --name subnet1
>
> Second, boot two vms in net1
> $ nova boot --flavor 1 --image $image_id --nic net-id=$net1_id vm1
> $ nova boot --flavor 1 --image $image_id --nic net-id=$net1_id vm2
>
> Third, log on to the two vms, respectively. Here, take vm1 as an example.
> $ MYIP=$(ifconfig eth0|grep 'inet addr'|awk -F: '{print $2}'| awk '{print
> $1}')
> $ while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo
> nc -l -p 80 ; done&
>
> Fourth, exit the vms and update the default security group shared by the vms
> by adding a rule allowing traffic to port 80.
> $ neutron security-group-rule-create --direction ingress --protocol tcp
> --port-range-min 80 --port-range-max 80 --remote-ip-prefix 0.0.0.0/0
> $default_security_group
> Note: make sure "sudo ip netns exec $qdhcp-net1_id curl -v $vm_ip" works.
> In other words, make sure the vms can accept HTTP requests and return their
> IPs, respectively.
>
> Fifth, create a lb, a listener, and a pool. Then add the two vms to the
> pool as members.
> $ neutron lbaas-loadbalancer-create --name lb1 subnet1
> $ neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTP
> --protocol-port 80 --name listener1
> $ neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener
> listener1 --protocol HTTP --name pool1
> $ neutron lbaas-member-create --subnet subnet1 --address $vm1_ip
> --protocol-port 80 pool1
> $ neutron lbaas-member-create --subnet subnet1 --address $vm2_ip
> --protocol-port 80 pool1
>
> Finally, try "sudo ip netns exec $qdhcp-net1_id curl -v $VIP" to see whether
> lbaas works.
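As a small sketch of that final check, the VIP can be polled repeatedly to
confirm round-robin across both members (the VIP address is illustrative;
run this somewhere with reach to the VIP, e.g. inside the qdhcp namespace):

    # Python 2, matching the era of this thread
    import urllib2

    VIP = "http://10.0.1.100/"  # replace with the actual lbaas VIP

    seen = set()
    for _ in range(10):
        seen.add(urllib2.urlopen(VIP, timeout=5).read().strip())

    # With ROUND_ROBIN and two healthy members, both member IPs
    # should show up here.
    print(seen)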
>
> Best regards,
> Yipei
>
> On Wed, Sep 27, 2017 at 1:30 AM, Yipei Niu  wrote:
>
>> Hi, Michael,
>>
>> I think the octavia is the latest, since I pulled the up-to-date octavia
>> repo manually to my server before installation.
>>
>> Anyway, I run "sudo ip netns exec amphora-haproxy ip route show table 1"
>> in the amphora, and find that the route table exists. The info is listed as
>> follows.
>>
>> default via 10.0.1.1 dev eth1 onlink
>>
>> I think it may not be the source of the problem.
>>
>> Best regards,
>> Yipei
>>
>
>


Re: [openstack-dev] [nova] Running large instances with CPU pinning and OOM

2017-09-28 Thread Chris Friesen

On 09/28/2017 05:29 AM, Sahid Orentino Ferdjaoui wrote:
> Only the memory mapped for the guest is strictly allocated from the
> NUMA node selected. The QEMU overhead should float on the host NUMA
> nodes. So it seems that the "reserved_host_memory_mb" is enough.

What I see in the code/docs doesn't match that, but it's entirely possible I'm
missing something.

nova uses LibvirtConfigGuestNUMATuneMemory with a mode of "strict" and a
nodeset of "the host NUMA nodes used by a guest".

For a guest with a single NUMA node, I think this would map to libvirt XML of
something like

  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>

The docs at https://libvirt.org/formatdomain.html#elementsNUMATuning say, "The
optional memory element specifies how to allocate memory for the domain
process on a NUMA host."

That suggests to me that the qemu overhead would be NUMA-affined, no?  (If you
had a multi-NUMA-node guest, then the qemu overhead would float across all the
NUMA nodes used by the guest.)

Chris



[openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-09-28 Thread Jesse Pretorius
There’s some history around this discussion [1], but times have changed and the 
purpose of the patches I’m submitting is slightly different [2] as far as I can 
see – it’s a little more focused and less intrusive.

The projects which deploy OpenStack from source or using python wheels 
currently have to either carry templates for api-paste, policy and rootwrap 
files or need to source them from git during deployment. This results in some 
rather complex mechanisms which could be radically simplified by simply 
ensuring that all the same files are included in the built wheel. Distribution 
packagers typically also have mechanisms in place to fetch the files from the 
source repo when building the packages – including the files through pbr’s 
data_files for packagers may or may not be beneficial, depending on how the 
packagers do their build processes.

In neutron [3], glance [4], designate [5] and sahara [6] the use of the 
data_files option in the files section of setup.cfg is established and has been 
that way for some time. However, there have been issues in the past 
implementing something similar – for example in keystone there has been a bit 
of a yoyo situation where a patch was submitted, then reverted.
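For reference, the established pattern looks roughly like this abbreviated
sketch in the style of neutron's setup.cfg [3] (exact target paths and file
lists vary per project):

    [files]
    data_files =
        etc/neutron =
            etc/api-paste.ini
            etc/policy.json
            etc/rootwrap.conf
        etc/neutron/rootwrap.d = etc/neutron/rootwrap.d/*.filters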

I’ve been proposing patches [7] to try to make the implementation across 
projects consistent and projects have, for the most part, been happy to go 
ahead and merge them. However concern has been raised that we may end up going 
through another yo-yo experience and therefore I’ve been asked to raise this on 
the ML.

Do any packagers or deployment projects have issues with this implementation? 
If there are any issues, what’re your suggestions to resolve them?

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-June/097123.html
[2] https://launchpad.net/bugs/1718356
[3] 
https://github.com/openstack/neutron/blob/d3c393ff6b5fbd0bdaabc8ba678d755ebfba08f7/setup.cfg#L24-L39
[4] 
https://github.com/openstack/glance/blob/02cd5cba70a8465a951cb813a573d390887174b7/setup.cfg#L20-L21
[5] 
https://github.com/openstack/designate/blob/25eb143db04554d65efe2e5d60ad3afa6b51d73a/setup.cfg#L30-L37
[6] 
https://github.com/openstack/sahara/blob/cff43d6f1eee5c68af16c6f655f4d019669224d9/setup.cfg#L28-L29
[7] 
https://review.openstack.org/#/q/topic:bug/1718356+(status:open+OR+status:merged)





Re: [openstack-dev] [glance] multi threads with swift backend

2017-09-28 Thread Arnaud MORIN
Hey all,
So I finally tested your pull requests; it does not work.
1 - For uploads, swiftclient is not using threads when the source is given by
glance:
https://github.com/openstack/python-swiftclient/blob/master/swiftclient/service.py#L1847

2 - For downloads, when requesting the file from swift, it recomposes
the chunks into one big file.

So the patch is not so easy.

IMHO, for uploads, we should try to upload the chunks using multiple threads.
Sounds doable.
For downloads, I need to dig a little bit more into the glance store code to
be sure, but maybe we can try to download the chunks separately and recompose
them locally before sending the result to the requester (compute / cli).
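For the upload side, a minimal sketch of driving swiftclient's service layer
with upload threads (the container name, file name, and option values are
illustrative only):

    from swiftclient.service import SwiftService, SwiftUploadObject

    # Auth is picked up from the usual OS_*/ST_* environment variables.
    options = {
        "object_uu_threads": 10,              # concurrent uploads
        "segment_size": 200 * 1024 * 1024,    # upload in 200 MiB segments
    }

    with SwiftService(options=options) as swift:
        objects = [SwiftUploadObject("image.qcow2",
                                     object_name="images/image.qcow2")]
        for result in swift.upload("glance", objects):
            if not result["success"]:
                print("failed: %s" % result.get("error"))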

Cheers,


On 6 September 2017 at 21:19, Arnaud MORIN  wrote:

> Hey,
> I would love to see that reviving!
>
> Cheers,
> Arnaud
>
> On 6 September 2017 at 21:00, Mikhail Fedosin  wrote:
>
>> Hey! As you said it's not possible now.
>>
>> I implemented the support several years ago, bit unfortunately no one
>> wanted to review it: https://review.openstack.org/#/c/218993
>> If you want, we can revive it.
>>
>> Best,
>> Mike
>>
>> On Wed, Sep 6, 2017 at 9:05 PM, Clay Gerrard 
>> wrote:
>>
>>> I'm pretty sure that would only be possible with a code change in glance
>>> to move the consumption of the swiftclient abstraction up a layer from the
>>> client/connection objects to swiftclient's service objects [1].  I'm not
>>> sure if that'd be something that would make a lot of sense to the Image
>>> Service team.
>>>
>>> -Clay
>>>
>>> 1. https://docs.openstack.org/python-swiftclient/latest/service-api.html
>>>
>>> On Wed, Sep 6, 2017 at 9:02 AM, Arnaud MORIN 
>>> wrote:
>>>
 Hi all,

 Is there any chance that glance can use the multiprocessing from
 swiftclient library (equivalent of xxx-threads options from cli)?
 If yes, how to enable it?
 I did not find anything useful in the glance configuration options.
 And looking at glance_store code make me think that it's not possible...
 Am I wrong?

 Regards,
 Arnaud

 


>>>
>>> 
>>>
>>>
>>
>> 
>>
>>
>


[openstack-dev] vGPUs support for Nova - Implementation

2017-09-28 Thread Sahid Orentino Ferdjaoui
Please consider the support of MDEV for the /pci framework, which
provides support for vGPUs [0], according to the discussion [1].

With this first implementation, which could be used as a skeleton for
implementing PCI devices in the Resource Tracker, we provide support for
attaching vGPUs to guests, and also affinity per NUMA
node. Another important point is that this implementation can take
advantage of ongoing specs like PCI NUMA policies.

* The Implementation [0]

[PATCH 01/13] pci: update PciDevice object field 'address' to accept
[PATCH 02/13] pci: add for PciDevice object new field mdev
[PATCH 03/13] pci: generalize object unit-tests for different
[PATCH 04/13] pci: add support for mdev device type request
[PATCH 05/13] pci: generalize stats unit-tests for different
[PATCH 06/13] pci: add support for mdev devices type devspec
[PATCH 07/13] pci: add support for resource pool stats of mdev
[PATCH 08/13] pci: make manager to accept handling mdev devices

In this series of patches we are generalizing the PCI framework to
handle MDEV devices. We argue it's a lot of patches, but most of them
are small and the logic behind them is basically to make the framework
understand two new fields, MDEV_PF and MDEV_VF.

[PATCH 09/13] libvirt: update PCI node device to report mdev devices
[PATCH 10/13] libvirt: report mdev resources
[PATCH 11/13] libvirt: add support to start vm with using mdev (vGPU)

In this series of patches we make the libvirt driver, as usual,
return resources and attach devices returned by the PCI manager. This
part can be reused for Resource Provider.

[PATCH 12/13] functional: rework fakelibvirt host pci devices
[PATCH 13/13] libvirt: resuse SRIOV funtional tests for MDEV devices

Here we reuse 100% of the functional tests used for SR-IOV
devices. Again, this part can be reused for Resource Provider.

* The Usage

There is no difference between SR-IOV and MDEV: from the point of view
of operators who know how to expose SR-IOV devices in Nova, they already
know how to expose MDEV devices (vGPUs).

Operators will be able to expose MDEV devices in the same manner as
they expose SR-IOV:

 1/ Configure whitelist devices

 ['{"vendor_id":"10de"}']

 2/ Create aliases

 [{"vendor_id":"10de", "name":"vGPU"}]

 3/ Configure the flavor

 openstack flavor set --property "pci_passthrough:alias"="vGPU:1"
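As a rough sketch, assuming the existing [pci] options are reused as the
steps above suggest (the alias name and flavor name are placeholders), this
would land in nova.conf and the flavor as:

    [pci]
    passthrough_whitelist = {"vendor_id":"10de"}
    alias = {"vendor_id":"10de", "name":"vGPU"}

    $ openstack flavor set --property "pci_passthrough:alias"="vGPU:1" baremetal-vgpu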

* Limitations

The mdev does not provide 'product_id' but 'mdev_type', which should be
considered to exactly identify which resource users can request, e.g.
nvidia-10. To provide that support we have to add a new field
'mdev_type', so aliases could be something like:

 {"vendor_id":"10de", mdev_type="nvidia-10" "name":"alias-nvidia-10"}
 {"vendor_id":"10de", mdev_type="nvidia-11" "name":"alias-nvidia-11"}

I do have a plan to add that, but first I need to have support from
upstream to continue that work.


[0] 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:pci-mdev-support
[1] 
http://lists.openstack.org/pipermail/openstack-dev/2017-September/122591.html



Re: [openstack-dev] [Openstack-operators] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-28 Thread Arkady.Kanevsky
Erik,
Thanks for setting up a session for it.
Glad it is driven by Operators.
I will be happy to work with you on the session and run it with you.
Thanks,
Arkady

From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
Sent: Thursday, September 28, 2017 7:40 AM
To: Lee Yarwood 
Cc: OpenStack Development Mailing List ; 
openstack-operators 
Subject: Re: [openstack-dev] [Openstack-operators] 
[skip-level-upgrades][fast-forward-upgrades] PTG summary


On Sep 28, 2017 4:31 AM, "Lee Yarwood" <lyarw...@redhat.com> wrote:
On 20-09-17 14:56:20, arkady.kanev...@dell.com wrote:
> Lee,
> I can chair meeting in Sydney.
> Thanks,
> Arkady
Thanks Arkady!

FYI I see that emccormickva has created the following Forum session to
discuss FF upgrades:

http://forumtopics.openstack.org/cfp/details/19

You might want to reach out to him to help craft the agenda for the
session based on our discussions in Denver.
I just didn't want to risk it not getting in, and it was on our etherpad as 
well. I'm happy to help, but would love for you guys to lead.

Thanks,
Erik


Thanks again,

Lee
--
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76


Re: [openstack-dev] [neutron]OVS connection tracking cleanup

2017-09-28 Thread Ajay Kalambur (akalambu)
It looks like the conntrack deletion can be skipped for port deletion, no?
On bulk deletes of lots of VMs, the entries that were deleted never existed
in the conntrack table.

From the look of it, the patch below seems to go along those lines:
https://review.openstack.org/#/c/243994/

Is there a plan to distinguish between port deletes and port updates when it
comes to conntrack rule deletions? In a scale scenario on OVS VLAN this is
really a blocker for back-to-back scale tests being run.


From: Ajay Kalambur <akala...@cisco.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Wednesday, September 27, 2017 at 4:42 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Cc: "Ian Wells (iawells)" <iawe...@cisco.com>
Subject: Re: [openstack-dev] [neutron]OVS connection tracking cleanup

Also, the weird part with this conntrack deletion: when I perform a
conntrack -L to view the table, I see no entry for any of the entries it is
trying to delete. Those entries are all removed anyway when VMs are cleaned
up, from the look of it. So it looks like all those conntrack deletions were
pretty much no-ops.
Ajay


From: Ajay Kalambur <akala...@cisco.com>
Date: Tuesday, September 12, 2017 at 9:30 AM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Cc: "Ian Wells (iawells)" <iawe...@cisco.com>
Subject: Re: [openstack-dev] [neutron]OVS connection tracking cleanup

Hi Kevin
Sure, I will log a bug.
Also, does the config change involve having both these lines in the
neutron.conf file?
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
root_helper_daemon = sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf

If I have only the second line I see the exception below on neutron openvswitch 
agent bring up:

2017-09-12 09:23:03.633 35 DEBUG neutron.agent.linux.utils 
[req-0f8fe685-66bd-44d7-beac-bb4c24f0ccfa - - - - -] Running command: ['ps', 
'--ppid', '103', '-o', 'pid='] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:89
2017-09-12 09:23:03.762 35 ERROR ryu.lib.hub 
[req-0f8fe685-66bd-44d7-beac-bb4c24f0ccfa - - - - -] hub: uncaught exception: 
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ryu/lib/hub.py", line 54, in _launch
return func(*args, **kwargs)
  File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py",
 line 42, in agent_main_wrapper
ovs_agent.main(bridge_classes)
  File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2184, in main
agent.daemon_loop()
  File "/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 154, in 
wrapper
return f(*args, **kwargs)
  File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2100, in daemon_loop
self.ovsdb_monitor_respawn_interval) as pm:
  File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
  File "/usr/lib/python2.7/site-packages/neutron/agent/linux/polling.py", line 
35, in get_polling_manager
pm.start()
  File "/usr/lib/python2.7/site-packages/neutron/agent/linux/polling.py", line 
57, in start
while not self.is_active():
  File "/usr/lib/python2.7/site-packages/neutron/agent/linux/async_process.py", 
line 100, in is_active
self.pid, self.cmd_without_namespace)
  File "/usr/lib/python2.7/site-packages/neutron/agent/linux/async_process.py", 
line 159, in pid
run_as_root=self.run_as_root)
  File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 
297, in get_root_helper_child_pid
pid = find_child_pids(pid)[0]
  File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 
179, in find_child_pids
log_fail_as_error=False)
  File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 
128, in execute
_stdout, _stderr = obj.communicate(_process_input)
  File "/usr/lib64/python2.7/subprocess.py", line 800, in communicate
return self._communicate(input)
  File "/usr/lib64/python2.7/subprocess.py", line 1403, in _communicate
stdout, stderr = self._communicate_with_select(input)
  File "/usr/lib64/python2.7/subprocess.py", line 1504, in 
_communicate_with_select
rlist, wlist, xlist = select.select(read_set, write_set, [])
  File "/usr/lib/python2.7/site-packages/eventlet/green/select.py", line 86, in 
select
return hub.switch()
  File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in 
switch
return self.greenlet.switch()
Timeout: 5 seconds

2017-09-12 09:23:03.860 35 INFO oslo_rootwrap.client [-] Stopping rootwrap 
daemon process with pid=95


Ajay



From: Kevin Benton <ke...@benton.pub>
Reply-To: "OpenStack Development Mailing List (no

[openstack-dev] [designate] multi domain usage for handlers

2017-09-28 Thread Kim-Norman Sahm

Hi,

I'm currently testing Designate and I have a question about the architecture.
We're using OpenStack Newton with Keystone v3 and thus the keystone
domain/project structure.

I've tried the global nova_fixed and neutron_floating_ip handlers, but all DNS
records (for every domain/project) are stored in the same DNS domain
(instance1.novafixed.example.com and
anotherinstance.neutronfloatingip.example.com).
Is it possible to define a separate DNS domain for each keystone domain/project
and auto-assign the instances to this domain?
Example: an OpenStack domain "customerA.com" with projects "prod" and "dev";
instance1 starts in project "dev" and its DNS record is
instance1.dev.customerA.com.

Best regards
Kim


Kim-Norman Sahm
Cloud & Infrastructure(OCI)

noris network AG
Thomas-Mann-Straße 16-20
90471 Nürnberg
Deutschland

Tel +49 911 9352 1433
Fax +49 911 9352 100

kim-norman.s...@noris.de

https://www.noris.de - Mehr Leistung als Standard
Vorstand: Ingo Kraupa (Vorsitzender), Joachim Astel
Vorsitzender des Aufsichtsrats: Stefan Schnabel - AG Nürnberg HRB 17689

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-09-28 Thread Doug Hellmann
Excerpts from Jesse Pretorius's message of 2017-09-28 14:50:24 +:
> There’s some history around this discussion [1], but times have changed and 
> the purpose of the patches I’m submitting is slightly different [2] as far as 
> I can see – it’s a little more focused and less intrusive.
> 
> The projects which deploy OpenStack from source or using python wheels 
> currently have to either carry templates for api-paste, policy and rootwrap 
> files or need to source them from git during deployment. This results in some 
> rather complex mechanisms which could be radically simplified by simply 
> ensuring that all the same files are included in the built wheel. 
> Distribution packagers typically also have mechanisms in place to fetch the 
> files from the source repo when building the packages – including the files 
> through pbr’s data_files for packagers may or may not be beneficial, 
> depending on how the packagers do their build processes.
> 
> In neutron [3], glance [4], designate [5] and sahara [6] the use of the 
> data_files option in the files section of setup.cfg is established and has 
> been that way for some time. However, there have been issues in the past 
> implementing something similar – for example in keystone there has been a bit 
> of a yoyo situation where a patch was submitted, then reverted.
> 
> I’ve been proposing patches [7] to try to make the implementation across 
> projects consistent and projects have, for the most part, been happy to go 
> ahead and merge them. However concern has been raised that we may end up 
> going through another yo-yo experience and therefore I’ve been asked to raise 
> this on the ML.
> 
> Do any packagers or deployment projects have issues with this implementation? 
> If there are any issues, what’re your suggestions to resolve them?
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-June/097123.html
> [2] https://launchpad.net/bugs/1718356
> [3] 
> https://github.com/openstack/neutron/blob/d3c393ff6b5fbd0bdaabc8ba678d755ebfba08f7/setup.cfg#L24-L39
> [4] 
> https://github.com/openstack/glance/blob/02cd5cba70a8465a951cb813a573d390887174b7/setup.cfg#L20-L21
> [5] 
> https://github.com/openstack/designate/blob/25eb143db04554d65efe2e5d60ad3afa6b51d73a/setup.cfg#L30-L37
> [6] 
> https://github.com/openstack/sahara/blob/cff43d6f1eee5c68af16c6f655f4d019669224d9/setup.cfg#L28-L29
> [7] 
> https://review.openstack.org/#/q/topic:bug/1718356+(status:open+OR+status:merged)
> 

In the past we had trouble checking those files into git and gating
against the results being "up to date" or not changing in any way
because configuration options that end up in the file are defined in
libraries used by the services. So as long as the implementation you're
considering does not check configuration files into git, but generates
them and then inserts them into the package, it should be fine.

Doug
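
For readers following along, the mechanism under discussion is pbr's
data_files support in setup.cfg; a minimal sketch of the pattern (paths
illustrative, modeled on the neutron example linked above):

    [files]
    data_files =
        etc/neutron =
            etc/api-paste.ini
            etc/policy.json
            etc/rootwrap.conf
        etc/neutron/rootwrap.d = etc/neutron/rootwrap.d/*

Each "target = source" mapping installs the listed files relative to the
installation prefix, so they end up inside the built wheel as well.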

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how does UEFI booting of VM manage per-instance copies of OVMF_VARS.fd ?

2017-09-28 Thread Waines, Greg
Any info on this ?

I did launch a VM with UEFI booting and did not see any copy of OVMF_VARS.fd 
proactively copied into /etc/nova/instances// .
Maybe Nova only does that on a change to OVMF_VARS.fd ???
( haven’t figured out how to do that )

anyways any info or pointers would be appreciated,
thanks,
Greg.

From: Greg Waines 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Wednesday, September 27, 2017 at 9:09 AM
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] [nova] how does UEFI booting of VM manage per-instance 
copies of OVMF_VARS.fd ?

Hey there ... a question about UEFI booting of VMs.
i.e.

glance image-create --file cloud-2730.qcow --disk-format qcow2
--container-format bare --property "hw-firmware-type=uefi" --name
clear-linux-image

in order to specify that you want to use UEFI (instead of BIOS) when booting 
VMs with this image
i.e. /usr/share/OVMF/OVMF_CODE.fd
     /usr/share/OVMF/OVMF_VARS.fd

and I believe you can boot into the UEFI Shell, i.e. to change UEFI variables
in NVRAM (OVMF_VARS.fd), by booting the VM with /usr/share/OVMF/UefiShell.iso
as a CD ...
e.g. to change Secure Boot keys or something like that.

My QUESTION ...

- how does NOVA manage a unique instance of OVMF_VARS.fd for each instance?
  - I believe OVMF_VARS.fd is supposed to be used only as a template, and is
    supposed to be copied to make a unique instance for each VM that UEFI
    boots
  - how does NOVA manage this?
    - e.g. is the unique instance of OVMF_VARS.fd created in
      /etc/nova/instances//  ?
  - ... and does this get migrated to another compute if the VM is migrated?

Greg.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [refstack] RefStack Meeting time change

2017-09-28 Thread Chris Hoge
At the previous RefStack meeting, the team unanimously decided to move
our weekly meeting from Tuesdays at 19:00 UTC to Tuesdays at 17:00 UTC in
#openstack-meeting-alt. [1][2]

Thanks
Chris

[1] 
http://eavesdrop.openstack.org/meetings/refstack/2017/refstack.2017-09-26-19.00.log.html#l-58
[2] https://review.openstack.org/#/c/508202
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] plans on testing minor updates?

2017-09-28 Thread Wesley Hayutin
On Thu, Sep 28, 2017 at 3:23 AM, Steven Hardy  wrote:

> On Thu, Sep 28, 2017 at 8:04 AM, Marios Andreou 
> wrote:
> >
> >
> > On Thu, Sep 28, 2017 at 9:50 AM, mathieu bultel 
> wrote:
> >>
> >> Hi,
> >>
> >>
> >> On 09/28/2017 05:05 AM, Emilien Macchi wrote:
> >> > I was reviewing https://review.openstack.org/#/c/487496/ and
> >> > https://review.openstack.org/#/c/487488/ when I realized that we
> still
> >> > didn't have any test coverage for minor updates.
> >> > We never had this coverage AFICT but this is not a reason to not push
> >> > forward it.
> >> Thank you for the review and the -2! :)
> >> So I'm agree with you, we need CI coverage for that part, and I was
> >> wondering how I can put quickly a test in CI for the minor update.
> >> But before that, just few things to take in account regarding those
> >> reviews:
> >>
> >
> > agree on the need for the ci coverage, but disagree on blocking this. by
> the
> > same logic we should not have landed anything minor update related during
> > the previous cycle. This is the very last part for
> > https://bugs.launchpad.net/tripleo/+bug/1715557 - wiring up the
> mechanism
> > into client and what's more matbu has managed to do it 'properly' with a
> > tripleo-common mistral action wired up to the tripleoclient cli.
> >
> > I don't think its right we don't have coverage but I also don't think its
> > right to block these last patches,
>
> Yeah I agree - FWIW we have discussed this before, and AIUI the plan was:
>
> 1 - Get multinode coverage of an HA deployment with more than one
> controller (e.g. the 3nodes job) but with containers enabled
> 2 - Implement a rolling minor update test based on that
> multi-controller HA-with-containers test
>
> AFAIK we're only starting to get containers+pacemaker CI scenarios
> working with one controller, so it's not really reasonable to block
> this, since that is a prerequisite to the multi-controller test, which
> is a prerequisite to the rolling update test.
>
> Personally I think we'd be best to aim directly for the rolling update
> test in CI, as doing a single node minor update doesn't really test
> the most important aspect (e.g zero downtime).
>
> The other challenge here is the walltime relative to the CI timeout -
> we've been running into that for the containers upgrade job, and I
> think we need to figure out optimizations there which may also be
> required for minor update testing (maybe we can work around that by
> only updating a very small number of containers, but that will reduce
> the test coverage considerably?)
>

OK.. I think the solution is to start migrating these jobs to RDO Software
Factory third-party testing.

Here is what I propose:
1. Start with an experimental check job:
https://review.rdoproject.org/r/#/c/9823/
This will help us confirm that everything works or fails as we expect. We are
also afforded a configurable timeout \o/. It's currently set to 360 minutes
for the overcloud upgrade jobs.

2. Once this is proven out, we can run upgrade jobs as third party on any
review upstream

3. New coverage should be prototyped in RDO Software Factory

4. If jobs prove to be reliable and consistent and run under 170 minutes, we
move what we can back upstream.

WDYT?


>
> I completely agree we need this coverage, and honestly we should have
> had it a long time ago, but we need to make progress on this last
> critical blocker for pike, while continuing to make progress on the CI
> coverage (which should certainly be a top priority for the Lifecycle
> squad, as soon as we have this completely new-for-pike minor updates
> workflow fully implemented and debugged).
>
> Thanks,
>
> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Cinder][third-party][ci] Tintri Cinder CI failure

2017-09-28 Thread Apoorva Deshpande
It appears that Cinder started using NFS locks around Sept 19th. That
resulted in our CI failures as we don't support it. Tempest tests succeeded
when we added a nolock option in the NFS configuration[1].

Can someone provide more information on this change?

Thanks,
Apoorva

[1] http://openstack-ci.tintri.com/tintri/refs-changes-09-504009-2/

On Tue, Sep 26, 2017 at 1:28 PM, Apoorva Deshpande 
wrote:

> I patched sos-ci and logs are available now [1]. First exception
> occurrence I spot in c-vol.txt is here [2]
>
> [1] http://openstack-ci.tintri.com/tintri/refs-changes-59-507359-1/logs/
> [2] http://paste.openstack.org/show/621983/
>
> On Mon, Sep 25, 2017 at 11:32 PM, Silvan Kaiser 
> wrote:
>
>> Hi Apoorva!
>> The test run is sadly missing the service logs, probably because you're
>> using a current DevStack (systemd based services) but an older sos-ci
>> version? If you apply
>> https://github.com/j-griffith/sos-ci/commit/f0f2ce2e2f2b12727ee5aa75a751376dcc1ea3a4 you should
>> be able to get the logs for new test runs. This will help debugging this.
>> Best
>> Silvan
>>
>>
>>
>> 2017-09-26 1:54 GMT+02:00 Apoorva Deshpande :
>>
>>> Hello,
>>>
>>> Tintri's Cinder CI started failing around Sept 19, 2017. There are 29
>>> tests failing[1] with following errors [2][3][4]. Tintri Cinder driver
>>> inherit nfs cinder driver and it's available here[5].
>>>
>>> Please let me know if anyone has recently seen these failures or has any
>>> pointers on how to fix.
>>>
>>> Thanks,
>>> Apoorva
>>>
>>> IRC: Apoorva
>>>
>>> [1] http://openstack-ci.tintri.com/tintri/refs-changes-57-505357-1/testr_results.html
>>> [2] http://paste.openstack.org/show/621886/
>>> [3] http://paste.openstack.org/show/621858/
>>> [4] http://paste.openstack.org/show/621857/
>>> [5] https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/tintri.py
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Dr. Silvan Kaiser
>> Quobyte GmbH
>> Hardenbergplatz 2, 10623 Berlin - Germany
>> +49-30-814 591 800 - www.quobyte.com
>> Amtsgericht Berlin-Charlottenburg, HRB 149012B
>> Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] multi threads with swift backend

2017-09-28 Thread Erno Kuvaja
On Thu, Sep 28, 2017 at 4:27 PM, Arnaud MORIN  wrote:
> Hey all,
> So I finally tested your pull requests, it does not work.
> 1 - For uploads, swiftclient is not using threads when source is given by
> glance:
> https://github.com/openstack/python-swiftclient/blob/master/swiftclient/service.py#L1847
>
> 2 - For downloads, when requesting the file from swift, it is recomposing
> the chunks into one big file.
>
>
> So patch is not so easy.
>
> IMHO, for uploads, we should try to uploads chunks using multithreads.
> Sounds doable.
> For downloads, I need to dig a little bit more in glance store code to be
> sure, but maybe we can try to download the chunks separately and recompose
> them locally before sending it to the requester (compute / cli).
>
> Cheers,
>

So I'm still trying to understand (without success) why we want to
do this at all?

- jokke
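
For reference, the swiftclient service API that Clay points at further down
this thread does expose per-operation thread pools; a hedged sketch of a
concurrent download (option names are taken from swiftclient.service's
defaults and worth double-checking; container and object names are
illustrative):

    from swiftclient.service import SwiftService

    # object_dd_threads sizes the object download thread pool;
    # object_uu_threads is the upload equivalent.
    options = {'object_dd_threads': 10}

    with SwiftService(options=options) as swift:
        for result in swift.download(container='images',
                                     objects=['chunk-00001', 'chunk-00002']):
            if not result['success']:
                print('failed: %s' % result['object'])

Credentials come from the usual OS_*/ST_* environment variables. Moving
glance_store from the raw Connection object up to this layer is the code
change Clay describes below.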

>
> On 6 September 2017 at 21:19, Arnaud MORIN  wrote:
>>
>> Hey,
>> I would love to see that reviving!
>>
>> Cheers,
>> Arnaud
>>
>> On 6 September 2017 at 21:00, Mikhail Fedosin  wrote:
>>>
>>> Hey! As you said it's not possible now.
>>>
>>> I implemented the support several years ago, bit unfortunately no one
>>> wanted to review it: https://review.openstack.org/#/c/218993
>>> If you want, we can revive it.
>>>
>>> Best,
>>> Mike
>>>
>>> On Wed, Sep 6, 2017 at 9:05 PM, Clay Gerrard 
>>> wrote:

 I'm pretty sure that would only be possible with a code change in glance
 to move the consumption of the swiftclient abstraction up a layer from the
 client/connection objects to swiftclient's service objects [1].  I'm not
 sure if that'd be something that would make a lot of sense to the Image
 Service team.

 -Clay

 1. https://docs.openstack.org/python-swiftclient/latest/service-api.html

 On Wed, Sep 6, 2017 at 9:02 AM, Arnaud MORIN 
 wrote:
>
> Hi all,
>
> Is there any chance that glance can use the multiprocessing from
> swiftclient library (equivalent of xxx-threads options from cli)?
> If yes, how to enable it?
> I did not find anything useful in the glance configuration options.
> And looking at glance_store code make me think that it's not
> possible...
> Am I wrong?
>
> Regards,
> Arnaud
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] plans on testing minor updates?

2017-09-28 Thread Emilien Macchi
On Thu, Sep 28, 2017 at 9:22 AM, Wesley Hayutin  wrote:
[...]
> OK.. I think the solution is to start migrating these jobs to RDO Software
> Factory third party testing.
>
> Here is what I propose:
> 1. Start with an experiment check job
> https://review.rdoproject.org/r/#/c/9823/
> This will help us confirm that everything works or fails as we expect.  We
> are
> also afforded a configurable timeout \0/. It's currently set to 360 minutes
> for the overcloud upgrade jobs.
>
> 2. Once this is proven out, we can run upgrade jobs as third party on any
> review upstream
>
> 3. New coverage should be prototyped in RDO Software Factory
>
> 4. If jobs prove to be reliable and consistent and run under 170 minutes we
> move what
> we can back upstream.
>
> WDYT?

I think this is mega cool, although your work is related to *Upgrades*
and not minor updates, but still super cool.

Note: FTR we discussed on IRC that we would probably do the same kind
of thing for minor updates testing.

Thanks Wes,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-sig/news

2017-09-28 Thread Ed Leafe
Greetings OpenStack community,

It was a quiet meeting this week, probably due to elmiko being absent. And 
probably also due to cdent and edleafe being consumed by work outside of the 
SIG. We did note that we are looking forward to our expanded role with the 
addition of the SDK developers into the SIG, but so far not much has happened 
along these lines.

# Newly Published Guidelines

None

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week

# Guidelines Currently Under Review [3]

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are 
developing or changing, please address your concerns in an email to the 
OpenStack developer mailing list[1] with the tag "[api]" in the subject. In 
your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page 
[4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] plans on testing minor updates?

2017-09-28 Thread Wesley Hayutin
On Thu, Sep 28, 2017 at 12:32 PM, Emilien Macchi  wrote:

> On Thu, Sep 28, 2017 at 9:22 AM, Wesley Hayutin 
> wrote:
> [...]
> > OK.. I think the solution is to start migrating these jobs to RDO
> Software
> > Factory third party testing.
> >
> > Here is what I propose:
> > 1. Start with an experiment check job
> > https://review.rdoproject.org/r/#/c/9823/
> > This will help us confirm that everything works or fails as we expect.
> We
> > are
> > also afforded a configurable timeout \0/. It's currently set to 360
> minutes
> > for the overcloud upgrade jobs.
> >
> > 2. Once this is proven out, we can run upgrade jobs as third party on any
> > review upstream
> >
> > 3. New coverage should be prototyped in RDO Software Factory
> >
> > 4. If jobs prove to be reliable and consistent and run under 170 minutes
> we
> > move what
> > we can back upstream.
> >
> > WDYT?
>
> I think this is mega cool, although your work is related to *Upgrades*
> and not minor updates but still super cool.
>
> Note: FTR we discussed on IRC that we would probably do the same kind
> of thing for minor updates testing.
>
> Thanks Wes,
> --
> Emilien Macchi
>

Right, I'm going to first attempt to get what we *have* running, and then
get the new jobs
we need in there as well. :))


>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs][ptls][all] documentation retention policy changes

2017-09-28 Thread Doug Hellmann
At the Queens PTG in Denver the documentation team members present
discussed a new retention policy for content published to
docs.openstack.org. I have a spec up for review to document that
policy and the steps needed to implement it. This policy will affect
all projects, now that most of the documentation is managed by
project teams. Please take a few minutes to review it, or at the
very least have your documentation team liaison do so.

https://review.openstack.org/#/c/507629

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how does UEFI booting of VM manage per-instance copies of OVMF_VARS.fd ?

2017-09-28 Thread Jay Pipes

On 09/27/2017 09:09 AM, Waines, Greg wrote:

Hey there ... a question about UEFI booting of VMs.

i.e.

glance image-create --file cloud-2730.qcow --disk-format qcow2
--container-format bare --property "hw-firmware-type=uefi" --name
clear-linux-image


in order to specify that you want to use UEFI (instead of BIOS) when 
booting VMs with this image


i.e. /usr/share/OVMF/OVMF_CODE.fd
     /usr/share/OVMF/OVMF_VARS.fd

and I believe you can boot into the UEFI Shell, i.e. to change UEFI
variables in NVRAM (OVMF_VARS.fd), by booting the VM with
/usr/share/OVMF/UefiShell.iso as a CD ...
e.g. to change Secure Boot keys or something like that.

My QUESTION ...

- how does NOVA manage a unique instance of OVMF_VARS.fd for each instance?
  - I believe OVMF_VARS.fd is supposed to be used only as a template, and is
    supposed to be copied to make a unique instance for each VM that UEFI
    boots
  - how does NOVA manage this?
    - e.g. is the unique instance of OVMF_VARS.fd created in
      /etc/nova/instances//  ?
  - ... and does this get migrated to another compute if the VM is migrated?


Hi Greg,

I think the following part of the code essentially sums up what you're 
experiencing [1]:


LOG.warning("uefi support is without some kind of "
"functional testing and therefore "
"considered experimental.")

[1] 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4530-L4532


From what I can tell, the bootloader is hardcoded to 
"/usr/share/OVMF/OVMF_CODE.fd" for x86_64:


https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L130

https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4534-L4535

and I see no way to change it via a configuration variable...
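
For background, outside of Nova this per-instance copy is normally libvirt's
job: when a domain is defined with a pflash loader and an <nvram> element,
libvirt copies the VARS template into a per-domain file. A sketch of the
relevant guest XML (paths illustrative):

    <os>
      <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
      <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/instance-00000001_VARS.fd</nvram>
    </os>

Whether Nova emits this element, and whether the per-domain file follows the
instance on migration, is exactly what the hardcoded paths above leave
unclear.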

Yet another half-baked, completely untested "feature" added to Nova. :(

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate] multi domain usage for handlers

2017-09-28 Thread Graham Hayes


On 28/09/17 17:06, Kim-Norman Sahm wrote:
> Hi,
> 
> i'm currently testing designate and i have a question about the
> architecture.
> We're using openstack newton with keystone v3 and thus the keystone
> domain/project structure.
> 
> I've tried the global nova_fixed and neutron_floating_ip handlers but
> all dns records (for each domains/projects) are stored in the same dns
> domain (instance1.novafixed.example.com and
> anotherinstance.neutronfloatingip.example.com).
> Is it possible to define a separate DNS domain for each keystone
> domain/project and auto-assign the instances to this domain?
> example: openstack domain "customerA.com" with projects "prod" and
> "dev". instance1 starts in project "dev" and the dns record is
> instance1.dev.customerA.com
> 
> Best regards
> Kim
> 

Hi Kim,

Unfortunately, with the default handlers, there is no way of assigning
them to different projects.

We also mark any recordsets created by designate-sink as "managed" -
this means that normal users cannot modify them, an admin has to update
them, with the `--all-projects` and `--edit-managed` flags.

The modules provided are only designed to be examples. We expected any
users would end up writing their own handlers [0].

You should also look at the neutron / designate integration [1] as it
may do what you need.

Thanks,

Graham

0 -
https://github.com/openstack/designate/tree/master/contrib/designate-ext-samplehandler

1 -
https://docs.openstack.org/ocata/networking-guide/config-dns-int.html#integration-with-an-external-dns-service
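
For a flavor of what writing such a handler involves, a rough sketch in the
shape of the sample handler [0] (method names follow the Newton-era handler
base classes and should be double-checked against that sample; the
per-project zone lookup is the deployment-specific part left as a comment):

    from oslo_log import log as logging

    from designate.notification_handler.base import BaseAddressHandler

    LOG = logging.getLogger(__name__)


    class PerProjectHandler(BaseAddressHandler):
        """Illustrative only: create records in a per-project zone."""

        __plugin_name__ = 'per_project'

        def get_event_types(self):
            return ['compute.instance.create.end',
                    'compute.instance.delete.start']

        def get_exchange_topics(self):
            return 'nova', ['notifications']

        def process_notification(self, context, event_type, payload):
            # Map payload['tenant_id'] to the zone created for that
            # project, then call the base class helpers to add or
            # remove records, as the sample handler does.
            LOG.debug('Handling %s for project %s',
                      event_type, payload.get('tenant_id'))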

> 
> Kim-Norman Sahm
> Cloud & Infrastructure(OCI)
> 
> noris network AG
> Thomas-Mann-Straße 16-20
> 90471 Nürnberg
> Deutschland
> 
> Tel +49 911 9352 1433
> Fax +49 911 9352 100
> 
> kim-norman.s...@noris.de
> 
> https://www.noris.de - Mehr Leistung als Standard
> Vorstand: Ingo Kraupa (Vorsitzender), Joachim Astel
> Vorsitzender des Aufsichtsrats: Stefan Schnabel - AG Nürnberg HRB 17689
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-09-28 Thread Jesse Pretorius
On 9/28/17, 5:11 PM, "Doug Hellmann"  wrote:

> In the past we had trouble checking those files into git and gating
> against the results being "up to date" or not changing in any way
> because configuration options that end up in the file are defined in
> libraries used by the services. So as long as the implementation you're
> considering does not check configuration files into git, but generates
> them and then inserts them into the package, it should be fine.

I’m guessing that the auto-generated files you’re referring to are the .conf
files? For the most part, those are being left out of my proposed patches
unless the project team specifically requests their inclusion. My patches are
focused on the far more static files - policy.json if it exists (yes,
policy-in-code will remove those in time), api-paste, rootwrap.conf and the
rootwrap.d contents. As far as I know none of these are auto-generated. If
they are, I’m all ears to learn how!




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] gate-grenade-dsvm-neutron-multinode-live-migration-nv job broken since ~8/18

2017-09-28 Thread Matt Riedemann
I just noticed this today, but the 
gate-grenade-dsvm-neutron-multinode-live-migration-nv job in the nova 
check queue has been 100% fail since around August 18th.


I've reported a bug with the details:

https://bugs.launchpad.net/nova/+bug/1720191

It has something to do with test_live_block_migration on microversion 
2.1 passing block_migration=False and the live migration starting from 
the queens node and failing on the pike node.


The 2.25 microversion version of that test, which passes 
block_migration='auto', is successful.


I've proposed a change to move the job to the experimental queue for nova:

https://review.openstack.org/#/c/508244/

I hate to see this not working since it's pretty valuable in testing 
that we are live migrating successfully between N-1 version and N 
version compute nodes, both ways.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia] haproxy fails to receive datagram

2017-09-28 Thread Michael Johnson
Hi Yipei,
Even running through neutron-lbaas I get the same successful test.

Just to double check, you are using the Octavia driver?

stack@devstackpy27-2:~$ sudo ip netns exec
qdhcp-4bcefe3e-038f-4a77-af4f-a560b6316a7a curl 172.21.1.16
Welcome to 172.21.1.17 connection 3

Michael

On Thu, Sep 28, 2017 at 7:46 AM, Yipei Niu  wrote:
> Hi, Michael,
>
> Thanks a lot. Look forward  to your further test. I try deploying a new
> environment, too. Hope it can work well this time.
>
> Best regards,
> Yipei
>
> On Wed, Sep 27, 2017 at 10:27 AM, Yipei Niu  wrote:
>>
>> Hi, Michael,
>>
>> The instructions are listed as follows.
>>
>> First, create a net1.
>> $ neutron net-create net1
>> $ neutron subnet-create net1 10.0.1.0/24 --name subnet1
>>
>> Second, boot two vms in net1
>> $ nova boot --flavor 1 --image $image_id --nic net-id=$net1_id vm1
>> $ nova boot --flavor 1 --image $image_id --nic net-id=$net1_id vm2
>>
>> Third, logon to the two vms, respectively. Here take vm1 as an example.
>> $ MYIP=$(ifconfig eth0|grep 'inet addr'|awk -F: '{print $2}'| awk '{print
>> $1}')
>> $ while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo
>> nc -l -p 80 ; done&
>>
>> Fourth, exit vms and update the default security group shared by the vms
>> by adding a rule of allowing traffic to port 80.
>> $ neutron security-group-rule-create --direction ingress --protocol tcp
>> --port-range-min 80 --port-range-max 80 --remote-ip-prefix 0.0.0.0/0
>> $default_security_group
>> Note: make sure "sudo ip netns exec $qdhcp-net1_id curl -v $vm_ip" works.
>> In other words, make sure the vms can accept HTTP requests and return its
>> IP, respectively.
>>
>> Fifth, create a lb, a listener, and a pool. Then add the two vms to the
>> pool as members.
>> $ neutron lbaas-loadbalancer-create --name lb1 subnet1
>> $ neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTP
>> --protocol-port 80 --name listener1
>> $ neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener
>> listener1 --protocol HTTP --name pool1
>> $ neutron lbaas-member-create --subnet subnet1 --address $vm1_ip
>> --protocol-port 80 pool1
>> $ neutron lbaas-member-create --subnet subnet1 --address $vm2_ip
>> --protocol-port 80 pool1
>>
>> Finally, try "sudo ip netns exec qdhcp-net1_id curl -v $VIP" to see whether
>> lbaas works.
>>
>> Best regards,
>> Yipei
>>
>> On Wed, Sep 27, 2017 at 1:30 AM, Yipei Niu  wrote:
>>>
>>> Hi, Michael,
>>>
>>> I think the octavia is the latest, since I pull the up-to-date repo of
>>> octavia manually to my server before installation.
>>>
>>> Anyway, I run "sudo ip netns exec amphora-haproxy ip route show table 1"
>>> in the amphora, and find that the route table exists. The info is listed as
>>> follows.
>>>
>>> default via 10.0.1.1 dev eth1 onlink
>>>
>>> I think it may not be the source.
>>>
>>> Best regards,
>>> Yipei
>>
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [k8s][deployment][kolla-kubernetes][magnum][kuryr][zun][qa][api] Proposal for SIG-K8s

2017-09-28 Thread Chris Hoge

> On Sep 18, 2017, at 12:54 PM, Hongbin Lu  wrote:
> 
> Hi Chris,
>  
> Sorry I missed the meeting since I was not in PTG last week. After a quick 
> research on the mission of SIG-K8s, I think we (the OpenStack Zun team) have 
> an item that fits well into this SIG, which is the k8s connector feature:
>  
>   https://blueprints.launchpad.net/zun/+spec/zun-connector-for-k8s 
> 
>  
> I added it to the etherpad and hope it will be well accepted by the SIG.

Of course it is welcome and accepted. Given the length of the subject line
calling out groups, I propose shortening the sig-k8s tag to just [k8s].
This will make the subject line more meaningful in
conveying the intent of the message, and every team and person that
participates in the sig has a simple catch-all tag to search against.

My intention was never to make any individual or team feel excluded. I
apologize if my oversight was read in any other way. Going forward
this simplification should be read as implying the inclusion of all teams
and individuals working with Kubernetes in OpenStack.

Sincerely,
-Chris

>  
> Best regards,
> Hongbin
>  
> From: Chris Hoge [mailto:ch...@openstack.org] 
> Sent: September-15-17 12:25 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] 
> [k8s][deployment][kolla-kubernetes][magnum][kuryr][qa][api] Proposal for 
> SIG-K8s
>  
> Link to the etherpad for the upcoming meeting.
>  
> https://etherpad.openstack.org/p/queens-ptg-sig-k8s 
> 
>  
>  
> On Sep 14, 2017, at 10:23 AM, Chris Hoge  > wrote:
>  
> This Friday, September 15 at the PTG we will be hosting an organizational
> meeting for SIG-K8s. More information on the proposal, meeting time, and
> remote attendance is in the openstack-sigs mailing list [1].
> 
> Thanks,
> Chris Hoge
> Interop Engineer
> OpenStack Foundation
> 
> [1] 
> http://lists.openstack.org/pipermail/openstack-sigs/2017-September/51.html
>  
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
> ?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
>  
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Cinder][third-party][ci] Tintri Cinder CI failure

2017-09-28 Thread Eric Harney
On 09/28/2017 12:22 PM, Apoorva Deshpande wrote:
> It appears that Cinder started using NFS locks around Sept 19th. That
> resulted in our CI failures as we don't support it. Tempest tests succeeded
> when we added a nolock option in the NFS configuration[1].
> 
> Can someone provide more information on this change?
> 

I'm not sure there was a change in Cinder that would result in this
error message:

Command: /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824
--cpu=8 -- sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C
qemu-img info
/opt/stack/data/cinder/mnt/a67d7d4be86399df850bfa711f7837f7/volume-43b54c3e-ae10-47b1-8a43-c9427551f923
Sep 26 12:14:13.919553 303-openstack-test2 cinder-volume[19641]: Exit
code: 1

Stderr: u"qemu-img: Could not open
'/opt/stack/data/cinder/mnt/a67d7d4be86399df850bfa711f7837f7/volume-43b54c3e-ae10-47b1-8a43-c9427551f923':
Failed to lock byte 100\n"

You may need to look at what else changed on the system -- qemu-img
versions, NFS utilities, etc.
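
FWIW, "Failed to lock byte" reads like the image locking QEMU introduced in
2.10, which would point at a qemu upgrade on the CI nodes rather than a
Cinder change. A quick way to check (the -U/--force-share flag assumes
QEMU >= 2.10; the volume path is illustrative):

    qemu-img --version

    # info takes an exclusive lock by default on 2.10+;
    # -U skips the lock for read-only inspection
    qemu-img info -U /opt/stack/data/cinder/mnt/<hash>/volume-<uuid>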

> Thanks,
> Apoorva
> 
> [1] http://openstack-ci.tintri.com/tintri/refs-changes-09-504009-2/
> 
> On Tue, Sep 26, 2017 at 1:28 PM, Apoorva Deshpande 
> wrote:
> 
>> I patched sos-ci and logs are available now [1]. First exception
>> occurrence I spot in c-vol.txt is here [2]
>>
>> [1] http://openstack-ci.tintri.com/tintri/refs-changes-59-507359-1/logs/
>> [2] http://paste.openstack.org/show/621983/
>>
>> On Mon, Sep 25, 2017 at 11:32 PM, Silvan Kaiser 
>> wrote:
>>
>>> Hi Apoorva!
>>> The test run is sadly missing the service logs, probably because you're
>>> using a current DevStack (systemd based services) but an older sos-ci
>>> version? If you apply
>>> https://github.com/j-griffith/sos-ci/commit/f0f2ce2e2f2b12727ee5aa75a751376dcc1ea3a4 you should
>>> be able to get the logs for new test runs. This will help debugging this.
>>> Best
>>> Silvan
>>>
>>>
>>>
>>> 2017-09-26 1:54 GMT+02:00 Apoorva Deshpande :
>>>
 Hello,

 Tintri's Cinder CI started failing around Sept 19, 2017. There are 29
 tests failing[1] with following errors [2][3][4]. Tintri Cinder driver
 inherit nfs cinder driver and it's available here[5].

 Please let me know if anyone has recently seen these failures or has any
 pointers on how to fix.

 Thanks,
 Apoorva

 IRC: Apoorva

 [1] http://openstack-ci.tintri.com/tintri/refs-changes-57-505357-1/testr_results.html
 [2] http://paste.openstack.org/show/621886/
 [3] http://paste.openstack.org/show/621858/
 [4] http://paste.openstack.org/show/621857/
 [5] https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/tintri.py

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.op
 enstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> --
>>> Dr. Silvan Kaiser
>>> Quobyte GmbH
>>> Hardenbergplatz 2, 10623 Berlin - Germany
>>> +49-30-814 591 800 - www.quobyte.com
>>> Amtsgericht Berlin-Charlottenburg, HRB 149012B
>>> Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
>>>
>>> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Sydney Forum Session Proposals

2017-09-28 Thread Lance Bragstad
Hey all,

In the weekly meeting on Tuesday, we talked about possible forum
sessions for Sydney. I proposed the following based on the etherpad [0].

  * Keystone User & Operator Feedback [1]
  * Application Credentials Feedback [2]
  * RBAC/Policy Roadmap Feedback [3]

We decided to omit the last possible proposal, since we had a similar
session in Boston and it didn't really go anywhere. If there are any
other topics keystone should have, feel free to make a proposal. The
proposal deadline is tomorrow.

Thanks!

[0] https://etherpad.openstack.org/p/SYD-keystone-forum-sessions
[1] http://forumtopics.openstack.org/cfp/details/36
[2] http://forumtopics.openstack.org/cfp/details/39
[3] http://forumtopics.openstack.org/cfp/details/37



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][update] TRIPLEO_CONFIG_HASH not generated, no update of service

2017-09-28 Thread Janki Chhatbar
Hi

I understand that during an update, paunch restarts containers whenever the
configuration hash changes. TRIPLEO_CONFIG_HASH [1] is generated based on the
config value specified [2], which defaults to /var/lib/config-data/.
Many services specify a path under /var/lib/config-data/puppet-generated/
([3] for example). Hence the hash is not generated, and an update would fail
for such services.
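
To illustrate the mechanism (a simplified sketch, not the exact
docker-puppet.py code): the hash is derived from the file contents under the
config volume prefix, and it is compared between runs to decide whether a
container needs a restart. If a service's files live outside the hashed
prefix, the digest never changes:

    import hashlib
    import os

    def config_volume_hash(path):
        """Recursively hash the contents of a config volume."""
        md5 = hashlib.md5()
        for root, dirs, files in os.walk(path):
            dirs.sort()  # make the walk order deterministic
            for name in sorted(files):
                with open(os.path.join(root, name), 'rb') as f:
                    md5.update(f.read())
        return md5.hexdigest()

    # e.g. config_volume_hash('/var/lib/config-data/opendaylight')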

Solution:
1. Replace all /var/lib/config-data/puppet-generated/ with
/var/lib/config-data/ in the THTs. The downside is that not all files present
there need to be mounted to /var/lib/kolla/config_files/src:ro.
2. Pass CONFIG_VOLUME_PREFIX for all relevant services so that
docker-puppet.py gets the correct path.

I have raised a bug for this [4]. This is very important for updates to
work properly for all services.

Looking forward to hear from the community.

-- 
Thanking you

Janki Chhatbar
OpenStack | Docker | SDN
simplyexplainedblog.wordpress.com


[1].
https://github.com/openstack/tripleo-heat-templates/blob/master/docker/docker-puppet.py#L377
[2].
https://github.com/openstack/tripleo-heat-templates/blob/master/docker/docker-puppet.py#L362
[3].
https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/opendaylight-api.yaml#L101
[4]. https://bugs.launchpad.net/tripleo/+bug/1720208
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-21, September 29 - October 6

2017-09-28 Thread Sean McGinnis

Welcome to our regular release countdown email.

Development Focus
-

Team focus should be on spec approval and implementation for priority features.

General Information
---

Just one more mention - teams should review their release liaison information
and make sure it is up to date [1]. We would love to have all the liaisons
attend the release team meeting every Friday [2].

[1] https://wiki.openstack.org/wiki/CrossProjectLiaisons
[2] http://eavesdrop.openstack.org/#Release_Team_Meeting

We are already two weeks out from the Q-1 milestone. Please be aware of project
specific deadlines leading up to this milestone.

As mentioned, Newton EOL is coming up the week before Q-1. To wrap things up
for that, any final Newton library releases should be done this week to give a
small window before any final Newton service releases.

Upcoming Deadlines & Dates
--

Queens-1 milestone: October 19 (R-19 week)
Forum at OpenStack Summit in Sydney: November 6-8
Last Newton Library releases September 25-29 (R-22)
Newton Branch EOL October 11th (R-20 week)

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][update] TRIPLEO_CONFIG_HASH not generated, no update of service

2017-09-28 Thread Janki Chhatbar
On Fri, Sep 29, 2017 at 12:04 AM, Janki Chhatbar 
wrote:

> Hi
>
> I understand during an update, paunch restarts containers whenever the
> hash of image is changed. TRIPLEO_CONFIG_HASH [1] is generated based on
> the config value specified [2] which is default to
> /var/lib/config-data/. Many services specify path at
> /var/lib/config-data/puppet-generated/ ([3] for example).  Hence
> the hash is not generated and update would fail for such services.
>
> Solution:
> 1. Replace all /var/lib/config-data/puppet-generated/ with 
> /var/lib/config-data/
> in THTs. Downside side is not all files present here need to be mount to
> /var/lib/kolla/config_files/src:ro
> 2. Pass CONFIG_VOLUME_PREFIX for all relevant services for
> docker-puppet.py to get correct path.
>
from the relevant service's THT.

>
> I have raised a bug for this [4]. This is very important for updates to
> work properly for all services.
>
> Looking forward to hear from the community.
>
> --
> Thanking you
>
> Janki Chhatbar
> OpenStack | Docker | SDN
> simplyexplainedblog.wordpress.com
>
>
> [1]. https://github.com/openstack/tripleo-heat-templates/blob/
> master/docker/docker-puppet.py#L377
> [2]. https://github.com/openstack/tripleo-heat-templates/blob/
> master/docker/docker-puppet.py#L362
> [3]. https://github.com/openstack/tripleo-heat-templates/blob/
> master/docker/services/opendaylight-api.yaml#L101
> [4]. https://bugs.launchpad.net/tripleo/+bug/1720208
>



-- 
Thanking you

Janki Chhatbar
OpenStack | Docker | SDN
simplyexplainedblog.wordpress.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how does UEFI booting of VM manage per-instance copies of OVMF_VARS.fd ?

2017-09-28 Thread Steve Gordon
- Original Message -
> From: "Jay Pipes" 
> To: openstack-dev@lists.openstack.org
> Sent: Thursday, September 28, 2017 12:53:16 PM
> Subject: Re: [openstack-dev] [nova] how does UEFI booting of VM manage 
> per-instance copies of OVMF_VARS.fd ?
> 
> On 09/27/2017 09:09 AM, Waines, Greg wrote:
> > Hey there ... a question about UEFI booting of VMs.
> > 
> > i.e.
> > 
> > glance image-create --file cloud-2730.qcow --disk-format qcow2
> > --container-format bare --property “hw-firmware-type=uefi” --name
> > clear-linux-image
> > 
> > in order to specify that you want to use UEFI (instead of BIOS) when
> > booting VMs with this image
> > 
> > i.e. /usr/share/OVMF/OVMF_CODE.fd
> >      /usr/share/OVMF/OVMF_VARS.fd
> > 
> > and I believe you can boot into the UEFI Shell, i.e. to change UEFI
> > variables in NVRAM (OVMF_VARS.fd) by
> > 
> > booting VM with /usr/share/OVMF/UefiShell.iso in cd ...
> > 
> > e.g. to change Secure Boot keys or something like that.
> > 
> > My QUESTION ...
> > 
> > - how does NOVA manage a unique instance of OVMF_VARS.fd for each instance?
> >   - I believe OVMF_VARS.fd is supposed to be used only as a template, and
> >     is supposed to be copied to make a unique instance for each VM that UEFI
> >     boots
> >   - how does NOVA manage this?
> >     - e.g. is the unique instance of OVMF_VARS.fd created in
> >       /etc/nova/instances//  ?
> >   - ... and does this get migrated to another compute if the VM is migrated?
> 
> Hi Greg,
> 
> I think the following part of the code essentially sums up what you're
> experiencing [1]:
> 
> LOG.warning("uefi support is without some kind of "
>  "functional testing and therefore "
>  "considered experimental.")
> 
> [1]
> https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4530-L4532
> 
>  From what I can tell, the bootloader is hardcoded to
> "/usr/share/OVMF/OVMF_CODE.fd" for x86_64:
> 
> https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L130
> 
> https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L4534-L4535
> 
> and I see no way to change it via a configuration variable...
> 
> Yet another half-baked, completely untested "feature" added to Nova. :(
> 
> -jay

Pretty much, just enough to convince folks it could work without enough for it
to actually... work. Kashyap was looking at this recently and has this WIP
specification up for further discussion of how to best clean this up:

https://review.openstack.org/#/c/506720/

It's not clear to me that this covers all of the above issues as yet. As noted 
the existing implementation will only work with a bootloader path that lines up 
perfectly with what is hardcoded, and even with the distro included ones that 
is not necessarily the case.

Thanks,

-- 
Steve Gordon,
Principal Product Manager,
Red Hat OpenStack Platform

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Marking Cisco Fibre Channel Zone Manager Unsupported ...

2017-09-28 Thread Jay S Bryant

All,

I am writing to make everyone aware that we have had to move the Cisco 
Fibre Channel Zone Manager driver to the unsupported and deprecated status.


CI has not run successfully for the better part of the last year and as 
per Cinder's compliance policies, the driver needs to be deprecated and 
will be removed in the Rocky release if the problem is not corrected.


Cisco has not been able to keep a 3rd Party CI running and the storage 
vendor who had been running the CI for Cisco is no longer able to 
maintain the CI.


If you are a storage vendor or consumer who would like to volunteer to
keep the CI running again to avoid removal, the Cinder team would welcome
your support.  Send me an e-mail or find me in IRC if you would like to
discuss this.  Otherwise, we will be removing the driver in the next
release.


- Jay

(jungleboyj)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][castellan] oslo.config secret store drivers

2017-09-28 Thread Doug Hellmann
I have updated the old oslo.config drivers spec [1] to remove a bunch of
the information about etcd and focus on the secret store use case we
discussed at the PTG in queens. I think this work is a prerequisite for
the plaintext secrets spec [2] work, because castellan already depends
on oslo.config so we can't have oslo.config invoke castellan directly.

As it stands, the spec itself probably needs some more work.  I can
assist with the design, implementation advice, and code reviews,
but I don't expect to have time to do the implementation work myself
this cycle. If you are interested in helping, please leave a comment
on the review [1].

Doug

[1] https://review.openstack.org/#/c/454897
[2] https://review.openstack.org/#/c/474304

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance][Security] Secure Hash Algorithm Spec

2017-09-28 Thread McClymont Jr, Scott
Hey All,

I've got a spec up for a change I want to implement in Glance for Queens to
enhance the current checksum (md5) functionality with a stronger hash
algorithm. I'm going to do this in such a way that it is easily altered in
the future for new algorithms as they are released.  I'd appreciate it if
someone on the security team could look it over and comment. Thanks.

Review: https://review.openstack.org/#/c/507568/
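
The idea in the spec boils down to computing a second, configurable digest
alongside the legacy md5 while the image data streams through once; a minimal
sketch of that shape (the algorithm name and chunk size here are illustrative,
not what the spec mandates):

    import hashlib

    CHUNK = 64 * 1024

    def checksums(image_file, algo='sha512'):
        """Stream the image once, computing legacy md5 plus a stronger hash."""
        md5 = hashlib.md5()
        strong = hashlib.new(algo)  # easy to swap for future algorithms
        for chunk in iter(lambda: image_file.read(CHUNK), b''):
            md5.update(chunk)
            strong.update(chunk)
        return md5.hexdigest(), algo, strong.hexdigest()

Storing the algorithm name next to the digest is what keeps the scheme easily
altered for new algorithms later.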

-- 
Scott McClymont
Sr. Software Engineer
Verizon Cloud Platform
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-09-28 Thread Doug Hellmann
Excerpts from Jesse Pretorius's message of 2017-09-28 17:17:55 +:
> On 9/28/17, 5:11 PM, "Doug Hellmann"  wrote:
> 
> > In the past we had trouble checking those files into git and gating
> > against the results being "up to date" or not changing in any way
> > because configuration options that end up in the file are defined in
> > libraries used by the services. So as long as the implementation you're
> > considering does not check configuration files into git, but generates
> > them and then inserts them into the package, it should be fine.
> 
> I’m guessing that the auto-generated files you’re referring to are the .conf 
> files? For the most part, those are being left out of my proposed patches 
> unless the project team specifically requests their inclusion. My patches are 
> focused on the far more static files - policy.json if it exists (yes, the 
> policy-in-code will remove those in time), api-paste, rootwrap.conf and the 
> rootwrap.d contents. As far as I know none of these are auto-generated. If 
> they are, I’m all ears to learn how!
> 

Ah, yes, I was talking about oslo.config files but those other files
are more static and shouldn't present the same issue.

Doug
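
For reference, the mechanism under discussion is pbr's data_files support
in setup.cfg. A hedged example (the paths and project name are
illustrative, not taken from any of the actual patches):

    [files]
    data_files =
        etc/myservice =
            etc/api-paste.ini
            etc/rootwrap.conf
        etc/myservice/rootwrap.d = etc/rootwrap.d/*

pbr installs these relative to the installation prefix (which is how they
can end up under usr/etc, as noted elsewhere in this thread), and
packagers relocate them as needed.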



Re: [openstack-dev] vGPUs support for Nova - Implementation

2017-09-28 Thread Jay Pipes

On 09/28/2017 11:37 AM, Sahid Orentino Ferdjaoui wrote:

Please consider the support of MDEV for the /pci framework which
provides support for vGPUs [0].

According to the discussion [1]

With this first implementation which could be used as a skeleton for
implementing PCI Devices in Resource Tracker


I'm not entirely sure what you're referring to above as "implementing 
PCI devices in Resource Tracker". Could you elaborate? The resource 
tracker already embeds a PciManager object that manages PCI devices, as 
you know. Perhaps you meant "implement PCI devices as Resource Providers"?



we provide support for
attaching vGPUs to guests, and also provide affinity per NUMA
node. Another important point is that this implementation can take
advantage of ongoing specs like PCI NUMA policies.

* The Implementation [0]

[PATCH 01/13] pci: update PciDevice object field 'address' to accept
[PATCH 02/13] pci: add for PciDevice object new field mdev
[PATCH 03/13] pci: generalize object unit-tests for different
[PATCH 04/13] pci: add support for mdev device type request
[PATCH 05/13] pci: generalize stats unit-tests for different
[PATCH 06/13] pci: add support for mdev devices type devspec
[PATCH 07/13] pci: add support for resource pool stats of mdev
[PATCH 08/13] pci: make manager to accept handling mdev devices

In this series of patches we are generalizing the PCI framework to
handle MDEV devices. We grant that it's a lot of patches, but most of
them are small and the logic behind them is basically to make the
framework understand two new fields, MDEV_PF and MDEV_VF.


That's not really "generalizing the PCI framework to handle MDEV 
devices" :) More like it's just changing the /pci module to understand a 
different device management API, but ok.



[PATCH 09/13] libvirt: update PCI node device to report mdev devices
[PATCH 10/13] libvirt: report mdev resources
[PATCH 11/13] libvirt: add support to start vm with using mdev (vGPU)

In this series of patches we make the libvirt driver, as usual,
return resources and attach devices provided by the PCI manager. This
part can be reused for Resource Providers.


Perhaps, but the idea behind the resource providers framework is to 
treat devices as generic things. Placement doesn't need to know about 
the particular device attachment status.



[PATCH 12/13] functional: rework fakelibvirt host pci devices
[PATCH 13/13] libvirt: resuse SRIOV funtional tests for MDEV devices

Here we reuse 100% of the functional tests used for SR-IOV
devices. Again, this part can be reused for Resource Providers.


Probably not, but I'll take a look :)

For the record, I have zero confidence in any existing "functional" 
tests for NUMA, SR-IOV, CPU pinning, huge pages, and the like. 
Unfortunately, these features often require hardware that the upstream 
community CI lacks, or depend on libraries, drivers and kernel versions 
that really aren't available to non-bleeding-edge users (or users with 
very deep pockets).



* The Usage

There is no difference between SR-IOV and MDEV from the operator's
point of view: operators who know how to expose SR-IOV devices in Nova
already know how to expose MDEV devices (vGPUs).

Operators will be able to expose MDEV devices in the same manner as
they expose SR-IOV:

  1/ Configure whitelist devices

  ['{"vendor_id":"10de"}']

  2/ Create aliases

  [{"vendor_id":"10de", "name":"vGPU"}]

  3/ Configure the flavor

  openstack flavor set --property "pci_passthrough:alias"="vGPU:1"
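
  Put together, in nova.conf terms the above corresponds roughly to the
  Pike-era [pci] options (a sketch only; "vGPU-flavor" is a made-up
  flavor name):

    [pci]
    passthrough_whitelist = [{"vendor_id": "10de"}]
    alias = {"vendor_id": "10de", "name": "vGPU"}

    $ openstack flavor set --property "pci_passthrough:alias"="vGPU:1" vGPU-flavor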

* Limitations

The mdev does not provide 'product_id' but 'mdev_type', which should be
used to identify exactly which resource users can request, e.g.
nvidia-10. To provide that support we have to add a new field
'mdev_type' so aliases could be something like:

  {"vendor_id":"10de", "mdev_type":"nvidia-10", "name":"alias-nvidia-10"}
  {"vendor_id":"10de", "mdev_type":"nvidia-11", "name":"alias-nvidia-11"}

I do plan to add that, but first I need support from upstream
to continue this work.


As mentioned in IRC and the previous ML discussion, my focus is on the 
nested resource providers work and reviews, along with the other two 
top-priority scheduler items (move operations and alternate hosts).


I'll do my best to look at your patch series, but please note it's lower 
priority than a number of other items.


One thing that would be very useful, Sahid, if you could get with Eric 
Fried (efried) on IRC and discuss with him the "generic device 
management" system that was discussed at the PTG. It's likely that the 
/pci module is going to be overhauled in Rocky and it would be good to 
have the mdev device management API requirements included in that 
discussion.


Best,
-jay




[0] 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:pci-mdev-support
[1] 
http://lists.openstack.org/pipermail/openstack-dev/2017-September/122591.html


Re: [openstack-dev] [nova] Running large instances with CPU pinning and OOM

2017-09-28 Thread Premysl Kouril
>
> Only the memory mapped for the guest is strictly allocated from the
> NUMA node selected. The QEMU overhead should float on the host NUMA
> nodes. So it seems that the "reserved_host_memory_mb" is enough.
>

Even if that were true and overhead memory could float across NUMA
nodes, it generally wouldn't prevent us from running into OOM trouble.
No matter where (in which NUMA node) the overhead memory gets
allocated, it is not included in the available-memory calculation for
that NUMA node when provisioning a new instance, and thus can cause OOM
(once the guest operating system of the newly provisioned instance
actually starts allocating memory, which can only come from its
assigned NUMA node).
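
To make the failure mode concrete, a toy calculation (all numbers are
invented for illustration):

    # Hypothetical NUMA node 0 on a compute host; sizes in MiB.
    node_total = 65536        # RAM physically attached to node 0
    guest_pinned = 64512      # pinned guest RAM the scheduler accounts for
    qemu_overhead = 1536      # emulator threads, I/O buffers, page tables

    seen_free = node_total - guest_pinned       # 1024 -> node looks fine
    real_demand = guest_pinned + qemu_overhead  # 66048 > node_total
    print(seen_free, real_demand > node_total)  # 1024 True -> OOM risk

    # reserved_host_memory_mb is subtracted host-wide, not per NUMA node,
    # so it cannot guarantee headroom on the specific node that needs it.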

Prema



Re: [openstack-dev] [nova] Running large instances with CPU pinning and OOM

2017-09-28 Thread Chris Dent

On Thu, 28 Sep 2017, Premysl Kouril wrote:


Only the memory mapped for the guest is strictly allocated from the
NUMA node selected. The QEMU overhead should float on the host NUMA
nodes. So it seems that the "reserved_host_memory_mb" is enough.



Even if that were true and overhead memory could float across NUMA
nodes, it generally wouldn't prevent us from running into OOM trouble.
No matter where (in which NUMA node) the overhead memory gets
allocated, it is not included in the available-memory calculation for
that NUMA node when provisioning a new instance, and thus can cause OOM
(once the guest operating system of the newly provisioned instance
actually starts allocating memory, which can only come from its
assigned NUMA node).


Some of the discussion on this bug may be relevant:

https://bugs.launchpad.net/nova/+bug/1683858

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [nova] Forum topics brainstorming

2017-09-28 Thread Matt Riedemann

On 9/21/2017 4:01 PM, Matt Riedemann wrote:
So this shouldn't be news now that I've read back through a few emails 
in the mailing list (I've been distracted with the Pike release, PTG 
planning, etc) [1][2][3] but we have until Sept 29 to come up with 
whatever forum sessions we want to propose.


There is already an etherpad for Nova [4].

The list of proposed topics is here [5]. The good news is we're not the 
last ones to this party.


So let's start throwing things on the etherpad and figure out what we 
want to propose as forum session topics. If memory serves, in Pike we 
were pretty liberal in what we proposed.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2017-September/121783.html 

[2] 
http://lists.openstack.org/pipermail/openstack-dev/2017-September/122143.html 

[3] 
http://lists.openstack.org/pipermail/openstack-dev/2017-September/122454.html 


[4] https://etherpad.openstack.org/p/SYD-nova-brainstorming
[5] http://forumtopics.openstack.org/



The deadline for Queens Forum topic submissions is tomorrow. Based on 
our etherpad:


https://etherpad.openstack.org/p/SYD-nova-brainstorming

I plan to propose something like:

1. Cells v2 update and direction

This would be an update on what happened in Pike, upgrade impacts, known 
issues, etc and what we're doing in Queens. I think we'd also lump the 
Pike quota behavior changes in here too if possible.


2. Placement update and direction

Same as the Cells v2 discussion - a Pike update and the focus items for 
Queens. This would also be a place we can mention the Ironic flavor 
migration to custom resource classes that happens in Pike.


3. Queens development focus and checkpoint

This would be a session to discuss anything in flight for Queens, what 
we're working on, and have a chance to ask questions of operators/users 
for feedback. For example, we plan to add vGPU support but it will be 
quite simple to start, similar with volume multi-attach.


4. Michael Still had an item in the etherpad about privsep. That could 
be a cross-project educational session on its own if he's going to give 
a primer on what privsep is again and how it's integrated into projects. 
This session could be lumped into #3 above but is probably better on 
its own if it's going to include discussion about operational impacts. 
I'm going to ask that mikal runs with this though.




There are some other things in the etherpad about hardware acceleration 
features and documentation, and I'll leave it up to others if they want 
to propose those sessions.


--

Thanks,

Matt



Re: [openstack-dev] [nova] Forum topics brainstorming

2017-09-28 Thread Michael Still
On Fri, Sep 29, 2017 at 7:45 AM, Matt Riedemann  wrote:

> On 9/21/2017 4:01 PM, Matt Riedemann wrote:
>
>> So this shouldn't be news now that I've read back through a few emails in
>> the mailing list (I've been distracted with the Pike release, PTG planning,
>> etc) [1][2][3] but we have until Sept 29 to come up with whatever forum
>> sessions we want to propose.
>>
>> There is already an etherpad for Nova [4].
>>
>> The list of proposed topics is here [5]. The good news is we're not the
>> last ones to this party.
>>
>> So let's start throwing things on the etherpad and figure out what we
>> want to propose as forum session topics. If memory serves, in Pike we
>> were pretty liberal in what we proposed.
>>
>> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-Sept
>> ember/121783.html
>> [2] http://lists.openstack.org/pipermail/openstack-dev/2017-Sept
>> ember/122143.html
>> [3] http://lists.openstack.org/pipermail/openstack-dev/2017-Sept
>> ember/122454.html
>> [4] https://etherpad.openstack.org/p/SYD-nova-brainstorming
>> [5] http://forumtopics.openstack.org/
>>
>>
> The deadline for Queens Forum topic submissions is tomorrow. Based on our
> etherpad:
>
> https://etherpad.openstack.org/p/SYD-nova-brainstorming
>
> I plan to propose something like:
>
> 1. Cells v2 update and direction
>
> This would be an update on what happened in Pike, upgrade impacts, known
> issues, etc and what we're doing in Queens. I think we'd also lump the Pike
> quota behavior changes in here too if possible.
>
> 2. Placement update and direction
>
> Same as the Cells v2 discussion - a Pike update and the focus items for
> Queens. This would also be a place we can mention the Ironic flavor
> migration to custom resource classes that happens in Pike.
>
> 3. Queens development focus and checkpoint
>
> This would be a session to discuss anything in flight for Queens, what
> we're working on, and have a chance to ask questions of operators/users for
> feedback. For example, we plan to add vGPU support but it will be quite
> simple to start, similar with volume multi-attach.
>
> 4. Michael Still had an item in the etherpad about privsep. That could be
> a cross-project educational session on its own if he's going to give a
> primer on what privsep is again and how it's integrated into projects. This
> session could be lumped into #3 above but is probably better on its own if
> it's going to include discussion about operational impacts. I'm going to
> ask that mikal runs with this though.
>

I have proposed http://forumtopics.openstack.org/cfp/details/41 for the
privsep discussion.

Michael


[openstack-dev] [nova] Queens spec review sprint next week

2017-09-28 Thread Matt Riedemann

Let's do a Queens spec review sprint.

What day works for people that review specs?

Monday came up in the team meeting today, but Tuesday could be good too 
since Mondays are generally evil.


--

Thanks,

Matt



Re: [openstack-dev] vGPUs support for Nova - Implementation

2017-09-28 Thread Dan Smith

In this series of patches we are generalizing the PCI framework to
handle MDEV devices. We grant that it's a lot of patches, but most of
them are small and the logic behind them is basically to make the
framework understand two new fields, MDEV_PF and MDEV_VF.


That's not really "generalizing the PCI framework to handle MDEV 
devices" :) More like it's just changing the /pci module to understand a 
different device management API, but ok.


Yeah, the series is adding more fields to our PCI structure to allow for 
more variations in the kinds of things we lump into those tables. This 
is my primary complaint with this approach, and has been since the topic 
first came up. I really want to avoid building any more dependency on 
the existing pci-passthrough mechanisms and focus any new effort on 
using resource providers for this. The existing pci-passthrough code is 
almost universally hated, poorly understood and tested, and something we 
should not be further building upon.



In this series of patches we make the libvirt driver, as usual,
return resources and attach devices provided by the PCI manager. This
part can be reused for Resource Providers.


Perhaps, but the idea behind the resource providers framework is to 
treat devices as generic things. Placement doesn't need to know about 
the particular device attachment status.


I quickly went through the patches and left a few comments. The base 
work of pulling some of this out of libvirt is there, but it's all 
focused on the act of populating pci structures from the vgpu 
information we get from libvirt. That code could be made to instead 
populate a resource inventory, but that's about the most of the set that 
looks applicable to the placement-based approach.


As mentioned in IRC and the previous ML discussion, my focus is on the 
nested resource providers work and reviews, along with the other two 
top-priority scheduler items (move operations and alternate hosts).


I'll do my best to look at your patch series, but please note it's lower 
priority than a number of other items.


FWIW, I'm not really planning to spend any time reviewing it 
until/unless it is retooled to generate an inventory from the virt driver.


With the two patches that report vgpus and then create guests with them 
when asked converted to resource providers, I think that would be enough 
to have basic vgpu support immediately. No DB migrations, model changes, 
etc required. After that, helping to get the nested-rps and traits work 
landed gets us the ability to expose attributes of different types of 
those vgpus and opens up a lot of possibilities. IMHO, that's work I'm 
interested in reviewing.


One thing that would be very useful, Sahid, if you could get with Eric 
Fried (efried) on IRC and discuss with him the "generic device 
management" system that was discussed at the PTG. It's likely that the 
/pci module is going to be overhauled in Rocky and it would be good to 
have the mdev device management API requirements included in that 
discussion.


Definitely this.

--Dan



Re: [openstack-dev] [Openstack-operators] [tc][nova][ironic][mogan] Evaluate Mogan project

2017-09-28 Thread Zhenguo Niu
I have proposed http://forumtopics.openstack.org/cfp/details/33 and will be
present. Thanks Thierry!

On Thu, Sep 28, 2017 at 9:48 PM, Thierry Carrez 
wrote:

> Erik McCormick wrote:
> > [...]
> > Also, if you'd like to discuss this in detail with a room full of
> > bodies, I suggest proposing a session for the Forum in Sydney. If some
> > of the contributors will be there, it would be a good opportunity for
> > you to get feedback.
>
> Yes, "Bare metal as a service: Ironic vs. Mogan vs. Nova" would make a
> great topic for discussion in Sydney, assuming Zhenguo is able to make
> the trip... Discussing the user need on one side, and how to best
> integrate with the existing pieces on the other side would really help
> starting this on the right foot.
>
> Zhenguo: if you plan to be present, could you suggest this topic for
> discussion at: http://forumtopics.openstack.org/
>
> Deadline is tomorrow :)
>
> --
> Thierry Carrez (ttx)
>
>



-- 
Best Regards,
Zhenguo Niu


Re: [openstack-dev] [all][infra] Zuul v3 migration update

2017-09-28 Thread Clark Boylan
On Wed, Sep 27, 2017, at 03:24 PM, Monty Taylor wrote:
> Hey everybody,
> 
> We're there. It's ready.
> 
> We've worked through all of the migration script issues and are happy 
> with the results. The cutover trigger is primed and ready to go.
> 
> But as it's 21:51 UTC / 16:52 US Central it's a short day to be 
> available to respond to the questions folks may have... so we're going 
> to postpone one more day.
> 
> Since it's all ready to go we'll be looking at flipping the switch first 
> thing in the morning. (basically as soon as the West Coast wakes up and 
> is ready to go)
> 
> The project-config repo should still be considered frozen except for 
> migration-related changes. Hopefully we'll be able to flip the final 
> switch early tomorrow.
> 
> If you haven't yet, please see [1] for information about the transition.
> 
> [1] https://docs.openstack.org/infra/manual/zuulv3.html
> 

It's done! Except for all the work to make jobs run properly. Early today
(PDT) we converted everything over to our auto-generated Zuul v3 config.
Since then we've been working to address problems in job configs.

These problems include:
- Missing inclusion of the requirements repo for constraints in some jobs
- Configuration of python35 unittest jobs in some cases
- Use of sudo checking not working properly
- Multinode jobs not having multinode nodesets

Known issues we will continue to work on:
- Multinode devstack and grenade jobs are not working quite right
- Releasenote jobs not working due to use of origin/ refs in git
- It looks like we may not have job branch exclusions in place for all cases
- The zuul-cloner shim may not work in all cases. We are tracking down
  and fixing the broken corner cases.

Keep in mind that with things in flux, there is a good chance that
changes enqueued to the gate will fail. It is a good idea to check
recent check queue results before approving changes.

I don't think we've found any deal breaker problems at this point. I am
sure there are many more than I have listed above. Please feel free to
ask us about any errors. For the adventurous, fixing problems is likely
a great way to get familiar with the new system. You'll want to start by
fixing errors in openstack-infra/openstack-zuul-jobs/playbooks/legacy.
Once that stabilizes the next step is writing native job configs within
your project tree. Documentation can be found at
https://docs.openstack.org/infra/manual/zuulv3.html. I expect we'll
spend the next few days ironing out the transition.
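
If you are starting on that in-tree step, a native config is a .zuul.yaml
file at the repo root, along these lines (a hedged, minimal illustration;
"myproject" and the playbook path are placeholders, not real jobs):

    - job:
        name: myproject-functional
        parent: base
        run: playbooks/functional/run.yaml

    - project:
        name: openstack/myproject
        check:
          jobs:
            - myproject-functional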

Thank you for your patience,
Clark



[openstack-dev] [api-wg][glance] call for comments on Glance spec for Queens

2017-09-28 Thread Brian Rosmaita
Hello API WG,

I've got a patch up for a proposal to fix OSSN-0075 by introducing a
new policy.  There are concerns that this will introduce an
interoperability problem in that an API call that works in one
OpenStack cloud may not work in other OpenStack clouds.  As author of
the spec, I think this is an OK trade-off to fix the security issue,
but not all members of the Glance community agree, so we're trying to
get some wider perspective.  We'd appreciate it if some API-WG members
could take a look and leave a comment:

https://review.openstack.org/#/c/468179/

If you could respond by Tuesday 3 October, that would give us time to
get this worked out before the spec freeze (6 October).

thanks,
brian



Re: [openstack-dev] LogMeIn Avi PF9 engineering touch point

2017-09-28 Thread Praveen Yalagandula
Siva,
Not all of the changes for #1 are in 17.1.8; there is a pull request that
has been waiting for your review for the last few days.
Note that the version EBSCO finally tested with made one neutron API
call per pool PATCH API call. The original made three neutron API
calls.

Changes in 17.1.8 only reduce it from 3 to 2 per PATCH API call. The
pull-request that is to go into 17.1.9 has the changes for reducing it to 1
neutron API call.

Cheers,
Praveen

On Thu, Sep 28, 2017 at 8:39 PM Siva kollipara  wrote:

> # avi-internal #
>
> #1 is part of 17.1.8.
>
> @Sambit and Praveen can comment on #2 and #3.
>
>
>
> On Thu, Sep 28, 2017 at 7:52 PM, Jason Price 
> wrote:
>
>> I believe Justin installed 17.1.8 today.  Siva, do we have improvements
>> in that version?
>>
>> Justin reported some initial performance issues (new PF9 with Contrail
>> 4.0.1), and I saw some slow API calls (~120s to add a vNIC), but he found
>> some configuration issues with some bad endpoints, etc.  Didn't get
>> feedback after that, but will check tomorrow.
>>
>> Thanks.
>>
>> Jason
>>
>> On Thu, Sep 28, 2017 at 7:50 PM, Sachin Manpathak <
>> smanpat...@platform9.com> wrote:
>>
>>> We changed
>>> 1. Sequence of network query
>>> 2. Increased timeouts
>>> In addition, pf9 requests to reduce keystone usage by caching tokens
>>>
>>> Thanks,
>>> -Sachin
>>>
>>> On Sep 28, 2017, 7:33 PM -0700, Siva kollipara ,
>>> wrote:
>>>
>>> Jeff,
>>>
>>> I am not aware of those changes being applied at LogMeIn, but please use
>>> Avi 17.1.8.
>>>
>>> - Siva
>>>
>>>
>>>
>>> On Thu, Sep 28, 2017 at 6:31 PM, Jeff Darrish 
>>> wrote:
>>>
 Hi team,

 Do we have a way to validate if the tweaks to the Avi controller
 leveraged at EBSCO have also been put in place at LogMeIn?
 The LogMeIn team is asking which Avi software version they should be
 running for Platform9 integration -- I would think this would be the
 patched releases provided to EBSCO--correct? Can we get validation on the
 specific version that would be best?

 Thank you,

 *Jeff Darrish*
 *Solutions Architect | Platform9 *
 jdarr...@platform9.com
 (404) 317-8344

 On Thu, Sep 21, 2017 at 2:25 PM, Praveen Yalagandula <
 yprav...@avinetworks.com> wrote:

> Folks,
> Here are the meeting notes from today's discussion. Pushkar, Sachin,
> please add if we missed anything.
>
> Attendees:
> Platform 9 (P9): Pushkar, Sachin
> Avi: Praveen, Siva
>
> Customer in question:
> LogMeIn: Contrail + Platform 9 + Avi
>
> - P9 is in the cloud and tunnels to the services on the hosts in the
> customer environment
> - Contrail controller is in the customer's environment
> - They are currently creating only one tunnel and now need to reuse
> that for two different services -- avi and contrail controller. And they
> have a forwarder agent that can dynamically switch between those two
> services. However, calls from Heat would fail if the tunnel is already in
> use by the neutron plugin. Their ask was to retry on connection errors
> multiple times with some sleep in between.
> - Avi created a new branch on avi-heat repo: 17.1-conn-backup that
> retries 30 times with 1 second sleep whenever avi API calls run into
> ConnectionError exceptions.
> - Pushkar updated the LogMeIn's controller and will ask customer to
> run some experiments
>
> Long term:
> 1) P9 plans to create multiple tunnels one for each service.
> 2) Avi would reduce the neutron/keystone calls as found in EBSCO
> tests, and that would help this customer case too. Avi's upcoming release
> 17.1.8 will have some optimizations.
>
> Cheers,
> praveen
>
>
> On Wed, Sep 20, 2017 at 6:55 PM Siva kollipara 
> wrote:
>
>> 10am sounds good Jeff.
>>
>> On Wed, Sep 20, 2017 at 4:44 PM, Jason Price 
>> wrote:
>>
>>> Thanks, guys.  I'm booked at 10 tomorrow, so won't be able to make
>>> this call, but let me know if you need anything prior.
>>>
>>> Thanks.
>>>
>>> Jason
>>>
>>> On Wed, Sep 20, 2017 at 4:36 PM, Jeff Darrish <
>>> jdarr...@platform9.com> wrote:
>>>
 Thanks so much Praveen.

 I have sent over a calendar invite tomorrow for 10am Pacific.
 Johnny Tseng, the Platform9 account manager will kick off the call.
 Unfortunately I will be out of the office Thursday & Friday, but our
 engineering leads Pushkar and Sachin are familiar with the integration 
 at
 LogMeIn and can describe our joint milestones to get this working well 
 for
 them.

 Thank you,

 *Jeff Darrish*
 *Systems Engineer | Platform9 *
 jdarr...@platform9.com
 (404) 317-8344

 On Wed, Sep 20, 2017 at 4:23 PM, Praveen Yalagandula <
>>>

Re: [openstack-dev] [infra][mogan] Need help for replacing the current master

2017-09-28 Thread Davanum Srinivas
Jeremy, Clark,

I tried several things; I'm not sure I have enough git-fu to pull this
off. For example:

[dims@dims-mac 00:03] ~/openstack/openstack/mogan ⟩ git push gerrit
HEAD:refs/for/master
Counting objects: 8104, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (2350/2350), done.
Writing objects: 100% (8104/8104), 1.19 MiB | 0 bytes/s, done.
Total 8104 (delta 2634), reused 8103 (delta 2634)
remote: Resolving deltas: 100% (2634/2634)
remote: Processing changes: refs: 1, done
remote:
remote: ERROR:  In commit de26dc69aa28f57512326227a65dc3f9110a7be1
remote: ERROR:  committer email address sleepsonthefl...@gmail.com
remote: ERROR:  does not match your user account.
remote: ERROR:
remote: ERROR:  The following addresses are currently registered:
remote: ERROR:dava...@gmail.com
remote: ERROR:d...@huawei.com
remote: ERROR:
remote: ERROR:  To register an email address, please visit:
remote: ERROR:  https://review.openstack.org/#/settings/contact
remote:
remote:
To ssh://review.openstack.org:29418/openstack/mogan.git
 ! [remote rejected]   HEAD -> refs/for/master (invalid committer)
error: failed to push some refs to
'ssh://dim...@review.openstack.org:29418/openstack/mogan.git'

Would it be simpler for you to do this for the Mogan team?

Thanks,
Dims

PS: I did get added to mogan-core to try this experiment.

On Thu, Sep 28, 2017 at 9:09 AM, Davanum Srinivas  wrote:
> Jeremy, Clark,
>
> Filed a change :)
> https://review.openstack.org/508151
>
> Thanks,
> Dims
>
> On Thu, Sep 28, 2017 at 8:55 AM, Jeremy Stanley  wrote:
>> On 2017-09-27 20:02:25 -0400 (-0400), Davanum Srinivas wrote:
>>> I'd like to avoid the ACL update which will make it different from
>>> other projects. Since we don't expect to do this again, can you please
>>> help do this?
>> [...]
>>
>> He (probably accidentally) left out the word "temporary." The ACL
>> only needs to allow merge commits to be pushed long enough for that
>> merge commit to get pushed for review, and then the ACL can be
>> reverted to its earlier state.
>> --
>> Jeremy Stanley
>>
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] LogMeIn Avi PF9 engineering touch point

2017-09-28 Thread Praveen Yalagandula
Please ignore this email. Wrong mailing list :)

Sorry!
Praveen

On Thu, Sep 28, 2017 at 9:16 PM Praveen Yalagandula <
yprav...@avinetworks.com> wrote:

> Siva,
> Not all of the changes for #1 are in 17.1.8; there is a pull request that
> has been waiting for your review for the last few days.
> Note that the version EBSCO finally tested with made one neutron API
> call per pool PATCH API call. The original made three neutron API
> calls.
>
> Changes in 17.1.8 only reduce it from 3 to 2 per PATCH API call. The
> pull-request that is to go into 17.1.9 has the changes for reducing it to 1
> neutron API call.
>
> Cheers,
> Praveen
>
> On Thu, Sep 28, 2017 at 8:39 PM Siva kollipara 
> wrote:
>
>> # avi-internal #
>>
>> #1 is part of 17.1.8.
>>
>> @Sambit and Praveen can comment on #2 and #3.
>>
>>
>>
>> On Thu, Sep 28, 2017 at 7:52 PM, Jason Price 
>> wrote:
>>
>>> I believe Justin installed 17.1.8 today.  Siva, do we have improvements
>>> in that version?
>>>
>>> Justin reported some initial performance issues (new PF9 with Contrail
>>> 4.0.1), and I saw some slow API calls (~120s to add a vNIC), but he found
>>> some configuration issues with some bad endpoints, etc.  Didn't get
>>> feedback after that, but will check tomorrow.
>>>
>>> Thanks.
>>>
>>> Jason
>>>
>>> On Thu, Sep 28, 2017 at 7:50 PM, Sachin Manpathak <
>>> smanpat...@platform9.com> wrote:
>>>
 We changed
 1. Sequence of network query
 2. Increased timeouts
 In addition, pf9 requests to reduce keystone usage by caching tokens

 Thanks,
 -Sachin

 On Sep 28, 2017, 7:33 PM -0700, Siva kollipara ,
 wrote:

 Jeff,

 I am not aware of those changes being applied at LogMeIn, but please
 use Avi 17.1.8.

 - Siva



 On Thu, Sep 28, 2017 at 6:31 PM, Jeff Darrish 
 wrote:

> Hi team,
>
> Do we have a way to validate if the tweaks to the Avi controller
> leveraged at EBSCO have also been put in place at LogMeIn?
> The LogMeIn team is asking which Avi software version they should be
> running for Platform9 integration -- I would think this would be the
> patched releases provided to EBSCO--correct? Can we get validation on the
> specific version that would be best?
>
> Thank you,
>
> *Jeff Darrish*
> *Solutions Architect | Platform9 *
> jdarr...@platform9.com
> (404) 317-8344
>
> On Thu, Sep 21, 2017 at 2:25 PM, Praveen Yalagandula <
> yprav...@avinetworks.com> wrote:
>
>> Folks,
>> Here are the meeting notes from today's discussion. Pushkar, Sachin,
>> please add if we missed anything.
>>
>> Attendees:
>> Platform 9 (P9): Pushkar, Sachin
>> Avi: Praveen, Siva
>>
>> Customer in question:
>> LogMeIn: Contrail + Platform 9 + Avi
>>
>> - P9 is in the cloud and tunnels to the services on the hosts in the
>> customer environment
>> - Contrail controller is in the customer's environment
>> - They are currently creating only one tunnel and now need to reuse
>> that for two different services -- avi and contrail controller. And they
>> have a forwarder agent that can dynamically switch between those two
>> services. However, calls from Heat would fail if the tunnel is already in
>> use by the neutron plugin. Their ask was to retry on connection errors
>> multiple times with some sleep in between.
>> - Avi created a new branch on avi-heat repo: 17.1-conn-backup that
>> retries 30 times with 1 second sleep whenever avi API calls run into
>> ConnectionError exceptions.
>> - Pushkar updated the LogMeIn's controller and will ask customer to
>> run some experiments
>>
>> Long term:
>> 1) P9 plans to create multiple tunnels one for each service.
>> 2) Avi would reduce the neutron/keystone calls as found in EBSCO
>> tests, and that would help this customer case too. Avi's upcoming release
>> 17.1.8 will have some optimizations.
>>
>> Cheers,
>> praveen
>>
>>
>> On Wed, Sep 20, 2017 at 6:55 PM Siva kollipara 
>> wrote:
>>
>>> 10am sounds good Jeff.
>>>
>>> On Wed, Sep 20, 2017 at 4:44 PM, Jason Price 
>>> wrote:
>>>
 Thanks, guys.  I'm booked at 10 tomorrow, so won't be able to make
 this call, but let me know if you need anything prior.

 Thanks.

 Jason

 On Wed, Sep 20, 2017 at 4:36 PM, Jeff Darrish <
 jdarr...@platform9.com> wrote:

> Thanks so much Praveen.
>
> I have sent over a calendar invite tomorrow for 10am Pacific.
> Johnny Tseng, the Platform9 account manager will kick off the call.
> Unfortunately I will be out of the office Thursday & Friday, but our
> engineering leads Pushkar and Sachin are familiar with the integration at
> LogMeIn and can describe our joint milestones to get this working well for
> them.

Re: [openstack-dev] [neutron][fwaas] Proposal to change the time for Firewall-as-a-Service Team Meeting

2017-09-28 Thread reedip banerjee
Hi All,
Thanks for your votes.
As per the majority vote, https://review.openstack.org/#/c/507172/ was
created and merged successfully.
The FWaaS meeting will now be held on Thursdays at 1400 UTC in the
#openstack-fwaas channel, starting 5th October.

On 22-Sep-2017 5:50 AM, "Furukawa, Yushiro" 
wrote:

> Hi,
>
>
>
> I’m sorry I’m late.  I just voted.
>
>
>
> Thanks,
>
>
>
> 
>
> Yushiro Furukawa
>
>
>
> *From:* reedip banerjee [mailto:reedi...@gmail.com]
> *Sent:* Tuesday, September 19, 2017 11:54 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* [openstack-dev] [neutron][fwaas] Proposal to change the time
> for Firewall-as-a-Service Team Meeting
>
>
>
> Dear All,
>
> Due to clashes of the Firewall-as-a-Service team meetup with the bi-weekly
> Neutron and Common-Classifier meeting, it was suggested in today's meetup
> to change the timing.
>
>
>
> https://doodle.com/poll/c5rgth6y54bpvncu is the link to vote for the day
> when the meeting can be held.
>
>
>
> --
>
> Thanks and Regards,
> Reedip Banerjee
>
> IRC: reedip
>
>
>
>
>
>


Re: [openstack-dev] [glance] multi threads with swift backend

2017-09-28 Thread Arnaud MORIN
My objective is to be able to download and upload images between
glance/computes and swift in a faster way.
I was thinking that if glance could parallelize the connections to swift
for a single image (with chunks), it would be faster.
Am I wrong?
Is there any other way I am not thinking of?

Arnaud.
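
For what it's worth, a minimal sketch of the idea (the helper names are
hypothetical, not glance_store or python-swiftclient code):

    # Fetch an image's swift segments in parallel and reassemble in order.
    from concurrent.futures import ThreadPoolExecutor

    def fetch_chunk(url):
        # In a real implementation: a swift GET of one segment.
        raise NotImplementedError

    def fetch_image(chunk_urls, max_workers=8):
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            # map() yields results in input order, so the segments
            # reassemble into the correct image byte stream.
            return b"".join(pool.map(fetch_chunk, chunk_urls))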

On 28 Sept 2017 at 6:30 PM, "Erno Kuvaja"  wrote:

> On Thu, Sep 28, 2017 at 4:27 PM, Arnaud MORIN 
> wrote:
> > Hey all,
> > So I finally tested your pull requests, it does not work.
> > 1 - For uploads, swiftclient is not using threads when source is given by
> > glance:
> > https://github.com/openstack/python-swiftclient/blob/
> master/swiftclient/service.py#L1847
> >
> > 2 - For downloads, when requesting the file from swift, it is recomposing
> > the chunks into one big file.
> >
> >
> > So the patch is not so easy.
> >
> > IMHO, for uploads, we should try to upload chunks using multiple threads.
> > Sounds doable.
> > For downloads, I need to dig a little bit more into the glance_store code
> > to be sure, but maybe we can try to download the chunks separately and
> > recompose them locally before sending the image to the requester
> > (compute / cli).
> >
> > Cheers,
> >
>
> So I'm still trying to understand (without success) why we want to
> do this at all?
>
> - jokke
>
> >
> > On 6 September 2017 at 21:19, Arnaud MORIN 
> wrote:
> >>
> >> Hey,
> >> I would love to see that reviving!
> >>
> >> Cheers,
> >> Arnaud
> >>
> >> On 6 September 2017 at 21:00, Mikhail Fedosin 
> wrote:
> >>>
> >>> Hey! As you said it's not possible now.
> >>>
> >>> I implemented the support several years ago, bit unfortunately no one
> >>> wanted to review it: https://review.openstack.org/#/c/218993
> >>> If you want, we can revive it.
> >>>
> >>> Best,
> >>> Mike
> >>>
> >>> On Wed, Sep 6, 2017 at 9:05 PM, Clay Gerrard 
> >>> wrote:
> 
>  I'm pretty sure that would only be possible with a code change in
> glance
>  to move the consumption of the swiftclient abstraction up a layer
> from the
>  client/connection objects to swiftclient's service objects [1].  I'm
> not
>  sure if that'd be something that would make a lot of sense to the
> Image
>  Service team.
> 
>  -Clay
> 
>  1. https://docs.openstack.org/python-swiftclient/latest/
> service-api.html
> 
>  On Wed, Sep 6, 2017 at 9:02 AM, Arnaud MORIN 
>  wrote:
> >
> > Hi all,
> >
> > Is there any chance that glance can use the multiprocessing from
> > swiftclient library (equivalent of xxx-threads options from cli)?
> > If yes, how to enable it?
> > I did not find anything useful in the glance configuration options.
> > And looking at the glance_store code makes me think that it's not
> > possible...
> > Am I wrong?
> >
> > Regards,
> > Arnaud
> >
> >


[openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-28 Thread Ian Wienand

Hi,

There are a few issues with devstack and the new zuulv3 environment:

LIBS_FROM_GIT is broken because the new repos do not have a remote
set up, meaning "pip freeze" doesn't give us useful output.  [1] just
disables the test as a quick fix for this; [2] is a possible real fix,
but it should be tried a bit more carefully in case there are corners I
missed.  This will affect other projects.

However, before we can get this in, we need to fix the gate.  The
"updown" tests have missed a couple of requirement projects due to
them setting flags that were not detected during migration.  [3] is a
fix for that and seems to work.

For some reason, the legacy-tempest-dsvm-nnet job is running against
master, and failing as nova-net is deprecated there. I'm clutching at
straws to understand this one, as it seems like the branch filters are
set up correctly; [4] is one guess?

I'm not aware of issues other than these at this time

-i

[1] https://review.openstack.org/508344
[2] https://review.openstack.org/508366
[3] https://review.openstack.org/508396
[4] https://review.openstack.org/508405



Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-28 Thread Ian Wienand

On 09/29/2017 03:37 PM, Ian Wienand wrote:

I'm not aware of issues other than these at this time


Actually, that is not true.  legacy-grenade-dsvm-neutron-multinode is
also failing for unknown reasons.  Any debugging would be helpful,
thanks.

-i



Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-09-28 Thread Thomas Bechtold

Hi,

On 28.09.2017 16:50, Jesse Pretorius wrote:
[...]
Do any packagers or deployment projects have issues with this 
implementation? If there are any issues, what’re your suggestions to 
resolve them?


This will still install the files into usr/etc:

$ python setup.py install --skip-build --root /tmp/sahara-install > /dev/null

$ ls /tmp/sahara-install/usr/
bin  etc  lib

It's not nice, but packagers can work around that.

Best,

Tom
