Re: [openstack-dev] [nova][neutron] New BP for live migration with direct pci passthru

2016-02-18 Thread Xie, Xianshan
Hi, Ian,
Thanks a lot for your reply.

>In general, while you've applied this to networking (and it's not the first 
>time I've seen this proposal), the same technique will work with any device - 
>PF or VF, networking or other:
>- notify the VM via an accepted channel that a device is going to be 
>temporarily removed
>- remove the device
>- migrate the VM
>- notify the VM that the device is going to be returned
>- reattach the device
>Note that, in the above, I've not used said 'PF', 'VF', 'NIC' or 'qemu'.
Yes, I absolutely agree with you, and sorry for my vague wording.
At the moment we are only attempting to support live migration of instances
directly connected to a passthru VF.
All of the device types you mention should eventually be covered, but that
needs a step-by-step plan, I think.
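The generic detach/migrate/reattach flow quoted above could be sketched roughly as follows. All helper names here are illustrative stand-ins, not real Nova or libvirt APIs; they only record the order of operations:

```python
# Minimal sketch of the generic flow: notify, detach, migrate,
# notify, reattach. Every helper is a placeholder.
steps = []

def notify_guest(vm, event):
    steps.append(("notify", event))

def detach_device(vm, device):
    steps.append(("detach", device))

def migrate(vm, dest_host):
    steps.append(("migrate", dest_host))

def attach_device(vm, device):
    steps.append(("attach", device))

def live_migrate_with_passthru(vm, device, dest_host):
    notify_guest(vm, "device-removal")   # warn the guest first
    detach_device(vm, device)            # hot-unplug the passthru device
    migrate(vm, dest_host)               # ordinary live migration
    notify_guest(vm, "device-return")    # a (reset) device is coming back
    attach_device(vm, device)            # attach an equivalent device

live_migrate_with_passthru("vm1", "vf0", "host2")
```

Note that nothing in the sketch is specific to a PF, VF, NIC, or qemu, which is exactly Ian's point.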


>You would need to document what assumptions the guest is going to make (the 
>reason I mention this is I think it's safe to assume the device has been 
>recently reset here, but for a network device you might want to consider 
>whether the device will have the same MAC address or number of tx and rx 
>buffers, for instance).
Exactly right. There are a lot of things to consider, but for VFs many of
them are easier to handle or avoid, for instance the issue of keeping the
same MAC address.
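For example, the VF MAC can be pinned from the host via the PF so the guest sees the same address after reattach on the destination. The interface name and address below are purely illustrative:

```shell
# Pin VF 0's MAC from the host (requires root and SR-IOV hardware):
ip link set dev enp3s0f0 vf 0 mac 52:54:00:ab:cd:ef
# Verify; the PF listing shows a "vf 0" line with the assigned MAC:
ip link show enp3s0f0
```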

In addition to what you mentioned, I think the most important thing we need
to discuss for VFs is the NIC bonding strategy: how, when, and by whom the
bond is created. There are many ways to run afoul of something inside the VM
(NetworkManager, for instance) that would presumably cause the bonding to fail.
For instance:
  - proactively bond the NICs when the VM is launched, via an embedded script
based on DIB?
  - or have the VM administrators bond the NICs manually before the
live-migration command executes?
  - or have an OpenStack component notify the VM to perform the bonding while
the live-migration command executes?
  - ...
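Whichever option is chosen, the guest-side bond itself would look something like the following active-backup setup between the passthru VF and a paravirtual fallback NIC. Interface names are illustrative, and these commands need root inside the guest:

```shell
# Create an active-backup bond of the paravirtual NIC (eth0) and the VF (eth1).
ip link add bond0 type bond mode active-backup miimon 100
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
# Prefer the VF while it is present:
echo eth1 > /sys/class/net/bond0/bonding/active_slave
# When the VF is hot-removed for migration, traffic fails over to eth0.
```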


Best regards,
Xiexs


From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: Wednesday, February 17, 2016 3:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] New BP for live migration with 
direct pci passthru

In general, while you've applied this to networking (and it's not the first 
time I've seen this proposal), the same technique will work with any device - 
PF or VF, networking or other:
- notify the VM via an accepted channel that a device is going to be 
temporarily removed
- remove the device
- migrate the VM
- notify the VM that the device is going to be returned
- reattach the device
Note that, in the above, I've not used said 'PF', 'VF', 'NIC' or 'qemu'.

You would need to document what assumptions the guest is going to make (the 
reason I mention this is I think it's safe to assume the device has been 
recently reset here, but for a network device you might want to consider 
whether the device will have the same MAC address or number of tx and rx 
buffers, for instance).

The method of notification I've deliberately skipped here; you have an answer 
for qemu, qemu is not the only hypervisor in the world so this will clearly be 
variable.  A metadata server mechanism is another possibility.

Half of what you've described is one model of how the VM might choose to deal 
with that (and a suggestion that's come up before, in fact) - that's a model we 
would absolutely want Openstack to support (and I think the above is sufficient 
to support it), but we can't easily mandate how VMs behave, so from the 
Openstack perspective it's more a recommendation than anything we can code up.


On 15 February 2016 at 23:25, Xie, Xianshan <xi...@cn.fujitsu.com> wrote:
Hi, Fawad,


> Can you please share the link?
https://blueprints.launchpad.net/nova/+spec/direct-pci-passthrough-live-migration

Thanks in advance.


Best regards,
xiexs

From: Fawad Khaliq [mailto:fa...@plumgrid.com]
Sent: Tuesday, February 16, 2016 1:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] New BP for live migration with 
direct pci passthru

On Mon, Feb 1, 2016 at 3:25 PM, Xie, Xianshan <xi...@cn.fujitsu.com> wrote:
Hi, all,
  I have registered a new BP about the live migration with a direct pci 
passthru device.
  Could you please help me to review it? Thanks in advance.

Can you please share the link?


The following is the details:
--
SR-IOV has been supported for a long while. In the community's view, pci
passthru with macvtap can probably be live migrated, but direct pci passthru
seems harder to migrate because the passthru VF is totally controlled by
the VM, so some internal state may be unknown to the hypervisor.

But we think the direct pci passthru model can also be live migrated
[openstack-dev] [Octavia] [Tempest] Tempest tests using tempest-plugin

2016-02-18 Thread Madhusudhan Kandadai
Hi,

We are trying to implement tempest tests for Octavia using tempest-plugin.
I am wondering whether we can import *tempest* common files and use them as
a base to support the Octavia tempest tests, rather than copying everything
into the Octavia tree. I am in favor of importing files directly from tempest
to follow the tempest structure. If importing from tempest directly is not
permissible, do we need to propose any common files in tempest_lib, so we
can import them from tempest_lib instead? I wanted to check with others for
suggestions.
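For what it's worth, the usual alternative to copying tempest files is a plugin class hooked up through the standard `tempest.test_plugins` entry point. The sketch below assumes that mechanism; the Octavia paths and class name are only illustrative, and the fallback stub exists solely so the sketch stays importable without tempest installed:

```python
import os

try:
    from tempest.test_discover import plugins
except ImportError:
    # Stand-in so this sketch is importable without tempest installed.
    class plugins(object):
        class TempestPlugin(object):
            pass


class OctaviaTempestPlugin(plugins.TempestPlugin):
    """Registers in-tree Octavia tests with the tempest runner."""

    def load_tests(self):
        # Hypothetical in-tree location of the Octavia tempest tests.
        base_path = os.getcwd()
        test_dir = "octavia/tests/tempest"
        return os.path.join(base_path, test_dir), base_path

    def register_opts(self, conf):
        # Octavia-specific config options would be registered here.
        pass

    def get_opt_lists(self):
        return []
```

The plugin would then be advertised in setup.cfg under `tempest.test_plugins`, e.g. `octavia_tests = octavia.tests.tempest.plugin:OctaviaTempestPlugin` (path illustrative).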

Thanks,
Madhu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-18 Thread Cody A.W. Somerville
On Sat, Feb 6, 2016 at 2:14 AM, Cody A.W. Somerville <
cody.somervi...@gmail.com> wrote:
 

>
> I'd like to suggest we tightly scope this discussion and subsequent
> decision to Poppy exclusively. The reason for this is two fold. The first
> is so that a timely resolution and answer can be provided to the Poppy
> team. The second is that I think once we've answered the specific
> questions and concerns about Poppy (some of which I believe are novel in
> nature) we'll be in a better position to then inductively reason about the
> problem and derive the more generalized rule or principle that I think
> Thierry was hoping to establish.
>
> In that vein, I'll try to summarize the questions or concerns I've seen
> raised here and in the TC meeting[3] - apologies if I've missed any:
>
> Poppy is an OpenStack project designed to make CDN services easier to
> consume with a generic vendor-neutral API[4]. The concern is that it only
> has support for commercial CDN service providers. It does not have support
> for a CDN service that is Open Source.
>
>  1. Is Poppy "open core"[5], or does it violate OpenStack's 'Four Opens'[6]?
>

I do not believe that Poppy meets the definition of "Open Core". By most
accounts, "Open Core" is a business or licensing model where proprietary
editions of a product are built on top of a core open source technology or
project, and/or the project uses copyright assignment in order to be able to
dual-license under non-open-source licenses. Neither seems applicable here.


>  2. Do we have a requirement that the primary component/backend (or at
> least one of the components/backends) driven/abstracted/orchestrated by a
> project (directly or via driver/plugin/et al) be considered Open Source? If
> yes, is there room for an exception when one simply doesn't exist? Is
> there special consideration for "services" (ie. think GPL vs. AGPL)?
>

There is clearly a preference, if not a requirement when such an opportunity
exists, but no one has expressed that it is a hard requirement otherwise.


>  3. Does a project that only enables the use of commercial
> services/projects belong in OpenStack?
>

I think providing a standard abstraction for provisioning and managing
content distribution furthers our goal of being the ubiquitous open cloud
platform. I predict that content distribution will become an important and
very standard capability desired in large cloud deployments, particularly
in enterprise environments that span the globe, and so we'll likely see
such a service developed and probably be powered by swift. Due to the
nature of CDN, augmenting your content distribution capabilities with a
third-party CDN provider will be common and natural.


>  4. Does Poppy violate existing requirements around testing/CI[7][8]?
>

I do not believe that it does. Using mocks and/or unit tests would be
sufficient to meet "test-driven gate" requirement.
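As a purely illustrative sketch of that point: the commercial provider driver can be mocked out, so a gate job exercises the project's own logic without ever contacting a CDN. The driver interface below is a hypothetical stand-in, not Poppy's actual API:

```python
# Gate-style test with no commercial backend: the CDN driver is a mock.
from unittest import mock


def create_cdn_service(driver, name):
    """Provision a CDN service through whatever backend driver is wired in."""
    return driver.create_service(name)


# In the gate, the commercial backend is replaced by a mock:
fake_driver = mock.Mock()
fake_driver.create_service.return_value = {"name": "assets", "status": "deployed"}

result = create_cdn_service(fake_driver, "assets")
assert result["status"] == "deployed"
fake_driver.create_service.assert_called_once_with("assets")
```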


>  5. Does dependency on Cassandra make Poppy non-free?
>

TBD.


>  6. Does a project that only enables the use of non-OpenStack
> services/projects belong in OpenStack?
>

The big tent model seems to explicitly encourage the idea that projects in
the OpenStack ecosystem are welcome to consider themselves OpenStack
projects. Poppy itself isn't just a consumer but is intended to be a
first-class cloud service.

Some additional facts that have been pointed out include:
>
>  - It currently only supports Akamai - which makes sense to be the first
> provider, Akamai is the CDN provider for Rackspace[9] and the project is
> mostly developed by Rackspace[10] - but implementation is underway for
> Fastly, Amazon CloudFront, and MaxCDN[11].
>  - It currently only supports Rackspace DNS but support for Designate is
> planned[11] (only a stub exists in tree currently).
>

I'm surprised that these two points - particularly the latter, the fact that
Poppy currently only supports Rackspace DNS even though Designate exists and
could be integrated with - have not been raised by anyone else.

Cody
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Help regarding openstack error to set method value and it's plugin in keystone.conf

2016-02-18 Thread Vashishtha, Vibhu
Hi,

I have to configure OpenStack as a service provider for SAML authentication,
but I am not able to set the methods value in keystone.conf. When I add saml2
and mapped to the methods value for saml2 authentication, where
saml2 = keystone.auth.plugins.saml2.Saml2
mapped = keystone.auth.plugins.mapped.Mapped

it gives me the error:

"Failed to load saml2 driver
Attempted to authenticate with an unsupported method"

Also, in the case of mapped, what should I set in place of these values?
Please guide me, as I am quite stuck at this point.


Regards,
Vibhu Vashishtha


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] - DVR L3 data plane performance results and scenarios

2016-02-18 Thread Gal Sagie
Hi Swami,

Thanks for the reply. Are there any detailed links describing this that
we can look at?

(Of course, having results without the full setup details (hardware, NIC, CPU,
threads for OVS and so on) and without the full scenario details is a bit hard
to interpret; regardless, I hope it will at least give us an estimate of where
we stand.)

Thanks
Gal.

On Thu, Feb 18, 2016 at 9:34 PM, Vasudevan, Swaminathan (PNB Roseville) <
swaminathan.vasude...@hpe.com> wrote:

> Hi Gal Sagie,
>
> Yes there was some performance results on DVR that we shared with the
> community during the Liberty summit in Vancouver.
>
>
>
> Also I think there was a performance analysis that was done by Oleg
> Bondarev on DVR during the Paris summit.
>
>
>
> We have done lot more changes to the control plane to improve the scale
> and performance in DVR during the Mitaka cycle and will be sharing some
> performance results in the upcoming summit.
>
>
>
> Definitely we can align on our approach and have all those results
> captured in the upstream for the reference.
>
>
>
> Please let me know if you need any other information.
>
>
>
> Thanks
>
> Swami
>
>
>
> *From:* Gal Sagie [mailto:gal.sa...@gmail.com]
> *Sent:* Thursday, February 18, 2016 6:06 AM
> *To:* OpenStack Development Mailing List (not for usage questions); Eran
> Gampel; Shlomo Narkolayev; Yuli Stremovsky
> *Subject:* [openstack-dev] [Neutron] - DVR L3 data plane performance
> results and scenarios
>
>
>
> Hello All,
>
>
>
> We have started to test Dragonflow [1] data plane L3 performance and was
> wondering
>
> if there is any results and scenarios published for the current Neutron DVR
>
> that we can compare and learn the scenarios to test.
>
>
>
> We mostly want to validate and understand if our results are accurate and
> also join the
>
> community in defining base standards and scenarios to test any solution
> out there.
>
>
>
> For that we also plan to join and contribute to openstack-performance [2]
> efforts which to me
>
> are really important.
>
>
>
> Would love any results/information you can share, also interested in
> control plane
>
> testing and API stress tests (either using Rally or not)
>
>
>
> Thanks
>
> Gal.
>
>
>
> [1]
> http://docs.openstack.org/developer/dragonflow/distributed_dragonflow.html
>
> [2] https://github.com/openstack/performance-docs
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-18 Thread Cody A.W. Somerville
On Wed, Feb 17, 2016 at 1:20 PM, Jay Pipes  wrote:

> On 02/17/2016 09:30 AM, Doug Hellmann wrote:
>
>> Excerpts from Mike Perez's message of 2016-02-17 03:21:51 -0800:
>>
>>> On 02/16/2016 11:30 AM, Doug Hellmann wrote:
>>>
 So I think the project team is doing everything we've asked.  We
 changed our policies around new projects to emphasize the social
 aspects of projects, and community interactions. Telling a bunch
 of folks that they "are not OpenStack" even though they follow those
 policies is rather distressing.  I think we should be looking for
 ways to say "yes" to new projects, rather than "no."

>>>
>>> My disagreements with accepting Poppy has been around testing, so let me
>>> reiterate what I've already said in this thread.
>>>
>>> The governance currently states that under Open Development "The project
>>> has core reviewers and adopts a test-driven gate in the OpenStack
>>> infrastructure for changes" [1].
>>>
>>> If we don't have a solution like OpenCDN, Poppy has to adopt a reference
>>> implementation that is a commercial entity, and infra has to also be
>>> dependent on it. I get Infra is already dependent on public cloud
>>> donations, but if we start opening the door to allow projects to bring
>>> in those commercial dependencies, that's not good.
>>>
>>
>> Only Poppy's test suite would rely on that, though, right? And other
>> projects can choose whether to co-gate with Poppy or not. So I don't see
>> how this limitation has an effect on anyone other than the Poppy team.
>>
>
> But what would really be tested in Poppy without any commercial CDN
> vendor? Nothing functional, right? I believe the fact that Poppy cannot be
> functionally tested in the OpenStack CI gate basically disqualifies it from
> being "in OpenStack".
>

There is no implicit (or explicit) requirement for the tests to be a full
integration/end-to-end test. Mocks and/or unit tests would be sufficient to
satisfy "test-driven gate".
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-18 Thread Chris Friesen

On 02/17/2016 06:59 AM, Chris Dent wrote:


The advantage of a centralized datastore for that information is
that it provides administrative control (e.g. reserving resources for
other needs) and visibility. That level of command and control seems
to be something people really want (unfortunately).


I don't know if it necessarily requires a centralized datastore, but there is 
definitely interest in some sort of "reserving" of resources.


For instance, an orchestrator may want to do a two-stage setup where it reserves 
all the resources before actually trying to bring everything up.
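That two-stage "reserve everything, then commit" pattern can be sketched as follows. Nothing here is a real Nova or scheduler API; it only illustrates claiming all resources up front and rolling back if any claim fails:

```python
# Toy two-phase reservation: grant all claims or release everything.
class ResourcePool:
    def __init__(self, vcpus):
        self.free = vcpus
        self.reserved = {}

    def reserve(self, claim_id, vcpus):
        if vcpus > self.free:
            return False          # reservation denied, nothing changes
        self.free -= vcpus
        self.reserved[claim_id] = vcpus
        return True

    def release(self, claim_id):
        self.free += self.reserved.pop(claim_id)


pool = ResourcePool(vcpus=8)
claims = [("web", 4), ("db", 6)]   # second claim cannot fit

granted = []
for claim_id, size in claims:
    if pool.reserve(claim_id, size):
        granted.append(claim_id)
    else:
        # Stage 1 failed partway: roll back every earlier claim.
        for cid in granted:
            pool.release(cid)
        granted = []
        break
# Only if granted is non-empty would the orchestrator proceed to boot.
```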


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up, Doc? 19 Feb 2016

2016-02-18 Thread Lana Brindley
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi everyone,

This week I've been focusing on general housekeeping duties as we prepare to 
rush headlong into the release. I've cleaned up the blueprints list, and also 
been burning through the oldest bugs in our queue. I'm very pleased to report 
that the overall number of bugs has reduced dramatically in the past month or 
two, and the number of untriaged bugs is holding steady at a much lower rate 
than previously. I think this probably has a lot to do with the DocImpact 
changes we made at the beginning of the year, so it's very gratifying to see 
some solid improvement starting to filter through. In other news, thanks to all 
those people who helped us with locating pre-release packages for the Install 
Guide. Please feel free to begin testing! The more people we have testing, the 
better and more accurate our docs will be, which helps everyone.

== Progress towards Mitaka ==

47 days to go!

417 bugs closed so far for this release. There is a global bug smash event 
coming up March 7-9 to try and hit as many Mitaka bugs as possible. You can 
join an in-person group near you, or participate remotely. There are still 
plenty of docs bugs that can be attended to. Details here: 
https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Mitaka

Docs Testing
* Volunteers required!
* https://wiki.openstack.org/wiki/Documentation/MitakaDocTesting

RST Conversions
* All planned RST conversions are now complete! 

Reorganisations
* Arch Guide: really needs a last minute push to get this complete before 
Mitaka. If you can help out, it would be greatly appreciated!
** 
https://blueprints.launchpad.net/openstack-manuals/+spec/archguide-mitaka-reorg
** Contact the Ops Guide Speciality team: 
https://wiki.openstack.org/wiki/Documentation/OpsGuide
* User Guides
** 
https://blueprints.launchpad.net/openstack-manuals/+spec/user-guides-reorganised
** Contact the User Guide Speciality team: 
https://wiki.openstack.org/wiki/User_Guides

DocImpact
* Is now complete

== The Road to Austin ==

* ATC passes are going out now, so keep an eye on your inbox for your discount 
code if you haven't received yours yet.
* Talk voting is now closed
* You should be starting to think about booking travel and accommodation soon! 
If you need a visa to travel to the United States, there's more info here: 
https://www.openstack.org/summit/austin-2016/austin-and-travel/#visa

== Speciality Teams ==

'''HA Guide - Bogdan Dobrelya'''
No update this week.

'''Installation Guide - Matt Kassawara'''
Testing Mitaka milestone 3 using RDO packages and submitting patches for 
review. Lead of the install guide changed from Christian Berendt to Matt 
Kassawara.

'''Networking Guide - Edgar Magana'''
No update this week.

'''Security Guide - Nathaniel Dillon'''
No update this week.

'''User Guides - Joseph Robinson'''
The new User Guide team meetings are a success. We now have more contributors, 
and are discussing doc editing, new content, and Information Architecture of 
the User Guides.

'''Ops and Arch Guides - Shilla Saebi'''
Architecture guide reorganization is underway. We have a drafts repo in 
openstack-manuals, feel free to ping Shilla or Darren Chan if you are 
interested in helping out. Considering doing a swarm or work session at the 
summit in Austin for the Arch guide. Operations guide RST migration pending - 
conversations still happening to see which route we'll take. Still deciding if 
the ops guide should have another revision or edition, see ML e-mails. Call for 
volunteers email went out to the ops and docs ML from Devon Boatwright. We are 
looking for help! There is a global OpenStack bug smash scheduled for the 
Mitaka release in March. Details can be found here: 
https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Mitaka We're looking for 
people to specifically help with the Architecture Guide, which is currently 
going through a reorganization. Here is the link for more details: 
https://wiki.openstack.org/wiki/Architecture_Design_Guide_work_items If you are 
interested in helping out, more details can be found about our team and our 
meetings here: 
https://wiki.openstack.org/wiki/Documentation/OpsGuide

'''API Docs - Anne Gentle'''
http://lists.openstack.org/pipermail/openstack-docs/2016-February/008286.html 
Swagger files now available at http://developer.openstack.org/draft/swagger/

'''Config Ref - Gauvain Pocentek'''
No update this week.

'''Training labs - Pranav Salunke, Roger Luethi'''
Fixing various things in training-labs cluster for VirtualBox and KVM backends. 
Improving workflow, configuration and making the library scripts nicer. Adding 
training-labs landing page (WIP). Summit related work. Some minor improvements.

'''Training Guides - Matjaz Pancur'''
No update this week.

'''Hypervisor Tuning Guide - Joe Topjian'''
No update this week.

'''UX/UI Docs Guidelines - Linette Williams'''
Enhancement to UI content guidelines underway. Expect a new review shortly t

Re: [openstack-dev] [glance]one more use case for Image Import Refactor from OPNFV

2016-02-18 Thread joehuang
Hi, Jay,

By the exact definition, it's not a fully "independent" concept.

The Keystone services will be distributed into each data center, so if one data 
center fails, identity management (especially token validation) can still work 
in the other data centers. That's one case studied in the OPNFV multisite project.

There is a difference between "an end user is able to import an image from 
another Glance in another OpenStack cloud while sharing the same identity 
management (Keystone)" and the other use cases: the image import needs to reuse 
the token in the source Glance, while the others don't.

Best Regards
Chaoyi Huang ( Joe Huang )


-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Friday, February 19, 2016 10:43 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [glance]one more use case for Image Import 
Refactor from OPNFV

On 02/18/2016 08:34 PM, joehuang wrote:
> Hello,
>
> Glad to know that the "Image Import Refactor" is the essential BP in 
> Mitaka. One more use case from OPNFV as following:
>
> In OPNFV, one deployment scenario is each data center will be deployed 
> with independent OpenStack instance, but shared identity management.

That's not independent OpenStack instances.

> That means there will be one Glance with its backend in each datacenter.
> This is to make sure each datacenter can work standalone as much as 
> possible, even if the others crash.

If the identity management is shared, it's not standalone.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2016-02-18 Thread Armando M.
On 18 February 2016 at 08:41, Sean M. Collins  wrote:

> This week's update:
>
> Armando was kind enough to take a look[1], since he's got a fresh
> perspective. I think I've been suffering from Target Fixation[1]
> where I failed to notice a couple other failures in the logs.
>

It's been fun, and I am glad I was able to help. Once I validated the root
cause of the metadata failure [1], I got a run [2] and a clean pass in [3] :)

There are still a few things to iron out, i.e. choosing metadata over
config-drive, testing both in the gate, etc. But that's for another day.

Cheers,
Armando

[1] https://bugs.launchpad.net/nova/+bug/1545101/comments/4
[2]
http://logs.openstack.org/00/281600/6/experimental/gate-grenade-dsvm-neutron-multinode/40e16c8/
[3]
http://logs.openstack.org/00/281600/6/experimental/gate-grenade-dsvm-neutron-multinode/40e16c8/logs/testr_results.html.gz



>
> For example - during the SSH test into the instances, we are able to get
> a full SSH handshake and offer up the SSH key, however authentication
> fails[3], apparently due to the fact that the instance is not successful
> in contacting the metadata service and getting the SSH public key[4].
>
> So, I think the next bit of work is to track down why the metadata
> service isn't functioning properly. We pinged Matt Riedemann about one
> error we saw over in the nova metadata service, however he had seen it
> before us and already wrote a fix[5].
>
> That's the status of where things stand. Metadata service being broken,
> and also still MTU issues lurking in the background.
>
> [1]:
> http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2016-02-18.log.html#t2016-02-18T00:26:29
> [2]: https://en.wikipedia.org/wiki/Target_fixation
> [3]:
> http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2016-02-18.log.html#t2016-02-18T01:18:32
> [4]:
> http://logs.openstack.org/78/279378/9/experimental/gate-grenade-dsvm-neutron-multinode/40a5659/console.html#_2016-02-17_22_37_33_277
> [5]: https://review.openstack.org/#/c/279721/
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][Trove][Sahara] Horizon-Trove/Sahara External Repository

2016-02-18 Thread Akihiro Motoki
Hi,

> Horizon still have some things we need to tidy up on our end to make sure we
> have full support for testing and localization for external plugins.

I am pleased to announce that translations of trove-dashboard and
sahara-dashboard now work for the master branch. We now have parity with the
Liberty release :-)
Thanks Andreas and folks involved in this effort.

Akihiro

2015-12-04 6:44 GMT+09:00 Thai Q Tran :
> Hello Trovers and Horizoneers,
>
> The intention of this email is to get everyone on the same page so we are
> all aware of what is going on. As many of you are probably already aware,
> Horizon is moving toward the plugin model for all of its dashboards
> (including existing dashboards). This release cycle, we are aiming to move
> Sahara and Trove into their own repository with joint ownership of the
> respective project. I have spoken to interested parties, Craig, and David
> about it and we are all in agreement. Ideally, this should help speed up the
> review process for Trove, as you now own part of the code and ownership.
>
> Horizon still have some things we need to tidy up on our end to make sure we
> have full support for testing and localization for external plugins. We
> expect this to get resolve within the next few weeks. Work on excising the
> Trove code will begin this week so expect a patch for that soon! It would be
> ideal if we can merge existing Trove code before the excision happens. David
> has agreed to let these patches merge with one core vote if we have enough
> Trovers reviewing/reverifying them. So please help us help you.
>
> David and Craig, if I left anything else out, feel free add to this.
> Otherwise, have a good xmas everyone. Looking to working with you all in the
> coming weeks.
>
> Regard,
> Thai (tqtran)
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance]one more use case for Image Import Refactor from OPNFV

2016-02-18 Thread Jay Pipes

On 02/18/2016 08:34 PM, joehuang wrote:

Hello,

Glad to know that the “Image Import Refactor” is the essential BP in
Mitaka. One more use case from OPNFV as following:

In OPNFV, one deployment scenario is each data center will be deployed
with independent OpenStack instance, but shared identity management.


That's not independent OpenStack instances.


That means there will be one Glance with its backend in each datacenter.
This is to make sure each datacenter can work standalone as much as possible,
even if the others crash.


If the identity management is shared, it's not standalone.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Please do *not* use git (and specifically "git log") when generating the docs

2016-02-18 Thread Mike Bayer



On 02/18/2016 04:39 PM, Dolph Mathews wrote:


On Thu, Feb 18, 2016 at 11:17 AM, Thomas Goirand <z...@debian.org> wrote:
Hi,

I've seen Reno doing it, then some more. It's time that I raise the
issue globally in this list before the epidemic spreads to the whole of
OpenStack ! :)

The last occurence I have found is in oslo.config (but please keep in
mind this message is for all projects), which has, its
doc/source/conf.py:

git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'",
"--date=local","-n1"]
html_last_updated_fmt = subprocess.check_output(git_cmd,
 stdin=subprocess.PIPE)


Probably a dumb question, but why do you need to build the HTML docs
when you're building a package for Debian?


Sphinx builds in many formats, not just HTML, and includes, among others, the 
man page format, which is probably relevant to a Debian package.
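One way to keep the git-derived timestamp while surviving builds without git, or without a .git directory, is a guarded call in doc/source/conf.py. This is a sketch of the idea, not the actual oslo.config fix:

```python
import subprocess

# Fall back gracefully when git is absent or we're not in a checkout
# (e.g. building from an sdist tarball in a distro package build).
git_cmd = ["git", "log", "--pretty=format:%ad, commit %h",
           "--date=local", "-n1"]
try:
    html_last_updated_fmt = subprocess.check_output(
        git_cmd, stderr=subprocess.DEVNULL).decode("utf-8")
except (OSError, subprocess.CalledProcessError):
    html_last_updated_fmt = None  # no per-commit timestamp in that case
```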







Of course, the .git folder is *NOT* available when building a package in
Debian (and more generally, in downstream distros). This means that this
kind of joke *will* break the build of the packages when they also build
the docs of your project. And consequently, the package maintainers have
to patch out the above lines from conf.py. It'd be best if it wasn't
needed to do so.

As a consequence, it is *not ok* to do "git log" anywhere in the sphinx
docs. Please keep this in mind.

More generally, it is wrong to assume that even the git command is
present. For Mitaka b2, I had to add git as a build dependency on nearly
all server packages; otherwise they would FTBFS (fail to build from
source). This is plain wrong and makes no sense. I hope this can be
reverted somehow.

Thanks in advance for considering the above and for trying to see things
from the package maintainer's perspective.
Cheers,

Thomas Goirand (zigo)




[openstack-dev] [ironic] Midcycle summary part 6/6

2016-02-18 Thread Jim Rollenhagen
Hi all,

As our midcycle is virtual and split into 6 "sessions" for the sake of
timezones, we'll be sending a brief summary of each session so that
folks can catch up before the next one. All of this info should be on
the etherpad as well.

Session 6/6 was February 19, -0400 UTC.

This will be a quick one. There were four of us present with nothing
relevant to talk about. We talked about John's ansible automation for
setting up a gate-like host for a bit. Then we talked about keyboards
for a few minutes. Then we decided to drop off and call it a day.

This virtual midcycle went far better than I'd expected. In the coming
week or so, I'll be writing a blog post and/or email here with a better
overall summary, and some thoughts on virtual midcycles as a thing.

Thanks to everyone who participated, and a *huge* thanks to the infra
team for providing an awesome VOIP system that had almost zero blips. :D

// jim



Re: [openstack-dev] [puppet] how to run rspec tests? r10k issue

2016-02-18 Thread Iury Gregory
Well, since we need to manually run "bundle install", we can pass the path
to where we want the gems, but always requiring an extra argument is not
the best way.

2016-02-18 22:48 GMT-03:00 Matt Fischer :

> I ended up symlinking the r10k binary I have installed to the place it
> wants it to be and it worked. I do have that in my Gemfile. Question is,
> can we make this work without manual steps?
>
> On Thu, Feb 18, 2016 at 4:57 PM, Alex Schultz 
> wrote:
>
>>
>>
>> On Thu, Feb 18, 2016 at 3:26 PM, Matt Fischer 
>> wrote:
>>
>>> Is anyone able to share the secret of running spec tests since the r10k
>>> transition? bundle install && bundle exec rake spec have issues because
>>> r10k is not being installed. Since I'm not the only one hopefully this
>>> question will help others.
>>>
>>> +
>>> PUPPETFILE=/etc/puppet/modules/keystone/openstack/puppet-openstack-integration/Puppetfile
>>> + /var/lib/gems/1.9.1/bin/r10k puppetfile install -v
>>> /etc/puppet/modules/keystone/openstack/puppet-openstack-integration/functions:
>>> line 51: /var/lib/gems/1.9.1/bin/r10k: No such file or directory
>>> rake aborted!
>>>
>>
>> I assume you were trying to run the tests on the keystone module so it
>> should have been installed with the bundle install as it is listed in the
>> Gemfile[0].  Are you sure your module is up to date?
>>
>> -Alex
>>
>> [0] https://github.com/openstack/puppet-keystone/blob/master/Gemfile#L26
>>
>>
>>>
>
>


-- 

~


*Att[]'s*
*Iury Gregory Melo Ferreira*
*Master's student in Computer Science at UFCG*
*E-mail: iurygreg...@gmail.com*
~


Re: [openstack-dev] [nova][all] Deprecation policy between projects

2016-02-18 Thread Ken'ichi Ohmichi
Hi Gordon,

Thank you for the advice, that is very useful for me now :-)

Thanks
Ken Ohmichi

---

2016-02-15 18:35 GMT-08:00 gordon chung :
>
>
> On 14/02/2016 8:32 AM, Ken'ichi Ohmichi wrote:
>> Hi,
>>
>> Do we have any deprecation policies between projects?
>> When can we remove old drivers of the other projects after they were
>> marked as deprecated?
>> In nova, there are many drivers for the other projects and there are
>> patches which remove this kind of code. (e.g:
>> https://review.openstack.org/#/c/274696/)
>>
>> This seems like a common question, and I may have missed a previous discussion.
>>
>> Thanks
>> Ken Ohmichi
>>
>>
>
> the 'official' deprecation policy with regards to tags is this:
> https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html
>
> i'd imagine it holds up whether talking about internal features or in
> your case features across projects.
>
> cheers,
> --
> gord
>


Re: [openstack-dev] Questions regarding image "location" and glanceclient behaviour ...

2016-02-18 Thread Martinx - ジェームズ
> > But this procedure will force me to download all images in advance,
> > which I can not do.
> >
> > I NEED the previous behavior, where Glance downloads the images by
> > itself, on demand.
> >
> > How to do this with V2?
>
> You can use glance image-create without passing it any image data. You
> should then be able to use location-add and I believe Glance will download
> the image itself in that case.
>
> Cheers,
> --
> Ian Cordasco
>

That is good news! I am going to try it right now!

Thank you!

Cheers!
Thiago


Re: [openstack-dev] [puppet] how to run rspec tests? r10k issue

2016-02-18 Thread Matt Fischer
I ended up symlinking the r10k binary I have installed to the place it
wants it to be and it worked. I do have that in my Gemfile. Question is,
can we make this work without manual steps?

On Thu, Feb 18, 2016 at 4:57 PM, Alex Schultz  wrote:

>
>
> On Thu, Feb 18, 2016 at 3:26 PM, Matt Fischer 
> wrote:
>
>> Is anyone able to share the secret of running spec tests since the r10k
>> transition? bundle install && bundle exec rake spec have issues because
>> r10k is not being installed. Since I'm not the only one hopefully this
>> question will help others.
>>
>> +
>> PUPPETFILE=/etc/puppet/modules/keystone/openstack/puppet-openstack-integration/Puppetfile
>> + /var/lib/gems/1.9.1/bin/r10k puppetfile install -v
>> /etc/puppet/modules/keystone/openstack/puppet-openstack-integration/functions:
>> line 51: /var/lib/gems/1.9.1/bin/r10k: No such file or directory
>> rake aborted!
>>
>
> I assume you were trying to run the tests on the keystone module so it
> should have been installed with the bundle install as it is listed in the
> Gemfile[0].  Are you sure your module is up to date?
>
> -Alex
>
> [0] https://github.com/openstack/puppet-keystone/blob/master/Gemfile#L26
>
>
>
>


[openstack-dev] [glance]one more use case for Image Import Refactor from OPNFV

2016-02-18 Thread joehuang
Hello,

Glad to know that "Image Import Refactor" is an essential BP in Mitaka.
One more use case from OPNFV follows:

In OPNFV, one deployment scenario is that each data center will be deployed
with an independent OpenStack instance, but with shared identity management.
That means there will be one Glance with its backend in each datacenter.
This is so each datacenter can work standalone as much as possible, even if
others crash. An end user can upload an image or create an image from a VM
(or volume) in one OpenStack cloud, but wants to reuse the image in another
OpenStack cloud with a different Glance and backend.

The use case for Image Import Refactor is:


- An end user is able to import an image from another Glance in another
OpenStack cloud while sharing the same identity management (Keystone).

Please take this use case into account in the "Image Import Refactor".
Thanks in advance.

Best Regards
Chaoyi Huang ( Joe Huang )



Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-18 Thread Fox, Kevin M
It might be good to consider the scale that's needed too. For us, some of our
clouds are on an internal network, and it's highly undesirable for an external
CDN to be used. But to have the API work internally and scale out to at least
the organization, so the same app templates can be used internally and
externally: that would be very cool. So backing it with Swift or something would
be an interesting way to do it. It might not be very hard to implement that
way, and it would be useful to those that have private-only clouds (probably a lot
of folks). It wouldn't be a true CDN in the regular sense, since it wouldn't be
geographic. But good enough. Would that be possible?

Thanks,
Kevin

From: Jeremy Stanley [fu...@yuggoth.org]
Sent: Thursday, February 18, 2016 4:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

On 2016-02-18 17:20:35 -0600 (-0600), Ian Cordasco wrote:
[...]
> Presently, I think we need a F/OSS CDN but it isn't going to
> happen until the infrastructure for a CDN is something any
> OpenStack consumer would want to manage.
[...]

Probably an unusual use case and stretching the definition of CDN:
the Infra team has rolled their own by putting Apache on virtual
machines serving content from AFS for the purpose of hosting a
variety of content mirrors in each of the myriad OpenStack
providers/regions where it runs CI jobs. It may be that the
infrastructure for a CDN is in fact something that a lot of
multi-region/cross-provider applications would like to manage but
that their needs are sufficiently different so as to make a targeted
solution for one useless for another.

Ignorance on my part I'm sure, but I'd like to see a definition of
"content delivery network" that people can agree on before figuring
out what Poppy even is. I've browsed its documentation and it
doesn't seem to actually define this, so I get the impression that
its entire existence is defined and informed solely by other
proprietary application designs.
--
Jeremy Stanley



Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-18 Thread Jeremy Stanley
On 2016-02-18 17:20:35 -0600 (-0600), Ian Cordasco wrote:
[...]
> Presently, I think we need a F/OSS CDN but it isn't going to
> happen until the infrastructure for a CDN is something any
> OpenStack consumer would want to manage.
[...]

Probably an unusual use case and stretching the definition of CDN:
the Infra team has rolled their own by putting Apache on virtual
machines serving content from AFS for the purpose of hosting a
variety of content mirrors in each of the myriad OpenStack
providers/regions where it runs CI jobs. It may be that the
infrastructure for a CDN is in fact something that a lot of
multi-region/cross-provider applications would like to manage but
that their needs are sufficiently different so as to make a targeted
solution for one useless for another.

Ignorance on my part I'm sure, but I'd like to see a definition of
"content delivery network" that people can agree on before figuring
out what Poppy even is. I've browsed its documentation and it
doesn't seem to actually define this, so I get the impression that
its entire existence is defined and informed solely by other
proprietary application designs.
-- 
Jeremy Stanley



Re: [openstack-dev] [nova] Update on scheduler and resource tracker progress

2016-02-18 Thread Jay Pipes

On 02/18/2016 07:16 PM, Clint Byrum wrote:

> Excerpts from Jay Pipes's message of 2016-02-18 11:33:04 -0800:
> I'm talking about the destination host selection process too, but I was
> just assuming you'd need compound indexes to make this really efficient,
> and I assumed that would mean more indexes than exist today.


Well, it's an *entirely* different schema than exists today... kind of 
tough to compare based on the existence of compound indexes in the new 
schema (which uses integer sums for all resource amount comparisons) to 
a schema that uses JSON blobs for some resources, integer fields for 
some resources, and entirely different tables for other resources 
(pci_devices) ;)



> So, I guess what I may have missed was that these indexes already exist.


None of them exist in the current database schema. The new schema does 
have indexes:


CREATE TABLE resource_providers (
  id INT NOT NULL,
  uuid CHAR(36) NOT NULL,
  name VARCHAR(200) NULL,
  can_host INT NOT NULL,
  generation INT NOT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY (uuid)
);

CREATE TABLE inventories (
  resource_provider_id INT NOT NULL,
  resource_class_id INT NOT NULL,
  total INT NOT NULL,
  reserved INT NOT NULL,
  min_unit INT NOT NULL,
  max_unit INT NOT NULL,
  step_size INT NOT NULL,
  allocation_ratio FLOAT NOT NULL,
  PRIMARY KEY (resource_provider_id, resource_class_id),
  INDEX (resource_class_id)
);

CREATE TABLE IF NOT EXISTS allocations (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  resource_provider_id INT NOT NULL,
  resource_class_id INT NOT NULL,
  consumer_uuid CHAR(36) NOT NULL,
  used INT NOT NULL,
  created_at DATETIME NOT NULL,
  INDEX (resource_provider_id, resource_class_id),
  INDEX (consumer_uuid),
  INDEX (resource_class_id, resource_provider_id, used)
);

Lemme know if you spot somewhere that would benefit from alternate 
indexes or if you disagree with the indexing placed on the above tables.
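To make the "push the filtering into the query" idea concrete, here is a self-contained sketch (sqlite3, a single resource class, table and column names following the schema above; an illustration, not the actual Nova code) of a capacity query that winnows providers before any Python-side filter runs:

```python
# Sketch of the "filter in the DB" capacity query against the schema
# above, simplified to one resource class (VCPU).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE resource_providers (
  id INT PRIMARY KEY, uuid CHAR(36) NOT NULL,
  name VARCHAR(200), can_host INT NOT NULL, generation INT NOT NULL);
CREATE TABLE inventories (
  resource_provider_id INT NOT NULL, resource_class_id INT NOT NULL,
  total INT NOT NULL, reserved INT NOT NULL,
  min_unit INT NOT NULL, max_unit INT NOT NULL, step_size INT NOT NULL,
  allocation_ratio FLOAT NOT NULL,
  PRIMARY KEY (resource_provider_id, resource_class_id));
CREATE TABLE allocations (
  id INTEGER PRIMARY KEY, resource_provider_id INT NOT NULL,
  resource_class_id INT NOT NULL, consumer_uuid CHAR(36) NOT NULL,
  used INT NOT NULL);
""")

VCPU = 1
# Two providers with 16 VCPUs each; provider 1 already has 15 allocated.
conn.execute("INSERT INTO resource_providers VALUES (1,'uuid-1','full',1,0)")
conn.execute("INSERT INTO resource_providers VALUES (2,'uuid-2','free',1,0)")
for rp in (1, 2):
    conn.execute("INSERT INTO inventories VALUES (?,?,16,0,1,16,1,1.0)",
                 (rp, VCPU))
conn.execute("INSERT INTO allocations VALUES (NULL,1,?, 'c1', 15)", (VCPU,))

requested = 4
rows = conn.execute("""
SELECT rp.id
FROM resource_providers rp
JOIN inventories inv ON inv.resource_provider_id = rp.id
  AND inv.resource_class_id = ?
LEFT JOIN (SELECT resource_provider_id, SUM(used) AS used
           FROM allocations WHERE resource_class_id = ?
           GROUP BY resource_provider_id) alloc
  ON alloc.resource_provider_id = rp.id
WHERE (inv.total - inv.reserved) * inv.allocation_ratio
      - COALESCE(alloc.used, 0) >= ?
  AND ? BETWEEN inv.min_unit AND inv.max_unit
""", (VCPU, VCPU, requested, requested)).fetchall()

print([r[0] for r in rows])  # only provider 2 has 4 free VCPUs -> [2]
```

Provider 1 is winnowed out by the WHERE clause before the scheduler ever sees it, which is the whole point: the fewer rows returned, the less Python-side post-processing per call to select_destinations().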



>> As you would expect, the larger the size of the deployment, the greater
>> the performance benefit you see using the DB for querying instead of
>> Python (lower numbers are better here):
>>
>> DB or Python   # Compute Nodes   Avg Time to Select   Delta
>>
>> DB             100               0.021035
>> Python         100               0.022517             +7.0%
>> DB             200               0.023370
>> Python         200               0.026526             +13.5%
>> DB             400               0.027638
>> Python         400               0.034666             +25.4%
>> DB             800               0.034814
>> Python         800               0.048271             +38.6%
>>
>> The above was for a serialized scenario (1 scheduler process). Parallel
>> operations at 2, 4 and 8 scheduler processes were virtually identical as
>> can be expected since this is testing the read operation performance,
>> not the write operations.


> I am not surprised at these results at all. However, I am still a little
> wary of anything that happens faster in a central resource. Faster
> is great, but it also means we now have to scale _up_ that central
> resource. Hopefully it is so much more efficient to read indexes from
> that DB instead of filter lists in python that we get a very large margin
> between what lots of slow python processes could have done and what one
> very fast mysqld can do.


Sure, I understand your concerns. I built the placement-bench project 
precisely to get data to inform us of the benefits and drawbacks of 
different approaches. Hopefully that data will allow us to make good 
decisions in the future.



> With 1000 active compute nodes updating their status,
> each index added will be 1000 more index writes per update period. Still
> a net win, but I'm always cautious about shifting things to more writes
> on the database server. That said, I do think it will be a win and should
> be done.


Again, this isn't what the "move the filtering to the database query"
proposal is about :) You are describing the *claim* operation above, not
the select-destination operation.

The *current* scheduler design is what has each distributed compute node
sending updates to the scheduler^Wdatabase each time a claim occurs.
What the second part of my proposal does is move the claim from the
distributed compute nodes and into the scheduler, which should allow the
scheduler to operate on non-stale data (which will reduce the number of
long retry operations). More below.
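To illustrate what claiming in the scheduler can look like, here is a hypothetical sketch (sqlite3 again; not how Nova implements it) that uses the generation column from the resource_providers schema above for optimistic concurrency: the claim only lands if no concurrent writer bumped the generation between the read and the write; otherwise it re-reads and retries.

```python
# Hypothetical optimistic claim using the "generation" column: the
# UPDATE only succeeds if no concurrent claim bumped the generation
# between our read and our write.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE resource_providers (
  id INT PRIMARY KEY, generation INT NOT NULL);
CREATE TABLE allocations (
  id INTEGER PRIMARY KEY, resource_provider_id INT NOT NULL,
  resource_class_id INT NOT NULL, consumer_uuid CHAR(36) NOT NULL,
  used INT NOT NULL);
""")
conn.execute("INSERT INTO resource_providers VALUES (1, 0)")

def claim(conn, rp_id, rc_id, consumer, amount, retries=3):
    """Attempt an optimistic claim; return True on success."""
    for _ in range(retries):
        (gen,) = conn.execute(
            "SELECT generation FROM resource_providers WHERE id = ?",
            (rp_id,)).fetchone()
        # ... capacity check against inventories/allocations goes here ...
        cur = conn.execute(
            "UPDATE resource_providers SET generation = generation + 1 "
            "WHERE id = ? AND generation = ?", (rp_id, gen))
        if cur.rowcount == 1:
            # Generation unchanged since we read it: our view was not
            # stale, so the allocation write is safe.
            conn.execute(
                "INSERT INTO allocations VALUES (NULL, ?, ?, ?, ?)",
                (rp_id, rc_id, consumer, amount))
            conn.commit()
            return True
        # Someone else claimed concurrently; re-read and retry.
    return False

print(claim(conn, 1, 1, "instance-1", 4))  # True
```

The compute node then only has to confirm or reject the claim, rather than being the place where the resource accounting happens.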


The second major scale problem with the current Nova scheduler design
has to do with the fact that the scheduler does *not* actually claim
resources on a provider. Instead, the scheduler selects a destination
host to place the instance on and the Nova conductor then sends a
message to that target host which attempts to spawn the instance on its
hypervisor. If the spawn succeeds, the target compute host updates the
Nova database and decrements its count of available resources. These
steps (from nova-scheduler to nova-conductor to nova-compute to
database) all 

Re: [openstack-dev] [nova] Update on scheduler and resource tracker progress

2016-02-18 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2016-02-18 11:33:04 -0800:
> On 02/12/2016 01:47 PM, Clint Byrum wrote:
> > Excerpts from Jay Pipes's message of 2016-02-11 12:24:04 -0800:
> >> Hello all,
> >>
> >> Performance working group, please pay attention to Chapter 2 in the
> >> details section.
> >>
> >
> > 
> >
> >> Chapter 2 - Addressing performance and scale
> >> 
> >>
> >> One of the significant performance problems with the Nova scheduler is
> >> the fact that for every call to the select_destinations() RPC API method
> >> -- which itself is called at least once every time a launch or migration
> >> request is made -- the scheduler grabs all records for all compute nodes
> >> in the deployment. Once retrieving all these compute node records, the
> >> scheduler runs each through a set of filters to determine which compute
> >> nodes have the required capacity to service the instance's requested
> >> resources. Having the scheduler continually retrieve every compute node
> >> record on each request to select_destinations() is extremely
> >> inefficient. The greater the number of compute nodes, the bigger the
> >> performance and scale problem this becomes.
> >>
> >> On a loaded cloud deployment -- say there are 1000 compute nodes and 900
> >> of them are fully loaded with active virtual machines -- the scheduler
> >> is still going to retrieve all 1000 compute node records on every
> >> request to select_destinations() and process each one of those records
> >> through all scheduler filters. Clearly, if we could filter the amount of
> >> compute node records that are returned by removing those nodes that do
> >> not have available capacity, we could dramatically reduce the amount of
> >> work that each call to select_destinations() would need to perform.
> >>
> >> The resource-providers-scheduler blueprint attempts to address the above
> >> problem by replacing a number of the scheduler filters that currently
> >> run *after* the database has returned all compute node records with
> >> instead a series of WHERE clauses and join conditions on the database
> >> query. The idea here is to winnow the number of returned compute node
> >> results as much as possible. The fewer records the scheduler must
> >> post-process, the faster the performance of each individual call to
> >> select_destinations().
> >
> > This is great, and I think it is the way to go. However, I'm not sure how
> > dramatic the overall benefit will be, since it also shifts some load from
> >  reads to writes.
> 
> No, the above is *only* talking about the destination host selection 
> process, not the claim process. There are no writes here at all.
> 
>  From my benchmarking, I see a 7.0% to 38.6% increase in the average 
> time to perform the destination selection operation when doing the 
> resource filtering on the Python side as opposed to in the DB side.
> 

I'm talking about the destination host selection process too, but I was
just assuming you'd need compound indexes to make this really efficient,
and I assumed that would mean more indexes than exist today.

So, I guess what I may have missed was that these indexes already exist.

> As you would expect, the larger the size of the deployment, the greater 
> the performance benefit you see using the DB for querying instead of 
> Python (lower numbers are better here):
> 
> DB or Python   # Compute Nodes   Avg Time to Select   Delta
> DB             100               0.021035
> Python         100               0.022517             +7.0%
> DB             200               0.023370
> Python         200               0.026526             +13.5%
> DB             400               0.027638
> Python         400               0.034666             +25.4%
> DB             800               0.034814
> Python         800               0.048271             +38.6%
> 
> The above was for a serialized scenario (1 scheduler process). Parallel 
> operations at 2, 4 and 8 scheduler processes were virtually identical as 
> can be expected since this is testing the read operation performance, 
> not the write operations.
> 

I am not surprised at these results at all. However, I am still a little
wary of anything that happens faster in a central resource. Faster
is great, but it also means we now have to scale _up_ that central
resource. Hopefully it is so much more efficient to read indexes from
that DB instead of filter lists in python that we get a very large margin
between what lots of slow python processes could have done and what one
very fast mysqld can do.

>  > With 1000 active compute nodes updating their status,
> > each index added will be 1000 more index writes per update period. Still
> > a net win, but I'm always cautious about shifting things to more writes
> > on the database server. That said, I do think it will be a win and should
> > be done.
> 
> Again, this isn't what the "mov

Re: [openstack-dev] [puppet] how to run rspec tests? r10k issue

2016-02-18 Thread Alex Schultz
On Thu, Feb 18, 2016 at 3:26 PM, Matt Fischer  wrote:

> Is anyone able to share the secret of running spec tests since the r10k
> transition? bundle install && bundle exec rake spec have issues because
> r10k is not being installed. Since I'm not the only one hopefully this
> question will help others.
>
> +
> PUPPETFILE=/etc/puppet/modules/keystone/openstack/puppet-openstack-integration/Puppetfile
> + /var/lib/gems/1.9.1/bin/r10k puppetfile install -v
> /etc/puppet/modules/keystone/openstack/puppet-openstack-integration/functions:
> line 51: /var/lib/gems/1.9.1/bin/r10k: No such file or directory
> rake aborted!
>

I assume you were trying to run the tests on the keystone module so it
should have been installed with the bundle install as it is listed in the
Gemfile[0].  Are you sure your module is up to date?

-Alex

[0] https://github.com/openstack/puppet-keystone/blob/master/Gemfile#L26




Re: [openstack-dev] [openstack-ansible] network question and documentation

2016-02-18 Thread Ian Cordasco
 

-Original Message-
From: Fabrice Grelaud 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: February 17, 2016 at 09:02:49
To: openstack-dev@lists.openstack.org 
Subject:  [openstack-dev] [openstack-ansible] network question and documentation

> Hi,
>  
> After a first test architecture of OpenStack (Juno, then upgraded to Kilo),
> installed from scratch, and because we use Ansible in our organization, we
> decided to deploy our next-generation OpenStack architecture from the
> openstack-ansible project.
>
> I studied your documentation (very good work, much appreciated:
> http://docs.openstack.org/developer/openstack-ansible/[kilo|liberty]/install-guide/index.html)
> and I need some more clarification about the network architecture.
>  
> I'm not sure this is the right mailing list, since it's dev-oriented here;
> on the other hand, I fear my request would get lost in the general
> OpenStack list, because it's very specific to the architecture proposed by
> your project (bond0 (br-mgmt, br-storage), bond1 (br-vxlan, br-vlan)).
>
> I'm sorry about that if that is the case...
> So, I would like to know if I'm going in the right direction.
> We want to use both existing VLANs from our physical architecture inside
> OpenStack (provider VLANs) and "private tenant networks" offering floating
> IPs (from a flat network).
> My question is about switch configuration:
>  
> On Bond0:
> the switch port connected to bond0 need to be configured as trunks with:
> - the host management network (vlan untagged but can be tagged ?)
> - container(mngt) network (vlan-container)
> - storage network (vlan-storage)
>  
> On Bond1:
> the switch port connected to bond1 need to be configured as trunks with:
> - vxlan network (vlan-vxlan)
> - vlan X (existing vlan in our existing network infra)
> - vlan Y (existing vlan in our existing network infra)
>  
> Is that right ?
>  
> And do I have to define a new network (a new VLAN or flat network) that
> offers floating IPs for private tenants (not using existing VLAN X or Y)?
> Does that new VLAN have to be connected to bond1 and/or bond0?
> Could the host management network play this role?
>  
> Thank you to consider my request.
> Regards
>  
> ps: otherwise, about the documentation, for better understanding and
> perhaps consistency: in GitHub
> (https://github.com/openstack/openstack-ansible), in the file
> openstack_interface.cfg.example, you point out that for br-vxlan and
> br-storage, "only compute node have an IP on this bridge. When used by
> infra nodes, IPs exist in the containers and inet should be set to manual".
>
> I think it would be good (but I may be wrong ;-) ) if, in chapter 3 of the
> install guide ("Configuring the network on target hosts"), you proposed
> the /etc/network/interfaces for both the controller node (br-vxlan,
> br-storage: manual, without IP) and the compute node (br-vxlan,
> br-storage: static, with IP).

Hi Fabrice,

Has anyone responded to your questions yet?

--  
Ian Cordasco




Re: [openstack-dev] Questions regarding image "location" and glanceclient behaviour ...

2016-02-18 Thread Ian Cordasco
-Original Message-
From: Martinx - ジェームズ 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: February 17, 2016 at 23:18:11
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] Questions regarding image "location" and 
glanceclient behaviour ...

> But this procedure will force me to download all images in advance, which I
> can not do.
>  
> I NEED the previous behavior, where Glance downloads the images by itself,
> on demand.
>
> How to do this with V2?

You can use glance image-create without passing it any image data. You should 
then be able to use location-add and I believe Glance will download the image 
itself in that case.
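Roughly, as an untested sketch with python-glanceclient (the create/add_location method names are from the v2 client; the server must permit location mutation, e.g. show_multiple_locations=True, and the URL below is a placeholder):

```python
# Untested sketch of the "create with no data, then attach a location"
# flow with the python-glanceclient v2 API.

def register_external_image(glance, name, url):
    """Create an image record with no data, then attach a location so
    Glance fetches/serves the bits itself, v1 copy-from style."""
    image = glance.images.create(name=name,
                                 disk_format="qcow2",
                                 container_format="bare")
    # The image is now in "queued" status with no data uploaded.
    glance.images.add_location(image["id"], url, {})
    return image

# Usage (assuming an authenticated client):
#   from glanceclient import Client
#   glance = Client("2", endpoint="http://glance.example:9292", token=TOK)
#   register_external_image(glance, "trusty",
#                           "http://images.example/trusty.qcow2")
```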

Cheers,
--  
Ian Cordasco




Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-18 Thread Ian Cordasco
-Original Message-
From: Flavio Percoco 
Reply: Flavio Percoco , OpenStack Development Mailing List 
(not for usage questions) 
Date: February 18, 2016 at 10:10:18
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

> On 16/02/16 19:17 +, Sean M. Collins wrote:
> >That is certainly a problem. However I think I would lean on Sean
> >Dague's argument about how Neutron had an open source solution that
> >needed a lot of TLC. The point being that at least they had 1 option.
> >Not zero options.
> >
> >And Dean's point about gce and aws API translation into OpenStack
> >Compute is also very relevant. We have precedence for doing API
> >translation layers that take some foreign API and translate it into
> >"openstackanese"
> >
> >I think Poppy would have a lot easier time getting into OpenStack were
> >it to take the steps to build a back-end that would do the required
> >operations to create a CDN - using a multi-region OpenStack cloud. Or
> >even adopting an open source CDN. Something! Anything really!
> >
> >Yes, it's a lot of work, but without that, as I think others have
> >stated - where's the OpenStack part?
>  
> That's not Poppy's business, fwiw. We can't ask a provisioning project to also
> be in the business of providing a data API. As others have mentioned, it's 
> just
> unfortunate that there's no open source solution for CDNs. TBH, I'd rather 
> have
> Poppy not running functional tests (because this is basically what this
> discussion is coming down to) than having the team working on a
> half-implemented, kinda CDN hack just to make the CI happy.
>  
> If someone wants to work on a CDN service, fine. That sounds awesome but let's
> not push the Poppy team down that road. They have a clear goal and mission.
> OpenStack's requirements are a bit too narrow for them.
>  
> That said, as Monty mentioned in the TC meeting, deploying CDNs is not
> necessarily something a cloud wants to do. Providing a service that provisions
> CDNs is more likely to be used by a cloud provider.

I've been sitting on the fence for a while now in this discussion but I'd have 
to say that this point of view has swayed me towards being in favor of 
including Poppy in OpenStack.

I understand the arguments against this, but in spite of the weak similarities 
drawn with other projects (e.g., Cinder having F/OSS drivers), I think we also 
have to recognize a difference in the problem domains. We do need a F/OSS CDN, 
but one isn't going to happen until the infrastructure for a CDN is something 
any OpenStack consumer would want to manage.

If anything, a consumer of OpenStack would probably like the freedom that Poppy 
will provide in being able to swap out existing CDN providers while consuming 
the same API.

But that's just my two cents as a non-TC community member.
--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] how to run rspec tests? r10k issue

2016-02-18 Thread Iury Gregory
Hi Matt, when r10k is not installed I use gem install or bundle install,
passing the directory that should contain the gem.
I'll search; tomorrow I'll try to find the exact command in my environment and post it.
On 18/02/2016 19:27, "Matt Fischer"  wrote:

> Is anyone able to share the secret of running spec tests since the r10k
> transition? bundle install && bundle exec rake spec have issues because
> r10k is not being installed. Since I'm not the only one hopefully this
> question will help others.
>
> +
> PUPPETFILE=/etc/puppet/modules/keystone/openstack/puppet-openstack-integration/Puppetfile
> + /var/lib/gems/1.9.1/bin/r10k puppetfile install -v
> /etc/puppet/modules/keystone/openstack/puppet-openstack-integration/functions:
> line 51: /var/lib/gems/1.9.1/bin/r10k: No such file or directory
> rake aborted!
>


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-18 Thread melanie witt
On Feb 12, 2016, at 14:49, Jay Pipes  wrote:

> This would be my preference as well, even though it's technically a 
> backwards-incompatible API change.
> 
> The idea behind get-me-a-network was specifically to remove the current 
> required complexity of the nova boot command with regards to networking 
> options and allow a return to the nova-net model where an admin could 
> auto-create a bunch of unassigned networks and the first time a user booted 
> an instance and did not specify any network configuration (the default, sane 
> behaviour in nova-net), one of those unassigned networks would be grabbed for 
> the troject, I mean prenant, sorry.
> 
> So yeah, the "opt-in to having no networking at all with a --no-networking or 
> --no-nics option" would be my preference.

+1 to this, especially opting in to having no network at all. It seems 
friendliest to me to have network allocation happen automatically if nothing 
special is specified.

This is something where it seems like we need a "reset" to a default behavior 
that is user-friendly, and microversions are the mechanism we have to "fix" an 
undesirable current default behavior.

While I get that a backward-incompatible change may appear to "sneak in" for a 
user specifying a later microversion to get an unrelated feature, it seems 
reasonable to me that a user specifying a microversion would consult the 
documentation for the version delta to get a clear picture of what to expect 
once they specify the new version. This of course hinges on users knowing how 
microversions work and being familiar with consulting documentation when 
changing versions. I hope that is the case and I hope this change will come 
with a very clear and concise release note with a link to [1].

-melanie

[1] http://docs.openstack.org/developer/nova/api_microversion_history.html
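To make the version-gated behavior change concrete, here is a toy sketch; the gating version number and all names below are invented for illustration, and this is not nova's actual code:

```python
# Hypothetical sketch of gating a new default behind an API microversion,
# with an explicit opt-out. The version number 2.99 is invented.

def parse_version(s):
    major, minor = s.split(".")
    return (int(major), int(minor))

AUTO_NETWORK_VERSION = (2, 99)  # illustrative only

def resolve_networks(requested_version, requested_networks):
    """Old behavior: no networks requested means no NIC attached.
    From the gating version on: no networks means auto-allocate one,
    and opting out requires an explicit "none"."""
    if parse_version(requested_version) < AUTO_NETWORK_VERSION:
        return requested_networks or []
    if requested_networks == "none":
        return []
    return requested_networks or ["auto"]

assert resolve_networks("2.1", None) == []          # legacy: no network
assert resolve_networks("2.99", None) == ["auto"]   # new default
assert resolve_networks("2.99", "none") == []       # explicit opt-out
```

The point is that the delta a user reads in the microversion history maps to a single, inspectable branch in the request handling.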





[openstack-dev] [puppet] how to run rspec tests? r10k issue

2016-02-18 Thread Matt Fischer
Is anyone able to share the secret of running spec tests since the r10k
transition? bundle install && bundle exec rake spec have issues because
r10k is not being installed. Since I'm not the only one hopefully this
question will help others.

+
PUPPETFILE=/etc/puppet/modules/keystone/openstack/puppet-openstack-integration/Puppetfile
+ /var/lib/gems/1.9.1/bin/r10k puppetfile install -v
/etc/puppet/modules/keystone/openstack/puppet-openstack-integration/functions:
line 51: /var/lib/gems/1.9.1/bin/r10k: No such file or directory
rake aborted!


Re: [openstack-dev] [neutron][dnsmasq]DNS redirection by dnsmasq

2016-02-18 Thread Carl Baldwin
On Tue, Feb 16, 2016 at 11:55 PM, Zhi Chang  wrote:
> DNS redirection is our customer's needs. Customer has their own CDN. They
> want to save traffic in CDN so that they can cost less money.
> So they let us hijack some domain names. We used dnsmasq "--cname" option to
> satisfy their needs. So I think that maybe we can add
> "cnames" into subnet's attributes.

So, you add a CNAME for something like mycdn.somedomain.com and send
it somewhere local.  Is that what you mean by hijack?  Could you
provide a contrived example of how one of these CNAMEs might look?

Right now, you might be able to accomplish this by pointing dnsmasq to
your own upstream DNS resolvers which have the CNAMEs.  Or, do the
CNAMEs need to be tenant/network specific?  You could also bypass
dnsmasq by setting the dns servers on the subnets to go to some
external server.

> BTW, I'm not quite understand about "--cname is limited to target names
> known by dnsmasq itself". Could you give me some explanation about it?

From the dnsmasq man page:

--cname=<cname>,<target>

Return a CNAME record which indicates that <cname> is really <target>.
There are significant limitations on the target; it must be a DNS name
which is known to dnsmasq from /etc/hosts (or additional hosts files),
from DHCP, from --interface-name or from another --cname. If the
target does not satisfy these criteria, the whole cname is ignored. The
cname must be unique, but it is permissible to have more than one
cname pointing to the same target.
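To illustrate the limitation quoted above, here is a contrived, hypothetical dnsmasq configuration (all names and paths are invented) in which the --cname target is made known to dnsmasq via an additional hosts file:

```
# /etc/dnsmasq.d/cdn-redirect.conf  (hypothetical paths and names)

# The CNAME target must be a name dnsmasq already knows, e.g. from an
# additional hosts file:
addn-hosts=/etc/dnsmasq.d/cdn-hosts
#   where /etc/dnsmasq.d/cdn-hosts contains:
#   10.0.0.50  local-cache.example.internal

# Redirect the CDN name to the local cache:
cname=mycdn.somedomain.com,local-cache.example.internal
```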

Carl



Re: [openstack-dev] [all] Please do *not* use git (and specifically "git log") when generating the docs

2016-02-18 Thread Paul Belanger
On Fri, Feb 19, 2016 at 01:17:08AM +0800, Thomas Goirand wrote:
> Hi,
> 
> I've seen Reno doing it, then some more. It's time that I raise the
> issue globally in this list before the epidemic spreads to the whole of
> OpenStack ! :)
> 
> The last occurrence I have found is in oslo.config (but please keep in
> mind this message is for all projects), which has, in its doc/source/conf.py:
> 
> git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'",
>"--date=local","-n1"]
> html_last_updated_fmt = subprocess.check_output(git_cmd,
> stdin=subprocess.PIPE)
> 
> Of course, the .git folder is *NOT* available when building a package in
> Debian (and more generally, in downstream distros). This means that this
> kind of joke *will* break the build of the packages when they also build
> the docs of your project. And consequently, the package maintainers have
> to patch out the above lines from conf.py. It'd be best if it wasn't
> needed to do so.
> 
> As a consequence, it is *not ok* to do "git log" anywhere in the sphinx
> docs. Please keep this in mind.
> 
> More generally, it is wrong to assume that even the git command is
> present. For Mitaka b2, I had to add git as build-dependency on nearly
> all server packages, otherwise they would FTBFS (fail to build from
> source). This is plain wrong and makes no sense. I hope this can be
> reverted somehow.
> 
> Thanks in advance for considering the above, and to try to see things
> from the package maintainer's perspective,
> Cheers,
> 
I ran into this in Fedora rawhide a few weeks ago. When talking to Doug in
-infra, there was some discussion of integrating this into PBR. To me, it is
just missing functionality at the moment. For now, I dropped reno support from
the package, which is not a major loss at this point.

I suspect it will take a few more releases before packagers can use it properly.
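One way projects could avoid the breakage zigo describes is to degrade gracefully when .git (or the git binary itself) is absent. A sketch of a defensive variant of the quoted conf.py snippet follows; the function name is mine, not an oslo convention:

```python
import subprocess

def last_updated_from_git(default=None):
    """Return "date, commit hash" from git, or ``default`` when the
    tree has no .git directory or no git binary, as in a distro
    package build from a release tarball."""
    cmd = ["git", "log", "--pretty=format:%ad, commit %h",
           "--date=local", "-n1"]
    try:
        out = subprocess.check_output(cmd, stderr=subprocess.DEVNULL)
    except (OSError, subprocess.CalledProcessError):
        return default
    return out.decode("utf-8", "replace").strip() or default

# doc/source/conf.py could then use:
html_last_updated_fmt = last_updated_from_git()
```

With a fallback like this, building docs from a tarball simply omits the last-updated stamp instead of failing the build.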

> Thomas Goirand (zigo)
> 
> 


Re: [openstack-dev] [all] Please do *not* use git (and specifically "git log") when generating the docs

2016-02-18 Thread Matthew Thode
On 02/18/2016 11:17 AM, Thomas Goirand wrote:
> Hi,
> 
> I've seen Reno doing it, then some more. It's time that I raise the
> issue globally in this list before the epidemic spreads to the whole of
> OpenStack ! :)
> 
> The last occurrence I have found is in oslo.config (but please keep in
> mind this message is for all projects), which has, in its doc/source/conf.py:
> 
> git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'",
>"--date=local","-n1"]
> html_last_updated_fmt = subprocess.check_output(git_cmd,
> stdin=subprocess.PIPE)
> 
> Of course, the .git folder is *NOT* available when building a package in
> Debian (and more generally, in downstream distros). This means that this
> kind of joke *will* break the build of the packages when they also build
> the docs of your project. And consequently, the package maintainers have
> to patch out the above lines from conf.py. It'd be best if it wasn't
> needed to do so.
> 
> As a consequence, it is *not ok* to do "git log" anywhere in the sphinx
> docs. Please keep this in mind.
> 
> More generally, it is wrong to assume that even the git command is
> present. For Mitaka b2, I had to add git as build-dependency on nearly
> all server packages, otherwise they would FTBFS (fail to build from
> source). This is plain wrong and makes no sense. I hope this can be
> reverted somehow.
> 
> Thanks in advance for considering the above, and to try to see things
> from the package maintainer's perspective,
> Cheers,
> 
> Thomas Goirand (zigo)
> 
> 

Coming across a bit strong :P

While we (Gentoo) are able to build the docs in our git-branch-based ebuilds
(sys-cluster/nova-2015.2. for liberty, for example), we can't do so in our
tag-based ebuilds; I don't think tarballs.openstack.org does (or should) ship
the .git folder. For us, building docs at install (or binpkg) generation time
is the only way, so this rules out docs for us entirely.

If the docs are built and shipped in the tarballs, that could work for us
though; we'd just move the files where needed.

-- 
-- Matthew Thode (prometheanfire)





Re: [openstack-dev] [all] Please do *not* use git (and specifically "git log") when generating the docs

2016-02-18 Thread Dolph Mathews
On Thu, Feb 18, 2016 at 11:17 AM, Thomas Goirand  wrote:

> Hi,
>
> I've seen Reno doing it, then some more. It's time that I raise the
> issue globally in this list before the epidemic spreads to the whole of
> OpenStack ! :)
>
> The last occurrence I have found is in oslo.config (but please keep in
> mind this message is for all projects), which has, in its doc/source/conf.py:
>
> git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'",
>"--date=local","-n1"]
> html_last_updated_fmt = subprocess.check_output(git_cmd,
> stdin=subprocess.PIPE)
>

Probably a dumb question, but why do you need to build the HTML docs when
you're building a package for Debian?


>
> Of course, the .git folder is *NOT* available when building a package in
> Debian (and more generally, in downstream distros). This means that this
> kind of joke *will* break the build of the packages when they also build
> the docs of your project. And consequently, the package maintainers have
> to patch out the above lines from conf.py. It'd be best if it wasn't
> needed to do so.
>
> As a consequence, it is *not ok* to do "git log" anywhere in the sphinx
> docs. Please keep this in mind.
>
> More generally, it is wrong to assume that even the git command is
> present. For Mitaka b2, I had to add git as build-dependency on nearly
> all server packages, otherwise they would FTBFS (fail to build from
> source). This is plain wrong and makes no sense. I hope this can be
> reverted somehow.
>
> Thanks in advance for considering the above, and to try to see things
> from the package maintainer's perspective,
> Cheers,
>
> Thomas Goirand (zigo)
>
>


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-18 Thread Matt Riedemann



On 2/15/2016 1:41 AM, Gary Kotton wrote:

Yes, you could consider Neutron as a proxy for this. It creates the
network, subnet, router… To be honest, I think we should consider
offering a template that can be created by Neutron, with the template ID
then passed in from Nova or wherever. This would enable an admin to
pre-cook a number of different templates for different use cases. But
maybe that is too far down the line.


From: Alex Xu <sou...@gmail.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Monday, February 15, 2016 at 7:06 AM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [nova][neutron] How would nova microversion
get-me-a-network in the API?

May I ask whether we can put this into the CLI instead? I guess there was a
similar discussion that I missed. Since we didn't want to provide more neutron
API proxies, this work sounds like adding another proxy. The API should stay
simple and flexible; this change makes the API behaviour more complex. It is
just like the evacuate API: it does one thing, and evacuating all the
instances on a host should be a CLI concern.

Thanks
Alex

2016-02-13 1:15 GMT+08:00 Matt Riedemann <mrie...@linux.vnet.ibm.com>:

Forgive me for thinking out loud, but I'm trying to sort out how
nova would use a microversion in the nova API for the
get-me-a-network feature recently added to neutron [1] and planned
to be leveraged in nova (there isn't a spec yet for nova, I'm trying
to sort this out for a draft).

Originally I was thinking that a network is required for nova boot,
so we'd simply check for a microversion and allow not specifying a
network, easy peasy.

Turns out you can boot an instance in nova (with neutron as the
network backend) without a network. All you get is a measly debug
log message in the compute logs [2]. That's kind of useless though
and seems silly.

I haven't tested this out yet to confirm, but I suspect that if you
create a nova instance w/o a network, you can later try to attach a
network using the os-attach-interfaces API as long as you either
provide a network ID *or* there is a public shared network or the
tenant has a network at that point (nova looks those up if a
specific network ID isn't provided).

The high-level plan for get-me-a-network in nova was simply going to
be if the user tries to boot an instance and doesn't provide a
network, and there isn't a tenant network or public shared network
to default to, then nova would call neutron's new
auto-allocated-topology API to get a network. This, however, is a
behavior change.

So I guess the question now is how do we handle that behavior change
in the nova API?

We could add an auto-create-net boolean to the boot server request
which would only be available in a microversion, then we could check
that boolean in the compute API when we're doing network validation.

Today if you don't specify a network and don't have a network
available, then the validation in the API is basically just quota
checking that you can get at least one port in your tenant [3]. With
a flag on a microversion, we could also validate some other things
about auto-creating a network (if we know that's going to be the
case once we hit the compute).

Anyway, this is mostly me getting thoughts out of my head before the
weekend so I don't forget it and am looking for other ideas here or
things I might be missing.

[1] https://blueprints.launchpad.net/neutron/+spec/get-me-a-network
[2]

https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L594-L595


[3]

https://github.com/openstack/nova/blob/30ba0c5eb19a9c9628957ac8e617ae78c0c1fa84/nova/network/neutronv2/api.py#L1107
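The high-level plan described above is essentially a fallback chain. A toy sketch of that ordering (invented names, not nova code; the allocator argument stands in for neutron's auto-allocated-topology call):

```python
# Hypothetical sketch of the fallback ordering: explicit request, then an
# existing tenant network, then a public shared network, and only then
# neutron auto-allocation (the behavior change being discussed).

def pick_network(requested, tenant_networks, shared_networks, allocate_topology):
    """Return the network(s) to attach at boot."""
    if requested:                 # user was explicit: respect it
        return requested
    if tenant_networks:           # an existing tenant network wins
        return [tenant_networks[0]]
    if shared_networks:           # then a public shared network
        return [shared_networks[0]]
    # last resort: ask neutron's auto-allocated-topology API
    return [allocate_topology()]

assert pick_network(["net-1"], [], [], lambda: "auto") == ["net-1"]
assert pick_network(None, ["tnet"], ["snet"], lambda: "auto") == ["tnet"]
assert pick_network(None, [], [], lambda: "auto-net") == ["auto-net"]
```

Only the final branch is new behavior, which is why the microversion question above centers on when that branch may be taken.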



--

Thanks,

Matt Riedemann



[openstack-dev] [ironic] Midcycle summary part 5/6

2016-02-18 Thread Jim Rollenhagen
Hi all,

As our midcycle is virtual and split into 6 "sessions" for the sake of
timezones, we'll be sending a brief summary of each session so that
folks can catch up before the next one. All of this info should be on
the etherpad as well.

Session 5/6 was February 18, 1500-2000 UTC.

* Discussed live upgrades in general
  * Agreed that we do not need to isolate the API from DB at this time,
nor use the @remotable decorator on object methods
    * Currently updates are passed via a conductor RPC method, with only
      the delta being passed. What would the perf impact be of passing
      the whole object over RPC (in the @remotable case)?
  * Things we need to do to get to live upgrade
* Get grenade gating, and grenade-partial running (even if broken)
  * These should test upgrades both from last stable release and
last intermediate release
* Be able to pin RPC versions, probably via config to start
* Get better at reviewing for compatibility in the rpcapi
  * grenade-partial will help here
* Maintain expand/contract migrations
      * Several WIPs will need to keep this in mind when moving data
        around
* Good deployer docs on upgrade process
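The "pin RPC versions, probably via config" item above can be sketched as follows; all names and version numbers are invented for illustration and this is not ironic's actual rpcapi code:

```python
# Hypothetical sketch of config-based RPC version pinning for rolling
# upgrades: the client caps the version it sends so that not-yet-upgraded
# conductors can still handle its calls.

LATEST_RPC_VERSION = "1.33"  # invented

def parse(v):
    return tuple(int(x) for x in v.split("."))

class ConductorAPI:
    def __init__(self, pinned_version=None):
        # pinned_version would come from a config option during upgrade
        self.version_cap = pinned_version or LATEST_RPC_VERSION

    def can_send(self, method_min_version):
        """A method introduced at ``method_min_version`` may only be
        called when the cap is at least that high."""
        return parse(self.version_cap) >= parse(method_min_version)

api = ConductorAPI(pinned_version="1.30")
assert not api.can_send("1.31")        # new method held back mid-upgrade
assert ConductorAPI().can_send("1.31")  # unpinned: full feature set
```

Once every conductor runs the new code, the operator removes the pin and restarts the clients.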

* Reviewed the network provider patch
  * https://review.openstack.org/#/c/139687/70
  * Found a number of issues that will need a major refactor to solve
  * Plan is to make this a NetworkInterface, dynamically loaded, similar
to our existing drivers
  * May need an "enabled network providers" thing
  * jroll to refactor this patch tomorrow; will likely turn out similar
to the driver composition proposal
  * Need folks to review the driver comp proposal, to make sure that's
reasonable for this work
* https://review.openstack.org/#/c/188370/
  * Talked about a "network state" sort of endpoint that can talk to
network providers
* This would initially move the (un)plug_vifs logic out of nova and
  into ironic
* will likely become a summit topic: "three service tug rope?"
  * As a user, would I...
* ask the bare metal service to plug a physical device into a
  network?
* ask the network service to plug my instance into a network?
* ask the compute service to plug my instance into a network?

* Discussed VLAN aware baremetal spec
  * tl;dr unbinds user-facing neutron networking from physical infra
  * https://review.openstack.org/#/c/277853/
  * Seems mostly sane, just some details to work out
  * Discussed how to do more complex port mapping
* i.e. "put the 1g port on net x and the 10g port on net y"
* Decided this is a completely separate piece of work; can be solved
  in parallel.
  * Requires work on glean/cloud-init and the metadata to plumb data
through
* Primarily VLANs/bonding
  * How to determine in the ML2 mech if switchport is trunk or access
mode?
  * How do we support instances that don't support VLANs?
  * Current POC code munges configdrive in ironic driver to pass the
right metadata; need to work with Nova team to figure out how to get
this up in the neutron API; mostly for sake of configdrive
  * This is also likely to become a summit session
  * Distinct action items for now:
* jroll to put this on summit hotlist
* jroll (or other rackspace folks) to dig up cloud-init patches
* sambetts to get the POC code on gerrit for testing/visibility
* sambetts and TheJulia to hack on glean
* all: review the spec :)
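The "more complex port mapping" bullet above ("put the 1g port on net x and the 10g port on net y") can be made concrete with a toy mapping; the schema here is purely hypothetical, not ironic's data model:

```python
# Hypothetical sketch: assign each bare metal port to a network based on
# its link speed, in the spirit of "1g on net x, 10g on net y".

ports = [
    {"address": "52:54:00:aa:bb:01", "speed_gbps": 1},
    {"address": "52:54:00:aa:bb:02", "speed_gbps": 10},
]

requested_mapping = {1: "net-x", 10: "net-y"}  # speed -> network

def map_ports(ports, mapping):
    """Map each port's MAC to the network requested for its speed."""
    return {p["address"]: mapping[p["speed_gbps"]]
            for p in ports if p["speed_gbps"] in mapping}

assert map_ports(ports, requested_mapping) == {
    "52:54:00:aa:bb:01": "net-x",
    "52:54:00:aa:bb:02": "net-y",
}
```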

Thanks to all for coming to this session, it was very productive! Just
one more to go. See some of you at . :D

// jim



Re: [openstack-dev] [nova][neutron] Update on os-vif progress (port binding negotiation)

2016-02-18 Thread Sergey Belous
Thanks, Sean. I'll try to keep you and everybody informed about progress on
those.

2016-02-18 20:20 GMT+03:00 Sean M. Collins :

> Jay Pipes wrote:
> > From our Mirantis team, I've asked Sergey Belous to handle any necessary
> > changes to devstack and project-config (for a functional test gate check
> > job).
>
> I'll keep an eye out in my DevStack review queue for these patches and
> will make sure to review them promptly.
>
> --
> Sean M. Collins
>



-- 
Best Regards,
Sergey Belous


[openstack-dev] [release][openstack] os-client-config 1.15.0 release (mitaka)

2016-02-18 Thread no-reply
We are satisfied to announce the release of:

os-client-config 1.15.0: OpenStack Client Configuration Library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/os-client-config

With package available at:

https://pypi.python.org/pypi/os-client-config

Please report issues through launchpad:

http://bugs.launchpad.net/os-client-config

For more details, please see below.

1.15.0
^^^^^^

Swiftclient instantiation now provides authentication information so
that long lived swiftclient objects can reauthenticate if necessary.
This should be a temporary situation until swiftclient supports
keystoneauth sessions at which point os-client-config will instantiate
swiftclient with a keystoneauth session.


New Features
************

* Swiftclient instantiation now provides authentication information
  so that long lived swiftclient objects can reauthenticate if
  necessary.

* Add support for explicit v2password auth type.

* Add SSL support to VEXXHOST vendor profile.

* Add zetta.io cloud vendor profile.


Bug Fixes
*********

* Fix bug where project_domain_{name,id} was set even if
  project_{name,id} was not set.


Other Notes
***********

* HPCloud vendor profile removed due to cloud shutdown.

* RunAbove vendor profile removed due to migration to OVH.

Changes in os-client-config 1.14.0..1.15.0
------------------------------------------

7865abc Add release notes
dd1f03c Send swiftclient username/password and token
10a9369 Remove HP and RunAbove from vendor profiles
8264e09 Added SSL support for VEXXHOST
fe2558a Add support for zetta.io
42727a5 Stop ignoring v2password plugin
ae8f4b6 Go ahead and remove final excludes
a2db877 Don't set project_domain if not project scoped
cfd2919 Clean up removed hacking rule from [flake8] ignore lists
2f1d184 set up release notes build

Diffstat (except docs and test files)
-------------------------------------

os_client_config/cloud_config.py   |  37 ++-
os_client_config/config.py |  30 ++-
os_client_config/vendors/hp.json   |  16 --
os_client_config/vendors/runabove.json |  15 --
os_client_config/vendors/vexxhost.json |   2 +-
os_client_config/vendors/zetta.json|  13 +
.../catch-up-release-notes-e385fad34e9f3d6e.yaml   |  22 ++
releasenotes/source/_static/.placeholder   |   0
releasenotes/source/_templates/.placeholder|   0
releasenotes/source/conf.py| 261 +
releasenotes/source/index.rst  |  17 ++
releasenotes/source/unreleased.rst |   5 +
test-requirements.txt  |   2 +-
tox.ini|  10 +-
18 files changed, 601 insertions(+), 91 deletions(-)


Requirements updates
--------------------

diff --git a/test-requirements.txt b/test-requirements.txt
index a50a202..5e4c304 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -5 +5 @@
-hacking>=0.9.2,<0.10
+hacking>=0.10.2,<0.11  # Apache-2.0





Re: [openstack-dev] [Neutron] - DVR L3 data plane performance results and scenarios

2016-02-18 Thread Georgy Okrokvertskhov
Hi Gal,

We do have DVR testing results on 200 nodes for both VXLAN and VLAN
configurations. We plan to publish them in performance-docs repository.

Thanks
Georgy

On Thu, Feb 18, 2016 at 6:06 AM, Gal Sagie  wrote:

> Hello All,
>
> We have started to test Dragonflow [1] data plane L3 performance and was
> wondering
> if there is any results and scenarios published for the current Neutron DVR
> that we can compare and learn the scenarios to test.
>
> We mostly want to validate and understand if our results are accurate and
> also join the
> community in defining base standards and scenarios to test any solution
> out there.
>
> For that we also plan to join and contribute to openstack-performance [2]
> efforts which to me
> are really important.
>
> Would love any results/information you can share, also interested in
> control plane
> testing and API stress tests (either using Rally or not)
>
> Thanks
> Gal.
>
> [1]
> http://docs.openstack.org/developer/dragonflow/distributed_dragonflow.html
> [2] https://github.com/openstack/performance-docs
>


-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284


Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-18 Thread Adam Young

On 02/18/2016 02:00 PM, Morgan Fainberg wrote:

Adam,

CORS shouldn't ever need catalog integration. CORS is a layer above 
anything in the service catalog and doesn't provide extra security beyond 
signalling to the JavaScript VM that it can access resources outside of 
its current domain; something that can be worked around in many ways, 
including using a non-JavaScript HTTP client. The underlying 
application can still reject the request.
OK, so the catalog is a vestige of the old discussion. Look instead at what 
we do with Federation's trusted dashboard.


http://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/config.py#n532

That is really what I was getting at:

It's not a question of the remote application rejecting the token. It is 
Keystone refusing to tell the browser that the remote application is 
allowed to read the token.


If the deployer does an all-in-one, and all services are on port 443, 
CORS is not an issue.


If each service has its own port or hostname, then each service needs to 
know the list of approved dashboards. Since we already do this in Keystone, 
I recommend the CORS middleware use the same property.


CONF.federation.trusted_dashboard
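As a minimal sketch of that reuse (invented function names; this is not keystone's or oslo's actual middleware code), the CORS allow-list could be derived from the same trusted-dashboard URLs:

```python
# Hypothetical sketch: derive CORS origins (scheme://host[:port]) from
# the trusted_dashboard URLs, and allow a request's Origin only if it
# matches one of them.

from urllib.parse import urlparse

trusted_dashboard = [
    "https://horizon.example.com/auth/websso/",  # assumed config value
]

def allowed_origins(dashboard_urls):
    """Reduce full dashboard URLs to their CORS origins."""
    return {"%s://%s" % (urlparse(u).scheme, urlparse(u).netloc)
            for u in dashboard_urls}

def cors_allow_origin(request_origin, dashboard_urls):
    """Return the Access-Control-Allow-Origin value, or None to refuse."""
    if request_origin in allowed_origins(dashboard_urls):
        return request_origin
    return None

assert cors_allow_origin("https://horizon.example.com",
                         trusted_dashboard) == "https://horizon.example.com"
assert cors_allow_origin("https://evil.example.net", trusted_dashboard) is None
```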




I don't see service catalog integration as a blocker for CORS.


On Thu, Feb 18, 2016 at 10:29 AM, John Garbutt > wrote:


> On 18 February 2016 at 17:58, Sean Dague <s...@dague.net> wrote:
> On 02/18/2016 12:17 PM, Michael Krotscheck wrote:
>> Clarifying:
>>
>> On Thu, Feb 18, 2016 at 2:32 AM Sean Dague <s...@dague.net> wrote:
>>
>> Ok, to make sure we all ended up on the same page at the
end of this

Re: [openstack-dev] [Neutron] - DVR L3 data plane performance results and scenarios

2016-02-18 Thread Vasudevan, Swaminathan (PNB Roseville)
Hi Gal Sagie,
Yes, there were some performance results on DVR that we shared with the
community during the Liberty summit in Vancouver.

Also, I think there was a performance analysis of DVR done by Oleg Bondarev
during the Paris summit.

We have made a lot more changes to the control plane to improve scale and
performance in DVR during the Mitaka cycle, and we will be sharing some
performance results at the upcoming summit.

We can definitely align on our approach and have all of those results
captured upstream for reference.

Please let me know if you need any other information.

Thanks
Swami

From: Gal Sagie [mailto:gal.sa...@gmail.com]
Sent: Thursday, February 18, 2016 6:06 AM
To: OpenStack Development Mailing List (not for usage questions); Eran Gampel; 
Shlomo Narkolayev; Yuli Stremovsky
Subject: [openstack-dev] [Neutron] - DVR L3 data plane performance results and 
scenarios

Hello All,

We have started to test Dragonflow [1] data plane L3 performance and were
wondering if there are any results and scenarios published for the current
Neutron DVR that we can compare against, and to learn which scenarios to test.

We mostly want to validate that our results are accurate, and also to join
the community in defining base standards and scenarios for testing any
solution out there.

For that we also plan to join and contribute to the openstack-performance [2]
efforts, which to me are really important.

Would love any results/information you can share; also interested in control
plane testing and API stress tests (either using Rally or not).

Thanks
Gal.

[1] http://docs.openstack.org/developer/dragonflow/distributed_dragonflow.html
[2] https://github.com/openstack/performance-docs
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Update on scheduler and resource tracker progress

2016-02-18 Thread Jay Pipes

On 02/12/2016 01:47 PM, Clint Byrum wrote:

Excerpts from Jay Pipes's message of 2016-02-11 12:24:04 -0800:

Hello all,

Performance working group, please pay attention to Chapter 2 in the
details section.






Chapter 2 - Addressing performance and scale


One of the significant performance problems with the Nova scheduler is
the fact that for every call to the select_destinations() RPC API method
-- which itself is called at least once every time a launch or migration
request is made -- the scheduler grabs all records for all compute nodes
in the deployment. Once retrieving all these compute node records, the
scheduler runs each through a set of filters to determine which compute
nodes have the required capacity to service the instance's requested
resources. Having the scheduler continually retrieve every compute node
record on each request to select_destinations() is extremely
inefficient. The greater the number of compute nodes, the bigger the
performance and scale problem this becomes.

On a loaded cloud deployment -- say there are 1000 compute nodes and 900
of them are fully loaded with active virtual machines -- the scheduler
is still going to retrieve all 1000 compute node records on every
request to select_destinations() and process each one of those records
through all scheduler filters. Clearly, if we could filter the amount of
compute node records that are returned by removing those nodes that do
not have available capacity, we could dramatically reduce the amount of
work that each call to select_destinations() would need to perform.

The resource-providers-scheduler blueprint attempts to address the above
problem by replacing a number of the scheduler filters that currently
run *after* the database has returned all compute node records with
instead a series of WHERE clauses and join conditions on the database
query. The idea here is to winnow the number of returned compute node
results as much as possible. The fewer records the scheduler must
post-process, the faster the performance of each individual call to
select_destinations().
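
The winnowing described above can be sketched in miniature. The schema and
column names below are purely illustrative (not Nova's actual compute_nodes
table), but they show how the same filter moves from Python post-processing
into a WHERE clause:

```python
import sqlite3

# Illustrative schema only -- not Nova's actual compute_nodes table.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE compute_nodes (
    id INTEGER PRIMARY KEY,
    memory_mb INTEGER, memory_mb_used INTEGER,
    vcpus INTEGER, vcpus_used INTEGER)""")
conn.executemany(
    "INSERT INTO compute_nodes VALUES (?, ?, ?, ?, ?)",
    [(1, 8192, 8000, 8, 8),    # nearly full
     (2, 16384, 2048, 16, 4),  # plenty of headroom
     (3, 8192, 4096, 8, 2)])

requested_ram, requested_vcpus = 4096, 2

# Current approach: fetch every record, then filter in Python.
all_nodes = conn.execute("SELECT * FROM compute_nodes").fetchall()
py_filtered = [n for n in all_nodes
               if n[1] - n[2] >= requested_ram
               and n[3] - n[4] >= requested_vcpus]

# Proposed approach: winnow with WHERE clauses so hosts without
# capacity never leave the database.
db_filtered = conn.execute(
    """SELECT * FROM compute_nodes
       WHERE memory_mb - memory_mb_used >= ?
         AND vcpus - vcpus_used >= ?""",
    (requested_ram, requested_vcpus)).fetchall()

assert py_filtered == db_filtered   # same answer, fewer rows fetched
print([n[0] for n in db_filtered])  # -> [2, 3]
```

The result set is identical either way; the difference is how many records
cross the wire and pass through the filter pipeline.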


This is great, and I think it is the way to go. However, I'm not sure how
dramatic the overall benefit will be, since it also shifts some load from
reads to writes.


No, the above is *only* talking about the destination host selection 
process, not the claim process. There are no writes here at all.


From my benchmarking, I see a 7.0% to 38.6% increase in the average 
time to perform the destination selection operation when doing the 
resource filtering on the Python side as opposed to on the DB side.


As you would expect, the larger the size of the deployment, the greater 
the performance benefit you see using the DB for querying instead of 
Python (lower numbers are better here):


DB or Python    # Compute Nodes    Avg Time to Select    Delta

DB                   100               0.021035
Python               100               0.022517           +7.0%
DB                   200               0.023370
Python               200               0.026526          +13.5%
DB                   400               0.027638
Python               400               0.034666          +25.4%
DB                   800               0.034814
Python               800               0.048271          +38.6%

The above was for a serialized scenario (1 scheduler process). Parallel 
operations at 2, 4 and 8 scheduler processes were virtually identical as 
can be expected since this is testing the read operation performance, 
not the write operations.


> With 1000 active compute nodes updating their status, each index added
> will be 1000 more index writes per update period. Still a net win, but
> I'm always cautious about shifting things to more writes on the database
> server. That said, I do think it will be a win and should be done.


Again, this isn't what the "move the filtering to the database query" 
proposal is about :) You are describing the *claim* operation above, not 
the select-destination operation.


The *current* scheduler design is what has each distributed compute node 
sending updates to the scheduler^Wdatabase each time a claim occurs. 
What the second part of my proposal does is move the claim from the 
distributed compute nodes and into the scheduler, which should allow the 
scheduler to operate on non-stale data (which will reduce the number of 
long retry operations). More below.
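
A claim performed inside the scheduler can be made atomic with a
compare-and-swap-style conditional UPDATE, so a stale view fails cheaply at
claim time rather than after a message to the compute node. The schema and
helper below are hypothetical, not Nova's code:

```python
import sqlite3

# Hypothetical schema and helper -- not Nova's code.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE compute_nodes (
    id INTEGER PRIMARY KEY,
    memory_mb INTEGER, memory_mb_used INTEGER)""")
conn.execute("INSERT INTO compute_nodes VALUES (1, 8192, 6144)")

def claim(node_id, ram_mb):
    # Atomic compare-and-swap: the UPDATE only matches if the
    # capacity still exists at commit time, so a stale view fails
    # fast here instead of triggering a long retry on the host.
    cur = conn.execute(
        """UPDATE compute_nodes
           SET memory_mb_used = memory_mb_used + ?
           WHERE id = ? AND memory_mb - memory_mb_used >= ?""",
        (ram_mb, node_id, ram_mb))
    conn.commit()
    return cur.rowcount == 1

print(claim(1, 2048))  # -> True  (exactly 2048 MB were free)
print(claim(1, 1))     # -> False (node is now full)
```

A failed claim simply means "pick another host", without ever dispatching to
the compute node.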



The second major scale problem with the current Nova scheduler design
has to do with the fact that the scheduler does *not* actually claim
resources on a provider. Instead, the scheduler selects a destination
host to place the instance on and the Nova conductor then sends a
message to that target host which attempts to spawn the instance on its
hypervisor. If the spawn succeeds, the target compute host updates the
Nova database and decrements its count of available resources. These
steps (from nova-scheduler to nova-con

Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-18 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2016-02-18 12:55:02 -0500:
> On 02/18/2016 12:22 PM, Michael Krotscheck wrote:
> > On Thu, Feb 18, 2016 at 9:07 AM Doug Hellmann wrote:
> > 
> > 
> > If the deployer is only ever supposed to set the value to the default,
> > why do we let them change it at all? Why isn't this just something the
> > app sets?
> > 
> > 
> > There was a specific request from the ironic team to not have headers be
> > prescribed. If, for instance, ironic is deployed with an auth plugin
> > that is not keystone, different allowed headers would be required.
> 
> Here is the future we're going to have.
> 
> Whatever the middleware does with no operator intervention will be how
> the world will work, and how you will need to assume the world will work
> going forward.
> 
> Right now, it appears that the default in the middleware is do nothing.
> That means CORS won't be in a functional state on services by default.
> However, I thought the point of the effort was that all the APIs in the
> wild would be CORS enabled.
> 
> I'm not hugely sympathetic to defaulting to not having the Keystone
> headers specified in the non keystone case. I get there are non keystone
> cases, but keystone is defcore. Making the keystone case worse for the
> non keystone case seems like fundamentally the wrong tradeoff.
> 
> -Sean
> 

I agree. We should make this thing work for our needs first, and allow
flexibility on top of that. But the default should be made useful.

Doug
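
The "default override process" from point 3 boils down to: the library ships
spec-only defaults, and each application overrides them in code before config
parsing. oslo.config exposes a helper along these lines (cfg.set_defaults);
the self-contained mimic below is illustrative rather than the real API:

```python
# Library (middleware) side: ships only spec-prescribed defaults.
class Opt:
    def __init__(self, name, default):
        self.name, self.default = name, default

CORS_OPTS = [Opt("allow_headers", ["Content-Type", "Accept"])]

def set_defaults(opts, **overrides):
    # Mutate defaults in place, before any config file is parsed;
    # a deployer-set value would still win over these.
    for opt in opts:
        if opt.name in overrides:
            opt.default = overrides[opt.name]

# Application side: extends the default for its own API at startup.
set_defaults(CORS_OPTS,
             allow_headers=["Content-Type", "Accept", "X-Auth-Token"])
assert CORS_OPTS[0].default == ["Content-Type", "Accept", "X-Auth-Token"]
```

This keeps the middleware true-to-spec while letting a Keystone-using service
make its generated sample config useful out of the box.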

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-18 Thread Morgan Fainberg
Adam,

CORS shouldn't ever need catalog integration. CORS is a layer above
anything in the service catalog and doesn't provide extra security; it
merely signals to the JavaScript VM that it may access resources outside
of its current domain, something that can be worked around in many ways,
including using a non-JavaScript HTTP client. The underlying application
can still reject the request.

I don't see service catalog integration as a blocker for CORS.
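
The advisory nature of CORS is visible in a minimal WSGI sketch (illustrative
only -- not the oslo.middleware implementation, and the allowed-origin set is
an assumed config value):

```python
# An assumed deployer-configured value, not a real default.
ALLOWED_ORIGINS = {"https://dashboard.example.com"}

def cors_middleware(app):
    """Add an Allow-Origin header for known origins; nothing more."""
    def wrapped(environ, start_response):
        origin = environ.get("HTTP_ORIGIN")

        def sr(status, headers):
            if origin in ALLOWED_ORIGINS:
                headers = headers + [
                    ("Access-Control-Allow-Origin", origin)]
            return start_response(status, headers)

        # The request is handed to the application unconditionally:
        # the *browser* enforces CORS, the server does not. Auth and
        # authorization must still happen in the application itself.
        return app(environ, sr)
    return wrapped
```

A non-browser client never looks at those headers at all, which is exactly
why CORS adds no server-side security.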


On Thu, Feb 18, 2016 at 10:29 AM, John Garbutt  wrote:

> On 18 February 2016 at 17:58, Sean Dague  wrote:
> > On 02/18/2016 12:17 PM, Michael Krotscheck wrote:
> >> Clarifying:
> >>
> >> On Thu, Feb 18, 2016 at 2:32 AM Sean Dague wrote:
> >>
> >> Ok, to make sure we all ended up on the same page at the end of this
> >> discussion, this is what I think I heard.
> >>
> >> 1) oslo.config is about to release with a feature that will make
> adding
> >> config to paste.ini not needed (i.e.
> >> https://review.openstack.org/#/c/265415/ is no longer needed).
> >>
> >>
> >> I will need help to do this. More below.
> >>
> >>
> >> 2) ideally the cors middleware will have sane defaults for that set
> of
> >> headers in oslo.config.
> >>
> >>
> >> I'd like to make sure we agree on what "Sane defaults" means here. By
> >> design, the CORS middleware is generic, and only explicitly includes the
> >> headers prescribed in the w3c spec.  It should not include any
> >> additional headers, for reasons of downstream non-openstack consumers.
> >>
> >>
> >> 3) projects should be able to apply new defaults for these options
> in
> >> their codebase through a default override process (that is now
> nicely
> >> documented somewhere... url?)
> >>
> >>
> >> New sample defaults for the generated configuration files, they should
> >> not live anywhere else. The CORS middleware should, if we go this path,
> >> be little more than a true-to-spec implementation, with config files
> >> that extend it for the appropriate API.
> >>
> >> The big question I now have is: What do we do with respect to the mitaka
> >> freeze?
> >>
> >> Option 1: Implement as is, keep things consistent, fix them in Newton.
> >
> > The problem with Option 1 is that it's not fixable in Newton. It
> > requires fixing for the next 3 releases as you have to deprecate out
> > bits in paste.ini, make middleware warn for removal first soft, then
> > hard, explain the config migration. Once this lands in the wild the
> > unwind is very long and involved.
> >
> > Which is why I -1ed the patch. Because the fix in newton isn't a revert.
>
> +1 on the upgrade impact being a blocker.
> Certainly for all folks meeting these:
>
> https://governance.openstack.org/reference/tags/assert_supports-upgrade.html#requirements
>
> This will require lots of folks to pitch in and help, and bend the
> process a touch.
> But that seems way more reasonable than dragging our users through
> that headache.
>
> Thanks,
> John
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] More attention to PostgreSQL

2016-02-18 Thread Renat Akhmerov
We already have a non-voting gate that runs our unit tests on top of Postgres. 
The thing is that it's not really stable now, and it runs for a long time 
because there's no parallelism in it (that can be added, but it requires more 
work). So we just need to keep improving it.

Renat Akhmerov
@ Mirantis Inc.



> On 16 Feb 2016, at 14:17, Elisha, Moshe (Nokia - IL) wrote:
> 
> Hi,
> 
> We have more and more customers who want to run Mistral on top of a 
> PostgreSQL database (instead of MySQL).
> I also know that PostgreSQL is important for some of our active contributors.
> 
> Can we add more attention to PostgreSQL? For example, add more gates (like 
> gate-rally-dsvm-mistral-task and gate-mistral-devstack-dsvm) that will run on 
> top of PostgreSQL as well.
> 
> What do you think?
> 
> Thanks.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-18 Thread John Garbutt
On 18 February 2016 at 17:58, Sean Dague  wrote:
> On 02/18/2016 12:17 PM, Michael Krotscheck wrote:
>> Clarifying:
>>
>> On Thu, Feb 18, 2016 at 2:32 AM Sean Dague wrote:
>>
>> Ok, to make sure we all ended up on the same page at the end of this
>> discussion, this is what I think I heard.
>>
>> 1) oslo.config is about to release with a feature that will make adding
>> config to paste.ini not needed (i.e.
>> https://review.openstack.org/#/c/265415/ is no longer needed).
>>
>>
>> I will need help to do this. More below.
>>
>>
>> 2) ideally the cors middleware will have sane defaults for that set of
>> headers in oslo.config.
>>
>>
>> I'd like to make sure we agree on what "Sane defaults" means here. By
>> design, the CORS middleware is generic, and only explicitly includes the
>> headers prescribed in the w3c spec.  It should not include any
>> additional headers, for reasons of downstream non-openstack consumers.
>>
>>
>> 3) projects should be able to apply new defaults for these options in
>> their codebase through a default override process (that is now nicely
>> documented somewhere... url?)
>>
>>
>> New sample defaults for the generated configuration files, they should
>> not live anywhere else. The CORS middleware should, if we go this path,
>> be little more than a true-to-spec implementation, with config files
>> that extend it for the appropriate API.
>>
>> The big question I now have is: What do we do with respect to the mitaka
>> freeze?
>>
>> Option 1: Implement as is, keep things consistent, fix them in Newton.
>
> The problem with Option 1 is that it's not fixable in Newton. It
> requires fixing for the next 3 releases as you have to deprecate out
> bits in paste.ini, make middleware warn for removal first soft, then
> hard, explain the config migration. Once this lands in the wild the
> unwind is very long and involved.
>
> Which is why I -1ed the patch. Because the fix in newton isn't a revert.

+1 on the upgrade impact being a blocker.
Certainly for all folks meeting these:
https://governance.openstack.org/reference/tags/assert_supports-upgrade.html#requirements

This will require lots of folks to pitch in and help, and bend the
process a touch.
But that seems way more reasonable than dragging our users through
that headache.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-18 Thread Adam Young

On 02/18/2016 12:17 PM, Michael Krotscheck wrote:

Clarifying:

On Thu, Feb 18, 2016 at 2:32 AM Sean Dague wrote:


Ok, to make sure we all ended up on the same page at the end of this
discussion, this is what I think I heard.

1) oslo.config is about to release with a feature that will make
adding
config to paste.ini not needed (i.e.
https://review.openstack.org/#/c/265415/ is no longer needed).


I will need help to do this. More below.

2) ideally the cors middleware will have sane defaults for that set of
headers in oslo.config.


I'd like to make sure we agree on what "Sane defaults" means here. By 
design, the CORS middleware is generic, and only explicitly includes 
the headers prescribed in the w3c spec.  It should not include any 
additional headers, for reasons of downstream non-openstack consumers.


3) projects should be able to apply new defaults for these options in
their codebase through a default override process (that is now nicely
documented somewhere... url?)


New sample defaults for the generated configuration files, they should 
not live anywhere else. The CORS middleware should, if we go this 
path, be little more than a true-to-spec implementation, with config 
files that extend it for the appropriate API.
So, I think we need to treat CORS as experimental for the time being 
anyway. When I last looked into it, we really needed service catalog 
integration to avoid being too permissive:


As I understand it, the CORS middleware as currently written does not 
limit which other applications would be able to read the data back 
from a POST operation.



Any application can make a subset of calls to Keystone, but we don't 
want anything but a "blessed" application to be able to read the tokens. 
We have a hard-coded check for this to support Federation already. This 
pattern needs to extend to any application trusted to get and read a 
Keystone token.






The big question I now have is: What do we do with respect to the 
mitaka freeze?


Option 1: Implement as is, keep things consistent, fix them in Newton.

Option 2: Try to fix it in Mitaka.
This requires patches against Heat, Nova, Aodh, Ceilometer, Keystone, 
Mistral, Searchlight, Designate, Manila, Barbican, Congress, Neutron, 
Cinder, Magnum, Sahara, Trove, Murano, Glance, Cue, Kite, Solum, 
Ironic. These patches have to land after the next oslo release has 
made it into global requirements, and requires the +2's of the 
appropriate cores.


I will need help, both to write and land those patches. We're super 
tight against feature freeze, and I'm currently overcommitted with the 
Ironic and Horizon midcycles (this week and next). I also have an 
infant at home, with no daycare, so I cannot work long hours to make 
this happen.


I feel that I can commit to landing 5 of the 22 required patches. If I 
cannot get support for the remaining 17, we risk having an 
inconsistent implementation, in which case Option 1 is preferred.


Who's willing to help?

Michael


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-18 Thread Morgan Fainberg
On Thu, Feb 18, 2016 at 9:58 AM, Sean Dague  wrote:

> On 02/18/2016 12:17 PM, Michael Krotscheck wrote:
> > Clarifying:
> >
> > On Thu, Feb 18, 2016 at 2:32 AM Sean Dague wrote:
> >
> > Ok, to make sure we all ended up on the same page at the end of this
> > discussion, this is what I think I heard.
> >
> > 1) oslo.config is about to release with a feature that will make
> adding
> > config to paste.ini not needed (i.e.
> > https://review.openstack.org/#/c/265415/ is no longer needed).
> >
> >
> > I will need help to do this. More below.
> >
> >
> > 2) ideally the cors middleware will have sane defaults for that set
> of
> > headers in oslo.config.
> >
> >
> > I'd like to make sure we agree on what "Sane defaults" means here. By
> > design, the CORS middleware is generic, and only explicitly includes the
> > headers prescribed in the w3c spec.  It should not include any
> > additional headers, for reasons of downstream non-openstack consumers.
> >
> >
> > 3) projects should be able to apply new defaults for these options in
> > their codebase through a default override process (that is now nicely
> > documented somewhere... url?)
> >
> >
> > New sample defaults for the generated configuration files, they should
> > not live anywhere else. The CORS middleware should, if we go this path,
> > be little more than a true-to-spec implementation, with config files
> > that extend it for the appropriate API.
> >
> > The big question I now have is: What do we do with respect to the mitaka
> > freeze?
> >
> > Option 1: Implement as is, keep things consistent, fix them in Newton.
>
> The problem with Option 1 is that it's not fixable in Newton. It
> requires fixing for the next 3 releases as you have to deprecate out
> bits in paste.ini, make middleware warn for removal first soft, then
> hard, explain the config migration. Once this lands in the wild the
> unwind is very long and involved.
>
> Which is why I -1ed the patch. Because the fix in newton isn't a revert.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
Updates to defaults in paste.ini will cause pain whether it is 1, 2, or 3
cycles out, and will likely break deployments and make operators' lives
miserable. Putting config values in paste.ini (except in the case of Swift
when using middleware that relies on oslo.config, and it won't be in the
paste.ini by default) is going to cause pain and is generally a bad idea.

I am against "option 1". This could be a case where we classify it as a
release-blocking bug for Mitaka final (is it reasonable to have m3 with
the current scenario and final fixed?), which opens the timeline a bit
rather than being hard against feature freeze.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2016-02-18 Thread Clark Boylan
On Wed, Feb 10, 2016, at 09:52 AM, Sean M. Collins wrote:
> Ihar Hrachyshka wrote:
> > Also, I added some interface state dump for worlddump, and here is how the
> > main node networking setup looks like:
> > 
> > http://logs.openstack.org/59/265759/20/experimental/gate-grenade-dsvm-neutron-multinode/d64a6e6/logs/worlddump-2016-01-30-164508.txt.gz
> > 
> > br-ex: mtu = 1450
> > inside router: qg mtu = 1450, qr = 1450
> > 
> > So should be fine in this regard. I also set devstack locally enforcing
> > network_device_mtu, and it seems to pass packets of 1450 size through. So
> > it’s probably something tunneling packets to the subnode that fails for us,
> > not local router-to-tap bits.
> 
> Yeah! That's right. So is it the case that we need to do 1500 less the
> GRE overhead less the VXLAN overhead? So 1446? Since the traffic gets
> encapsulated in VXLAN then encapsulated in GRE (yo dawg, I heard u like
> tunneling).

Looks like you made further progress debugging the problems here, and the
metadata service is the culprit. But I want to point out that we
shouldn't be nesting tunnels here (at least not in a way that is exposed
to us; the underlying cloud could be doing whatever). br-int is the
Neutron-managed tunnel using VXLAN, and that is the only layer of
tunneling for br-int. br-ex is part of the devstack-gate-managed VXLAN
tunnel (formerly GRE, until new clouds started rejecting GRE packets) on
the DVR jobs, but not the normal multinode or grenade jobs, because the
DVR job is the only one with more than one router.

All that is to say: 1450 should be a sufficiently small MTU.

Clark
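
For reference, the arithmetic behind that 1450 figure for a single VXLAN
layer over IPv4, using the standard header sizes:

```python
# Standard IPv4 VXLAN encapsulation overhead, per encapsulated frame.
overhead = sum({
    "outer_ipv4": 20,
    "outer_udp": 8,
    "vxlan": 8,
    "inner_ethernet": 14,  # the tunnelled frame's own L2 header
}.values())

inner_mtu = 1500 - overhead
print(overhead, inner_mtu)  # -> 50 1450
```

A second nested tunnel is what would push the usable MTU below 1450; with
only one layer, 1450 fits.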

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-18 Thread Sean Dague
On 02/18/2016 12:17 PM, Michael Krotscheck wrote:
> Clarifying:
> 
> On Thu, Feb 18, 2016 at 2:32 AM Sean Dague wrote:
> 
> Ok, to make sure we all ended up on the same page at the end of this
> discussion, this is what I think I heard.
> 
> 1) oslo.config is about to release with a feature that will make adding
> config to paste.ini not needed (i.e.
> https://review.openstack.org/#/c/265415/ is no longer needed).
> 
> 
> I will need help to do this. More below.
>  
> 
> 2) ideally the cors middleware will have sane defaults for that set of
> headers in oslo.config.
> 
> 
> I'd like to make sure we agree on what "Sane defaults" means here. By
> design, the CORS middleware is generic, and only explicitly includes the
> headers prescribed in the w3c spec.  It should not include any
> additional headers, for reasons of downstream non-openstack consumers.
>  
> 
> 3) projects should be able to apply new defaults for these options in
> their codebase through a default override process (that is now nicely
> documented somewhere... url?)
> 
> 
> New sample defaults for the generated configuration files, they should
> not live anywhere else. The CORS middleware should, if we go this path,
> be little more than a true-to-spec implementation, with config files
> that extend it for the appropriate API.
> 
> The big question I now have is: What do we do with respect to the mitaka
> freeze?
> 
> Option 1: Implement as is, keep things consistent, fix them in Newton.

The problem with Option 1 is that it's not fixable in Newton. It
requires fixing for the next 3 releases as you have to deprecate out
bits in paste.ini, make middleware warn for removal first soft, then
hard, explain the config migration. Once this lands in the wild the
unwind is very long and involved.

Which is why I -1ed the patch. Because the fix in newton isn't a revert.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-18 Thread Sean Dague
On 02/18/2016 12:22 PM, Michael Krotscheck wrote:
> On Thu, Feb 18, 2016 at 9:07 AM Doug Hellmann wrote:
> 
> 
> If the deployer is only ever supposed to set the value to the default,
> why do we let them change it at all? Why isn't this just something the
> app sets?
> 
> 
> There was a specific request from the ironic team to not have headers be
> prescribed. If, for instance, ironic is deployed with an auth plugin
> that is not keystone, different allowed headers would be required.

Here is the future we're going to have.

Whatever the middleware does with no operator intervention will be how
the world will work, and how you will need to assume the world will work
going forward.

Right now, it appears that the default in the middleware is do nothing.
That means CORS won't be in a functional state on services by default.
However, I thought the point of the effort was that all the APIs in the
wild would be CORS enabled.

I'm not hugely sympathetic to defaulting to not having the Keystone
headers specified in the non keystone case. I get there are non keystone
cases, but keystone is defcore. Making the keystone case worse for the
non keystone case seems like fundamentally the wrong tradeoff.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][cinder] os-brick 1.0.0 release (mitaka)

2016-02-18 Thread no-reply
We are pumped to announce the release of:

os-brick 1.0.0: OpenStack Cinder brick library for managing local
volume attaches

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/os-brick

With package available at:

https://pypi.python.org/pypi/os-brick

Please report issues through launchpad:

http://bugs.launchpad.net/os-brick

For more details, please see below.

1.0.0
^^^^^


New Features
************

* Added vStorage protocol support for RemoteFS connections.


Bug Fixes
*********

* Improved multipath device handling.


Other Notes
***********

* Start using reno to manage release notes.

Changes in os-brick 0.8.0..1.0.0


508c339 Fix iSCSI Multipath
f3f3ce7 Add missing release notes
82cdb40 Lun id's > 255 should be converted to hex
4bdaba0 Updated from global requirements
6998adf Fix output returned from get_all_available_volumes
2b051f7 Raise exception in find_multipath_device
8f31639 Updated from global requirements
ba2100a Remove multipath -l logic from ISCSI connector
6b22d75 Add vzstorage protocol for remotefs connections
4b3dbdc Add reno for release notes management
40d95d8 Fix get_device_size with newlines
6c5490b Updated from global requirements

Diffstat (except docs and test files)
-

.gitignore |   3 +
os_brick/exception.py  |   4 +
os_brick/initiator/connector.py| 113 +
os_brick/initiator/linuxscsi.py|  22 +-
os_brick/version.py|  20 ++
.../add-vstorage-protocol-b536f4e21d764801.yaml|   3 +
.../multipath-improvements-596c2c6eadfba6ea.yaml   |   3 +
.../notes/start-using-reno-23e8d5f1a30851a1.yaml   |   3 +
releasenotes/source/_static/.placeholder   |   0
releasenotes/source/_templates/.placeholder|   0
releasenotes/source/conf.py| 276 +
releasenotes/source/index.rst  |   5 +
requirements.txt   |  14 +-
test-requirements.txt  |  15 +-
tox.ini|   3 +
18 files changed, 520 insertions(+), 87 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 8abb660..de2c245 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5,3 +5,3 @@
-pbr>=1.6
-Babel>=1.3
-eventlet>=0.17.4
+pbr>=1.6 # Apache-2.0
+Babel>=1.3 # BSD
+eventlet>=0.18.2 # MIT
@@ -11 +11 @@ oslo.serialization>=1.10.0 # Apache-2.0
-oslo.i18n>=1.5.0 # Apache-2.0
+oslo.i18n>=2.1.0 # Apache-2.0
@@ -13,2 +13,2 @@ oslo.service>=1.0.0 # Apache-2.0
-oslo.utils>=3.2.0 # Apache-2.0
-requests!=2.9.0,>=2.8.1
+oslo.utils>=3.4.0 # Apache-2.0
+requests!=2.9.0,>=2.8.1 # Apache-2.0
@@ -16 +16 @@ retrying!=1.3.0,>=1.2.3 # Apache-2.0
-six>=1.9.0
+six>=1.9.0 # MIT
diff --git a/test-requirements.txt b/test-requirements.txt
index d01fcbd..dece983 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -6,3 +6,4 @@ hacking<0.11,>=0.10.0
-coverage>=3.6
-python-subunit>=0.0.18
-sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
+coverage>=3.6 # Apache-2.0
+python-subunit>=0.0.18 # Apache-2.0/BSD
+reno>=0.1.1 # Apache2
+sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
@@ -11,4 +12,4 @@ oslotest>=1.10.0 # Apache-2.0
-testrepository>=0.0.18
-testscenarios>=0.4
-testtools>=1.4.0
-os-testr>=0.4.1
+testrepository>=0.0.18 # Apache-2.0/BSD
+testscenarios>=0.4 # Apache-2.0/BSD
+testtools>=1.4.0 # MIT
+os-testr>=0.4.1 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-operators] OpenStack rolling upgrade

2016-02-18 Thread Volodymyr Nykytiuk
Hi all,

We are discussing the creation of a tool to perform a scripted rolling upgrade 
of OpenStack from the Liberty to Mitaka release. We would like your input on 
which of the following feature areas are most important to you.

Here's a small questionnaire: http://goo.gl/forms/C5NXOxhLPU

Thanks for participating.
—
Vlad



[openstack-dev] [all] Please do *not* use git (and specifically "git log") when generating the docs

2016-02-18 Thread Thomas Goirand
Hi,

I've seen Reno doing it, and then some more projects. It's time I raised
the issue globally on this list before the epidemic spreads to the whole
of OpenStack! :)

The last occurrence I have found is in oslo.config (but please keep in
mind this message is for all projects), which has, in its doc/source/conf.py:

git_cmd = ["git", "log", "--pretty=format:'%ad, commit %h'",
           "--date=local", "-n1"]
html_last_updated_fmt = subprocess.check_output(git_cmd,
                                                stdin=subprocess.PIPE)

Of course, the .git folder is *NOT* available when building a package in
Debian (and more generally, in downstream distros). This means that this
kind of joke *will* break the build of the packages when they also build
the docs of your project. Consequently, the package maintainers have to
patch out the above lines from conf.py. It would be best if that weren't
necessary.

As a consequence, it is *not ok* to do "git log" anywhere in the sphinx
docs. Please keep this in mind.
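If a project really does want a build-time stamp, it should at least degrade gracefully when git is unavailable. A rough sketch (assuming Python 3; the cleanest fix remains not shelling out to git at all):

```python
import subprocess
from datetime import datetime

def last_updated():
    """Best-effort build stamp for Sphinx's html_last_updated_fmt.

    Falls back to the build date when git is missing or there is no
    .git directory, e.g. inside a distro package build.
    """
    try:
        return subprocess.check_output(
            ["git", "log", "--pretty=format:%ad, commit %h",
             "--date=local", "-n1"],
            stderr=subprocess.DEVNULL).decode()
    except (OSError, subprocess.CalledProcessError):
        # No git binary or not a git checkout: use the build date instead.
        return datetime.now().strftime("%Y-%m-%d")

html_last_updated_fmt = last_updated()
```

Distro package builds hit the except branch, so the docs still build without a .git folder.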

More generally, it is wrong to assume that even the git command is
present. For Mitaka b2, I had to add git as a build dependency on nearly
all server packages, otherwise they would FTBFS (fail to build from
source). This is plain wrong and makes no sense. I hope this can be
reverted somehow.

Thanks in advance for considering the above, and for trying to see things
from the package maintainer's perspective.
Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [nova][neutron] Update on os-vif progress (port binding negotiation)

2016-02-18 Thread Sean M. Collins
Jay Pipes wrote:
> From our Mirantis team, I've asked Sergey Belous to handle any necessary
> changes to devstack and project-config (for a functional test gate check
> job).

I'll keep an eye out in my DevStack review queue for these patches and
will make sure to review them promptly.

-- 
Sean M. Collins



Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-18 Thread Michael Krotscheck
On Thu, Feb 18, 2016 at 9:07 AM Doug Hellmann  wrote:

>
> If the deployer is only ever supposed to set the value to the default,
> why do we let them change it at all? Why isn't this just something the
> app sets?


There was a specific request from the ironic team to not have headers be
prescribed. If, for instance, ironic is deployed with an auth plugin that
is not keystone, different allowed headers would be required.

Michael


Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-18 Thread Michael Krotscheck
Clarifying:

On Thu, Feb 18, 2016 at 2:32 AM Sean Dague  wrote:

> Ok, to make sure we all ended up on the same page at the end of this
> discussion, this is what I think I heard.
>
> 1) oslo.config is about to release with a feature that will make adding
> config to paste.ini not needed (i.e.
> https://review.openstack.org/#/c/265415/ is no longer needed).
>

I will need help to do this. More below.


> 2) ideally the cors middleware will have sane defaults for that set of
> headers in oslo.config.
>

I'd like to make sure we agree on what "sane defaults" means here. By
design, the CORS middleware is generic, and only explicitly includes the
headers prescribed in the W3C spec. It should not include any additional
headers, for the sake of downstream non-OpenStack consumers.


> 3) projects should be able to apply new defaults for these options in
> their codebase through a default override process (that is now nicely
> documented somewhere... url?)


The new sample defaults belong in the generated configuration files; they
should not live anywhere else. The CORS middleware should, if we go this
path, be little more than a true-to-spec implementation, with config files
that extend it for the appropriate API.
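To make the layering concrete, here is a rough sketch of the pattern (invented names, not the real oslo.config API — oslo.config's actual hook for this is its set_defaults mechanism): app-level defaults sit between the library's true-to-spec defaults and whatever the deployer writes in the config file:

```python
# Hypothetical sketch of layered defaults; these names are invented
# for illustration, not the real oslo.config API.
SPEC_HEADERS = ["Content-Type", "Cache-Control"]  # generic, per-spec defaults

def effective_option(deployer_conf, app_defaults, key, lib_default):
    """Resolve an option: the deployer's value wins, then the app's
    default, then the library's true-to-spec default."""
    if key in deployer_conf:
        return deployer_conf[key]
    return app_defaults.get(key, lib_default)

# An API service extends the generic header list for its own needs,
# e.g. to allow an auth token header.
app_defaults = {"allow_headers": SPEC_HEADERS + ["X-Auth-Token"]}

# Deployer config sets nothing -> the app-level default applies.
print(effective_option({}, app_defaults, "allow_headers", SPEC_HEADERS))

# Deployer overrides explicitly -> their value wins.
print(effective_option({"allow_headers": ["X-Custom"]},
                       app_defaults, "allow_headers", SPEC_HEADERS))
```

The point of the sample-config approach is that the middle layer shows up in the generated sample file, so operators can see (and knowingly override) the value the app relies on.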

The big question I now have is: What do we do with respect to the mitaka
freeze?

Option 1: Implement as is, keep things consistent, fix them in Newton.

Option 2: Try to fix it in Mitaka.
This requires patches against Heat, Nova, Aodh, Ceilometer, Keystone,
Mistral, Searchlight, Designate, Manila, Barbican, Congress, Neutron,
Cinder, Magnum, Sahara, Trove, Murano, Glance, Cue, Kite, Solum, Ironic.
These patches have to land after the next oslo release has made it into
global requirements, and requires the +2's of the appropriate cores.

I will need help, both to write and land those patches. We're super tight
against feature freeze, and I'm currently overcommitted with the Ironic and
Horizon midcycles (this week and next). I also have an infant at home, with
no daycare, so I cannot work long hours to make this happen.

I feel that I can commit to landing 5 of the 22 required patches. If I
cannot get support for the remaining 17, we risk having an inconsistent
implementation, in which case Option 1 is preferred.

Who's willing to help?

Michael


Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-18 Thread Doug Hellmann
Excerpts from Michael Krotscheck's message of 2016-02-18 16:53:13 +:
> On Wed, Feb 17, 2016 at 2:29 PM Doug Hellmann  wrote:
> 
> >
> > That change only affects sample files and documentation. It has been
> > possible for applications to override config defaults for ages. Were we
> > blocked on making effective use of that because of the doc issue for a
> > long time?
> 
> 
> Sample files and documentation are where we want this. If we override the
> defaults, and then a config file is modified to again overwrite those
> defaults, then suddenly a magic internal value that is presumably required
> for the feature to function properly disappears. What we're looking to do
> is guide operators and deployers toward knowing what should live in their
> config file for the feature to work properly. Hence, the change you created
> is what we need.
> 
> Michael

If the deployer is only ever supposed to set the value to the default,
why do we let them change it at all? Why isn't this just something the
app sets?

Doug



Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-18 Thread Michael Krotscheck
On Wed, Feb 17, 2016 at 2:29 PM Doug Hellmann  wrote:

>
> That change only affects sample files and documentation. It has been
> possible for applications to override config defaults for ages. Were we
> blocked on making effective use of that because of the doc issue for a
> long time?


Sample files and documentation are where we want this. If we override the
defaults, and then a config file is modified to again overwrite those
defaults, then suddenly a magic internal value that is presumably required
for the feature to function properly disappears. What we're looking to do
is guide operators and deployers toward knowing what should live in their
config file for the feature to work properly. Hence, the change you created
is what we need.

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade

2016-02-18 Thread Sean M. Collins
This week's update:

Armando was kind enough to take a look[1], since he's got a fresh
perspective. I think I've been suffering from Target Fixation[2],
where I failed to notice a couple of other failures in the logs.

For example - during the SSH test into the instances, we are able to get
a full SSH handshake and offer up the SSH key, however authentication
fails[3], apparently due to the fact that the instance is not successful
in contacting the metadata service and getting the SSH public key[4].

So, I think the next bit of work is to track down why the metadata
service isn't functioning properly. We pinged Matt Riedemann about one
error we saw over in the nova metadata service, however he had seen it
before us and already wrote a fix[5].

That's the status of where things stand. Metadata service being broken,
and also still MTU issues lurking in the background.

[1]: 
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2016-02-18.log.html#t2016-02-18T00:26:29
[2]: https://en.wikipedia.org/wiki/Target_fixation
[3]: 
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2016-02-18.log.html#t2016-02-18T01:18:32
[4]: 
http://logs.openstack.org/78/279378/9/experimental/gate-grenade-dsvm-neutron-multinode/40a5659/console.html#_2016-02-17_22_37_33_277
[5]: https://review.openstack.org/#/c/279721/
-- 
Sean M. Collins



Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-18 Thread Sean McGinnis
On Thu, Feb 18, 2016 at 03:38:39PM +, D'Angelo, Scott wrote:
> Cinder team is proposing to add support for API microversions [1]. It came up 
> at our mid-cycle that we should add a new /v3 endpoint [2]. Discussions on 
> IRC have raised questions about this [3]
> 
> Please weigh in on the design decision to add a new /v3 endpoint for Cinder 
> for clients to use when they wish to have api-microversions.
> 
> PRO add new /v3 endpoint: A client should not ask for new-behaviour against 
> old /v2 endpoint, because that might hit an old pre-microversion (i.e. 
> Liberty) server, and that server might carry on with old behaviour. The 
> client would not know this without checking, and so strange things happen 
> silently.

The concern here is not just that "strange things could happen
silently". Even if the client is checking the response for reported
microversion support, by the time it realizes it's talking to a server
that does not understand microversions, the request could have caused
something to happen that it can't easily recover from.

> It is possible for the client to check the response from the server, but this 
> requires an extra round trip.
> It is possible to implement some type of caching of supported 
> (micro-)version, but not all clients will do this.
> The basic argument is that continuing to use the /v2 endpoint either requires an 
> extra round trip for each request (absent caching), meaning a performance slow-down, 
> or the possibility of unnoticed errors.
> 
> CON add new endpoint:
> Downstream cost of changing endpoints is large. It took ~3 years to move from 
> /v1 -> /v2 and we will have to support the deprecated /v2 endpoint forever.
> If we add microversions with /v2 endpoint, old scripts will keep working on 
> /v2 and they will continue to work.
> We would assume that people who choose to use microversions will check that 
> the server supports it.
> 
> Scottda
> 
> [1] https://etherpad.openstack.org/p/cinder-api-microversions
> [2] https://www.youtube.com/watch?v=tfEidbzPOCc around 1:20
> [3] 
> http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/%23openstack-cinder.2016-02-18.log.html
>   around 13:17
> 
> 
> 



Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-18 Thread Flavio Percoco

On 16/02/16 19:17 +, Sean M. Collins wrote:

Doug Hellmann wrote:

Is there? I thought the point was OpenCDN isn't actually usable. Maybe
someone from the Poppy team can provide more details about that.


That is certainly a problem. However I think I would lean on Sean
Dague's argument about how Neutron had an open source solution that
needed a lot of TLC. The point being that at least they had 1 option.
Not zero options.

And Dean's point about GCE and AWS API translation into OpenStack
Compute is also very relevant. We have precedent for API translation
layers that take some foreign API and translate it into
"openstackanese".

I think Poppy would have a lot easier time getting into OpenStack were
it to take the steps to build a back-end that would do the required
operations to create a CDN - using a multi-region OpenStack cloud. Or
even adopting an open source CDN. Something! Anything really!

Yes, it's a lot of work, but without that, as I think others have
stated - where's the OpenStack part?



That's not Poppy's business, fwiw. We can't ask a provisioning project to also
be in the business of providing a data API. As others have mentioned, it's just
unfortunate that there's no open source solution for CDNs. TBH, I'd rather have
Poppy not running functional tests (because this is basically what this
discussion is coming down to) than having the team working on a
half-implemented, kinda CDN hack just to make the CI happy.

If someone wants to work on a CDN service, fine. That sounds awesome but let's
not push the Poppy team down that road. They have a clear goal and mission.
OpenStack's requirements are a bit too narrow for them.

That said, as Monty mentioned in the TC meeting, deploying CDNs is not
necessarily something a cloud wants to do. Providing a service that
provisions CDNs is more likely to be useful to a cloud provider.

Cheers,
Flavio



Like that Wendy's commercial from way back: "Where's the beef?"

--
Sean M. Collins



--
@flaper87
Flavio Percoco




Re: [openstack-dev] [fuel] Supporting multiple Openstack versions

2016-02-18 Thread Alex Schultz
On Thu, Feb 18, 2016 at 4:00 AM, Aleksandr Didenko 
wrote:

> > Given the requirements to be able to use new features in fuel, with an
> older version of OpenStack, what alternative would you propose?
>
> For example, it's possible to use existing "release" functionality in Fuel
> (release contains granular tasks configuration, puppet modules and
> manifests, configuration data). So after upgrade from 8.0 to 9.0 it will
> look like this [0] - with separate composition layer for every supported
> "release".
>
> > We should allow a user to specify that they want a build a cloud using X
> fuel release to deploy Y os with Z OpenStack release.
>
> [0] should work for this as well. But the number of X-Y-Z combinations
> will be limited. Well, it will be limited in any case, I don't think that
> it's possible to support unlimited number of OpenStack versions in a single
> Fuel release.
>
>
I agree it should not be unlimited, but it should be greater than the 1-1-1
we currently support. Since we push for upstream OpenStack puppet to support
the current release with best effort on current-1, I think being able to
support at least that should be doable.


> In case we want to use single composition layer for more than one
> openstack version, we need to resolve the following blockers:
> - Move everything except composition layer (top-scope manifests and other
> granular tasks) from fuel-library to their own repos. Otherwise we'll have
> OpenStack version conditionals in module manifests, providers and functions
> which would be a mess.
> - Refactor tasks upload/serialization in Nailgun
> - (?) Refactor configuration data serialization in Nailgun
>
> And still we'll have to add conditionals to puppet functions that rely on
> configuration data directly (like generate_network_config.rb). Or write
> some sort of data serialization in front of them in manifests. Or leave
> nailgun serialization based on installed version (which is almost the same
> as using separate composition layers [0]).
>
> In either case (separate releases or single composition layer) it will
> double CI load and testing efforts, because we need to CI/test new features
> and patches for 9.0+mitaka and 9.0+liberty.
>
> Regards,
> Alex
>
> [0] http://paste.openstack.org/show/487383/
>
>
> On Thu, Feb 18, 2016 at 9:31 AM, Bogdan Dobrelya 
> wrote:
>
>> On 17.02.2016 18:23, Bogdan Dobrelya wrote:
>> >> So we'll have tons of conditionals in composition layer, right? Even if
>> >> some puppet-openstack class have just one new parameter in new release,
>> >> then we'll have to write a conditional and duplicate class
>> declaration. Or
>> >> write complex parameters hash definitions/merges and use
>> >> create_resources(). The more releases we want to support the more
>> >> complicated composition layer will become. That won't make
>> contribution to
>> >> fuel-library easier and even can greatly reduce development speed.
>> Also are
>> >> we going to add new features to stable releases using this workflow
>> with
>> >> single composition layer?
>> >
>> > As I can see from an example composition [0], such code would be an
>> > unmaintainable burden for development and QA process. Next imagine a
>> > case for incompatible *providers* like network transformations - shall
>> > we put multiple if/case to the ruby providers as well?..
>> >
>> > That is not a way to go for a composition, sorry. While the idea may be
>> > doable, I agree, but perhaps another way.
>> >
>> > (tl;dr)
>> > By the way, this reminded me "The wrong abstraction" [1] article and
>> > discussion. I agree with the author and believe one should not group
>> > code (here it is versioned puppet modules & compositions) in a way which
>> > introduces abstractions (here a super-composition) with multiple
>> > if/else/case and hardcoded things to switch the execution flow based on
>> > version of things. Just keep code as is - partially duplicated by
>> > different releases in separate directories with separate modules and
>> > composition layers and think of better solutions please.
>> >
>> > There is also a nice comment: "...try to optimize my code around
>> > reducing state, coupling, complexity and code, in that order". I
>> > understood that like a set of "golden rules":
>> > - Make it coupled more tight to decrease (shared) state
>> > - Make it more complex to decrease coupling
>> > - Make it duplicated to decrease complexity (e.g. abstractions)
>> >
>> > (tl;dr, I mean it)
>> > So, bringing those here.
>> > - The shared state is perhaps the Nailgun's world view of all data and
>> > versioned serializers for supported releases, which know how to convert
>> > the only latest existing data to any of its supported previous versions.
>> > - Decoupling we do by putting modules with its compositions to different
>> > versioned /etc/puppet subdirectories. I'm not sure how do we decouple
>> > Nailgun serializers though.
>> > - Complexity is how we compose those modules / write logic of
>> serializers.
>> > - Du

[openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-18 Thread D'Angelo, Scott
Cinder team is proposing to add support for API microversions [1]. It came up 
at our mid-cycle that we should add a new /v3 endpoint [2]. Discussions on IRC 
have raised questions about this [3]

Please weigh in on the design decision to add a new /v3 endpoint for Cinder for 
clients to use when they wish to have api-microversions.

PRO add new /v3 endpoint: A client should not ask for new-behaviour against old 
/v2 endpoint, because that might hit an old pre-microversion (i.e. Liberty) 
server, and that server might carry on with old behaviour. The client would not 
know this without checking, and so strange things happen silently.
It is possible for the client to check the response from the server, but this 
requires an extra round trip.
It is possible to implement some type of caching of supported (micro-)version, 
but not all clients will do this.
The basic argument is that continuing to use the /v2 endpoint either requires an 
extra round trip for each request (absent caching), meaning a performance slow-down, 
or the possibility of unnoticed errors.
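The check itself is cheap; the cost is the extra GET of the endpoint's version document. A hedged sketch of the client-side decision (field names modeled on the version documents Nova/Cinder return, not verified against a live server):

```python
def supports_microversion(version_doc, wanted):
    """Return True if the server's version document advertises a
    microversion range covering `wanted` (e.g. "3.1")."""
    def parse(v):
        major, minor = v.split(".")
        return (int(major), int(minor))

    # Pre-microversion (e.g. Liberty) servers omit or blank these fields.
    lo = version_doc.get("min_version") or "0.0"
    hi = version_doc.get("version") or "0.0"
    if hi == "0.0":
        return False  # old server: no microversion support at all
    return parse(lo) <= parse(wanted) <= parse(hi)

# A Liberty-era server: empty version fields -> fall back to base behaviour.
print(supports_microversion({"min_version": "", "version": ""}, "3.1"))

# A server advertising 3.0..3.12 accepts a 3.1 request.
print(supports_microversion({"min_version": "3.0", "version": "3.12"}, "3.1"))
```

Absent caching, every microversioned request against /v2 would have to pay for this lookup first, which is the performance argument for a distinct /v3 endpoint.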

CON add new endpoint:
Downstream cost of changing endpoints is large. It took ~3 years to move from 
/v1 -> /v2 and we will have to support the deprecated /v2 endpoint forever.
If we add microversions with /v2 endpoint, old scripts will keep working on /v2 
and they will continue to work.
We would assume that people who choose to use microversions will check that the 
server supports it.

Scottda

[1] https://etherpad.openstack.org/p/cinder-api-microversions
[2] https://www.youtube.com/watch?v=tfEidbzPOCc around 1:20
[3] 
http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/%23openstack-cinder.2016-02-18.log.html
  around 13:17





[openstack-dev] [nova][neutron] Update on os-vif progress (port binding negotiation)

2016-02-18 Thread Jay Pipes

Hello Stackers,

Apologies for the delay in getting this update email out to everyone.

tl;dr
-

We are making good progress on the os-vif library and plugin system. We 
have immediate goals for Mitaka, all of which are in progress and on 
track to complete by Feature Freeze.


details
---

Background
==

Over the course of the last two years, a number of proposals were put 
forth to increase the velocity of development in Nova around new virtual 
interface types. Nova core contributors generally are not networking 
experts and the engineers familiar with network interface setup were 
frustrated with the slow pace of reviews in Nova whenever they wanted to 
add or extend functionality.


After much debate, we settled on a proposal to create an os-vif Python 
library that would allow these network-focused contributors to maintain 
and enhance the code for virtual interface plugging separate from Nova. 
VIF types would be loaded as stevedore plugins, enabling easier addition 
of new VIF types and enhancements of existing ones.


In addition to developer velocity, the creation of the os-vif library 
was a perfect opportunity for the Nova developers to clean up and 
standardize the object modeling currently in use to represent virtual 
interfaces and networking objects in Nova. The os_vif.objects.VIF object 
model uses *versioned* objects now, enabling a future structured 
evolution of the data interchange format between Nova and Neutron. This 
interchange of information is typically called "port binding 
negotiation" and has been the source of a lot of spaghetti code both in 
Nova and Neutron. The move to a versioned objects interchange format 
will dramatically reduce the lines of code needed for this port binding 
negotiation and simplify this part of Nova.
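As a toy illustration of why versioned objects help here (invented field names; the real machinery is oslo.versionedobjects), a sender can downgrade its serialization to whatever version the peer advertises:

```python
def _v(s):
    """Parse "1.10" -> (1, 10); a plain string compare would mis-order it."""
    return tuple(int(p) for p in s.split("."))

class VIF:
    """Toy versioned object, loosely modeled on oslo.versionedobjects.

    The fields and the downgrade rule are invented for illustration.
    """
    VERSION = "1.1"  # pretend 1.1 added `vif_name`; 1.0 lacked it

    def __init__(self, id, address, vif_name=None):
        self.id = id
        self.address = address
        self.vif_name = vif_name

    def obj_to_primitive(self, target_version=VERSION):
        """Serialize for interchange, omitting fields the peer's
        advertised version does not understand."""
        data = {"id": self.id, "address": self.address}
        if _v(target_version) >= _v("1.1"):
            data["vif_name"] = self.vif_name
        return {"versioned_object.version": target_version,
                "versioned_object.data": data}

vif = VIF("port-1", "fa:16:3e:00:00:01", vif_name="tap-port-1")
# A peer that only speaks 1.0 receives a primitive without the new field.
print(sorted(vif.obj_to_primitive("1.0")["versioned_object.data"]))
```

Compared with an untyped port:binding dict, both sides know exactly which fields a given version carries, so the interchange format can evolve without guesswork on either end.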


Goals
=

The team has the following goals for Mitaka:

* Ensuring the os-vif library is complete and fully unit-tested
* Ensuring the reference plugin implementations (OVS and LinuxBridge) 
are in the os-vif code tree and also fully unit-tested
* Replacing the plug() and unplug() code in Nova for the OVS and 
LinuxBridge VIF types
* Submitting changes to devstack and project-config that install os-vif, 
enable the OVS and LinuxBridge plugins, and run a full set of Tempest 
functional tests to validate the interactions between Nova and os-vif


For Newton, we will be pushing for the following:

* Blueprint and implementation of changes in Neutron to send a set of 
serialized os_vif.objects.VIF objects back to Nova instead of the 
current port:binding mess (this is what is referred to as the "port 
binding negotiation process")
* Swapping out more of the non-LinuxBridge, non-OVS VIF type 
implementations in Nova's vif.py with calls to os_vif.[un]plug()

Full gate test coverage of more of the plugins

Progress


We have an os-vif-core team that has been reviewing and merging code in 
the os-vif main source repository [1]. Dan Berrange, Sean Mooney, Moshe 
Levi, and Sahid Ferdjaoui have done the bulk of the work in bringing the 
prototype library I had put together on GitHub into the OpenStack 
git.openstack.org repository. Thank you guys very much!


Dan has a work-in-progess patch to Nova [2] that replaces some of the 
code in nova/virt/libvirt/vif.py with calls to os_vif.


From our Mirantis team, I've asked Sergey Belous to handle any 
necessary changes to devstack and project-config (for a functional test 
gate check job).


Myself, I continue to do my best to get through code reviews for 
additions to the os-vif library.


If you are interested in joining the os-vif effort, please come find 
danpb, jaypipes, sahid, sean.k.mooney or MosheLevi on #openstack-nova or 
#openstack-neutron on Freenode IRC.


Thanks for reading,
-jay

[1] http://git.openstack.org/cgit/openstack/os-vif/tree
[2] https://review.openstack.org/#/c/269672/



Re: [openstack-dev] [trove] Start to port Trove to Python 3 in Mitaka cycle?

2016-02-18 Thread Amrith Kumar
Great, so in response to your email (below) and Flavio's email [1], I submit to 
you that the way to handle this is as we had discussed at earlier meeting(s) 
and that is to wait for Newton.

Thanks,

-amrith 

[1] http://openstack.markmail.org/thread/4uksb3kmhnagoc5a

> -Original Message-
> From: Victor Stinner [mailto:vstin...@redhat.com]
> Sent: Thursday, February 18, 2016 9:42 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [trove] Start to port Trove to Python 3 in
> Mitaka cycle?
> 
> Le 18/02/2016 14:15, Amrith Kumar a écrit :
> > Let's definitely discuss this again once you have all the changes that
> you feel should be merged for Mitaka ready.
> 
> I don't like working on long patch series. In my experience, after more
> than 4 patches, it's more expensive to maintain the patch series than to
> write the patches. So I prefer to work on a few patches, wait until they
> are merged, and then write the following patches.
> 
> I'm not going to write dozens of patches. I suggest we do as I did in the
> past: make progress with baby steps :-)
> 
> For example, my first change only touches the py34 test environment in
> tox.ini; it cannot break anything on Python 2, and it's enough to fix "tox
> -e py34". It is not in conflict with any other pending change.
> https://review.openstack.org/#/c/279098/
> 
>  From this point, we can add a voting gate to be able to validate the
> Python 3 changes that follow.
> 
> 
>  > What I would like to avoid is a dribble of changes where we don't
> know how much more we have coming down the pike.
> 
> You have to be prepared for dozens of small patches. It only depends on
> the size of your project (number of lines of code) :-)
> 
> To have an idea, you can see the Cinder blueprint which has an
> exhaustive list of all changes made for Python 3:
> https://blueprints.launchpad.net/cinder/+spec/cinder-python3
> 
> I counted 100 patches between June 2015 and February 2016.
> 
> FYI with all my pending patches for Cinder (only 4 changes remain), all
> unit tests will pass on Python 3!
> 
> It also gives you an idea of the time frame: it took me 9 months to port
> Cinder unit tests to Python 3. So more than a single OpenStack cycle (6
> months).
> 
> Since the port is long and painful, I would like to start as soon as
> possible :-)
> 
> 
>  > And while your changes may be "low risk", it does mean that if they
> merge now, the large feature sets that we have committed for this
> release will have to go through the cycle of merge conflicts, rebasing,
> code review, gate ... and so on.
> 
> The principle of technical debt is that the price only increases if you
> wait longer :-) Merging Python 3 today or tomorrow
> doesn't solve the problem of merge conflicts :-)
> 
> It's really up to you to decide to "open the gate" for the flow of
> Python 3 patches, it's also up to you to control how much Python 3
> changes will merged. I can only offer my help to port code. I don't feel
> able to decide when it's the best time to start porting Trove ;-)
> 
> By the way, Gerrit provides a great "Conflicts With" information! It
> also helps to decide if it's ok to merge a Python 3 change, or if it's
> better to focus on the other changes in conflict.
> 
> Victor
> 



Re: [openstack-dev] [ironic] midcycle voice channel is 7777

2016-02-18 Thread Sam Betts (sambetts)
Channel 7777 is working for the Thursday midcycle session, so we are moving
back to that channel.

Sam

On 17/02/2016 15:19, "Jim Rollenhagen"  wrote:

>So, someone has injected their hold music into 7778. We've now moved to
>7779, sorry for the trouble :(
>
>// jim
>
>On Wed, Feb 17, 2016 at 07:01:22AM -0800, Jim Rollenhagen wrote:
>> Hi,
>> 
>> We've moved the midcycle to channel 7778 on the infra conferencing
>> system - something is wrong with 7777 (no audio coming through).
>> 
>> /me lets infra know as well
>> 
>> // jim
>> 
>> 
>




Re: [openstack-dev] [all] [cinder] [glance] tenant vs. project

2016-02-18 Thread Morgan Fainberg
Not all clients are fully v3 compatible; this is the effort to move to
sessions, from keystoneclient sessions to keystoneauth sessions, and
os-client-config. Since this work has been slow, we are not 100% there yet,
but as Henrique said, the OpenStack client does support both consistently. If
devstack moves away from project-specific CLI use, it should be possible to
move away from the tenant variables.
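A hedged sketch of the fallback logic a client could use so both the new and legacy variables keep working (illustrative only; keystoneauth and os-client-config implement this for real):

```python
import os

def resolve_project(environ=os.environ):
    # Prefer the new OS_PROJECT_NAME variable, falling back to the
    # deprecated OS_TENANT_NAME so older environments keep working.
    project = environ.get("OS_PROJECT_NAME") or environ.get("OS_TENANT_NAME")
    if not project:
        raise ValueError("You must provide OS_PROJECT_NAME "
                         "(or the deprecated OS_TENANT_NAME)")
    return project

# Legacy environment: only the old variable is set.
print(resolve_project({"OS_TENANT_NAME": "demo"}))    # demo
# The new variable wins when both are present.
print(resolve_project({"OS_PROJECT_NAME": "admin",
                       "OS_TENANT_NAME": "demo"}))    # admin
```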
On Feb 18, 2016 04:25, "Henrique Truta" 
wrote:

> Hi Sean,
>
> I don't think they're supposed to work with that. Both of those clients
> have their python API compatible with those variables you've commented. But
> the CLI should be used through OpenStack client. Just for an example,
> keystoneclient CLI does not support it, but OpenStack client supports
> keystone v3 operations. Shouldn't we move towards deprecating the CLI of
> individual clients in favor of OpenStack Client?
>
> Henrique
>
> Em qui, 18 de fev de 2016 às 09:05, Sean Dague  escreveu:
>
>> On 02/12/2016 07:01 AM, Sean Dague wrote:
>> > Ok... this is going to be one of those threads, but I wanted to try to
>> > get resolution here.
>> >
> > OpenStack is wildly inconsistent in its use of tenant vs. project. As
>> > someone that wasn't here at the beginning, I'm not even sure which one
>> > we are supposed to be transitioning from -> to.
>> >
>> > At a minimum I'd like to make all of devstack use 1 term, which is the
>> > term we're trying to get to. That will help move the needle.
>> >
>> > However, again, I'm not sure which one that is supposed to be (comments
>> > in various places show movement in both directions). So people with
>> > deeper knowledge here, can you speak up as to which is the deprecated
>> > term and which is the term moving forward.
>> >
>> >   -Sean
>>
>> So, as expected, there are snags in deleting TENANT variables in
>> devstack, namely in some of the clients.
>>
>> It appears that neither glance nor cinder client work with
>> OS_PROJECT_NAME, even though they say they do:
>>
>>
>> os1:~> set | grep ^OS_
>> OS_AUTH_URL=http://10.42.0.50:5000/v2.0
>> OS_CACERT=
>> OS_IDENTITY_API_VERSION=2.0
>> OS_NO_CACHE=1
>> OS_PASSWORD=pass
>> OS_PROJECT_NAME=demo
>> OS_REGION_NAME=RegionOne
>> OS_USERNAME=demo
>> OS_VOLUME_API_VERSION=2
>>
>> os1:~> cinder list
>> ERROR: You must provide a tenant_name, tenant_id, project_id or
>> project_name (with project_domain_name or project_domain_id) via
>> --os-tenant-name (env[OS_TENANT_NAME]),  --os-tenant-id
>> (env[OS_TENANT_ID]),  --os-project-id (env[OS_PROJECT_ID])
>> --os-project-name (env[OS_PROJECT_NAME]),  --os-project-domain-id
>> (env[OS_PROJECT_DOMAIN_ID])  --os-project-domain-name
>> (env[OS_PROJECT_DOMAIN_NAME])
>>
>> os1:~> glance image-list
>> You must provide a project_id or project_name (with project_domain_name
>> or project_domain_id) via   --os-project-id (env[OS_PROJECT_ID])
>> --os-project-name (env[OS_PROJECT_NAME]),  --os-project-domain-id
>> (env[OS_PROJECT_DOMAIN_ID])  --os-project-domain-name
>> (env[OS_PROJECT_DOMAIN_NAME])
>>
>>
>> The existence of versions of these tools out there which don't support
>> OS_PROJECT_NAME will inhibit our attempts to move forward. Thoughts on
>> ways we can address this?
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>


Re: [openstack-dev] [trove] Start to port Trove to Python 3 in Mitaka cycle?

2016-02-18 Thread Victor Stinner

Le 18/02/2016 14:15, Amrith Kumar a écrit :

Let's definitely discuss this again once you have all the changes that you feel 
should be merged for Mitaka ready.


I don't like working on long patch series. In my experience, after more 
than 4 patches, it's more expensive to maintain the patch series than to 
write the patches. So I prefer to work on a few patches, wait until they are 
merged, and then write the following patches.


I'm not going to write dozens of patches. I suggest doing as I have done in 
the past: make progress with baby steps :-)


For example, my first change only changes the py34 test environment in 
tox.ini, it cannot break anything on Python 2, and it's enough to fix 
"tox -e py34". It is not in conflict with any other pending change.

https://review.openstack.org/#/c/279098/

From this point, we can add a voting gate to be able to validate 
subsequent Python 3 changes.



> What I would like to avoid is a dribble of changes where we don't
> know how much more we have coming down the pike.


You have to be prepared for dozens of small patches. It only depends on 
the size of your project (number of lines of code) :-)


To have an idea, you can see the Cinder blueprint which has an 
exhaustive list of all changes made for Python 3:

https://blueprints.launchpad.net/cinder/+spec/cinder-python3

I counted 100 patches between June 2015 and February 2016.

FYI with all my pending patches for Cinder (only 4 changes remain), all 
unit tests will pass on Python 3!


It also gives you an idea of the time frame: it took me 9 months to port 
Cinder unit tests to Python 3. So more than a single OpenStack cycle (6 
months).


Since the port is long and painful, I would like to start as soon as 
possible :-)



> And while your changes may be "low risk", it does mean that if they
> merge now, the large feature sets that we have committed for this
> release will have to go through the cycle of merge conflicts, rebasing,
> code review, gate ... and so on.


The principle of technical debt is that the price only increases the 
longer you wait :-) Merging Python 3 today or tomorrow 
doesn't solve the problem of merge conflicts :-)


It's really up to you to decide to "open the gate" for the flow of 
Python 3 patches; it's also up to you to control how many Python 3 
changes get merged. I can only offer my help to port code. I don't feel 
able to decide when it's the best time to start porting Trove ;-)


By the way, Gerrit provides great "Conflicts With" information! It 
also helps to decide if it's ok to merge a Python 3 change, or if it's 
better to focus on the other changes in conflict.


Victor



Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-18 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2016-02-18 05:29:44 -0500:
> Ok, to make sure we all ended up on the same page at the end of this
> discussion, this is what I think I heard.
> 
> 1) oslo.config is about to release with a feature that will make adding
> config to paste.ini not needed (i.e.
> https://review.openstack.org/#/c/265415/ is no longer needed).

The new feature makes it possible for applications to override the
defaults *shown in the sample config*. It was always possible to
override them at runtime.

> 
> 2) ideally the cors middleware will have sane defaults for that set of
> headers in oslo.config.

I thought the point was that the defaults needed to change based on the
application. So the middleware needs a public API to set those defaults
(something like the set_defaults function in oslo.log [1]), and then
applications that want to change the defaults need to call the new
function.

[1] http://git.openstack.org/cgit/openstack/oslo.log/tree/oslo_log/log.py#n247

> 
> 3) projects should be able to apply new defaults for these options in
> their codebase through a default override process (that is now nicely
> documented somewhere... url?)

http://docs.openstack.org/developer/oslo.config/generator.html#modifying-defaults-from-other-namespaces

It's a hook mechanism that relies on the app calling the existing public
APIs in libraries such as the middleware to change the defaults.
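The pattern Doug describes — a library exposing a public function that the application calls to override option defaults before the options are registered — can be sketched roughly like this (the names are illustrative, not the real oslo.config or CORS-middleware API):

```python
# Library side: defaults plus a public set_defaults() hook.
_DEFAULTS = {"allow_headers": ["X-Auth-Token"]}

def set_defaults(**kwargs):
    # Called by the application at startup to override library defaults.
    for name, value in kwargs.items():
        if name not in _DEFAULTS:
            raise KeyError("unknown option: %s" % name)
        _DEFAULTS[name] = value

def register_opts(conf):
    # The library registers its options using the (possibly overridden)
    # defaults; here conf is just a dict standing in for a ConfigOpts.
    for name, value in _DEFAULTS.items():
        conf.setdefault(name, value)

# Application side: override the default, then let the library register.
set_defaults(allow_headers=["X-Auth-Token", "X-Subject-Token"])
conf = {}
register_opts(conf)
print(conf["allow_headers"])  # ['X-Auth-Token', 'X-Subject-Token']
```

The override also shows up in the generated sample config, since the sample generator calls the same registration path.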

> 
> If I got any of that wrong, please let me know.
> 
> -Sean
> 



Re: [openstack-dev] [Fuel] Wildcards instead of

2016-02-18 Thread Igor Kalnitsky
Hey Kyrylo,

As was mentioned in the review: you're about to break roles defined
by plugins. That's not a good move, I believe.

Regarding the 'exclude' directive, I have no idea what you're talking
about. We don't support it now and, anyway, there should be no
difference between roles defined by plugins and core roles.
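For illustration only, wildcard matching with an exclude list — the general idea under discussion, not Fuel's actual task-format semantics — might behave like this:

```python
import fnmatch

def resolve_groups(pattern, groups, exclude=()):
    # Expand a wildcard group spec against the known groups, then drop
    # anything listed in 'exclude'. Illustrative sketch only.
    matched = [g for g in groups if fnmatch.fnmatch(g, pattern)]
    return [g for g in matched if g not in exclude]

# A plugin-defined role is silently swept up by "*" unless excluded.
groups = ["controller", "compute", "ceph-osd", "my-plugin-role"]
print(resolve_groups("*", groups, exclude=["my-plugin-role"]))
# ['controller', 'compute', 'ceph-osd']
```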

- Igor

On Thu, Feb 18, 2016 at 12:53 PM, Kyrylo Galanov  wrote:
> Hello,
>
> We are about to switch to wildcards instead of listing all groups in tasks
> explicitly [0].
> This change should make the deployment process more obvious for developers.
> However, it might lead to confusion when new groups are added either by
> a plugin or the fuel team in future.
>
> As mentioned by Bogdan, it is possible to use an 'exclude' directive to mitigate
> the risk.
> Any thoughts on the topic are appreciated.
>
>
> [0] https://review.openstack.org/#/c/273596/
>
> Best regards,
> Kyrylo
>



Re: [openstack-dev] [kolla][infra] Re: Patch submission for Kolla

2016-02-18 Thread Jeremy Stanley
On 2016-02-18 13:47:33 + (+), Steven Dake (stdake) wrote:
> I recommend contacting #openstack-infra on irc with your question.
> They are the folks that can get this solved for you.  Note I think
> the best way to update contact information is to login to
> openstack.org, and fill out your contact information there and use
> the save button.  This should automatically sync with gerrit.
[...]

Well, not exactly. You need to set up a foundation individual member
profile first and then submit contact information in Gerrit after
that using a matching E-mail address. The best advice is to check
the steps you've followed against
http://docs.openstack.org/infra/manual/developers.html#account-setup
and make sure you complete any steps you may have skipped and retry
any steps that failed in the order described in that document.

If that still doesn't help, some people have posted additional
troubleshooting suggestions at
https://ask.openstack.org/question/56720 .
-- 
Jeremy Stanley



[openstack-dev] [Neutron] - DVR L3 data plane performance results and scenarios

2016-02-18 Thread Gal Sagie
Hello All,

We have started to test Dragonflow [1] data plane L3 performance and were
wondering whether there are any results and scenarios published for the
current Neutron DVR that we can compare against, and to learn which
scenarios to test.

We mostly want to validate and understand if our results are accurate and
also join the
community in defining base standards and scenarios to test any solution out
there.

For that we also plan to join and contribute to openstack-performance [2]
efforts which to me
are really important.

Would love any results/information you can share; we are also interested in
control plane testing and API stress tests (whether using Rally or not).

Thanks
Gal.

[1]
http://docs.openstack.org/developer/dragonflow/distributed_dragonflow.html
[2] https://github.com/openstack/performance-docs


Re: [openstack-dev] [trove] Start to port Trove to Python 3 in Mitaka cycle?

2016-02-18 Thread Flavio Percoco

On 18/02/16 13:15 +, Amrith Kumar wrote:

Victor, thanks for the changes and the patch sets.

TL;DR: We've discussed this a couple of times already, once at a Trove 
meeting[1], once at length at the midcycle, and concluded that post-Mitaka is 
the right time to merge changes relative to Python 3. Once you have all the 
changes that you feel should be merged for Mitaka relative to Python 3, let us 
revisit for sure.

-- Longer version --

At this point in the development cycle, the intent is that we work on and 
submit code for accepted and committed projects for the Mitaka cycle, and bug 
fixes. Python 3 was not an accepted and committed project for Trove in the 
Mitaka cycle.

This is not the first time when a "low risk" change set for a project will be 
proposed and someone will want to have it included in the release even at this stage, and 
I don't believe that it will be the last time. For those who would like to work on the 
Python 3 port, I believe that like other multi-commit projects, they can cherry pick your 
code, or make their patches dependent on your changes. I don't believe that a failure to 
merge these into Mitaka would obstruct their ongoing development.

And while your changes may be "low risk", it does mean that if they merge now, 
the large feature sets that we have committed for this release will have to go through 
the cycle of merge conflicts, rebasing, code review, gate ... and so on.

We discussed this matter at some length at a Trove meeting [1], and we 
discussed it again at the mid-cycle. The comment you reference is the result of 
that discussion at the mid-cycle.

If I had my way, I'd rather hold any spare cycles available to get the project 
that we wanted in Mitaka (backup to Ceph [2]), which is currently in jeopardy 
of not making the Mitaka deadlines.

Let's definitely discuss this again once you have all the changes that you feel 
should be merged for Mitaka ready. What I would like to avoid is a dribble of 
changes where we don't know how much more we have coming down the pike. Once 
the committed projects for Mitaka have been merged, it may be reasonable to 
take all of these changes in one set.



My experience from other projects is that py3 patches will come and they'll keep
coming until the gate is made voting. Requesting the folks working on the py3
port to get all the patches ready before doing proper reviews adds a significant
amount of work to the team.

In Glance, Py3 patches have always been small and they have not introduced other
issues (at least that I can remember).

The above is not to ask the Trove team to change the project's priorities but
just to provide feedback from other projects. I do recommend, however, making
this a priority for Newton if it doesn't make it into Mitaka. The Py3 effort has
been huge, and Python 3 is becoming more and more important to support.

Flavio



-amrith

--
Amrith Kumar, CTO   | amr...@tesora.com
Tesora, Inc | @amrithkumar
125 CambridgePark Drive, Suite 400  | http://www.tesora.com
Cambridge, MA. 02140| GPG: 0x5e48849a9d21a29b

[1] 
https://wiki.openstack.org/wiki/Trove/MeetingAgendaHistory#Trove_Meeting.2C_Jan_20.2C_2016
[2] https://review.openstack.org/#/c/256057/




-Original Message-
From: Victor Stinner [mailto:vstin...@redhat.com]
Sent: Thursday, February 18, 2016 7:20 AM
To: OpenStack Development Mailing List (not for usage questions)

Subject: [openstack-dev] [trove] Start to port Trove to Python 3 in Mitaka
cycle?

Hi,

When I began to work on porting Trove to Python 3, I was blocked by MySQL-
Python which is not compatible with Python 3. I tried a big change
replacing MySQL-Python with PyMySQL, since other OpenStack services also
moved to PyMySQL. But tests fail and I'm unable to fix them :-/
https://review.openstack.org/#/c/225915/

Recently, I noticed that the dependency is now skipped on Python 3 (thanks
to env markers in requirements.txt), and so "tox -e py34" is able to
create the test environment.

So I abandoned my PyMySQL change (I will reopen it later) and started new
simpler patches following the plan of my Python 3 blueprint for Trove:
https://blueprints.launchpad.net/trove/+spec/trove-python3

In short:

(1) fix the Python 3 gate
(2) make the Python 3 gate voting
(3) port more and more unit tests

My patches:

trove: "Add a minimal py34 test environment"
https://review.openstack.org/#/c/279098/
=> fix "tox -e py34", start with a whitelist of the 3 most basic unit
tests

trove: "Port test_template unit test to Python 3"
https://review.openstack.org/#/c/279119/
=> port another unit test

openstack-infra/project-config: "Add non-voting gate-trove-python34 check"
https://review.openstack.org/#/c/279108/


IMHO these changes are simple and the risk of regression is low, but
amrith wrote me "thanks for your change set but per last trove meeting, I
think this should wait till mitaka is done, and we can pick it up early in
newton

[openstack-dev] [Kuryr] - IRC Meeting (2/23) - 0300 UTC (#openstack-meeting-4)

2016-02-18 Thread Gal Sagie
Hello All,

We will have an IRC meeting on 2/23 at 0300 UTC
in #openstack-meeting-4

Please review the expected meeting agenda here:
https://wiki.openstack.org/wiki/Meetings/Kuryr

You can view last meeting action items and logs here:
http://eavesdrop.openstack.org/meetings/kuryr/2016/kuryr.2016-02-15-15.00.html

Please update the agenda if you have any subject you would like to discuss.

Thanks
Gal.


[openstack-dev] [kolla][infra] Re: Patch submission for Kolla

2016-02-18 Thread Steven Dake (stdake)
Sebastien,

Cool that will be a nice feature :)

I recommend contacting #openstack-infra on irc with your question.  They are 
the folks that can get this solved for you.  Note I think the best way to 
update contact information is to login to openstack.org, and fill out your 
contact information there and use the save button.  This should automatically 
sync with gerrit.

Note gerrit just went through an upgrade, so it's possible the update contact 
information feature malfunctions sometimes in gerrit.  I just don't know for 
certain.

I have copied openstack-dev in case openstack-infra has any ideas there.

Regards
-steve

From: mailto:sfu...@emmene-moi.fr>> on behalf of 
Sebastien Fuchs mailto:sebast...@emmene-moi.fr>>
Date: Thursday, February 18, 2016 at 4:24 AM
To: Steven Dake mailto:std...@cisco.com>>
Subject: Patch submission for Kolla

Hi Steven,

Sorry to bother you with my email but I'd like to submit the included patch for 
review but I can't.

I followed the different docs, created blueprint 
(https://blueprints.launchpad.net/kolla/+spec/external-ceph) to attach to the 
review, checked out a branch from stable/liberty (new branch 
"spec/external-ceph") but I still get an error after "git review":
fatal: ICLA contributor agreement requires current contact information.

Please review your contact information:

  https://review.openstack.org/#/settings/contact


fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

Unfortunately I can't update my contact information on gerrit 
(https://review.openstack.org/#/settings/contact) : I get an error (Server 
Error - Cannot store contact information).

Is there something missing?

Very sorry again. Don't hesitate to forward me to someone else if needed.

All the best
Sebastien Fuchs
CEO
Emmene-moi SARL
34 pl du Marché Saint Honoré
75001 Paris
0663098180








Re: [openstack-dev] [openstack-ansible] : Steps to upgrade the current setup from Kilo to Liberty

2016-02-18 Thread Major Hayden
On 02/18/2016 04:02 AM, Sharma Swati6 wrote:
> I have followed the following steps-
> ./Scripts/teardown.sh
> Git checkout 12.0.6 (liberty)
> ran setup-hosts.yml*(FACING ISSUES HERE)*

Hello Sharma,

Could you give us the exact command you ran and the error output that you 
received?  That should help us figure out whether it's a problem in Ansible or 
within your OS configuration.

--
Major Hayden



Re: [openstack-dev] [trove] Start to port Trove to Python 3 in Mitaka cycle?

2016-02-18 Thread Amrith Kumar
Victor, thanks for the changes and the patch sets.

TL;DR: We've discussed this a couple of times already, once at a Trove 
meeting[1], once at length at the midcycle, and concluded that post-Mitaka is 
the right time to merge changes relative to Python 3. Once you have all the 
changes that you feel should be merged for Mitaka relative to Python 3, let us 
revisit for sure.

-- Longer version --

At this point in the development cycle, the intent is that we work on and 
submit code for accepted and committed projects for the Mitaka cycle, and bug 
fixes. Python 3 was not an accepted and committed project for Trove in the 
Mitaka cycle.

This is not the first time when a "low risk" change set for a project will be 
proposed and someone will want to have it included in the release even at this 
stage, and I don't believe that it will be the last time. For those who would 
like to work on the Python 3 port, I believe that like other multi-commit 
projects, they can cherry pick your code, or make their patches dependent on 
your changes. I don't believe that a failure to merge these into Mitaka would 
obstruct their ongoing development.

And while your changes may be "low risk", it does mean that if they merge now, 
the large feature sets that we have committed for this release will have to go 
through the cycle of merge conflicts, rebasing, code review, gate ... and so on.

We discussed this matter at some length at a Trove meeting [1], and we 
discussed it again at the mid-cycle. The comment you reference is the result of 
that discussion at the mid-cycle.

If I had my way, I'd rather hold any spare cycles available to get the project 
that we wanted in Mitaka (backup to Ceph [2]), which is currently in jeopardy 
of not making the Mitaka deadlines.

Let's definitely discuss this again once you have all the changes that you feel 
should be merged for Mitaka ready. What I would like to avoid is a dribble of 
changes where we don't know how much more we have coming down the pike. Once 
the committed projects for Mitaka have been merged, it may be reasonable to 
take all of these changes in one set.

-amrith

--
Amrith Kumar, CTO   | amr...@tesora.com
Tesora, Inc | @amrithkumar
125 CambridgePark Drive, Suite 400  | http://www.tesora.com
Cambridge, MA. 02140| GPG: 0x5e48849a9d21a29b 

[1] 
https://wiki.openstack.org/wiki/Trove/MeetingAgendaHistory#Trove_Meeting.2C_Jan_20.2C_2016
[2] https://review.openstack.org/#/c/256057/



> -Original Message-
> From: Victor Stinner [mailto:vstin...@redhat.com]
> Sent: Thursday, February 18, 2016 7:20 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] [trove] Start to port Trove to Python 3 in Mitaka
> cycle?
> 
> Hi,
> 
> When I began to work on porting Trove to Python 3, I was blocked by MySQL-
> Python which is not compatible with Python 3. I tried a big change
> replacing MySQL-Python with PyMySQL, since other OpenStack services also
> moved to PyMySQL. But tests fail and I'm unable to fix them :-/
> https://review.openstack.org/#/c/225915/
> 
> Recently, I noticed that the dependency is now skipped on Python 3 (thanks
> to env markers in requirements.txt), and so "tox -e py34" is able to
> create the test environment.
> 
> So I abandoned my PyMySQL change (I will reopen it later) and started new
> simpler patches following the plan of my Python 3 blueprint for Trove:
> https://blueprints.launchpad.net/trove/+spec/trove-python3
> 
> In short:
> 
> (1) fix the Python 3 gate
> (2) make the Python 3 gate voting
> (3) port more and more unit tests
> 
> My patches:
> 
> trove: "Add a minimal py34 test environment"
> https://review.openstack.org/#/c/279098/
> => fix "tox -e py34", start with a whitelist of the 3 most basic unit
> tests
> 
> trove: "Port test_template unit test to Python 3"
> https://review.openstack.org/#/c/279119/
> => port another unit test
> 
> openstack-infra/project-config: "Add non-voting gate-trove-python34 check"
> https://review.openstack.org/#/c/279108/
> 
> 
> IMHO these changes are simple and the risk of regression is low, but
> amrith wrote me "thanks for your change set but per last trove meeting, I
> think this should wait till mitaka is done, and we can pick it up early in
> newton."
> 
> I discussed with some Trove developers who are interested to start the
> Python 3 port right now. What do you think?
> 
> Maybe we can discuss that in the next Trove meeting?
> https://wiki.openstack.org/wiki/Meetings/TroveMeeting
> (Wednesdays at 18:00 UTC in #openstack-meeting-alt)
> 
> Oops, I just missed the meeting yesterday. I was too slow to write this
> email :-)
> 
> Victor
> 

Re: [openstack-dev] [all] [cinder] [glance] tenant vs. project

2016-02-18 Thread Henrique Truta
Hi Sean,

I don't think they're supposed to work with that. Both of those clients
have Python APIs compatible with the variables you mentioned, but the
CLI should be used through the OpenStack client. Just as an example, the
keystoneclient CLI does not support it, but the OpenStack client supports
keystone v3 operations. Shouldn't we move towards deprecating the CLIs of
individual clients in favor of the OpenStack Client?

Henrique

Em qui, 18 de fev de 2016 às 09:05, Sean Dague  escreveu:

> On 02/12/2016 07:01 AM, Sean Dague wrote:
> > Ok... this is going to be one of those threads, but I wanted to try to
> > get resolution here.
> >
> > OpenStack is wildly inconsistent in its use of tenant vs. project. As
> > someone that wasn't here at the beginning, I'm not even sure which one
> > we are supposed to be transitioning from -> to.
> >
> > At a minimum I'd like to make all of devstack use 1 term, which is the
> > term we're trying to get to. That will help move the needle.
> >
> > However, again, I'm not sure which one that is supposed to be (comments
> > in various places show movement in both directions). So people with
> > deeper knowledge here, can you speak up as to which is the deprecated
> > term and which is the term moving forward.
> >
> >   -Sean
>
> So, as expected, there are snags in deleting TENANT variables in
> devstack, namely in some of the clients.
>
> It appears that neither glance nor cinder client work with
> OS_PROJECT_NAME, even though they say they do:
>
>
> os1:~> set | grep ^OS_
> OS_AUTH_URL=http://10.42.0.50:5000/v2.0
> OS_CACERT=
> OS_IDENTITY_API_VERSION=2.0
> OS_NO_CACHE=1
> OS_PASSWORD=pass
> OS_PROJECT_NAME=demo
> OS_REGION_NAME=RegionOne
> OS_USERNAME=demo
> OS_VOLUME_API_VERSION=2
>
> os1:~> cinder list
> ERROR: You must provide a tenant_name, tenant_id, project_id or
> project_name (with project_domain_name or project_domain_id) via
> --os-tenant-name (env[OS_TENANT_NAME]),  --os-tenant-id
> (env[OS_TENANT_ID]),  --os-project-id (env[OS_PROJECT_ID])
> --os-project-name (env[OS_PROJECT_NAME]),  --os-project-domain-id
> (env[OS_PROJECT_DOMAIN_ID])  --os-project-domain-name
> (env[OS_PROJECT_DOMAIN_NAME])
>
> os1:~> glance image-list
> You must provide a project_id or project_name (with project_domain_name
> or project_domain_id) via   --os-project-id (env[OS_PROJECT_ID])
> --os-project-name (env[OS_PROJECT_NAME]),  --os-project-domain-id
> (env[OS_PROJECT_DOMAIN_ID])  --os-project-domain-name
> (env[OS_PROJECT_DOMAIN_NAME])
>
>
> The existence of versions of these tools out there which don't support
> OS_PROJECT_NAME will inhibit our attempts to move forward. Thoughts on
> ways we can address this?
>
> --
> Sean Dague
> http://dague.net
>


[openstack-dev] [trove] Start to port Trove to Python 3 in Mitaka cycle?

2016-02-18 Thread Victor Stinner

Hi,

When I began to work on porting Trove to Python 3, I was blocked by 
MySQL-Python which is not compatible with Python 3. I tried a big change 
replacing MySQL-Python with PyMySQL, since other OpenStack services also 
moved to PyMySQL. But tests fail and I'm unable to fix them :-/

https://review.openstack.org/#/c/225915/

Recently, I noticed that the dependency is now skipped on Python 3 
(thanks to env markers in requirements.txt), and so "tox -e py34" is 
able to create the test environment.
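Such environment markers look roughly like this in requirements.txt (an illustrative fragment; the exact entry in Trove's requirements may differ):

```
# requirements.txt (illustrative fragment)
# The marker makes pip skip the dependency when installing under Python 3.
MySQL-python ;python_version=='2.7'
```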


So I abandoned my PyMySQL change (I will reopen it later) and started 
new simpler patches following the plan of my Python 3 blueprint for Trove:

https://blueprints.launchpad.net/trove/+spec/trove-python3

In short:

(1) fix the Python 3 gate
(2) make the Python 3 gate voting
(3) port more and more unit tests
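Step (3) is mostly many small, mechanical fixes. A typical example (illustrative, not taken from Trove) is making the bytes/text split explicit so the same code runs on both Python 2 and 3:

```python
import tempfile, os

def read_config_text(path):
    # On Python 2, str is bytes, so implicit decoding hid the issue;
    # Python 3 makes the bytes/text split explicit, so decode explicitly.
    with open(path, "rb") as f:
        return f.read().decode("utf-8")

# Quick check with a throwaway file.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(u"max_connections = 100\n".encode("utf-8"))
tmp.close()
print(read_config_text(tmp.name))
os.unlink(tmp.name)
```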

My patches:

trove: "Add a minimal py34 test environment"
https://review.openstack.org/#/c/279098/
=> fix "tox -e py34", start with a whitelist of the 3 most basic unit tests

trove: "Port test_template unit test to Python 3"
https://review.openstack.org/#/c/279119/
=> port another unit test

openstack-infra/project-config: "Add non-voting gate-trove-python34 check"
https://review.openstack.org/#/c/279108/
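A "minimal py34 environment with a whitelist" could look roughly like the tox.ini sketch below (the module names are placeholders; the real change is in the 279098 review above):

```
# Hypothetical tox.ini sketch -- module names are illustrative only.
[testenv:py34]
commands =
    python -m testtools.run \
        trove.tests.unittests.common.test_template \
        trove.tests.unittests.common.test_utils
```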


IMHO these changes are simple and the risk of regression is low, but 
amrith wrote me "thanks for your change set but per last trove meeting, 
I think this should wait till mitaka is done, and we can pick it up 
early in newton."


I discussed with some Trove developers who are interested in starting the 
Python 3 port right now. What do you think?


Maybe we can discuss that in the next Trove meeting?
https://wiki.openstack.org/wiki/Meetings/TroveMeeting
(Wednesdays at 18:00 UTC in #openstack-meeting-alt)

Oops, I just missed the meeting yesterday. I was too slow to write this 
email :-)


Victor



Re: [openstack-dev] [all] [cinder] [glance] tenant vs. project

2016-02-18 Thread Sean Dague
On 02/12/2016 07:01 AM, Sean Dague wrote:
> Ok... this is going to be one of those threads, but I wanted to try to
> get resolution here.
> 
> OpenStack is wildly inconsistent in its use of tenant vs. project. As
> someone that wasn't here at the beginning, I'm not even sure which one
> we are supposed to be transitioning from -> to.
> 
> At a minimum I'd like to make all of devstack use 1 term, which is the
> term we're trying to get to. That will help move the needle.
> 
> However, again, I'm not sure which one that is supposed to be (comments
> in various places show movement in both directions). So people with
> deeper knowledge here, can you speak up as to which is the deprecated
> term and which is the term moving forward.
> 
>   -Sean

So, as expected, there are snags in deleting TENANT variables in
devstack, which is some of the clients.

It appears that neither the glance nor the cinder client works with
OS_PROJECT_NAME, even though they claim to:


os1:~> set | grep ^OS_
OS_AUTH_URL=http://10.42.0.50:5000/v2.0
OS_CACERT=
OS_IDENTITY_API_VERSION=2.0
OS_NO_CACHE=1
OS_PASSWORD=pass
OS_PROJECT_NAME=demo
OS_REGION_NAME=RegionOne
OS_USERNAME=demo
OS_VOLUME_API_VERSION=2

os1:~> cinder list
ERROR: You must provide a tenant_name, tenant_id, project_id or
project_name (with project_domain_name or project_domain_id) via
--os-tenant-name (env[OS_TENANT_NAME]),  --os-tenant-id
(env[OS_TENANT_ID]),  --os-project-id (env[OS_PROJECT_ID])
--os-project-name (env[OS_PROJECT_NAME]),  --os-project-domain-id
(env[OS_PROJECT_DOMAIN_ID])  --os-project-domain-name
(env[OS_PROJECT_DOMAIN_NAME])

os1:~> glance image-list
You must provide a project_id or project_name (with project_domain_name
or project_domain_id) via   --os-project-id (env[OS_PROJECT_ID])
--os-project-name (env[OS_PROJECT_NAME]),  --os-project-domain-id
(env[OS_PROJECT_DOMAIN_ID])  --os-project-domain-name
(env[OS_PROJECT_DOMAIN_NAME])


The existence of versions of these tools out there which don't support
OS_PROJECT_NAME will inhibit our attempts to move forward. Thoughts on
ways we can address this?
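One stopgap, sketched below under the assumption that the older clients
still honor the legacy OS_TENANT_* variables, would be for devstack's
openrc to mirror the new variable names onto the old ones. This helper
is hypothetical, not existing devstack code:

```python
# Hypothetical compatibility shim (not existing devstack code):
# older clients read OS_TENANT_NAME, newer ones read OS_PROJECT_NAME,
# so mirror the new variable onto the legacy one when it is unset.
def mirror_project_to_tenant(env):
    if env.get("OS_PROJECT_NAME") and not env.get("OS_TENANT_NAME"):
        env["OS_TENANT_NAME"] = env["OS_PROJECT_NAME"]
    return env

env = {"OS_PROJECT_NAME": "demo"}
print(mirror_project_to_tenant(env)["OS_TENANT_NAME"])  # demo
```

In shell terms the same idea is a guarded
`export OS_TENANT_NAME=$OS_PROJECT_NAME` in openrc.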

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Supporting multiple Openstack versions

2016-02-18 Thread Aleksandr Didenko
> Given the requirements to be able to use new features in fuel, with an
older version of OpenStack, what alternative would you propose?

For example, it's possible to use existing "release" functionality in Fuel
(release contains granular tasks configuration, puppet modules and
manifests, configuration data). So after upgrade from 8.0 to 9.0 it will
look like this [0] - with separate composition layer for every supported
"release".

> We should allow a user to specify that they want to build a cloud using X
fuel release to deploy Y os with Z OpenStack release.

[0] should work for this as well, but the number of X-Y-Z combinations will
be limited. It will be limited in any case; I don't think it's possible to
support an unlimited number of OpenStack versions in a single Fuel release.

In case we want to use a single composition layer for more than one OpenStack
version, we need to resolve the following blockers:
- Move everything except the composition layer (top-scope manifests and other
granular tasks) from fuel-library to their own repos. Otherwise we'll have
OpenStack version conditionals in module manifests, providers and functions,
which would be a mess.
- Refactor tasks upload/serialization in Nailgun
- (?) Refactor configuration data serialization in Nailgun

And still we'll have to add conditionals to puppet functions that rely on
configuration data directly (like generate_network_config.rb), or write some
sort of data serialization in front of them in manifests, or leave nailgun
serialization based on the installed version (which is almost the same as
using separate composition layers [0]).
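As a generic illustration of the conditional growth being described here
(made-up parameters, not actual fuel-library code): each release that
adds a class parameter forces a shared composition layer to branch or
merge per-release data.

```python
# Made-up parameters (not actual fuel-library code): a single
# composition layer must carry per-release data whenever a release
# adds a class parameter.
BASE_PARAMS = {"bind_host": "0.0.0.0"}
RELEASE_EXTRAS = {
    "liberty": {},
    # hypothetical parameter introduced in the newer release:
    "mitaka": {"enable_proxy_headers_parsing": True},
}

def class_params(release):
    # Merge release-specific extras over the shared base parameters.
    params = dict(BASE_PARAMS)
    params.update(RELEASE_EXTRAS[release])
    return params

print(sorted(class_params("mitaka")))
# ['bind_host', 'enable_proxy_headers_parsing']
```

Every supported release adds another entry (or conditional), which is
exactly the maintenance burden the separate-release approach avoids.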

In either case (separate releases or a single composition layer) it will
double the CI load and testing effort, because we need to CI/test new
features and patches for both 9.0+mitaka and 9.0+liberty.

Regards,
Alex

[0] http://paste.openstack.org/show/487383/


On Thu, Feb 18, 2016 at 9:31 AM, Bogdan Dobrelya 
wrote:

> On 17.02.2016 18:23, Bogdan Dobrelya wrote:
> >> So we'll have tons of conditionals in composition layer, right? Even if
> >> some puppet-openstack class has just one new parameter in a new release,
> >> then we'll have to write a conditional and duplicate the class declaration.
> Or
> >> write complex parameters hash definitions/merges and use
> >> create_resources(). The more releases we want to support the more
> >> complicated composition layer will become. That won't make contribution
> to
> >> fuel-library easier and even can greatly reduce development speed. Also
> are
> >> we going to add new features to stable releases using this workflow with
> >> single composition layer?
> >
> > As I can see from an example composition [0], such code would be an
> > unmaintainable burden for development and QA process. Next imagine a
> > case for incompatible *providers* like network transformations - shall
> > we put multiple if/case to the ruby providers as well?..
> >
> > That is not a way to go for a composition, sorry. While the idea may be
> > doable, I agree, but perhaps another way.
> >
> > (tl;dr)
> > By the way, this reminded me "The wrong abstraction" [1] article and
> > discussion. I agree with the author and believe one should not group
> > code (here it is versioned puppet modules & compositions) in a way which
> > introduces abstractions (here a super-composition) with multiple
> > if/else/case and hardcoded things to switch the execution flow based on
> > version of things. Just keep code as is - partially duplicated by
> > different releases in separate directories with separate modules and
> > composition layers and think of better solutions please.
> >
> > There is also a nice comment: "...try to optimize my code around
> > reducing state, coupling, complexity and code, in that order". I
> > understood that like a set of "golden rules":
> > - Make it coupled more tight to decrease (shared) state
> > - Make it more complex to decrease coupling
> > - Make it duplicated to decrease complexity (e.g. abstractions)
> >
> > (tl;dr, I mean it)
> > So, bringing those here.
> > - The shared state is perhaps the Nailgun's world view of all data and
> > versioned serializers for supported releases, which know how to convert
> > the only latest existing data to any of its supported previous versions.
> > - Decoupling we do by putting modules with its compositions to different
> > versioned /etc/puppet subdirectories. I'm not sure how do we decouple
> > Nailgun serializers though.
> > - Complexity is how we compose those modules / write logic of
> serializers.
> > - Duplication is puppet classes (and providers) with slightly different
> > call parameters from a version to version. Sometimes even not backwards
> > compatible. Probably same to the serializers?
> >
> > So, we're going to *increase complexity* by introducing
> > super-compositions for multi OpenStack releases. Not sure about what to
> > happen to the serializers, any volunteers to clarify an impact?. And the
> > Rules "allow" us to do so only in 

[openstack-dev] [Fuel] Wildcards instead of

2016-02-18 Thread Kyrylo Galanov
Hello,

We are about to switch to wildcards instead of listing all groups in tasks
explicitly [0].
This change should make the deployment process more obvious for developers.
However, it might lead to confusion when new groups are added in the future,
either by a plugin or by the Fuel team.

As mentioned by Bogdan, it is possible to use the 'exclude' directive to
mitigate the risk.
Any thoughts on the topic are appreciated.
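The include-everything-then-exclude semantics can be sketched
generically (illustrative only; the actual Fuel task syntax is in the
review [0]):

```python
import fnmatch

# Generic sketch of wildcard group matching with an exclude list
# (illustrative only; not the real Fuel task format).
def resolve_groups(all_groups, include_patterns, exclude=()):
    matched = set()
    for pattern in include_patterns:
        # fnmatch gives shell-style wildcard matching ('*', '?', etc.)
        matched.update(fnmatch.filter(all_groups, pattern))
    return sorted(matched - set(exclude))

groups = ["controller", "compute", "ceph-osd", "mongo"]
print(resolve_groups(groups, ["*"], exclude=["mongo"]))
# ['ceph-osd', 'compute', 'controller']
```

The upside is that newly added groups match automatically; the downside,
as noted above, is that they match even when that was not intended,
which is what the exclude list has to catch.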


[0] https://review.openstack.org/#/c/273596/

Best regards,
Kyrylo
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] intrinsic function bugfixes and hot versioning

2016-02-18 Thread Thomas Herve
On Wed, Feb 17, 2016 at 7:54 PM, Steven Hardy  wrote:
> Hi all,
>
> So, Zane and I have discussed $subject and it was suggested I take this to
> the list to reach consensus.
>
> Recently, I've run into a couple of small but inconvenient limitations in
> our intrinsic function implementations, specifically for str_replace and
> repeat, both of which did not behave the way I expected when referencing
> things via get_param/get_attr:

Disclaimer: compatibility is not black and white, especially in these
cases. We need to make decisions based on the impact we can imagine on
users, so it's certainly subjective. That said:

> https://bugs.launchpad.net/heat/+bug/1539737

I think it works fine as a bug fix.

> https://bugs.launchpad.net/heat/+bug/1546684

I agree that a new version would be better.

The main difference for me is that even if it's arguable, you could
build a working template relying on the current behavior (having a
template returned by a function).
If you find a way to keep the current behavior *and* have the one you
expect, then I can see it as a bug fix.

-- 
Thomas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] will we use os-vif in kuryr

2016-02-18 Thread Daniel P. Berrange
On Thu, Feb 18, 2016 at 09:01:35AM +, Liping Mao (limao) wrote:
> Hi Kuryr team,
> 
> I see couple of commits to add support for vif plug.
> https://review.openstack.org/#/c/280411/
> https://review.openstack.org/#/c/280878/
> 
> Do we have plan to use os-vif?
> https://github.com/openstack/os-vif

FYI, we're trying reasonably hard to *not* make any assumptions about
what compute or network services are using os-vif. That is, we want os-vif
as a framework to be usable from Nova, or any other compute manager,
and likewise be usable from Neutron or any other network manager.
Obviously the actual implementations may be different, but the general
os-vif framework tries to be agnostic.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] upgrade implications of lots of content in paste.ini

2016-02-18 Thread Sean Dague
Ok, to make sure we all ended up on the same page at the end of this
discussion, this is what I think I heard.

1) oslo.config is about to release with a feature that will make adding
config to paste.ini not needed (i.e.
https://review.openstack.org/#/c/265415/ is no longer needed).

2) ideally the cors middleware will have sane defaults for that set of
headers in oslo.config.

3) projects should be able to apply new defaults for these options in
their codebase through a default override process (that is now nicely
documented somewhere... url?)

If I got any of that wrong, please let me know.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron] publish and update Gerrit dashboard link automatically

2016-02-18 Thread Rossella Sblendido



On 02/17/2016 07:17 PM, Doug Wiegley wrote:

Results, updated hourly (bookmarkable, will redirect to gerrit):

http://104.236.79.17/
http://104.236.79.17/current
http://104.236.79.17/current-min


Nice, thanks a lot for looking into this!!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][stable][oslo] oslo.service 0.9.1 release (liberty)

2016-02-18 Thread Victor Stinner

Hi,


Le 17/02/2016 19:29, no-re...@openstack.org a écrit :
> We are chuffed to announce the release of:
>
> oslo.service 0.9.1: oslo.service library
> (...)
>
> Changes in oslo.service 0.9.0..0.9.1
> 
>
> 8b6e2f6 Fix race condition on handling signals
> eb1a4aa Fix a race condition in signal handlers

This release contains two major changes to fix race conditions in signal 
handling. Related bugs:


"Race condition in SIGTERM signal handler"
https://bugs.launchpad.net/oslo.service/+bug/1524907
=> "AssertionError: Cannot switch to MAINLOOP from MAINLOOP" error

"Failed to stop nova-api in grenade tests"
https://bugs.launchpad.net/nova/+bug/1538204
=> "oslo_service.threadgroup RuntimeError: dictionary changed size 
during iteration"


oslo.service 0.9.1 is now in upper-constraints.txt and so will be 
deployed on Liberty CIs:

https://review.openstack.org/#/c/280934/

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel][nailgun][volume-manager][fuel-agent] lvm metadata size value. why was it set to 64M?

2016-02-18 Thread Evgeniy L
Hi Alexander,

I was trying to trace the change and found a 3-year-old commit; yes, it's
hard to recover the reason [0].
So what we should ask is: what is the right way to calculate the lvm metadata
size, and then change this behaviour accordingly.

I would suggest at least explicitly setting the metadata size on the Nailgun
side to the same amount we have in the agent (until a better size is found),
plus explicitly reserving some space based on the optimal I/O size of the
specific disk.
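The interaction between the 64M reservation and PE-size rounding
described in the quoted report below can be shown with simple arithmetic
(made-up sizes, not fuel-agent's actual code):

```python
# Illustrative arithmetic (made-up sizes, not fuel-agent's code): why
# reserving 64M for metadata and then rounding each LV up to the
# physical-extent (PE) size can leave a volume group short of space.
PE = 4  # MiB, the usual LVM physical extent size

def round_up_to_pe(size_mib):
    # Round up to the nearest multiple of the PE size.
    return ((size_mib + PE - 1) // PE) * PE

vg_size = 51200          # MiB in the physical volume group
usable = vg_size - 64    # after the 64M metadata reservation
lvs = [15341, 35795]     # requested LV sizes, summing to exactly 51136
allocated = sum(round_up_to_pe(s) for s in lvs)
print(allocated - usable)  # 4 -- rounding alone overshoots the budget
```

This is the shortfall the fuel-agent workaround papers over by quietly
shaving 8M off the metadata reservation.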

Thanks,

[0]
https://github.com/Mirantis/fuelweb/commit/d4d14b528b76b8e9fcbca51d3047a3884792d69f
[1] https://www.redhat.com/archives/linux-lvm/2012-April/msg00024.html

On Wed, Feb 17, 2016 at 8:51 PM, Alexander Gordeev 
wrote:

> Hi,
>
> Apparently, nailgun assumes that lvm metadata size is always set to 64M [1]
>
> It seems that it has been defined this way since the early beginning of
> nailgun as a project, so it's impossible to figure out for what purpose it
> was done, as the early commit messages are not very informative.
>
> According to the documentation (man lvm.conf):
>
>   pvmetadatasize — Approximate number of sectors to set aside
> for each copy of the metadata. Volume groups with large numbers  of
> physical  or  logical  volumes,  or  volumes groups containing complex
> logical volume structures will need additional space for their metadata.
> The metadata areas are treated as circular buffers, so unused space becomes
> filled with an archive of the most recent previous versions of the metadata.
>
>
> The default value is set to 255 sectors. (128KiB)
>
> Quotation from particular lvm.conf sample:
> # Approximate default size of on-disk metadata areas in sectors.
> # You should increase this if you have large volume groups or
> # you want to retain a large on-disk history of your metadata changes.
>
> # pvmetadatasize = 255
>
>
> nailgun's volume manager calculates sizes of logical volumes within one
> physical volume group and takes into account the size of lvm metadata [2].
>
> However, because logical volume sizes get rounded to the nearest multiple
> of the PE size (which is usually 4M), fuel-agent always ends up with a lack
> of free space when creating logical volumes exactly in accordance with the
> partitioning scheme generated by the volume manager.
> Thus, tricky logic was added into fuel-agent [3] to bypass that flaw.
> Since 64M is way too big a value compared with the typical one, fuel-agent
> silently reduces the size of the lvm metadata by 8M, and then partitioning
> always goes smoothly.
>
> Consequently, almost every physical volume group retains only 4M of free
> space. That worked fine on good old HDDs.
>
> But when the time comes to use an FC/HBA/HW RAID block storage device
> which reports relatively huge values for the minimal and optimal I/O sizes
> exposed in sysfs, fuel-agent might end up with a lack of free space once
> again, due to logical volume alignments within the physical volume group
> [4]. Those alignments are done by LVM automatically with respect to those
> values [5].
>
> As I'm going to trade off some portion of reserved amount of disk space
> for storing lvm metadata for the sake of logical volume alignments, here're
> the questions:
>
> * why was lvm metadata set to 64M?
> * could someone shed more light on any obvious reasons/needs hidden behind
> that?
> * what is the minimal size of lvm metadata we'll be happy with?
> * the same question for the optimal size.
>
>
> [1]
> https://github.com/openstack/fuel-web/blob/6bd08607c6064e99ad2ed277b1c17d7b23b13c8a/nailgun/nailgun/extensions/volume_manager/manager.py#L824
> [2]
> https://github.com/openstack/fuel-web/blob/6bd08607c6064e99ad2ed277b1c17d7b23b13c8a/nailgun/nailgun/extensions/volume_manager/manager.py#L867-L875
> [3]
> https://github.com/openstack/fuel-agent/commit/c473202d4db774b0075b8d9c25f217068f7c1727
> [4] https://bugs.launchpad.net/fuel/+bug/1546049
> [5] http://people.redhat.com/msnitzer/docs/io-limits.txt
>
>
> Thanks,
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] : Steps to upgrade the current setup from Kilo to Liberty

2016-02-18 Thread Sharma Swati6
 Hi All,

Can anyone please guide me with the steps to upgrade my current 
openstack-ansible setup from Kilo to Liberty.

I have followed the following steps-
./Scripts/teardown.sh 
Git checkout 12.0.6 (liberty)
ran setup-hosts.yml (FACING ISSUES HERE)

Thanks & Regards
 Swati Sharma
 System Engineer
 Tata Consultancy Services
 Ground to 8th Floors, Building No. 1 & 2,
 Skyview Corporate Park, Sector 74A,NH 8
 Gurgaon - 122 004,Haryana
 India
 Cell:- +91-9717238784
 Mailto: sharma.swa...@tcs.com
 Website: http://www.tcs.com
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-18 Thread Nikola Đipanov
On 02/15/2016 09:27 AM, Sylvain Bauza wrote:
> 
> 
> Le 15/02/2016 06:21, Cheng, Yingxin a écrit :
>>
>> Hi,
>>
>>  
>>
>> I’ve uploaded a prototype https://review.openstack.org/#/c/280047/
>> to demonstrate its design goals
>> in accuracy, performance, reliability and compatibility improvements.
>> It will also be an Austin Summit Session if elected:
>> https://www.openstack.org/summit/austin-2016/vote-for-speakers/Presentation/7316
>>
>>
>>  
>>
>> I want to gather opinions about this idea:
>>
>> 1. Is this feature possible to be accepted in the Newton release?
>>
> 
> Such feature requires a spec file to be written
> http://docs.openstack.org/developer/nova/process.html#how-do-i-get-my-code-merged
> 
> Ideally, I'd like to see your below ideas written in that spec file so
> it would be the best way to discuss on the design.
> 
> 

I really cannot help but protest this!

There is actual code posted, and we go back and ask people to write
documents without even bothering to look at the code. That makes no
sense to me!

I'll go and comment on the proposed code:

https://review.openstack.org/#/c/280047/

Which has infinitely more information about the idea than a random text
document.

>> 2. Suggestions to improve its design and compatibility.
>>
> 
> I don't want to go into details here (that's rather the goal of the spec
> for that), but my biggest concerns would be when reviewing the spec :
>  - how this can meet the OpenStack mission statement (ie. ubiquitous
> solution that would be easy to install and massively scalable)
>  - how this can be integrated with the existing (filters, weighers) to
> provide a clean and simple path for operators to upgrade
>  - how this can be supporting rolling upgrades (old computes sending
> updates to new scheduler)
>  - how can we test it
>  - can we have the feature optional for operators
> 

This is precisely how we make sure there is no innovation happening in
Nova ever.

Not all of the above have to be answered for the idea to have technical
merit and be useful to some users. We should be happy to have feature
branches like this available for people to try out and use and iterate
on before we slam developers with our "you need to be this tall to ride"
list.

N.

> 
>> 3. Possibilities to integrate with resource-provider bp series: I know
>> resource-provider is the major direction of Nova scheduler, and there
>> will be fundamental changes in the future, especially according to the
>> bp
>> https://review.openstack.org/#/c/271823/1/specs/mitaka/approved/resource-providers-scheduler.rst.
>> However, this prototype proposes a much faster and compatible way to
>> make schedule decisions based on scheduler caches. The in-memory
>> decisions are made at the same speed with the caching scheduler, but
>> the caches are kept consistent with compute nodes as quickly as
>> possible without db refreshing.
>>
>>  
>>
> 
> That's the key point, thanks for noticing our priorities. So, you know
> that our resource modeling is drastically subject to change in Mitaka
> and Newton. That is the new game, so I'd love to see how you plan to
> interact with that.
> Ideally, I'd appreciate if Jay Pipes, Chris Dent and you could share
> your ideas because all of you are having great ideas to improve a
> current frustrating solution.
> 
> -Sylvain
> 
> 
>> Here is the detailed design of the mentioned prototype:
>>
>>  
>>
>>
>> Background:
>>
>> The host state cache maintained by host manager is the scheduler
>> resource view during schedule decision making. It is updated whenever
>> a request is received[1], and all the compute node records are
>> retrieved from db every time. There are several problems in this
>> update model, proven in experiments[3]:
>>
>> 1. Performance: The scheduler performance is largely affected by db
>> access in retrieving compute node records. The db block time of a
>> single request is 355ms on average in a deployment of 3 compute
>> nodes, compared with only 3ms in in-memory decision-making. Imagine
>> there could be at most 1k nodes, even 10k nodes in the future.
>>
>> 2. Race conditions: This is not only a parallel-scheduler problem, but
>> also a problem using only one scheduler. The detailed analysis of
>> one-scheduler-problem is located in bug analysis[2]. In short, there
>> is a gap between the scheduler makes a decision in host state cache
>> and the
>>
>> compute node updates its in-db resource record according to that
>> decision in resource tracker. A recent scheduler resource consumption
>> in cache can be lost and overwritten by compute node data because of
>> it, resulting in cache inconsistency and unexpected retries. In a
>> one-scheduler experiment using 3-node deployment, there are 7 retries
>> out of 31 concurrent schedule requests recorded, resulting in 22.6%
>> extra performance overhead.
>>
>> 3. Parallel scheduler support: The design of filter scheduler leads to
>> an

Re: [openstack-dev] [Openstack-i18n] [stable][i18n] What is the backport policy on i18n changes?

2016-02-18 Thread Ying Chun Guo
Hmm, I think the policy should depend on the project.
For projects which don't have active translations, I think it's OK to
backport.
For projects which have active translations, "Exception procedure" of
StringFreeze could help.
Translators need to be notified of the changes.

If you don't know whether the project has active translations or not,
"Exception procedure" can help, anyway.

Best regards
Ying Chun Guo (Daisy)


Matt Riedemann  wrote on 2016/02/18 08:16:26:

> From: Matt Riedemann 
> To: "OpenStack Development Mailing List (not for usage questions)"
> , openstack-i...@lists.openstack.org
> Date: 2016/02/18 08:18
> Subject: [Openstack-i18n] [stable][i18n] What is the backport policy
> on i18n changes?
>
> I don't think we have an official policy for stable backports with
> respect to translatable string changes.
>
> I'm looking at a release request for ironic-inspector on stable/liberty
> [1] and one of the changes in that has translatable string changes to
> user-facing error messages [2].
>
> mrunge brought up this issue in the stable team meeting this week also
> since Horizon has to be extra careful about backporting changes with
> translatable string changes.
>
> I think on the server side, if they are changes that just go in the
> logs, it's not a huge issue. But for user facing changes, should we
> treat those like StringFreeze [3]? Or only if the stable branches for
> the given project aren't getting translation updates? I know the server
> projects (at least nova) are still getting translation updates on
> stable/liberty so if we do backport changes with translatable string
> updates, they aren't getting updated in stable. I don't see anything
> like that happening for ironic-inspector on stable/liberty though.
>
> Thoughts?
>
> [1] https://review.openstack.org/#/c/279515/
> [2] https://review.openstack.org/#/c/279071/1/ironic_inspector/process.py
> [3] https://wiki.openstack.org/wiki/StringFreeze
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> ___
> Openstack-i18n mailing list
> openstack-i...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-i18n
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][i18n] What is the backport policy on i18n changes?

2016-02-18 Thread Dmitry Tantsur

On 02/18/2016 01:16 AM, Matt Riedemann wrote:

I don't think we have an official policy for stable backports with
respect to translatable string changes.

I'm looking at a release request for ironic-inspector on stable/liberty
[1] and one of the changes in that has translatable string changes to
user-facing error messages [2].

mrunge brought up this issue in the stable team meeting this week also
since Horizon has to be extra careful about backporting changes with
translatable string changes.

I think on the server side, if they are changes that just go in the
logs, it's not a huge issue. But for user facing changes, should we
treat those like StringFreeze [3]? Or only if the stable branches for
the given project aren't getting translation updates? I know the server
projects (at least nova) are still getting translation updates on
stable/liberty so if we do backport changes with translatable string
updates, they aren't getting updated in stable. I don't see anything
like that happening for ironic-inspector on stable/liberty though.


Hi!

I had this concern, but ironic-inspector has never had any actual 
translations, so I don't think it's worth blocking this (pretty 
annoying) bug fix based on that.




Thoughts?

[1] https://review.openstack.org/#/c/279515/
[2] https://review.openstack.org/#/c/279071/1/ironic_inspector/process.py
[3] https://wiki.openstack.org/wiki/StringFreeze




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kuryr] will we use os-vif in kuryr

2016-02-18 Thread Liping Mao (limao)
Hi Kuryr team,

I see couple of commits to add support for vif plug.
https://review.openstack.org/#/c/280411/
https://review.openstack.org/#/c/280878/

Do we have plan to use os-vif?
https://github.com/openstack/os-vif


Regards,
Liping Mao

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Tempest initialization failure while installing devstack.

2016-02-18 Thread Brijnandan
Hi All,

 

I have been trying to install devstack on my Ubuntu machine but while
initializing tempest I am getting the below error.

I have tried 4-5 times but am still facing the same issue. 

 

2016-02-18 08:17:00.403 | venv create: /opt/stack/new/tempest/.tox/venv

2016-02-18 08:17:10.731 | venv installdeps:
-r/opt/stack/new/tempest/requirements.txt,
-r/opt/stack/new/tempest/test-requirements.txt

2016-02-18 08:17:48.313 | ERROR: invocation failed (exit code 1), logfile:
/opt/stack/new/tempest/.tox/venv/log/venv-1.log

2016-02-18 08:17:48.314 | ERROR: actionid: venv

2016-02-18 08:17:48.314 | msg: getenv

2016-02-18 08:17:48.314 | cmdargs:
[local('/opt/stack/new/tempest/.tox/venv/bin/pip'), 'install', '-U',
'-r/opt/stack/new/tempest/requirements.txt',
'-r/opt/stack/new/tempest/test-requirements.txt']

2016-02-18 08:17:48.314 | env: {'LOGNAME': 'stack', 'USER': 'stack',
'OS_REGION_NAME': 'RegionOne', 'OS_PROJECT_NAME': 'admin', 'PS4': '+
${BASH_SOURCE:-}:${FUNCNAME[0]:-}:L${LINENO:-}:   ', 'PATH':
'/opt/stack/new/tempest/.tox/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sb
in:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/sbin:/usr/sbin
:/usr/local/sbin:/usr/sbin:/sbin', 'OS_NO_CACHE': 'True', 'TERM': 'unknown',
'SHELL': '/bin/bash', 'OS_IDENTITY_API_VERSION': '3',
'NEUTRON_TEST_CONFIG_FILE': '/etc/neutron/debug.ini', 'USE_PYTHON3':
'False', 'PYTHON3_VERSION': '3.4', 'PYTHONHASHSEED': '299230788',
'SUDO_USER': 'jenkins', 'HOME': '/opt/stack/new', 'USERNAME': 'stack',
'OS_USERNAME': 'admin', 'SUDO_UID': '1001', '_STDBUF_O': 'L',
'OS_USER_DOMAIN_ID': 'default', 'PWD': '/opt/stack/new/tempest',
'PYTHON2_VERSION': '2.7', 'OS_PASSWORD': 'secretadmin', 'LC_ALL': 'C', '_':
'/usr/local/bin/tox', 'LD_PRELOAD': '/usr/lib/coreutils/libstdbuf.so',
'SUDO_COMMAND': '/usr/bin/stdbuf -oL -eL ./stack.sh', 'SUDO_GID': '1002',
'VIRTUAL_ENV': '/opt/stack/new/tempest/.tox/venv', 'OS_PROJECT_DOMAIN_ID':
'default', '_STDBUF_E': 'L', 'OLDPWD': '/opt/stack/new/devstack', 'SHLVL':
'1', 'OS_AUTH_URL': 'http://127.0.0.1:35357', 'OS_TEST_PATH':
'./tempest/tests', 'MAIL': '/var/mail/stack'}

2016-02-18 08:17:48.314 | 

2016-02-18 08:17:48.314 | Collecting pbr>=1.6 (from -r
/opt/stack/new/tempest/requirements.txt (line 4))

2016-02-18 08:17:48.314 |
/opt/stack/new/tempest/.tox/venv/local/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:315: SNIMissingWarning: An HTTPS
request has been made, but the SNI (Subject Name Indication) extension to
TLS is not available on this platform. This may cause the server to present
an incorrect TLS certificate, which can cause validation failures. For more
information, see
https://urllib3.readthedocs.org/en/latest/security.html#snimissingwarning.

2016-02-18 08:17:48.314 |   SNIMissingWarning

2016-02-18 08:17:48.314 |
/opt/stack/new/tempest/.tox/venv/local/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:120: InsecurePlatformWarning: A
true SSLContext object is not available. This prevents urllib3 from
configuring SSL appropriately and may cause certain SSL connections to fail.
For more information, see
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.

2016-02-18 08:17:48.314 |   InsecurePlatformWarning

2016-02-18 08:17:48.314 |   Using cached pbr-1.8.1-py2.py3-none-any.whl

2016-02-18 08:17:48.314 | Collecting cliff!=1.16.0,>=1.15.0 (from -r
/opt/stack/new/tempest/requirements.txt (line 5))

2016-02-18 08:17:48.314 |   Using cached cliff-1.17.0-py2-none-any.whl

2016-02-18 08:17:48.314 | Collecting anyjson>=0.3.3 (from -r
/opt/stack/new/tempest/requirements.txt (line 6))

2016-02-18 08:17:48.314 | Collecting httplib2>=0.7.5 (from -r
/opt/stack/new/tempest/requirements.txt (line 7))

2016-02-18 08:17:48.314 | Collecting jsonschema!=2.5.0,<3.0.0,>=2.0.0 (from
-r /opt/stack/new/tempest/requirements.txt (line 8))

2016-02-18 08:17:48.314 |   Using cached
jsonschema-2.5.1-py2.py3-none-any.whl

2016-02-18 08:17:48.314 | Collecting testtools>=1.4.0 (from -r
/opt/stack/new/tempest/requirements.txt (line 9))

2016-02-18 08:17:48.314 |   Using cached
testtools-2.0.0-py2.py3-none-any.whl

2016-02-18 08:17:48.314 | Collecting paramiko>=1.16.0 (from -r
/opt/stack/new/tempest/requirements.txt (line 10))

2016-02-18 08:17:48.314 |   Using cached
paramiko-1.16.0-py2.py3-none-any.whl

2016-02-18 08:17:48.315 | Collecting netaddr!=0.7.16,>=0.7.12 (from -r
/opt/stack/new/tempest/requirements.txt (line 11))

2016-02-18 08:17:48.315 |   Using cached netaddr-0.7.18-py2.py3-none-any.whl

2016-02-18 08:17:48.315 | Collecting testrepository>=0.0.18 (from -r
/opt/stack/new/tempest/requirements.txt (line 12))

2016-02-18 08:17:48.315 | Collecting pyOpenSSL>=0.14 (from -r
/opt/stack/new/tempest/requirements.txt (line 13))

2016-02-18 08:17:48.315 |   Using cached
pyOpenSSL-0.15.1-py2.py3-none-any.whl

2016-02-18 08:17:48.315 | Collecting oslo.concurrency>=2.3.0 (from -r
/opt/stack/new/tempest/requirements.txt (line 14

[openstack-dev] [devstack] Installing devstack with multi-region setting fails

2016-02-18 Thread Yipei Niu
2016-01-29 03:26:29.317 | + source /home/stack/devstack/userrc_early
2016-01-29 03:26:29.317 | ++ export OS_IDENTITY_API_VERSION=3
2016-01-29 03:26:29.317 | ++ OS_IDENTITY_API_VERSION=3
2016-01-29 03:26:29.318 | ++ export OS_AUTH_URL=http://192.168.56.101:35357
2016-01-29 03:26:29.318 | ++ OS_AUTH_URL=http://192.168.56.101:35357
2016-01-29 03:26:29.318 | ++ export OS_USERNAME=admin
2016-01-29 03:26:29.318 | ++ OS_USERNAME=admin
2016-01-29 03:26:29.318 | ++ export OS_USER_DOMAIN_ID=default
2016-01-29 03:26:29.318 | ++ OS_USER_DOMAIN_ID=default
2016-01-29 03:26:29.318 | ++ export OS_PASSWORD=nypnyp0316
2016-01-29 03:26:29.318 | ++ OS_PASSWORD=nypnyp0316
2016-01-29 03:26:29.319 | ++ export OS_PROJECT_NAME=admin
2016-01-29 03:26:29.319 | ++ OS_PROJECT_NAME=admin
2016-01-29 03:26:29.319 | ++ export OS_PROJECT_DOMAIN_ID=default
2016-01-29 03:26:29.319 | ++ OS_PROJECT_DOMAIN_ID=default
2016-01-29 03:26:29.319 | ++ export OS_REGION_NAME=RegionTwo
2016-01-29 03:26:29.319 | ++ OS_REGION_NAME=RegionTwo
2016-01-29 03:26:29.319 | + create_keystone_accounts
2016-01-29 03:26:29.319 | + local admin_tenant
2016-01-29 03:26:29.320 | ++ openstack project show admin -f value -c id
2016-01-29 03:26:30.624 | Could not find resource admin
2016-01-29 03:26:30.667 | + admin_tenant=
2016-01-29 03:26:30.668 | + exit_trap
2016-01-29 03:26:30.668 | + local r=1
2016-01-29 03:26:30.669 | ++ jobs -p
2016-01-29 03:26:30.671 | + jobs=
2016-01-29 03:26:30.672 | + [[ -n '' ]]
2016-01-29 03:26:30.672 | + kill_spinner
2016-01-29 03:26:30.672 | + '[' '!' -z '' ']'
2016-01-29 03:26:30.672 | + [[ 1 -ne 0 ]]
2016-01-29 03:26:30.673 | + echo 'Error on exit'
2016-01-29 03:26:30.674 | Error on exit
2016-01-29 03:26:30.675 | + generate-subunit 1454037870 120 fail
2016-01-29 03:26:31.045 | + [[ -z /opt/stack/logs ]]
2016-01-29 03:26:31.045 | + /home/stack/devstack/tools/worlddump.py -d
/opt/stack/logs
2016-01-29 03:26:31.111 | df: '/run/user/112/gvfs': Permission denied
2016-01-29 03:26:31.312 | + exit 1

Under the multi-region setting, two devstack deployments, RegionOne and
RegionTwo, share the same Keystone service, which runs in RegionOne. The
local.conf of RegionTwo is set up as
http://docs.openstack.org/developer/devstack/configuration.html#multi-region-setup
indicates.
In that configuration, REGION_NAME is set to "RegionTwo".
Correspondingly, OS_REGION_NAME is also set to "RegionTwo" by
"export OS_REGION_NAME=$REGION_NAME" in stack.sh. When "openstack project
show admin -f value -c id" is executed, it therefore looks for the admin
project in RegionTwo instead of RegionOne and fails with "Could not find
resource admin".
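The failing lookup can be modelled with a small Python sketch (hypothetical names; this is an illustration of the region selection, not devstack code): under RegionTwo the catalog has no identity entry, so a client scoped to OS_REGION_NAME=RegionTwo cannot resolve Keystone resources unless it falls back to the region that actually hosts the shared service.

```python
# Hypothetical model of a two-region catalog; illustration only.
CATALOG = {
    "RegionOne": {"identity", "compute", "network"},
    "RegionTwo": {"compute", "network"},  # Keystone is shared from RegionOne
}

def endpoint_region(service, os_region_name):
    """Pick the region a call for `service` should target."""
    if service in CATALOG.get(os_region_name, set()):
        return os_region_name
    # Fall back to a region that does publish the service.
    for region, services in CATALOG.items():
        if service in services:
            return region
    raise LookupError("Could not find resource %s" % service)

print(endpoint_region("identity", "RegionTwo"))  # -> RegionOne
print(endpoint_region("compute", "RegionTwo"))   # -> RegionTwo
```

With OS_REGION_NAME exported as "RegionTwo" and no such fallback, the identity lookup fails exactly as in the trace above.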

I have reported the bug; see
https://bugs.launchpad.net/devstack/+bug/1540802.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Supporting multiple Openstack versions

2016-02-18 Thread Bogdan Dobrelya
On 17.02.2016 18:23, Bogdan Dobrelya wrote:
>> So we'll have tons of conditionals in the composition layer, right? Even if
>> some puppet-openstack class has just one new parameter in a new release,
>> we'll have to write a conditional and duplicate the class declaration, or
>> write complex parameter hash definitions/merges and use
>> create_resources(). The more releases we want to support, the more
>> complicated the composition layer will become. That won't make contributing
>> to fuel-library easier and can even greatly reduce development speed. Also,
>> are we going to add new features to stable releases using this workflow
>> with a single composition layer?
> 
> As I can see from an example composition [0], such code would be an
> unmaintainable burden for the development and QA process. Next, imagine a
> case with incompatible *providers*, like network transformations - shall
> we put multiple if/case statements into the ruby providers as well?..
> 
> That is not the way to go for a composition, sorry. The idea may be
> doable, I agree, but perhaps in another way.
> 
> (tl;dr)
> By the way, this reminded me of the "The wrong abstraction" [1] article and
> discussion. I agree with the author and believe one should not group
> code (here, versioned puppet modules & compositions) in a way that
> introduces abstractions (here, a super-composition) with multiple
> if/else/case branches and hardcoded things that switch the execution flow
> based on the version of things. Just keep the code as is - partially
> duplicated across releases in separate directories with separate modules
> and composition layers - and please think of better solutions.
> 
> There is also a nice comment: "...try to optimize my code around
> reducing state, coupling, complexity and code, in that order". I
> understood that as a set of "golden rules":
> - Make it more tightly coupled to decrease (shared) state
> - Make it more complex to decrease coupling
> - Make it duplicated to decrease complexity (e.g. abstractions)
> 
> (tl;dr, I mean it)
> So, bringing those here.
> - The shared state is perhaps Nailgun's world view of all data and the
> versioned serializers for supported releases, which know how to convert
> only the latest existing data to any of the supported previous versions.
> - Decoupling we do by putting modules with their compositions into
> different versioned /etc/puppet subdirectories. I'm not sure how we
> decouple the Nailgun serializers though.
> - Complexity is how we compose those modules / write logic of serializers.
> - Duplication is puppet classes (and providers) with slightly different
> call parameters from a version to version. Sometimes even not backwards
> compatible. Probably same to the serializers?
> 
> So, we're going to *increase complexity* by introducing
> super-compositions for multiple OpenStack releases. I'm not sure what would
> happen to the serializers - any volunteers to clarify the impact? And the
> Rules "allow" us to do so only in order to decrease either coupling or
> shared state, which is not the case, AFAICT. Modules with compositions
> are already well separated by OpenStack versions, so there is nothing to
> decrease there. Might the change decrease shared state? I'm not sure that
> even applies here. Puppet versioning shares nothing. Only the Nailgun
> folks may know the answer.

AFAIK, Nailgun serializers have no shared state - or any state at all - as
they always produce the same "data view" for a given version, and there
is no internal state impacting the results. They might end up with
decreased complexity, but the orchestration logic would end up with a
drastically increased one: it would be hard to determine which deployment
graph to build from the super-composed multi-release modular tasks, AFAICT.
So there seems to be no gain here either. That means the Golden Rules
recommend that we not do this :)
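The conditional-explosion concern from the quoted message can be sketched in Python pseudocode (hypothetical names, releases, and parameters; the real composition layer is Puppet): a single multi-release composition accumulates a branch for every parameter introduced in a newer release, while a per-release module stays flat at the cost of duplication.

```python
# Illustration only; fuel-library's actual composition layer is Puppet.
RELEASES = ["kilo", "liberty", "mitaka"]  # hypothetical release ordering

def at_least(release, base):
    """True when `release` is `base` or newer in the hypothetical ordering."""
    return RELEASES.index(release) >= RELEASES.index(base)

def compose_multi(release, params):
    """Single composition layer: one branch per release-specific parameter."""
    args = dict(params)
    if at_least(release, "liberty"):
        args["rbd_keyring"] = "client.compute"   # parameter new in "liberty"
    if at_least(release, "mitaka"):
        args["ephemeral_ceph"] = True            # parameter new in "mitaka"
    return args

def compose_liberty(params):
    """Per-release module: no conditionals, duplicated across versions."""
    return dict(params, rbd_keyring="client.compute")
```

Every additional supported release grows compose_multi by more branches, which is the maintenance burden the thread argues against; the per-release variant duplicates code but keeps each copy simple.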

> 
> [0]
> https://review.openstack.org/#/c/281084/1/deployment/puppet/ceph/manifests/nova_compute.pp
> [1] https://news.ycombinator.com/item?id=11032296
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



[openstack-dev] [vitrage] Vitrage Demo - Alarms Use Case

2016-02-18 Thread Weyl, Alexey (Nokia - IL)
Hi,

We are happy to share with you all our second Vitrage demo, showing alarms
imported from Nagios in the Vitrage Horizon UI. Here it is:

https://www.youtube.com/watch?v=w1XQATkrdmg

For more details about Vitrage, please visit our wiki here:

https://wiki.openstack.org/wiki/Vitrage 

Thanks,
Alexey

P.S. Wait for our next demo, which will show RCA and deduced alarms :)

