Re: [openstack-dev] [nova] why are we backporting low priority v3 api fixes to v2?

2013-12-01 Thread Gary Kotton


From: Christopher Yeoh cbky...@gmail.com
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Sunday, December 1, 2013 12:25 AM
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] why are we backporting low priority v3 api 
fixes to v2?


On Sun, Dec 1, 2013 at 8:02 AM, Matt Riedemann 
mrie...@linux.vnet.ibm.com wrote:
I've seen a few bugs/reviews like this [1] lately which are essentially 
backporting fixes from the nova openstack v3 API to the v2 API. While this is 
goodness for the v2 API, I'm not sure why we're spending time on low priority 
bug fixes like this for the v2 API when v3 is the future. Shouldn't only high 
impact / high probability fixes get backported to the nova v2 API now?  I think 
most people are still using v2 so they are probably happy to get the fixes, but 
it kind of seems to prolong the inevitable.

Am I missing something?


The V2 API is going to be with us for quite a while, even if, as planned, the 
V3 API becomes official with the Icehouse release. At the moment the V2 API 
is even still open for new features - this will probably change at the end 
of I-2.

I agree those bugs are quite low priority fixes and the V3 work is a lot more 
important, but I don't think we should be blocking them yet. We should 
perhaps reconsider the acceptance of very low priority fixes like the ones 
you reference towards or at the end of Icehouse.

[Gary] I agree, we should not be blocking these. I think that we should pay 
attention to issues that are just dealt with in V2 and make sure that those are 
actually addressed in V3.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Extending quota mechanism to all entities (vip, pool, member, health-monitor)

2013-12-01 Thread Evgeny Fedoruk
Hello All,

Extending the quota mechanism to support VIPs, pools, members and 
health monitors. The blueprint is not approved yet.
The changes are ready for review.

Blueprint:
   https://blueprints.launchpad.net/neutron/+spec/neutron-quota-extension

Related changes for review:
  neutron - https://review.openstack.org/58720
  python-neutronclient - https://review.openstack.org/59192
  horizon - https://review.openstack.org/59195

You are welcome to review and comment.

Thanks,
Evg
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Splitting up V3 API admin-actions plugin

2013-12-01 Thread Christopher Yeoh
Hi,

At the summit we agreed to split the lock/unlock, pause/unpause and
suspend/unsuspend functionality out of the V3 version of admin actions
into separate extensions, to make it easier for deployers to load only
the functionality that they want.

Remaining in admin_actions we have:

migrate
live_migrate
reset_network
inject_network_info
create_backup
reset_state

I think it makes sense to separate out migrate and live_migrate into a
migrate plugin as well.

What do people think about the others? There is no real overhead to having
them in separate plugins and removing admin_actions entirely. Does anyone
have any objections to this being done?

Also in terms of grouping I don't think any of the others remaining above
really belong together, but welcome any suggestions.

Chris
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone][Oslo] Future of Key Distribution Server, Trusted Messaging

2013-12-01 Thread Jarret Raim
 I also don't like that the discussions suggested that because it would be
 hard to get Barbican incubated/integrated it should not be used. That is
 just crazy talk. TripleO merged with Tuskar because Tuskar is part of
 deployment.

We are completing our incubation request for Barbican right now. I am
waiting to send it until tomorrow as I figured it wouldn't get a lot of
traction right before the break.

As I've said before, I think the KDS should be part of Barbican, but if
Keystone wants to merge it sooner, I won't complain. Barbican has a pretty
full roadmap through the Icehouse release, so I doubt my team would get to
this. We'd be happy to help anyone interested in implementing it and I would
merge it, but that's up to the authors.

The public / private service argument I don't really get. The KDS will be an
internal server regardless of whether it is in Barbican or Keystone so I
don't think that is a differentiator.

 Seems to me that pulling Barbican into the identity _program_, but still
 as its own project/repo/etc. would solve that problem.

Not sure I agree here. Key management solves many problems, some of which
are identity problems, but key management is not fundamentally an identity
service. For example: SSL certificates for services, symmetric key
generation for at-rest encryption, etc.

What do we think are the reasons for combining the two efforts?



Jarret



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Stop logging non-exceptional conditions as ERROR

2013-12-01 Thread Anita Kuno
Great initiative putting this plan together, Maru. Thanks for doing
this. Thanks for volunteering to help, Salvatore (I'm thinking of asking
for you to be cloned - once that becomes available). If you add your
patch URLs (as you create them) to the blueprint Maru started [0], that
would help to track the work.

Armando, thanks for doing this work as well. Could you add the urls of
the patches you reference to the exceptional-conditions blueprint?

For icehouse-1 to be a realistic goal for this assessment and clean-up,
patches for this would need to be up by Tuesday Dec. 3 at the latest
(does 13:00 UTC sound like a reasonable target?) so that they can make
it through review and check testing, gate testing and merging prior to
the Thursday Dec. 5 deadline for icehouse-1. I would really like to see
this; I just want us to be conscious of the timeline.

I would like to say talk to me tomorrow in -neutron to ensure you are
getting the support you need to achieve this but I will be flying (wifi
uncertain). I do hope that some additional individuals come forward to
help with this.

Thanks Maru, Salvatore and Armando,
Anita.

[0]
https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error
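
For anyone picking up a patch, the clean-up mostly amounts to demoting
handled, non-exceptional conditions from ERROR to a lower level. A minimal
sketch, with invented names (sync_port, TransientBackendError) rather than
code from any actual patch:

    import logging

    LOG = logging.getLogger(__name__)

    class TransientBackendError(Exception):
        """Illustrative: a retryable, self-healing backend failure."""

    def sync_port(port_id, backend):
        try:
            backend.sync(port_id)
        except TransientBackendError:
            # Handled condition: the next periodic sync retries, so WARNING
            # is enough. ERROR here would trip a gate that fails jobs on
            # unexpected error output.
            LOG.warning("Deferring sync of port %s; backend busy", port_id)
        except Exception:
            # Genuinely exceptional and unhandled: keep ERROR, with the
            # traceback that LOG.exception records.
            LOG.exception("Unrecoverable failure syncing port %s", port_id)
            raise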

On 11/30/2013 08:24 PM, Maru Newby wrote:
 
 On Nov 28, 2013, at 1:08 AM, Salvatore Orlando sorla...@nicira.com wrote:
 
 Thanks Maru,

 This is something my team had on the backlog for a while.
 I will push some patches to contribute towards this effort in the next few 
 days.

 Let me know if you're already thinking of targeting the completion of this 
 job for a specific deadline.
 
 I'm thinking this could be a task for those not involved in fixing race 
 conditions, and be done in parallel.  I guess that would be for icehouse-1 
 then?  My hope would be that the early signs of race conditions would then be 
 caught earlier.
 
 
 m.
 

 Salvatore


 On 27 November 2013 17:50, Maru Newby ma...@redhat.com wrote:
 Just a heads up, the console output for neutron gate jobs is about to get a 
 lot noisier.  Any log output that contains 'ERROR' is going to be dumped 
 into the console output so that we can identify and eliminate unnecessary 
 error logging.  Once we've cleaned things up, the presence of unexpected 
 (non-whitelisted) error output can be used to fail jobs, as per the 
 following Tempest blueprint:

 https://blueprints.launchpad.net/tempest/+spec/fail-gate-on-log-errors

 I've filed a related Neutron blueprint for eliminating the unnecessary error 
 logging:

 https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error

 I'm looking for volunteers to help with this effort, please reply in this 
 thread if you're willing to assist.

 Thanks,


 Maru
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] VMware Workstation / Fusion / Player Nova driver

2013-12-01 Thread Alessandro Pilotti
Hi all,

At Cloudbase we are heavily using VMware Workstation and Fusion for 
development, demos and PoCs, so we thought: why not replace our automation 
scripts with a fully functional Nova driver and use OpenStack APIs and Heat for 
the automation? :-)

Here’s the repo for this Nova driver project: 
https://github.com/cloudbase/nova-vix-driver/

The driver is already working well and supports all the basic features you’d 
expect from a Nova driver, including a VNC console accessible via Horizon. 
Please refer to the project README for additional details.
The usage of CoW images (linked clones) makes deploying images particularly 
fast, which is a good thing when you develop or run demos. Heat or Puppet, 
Chef, etc make the whole process particularly sweet of course.


The main idea was to create something to be used in place of solutions like 
Vagrant, with a few specific requirements:

1) Full support for nested virtualization (VMX and EPT).

For the time being the VMware products are the only ones supporting Hyper-V and 
KVM as guests, so this became a mandatory path, at least until EPT support is 
fully functional in KVM.
This rules out Vagrant as an option. Their VMware support is not free and 
besides that they don't support nested virtualization (yet, AFAIK).

Other workstation virtualization options, including VirtualBox and Hyper-V, are 
currently ruled out due to the lack of support for this feature as well.
That said, Hyper-V and VMware Workstation VMs can work side by side on Windows 
8.1; all you need is to fire up two nova-compute instances.

2) Work on Windows, Linux and OS X workstations

Here’s a screenshot of nova-compute running on OS X, showing noVNC connected 
to a Fusion VM console:

https://dl.dropboxusercontent.com/u/9060190/Nova-compute-os-x.png

3) Use OpenStack APIs

We wanted to have a single common framework for automation and bring OpenStack 
to the workstations.
Besides that, dogfooding is a good thing. :-)

4) Offer a free alternative for community contributions

VMware Player is fair enough, even with the “non commercial use” limits, etc.

Communication with VMware components is based on the freely available Vix SDK 
libs, using ctypes to call the C APIs. The project provides a library to easily 
interact with the VMs, in case it should be needed, e.g.:

from vix import vixutils

with vixutils.VixConnection() as conn:
    with conn.open_vm(vmx_path) as vm:
        vm.power_on()

We thought about using libvirt, since it has support for those APIs as well, but 
it was way easier to create a lightweight driver from scratch using the Vix 
APIs directly.

TODOs:

1) A minimal Neutron agent for attaching networks (now all networks are 
attached to the NAT interface).
2) Resize disks on boot based on the flavor size
3) Volume attach / detach (we can just reuse the Hyper-V code for the Windows 
case)
4) Same host resize

Live migration does not make particular sense in this context, so the 
implementation is not planned.

Note: we still have to commit the unit tests. We'll clean them up next week 
and push them.


As usual, any ideas, suggestions and especially contributions are highly welcome!

We’ll follow up with a blog post with some additional news related to this 
project quite soon.


Thanks,

Alessandro


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] maintenance policy for code graduating from the incubator

2013-12-01 Thread Doug Hellmann
On Sat, Nov 30, 2013 at 3:52 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:



 On 11/29/2013 03:58 PM, Doug Hellmann wrote:
 
 
 
  On Fri, Nov 29, 2013 at 2:14 PM, Sandy Walsh sandy.wa...@rackspace.com wrote:
 
   So, as I mention in the branch, what about deployments that haven't
   transitioned to the library but would like to cherry pick this feature?
 
  after it starts moving into a library can leave a very big gap
  when the functionality isn't available to users.
 
 
  Are those deployments tracking trunk or a stable branch? Because IIUC,
  we don't add features like this to stable branches for the main
  components, either, and if they are tracking trunk then they will get
  the new feature when it ships in a project that uses it. Are you
  suggesting something in between?

 Tracking trunk. If the messaging branch has already landed in Nova, then
 this is a moot discussion. Otherwise we'll still need it in incubator.

 That said, consider if messaging wasn't in nova trunk. According to this
 policy the new functionality would have to wait until it was. And, as
 we've seen with messaging, that was a very long time. That doesn't seem
 reasonable.


The alternative is feature drift between the incubated version of rpc and
oslo.messaging, which makes the task of moving the other projects to
messaging even *harder*.

What I'm proposing seems like a standard deprecation/backport policy; I'm
not sure why you see the situation as different. Sandy, can you elaborate
on how you would expect to maintain feature parity between the incubator
and library while projects are in transition?

Doug




 
  Doug
 
 
 
 
  -S
 
  
   From: Eric Windisch [e...@cloudscaling.com]
  Sent: Friday, November 29, 2013 2:47 PM
  To: OpenStack Development Mailing List (not for usage questions)
  Subject: Re: [openstack-dev] [oslo] maintenance policy for code
  graduating from the incubator
 
    Based on that, I would like to say that we do not add new features to
    incubated code after it starts moving into a library, and only provide
    stable-like bug fix support until integrated projects are moved over to
    the graduated library (although even that is up for discussion). After all
    integrated projects that use the code are using the library instead of the
    incubator, we can delete the module(s) from the incubator.
 
  +1
 
  Although never formalized, this is how I had expected we would handle
  the graduation process. It is also how we have been responding to
  patches and blueprints offerings improvements and feature requests
 for
  oslo.messaging.
 
  --
  Regards,
  Eric Windisch
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] revert of cinder cli tests

2013-12-01 Thread Sean Dague
I had to revert the cinder cli tests tonight, as it turns out they
weren't parallel safe. Specifically, when listing some of the v1 & v2
versions of quotas they were doing so at an admin level, which meant the
creation and deletion of tenants from other cinder tests would often
happen in between the v1 and v2 calls, causing a failure. This was
starting to happen more than 50% of the time, and was blocking the nova
metadata server fix.
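
To make the failure mode concrete, here is a sketch of the unsafe pattern
(names are illustrative, not the actual tempest code):

    def test_quota_listings_match(admin_client):
        # An admin-level listing sees *all* tenants, not just this test's.
        v1 = admin_client.list_quotas(version='v1')
        # A concurrently running test may create or delete a tenant here,
        # so the second listing can observe a different tenant set.
        v2 = admin_client.list_quotas(version='v2')
        assert set(v1) == set(v2)  # fails intermittently under parallelism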

The revert is here - https://review.openstack.org/#/c/59306/

I welcome a new spin of these tests, but we need to be careful of
situations like what happened above.

-Sean

-- 
Sean Dague
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Stop logging non-exceptional conditions as ERROR

2013-12-01 Thread Maru Newby

On Dec 2, 2013, at 2:07 AM, Anita Kuno ante...@anteaya.info wrote:

 Great initiative putting this plan together, Maru. Thanks for doing
 this. Thanks for volunteering to help, Salvatore (I'm thinking of asking
 for you to be cloned - once that becomes available). If you add your
 patch URLs (as you create them) to the blueprint Maru started [0], that
 would help to track the work.
 
 Armando, thanks for doing this work as well. Could you add the urls of
 the patches you reference to the exceptional-conditions blueprint?
 
 For icehouse-1 to be a realistic goal for this assessment and clean-up,
 patches for this would need to be up by Tuesday Dec. 3 at the latest
 (does 13:00 UTC sound like a reasonable target?) so that they can make
 it through review and check testing, gate testing and merging prior to
 the Thursday Dec. 5 deadline for icehouse-1. I would really like to see
 this; I just want us to be conscious of the timeline.

My mistake, getting this done by Tuesday does not seem realistic.  icehouse-2, 
then.


m.

 
 I would like to say talk to me tomorrow in -neutron to ensure you are
 getting the support you need to achieve this but I will be flying (wifi
 uncertain). I do hope that some additional individuals come forward to
 help with this.
 
 Thanks Maru, Salvatore and Armando,
 Anita.
 
 [0]
 https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error
 
 On 11/30/2013 08:24 PM, Maru Newby wrote:
 
 On Nov 28, 2013, at 1:08 AM, Salvatore Orlando sorla...@nicira.com wrote:
 
 Thanks Maru,
 
 This is something my team had on the backlog for a while.
 I will push some patches to contribute towards this effort in the next few 
 days.
 
 Let me know if you're already thinking of targeting the completion of this 
 job for a specific deadline.
 
 I'm thinking this could be a task for those not involved in fixing race 
 conditions, and be done in parallel.  I guess that would be for icehouse-1 
 then?  My hope would be that the early signs of race conditions would then 
 be caught earlier.
 
 
 m.
 
 
 Salvatore
 
 
 On 27 November 2013 17:50, Maru Newby ma...@redhat.com wrote:
 Just a heads up, the console output for neutron gate jobs is about to get a 
 lot noisier.  Any log output that contains 'ERROR' is going to be dumped 
 into the console output so that we can identify and eliminate unnecessary 
 error logging.  Once we've cleaned things up, the presence of unexpected 
 (non-whitelisted) error output can be used to fail jobs, as per the 
 following Tempest blueprint:
 
 https://blueprints.launchpad.net/tempest/+spec/fail-gate-on-log-errors
 
 I've filed a related Neutron blueprint for eliminating the unnecessary 
 error logging:
 
 https://blueprints.launchpad.net/neutron/+spec/log-only-exceptional-conditions-as-error
 
 I'm looking for volunteers to help with this effort, please reply in this 
 thread if you're willing to assist.
 
 Thanks,
 
 
 Maru
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] request-id in API response

2013-12-01 Thread Maru Newby

On Nov 30, 2013, at 1:00 AM, Sean Dague s...@dague.net wrote:

 On 11/29/2013 10:33 AM, Jay Pipes wrote:
 On 11/28/2013 07:45 AM, Akihiro Motoki wrote:
 Hi,
 
 I am working on adding a request-id to the API response in Neutron.
 After checking which header is used in other projects, I found that
 the header name varies project by project.
 It seems there is no consensus on which header is recommended,
 and it would be better to have one.
 
   nova: x-compute-request-id
   cinder:   x-compute-request-id
   glance:   x-openstack-request-id
   neutron:  x-network-request-id  (under review)
 
 request-id is assigned and used inside of each project now,
 so x-service-request-id looks good. On the other hand,
 if we have a plan to enhance request-id across projects,
 x-openstack-request-id looks better.
 
 My vote is for:
 
 x-openstack-request-id
 
 With an implementation of create a request UUID if none exists yet in
 some standardized WSGI middleware...
 
 Agreed. I don't think I see any value in having these have different
 service names, having just x-openstack-request-id across all the
 services seems a far better idea, and come back through and fix nova and
 cinder to be that as well.

+1 

An openstack request id should be service agnostic to allow tracking of a 
request across many services (e.g. a call to nova to boot a VM should generate 
a request id that is provided to other services in requests to provision said 
VM).  All services would ideally share a facility for generating new request 
ids and for securely accepting request ids from other services.
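
As a sketch of what that standardized middleware could look like (the header
name follows this thread; the class name, environ key and trust handling are
assumptions, not an existing implementation):

    import uuid

    class RequestIdMiddleware(object):
        """Ensure every request/response pair carries a request id."""

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            # Only honor an inbound id when it comes from a trusted peer
            # service; otherwise generate a fresh one.
            req_id = environ.get('HTTP_X_OPENSTACK_REQUEST_ID')
            if not req_id:
                req_id = 'req-' + str(uuid.uuid4())
            environ['openstack.request_id'] = req_id  # for the app and logs

            def _start_response(status, headers, exc_info=None):
                headers.append(('x-openstack-request-id', req_id))
                return start_response(status, headers, exc_info)

            return self.app(environ, _start_response)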


m.

 
   -Sean
 
 -- 
 Sean Dague
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Splitting up V3 API admin-actions plugin

2013-12-01 Thread Alex Xu

On 2013-12-01 21:39, Christopher Yeoh wrote:

Hi,

At the summit we agreed to split the lock/unlock, pause/unpause and 
suspend/unsuspend functionality out of the V3 version of admin actions 
into separate extensions, to make it easier for deployers to load only 
the functionality that they want.


Remaining in admin_actions we have:

migrate
live_migrate
reset_network
inject_network_info
create_backup
reset_state

I think it makes sense to separate out migrate and live_migrate into a 
migrate plugin as well.


What do people think about the others? There is no real overhead to 
having them in separate plugins and removing admin_actions entirely. 
Does anyone have any objections to this being done?


I have a question about reset_network and inject_network_info. Are they 
useful for the v3 API? The network info (IP address, gateway, ...) should 
be pushed by the DHCP service provided by Neutron. And we don't like any 
injection.



Also in terms of grouping I don't think any of the others remaining 
above really belong together, but welcome any suggestions.


Chris



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VMware Workstation / Fusion / Player Nova driver

2013-12-01 Thread Kyle Mestery (kmestery)
On Dec 1, 2013, at 4:10 PM, Alessandro Pilotti 
apilo...@cloudbasesolutions.com wrote:
 
 Hi all,
 
 At Cloudbase we are heavily using VMware Workstation and Fusion for 
 development, demos and PoCs, so we thought: why not replace our automation 
 scripts with a fully functional Nova driver and use OpenStack APIs and Heat 
 for the automation? :-)
 
 Here’s the repo for this Nova driver project: 
 https://github.com/cloudbase/nova-vix-driver/
 
 The driver is already working well and supports all the basic features you’d 
 expect from a Nova driver, including a VNC console accessible via Horizon. 
 Please refer to the project README for additional details.
 The usage of CoW images (linked clones) makes deploying images particularly 
 fast, which is a good thing when you develop or run demos. Heat or Puppet, 
 Chef, etc make the whole process particularly sweet of course. 
 
 
 The main idea was to create something to be used in place of solutions like 
 Vagrant, with a few specific requirements:
 
 1) Full support for nested virtualization (VMX and EPT).
 
 For the time being the VMware products are the only ones supporting Hyper-V 
 and KVM as guests, so this became a mandatory path, at least until EPT 
 support is fully functional in KVM.
 This rules out Vagrant as an option. Their VMware support is not free and 
 besides that they don't support nested virtualization (yet, AFAIK). 
 
 Other workstation virtualization options, including VirtualBox and Hyper-V, 
 are currently ruled out due to the lack of support for this feature as well.
 That said, Hyper-V and VMware Workstation VMs can work side by side on 
 Windows 8.1; all you need is to fire up two nova-compute instances.
 
 2) Work on Windows, Linux and OS X workstations
 
 Here’s a screenshot of nova-compute running on OS X, showing noVNC 
 connected to a Fusion VM console:
 
 https://dl.dropboxusercontent.com/u/9060190/Nova-compute-os-x.png
 
 3) Use OpenStack APIs
 
 We wanted to have a single common framework for automation and bring 
 OpenStack to the workstations. 
 Besides that, dogfooding is a good thing. :-) 
 
 4) Offer a free alternative for community contributions
   
 VMware Player is fair enough, even with the “non commercial use” limits, etc.
 
 Communication with VMware components is based on the freely available Vix SDK 
 libs, using ctypes to call the C APIs. The project provides a library to 
 easily interact with the VMs, in case it should be needed, e.g.:
 
 from vix import vixutils
 
 with vixutils.VixConnection() as conn:
     with conn.open_vm(vmx_path) as vm:
         vm.power_on()
 
 We thought about using libvirt, since it has support for those APIs as well, 
 but it was way easier to create a lightweight driver from scratch using the 
 Vix APIs directly.
 
 TODOs:
 
 1) A minimal Neutron agent for attaching networks (now all networks are 
 attached to the NAT interface).
 2) Resize disks on boot based on the flavor size
 3) Volume attach / detach (we can just reuse the Hyper-V code for the Windows 
 case)
 4) Same host resize
 
 Live migration does not make particular sense in this context, so the 
 implementation is not planned. 
 
 Note: we still have to commit the unit tests. We'll clean them up next 
 week and push them.
 
 
 As usual, any ideas, suggestions and especially contributions are highly 
 welcome!
 
 We’ll follow up with a blog post with some additional news related to this 
 project quite soon. 
 
 
This is very cool Alessandro, thanks for sharing! Any plans to try and get this
nova driver upstreamed?

Thanks,
Kyle

 Thanks,
 
 Alessandro
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Tempest] Need to prepare the IPv6 environment for static IPv6 injection test case

2013-12-01 Thread Yang XY Yu
Hi all stackers,

Currently the Neutron/Nova code supports static IPv6 injection, but there is 
no tempest scenario coverage for the IPv6 injection test case. So I finished 
the test case and ran it successfully in my local environment, and have 
already submitted the code review to the community: 
https://review.openstack.org/#/c/58721/. However, the community Jenkins env 
does not yet support IPv6, and there are still a few prerequisites, listed 
below, for running the test case correctly:

1. A special image is needed to support IPv6 via cloud-init; currently the 
cirros image used by tempest does not include cloud-init.

2. Prepare the interfaces.template file below on the compute node by editing 
/usr/share/nova/interfaces.template:

# Injected by Nova on instance boot
#
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

{% for ifc in interfaces -%}
auto {{ ifc.name }}
{% if use_ipv6 -%}
iface {{ ifc.name }} inet6 static
    address {{ ifc.address_v6 }}
    netmask {{ ifc.netmask_v6 }}
{%- if ifc.gateway_v6 %}
    gateway {{ ifc.gateway_v6 }}
{%- endif %}
{%- endif %}

{%- endfor %}


So considering these two prerequisites, what should be done to enable this 
patch for IPv6 injection? Should I open a bug for cirros to enable 
cloud-init, or skip the test case because of this limitation?
Any comments are appreciated!

Thanks & Best Regards,

Yang Yu(于杨)
Cloud Solutions and OpenStack Development
China Systems & Technology Laboratory Beijing
E-mail: yuyan...@cn.ibm.com 
Tel: 86-10-82452757
Address: Ring Bldg. No.28 Building, Zhong Guan Cun Software Park, 
No. 8 Dong Bei Wang West Road, ShangDi, Haidian District, Beijing 100193, 
P.R.China 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] request-id in API response

2013-12-01 Thread John Dickinson
Just to add to the story, Swift uses X-Trans-Id and generates it in the 
outer-most catch_errors middleware.

Swift's catch errors middleware is responsible for ensuring that the 
transaction id exists on each request, and that all errors previously uncaught, 
anywhere in the pipeline, are caught and logged. If there is not a common way 
to do this yet, I submit it as a great template for solving this problem. It's 
simple, scalable, and well-tested (i.e. it has tests and has been running in 
prod for years).

https://github.com/openstack/swift/blob/master/swift/common/middleware/catch_errors.py

Leaving aside error handling and only focusing on the transaction id (or 
request id) generation, since OpenStack services are exposed to untrusted 
clients, how would you propose communicating the appropriate transaction id to 
a different service? I can see great benefit to having a glance transaction ID 
carry through to Swift requests (and so on), but how should the transaction id 
be communicated? It's not sensitive info, but I can imagine a pretty big 
problem when trying to track down errors if a client application decides to set 
e.g. the X-Set-Transaction-Id header on every request to the same thing.

Thanks for bringing this up, and I'd welcome a patch in Swift that would use a 
common library to generate the transaction id, if it were installed. I can see 
that there would be huge advantage to operators to trace requests through 
multiple systems.

Another option would be for each system that calls another OpenStack system 
to expect and log the transaction ID for the request that was given. This would 
be looser coupling and be more forgiving for a heterogeneous cluster. E.g. when 
Glance makes a call to Swift, Glance could log the transaction id that Swift 
used (from the Swift response). Likewise, when Swift makes a call to Keystone, 
Swift could log the Keystone transaction id. This wouldn't result in a single 
transaction id across all systems, but it would provide markers so an admin 
could trace the request.
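
A sketch of that looser-coupling option (the function and its plumbing are
assumed; X-Trans-Id is the header Swift sets, as noted above):

    import logging
    import requests

    LOG = logging.getLogger(__name__)

    def swift_get(url, token, local_request_id):
        resp = requests.get(url, headers={'X-Auth-Token': token})
        # Correlate our own request id with the transaction id Swift
        # assigned, giving operators a marker across both systems' logs.
        LOG.info("request %s -> swift trans id %s (status %s)",
                 local_request_id, resp.headers.get('X-Trans-Id'),
                 resp.status_code)
        return resp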

--John




On Dec 1, 2013, at 5:48 PM, Maru Newby ma...@redhat.com wrote:

 
 On Nov 30, 2013, at 1:00 AM, Sean Dague s...@dague.net wrote:
 
 On 11/29/2013 10:33 AM, Jay Pipes wrote:
 On 11/28/2013 07:45 AM, Akihiro Motoki wrote:
 Hi,
 
 I am working on adding a request-id to the API response in Neutron.
 After checking which header is used in other projects, I found that
 the header name varies project by project.
 It seems there is no consensus on which header is recommended,
 and it would be better to have one.
 
 nova: x-compute-request-id
 cinder:   x-compute-request-id
 glance:   x-openstack-request-id
 neutron:  x-network-request-id  (under review)
 
 request-id is assigned and used inside of each project now,
 so x-service-request-id looks good. On the other hand,
 if we have a plan to enhance request-id across projects,
 x-openstack-request-id looks better.
 
 My vote is for:
 
 x-openstack-request-id
 
 With an implementation of create a request UUID if none exists yet in
 some standardized WSGI middleware...
 
 Agreed. I don't think I see any value in having these have different
 service names, having just x-openstack-request-id across all the
 services seems a far better idea, and come back through and fix nova and
 cinder to be that as well.
 
 +1 
 
 An openstack request id should be service agnostic to allow tracking of a 
 request across many services (e.g. a call to nova to boot a VM should 
 generate a request id that is provided to other services in requests to 
 provision said VM).  All services would ideally share a facility for 
 generating new request ids and for securely accepting request ids from other 
 services.
 
 
 m.
 
 
  -Sean
 
 -- 
 Sean Dague
 http://dague.net
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Thursday subteam meeting

2013-12-01 Thread IWAMOTO Toshihiro
At Fri, 29 Nov 2013 07:25:54 +0900,
Itsuro ODA wrote:
 
 Hi Eugene,
 
 Thank you for the response.
 
 I have a comment.
 I think a 'provider' attribute should be added to the loadbalancer resource
 and used rather than the pool's 'provider', since I think using multiple
 drivers within a loadbalancer does not make sense.

There can be a 'provider' attribute in a loadbalancer resource, but, to
maintain API compatibility, the 'provider' attribute in pools should remain
the same.
Is there any other attribute planned for the loadbalancer resource?

 What do you think ?
 
 I'm looking forward to your code going up!
 
 Thanks.
 Itsuro Oda
 
 On Thu, 28 Nov 2013 16:58:40 +0400
 Eugene Nikanorov enikano...@mirantis.com wrote:
 
  Hi Itsuro,
  
  I've updated the wiki with some examples of cli workflow that illustrate
  proposed API.
  Please see the updated page:
  https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance#API_change
  
  Thanks,
  Eugene.

--
IWAMOTO Toshihiro

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] why can't we update vpn service and IPsecSiteConnection when they are in pending_create state

2013-12-01 Thread Yongsheng Gong
Currently, when a vpn service and an IPsecSiteConnection are created, their
state is PENDING_CREATE, which disallows updating, per:
def assert_update_allowed(self, obj):
    status = getattr(obj, 'status', None)
    _id = getattr(obj, 'id', None)
    if utils.in_pending_status(status):
        raise vpnaas.VPNStateInvalidToUpdate(id=_id, state=status)

So why can't we update these objects just after we have created them?

I think for IPsecSiteConnection we should be able to update descriptive
fields such as name, and for vpnservice objects we should be able to do
updates if there is no IPsecSiteConnection for them.
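
A minimal sketch of that relaxation (assuming a changed_fields argument is
plumbed through from the update call; this is not existing Neutron code):

    COSMETIC_FIELDS = {'name', 'description'}

    def assert_update_allowed(self, obj, changed_fields):
        status = getattr(obj, 'status', None)
        _id = getattr(obj, 'id', None)
        if utils.in_pending_status(status):
            # Purely descriptive changes don't touch device state, so they
            # are safe even while the backend is still provisioning.
            if set(changed_fields) <= COSMETIC_FIELDS:
                return
            raise vpnaas.VPNStateInvalidToUpdate(id=_id, state=status)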

Regards,
Yong Sheng Gong
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] VMware Workstation / Fusion / Player Nova driver

2013-12-01 Thread Chmouel Boudjnah
On Sun, Dec 1, 2013 at 11:10 PM, Alessandro Pilotti 
apilo...@cloudbasesolutions.com wrote:

 We’ll follow up with a blog post with some additional news related to this
 project quite soon.


Really cool, I'd love to use that on my Apple laptop. It would be nice if
it went on stackforge, which would make it easier to contribute.

Chmouel.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev