Re: [Openstack-operators] [openstack-dev] PLEASE READ: VPNaaS API Change - not backward compatible

2015-08-27 Thread Paul Michali
Thanks for the links and thoughtful comments James!

In the Heat documentation, is the subnet ID being treated as optional (or
is the documentation not correct)? I think it is a required argument in the
REST API. Ref:
http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::VPNService

It really sounds like we need to support both versions simultaneously, to
avoid some of the complexity that you are mentioning (I didn't know about
the Heat aspects).

We (development) should focus on formulating a plan for supporting both
versions. Hopefully my proposal in https://review.openstack.org/#/c/191944/ is
a good starting point.  I'll respond to comments on that today.

Regards,

Paul Michali (pc_m)

P.S. BTW: No offense taken in comments - email is such an imprecise
communication medium.



On Wed, Aug 26, 2015 at 7:11 PM James Dempsey jam...@catalyst.net.nz
wrote:

 On 26/08/15 23:43, Paul Michali wrote:
  James,
 
  Great stuff! Please see @PCM in-line...
 
  On Tue, Aug 25, 2015 at 6:26 PM James Dempsey jam...@catalyst.net.nz

 SNIP

  1) Horizon compatibility
 
  We run a newer version of horizon than we do neutron.  If Horizon
  version X doesn't work with Neutron version X-1, this is a very big
  problem for us.
 
 
  @PCM Interesting. I always thought that Horizon updates lagged Neutron
  changes, and this wouldn't be a concern.
 

 @JPD
 Our installed Neutron typically lags Horizon by zero or one release.  My
 concern is how will Horizon version X cope with a point-in-time API
 change?  Worded slightly differently: We rarely update Horizon and
 Neutron at the same time so there would need to be a version(or
 versions) of Horizon that could detect a Neutron upgrade and start using
 the new API.  (I'm fine if there is a Horizon config option to select
 old/new VPN API usage.)

 
 
 
  2) Service interruption
 
  How much of a service interruption would the 'migration path' cause?
 
 
  @PCM The expectation of the proposal is that the migration would occur as
  part of the normal OpenStack upgrade process (new services installed,
  current services stopped, database migration occurs, new services are
  started).
 
  It would have the same impact as what would happen today, if you update
  from one release to another. I'm sure you folks have a much better handle
  on that impact and how to handle it (maintenance windows, scheduled
  updates, etc).
 

 @JPD This seems fine.

 
  We
  all know that IPsec VPNs can be fragile...  How much of a guarantee will
  we have that migration doesn't break a bunch of VPNs all at the same
  time because of some slight difference in the way configurations are
  generated?
 
 
  @PCM I see the risk as extremely low. With the migration, the end result
 is
  really just moving/copying fields from one table to another. The
 underlying
  configuration done to *Swan would be the same.
 
  For example, the subnet ID, which is specified in the VPN service API and
  stored in the vpnservices table, would be stored in a new vpn_endpoints
  table, and the ipsec_site_connections table would reference that entry
  (rather than looking up the subnet in the vpnservices table).
 

 @JPD This makes me feel more comfortable; thanks for explaining.

 
 
  3) Heat compatibility
 
  We don't always run the same version of Heat and Neutron.
 
 
  @PCM I must admit, I've never used Heat, and am woefully ignorant about
 it.
  Can you elaborate on Heat concerns as may be related to VPN API
 differences?
 
  Is Heat being used to setup VPN connections, as part of orchestration?
 

 @JPD
 My concerns are two-fold:

 1) Because Heat makes use of the VPNaaS API, it seems like the same
 situation exists as with Horizon.  Some version or versions of Heat will
 need to be able to make use of both old and new VPNaaS APIs in order to
 cope with a Neutron upgrade.

 2) Because we use Heat resource types like
 OS::Neutron::IPsecSiteConnection [1], we may lose the ability to
 orchestrate VPNs if endpoint groups are not added to Heat at the same time.


 Number 1 seems like a real problem that needs a fix.  Number 2 is a fact
 of life that I am not excited about, but am prepared to deal with.

 Yes, Heat is being used to build VPNs, but I am prepared to make the
 decision on behalf of my users... VPN creation via Heat is probably less
 important than the new VPNaaS features, but it would be really great if
 we could work on the updated heat resource types in parallel.

 [1]

 http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Neutron::IPsecSiteConnection

 
 
 
 
  Is there pain for the customers beyond learning about the new API
  changes
  and capabilities (something that would apply whether there is backward
  compatibility or not)?
 
 
  See points 1,2, and 3 above.
 
 
 
  Another implication of not having backwards compatibility would be that
  end-users would need to immediately switch to using the new API, once
 the
  migration occurs, versus doing so 

Re: [Openstack-operators] How to restrict AvZones to Tenants

2015-08-27 Thread raju
Tried the way below, but I'm still able to access the other AZs as well:

nova aggregate-set-metadata aggregate_ID filter_tenant_id=tenant_ID

Thanks for all your replies, appreciate it.


On Wed, Aug 26, 2015 at 9:55 PM, gustavo panizzo gfa g...@zumbi.com.ar
wrote:

 On Thu, Aug 27, 2015 at 11:26:42 +1000, Marcus Furlong wrote:
  On 27 August 2015 at 06:48, raju raju.r...@gmail.com wrote:
   Hi,
  
   I want to restrict an AZ to a particular tenant so that users in that
   tenant can only see that AZ in the drop-down while provisioning
   instances.

 You can add a *filter_tenant_id* key to the aggregate that backs the AZ
 and enable the AggregateMultiTenancyIsolation scheduler filter. That will
 exclude other tenants from those hypervisors; the AZ will still be visible
 to other tenants, but they won't be able to use it.
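For reference, the full sequence described above would look roughly like this (aggregate, host, and tenant IDs are placeholders; the filter list shown is only an example, so append AggregateMultiTenancyIsolation to whatever filters you already use):

```
# On the API node: create an aggregate backing the AZ, add its
# hypervisors, and tag it with the tenant allowed to use it
nova aggregate-create private-agg private-az
nova aggregate-add-host private-agg compute-01
nova aggregate-set-metadata private-agg filter_tenant_id=<tenant_ID>

# In /etc/nova/nova.conf on the scheduler nodes, the metadata key only
# takes effect if the filter is enabled, e.g.:
#   scheduler_default_filters = AggregateMultiTenancyIsolation,AvailabilityZoneFilter,RamFilter,ComputeFilter
```

Note this isolates scheduling only; the AZ remains visible to other tenants, as noted above.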

 --
 1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333

 keybase: http://keybase.io/gfa

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [Large Deployment Team] Meeting in an hour

2015-08-27 Thread Matt Van Winkle
Just a gentle reminder since we bumped it a week for the mid-cycle.  I'll be 
hopping on from OpenStack Silicon Valley.  If I am late joining, someone start 
the meeting for me.

Thanks!
VW

Sent from my iPhone


Re: [Openstack-operators] IP - availability monitoring

2015-08-27 Thread Salvatore Orlando
I am not going to be too picky either and play the role of the defender of
the REST principles.

While we can surely think about a solution that fits nicely into the
current APIs and also gives you the ability to query all the subnets at
once, I also realise that, as a practical matter, holding out for an ideal
solution likely implies delays.
I regard this as a useful feature for operators, and I am therefore not
opposed to implementing the extension as it is. At the end of the day it
only provides information; it does not alter the state of any Neutron
resource.

At some point in the future we can then converge on an API to determine
resource usage that will also include the functionality exposed by your API.

Salvatore

On 26 August 2015 at 16:09, Kris G. Lindgren klindg...@godaddy.com wrote:

 Hello,

 As you know, there has been much discussion around the naming and the URL
 pathing for the ip-usages extension.  We also discussed this at the Neutron
 mid-cycle.  Since we are the ones that made the extension, we use it to
 help with scheduling in our layer 3 network design.  We have no preference
 as to the URLs; the only thing that we want to maintain is the ability to
 query all subnets at once.  We have a nova scheduling filter that makes a
 call into the ip-usages extension.  Otherwise, every time we provision a VM
 we would have to make N calls to get all the subnets and their usage.
 

 Kris Lindgren
 Senior Linux Systems Engineer
 GoDaddy, LLC.

 From: Salvatore Orlando salv.orla...@gmail.com
 Date: Tuesday, August 25, 2015 at 4:33 PM
 To: Assaf Muller amul...@redhat.com
 Cc: openstack-operators@lists.openstack.org 
 openstack-operators@lists.openstack.org
 Subject: Re: [Openstack-operators] IP - availability monitoring

 Since the specification linked by Assaf is without doubt valuable for
 operators, I think the drivers team might consider it for inclusion in the
 Liberty release.
 Unfortunately the specification and the patch have not received many
 reviews, but that can still be sorted, especially considering that the
 patch's size is manageable and its impact contained.

 Nevertheless, Daniel in his post referred to providing information about
 IP usage in resources like subnets, whereas the patch under review
 proposes the addition of a new read-only resource called 'network_ip_usage'.
 The only thing I'd change is that I'd make this information available in a
 different way.
 For instance:

 - through a sub-URL of subnets: GET /v2.0/subnets/<id>/ip_usage
 - through a query parameter on subnets: GET /v2.0/subnets/<id>?ip_usage=True
 - making IPs a read-only resource: GET /v2.0/ips?subnet_id=<id>&count=True

 I think that, from a user perspective, the latter would be the most elegant
 and simple to use, but it would require additional work to introduce
 resource counting in the Neutron API; and for this there's an old spec too
 [1]. Having operators provide feedback on how they reckon this information
 is best consumed would be valuable.

 [1] https://review.openstack.org/#/c/102199/

 Salvatore






 On 24 August 2015 at 03:21, Assaf Muller amul...@redhat.com wrote:



 On Sun, Aug 23, 2015 at 8:23 PM, Daniel Speichert dan...@speichert.pl
 wrote:

 On 8/22/2015 23:24, Balaji Narayanan (பாலாஜி நாராயணன்) wrote:
  Hello Operators,
 
  In the capacity management discussions at the Ops Summit last week, I
  thought there was a some discussion on monitoring of fixed / floating
  subnets and availability.
 
  At Yahoo, we use nova-network and have an API extension available for
  reporting how many IP subnets are configured on a cluster and how many
  of them are used / remaining. We use this to trigger an alert and/or
  add additional subnets to the cluster.
 
  If there is enough interest in this, we can look at pushing it upstream.
 
  Here is a blue print that vilobh wrote initially for this -
  https://review.openstack.org/#/c/94299/
 This sounds like a very useful extension, considering there are really no
 quotas for IP addresses and IPs are a scarce resource.
 I'm aware of multiple big private cloud operators using custom scripts
 to generate reports of available IP addresses.

 I'm pretty sure an extension like this would be great for neutron (I'm
 not using nova-network). Considering that most networking scenarios
 (flat, provider networks, floating IPs with L3) have subnets as a
 resource in neutron, with allocation pools, it seems enough to create an
 extension that would provide statistics for a subnet or summary
 statistics for all subnets within a network if so requested.
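The per-subnet summary described above can be sketched with Python's standard ipaddress module; the function and field names here are illustrative, not the actual schema of the proposed extension:

```python
import ipaddress

def subnet_ip_usage(allocation_pools, used_ips):
    """Return total/used/available counts for one subnet.

    allocation_pools: list of (start, end) address strings, as stored on a
    Neutron subnet; used_ips: number of addresses already allocated.
    """
    total = 0
    for start, end in allocation_pools:
        first = int(ipaddress.ip_address(start))
        last = int(ipaddress.ip_address(end))
        total += last - first + 1  # pools are inclusive ranges
    return {"total_ips": total,
            "used_ips": used_ips,
            "available_ips": total - used_ips}

# Example: a /24 with the usual .2-.254 allocation pool and 30 ports
stats = subnet_ip_usage([("192.0.2.2", "192.0.2.254")], used_ips=30)
print(stats)  # {'total_ips': 253, 'used_ips': 30, 'available_ips': 223}
```

Summary statistics for a network would then just sum these dictionaries over its subnets.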

 I can work on a new blueprint version for neutron.


 There already is one.

 Code:
 https://review.openstack.org/#/c/212955/

 RFE bug:
 https://bugs.launchpad.net/neutron/+bug/1457986

 Blueprint:
 https://blueprints.launchpad.net/neutron/+spec/network-ip-usage-api

 Spec:

 https://review.openstack.org/#/c/180803/5/specs/liberty/network-ip-usage-api.rst



 

Re: [Openstack-operators] Juno and Kilo Interoperability

2015-08-27 Thread Eren Türkay
Hello Sam and David,

Thank you for the responses, I really appreciate them.


 With clients, the only issue I've had is with the designate client not being
 backwards compatible between Icehouse and Juno. Ceilometer changed the way
 they signed messages between Icehouse and Juno, which was a pain, so we had
 to set up parallel virtual hosts and collectors to push it into Mongo.

We only have Juno and Kilo in the environment. I haven't seen a major change
in Ceilometer in Kilo, and I believe it should be fine since we don't have N
and N-2 versions mixed in the environment.

 All the APIs are pretty stable, so it shouldn't really matter what version
 of, say, Keystone can work with what version of Nova, etc. We basically
 take it for granted now, although of course test in your own env.

Yeah, I think so. I will test it in the next few days.

 With nova, make sure you set upgrade_levels so your nova control can talk
 to your computes that are on version N-1, etc.
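Concretely, the upgrade_levels pin quoted above is a nova.conf setting on the control-plane nodes; a release alias (or an explicit RPC version number) pins compute RPC traffic to the older format until every nova-compute is upgraded:

```
# /etc/nova/nova.conf on control-plane nodes during a rolling upgrade
[upgrade_levels]
compute = juno
```

Remove the pin once all computes are running Kilo.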

What I will have for the second site is Kilo neutron/cinder/glance/nova. Juno
keystone and Juno clients (Horizon) will be shared. Since nova-api,
nova-compute, and nova-scheduler will be Kilo in the second site, there should
be no problem. I believe Juno Horizon will make requests to the Kilo
endpoints, which are backward-compatible, and the Kilo site will do the rest.

So, in summary, we have the same version inside a site, but different versions
in the whole environment for now. (Site A is Juno, Site B is Kilo, and Juno
keystone/horizon is shared). I think it will just work. When upgrading the Juno
site, I will start with nova and will take your responses into account. That's
when I will have Juno+Kilo in one site.

Regards,


-- 
Eren Türkay, System Administrator
https://skyatlas.com/ | +90 850 885 0357

Yildiz Teknik Universitesi Davutpasa Kampusu
Teknopark Bolgesi, D2 Blok No:107
Esenler, Istanbul Pk.34220





[Openstack-operators] [openstack][openstack-dev][openstack-operators][chef] stable/kilo is born!

2015-08-27 Thread JJ Asghar

I'd like to announce that we have stamped stable/kilo.

I'd like to congratulate everyone who has committed and helped the
OpenStack-Chef project get to this milestone.

We have made tremendous progress with the limited resources our project has.

If you are using or are thinking of using the Chef cookbooks please
don't hesitate to come round to #openstack-chef and help out, our
project is always looking for feedback.

If you don't know, our meeting time is here[1] and most of the cores are
available in the channel during US business hours.

[1]: https://wiki.openstack.org/wiki/Meetings/ChefCookbook

Cheers!

-- 
Best Regards,
JJ Asghar
c: 512.619.0722 t: @jjasghar irc: j^2

