Re: [Openstack-operators] [nova] Backlog Specs: a way to send requirements to the developer community

2015-05-14 Thread Maish Saidel-Keesing

On 05/14/15 21:04, Boris Pavlovic wrote:

John,

I believe that the backlog should be different, much simpler than specs.

IMHO, operators don't have time for / don't want to write long specs
and analyze how they align with other specs,
or how they should be implemented and how they impact
performance/security/scalability. They just want
to provide feedback and someday get it implemented/fixed.

In Rally we chose a different way, called feature requests.
The process is the same as for specs, but the template is much simpler.
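Roughly, a request boils down to something like this (a simplified
sketch from memory, not the exact template; the real one is linked
below):

    Feature request title
    =====================

    Use Case
    --------
    Who needs this and why, in a paragraph or two.

    Problem Description
    -------------------
    What doesn't work (or doesn't exist) today.

    Possible Solution
    -----------------
    Optional: any idea of how it could be done.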

Bravo

Can we please have this as a default template and the default way for
operators to submit a feature request for EVERY and ALL of the
OpenStack projects?





Here is the page:
https://rally.readthedocs.org/en/latest/feature_requests.html

And here is the sample of feature request:
https://rally.readthedocs.org/en/latest/feature_request/launch_specific_benchmark.html


Best regards,
Boris Pavlovic


On Thu, May 14, 2015 at 8:47 PM, John Garbutt j...@johngarbutt.com wrote:

Hi,

I was talking with Matt (VW) about how best some large deployment
working sessions could send their requirements to Nova.

As an operator, if you have a problem that needs fixing or a use case
that needs addressing, a great way of raising that issue with the
developer community is a Backlog nova-spec.

You can read more about Nova's backlog specs here:
http://specs.openstack.org/openstack/nova-specs/specs/backlog/

Any questions, comments or ideas, please do let me know.

Thanks,
John

PS
In Kilo we formally started accepting backlog specs,
although we are
only just getting the first of these submitted now. There is
actually
a patch to fix up how they get rendered:
https://review.openstack.org/#/c/182793/2



--
Best Regards,
Maish Saidel-Keesing


[Openstack-operators] [nova] Backlog Specs: a way to send requirements to the developer community

2015-05-14 Thread John Garbutt
Hi,

I was talking with Matt (VW) about how best some large deployment
working sessions could send their requirements to Nova.

As an operator, if you have a problem that needs fixing or a use case
that needs addressing, a great way of raising that issue with the
developer community is a Backlog nova-spec.

You can read more about Nova's backlog specs here:
http://specs.openstack.org/openstack/nova-specs/specs/backlog/
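The mechanics are the usual Gerrit workflow; roughly (the spec file
name below is just an example):

    # needs git-review installed and a Gerrit account set up
    git clone https://git.openstack.org/openstack/nova-specs
    cd nova-specs
    # describe the problem / use case under specs/backlog/
    $EDITOR specs/backlog/my-use-case.rst
    git commit -a -m "Add backlog spec for my use case"
    git review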

Any questions, comments or ideas, please do let me know.

Thanks,
John

PS
In Kilo we formally started accepting backlog specs, although we are
only just getting the first of these submitted now. There is actually
a patch to fix up how they get rendered:
https://review.openstack.org/#/c/182793/2



Re: [Openstack-operators] how to filter outgoing VM traffic in icehouse

2015-05-14 Thread Abel Lopez
I heard lots of talk in Paris about having nova-network reach feature parity 
with neutron.
With neutron, you can specify egress/ingress rules in Horizon, so if 
nova-network ever got feature parity, it should work *someday*
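For reference, the CLI equivalent looks roughly like this (the group
name and address are made up):

    # security groups allow all egress by default, so first find and
    # delete the default IPv4/IPv6 egress rules on the group...
    neutron security-group-rule-list | grep egress
    neutron security-group-rule-delete <rule-uuid>
    # ...then allow traffic to just the IPs you want
    neutron security-group-rule-create --direction egress \
        --protocol tcp --remote-ip-prefix 203.0.113.10/32 my-secgroup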

 On May 14, 2015, at 10:10 AM, Stephen Cousins steve.cous...@maine.edu wrote:
 
 Is there any plan for egress rules to be managed in Horizon?
 
 
 
 On Wed, May 13, 2015 at 5:47 PM, Kevin Bringard (kevinbri) 
 kevin...@cisco.com wrote:
 Ah, I don't believe nova-network supports EGRESS rules.
 
 On 5/13/15, 3:41 PM, Gustavo Randich gustavo.rand...@gmail.com wrote:
 
 Hi, sorry, I forgot to mention: I'm using nova-network
 
 
 
 On Wed, May 13, 2015 at 6:39 PM, Abel Lopez alopg...@gmail.com wrote:
 
 Yes, you can define egress security group rules.
 
  On May 13, 2015, at 2:32 PM, Gustavo Randich gustavo.rand...@gmail.com wrote:
 
  Hi,
 
  Is there any way to filter outgoing VM traffic in Icehouse, preferably
 using security groups? I.e. deny all traffic except to certain IPs
 
  Thanks!
 
 
 
 
 
 
 
 
 
 
 
 
 





[Openstack-operators] [openstack-operators][chef] OpenStack-Chef Official Ops Meetup

2015-05-14 Thread JJ Asghar
 On May 12, 2015, at 3:00 PM, JJ Asghar jasg...@chef.io wrote:
 
 I’d like to announce the OpenStack-Chef Ops Meetup in Vancouver. We have an 
 etherpad[1] going with topics people would like to discuss. I haven’t found a 
 room or space for us yet, but when I do I’ll comment back on this thread and 
 add it to the etherpad, (right now looking at the other declaration of time 
 slots, I think Wednesday 1630 is going to be our best bet)  As with the Paris 
 Ops Meetup we had a lot of great questions and use cases discussed but it was 
 very unstructured and I got some negative feedback about that, hence the 
 etherpad.

Hey Everyone, so it seems I need to take a step back because I got some wires 
crossed.  

Turns out the Ops meetup is our official meeting space during the Summit, which
I put on the etherpad[1]. I'd like to keep this on topic per the agenda
we've created on that etherpad. I think this is best: it gives the larger
community of OpenStack+Chef users a chance to come together and discuss future
plans and questions face to face, in a real-time discussion.

Just so we have it documented in this email: the meetup is in Room 217 on
Wednesday at 9:50 AM[2].

Though this brings up the topic of the Dev meetup. As the majority of the core
members are going to be at the Summit, we still need a time and a place for us
to discuss the stable/kilo branching. I propose we find a working space and
just spend an hour or two together. I've created a doodle[3] with some time
slots; if you could put your ideal time in, including your email address, we
can organize off the mailing list. I'll @core this in the IRC channel too.

This is also open to the community as a whole, so if you’d like to post your 
ideal time don’t hesitate to.

 If I can ask for a volunteer note-taker to step up and contact me directly,
 so that when we start the meeting we can just jump in, that would be amazing.
 I can sweeten the deal with some Chef swag if you step up too ;).

Just to follow up on this, I haven't had anyone step up yet, so if you were
thinking about it, please don't hesitate to reach out.


[1]: https://etherpad.openstack.org/p/YVR-ops-chef
[2]: http://sched.co/3D8a
[3]: http://doodle.com/gb4ww6izbwg8k9di


[Openstack-operators] Vancouver Summit - Customer On-boarding/Off-boarding

2015-05-14 Thread Joseph Bajin
Hi,

I will be moderating the Customer On-Boarding/Off-Boarding[1] session at
the summit, and wanted to make sure we get as much feedback into the
etherpad[2] as possible.

Both adding and removing users seem like a pretty simple idea, but it gets
complicated pretty quickly. So any suggestions, recommendations or examples
are welcome!

Thanks

Joe


[1] - http://sched.co/3C4p
[2] - https://etherpad.openstack.org/p/YVR-ops-customer-onboarding


Re: [Openstack-operators] Venom vulnerability

2015-05-14 Thread James Page

Hi Basil

On 14/05/15 16:04, Basil Baby wrote:
 I can see the patch for CVE-2015-3456 updated to the qemu-kvm package
 on the Precise - Icehouse branch:
 https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/icehouse-staging/+build/7425816

 But, on precise-havana it is not yet updated. (Latest available is
 https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/havana-staging/+build/5955528)
 Is there a plan to update the package?

The Havana Cloud Archive pocket went EOL in July last year, so it won't
receive any update for this vulnerability. Icehouse is the oldest
supported release on 12.04 in the Cloud Archive.

Sorry - probably not the answer you wanted.

Regards

James

--
James Page
Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@debian.org



Re: [Openstack-operators] Packaging, Deployment CI/CD - Moderators Needed (Vancouver)

2015-05-14 Thread Matt Kassawara
Wouldn't this replace the database session... not the packaging session?

On Thu, May 14, 2015 at 11:10 AM, matt m...@nycresistor.com wrote:

 A worthwhile discussion.  But are we talking about this as it relates to
 packaging or as a separate track related to the architectural and
 procedural challenges?

 -matt

 On Thu, May 14, 2015 at 12:06 PM, Matt Kassawara mkassaw...@gmail.com
 wrote:

 I propose that we spend more time discussing how to improve the
 networking guide for operators. Can we enhance the structure/content of
 existing deployment scenarios to appeal to more operators, particularly
 those looking to jump from nova-net to neutron? Can we get more operators
 to help contribute additional practical deployment scenarios?

 On Tue, May 12, 2015 at 11:44 PM, Tom Fifield t...@openstack.org wrote:

 Packaging is now solved.

 We still need moderators for:


 1. Database - Big room discussion session
Tuesday 3:40 - 4:30


 Since there hasn't been a response so far, this session will likely be
 cancelled.


 In the event this session is cancelled, something should replace it. If
 you're willing to moderate a session that isn't Database, what would you
 like to do? :)



 Regards,


 Tom


 On 11/05/15 11:46, Tom Fifield wrote:
  Ok, I've made that switch, so now we have a need for moderators for:
 
  1. Database - Big room discussion session
 Tuesday 3:40 - 4:30
 
 
  2. Packaging Working Group - Small room working session
 Wednesday 4:30 - 6:00
 
  If you're interested, please have a look at the Moderators Guide:
  https://wiki.openstack.org/wiki/Operations/Meetups#Moderators_Guide
 
  and get in touch off-list :)
 
  We can take a couple of people for each session to share the load.
 
 
  Regards,
 
 
  Tom
 
 
  On 11/05/15 11:36, Matt Fischer wrote:
  Tom,
 
  This doesn't solve your problem, but I will gladly swap Database for
  Deployment/CI/CD. I have more experience on that topic and am even
  presenting on it.
 
  On Sun, May 10, 2015 at 9:31 PM, Tom Fifield t...@openstack.org wrote:
 
  Hi all,
 
  We're in need of moderators for these ops sessions in Vancouver:
 
  1. CI/CD and Deployment - Big room discussion session
 Tuesday 3:40 - 4:30
 
 
  2. Packaging Working Group - Small room working session
 Wednesday 4:30 - 6:00
 
 
  If you're interested, please have a look at the Moderators Guide:
 
 https://wiki.openstack.org/wiki/Operations/Meetups#Moderators_Guide
 
  and get in touch off-list :)
 
  We can take a couple of people for each session to share the load.
 
 
  Regards,
 
 
  Tom
 


Re: [Openstack-operators] Venom vulnerability

2015-05-14 Thread Basil Baby
If there is anyone here from Canonical who maintains ubuntu-cloud.archive.canonical.com:

I can see the patch for CVE-2015-3456 applied to the qemu-kvm package on
the Precise - Icehouse branch:
https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/icehouse-staging/+build/7425816

But, on precise-havana it is not yet updated.
(Latest available is
https://launchpad.net/~ubuntu-cloud-archive/+archive/ubuntu/havana-staging/+build/5955528
)
Is there a plan to update the package?

Thanks,
-Basil

On Wed, May 13, 2015 at 7:25 PM, Matt Van Winkle mvanw...@rackspace.com
wrote:

 It would.  I'd test though.  Depending on the amount of RAM and the I/O of
 the underlying host, we saw that some larger instances could take longer
 to suspend/resume than shutdown/power up.  You maintain the state of the
 system, but may see longer downtime for the instance.  Something to
 think about.

 Thanks!
 Matt

 On 5/13/15 6:19 PM, Favyen Bastani fbast...@perennate.com wrote:

 Would a virsh suspend/save/restore/resume operation accomplish a similar
 result to the localhost migration?
 
 Best,
 Favyen
 
 On 05/13/2015 12:44 PM, Matt Van Winkle wrote:
  Yeah, something like that would be handy.
 
  From: matt m...@nycresistor.com
  Date: Wednesday, May 13, 2015 10:29 AM
  To: Daniel P. Berrange berra...@redhat.com
  Cc: Matt Van Winkle mvanw...@rackspace.com,
  openstack-operators@lists.openstack.org
  Subject: Re: [Openstack-operators] Venom vulnerability
 
  honestly that seems like a very useful feature to ask for...
 specifically for upgrading qemu.
 
  -matt
 
  On Wed, May 13, 2015 at 11:19 AM, Daniel P. Berrange berra...@redhat.com wrote:
  On Wed, May 13, 2015 at 03:08:47PM +, Matt Van Winkle wrote:
  So far, your assessment is spot on from what we've seen.  A migration
  (if you have live migrate that's even better) should net the same
 result
  for QEMU.  Some have floated the idea of live migrate within the same
  host.  I don't know if nova out of the box would support such a thing.
 
  Localhost migration (aka migration within the same host) is not
 something
  that is supported by libvirt/KVM. Various files QEMU has on disk are
 based
  on the VM name/uuid and you can't have 2 QEMU processes on the host
 having
  the files at the same time, which precludes localhost migration working.
 
  Regards,
  Daniel
 
 
 
  4fsPVdOA3JSrR9nl10yKsxlfbeTh3saPP2GvDd7TWmC1AdCej64RyyNojJONvbi2
  

[Openstack-operators] ha queues Juno periodic rabbitmq errors

2015-05-14 Thread Pedro Sousa
Hi all,

I'm using Juno and occasionally see this kind of error when I reboot one of
my rabbit nodes:

MessagingTimeout: Timed out waiting for a reply to message ID
e95d4245da064c779be2648afca8cdc0

I use ha queues in my openstack services:


rabbit_hosts=192.168.113.206:5672,192.168.113.207:5672,192.168.113.208:5672

rabbit_ha_queues=True

Has anyone experienced these issues? Is this an oslo bug or something related?

Regards,
Pedro Sousa


Re: [Openstack-operators] ha queues Juno periodic rabbitmq errors

2015-05-14 Thread Kevin Bringard (kevinbri)
If you're using Rabbit 3.x you need to enable HA queues via policy on the
rabbit server side.

Something like this:

rabbitmqctl set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'


Obviously, tailor it to your own needs :-)

We've also seen issues with TCP_RETRIES2 needing to be turned way down
because when rebooting the rabbit node, it takes quite some time for the
remote host to realize it's gone and tear down the connections.
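On Linux that's the net.ipv4.tcp_retries2 sysctl; for example (the
value is illustrative, tune it for your environment):

    # /etc/sysctl.conf: fail TCP retransmits faster so a dead rabbit
    # node is noticed in seconds rather than many minutes
    net.ipv4.tcp_retries2 = 5

    # apply without a reboot
    sysctl -p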

On 5/14/15, 9:23 AM, Pedro Sousa pgso...@gmail.com wrote:

Hi all,

I'm using Juno and occasionally see this kind of error when I reboot one
of my rabbit nodes:

MessagingTimeout: Timed out waiting for a reply to message ID
e95d4245da064c779be2648afca8cdc0

I use ha queues in my openstack services:

rabbit_hosts=192.168.113.206:5672,192.168.113.207:5672,192.168.113.208:5672

rabbit_ha_queues=True

Has anyone experienced these issues? Is this an oslo bug or something related?

Regards,
Pedro Sousa











Re: [Openstack-operators] ha queues Juno periodic rabbitmq errors

2015-05-14 Thread Pedro Sousa
Hi Kevin,

thank you for the reply. I'm using:

rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'

I will test with 'ha-sync-mode: automatic' and net.ipv4.tcp_retries2=5.

Regards,
Pedro Sousa






On Thu, May 14, 2015 at 4:29 PM, Kevin Bringard (kevinbri) 
kevin...@cisco.com wrote:

 If you're using Rabbit 3.x you need to enable HA queues via policy on the
 rabbit server side.

 Something like this:

 rabbitmqctl set_policy ha-all "" '{"ha-mode":"all","ha-sync-mode":"automatic"}'


 Obviously, tailor it to your own needs :-)

 We've also seen issues with TCP_RETRIES2 needing to be turned way down
 because when rebooting the rabbit node, it takes quite some time for the
 remote host to realize it's gone and tear down the connections.



[Openstack-operators] [openstack-dev][openstack-operators][Rally][announce] What's new in Rally v0.0.4

2015-05-14 Thread Mikhail Dubov
Hi everyone,

Rally team is happy to announce that we have just cut the new release 0.0.4!

Release stats:

   - Commits: 87
   - Bug fixes: 21
   - New scenarios: 14
   - New contexts: 2
   - New SLA: 1
   - Dev cycle: 30 days
   - Release date: 14/May/2015

New features:

   - Rally can now generate load with users that already exist. This
   makes it possible to use Rally to benchmark OpenStack clouds that are
   using LDAP, AD or any other read-only keystone backend where it is not
   possible to create any users dynamically.
   - New decorator @osclients.Clients.register. This decorator adds new
   OpenStack clients at runtime. The added client will be available from
   osclients.Clients at the module level and cached.
   - Improved installation script. The installation script for Rally can
   now be run by an unprivileged user, supports different database types,
   allows specifying a custom python binary, and automatically installs
   needed software if run as root, etc.

For more details, take a look at the Release notes for 0.0.4:
https://rally.readthedocs.org/en/latest/release_notes/latest.html

Best regards,
Mikhail Dubov

Engineering OPS
Mirantis, Inc.
E-Mail: mdu...@mirantis.com
Skype: msdubov


Re: [Openstack-operators] ha queues Juno periodic rabbitmq errors

2015-05-14 Thread Kevin Bringard (kevinbri)


On 5/14/15, 9:45 AM, Pedro Sousa pgso...@gmail.com wrote:

Hi Kevin,

thank you for the reply. I'm using:

rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'

I will test with 'ha-sync-mode: automatic' and net.ipv4.tcp_retries2=5

I don't know that you need to set ha-sync-mode to automatic (I was just using
an example I found quickly on the internets), but I do think the
tcp_retries2 thing will help. I think we may have even set ours to 3...
I'd have to check. But fiddle with it to the point that it times out
connections quickly, without having false positives.

There's a doc RH wrote about it. It's specific to Oracle, but should be
portable.

https://www.redhat.com/promo/summit/2010/presentations/summit/decoding-the-code/fri/scott-945-tuning/summit_jbw_2010_presentation.pdf



Regards,
Pedro Sousa














Re: [Openstack-operators] Packaging, Deployment CI/CD - Moderators Needed (Vancouver)

2015-05-14 Thread matt
that makes more sense to me.

On Thu, May 14, 2015 at 12:13 PM, Matt Kassawara mkassaw...@gmail.com
wrote:

 Wouldn't this replace the database session... not the packaging session?

 On Thu, May 14, 2015 at 11:10 AM, matt m...@nycresistor.com wrote:

 A worthwhile discussion.  But are we talking about this as it relates to
 packaging or as a separate track related to the architectural and
 procedural challenges?

 -matt

 On Thu, May 14, 2015 at 12:06 PM, Matt Kassawara mkassaw...@gmail.com
 wrote:

 I propose that we spend more time discussing how to improve the
 networking guide for operators. Can we enhance the structure/content of
 existing deployment scenarios to appeal to more operators, particularly
 those looking to jump from nova-net to neutron? Can we get more operators
 to help contribute additional practical deployment scenarios?



Re: [Openstack-operators] [openstack-dev] [all] Technical Committee Highlights May 13, 2015

2015-05-14 Thread Robert Collins
On 15 May 2015 at 07:15, Anne Gentle annegen...@justwriteclick.com wrote:
 In response to the feedback during elections, the Technical Committee now
 has a subteam dedicated to communications. Below is a link to the first post
 in our revitalized series. As always, we're here for you and listening and
 adjusting.

 http://www.openstack.org/blog/2015/05/technical-committee-highlights-may-13-2015

Cool - thanks very much for leading this!

Uhm, one small erratum. My term just started :)

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [Openstack-operators] chef

2015-05-14 Thread JJ Asghar

 On May 14, 2015, at 1:21 AM, aishwarya.adyanth...@accenture.com wrote:
 
 Hi J,
 
 I ran the command 'knife node list' and found the list to be empty, while the
 'knife client list' command displays chef-validator and chef-webui. It seems
 like when I was creating the node through knife openstack server create, the
 node got launched successfully in the dashboard and I'm able to ssh to the
 node, but the command gives this error:
 
 Waiting for sshd to host 
 (x.x.x.x).done
 Doing old-style registration with the validation key at 
 /root/chef-repo/.chef/chef-validator.pem...
 Delete your validation key in order to use your user credentials instead
 
 Connecting to x.x.x.x
 FATAL: Check if --bootstrap-protocol and --image-os-type is correct. 
 connection closed by remote host
 ERROR: Net::SSH::Disconnect: Check if --bootstrap-protocol and 
 --image-os-type is correct. connection closed by remote host
 
 
 The command I'm using to create the node is:
 
 knife openstack server create -f 2 -I ubuntu_image -a
 external_id --network-ids alphanumeric_id -S chef_key -N node_name

I'm at a loss here. If this is an Ubuntu box, neither --bootstrap-protocol
nor --image-os-type should even be checked. This is also failing because the
connection was closed, so that's pointing to the VM in your OpenStack
instance. Do you have any networking oddities between your workstation and
this VM and cloud?

Also, I'm hoping that "ubuntu_image" is you scrubbing your UUID, because I've
seen some issues with using the "easy name" in Juno. I'm pretty sure it was
resolved in the 1.0.0 release though.

Actually, yeah: what version of OpenStack are you running against? Is this a
major public cloud or is it in-house?

I've personally tested knife-openstack 1.1.0 against a couple of major public
clouds, devstack, and the Chef-cookbook-built OpenStack instance, so I know we
have good coverage.

You said you can SSH into the box; that's good. So let's also take a step back.
Can you provision a machine with knife openstack server create? When you do, do
you see it pop up in nova list or the horizon dashboard? If so, then we can
determine that the creation is working, and we can move forward with the
network connection and bootstrapping.

In very simple terms the bootstrap literally SSHs into the box, pulls down 
chef, moves up the validation key and then runs chef-client. If you can SSH to 
it from the box you run your knife commands from, then there is no reason why 
this shouldn’t work.
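If it helps, I'd replay the bootstrap by hand; something like this (the
key path and address are placeholders):

    # prove SSH works from the exact box that runs knife
    ssh -i ~/.ssh/chef_key.pem ubuntu@x.x.x.x

    # then re-run the create with debug logging to see where it dies
    knife openstack server create -f 2 -I ubuntu_image -S chef_key \
        -N node_name -VV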

Anyway, this is the best I've got via email; I'm curious to see if any of this
helps you,
-JJ


Re: [Openstack-operators] [nova] Can we bump MIN_LIBVIRT_VERSION to 1.2.2 in Liberty?

2015-05-14 Thread Matt Riedemann



On 5/14/2015 2:59 PM, Kris G. Lindgren wrote:

How would this impact someone running juno nova-compute on rhel 6 boxes?
Or installing the python2.7 from SCL and running kilo+ code on rhel6?

For [3], couldn't we get the exact same information from /proc/cpuinfo?


Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.



On 5/14/15, 1:23 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:


The minimum required version of libvirt in the driver is 0.9.11 still
[1].  We've been gating against 1.2.2 in Ubuntu Trusty 14.04 since Juno.

The libvirt distro support matrix is here: [2]

Can we safely assume that people aren't going to be running libvirt
compute nodes on RHEL < 7.1 or Ubuntu Precise?

Regarding RHEL, I think this is a safe bet because in Kilo nova dropped
python 2.6 support and RHEL 6 doesn't have py27, so you'd be in trouble
running kilo+ nova on RHEL 6.x anyway.

There are some workarounds in the code [3] I'd like to see removed by
bumping the minimum required version.

[1]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py?id=2015.1.0#n335
[2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix
[3]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/host.py?id=2015.1.0#n754

--

Thanks,

Matt Riedemann






This would be Liberty, so when you upgrade nova-compute to Liberty you'd
also need to upgrade the host OS to something that supports libvirt >=
1.2.2.
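For anyone wondering where they stand today, checking is trivial (the
output shown is just an example):

    $ libvirtd --version
    libvirtd (libvirt) 1.2.2
    $ virsh --version
    1.2.2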


--

Thanks,

Matt Riedemann




[Openstack-operators] [all] Technical Committee Highlights May 13, 2015

2015-05-14 Thread Anne Gentle
In response to the feedback during elections, the Technical Committee now
has a subteam dedicated to communications. Below is a link to the first
post in our revitalized series. As always, we're here for you and listening
and adjusting.

http://www.openstack.org/blog/2015/05/technical-committee-highlights-may-13-2015
Thanks,
Anne

-- 
Anne Gentle
annegen...@justwriteclick.com


[Openstack-operators] [Telco][NFV] OpenStack Telco Working Group Vancouver session

2015-05-14 Thread Steve Gordon
Hi all,

I am very pleased to be facilitating the OpenStack Telco Working Group session
at the Vancouver summit. The session is scheduled as a working session on
Wednesday, May 20th @ 9:00 AM in East Building, Room 2/3. More details can be
found on the Liberty Design Summit schedule[0]. Please note that we have also
been allocated 30 minutes on Monday[1] to discuss with OPNFV Day participants
the purpose and activities of the working group at a high level. While this is
not intended as a working session, working group members are encouraged to
attend this and other OPNFV Day activities[2] that may be relevant to them.

During the IRC meetings we have put together a rough session agenda. I would be 
grateful if those interested would go ahead and continue to submit ideas for 
the agenda to the etherpad [3]. I look forward to seeing you all there!

Regards,

Steve

[0] - http://libertydesignsummit.sched.org/event/c6f3464285755aa4e52c64783288efcd
[1] - http://libertydesignsummit.sched.org/event/91f840a20957fe6dcc6d6281db6de7f7#.VVUCAeTofCE
[2] - https://openstacksummitmay2015vancouver.sched.org/overview/type/opnfv+day#.VVUCmOTofCE
[3] - https://etherpad.openstack.org/p/YVR-ops-telco



Re: [Openstack-operators] Venom vulnerability

2015-05-14 Thread Sławek Kapłoński
Hello,

So if I understand you correctly, it is not so dangerous if I'm using
libvirt with AppArmor and this libvirt is adding AppArmor rules for
every qemu process, yes?

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Wed, May 13, 2015 at 04:01:05PM +0100, Daniel P. Berrange wrote:
 On Wed, May 13, 2015 at 02:31:26PM +, Tim Bell wrote:
  
  Looking through the details of the Venom vulnerability,
  https://securityblog.redhat.com/2015/05/13/venom-dont-get-bitten/, it
  would appear that the QEMU processes need to be restarted.
  
  Our understanding is thus that a soft reboot of the VM is not sufficient
  but a hard one would be OK.
 
 Yes, the key requirement is that you get a new QEMU process running. So
 this means a save-to-disk followed by restore, or a shutdown + boot,
 or a live migration to another (patched) host.
 
 In current Nova code a hard reboot operation will terminate the QEMU
 process and then start it again, which is the same as shutdown + boot
 really. A soft reboot will also terminate the QEMU process and start
 it again, but when terminating it, it will try to do so gracefully,
 i.e. init gets a chance to do an orderly shutdown of services. A soft
 reboot, though, is not guaranteed to ever finish / happen, since it
 relies on a co-operating guest OS to respond to the ACPI signal. So
 soft reboot is probably not a reliable way of guaranteeing you get
 a new QEMU process.
 
 My recommendation would be a live migration, or save to disk and restore
 though, since those both minimise interruption to your guest OS workloads
 where as a hard reboot or shutdown obviously kills them.
 
 
 Also note that this kind of bug in QEMU device emulation is the poster
 child example for the benefit of having sVirt (either SELinux or AppArmor
 backends) enabled on your compute hosts. With sVirt, QEMU is restricted
 to only access resources that have been explicitly assigned to it. This
 makes it very difficult (likely/hopefully impossible[1]) for a compromised
 QEMU to be used to break out to compromise the host as a whole, and likewise
 protects against compromising other QEMU processes on the same host. The
 common Linux distros like RHEL, Fedora, Debian, Ubuntu, etc all have
 sVirt feature available and enabled by default and OpenStack doesn't
 do anything to prevent it from working. Hopefully no one is actively
 disabling it themselves leaving themselves open to attack...
 
 Finally QEMU processes don't run as root by default, they use a
 'qemu' user account with minimal privileges, which adds another layer
 of protection against total host compromise
 
 So while this bug is no doubt serious and worth patching asap, IMHO,
 it is not the immediate end of the world scale disaster that some
 are promoting it to be.
 
 
 NB, this mail is my personal analysis of the problem - please refer
 to the above linked redhat.com blog post and/or CVE errata notes,
 or contact Red Hat support team, for the official Red Hat view on
 this.
 
 Regards,
 Daniel
 
 [1] I'll never claim anything is 100% foolproof, but it is intended to
 to be impossible to escape sVirt, so any such viable escape routes
 would themselves be considered security bugs.
 -- 
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
 


Re: [Openstack-operators] Neutron/DVR scalability of one giant single tenant VS multiple tenants

2015-05-14 Thread Gustavo Randich
Thanks Kevin,

If I understood you well, scalability isn't impacted by the number of tenants,
but rather by the number of ports per network / security group / tenant router;
so, if I have a single giant tenant network with several thousand ports,
perhaps I'll have a problem.

Partitioning the load into various tenant networks should mitigate these
problems, independently of the total number of tenants. So I could keep
running the cloud fine with a single tenant owning several internal
networks, right?

Gustavo


On Thu, May 14, 2015 at 6:56 PM, Kevin Benton blak...@gmail.com wrote:

 Neutron scalability isn't impacted directly by the number of tenants
 so that shouldn't matter too much. The following are a few things to
 consider.

 Number of ports per security group: Every time a member of a security
 group (a port) is removed/added or has it's IP changed, a notification
 goes out to the L2 agents so they can update their firewall rules. If
 you have thousands of ports and lots of churn, the L2 agents will be
 busy all of the time processing the changes and may fall behind
 impacting the time it takes for ports to gain connectivity.

 Number of ports per network: Each network is a broadcast domain so a
 single network with hundreds of ports will get pretty chatty with
 broadcast and multicast traffic. Also, if you use l2pop, each l2 agent
 has to know the location of every port that shares a network with the
 ports on the agent. I don't think this has as much impact as the
 security groups updating, but it's something to keep in mind.

 Number of ports behind a single tenant router: Any traffic that goes
 to an external network that doesn't have a floating IP associated with
 it needs to go via the assigned centralized SNAT node for that router.
 If a lot of your VMs don't have floating IPs and generate lots of
 traffic, this single translation point will quickly become a
 bottleneck.

 Number of centralized SNAT agents: Even if you have lots of tenant
 routers to address the issue above, you need to make sure you have
 plenty of L3 agents with access to the external network and
 'agent_mode' set to 'dvr_snat' so they can be used as centralized SNAT
 nodes. Otherwise, if you only have one centralized SNAT node,
 splitting the traffic across a bunch of tenant routers doesn't buy you
 much.

 Let me know if you need me to clarify anything.

 Cheers,
 Kevin Benton

 On Thu, May 14, 2015 at 9:15 AM, Gustavo Randich
 gustavo.rand...@gmail.com wrote:
  Hi!
 
  We are evaluating the migration of our private cloud of several thousand
 VMs
  from multi-host nova-network to neutron/DVR. For historical reasons, we
  currently use a single tenant because group administration is made
 outside
  openstack (users don't talk to OS API). The number of compute nodes we
 have
  now is approx. 400, and growing.
 
  My question is:
 
  Srictly regarding the scalability and performance fo the DVR/Neutron
 virtual
  networking components inside compute nodes (OVS virtual switches,
 iptables,
  VXLAN tunnel mesh, etc.), should we mantain this single-tenant /
  single-network architecture in Neutron/DVR? Or should we partition our
 next
  cloud into several tenants each corresponding to different
 groups/verticals
  inside the company, and possibly each with their several private
 networks?
 
  Thanks!
 
 
 



 --
 Kevin Benton



Re: [Openstack-operators] [nova] Can we bump MIN_LIBVIRT_VERSION to 1.2.2 in Liberty?

2015-05-14 Thread Jesse Keating
I'm +1 on this. If people want to run Liberty on an old platform, the onus
is on them to figure out how to install the relevant deps on that platform.


- jlk

On Thu, May 14, 2015 at 2:33 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
wrote:



 On 5/14/2015 3:35 PM, Matt Riedemann wrote:



 On 5/14/2015 2:59 PM, Kris G. Lindgren wrote:

 How would this impact someone running juno nova-compute on rhel 6 boxes?
 Or installing the python2.7 from SCL and running kilo+ code on rhel6?

  For [3], couldn't we get the exact same information from /proc/cpuinfo?
 

 Kris Lindgren
 Senior Linux Systems Engineer
 GoDaddy, LLC.



 On 5/14/15, 1:23 PM, Matt Riedemann mrie...@linux.vnet.ibm.com
 wrote:

  The minimum required version of libvirt in the driver is 0.9.11 still
 [1].  We've been gating against 1.2.2 in Ubuntu Trusty 14.04 since Juno.

 The libvirt distro support matrix is here: [2]

  Can we safely assume that people aren't going to be running libvirt
  compute nodes on RHEL < 7.1 or Ubuntu Precise?

  Regarding RHEL, I think this is a safe bet because in Kilo nova dropped
  python 2.6 support and RHEL 6 doesn't have py27, so you'd be in trouble
  running kilo+ nova on RHEL 6.x anyway.

 There are some workarounds in the code [3] I'd like to see removed by
 bumping the minimum required version.

  [1]
  http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py?id=2015.1.0#n335
  [2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix
  [3]
  http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/host.py?id=2015.1.0#n754

 --

 Thanks,

 Matt Riedemann





 This would be Liberty, so when you upgrade nova-compute to Liberty you'd
 also need to upgrade the host OS to something that supports libvirt >=
 1.2.2.


 Here is the patch to see what this would look like:

 https://review.openstack.org/#/c/183220/


 --

 Thanks,

 Matt Riedemann




Re: [Openstack-operators] [openstack-dev] [all] Technical Committee Highlights May 13, 2015

2015-05-14 Thread Chris Dent

On Thu, 14 May 2015, Anne Gentle wrote:


In response to the feedback during elections, the Technical Committee now
has a subteam dedicated to communications. Below is a link to the first
post in our revitalized series. As always, we're here for you and listening
and adjusting.


Awesome, thanks very much for getting this rolling. A very promising
start.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



[Openstack-operators] OpenStack 2015.1.0 for Debian Sid and Jessie

2015-05-14 Thread Thomas Goirand

Hi,

I am pleased to announce the general availability of OpenStack 2015.1.0
(aka Kilo) in Debian unstable (aka Sid) and through the official Debian
backports repository for Debian 8.0 (aka Jessie).


Debian 8.0 Jessie just released
===
As you may know, Debian 8.0 was released on the 25th of April, just a 
few days before OpenStack Kilo (on the 30th of April). Just right after 
Debian Jessie got released, OpenStack Kilo was uploaded to unstable, and 
slowly migrated the usual way to the new Debian Testing, named Stretch.


A lot of new packages had to go through the Debian FTP masters' NEW
queue for review (they check mainly the copyright / licensing
information, but also whether the package conforms to the Debian policy).
I'd like to publicly thank Paul Tagliamonte from the Debian FTP
team for his prompt work, which allowed Kilo to reach the Debian
repositories just a few days after its release (in fact, Kilo was fully
available in unstable more than a week ago).


Debian Jessie Backports
===
Previously, each release of OpenStack, as a backport for Debian Stable, 
was only available through private repositories. This wasn't a 
satisfying solution, and we wanted to address it by uploading to the 
official Debian backports. And the result is now available: all of 
OpenStack Kilo has been uploaded to Debian jessie-backports. If you want 
to use these repositories, just add them to your sources.list (note that 
the Debian installer proposes to add it by default):


deb http://httpredir.debian.org/debian jessie-backports main

(of course, you can use any Debian mirror, not just the httpredir)
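Then, for example (nova-api here is just an illustration; pick whatever
component you need):

    apt-get update
    apt-get -t jessie-backports install nova-api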

All of the usual OpenStack components are currently available in the 
official backports, but there's still some more to come, like for 
example Heat, Murano, Trove or Sahara. For Heat, it's because we're 
still waiting for python-oslo.versionedobjects 0.1.1-2 to migrate to 
Stretch (as a rule: we can't upload to backports unless a package is 
already in Testing). For the last 3, I'm not sure if they will be 
backported to Jessie. Please provide your feedback and tell the Debian 
packaging team if they are important for you in the official 
jessie-backports repository, or if Sid is enough. Also, at the time of 
writing of this message, Horizon and Designate are still in the 
backports FTP master NEW queue (but it should be approved very soon).


Also, I have just uploaded a first version of Barbican (still in the NEW
queue waiting for approval...), and there's a package for Manila that is
currently being worked on by a new contributor.


Note on Neutron off-tree drivers

The neutron-lbaas, neutron-fwaas and neutron-vpnaas packages have been 
uploaded and are part of Sid. If you need it through jessie-backports, 
please just let me know.


All vendor-specific drivers have been separated from Neutron, and are
now available as separate packages. I wrote packages for them all, but
the issue is that most of them wouldn't even build due to failing unit
tests. For most of them, it used to work with the Kilo beta 3 of Neutron
(that's the case for all but 2 of them, which were already broken at the
time), but they appeared broken with the Kilo final release, as they
weren't updated after it.


I have repaired some of them, but working on these packages has proven
to be very frustrating work, as they receive very few updates from
upstream. I do not plan to work much on them unless one of the
conditions below is met:

- my employer needs them
- things move forward upstream, and these unit tests are
repaired in the stackforge repositories.


If you are a network hardware vendor and read this, please push for more
maintenance, as it's in a really bad state ATM. You are welcome to get
in touch with me, and I'll be happy to help you help.


Bug report
==
If you see any issue in the packages, please do report them to the 
Debian bug tracker. Instructions are available here:

https://www.debian.org/Bugs/Reporting

Happy installation,

Thomas Goirand (zigo)



Re: [Openstack-operators] [nova] Can we bump MIN_LIBVIRT_VERSION to 1.2.2 in Liberty?

2015-05-14 Thread Matt Riedemann



On 5/14/2015 3:35 PM, Matt Riedemann wrote:



On 5/14/2015 2:59 PM, Kris G. Lindgren wrote:

How would this impact someone running juno nova-compute on rhel 6 boxes?
Or installing the python2.7 from SCL and running kilo+ code on rhel6?

For [3], couldn't we get the exact same information from /proc/cpuinfo?


Kris Lindgren
Senior Linux Systems Engineer
GoDaddy, LLC.



On 5/14/15, 1:23 PM, Matt Riedemann mrie...@linux.vnet.ibm.com wrote:


The minimum required version of libvirt in the driver is 0.9.11 still
[1].  We've been gating against 1.2.2 in Ubuntu Trusty 14.04 since Juno.

The libvirt distro support matrix is here: [2]

Can we safely assume that people aren't going to be running libvirt
compute nodes on RHEL < 7.1 or Ubuntu Precise?

Regarding RHEL, I think this is a safe bet because in Kilo nova dropped
python 2.6 support and RHEL 6 doesn't have py27, so you'd be in trouble
running kilo+ nova on RHEL 6.x anyway.

There are some workarounds in the code [3] I'd like to see removed by
bumping the minimum required version.

[1]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py?id=2015.1.0#n335
[2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix
[3]
http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/host.py?id=2015.1.0#n754

--

Thanks,

Matt Riedemann






This would be Liberty, so when you upgrade nova-compute to Liberty you'd
also need to upgrade the host OS to something that supports libvirt >=
1.2.2.



Here is the patch to see what this would look like:

https://review.openstack.org/#/c/183220/

--

Thanks,

Matt Riedemann




Re: [Openstack-operators] [openstack-dev] [all] Technical Committee Highlights May 13, 2015

2015-05-14 Thread Anne Gentle
On Thu, May 14, 2015 at 2:32 PM, Robert Collins robe...@robertcollins.net
wrote:

 On 15 May 2015 at 07:15, Anne Gentle annegen...@justwriteclick.com
 wrote:
  In response to the feedback during elections, the Technical Committee now
  has a subteam dedicated to communications. Below is a link to the first
 post
  in our revitalized series. As always, we're here for you and listening
 and
  adjusting.
 
 
 http://www.openstack.org/blog/2015/05/technical-committee-highlights-may-13-2015

 Cool - thanks very much for leading this!

 Uhm, one small erratum. My term just started :)


Ha, oops! Thanks Rob, working on it. :)



 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Anne Gentle
annegen...@justwriteclick.com
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Venom vulnerability

2015-05-14 Thread Favyen Bastani
On 05/14/2015 05:23 PM, Sławek Kapłoński wrote:
 Hello,
 
 So if I understand you correctly, it is not so dangerous if I'm using
 libvirt with apparmor and this libvirt is adding apparmor rules for
 every qemu process, yes?
 
 

You should certainly verify that apparmor rules are enabled for the qemu
processes.

Apparmor reduces the danger of the vulnerability. However, if you are
assuming that virtual machines are untrusted, then you should also
assume that an attacker can execute whatever operations are permitted
by the apparmor rules (mostly built from the abstraction, usually at
/etc/apparmor.d/libvirt-qemu), so you should check that you have
reasonable limits on those permissions. Best is to restart the processes,
by way of live migration or otherwise.
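
As a rough sketch of the verification step (assuming the Linux
/proc/<pid>/attr interface; an illustration, not an official check):

# Hedged sketch: list qemu processes and the AppArmor profile confining
# each one, by reading /proc/<pid>/attr/current (Linux-specific).
import os

def qemu_confinement():
    results = {}
    for pid in filter(str.isdigit, os.listdir('/proc')):
        try:
            with open('/proc/%s/comm' % pid) as f:
                if not f.read().strip().startswith('qemu'):
                    continue
            with open('/proc/%s/attr/current' % pid) as f:
                results[pid] = f.read().strip()  # e.g. 'libvirt-<uuid> (enforce)'
        except IOError:
            continue  # process exited or access was denied
    return results

for pid, profile in qemu_confinement().items():
    print(pid, profile)  # 'unconfined' here would be the red flag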

Best,
Favyen

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Neutron/DVR scalability of one giant single tenant VS multiple tenants

2015-05-14 Thread Kevin Benton
Neutron scalability isn't impacted directly by the number of tenants
so that shouldn't matter too much. The following are a few things to
consider.

Number of ports per security group: Every time a member of a security
group (a port) is removed/added or has its IP changed, a notification
goes out to the L2 agents so they can update their firewall rules. If
you have thousands of ports and lots of churn, the L2 agents will be
busy all of the time processing the changes and may fall behind
impacting the time it takes for ports to gain connectivity.

Number of ports per network: Each network is a broadcast domain so a
single network with hundreds of ports will get pretty chatty with
broadcast and multicast traffic. Also, if you use l2pop, each l2 agent
has to know the location of every port that shares a network with the
ports on the agent. I don't think this has as much impact as the
security groups updating, but it's something to keep in mind.

Number of ports behind a single tenant router: Any traffic that goes
to an external network that doesn't have a floating IP associated with
it needs to go via the assigned centralized SNAT node for that router.
If a lot of your VMs don't have floating IPs and generate lots of
traffic, this single translation point will quickly become a
bottleneck.

Number of centralized SNAT agents: Even if you have lots of tenant
routers to address the issue above, you need to make sure you have
plenty of L3 agents with access to the external network and
'agent_mode' set to 'dvr_snat' so they can be used as centralized SNAT
nodes. Otherwise, if you only have one centralized SNAT node,
splitting the traffic across a bunch of tenant routers doesn't buy you
much.
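
To make the first point concrete, here is a back-of-envelope sketch
(assumed numbers, not measurements): when a security group's rules
reference the group itself as a remote group, each member port needs a
firewall entry per peer, so entries grow roughly quadratically with
group size.

# Rough estimate only; real rule counts depend on the rules you define.
def estimated_firewall_entries(members, rules_per_peer=1):
    per_port = members * rules_per_peer   # one entry per peer, per port
    return members * per_port             # summed over every member port

for n in (100, 1000, 5000):
    print(n, estimated_firewall_entries(n))   # 10k, 1M, 25M entries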

Let me know if you need me to clarify anything.

Cheers,
Kevin Benton

On Thu, May 14, 2015 at 9:15 AM, Gustavo Randich
gustavo.rand...@gmail.com wrote:
 Hi!

 We are evaluating the migration of our private cloud of several thousand VMs
 from multi-host nova-network to neutron/DVR. For historical reasons, we
 currently use a single tenant because group administration is handled outside
 openstack (users don't talk to the OS API). The number of compute nodes we have
 now is approx. 400, and growing.

 My question is:

 Strictly regarding the scalability and performance of the DVR/Neutron virtual
 networking components inside compute nodes (OVS virtual switches, iptables,
 VXLAN tunnel mesh, etc.), should we maintain this single-tenant /
 single-network architecture in Neutron/DVR? Or should we partition our next
 cloud into several tenants each corresponding to different groups/verticals
 inside the company, and possibly each with their several private networks?

 Thanks!


 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




-- 
Kevin Benton

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Neutron/DVR scalability of one giant single tenant VS multiple tenants

2015-05-14 Thread Kevin Benton
Yes, correct. Tenants basically are just used as a tag to filter and
restrict API operations.
On May 14, 2015 4:35 PM, Gustavo Randich gustavo.rand...@gmail.com
wrote:

 Thanks Kevin,

 If I understood you well, scalability isn't impacted by the number of
 tenants, but rather by the number of ports per network / security group /
 tenant router; so, if I have a single giant tenant network with several
 thousand ports, perhaps I'll have a problem.

 Partitioning the load into various tenant networks should mitigate these
 problems, independently of the total number of tenants. So could I keep
 running the cloud fine with a *single* tenant owning *several* internal
 networks, right?

 Gustavo


 On Thu, May 14, 2015 at 6:56 PM, Kevin Benton blak...@gmail.com wrote:

 [snip]



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Multiple vlan ranges on same physical interface [ml2]

2015-05-14 Thread George Shuklin

On 05/11/2015 11:23 AM, Kevin Benton wrote:


I apologize but I didn't quite follow what the issue was with tenants 
allocating networks in your use case; can you elaborate a bit there?


From what it sounded like, it seems like you could define the vlan 
range you want the tenants' internal networks to come from in the 
network_vlan_ranges.  Then any admin networks would just specify the 
segmentation id outside of that range. Why doesn't that work?




I (as admin) can use vlans outside of network_vlan_ranges in the 
[ml2_type_vlan] section of ml2_conf.ini?


I've never tried...

Yes, I can!

Thank you.
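
For completeness, here is the same admin operation as a hedged
python-neutronclient sketch (Kilo-era client; the credentials and
endpoint are placeholders):

# Sketch only: create an admin provider network whose VLAN id sits
# outside the configured network_vlan_ranges.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin',                  # placeholder
                        password='secret',                 # placeholder
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

neutron.create_network({'network': {
    'name': 'myextnet',
    'provider:network_type': 'vlan',
    'provider:physical_network': 'local',
    'provider:segmentation_id': 40,   # outside the 1000:4000 tenant range
    'router:external': True,
}})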


Thanks,
Kevin Benton

On May 9, 2015 17:16, George Shuklin george.shuk...@gmail.com wrote:


Yes, that's result.

My plan was to allow 'internal' networks in neutron (created by tenants
themselves), but after some struggle we've decided to fall back to
'created by script during tenant bootstrapping'.

Unfortunately, neutron has no conception of a 'default physical
segment' for VLAN autoallocation for tenant networks (it just
grabs the first available).

On 05/09/2015 03:08 AM, Kevin Benton wrote:

So if you don't let tenants allocate networks, then why do the
VLAN ranges in neutron matter? It can just be part of your
net-create scripts.


On Fri, May 8, 2015 at 9:35 AM, George Shuklin
george.shuk...@gmail.com wrote:

We've got a bunch of business logic above openstack. It
allocates VLANs on the fly for external networks and connects
the pieces outside neutron (configuring the hardware router, etc.).

Anyway, after some research we've decided to completely ditch
the idea of 'tenant networks'. All networks are external and
handled by our software with administrative rights.

All networks for a tenant are created during tenant bootstrap,
including local networks, which now look funny:
'external local network without gateway'. By nailing down every
moving part in 'neutron net-create' we've got stable
behaviour and kept the allocation database inside our software.
That kills a huge part of the openstack idea, but at least it
works straightforwardly and nicely.

I would really like to see all that implemented in vendor
plugins for neutron, but the average code and documentation
quality for them is below any usable level, so we implement
hardware configuration ourselves.


On 05/08/2015 09:15 AM, Kevin Benton wrote:

If one set of VLANs is for external networks which are
created by admins, why even specify network_vlan_ranges for
that set?

For example, even if network_vlan_ranges is
'local:1000:4000', you can still successfully run the
following as an admin:
neutron net-create --provider:network_type=vlan
--provider:physical_network=local
--provider:segmentation_id=40 myextnet --router:external

On Thu, May 7, 2015 at 7:32 AM, George Shuklin
george.shuk...@gmail.com
wrote:

Hello everyone.

Got a problem: we want to use the same physical interface
for external networks and virtual (tenant) networks, all
inside VLANs with different ranges.

My expected config was:

[ml2]
type_drivers = vlan
tenant_network_types = vlan
[ml2_type_vlan]
network_vlan_ranges = external:1:100,local:1000:4000
[ovs]
bridge_mappings = external:br-ex,local:br-ex

But it does not work:

ERROR
neutron.plugins.openvswitch.agent.ovs_neutron_agent [-]
Parsing bridge_mappings failed: Value br-ex in mapping:
'gp:br-ex' not unique. Agent terminated!

I understand that I can cheat and manually configure a
pile of bridges (br-ex and br-loc both plugged into br-real,
which is linked to the physical interface), but it looks very
fragile.

Is there any nicer way to do this? And why does ml2 (the OVS
plugin?) not allow mapping many networks to one
bridge?

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




-- 
Kevin Benton



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




-- 
Kevin Benton



___

Re: [Openstack-operators] [openstack-dev] Who is using nova-docker? (Re: [nova-docker] Status update)

2015-05-14 Thread Abel Lopez
Again, a conversation that should include the ops list.

On Wed, May 13, 2015 at 6:28 AM, Adrian Otto adrian.o...@rackspace.com
wrote:

  Solum uses it in our Vagrant setup. It makes the dev environment perform
 very nicely, and is compatible with the Docker containers Solum generates.



  Sent from my Verizon Wireless 4G LTE smartphone


  Original message 
 From: John Griffith john.griffi...@gmail.com
 Date: 05/12/2015 9:42 PM (GMT-08:00)
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-...@lists.openstack.org
 Subject: Re: [openstack-dev] Who is using nova-docker? (Re: [nova-docker]
 Status update)



 On Tue, May 12, 2015 at 12:09 PM, Fawad Khaliq fa...@plumgrid.com wrote:


 On Mon, May 11, 2015 at 7:14 PM, Davanum Srinivas dava...@gmail.com
 wrote:

 Good points, Dan and John.

 At this point it may be useful to see who is actually using
 nova-docker. Can folks who are using any version of nova-docker
 please speak up with a short description of their use case?

  I am using Kilo in multi-hypervisor mode, with some applications running
 in Docker containers and some backend work provisioned as VMs.


 Thanks,
 dims

 On Mon, May 11, 2015 at 10:06 AM, Dan Smith d...@danplanet.com wrote:
  +1 Agreed nested containers are a thing. It's a great reason to keep
  our LXC driver.
 
  I don't think that's a reason we should keep our LXC driver, because
 you
  can still run containers in containers with other things. If anything,
  using a nova vm-like container to run application-like containers
 inside
  them is going to beg the need to tweak more detailed things on the
  vm-like container to avoid restricting the application one, I think.
 
  IMHO, the reason to keep the seldom-used, not-that-useful LXC driver in
  nova is because it's nearly free. It is the libvirt driver with a few
  conditionals to handle different things when necessary for LXC. The
  docker driver is a whole other nova driver to maintain, with even less
  applicability to being a system container (IMHO).
 
  I am keen we set the right expectations here. If you want to treat
  docker containers like VMs, that's OK.
 
  I guess a remaining concern is the driver dropping into disrepair
  if most folks end up using Magnum when they want to use docker.
 
  I think this is likely the case and I'd like to avoid getting into this
  situation again. IMHO, this is not our target audience, it's very much
  not free to just put it into the tree because meh, some people might
  like it instead of the libvirt-lxc driver.
 
  --Dan
 
 
 __
  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Davanum Srinivas :: https://twitter.com/dims


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

  I'm using nova-docker; it started out as just learning the pieces...
 ended up being useful for some internal test and dev work on my side,
 including building/shipping apps to customers that I used to send out as
 qcows.  I could certainly do this without nova-docker and go straight to
 docker, but this is more fun, and fits with my existing workflow/usage of
 OpenStack for various things.

  Also, FWIW, Magnum currently is way overkill for what I'm doing, just like
 some other projects.  I do plan to check that out before long, but for now
 it's a bit overkill for me.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators