Re: [Openstack-operators] Periodic packet loss neutron l3-agent HA Juno

2015-05-13 Thread Pedro Sousa
Hi,

As pointed out by Assaf Muller, I need to apply these patches:

https://review.openstack.org/154609
https://review.openstack.org/#/c/154589/

I'm waiting for the openstack-neutron-2014.2.3 RPM from the RDO repos to fix it.

Regards,
Pedro Sousa


On Mon, May 11, 2015 at 4:55 PM, Pedro Sousa pgso...@gmail.com wrote:

 Hi all,

 I'm using l3-agent in HA mode in Juno and I'm observing periodic routing
 packet loss in different tenant networks. I've started observing this when
 I switched from VXLAN tunnels to VLANS. I use openvswitch.

 At L2 level, within the same tenant network I don't see this behavior.

 Has anyone else observed this, and what's the best way to debug it?

 openstack-neutron-ml2-2014.2.2-1.el7.noarch
 openstack-neutron-2014.2.2-1.el7.noarch
 python-neutronclient-2.3.9-1.el7.centos.noarch
 openstack-neutron-openvswitch-2014.2.2-1.el7.noarch
 openstack-neutron-metering-agent-2014.2.2-1.el7.noarch
 python-neutron-2014.2.2-1.el7.noarch
 openvswitch-2.3.1-2.el7.x86_64
 kernel-3.10.0-123.13.1.el7.x86_64
 keepalived-1.2.13-6.el7.x86_64


 Thanks,
 Pedro Sousa


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Horizon - domain-scoped token support

2015-05-13 Thread Adam Young

On 05/12/2015 05:43 AM, Olga Dodin wrote:

Hi,

For our OpenStack environment (Juno + Ubuntu 12.04) I have 
configured the Identity service for multi-domain support (with keystone API 
v3 and the sample v3 policy file).
Using a domain-scoped token with the CLI (openstack) I can access users and 
projects belonging to different domains.
Horizon, however, can't use a domain-scoped token, and on cloud_admin 
login, errors like "Unable to retrieve user/project/domain list" arise.
I've tried to apply the patch 
https://review.openstack.org/#/c/148082/ to support domain-scoped 
tokens in Horizon - I checked out the Patch Set 63 code and copied the 
files listed in the patch description accordingly.


Now when I try to log in to the OpenStack dashboard, I get this message in 
horizon_error.log:


[error] /usr/local/lib/python2.7/dist-packages/pbr/version.py:25: 
UserWarning: Module openstack_dashboard was already imported from 
/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/__init__.pyc, 
but /usr/lib/python2.7/dist-packages is being added to sys.path

 [error]   import pkg_resources


Is that the whole error message? It seems like a path problem, possibly 
in how you applied the patch.


Did you run "python setup.py develop" to get the change?
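For reference, a rough sketch of pulling a Gerrit patch set into a development checkout instead of copying files by hand. The paths and repo URL are assumptions to adapt to your install; the ref follows Gerrit's refs/changes/<last-2-digits>/<change>/<patchset> convention:

```shell
# Assumed path -- adjust to where Horizon lives on your host.
cd /usr/share/openstack-dashboard

# Fetch patch set 63 of change 148082 from Gerrit and check it out.
git fetch https://review.openstack.org/openstack/horizon \
    refs/changes/82/148082/63
git checkout FETCH_HEAD

# Install this tree in development mode so Python imports it rather
# than a second, stale copy elsewhere on sys.path.
python setup.py develop
```

The warning about openstack_dashboard being imported from two places suggests exactly such a stale second copy.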




Appreciate any help/advice on how to resolve this.

Thanks,

Olga Dodin
Servers & Network group, IBM R&D Labs in Israel






[Openstack-operators] Venom vulnerability

2015-05-13 Thread Tim Bell

Looking through the details of the Venom vulnerability, 
https://securityblog.redhat.com/2015/05/13/venom-dont-get-bitten/, it would 
appear that the QEMU processes need to be restarted.

Our understanding is thus that a soft reboot of the VM is not sufficient but a 
hard one would be OK.

Some quick tests have shown that a suspend/resume of the VM also causes a new 
process.

How are others looking to address this vulnerability?

(I guess the security session will have a few extra people signing up in 
Vancouver now...)

Tim



Re: [Openstack-operators] Venom vulnerability

2015-05-13 Thread Joe Topjian
 Hello,

 Looking through the details of the Venom vulnerability,
 https://securityblog.redhat.com/2015/05/13/venom-dont-get-bitten/, it
 would appear that the QEMU processes need to be restarted.



 Our understanding is thus that a soft reboot of the VM is not sufficient
 but a hard one would be OK.



 Some quick tests have shown that a suspend/resume of the VM also causes a
 new process.


The Red Hat KB article (linked in the blog post you gave) mentions that
migrating to a patched server should also be sufficient. If either method
(suspend or migration) works, I think those are nicer ways of handling this
than hard reboots.

I also found this statement to be curious:

The sVirt and seccomp functionalities used to restrict host's QEMU process
privileges and resource access might mitigate the impact of successful
exploitation of this issue.

So perhaps Red Hat already has mechanisms in place to prevent exploits such
as this from being successful? I wonder if Ubuntu has something similar in
place.


   How are others looking to address this vulnerability ?


It looks like Red Hat has released updates, but I haven't received an
announcement for Ubuntu yet -- does anyone know the status?

As soon as a fix is released, we'll update our hosts. That will ensure new
instances aren't vulnerable. We'll then figure out some way of coordinating
fixing of older instances.

Joe


Re: [Openstack-operators] Venom vulnerability

2015-05-13 Thread Matt Van Winkle
So far, your assessment is spot on from what we've seen. A migration (if you 
have live migration, that's even better) should net the same result for QEMU. 
Some have floated the idea of live migration within the same host. I don't know 
if nova out of the box would support such a thing.

Thanks!
Matt

From: Tim Bell tim.b...@cern.ch
Date: Wednesday, May 13, 2015 9:31 AM
To: openstack-operators@lists.openstack.org
Subject: [Openstack-operators] Venom vulnerability


Looking through the details of the Venom vulnerability, 
https://securityblog.redhat.com/2015/05/13/venom-dont-get-bitten/, it would 
appear that the QEMU processes need to be restarted.

Our understanding is thus that a soft reboot of the VM is not sufficient but a 
hard one would be OK.

Some quick tests have shown that a suspend/resume of the VM also causes a new 
process.

How are others looking to address this vulnerability ?

(I guess the security session will have a few extra people signing up in 
Vancouver now...)

Tim



Re: [Openstack-operators] Venom vulnerability

2015-05-13 Thread Daniel P. Berrange
On Wed, May 13, 2015 at 02:31:26PM +, Tim Bell wrote:
 
 Looking through the details of the Venom vulnerability,
 https://securityblog.redhat.com/2015/05/13/venom-dont-get-bitten/, it
 would appear that the QEMU processes need to be restarted.
 
 Our understanding is thus that a soft reboot of the VM is not sufficient
 but a hard one would be OK.

Yes, the key requirement is that you get a new QEMU process running. So
this means a save-to-disk followed by restore, or a shutdown + boot,
or a live migration to another (patched) host.

In current Nova code a hard reboot operation will terminate the QEMU
process and then start it again, which is really the same as shutdown +
boot. A soft reboot will also terminate the QEMU process and start it
again, but it will try to terminate it gracefully, i.e. init gets a
chance to do an orderly shutdown of services. A soft reboot, though, is
not guaranteed to ever finish/happen, since it relies on a co-operating
guest OS responding to the ACPI signal. So a soft reboot is probably not
a reliable way of guaranteeing you get a new QEMU process.

My recommendation would be a live migration, or a save to disk and restore,
though, since those both minimise interruption to your guest OS workloads,
whereas a hard reboot or shutdown obviously kills them.
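Daniel's criterion boils down to a timestamp comparison: was the QEMU process started before the fixed package was installed? A minimal sketch, with illustrative epoch values; the commented commands show where real inputs might come from on an RPM-based host:

```shell
# Exit 0 (true) when the process predates the patch and must be restarted.
needs_new_process() {
    [ "$1" -lt "$2" ]    # $1 = process start epoch, $2 = patch install epoch
}

# Illustrative epoch timestamps, not real data.
patch_epoch=1431518400    # roughly 2015-05-13, when the update landed
old_proc=1431200000       # QEMU started before the update

needs_new_process "$old_proc" "$patch_epoch" && echo "restart needed"

# On a real host the inputs could come from, e.g.:
#   start=$(date -d "$(ps -o lstart= -p <pid>)" +%s)
#   patch=$(date -d "$(rpm -q --qf '%{INSTALLTIME:date}' qemu-kvm)" +%s)
```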


Also note that this kind of bug in QEMU device emulation is the poster
child example for the benefit of having sVirt (either the SELinux or AppArmor
backend) enabled on your compute hosts. With sVirt, QEMU is restricted
to only access resources that have been explicitly assigned to it. This
makes it very difficult (likely/hopefully impossible[1]) for a compromised
QEMU to be used to break out and compromise the host as a whole, and likewise
protects against compromising other QEMU processes on the same host. The
common Linux distros like RHEL, Fedora, Debian, Ubuntu, etc. all have
the sVirt feature available and enabled by default, and OpenStack doesn't
do anything to prevent it from working. Hopefully no one is actively
disabling it and leaving themselves open to attack...

Finally, QEMU processes don't run as root by default; they use a
'qemu' user account with minimal privileges, which adds another layer
of protection against total host compromise.

So while this bug is no doubt serious and worth patching asap, IMHO
it is not the immediate end-of-the-world-scale disaster that some
are promoting it to be.


NB, this mail is my personal analysis of the problem - please refer
to the above linked redhat.com blog post and/or CVE errata notes,
or contact Red Hat support team, for the official Red Hat view on
this.

Regards,
Daniel

[1] I'll never claim anything is 100% foolproof, but it is intended
to be impossible to escape sVirt, so any such viable escape routes
would themselves be considered security bugs.
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org :|
|: http://autobuild.org  -o-  http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org  -o-  http://live.gnome.org/gtk-vnc :|



Re: [Openstack-operators] how to filter outgoing VM traffic in icehouse

2015-05-13 Thread Gustavo Randich
Hi, sorry, I forgot to mention: I'm using nova-network


On Wed, May 13, 2015 at 6:39 PM, Abel Lopez alopg...@gmail.com wrote:

 Yes, you can define egress security group rules.

  On May 13, 2015, at 2:32 PM, Gustavo Randich gustavo.rand...@gmail.com
 wrote:
 
  Hi,
 
  Is there any way to filter outgoing VM traffic in Icehouse, preferably
 using security groups? I.e. deny all traffic except to certain IPs
 
  Thanks!
 




Re: [Openstack-operators] how to filter outgoing VM traffic in icehouse

2015-05-13 Thread Kevin Bringard (kevinbri)
Ah, I don't believe nova-network supports EGRESS rules.

On 5/13/15, 3:41 PM, Gustavo Randich gustavo.rand...@gmail.com wrote:

Hi, sorry, I forgot to mention: I'm using nova-network



On Wed, May 13, 2015 at 6:39 PM, Abel Lopez
alopg...@gmail.com wrote:

Yes, you can define egress security group rules.

 On May 13, 2015, at 2:32 PM, Gustavo Randich
gustavo.rand...@gmail.com wrote:

 Hi,

 Is there any way to filter outgoing VM traffic in Icehouse, preferably
using security groups? I.e. deny all traffic except to certain IPs

 Thanks!













Re: [Openstack-operators] how to filter outgoing VM traffic in icehouse

2015-05-13 Thread Kevin Bringard (kevinbri)
Specifically, look at neutron security-group-rule-create:

usage: neutron security-group-rule-create [-h] [-f {shell,table}] [-c COLUMN]
                                          [--variable VARIABLE]
                                          [--prefix PREFIX]
                                          [--request-format {json,xml}]
                                          [--tenant-id TENANT_ID]
                                          [--direction {ingress,egress}]
                                          [--ethertype ETHERTYPE]
                                          [--protocol PROTOCOL]
                                          [--port-range-min PORT_RANGE_MIN]
                                          [--port-range-max PORT_RANGE_MAX]
                                          [--remote-ip-prefix REMOTE_IP_PREFIX]
                                          [--remote-group-id REMOTE_GROUP]
                                          SECURITY_GROUP

The --direction option is what you're looking for. You may need to remove
a default egress rule... I think by default it allows everything.
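For Neutron users, the whitelist Gustavo described (deny everything except certain IPs) might look like the following sketch. The group name, rule IDs, and the CIDR are placeholders:

```shell
# List the group's rules; new groups get allow-all egress rules
# (one IPv4, one IPv6) that must be removed before a whitelist can bite.
neutron security-group-rule-list

# Delete the default allow-all egress rules by ID (from the list above).
neutron security-group-rule-delete <default-egress-rule-id>

# Then allow egress only to a trusted IP, e.g. 203.0.113.10:
neutron security-group-rule-create --direction egress \
    --ethertype IPv4 --protocol tcp \
    --remote-ip-prefix 203.0.113.10/32 \
    mysecgroup
```

Note this is Neutron-only; as mentioned elsewhere in the thread, nova-network does not support egress rules.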


On 5/13/15, 3:39 PM, Abel Lopez alopg...@gmail.com wrote:

Yes, you can define egress security group rules.

 On May 13, 2015, at 2:32 PM, Gustavo Randich
gustavo.rand...@gmail.com wrote:
 
 Hi,
 
 Is there any way to filter outgoing VM traffic in Icehouse, preferably
using security groups? I.e. deny all traffic except to certain IPs
 
 Thanks!
 





Re: [Openstack-operators] Venom vulnerability

2015-05-13 Thread Favyen Bastani
Would a virsh suspend/save/restore/resume operation accomplish a similar
result to the localhost migration?

Best,
Favyen

On 05/13/2015 12:44 PM, Matt Van Winkle wrote:
 Yeah, something like that would be handy.
 
 From: matt m...@nycresistor.com
 Date: Wednesday, May 13, 2015 10:29 AM
 To: Daniel P. Berrange berra...@redhat.com
 Cc: Matt Van Winkle mvanw...@rackspace.com, openstack-operators@lists.openstack.org
 Subject: Re: [Openstack-operators] Venom vulnerability
 
 honestly that seems like a very useful feature to ask for... specifically for 
 upgrading qemu.
 
 -matt
 
 On Wed, May 13, 2015 at 11:19 AM, Daniel P. Berrange 
 berra...@redhat.com wrote:
 On Wed, May 13, 2015 at 03:08:47PM +, Matt Van Winkle wrote:
 So far, your assessment is spot on from what we've seen.  A migration
 (if you have live migrate that's even better) should net the same result
 for QEMU.  Some have floated the idea of live migrate within the same
 host.  I don't know if nova out of the box would support such a thing.
 
 Localhost migration (aka migration within the same host) is not something
 that is supported by libvirt/KVM. Various files QEMU has on disk are based
 on the VM name/uuid and you can't have 2 QEMU processes on the host having
 the files at the same time, which precludes localhost migration working.
 
 Regards,
 Daniel
 
 
 

Re: [Openstack-operators] Venom vulnerability

2015-05-13 Thread Matt Van Winkle
It would. I'd test it, though. Depending on the amount of RAM and the I/O of
the underlying host, we saw that some larger instances could take longer
to suspend/resume than to shut down/power up. You maintain the state of the
system, but may see longer downtime for the instance. Something to
think about.

Thanks!
Matt

On 5/13/15 6:19 PM, Favyen Bastani fbast...@perennate.com wrote:

Would a virsh suspend/save/restore/resume operation accomplish similar
result as the localhost migration?

Best,
Favyen

On 05/13/2015 12:44 PM, Matt Van Winkle wrote:
 Yeah, something like that would be handy.
 
 From: matt m...@nycresistor.com
 Date: Wednesday, May 13, 2015 10:29 AM
 To: Daniel P. Berrange berra...@redhat.com
 Cc: Matt Van Winkle mvanw...@rackspace.com, openstack-operators@lists.openstack.org
 Subject: Re: [Openstack-operators] Venom vulnerability
 
 honestly that seems like a very useful feature to ask for...
specifically for upgrading qemu.
 
 -matt
 
 On Wed, May 13, 2015 at 11:19 AM, Daniel P. Berrange
berra...@redhat.com wrote:
 On Wed, May 13, 2015 at 03:08:47PM +, Matt Van Winkle wrote:
 So far, your assessment is spot on from what we've seen.  A migration
 (if you have live migrate that's even better) should net the same
result
 for QEMU.  Some have floated the idea of live migrate within the same
 host.  I don't know if nova out of the box would support such a thing.
 
 Localhost migration (aka migration within the same host) is not
something
 that is supported by libvirt/KVM. Various files QEMU has on disk are
based
 on the VM name/uuid and you can't have 2 QEMU processes on the host
having
 the files at the same time, which precludes localhost migration working.
 
 Regards,
 Daniel
 
 
 
 

Re: [Openstack-operators] Venom vulnerability

2015-05-13 Thread Matt Van Winkle
Yeah, something like that would be handy.

From: matt m...@nycresistor.com
Date: Wednesday, May 13, 2015 10:29 AM
To: Daniel P. Berrange berra...@redhat.com
Cc: Matt Van Winkle mvanw...@rackspace.com, openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Venom vulnerability

honestly that seems like a very useful feature to ask for... specifically for 
upgrading qemu.

-matt

On Wed, May 13, 2015 at 11:19 AM, Daniel P. Berrange 
berra...@redhat.com wrote:
On Wed, May 13, 2015 at 03:08:47PM +, Matt Van Winkle wrote:
 So far, your assessment is spot on from what we've seen.  A migration
 (if you have live migrate that's even better) should net the same result
 for QEMU.  Some have floated the idea of live migrate within the same
 host.  I don't know if nova out of the box would support such a thing.

Localhost migration (aka migration within the same host) is not something
that is supported by libvirt/KVM. Various files QEMU has on disk are based
on the VM name/uuid and you can't have 2 QEMU processes on the host having
the files at the same time, which precludes localhost migration working.

Regards,
Daniel
--
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|




Re: [Openstack-operators] Venom vulnerability

2015-05-13 Thread David Medberry
Hi Tim, et al,

We (Time Warner Cable) will be doing a live migration (L-M) of all
instances once the QEMU package is upgraded. That will start new QEMU
processes on the target host, allowing us to vacate the source host. We may
roll in a kernel upgrade due to another security vulnerability at the same
time.

I'm doing a Show and Tell in YVR about the topic of L-Ms and this topic now
has its own slide.
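The host-by-host evacuation David describes can be driven with the nova CLI. A sketch, where the host names and instance UUID are placeholders, and whether you need block migration depends on your storage setup:

```shell
# List the instances still on the unpatched source host (admin view).
nova list --host compute-01 --all-tenants

# Live-migrate each one to a host already running the patched QEMU.
nova live-migration <instance-uuid> compute-02

# Without shared storage, block migration may be required instead:
# nova live-migration --block-migrate <instance-uuid> compute-02
```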

On Wed, May 13, 2015 at 8:31 AM, Tim Bell tim.b...@cern.ch wrote:



 Looking through the details of the Venom vulnerability,
 https://securityblog.redhat.com/2015/05/13/venom-dont-get-bitten/, it
 would appear that the QEMU processes need to be restarted.



 Our understanding is thus that a soft reboot of the VM is not sufficient
 but a hard one would be OK.



 Some quick tests have shown that a suspend/resume of the VM also causes a
 new process.



 How are others looking to address this vulnerability ?



 (I guess the security session will have a few extra people signing up in
 Vancouver now...)



 Tim







Re: [Openstack-operators] Venom vulnerability

2015-05-13 Thread matt
honestly that seems like a very useful feature to ask for... specifically
for upgrading qemu.

-matt

On Wed, May 13, 2015 at 11:19 AM, Daniel P. Berrange berra...@redhat.com
wrote:

 On Wed, May 13, 2015 at 03:08:47PM +, Matt Van Winkle wrote:
  So far, your assessment is spot on from what we've seen.  A migration
  (if you have live migrate that's even better) should net the same result
  for QEMU.  Some have floated the idea of live migrate within the same
  host.  I don't know if nova out of the box would support such a thing.

 Localhost migration (aka migration within the same host) is not something
 that is supported by libvirt/KVM. Various files QEMU has on disk are based
 on the VM name/uuid and you can't have 2 QEMU processes on the host having
 the files at the same time, which precludes localhost migration working.

 Regards,
 Daniel
 --
 |: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o-  http://virt-manager.org :|
 |: http://autobuild.org  -o-  http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org  -o-  http://live.gnome.org/gtk-vnc :|




Re: [Openstack-operators] Venom vulnerability

2015-05-13 Thread Joe Topjian
Looks like the updated Ubuntu packages are available:

http://www.ubuntu.com/usn/usn-2608-1/
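A hedged sketch of the upgrade-then-verify loop on an Ubuntu compute host. Package names vary by release, and the check relies on Linux marking a replaced executable's /proc link as "(deleted)":

```shell
# Pull in the fixed packages from USN-2608-1.
sudo apt-get update
sudo apt-get install --only-upgrade qemu-kvm

# Any QEMU process whose on-disk binary was replaced is still running
# the old code; its /proc/<pid>/exe link shows "(deleted)".
for pid in $(pgrep -f qemu); do
    sudo ls -l "/proc/$pid/exe"
done | grep '(deleted)' && echo "these processes still need a new QEMU process"
```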

On Wed, May 13, 2015 at 10:44 AM, Matt Van Winkle mvanw...@rackspace.com
wrote:

  Yeah, something like that would be handy.

   From: matt m...@nycresistor.com
 Date: Wednesday, May 13, 2015 10:29 AM
 To: Daniel P. Berrange berra...@redhat.com
 Cc: Matt Van Winkle mvanw...@rackspace.com, 
 openstack-operators@lists.openstack.org 
 openstack-operators@lists.openstack.org
 Subject: Re: [Openstack-operators] Venom vulnerability

honestly that seems like a very useful feature to ask for...
 specifically for upgrading qemu.

  -matt

 On Wed, May 13, 2015 at 11:19 AM, Daniel P. Berrange berra...@redhat.com
 wrote:

 On Wed, May 13, 2015 at 03:08:47PM +, Matt Van Winkle wrote:
  So far, your assessment is spot on from what we've seen.  A migration
  (if you have live migrate that's even better) should net the same result
  for QEMU.  Some have floated the idea of live migrate within the same
  host.  I don't know if nova out of the box would support such a thing.

 Localhost migration (aka migration within the same host) is not something
 that is supported by libvirt/KVM. Various files QEMU has on disk are based
 on the VM name/uuid and you can't have 2 QEMU processes on the host having
 the files at the same time, which precludes localhost migration working.

 Regards,
 Daniel
  --
 |: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o-  http://virt-manager.org :|
 |: http://autobuild.org  -o-  http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org  -o-  http://live.gnome.org/gtk-vnc :|








Re: [Openstack-operators] chef

2015-05-13 Thread JJ Asghar
Ashwarya, can you try "knife node list" and "knife client list" for me? You should 
see the machine you're attempting to create.

I'm betting that there is already a node, as the error says, and you'll need to do a 
"knife node delete that_server_name", then run the bootstrap command again and 
it should work.
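A sketch of that cleanup; the node name here echoes the placeholder above, and the target IP is also a placeholder:

```shell
# Placeholder name -- substitute the machine from your error message.
NODE=that_server_name

# Confirm the stale registration exists on the Chef server.
knife node list | grep "$NODE"
knife client list | grep "$NODE"

# Remove both the node object and its API client, then bootstrap again.
knife node delete "$NODE" -y
knife client delete "$NODE" -y
knife bootstrap <target-ip> -N "$NODE"
```

Deleting the client as well as the node avoids the bootstrap failing again on the leftover client key.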

-J