Re: [Openstack-operators] how to use the new openstack client to nova snapshot ??

2016-01-19 Thread Favyen Bastani
Hi,

I believe the correct command is:

openstack server image create

You can use `openstack help server image create` to check the usage.
`openstack help` will print all available commands.
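
For example, to snapshot an instance into an image (the server name and snapshot name below are placeholders; the exact options may vary with the client version, so do check the help output):

openstack server image create --name snapshotname <server>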

Regards,
Favyen

On 01/19/2016 10:27 AM, Saverio Proto wrote:
> Hello there,
> 
> I am trying to stick to the new openstack client CLI, but sometimes I
> get completely lost.
> 
> So with python-novaclient I used to do instance snapshots like this:
> 
> nova image-create <server> snapshotname
> 
> 
> I just cannot understand how to do the same with the new client. Could
> someone explain?
> 
> openstack image create <server>
> 
> thanks
> 
> Saverio
> 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Venom vulnerability

2015-05-14 Thread Favyen Bastani
On 05/14/2015 05:23 PM, Sławek Kapłoński wrote:
> Hello,
> 
> So if I understand you correctly, it is not so dangerous if I'm using
> libvirt with AppArmor and this libvirt is adding AppArmor rules for
> every qemu process, yes?
> 
> 

You should certainly verify that apparmor rules are enabled for the qemu
processes.
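
For example (a quick sketch, assuming the apparmor-utils package is installed; the PID is a placeholder), you can list the confined processes and inspect a particular QEMU process:

sudo aa-status | grep libvirt
cat /proc/<pid>/attr/current   # prints the confining profile, e.g. libvirt-<uuid> (enforce)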

Apparmor reduces the danger of the vulnerability. However, if you assume
that virtual machines are untrusted, then you should also assume that an
attacker can execute whatever operations are permitted by the apparmor
rules (mostly built from the abstraction usually found at
/etc/apparmor.d/abstractions/libvirt-qemu); so you should check that you
have reasonable limits on those permissions. The best option is to
restart the QEMU processes, by way of live migration or otherwise.

Best,
Favyen

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Venom vulnerability

2015-05-13 Thread Favyen Bastani
Would a virsh suspend/save/restore/resume operation accomplish a similar
result to the localhost migration?
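
Something along these lines is what I have in mind (the domain name and state-file path are placeholders; I have not verified that the restored domain picks up a patched QEMU binary):

virsh save <domain> /var/tmp/<domain>.save
virsh restore /var/tmp/<domain>.save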

Best,
Favyen

On 05/13/2015 12:44 PM, Matt Van Winkle wrote:
> Yeah, something like that would be handy.
> 
> From: matt <m...@nycresistor.com>
> Date: Wednesday, May 13, 2015 10:29 AM
> To: "Daniel P. Berrange" <berra...@redhat.com>
> Cc: Matt Van Winkle <mvanw...@rackspace.com>,
> "openstack-operators@lists.openstack.org"
> <openstack-operators@lists.openstack.org>
> Subject: Re: [Openstack-operators] Venom vulnerability
> 
> honestly that seems like a very useful feature to ask for... specifically for 
> upgrading qemu.
> 
> -matt
> 
> On Wed, May 13, 2015 at 11:19 AM, Daniel P. Berrange
> <berra...@redhat.com> wrote:
> On Wed, May 13, 2015 at 03:08:47PM +, Matt Van Winkle wrote:
>> So far, your assessment is spot on from what we've seen.  A migration
>> (if you have live migrate that's even better) should net the same result
>> for QEMU.  Some have floated the idea of live migrate within the same
>> host.  I don't know if nova out of the box would support such a thing.
> 
> Localhost migration (aka migration within the same host) is not something
> that is supported by libvirt/KVM. Various files QEMU has on disk are based
> on the VM name/uuid and you can't have 2 QEMU processes on the host having
> the files at the same time, which precludes localhost migration working.
> 
> Regards,
> Daniel

[Openstack-operators] nova rescue issue: boots to non-rescue disk

2014-09-27 Thread Favyen Bastani
Hi,

I am running OpenStack nova 2.17.0 on Ubuntu 14.04.1 with libvirt 1.2.2
and QEMU 2.0.0 (Debian 2.0.0+dfsg-2ubuntu1.5).

When attempting to nova rescue an instance, the instance boots with the
updated libvirt XML but ends up booting from the old, non-rescue disk
(the second disk in the XML) about 80% of the time. Occasionally, after
several reboots (for example, via ctrl+alt+del in VNC), I can get it to
boot from the rescue disk.
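
For reference, the rescue is triggered with the standard client command (the server ID below is a placeholder):

nova rescue <server>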

The XML looks like this:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/var/lib/nova/instances/e9019a9b-03de-47e9-b372-673305ea5c66/disk.rescue'/>
  <target dev='vda' bus='virtio'/>
  <boot order='1'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/var/lib/nova/instances/e9019a9b-03de-47e9-b372-673305ea5c66/disk'/>
  <target dev='vdb' bus='virtio'/>
</disk>

And the command line:

/usr/bin/qemu-system-x86_64 -name instance-0a65 -S -machine
pc-i440fx-trusty,accel=kvm,usb=off -m 512 -realtime mlock=off -smp
1,sockets=1,cores=1,threads=1 -uuid e9019a9b-03de-47e9-b372-673305ea5c66
-smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack
Nova,version=2014.1.2,serial=44454c4c-3700-1056-8053-b6c04f504e31,uuid=e9019a9b-03de-47e9-b372-673305ea5c66
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-0a65.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet
-no-shutdown -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/var/lib/nova/instances/e9019a9b-03de-47e9-b372-673305ea5c66/disk.rescue,if=none,id=drive-virtio-disk0,format=qcow2,cache=writeback
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive
file=/var/lib/nova/instances/e9019a9b-03de-47e9-b372-673305ea5c66/disk,if=none,id=drive-virtio-disk1,format=qcow2,cache=writeback
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk1
-netdev tap,fd=35,id=hostnet0,vhost=on,vhostfd=36 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:ac:13:08,bus=pci.0,addr=0x3
-chardev
file,id=charserial0,path=/var/lib/nova/instances/e9019a9b-03de-47e9-b372-673305ea5c66/console.log
-device isa-serial,chardev=charserial0,id=serial0 -chardev
pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1
-device usb-tablet,id=input0 -vnc 127.0.0.1:0 -k en-us -device
cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6

So everything looks like it should boot from the rescue disk (strict
boot is on and the boot index is set for the desired device), yet it
usually still ends up booting into the old OS. I also tried manually
setting the boot index on the second device to 2, and it still fails.

Any ideas what's going on here? All packages on the host node are up to
date; the VM in this case runs Ubuntu 14.04, but I don't think that
affects the boot order.
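
For completeness, this is roughly how I am double-checking the boot configuration libvirt generated on the host (using the instance name from the command line above):

virsh dumpxml instance-0a65 | grep -E 'boot|disk'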

Thanks,
- Favyen Bastani


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators