[Openstack] Cloudpipe - Routing not working
Hello guys, I need some advice with a cloudpipe setup. I have a basic Folsom installation (single server), using VlanManager. I am setting up a VPN for the subnet 10.0.4.0 (please see diagram below).

  instance1           nova-controller      openvpn
                      (cloudpipe)          host1
  10.100.200.120 --- 10.0.4.2 === 10.0.4.254 --- 10.100.100.143 (public IP)
       ||             10.100.100.142
       ||
  instance2
  10.0.4.3

Short story: from host1, I cannot ping instance2 (or cloudpipe). From cloudpipe (or instance2), I cannot ping host1.

Desired behaviour: from instance2, I want to ping host1; from host1, I want to ping instance2.

Long story: the VPN link is working just fine point to point. However, packets are not being fully routed from one network to the other. To troubleshoot this, I am using tcpdump. On the cloudpipe instance, I run:

  tcpdump -i any icmp

Then, on host1, I pinged cloudpipe:

  ping 10.0.4.2

The tcpdump output on cloudpipe looks like this:

  21:27:56.958108 In 62:59:fd:d3:0d:f3 (oui Unknown) ethertype IPv4 (0x0800), length 100: 10.100.100.143 > efe762bef1364f8bab0d5c71434388e2-vpn.novalocal: ICMP echo request, id 28421, seq 10, length 64
  21:27:56.969406 In 00:00:00:00:00:00 (oui Ethernet) ethertype IPv4 (0x0800), length 128: efe762bef1364f8bab0d5c71434388e2-vpn.novalocal > efe762bef1364f8bab0d5c71434388e2-vpn.novalocal: ICMP host 10.100.100.143 unreachable, length 92

It looks like each end of the VPN does not know the ARP address for hosts in the other network.
PS: I created routes between host1 and network 10.0.4.0.

On host1:

  $ ip route list
  10.0.4.0/24 via 10.100.100.142 dev eth0
  10.0.0.0/24 via 10.100.100.142 dev eth0
  10.100.100.0/24 dev eth0 proto kernel scope link src 10.100.100.143
  169.254.0.0/16 dev eth0 scope link metric 1002
  default via 10.100.100.1 dev eth0

OpenVPN client:

  $ ip route list
  10.0.4.0/24 dev tap0 proto kernel scope link src 10.0.4.254
  10.0.0.0/24 via 10.0.4.1 dev tap0
  10.100.100.0/24 dev eth0 proto kernel scope link src 10.100.100.142
  169.254.0.0/16 dev eth0 scope link metric 1002
  default via 10.100.100.1 dev eth0

Cloudpipe instance:

  $ ip route list
  default via 10.0.4.1 dev br0 metric 100
  10.0.4.0/24 dev br0 proto kernel scope link src 10.0.4.2
  10.0.4.254 via 10.0.4.2 dev br0
  10.100.100.0/24 via 10.0.4.2 dev br0

?? The OpenVPN (cloudpipe) is set up for bridging. Should the ARP traffic not transit to the other side of the tunnel?
?? Any tips to get this working?

I appreciate any help, thanks.

Roni.
--
http://cloud0.dyndns-web.com/blog/

___
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
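For a bridged (tap-style) OpenVPN link, ARP should indeed cross the tunnel, but only if the tap device on each side is actually enslaved to the bridge and IP forwarding is enabled on the endpoints that route between the two networks. A minimal diagnostic sketch, assuming a Linux cloudpipe image with the usual bridge-utils tools (whether your image enables these defaults is an assumption):

```shell
# On the cloudpipe instance: confirm the tap device is enslaved to br0 --
# a bridge that only contains eth/vlan ports will never relay tunnel ARP.
brctl show br0

# Enable IPv4 forwarding on both VPN endpoints (off by default on most images).
sysctl -w net.ipv4.ip_forward=1

# Watch ARP on the bridge while pinging from the far side; if requests show
# up here but never on the peer, the frames are not crossing the tunnel.
tcpdump -n -i br0 arp
```

Note that in a routed (tun) setup ARP never crosses the tunnel at all; there the fix is routes on both sides plus proxy ARP, not bridging.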
Re: [Openstack] Problem starting a VM
Have you looked at the compute.log file? You should really start from there. :D Also, if you really expect some help, you must provide some information. Search the logs for clues, and if that doesn't help or you don't understand what you see, then send it along as well.

Roni.

On 5 Feb 2013 11:59, Guilherme Russi luisguilherme...@gmail.com wrote:

  Hello friends, I am getting a problem when I try to start a VM: it stays at the "Scheduling" task and doesn't start the VM. It has been happening since I ran apt-get update and apt-get upgrade. I really don't know what to do. Can anybody help me, please? Thank you.

  Guilherme.
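A first pass over the logs for a VM stuck in "Scheduling" can be sketched like this (log locations follow the common Ubuntu layout and are an assumption; adjust for your distro):

```shell
# Look for tracebacks around the failed boot; scheduler errors explain why
# no host was picked, compute errors explain why the spawn failed.
grep -iE "error|trace" /var/log/nova/nova-scheduler.log | tail -20
grep -iE "error|trace" /var/log/nova/nova-compute.log | tail -20
```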
Re: [Openstack] Problem starting a VM
Your logs contain no useful information related to your problem. You should send the piece of the log that contains the messages generated when you launched the VM.
Re: [Openstack] OpenStack Single Node Network Setting
Hello Li, and everyone on the list. Yes, indeed I have had my personal cloud working for several months now. If you want to deploy it on Ubuntu, check my blog: I have updated it with my deployment tool. It works just fine for both physical boxes and virtual boxes. Also, it sets up the network using VlanManager, so you get tenant isolation. Check it out at:

http://cloud0.dyndns-web.com/blog/technology/deploy-openstack-folsom-easily-on-ubuntu-12-10/

Regards,
Roni.

On 26 January 2013 20:31, Rain Li lyp20062...@gmail.com wrote:

  Hi Ronivon, I saw your post on the openstack mailing list, where you set up a single-node OpenStack environment. Could you please show me your network configuration? How many NICs do you use, and do you configure any of them in static or promiscuous mode? How do you set up the route info? I tried to set one up, but I can never access/ping my initialized instances. Thanks.

  Regards,
  Rain Lee
Re: [Openstack] Tenant Isolation - Virtualbox
Hi Vish,

You are right, it was a misunderstanding. In fact, in the time between my email and your answer, I managed to set up a test environment to capture packets using tcpdump, and could verify in loco the tenant isolation at L2.

PS: I carried out this verification on a physical box, in a single-server OpenStack deployment.

Cheers,
Roni.

On 24 January 2013 01:53, Vishvananda Ishaya vishvana...@gmail.com wrote:

  There is nothing wrong with your setup. L3 routing is done by the network node. L3 is already blocked by security groups. The vlans provide L2 isolation. Essentially we handle this with convention, as in: tell your tenants not to open up their firewalls if they don't want to be accessed by other tenants. For example:

    nova secgroup-add-rule default tcp 22 22 192.168.0.0/24  # or some other restricted range

  instead of:

    nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

  People seem to expect L3 traffic to be totally blocked between tenants. I'm not totally convinced that is good behavior, but it should be possible to produce a patch that will do this. In fact I've put together a potential version here: https://review.openstack.org/#/c/20362/ Unless I've messed something up, with this patch you should be able to set:

    bridge_forward_interface=xxx  # where xxx is your public_interface

  and get the behavior you expect.

  Vish

On Jan 23, 2013, at 2:27 PM, Ronivon Costa ronivon.co...@gmail.com wrote:

  Hello, I have just installed Folsom on a physical server, and the tenants can also ping and ssh into each other's instances. I think there is something wrong with my setup. Below I provide some info from the deployment. Any tip will be very much appreciated. Thanks.
Roni

  nova-manage network list
  id  IPv4         IPv6  start address  DNS1  DNS2  VlanID  project                           uuid
  1   10.0.0.0/24  None  10.0.0.3       None  None  100     c0561ee64e6c40b2aea3bdcf47916f18  c417baf7-f989-49d9-973d-f6f2b51a2d5c
  2   10.0.1.0/24  None  10.0.1.3       None  None  101     36ae086d927f49039cedfcb046463876  4bff308a-7990-46a4-952b-772d4953cb10
  --
  brctl show
  bridge name  bridge id          STP enabled  interfaces
  br100        8000.fa163e7b7397  no           vlan100
                                               vnet0
  br101        8000.fa163e7baec0  no           vlan101
                                               vnet1
  ---
  br100     Link encap:Ethernet  HWaddr fa:16:3e:7b:73:97
            inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
            inet6 addr: fe80::b016:8dff:fefa:43db/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:531 errors:0 dropped:0 overruns:0 frame:0
            TX packets:803 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:66890 (66.8 KB)  TX bytes:90421 (90.4 KB)

  br101     Link encap:Ethernet  HWaddr fa:16:3e:7b:ae:c0
            inet addr:10.0.1.1  Bcast:10.0.1.255  Mask:255.255.255.0
            inet6 addr: fe80::c41:bbff:fed4:354b/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:422 errors:0 dropped:0 overruns:0 frame:0
            TX packets:574 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:65212 (65.2 KB)  TX bytes:69840 (69.8 KB)

  dummy0    Link encap:Ethernet  HWaddr 02:dc:e1:5c:aa:5e
            inet6 addr: fe80::dc:e1ff:fe5c:aa5e/64 Scope:Link
            UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:169 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:0 (0.0 B)  TX bytes:23932 (23.9 KB)

  dummy1    Link encap:Ethernet  HWaddr 72:2d:2b:59:a2:d1
            BROADCAST NOARP  MTU:1500  Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

  dummy2    Link encap:Ethernet  HWaddr 72:6f:28:d7:e8:cd
            BROADCAST NOARP  MTU:1500  Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

  eth0      Link encap:Ethernet  HWaddr 00:1a:92:08:1f:47
            inet addr:10.100.200.126  Bcast:10.100.200.255  Mask:255.255.255.0
            inet6 addr: fe80::21a:92ff:fe08:1f47/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:210280 errors:1 dropped:0 overruns:0 frame:1
            TX packets:20752 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:310541700 (310.5 MB)  TX bytes:1983489 (1.9 MB)

  lo        Link encap:Local Loopback
            inet addr:127.0.0.1  Mask
Re: [Openstack] Tenant Isolation - Virtualbox
            encap:Ethernet  HWaddr fe:16:3e:5c:99:18
            inet6 addr: fe80::fc16:3eff:fe5c:9918/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:422 errors:0 dropped:0 overruns:0 frame:0
            TX packets:520 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:500
            RX bytes:71120 (71.1 KB)  TX bytes:63161 (63.1 KB)

  wlan0     Link encap:Ethernet  HWaddr 00:24:01:12:c8:6b
            BROADCAST MULTICAST  MTU:1500  Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

On 21 January 2013 11:15, Kevin Jackson ke...@linuxservices.co.uk wrote:

  Hi Roni, VirtualBox should honour the VLAN tagging, but it seems it's related to the driver type used: the e1000 strips the VLAN tag, it seems. I don't recall having this issue, but if I get time I'll be happy to spin an environment up and have a play. See this post: http://humbledown.org/virtualbox-intel-vlan-tag-stripping.xhtml

  Regards,
  Kev

On 20 January 2013 15:32, Ronivon Costa ronivon.co...@gmail.com wrote:

  Hello, I am playing with OpenStack and VlanManager in a VirtualBox machine. Is tenant isolation supposed to work in this setup? I have several tenants, and the instances for them have landed on different subnets (11.0.1.x, 11.0.2.x, 11.0.3.x, etc.). It is possible to ping and ssh into other tenants' instances from any tenant! Is this the correct behaviour for a virtualized deployment?

  Cheers,
  Roni

--
Kevin Jackson
@itarchitectkev
[Openstack] Tenant Isolation - Virtualbox
Hello, I am playing with OpenStack and VlanManager in a VirtualBox machine. Is tenant isolation supposed to work in this setup? I have several tenants, and the instances for them have landed on different subnets (11.0.1.x, 11.0.2.x, 11.0.3.x, etc.). It is possible to ping and ssh into other tenants' instances from any tenant! Is this the correct behaviour for a virtualized deployment?

Cheers,
Roni
Re: [Openstack] Partition Guide for single server install?
Hi Lance,

I have had some experience with a single-server deployment, and the bottom line is: use LVM for your partitions. Glance will take lots of space, so I would recommend creating a separate LVM volume for its images. Also, you can create one LVM volume for the instances. Size depends on how many images you will have, and how many instances you intend to launch. Because of that, you should start with smaller partitions which you can extend when needed. I think you can't go much wrong following these guidelines. At least it is working for me.

Cheers,
Roni.

On 26 November 2012 07:58, Lance Haig lh...@haigmail.com wrote:

  Hi All, Is there a partition guide for a single server install? I have 3TB of disk and want to make best use of it. I have 12GB for swap, 10GB for / and 20GB for /home. The rest I want to dedicate to OpenStack. Can someone help me with some suggestions as to how to partition it up. Thanks, Lance

--
Ronivon C. Costa
IBM Certified for Tivoli Software
ITIL V3 Certified
Tlm: (+351) 96 676 4458
Skype: ronivon.costa
Blog (hosted in my own personal cloud infrastructure):
http://cloud0.dyndns-web.com/blog/
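The "start small, extend when needed" advice above can be sketched as an LVM layout. All names and sizes here are illustrative assumptions, not a prescription for the 3TB disk:

```shell
# Illustrative only: a volume group over the spare disk, with separate
# logical volumes for glance images and nova instances.
vgcreate vg0 /dev/sdb
lvcreate -L 200G -n glance-images vg0
lvcreate -L 500G -n nova-instances vg0
mkfs.ext4 /dev/vg0/glance-images
mkfs.ext4 /dev/vg0/nova-instances
mount /dev/vg0/glance-images  /var/lib/glance/images
mount /dev/vg0/nova-instances /var/lib/nova/instances

# The payoff of starting small: grow a volume later, online.
lvextend -L +100G /dev/vg0/glance-images
resize2fs /dev/vg0/glance-images
```

Leaving free extents in the volume group is what makes the later lvextend possible, so resist the urge to allocate the whole 3TB up front.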
Re: [Openstack] nova-volumes problem after host reboot
[script truncated; the opening of search_instanceid_in_xml() is missing]

              =true
          else
              shift
          fi
      done
      if [ "$FOUND" == "true" ]; then
          echo "$1"
      else
          echo 1
      fi
  }

  get_instanceid_in_db() {
      INSTANCENAME=$1
      UUID=$(/usr/bin/nova list | /bin/grep -w "$INSTANCENAME" | /bin/cut -f2 -d'|')
      echo "$UUID"
  }

  reset_instance_status_db() {
      INSTUUID=$1
      echo "MySQL Password:"
      read PW
      mysql -unova -p$PW nova -e "update instances set power_state='1', vm_state='active', task_state=NULL where uuid='$INSTUUID'"
  }

  INSTUID=$(get_instanceid_in_db $1)
  if [ "$INSTUID" == "" ]; then
      echo "Instance not found"
  else
      INSTDIR=$(search_instanceid_in_xml $INSTUID)
      if [ "$INSTDIR" == "" ]; then
          echo "Instance not found"
      else
          INSTIDNAME=$(echo $INSTDIR | /bin/awk -F/ '{print $NF}')
          /usr/bin/sudo /usr/bin/virsh undefine $INSTIDNAME --managed-save
          /usr/bin/sudo /usr/bin/virsh define $INSTDIR/$VIRXML
          /usr/bin/sudo /usr/bin/virsh start $INSTIDNAME
          reset_instance_status_db $INSTUID
      fi
  fi

Invocation: restore-instance xx

On 10 November 2012 17:25, Ronivon Costa ronivon.co...@gmail.com wrote:

  Hi, I had some improvement with this issue. I could boot the instance using virsh, following livemoon's advice with a small adaptation. However, the problem still is not fixed.

  The volume table was updated:

    mysql -unova -p$PW nova -e "update volumes set mountpoint=NULL, attach_status='detached', instance_uuid=0"
    mysql -unova -p$PW nova -e "update volumes set status='available' where status <> 'error_deleting'"

  Restarted the instance:

    # virsh undefine instance-0038
    error: Refusing to undefine while domain managed save image exists
    # virsh undefine instance-0038 --managed-save
    Domain instance-0038 has been undefined
    # virsh define libvirt.xml
    Domain instance-0038 defined from libvirt.xml
    # virsh start instance-0038
    Domain instance-0038 started

  Then I updated the database with the new instance status:

    # mysql -unova -p nova -e "update instances set power_state='1', vm_state='active', task_state=NULL where uuid='7e732b31-2ff8-4cf2-a7ac-f1562070cfb3'"

  I can now connect to the instance. That is a great improvement over my original problem.

  But there are still some serious issues to fix. The instance cannot be rebooted (hard reboot); it will not start, with the same errors as before. Also, we cannot attach the volume back to the instance:

    # nova volume-attach 7e732b31-2ff8-4cf2-a7ac-f1562070cfb3 647db677-aa48-4d1e-b875-80be73469cb5 /dev/vdc
    ERROR: The supplied device path (/dev/vdb) is in use.
    ...
    The error is: DevicePathInUse: The supplied device path (/dev/vdb) is in use.

  /dev/vdb is one ephemeral disk. Why is nova trying to use /dev/vdb when I specified /dev/vdc?
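For the device-path collision above, one thing worth checking is what the libvirt domain itself believes is attached, since nova's view in the database and libvirt's view can disagree after a manual undefine/define cycle. A diagnostic sketch (the domain name follows the example above):

```shell
# List the block devices the running domain actually has: target names
# (vda, vdb, ...) and their backing sources. If vdb/vdc already appear
# here, an attach to that target will be refused as "in use".
virsh domblklist instance-0038

# Cross-check against nova's record of the volume attachment.
nova volume-list
```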
[Openstack] nova-volumes problem after host reboot
Hi there, I have been dealing with this issue for a while, but could not figure out what is going on. After a reboot of the openstack server, I am not able to restart ANY instance that had a nova-volume attached. I tried the DR procedure here, without any improvement:

http://docs.openstack.org/trunk/openstack-compute/admin/content/nova-disaster-recovery-process.html

The error in compute.log is:

  ERROR nova.compute.manager [req-adacca25-ede8-4c6d-be92-9e8bd8578469 cb302c58bb4245cebc61e132c79c 768bd68a0ac149eb8e300665eb3d3950] [instance: 3cd109e4-addf-4aa8-bf66-b69df6573cea] Cannot reboot instance: iSCSI device not found at /dev/disk/by-path/ip-10.100.200.120:3260-iscsi-iqn.2010-10.org.openstack:volume-20db45cc-c97f-4589-9c9f-ed283b0bc16e-lun-1

This is a very restrictive issue, because I cannot simply attach volumes to instances knowing that after a power failure or a reboot for maintenance my instances will be unavailable. Below is some info about my setup. Any idea? Anything! :)

  Linux nova-controller 2.6.32-279.11.1.el6.x86_64 #1 SMP Tue Oct 16 15:57:10 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

  rpm -qa | grep openstack
  openstack-nova-api-2012.2-2.el6.noarch
  openstack-dashboard-2012.2-3.el6.noarch
  openstack-utils-2012.2-5.el6.noarch
  openstack-nova-volume-2012.2-2.el6.noarch
  openstack-nova-novncproxy-0.4-2.el6.noarch
  openstack-nova-common-2012.2-2.el6.noarch
  openstack-nova-console-2012.2-2.el6.noarch
  openstack-nova-network-2012.2-2.el6.noarch
  openstack-nova-compute-2012.2-2.el6.noarch
  openstack-nova-cert-2012.2-2.el6.noarch
  openstack-nova-2012.2-2.el6.noarch
  openstack-glance-2012.2-2.el6.noarch
  python-django-openstack-auth-1.0.2-3.el6.noarch
  openstack-nova-objectstore-2012.2-2.el6.noarch
  openstack-nova-scheduler-2012.2-2.el6.noarch
  openstack-keystone-2012.2-1.el6.noarch
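The "iSCSI device not found" path in the error above is simply the by-path name of an iSCSI session that no longer exists after the reboot. A sketch of re-establishing it by hand, with the portal and target taken from the log line (whether tgtd re-exports the volume automatically on your host is an assumption):

```shell
# Reconstruct the device path nova-compute is looking for.
PORTAL="10.100.200.120:3260"
TARGET="iqn.2010-10.org.openstack:volume-20db45cc-c97f-4589-9c9f-ed283b0bc16e"
DEV="/dev/disk/by-path/ip-${PORTAL}-iscsi-${TARGET}-lun-1"
echo "$DEV"

# On the volume host: make sure the target daemon is up and exporting.
#   service tgtd start
#   tgtadm --lld iscsi --mode target --op show
# On the compute host: rediscover the portal and log back in; once the
# session is up, udev recreates the by-path device and reboot works again.
#   iscsiadm -m discovery -t sendtargets -p "$PORTAL"
#   iscsiadm -m node -T "$TARGET" -p "$PORTAL" --login
```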
Re: [Openstack] nova-volumes problem after host reboot
Hi, I had some improvement with this issue. I could boot the instance using virsh, following livemoon's advice with a small adaptation. However, the problem still is not fixed.

The volume table was updated:

  mysql -unova -p$PW nova -e "update volumes set mountpoint=NULL, attach_status='detached', instance_uuid=0"
  mysql -unova -p$PW nova -e "update volumes set status='available' where status <> 'error_deleting'"

Restarted the instance:

  # virsh undefine instance-0038
  error: Refusing to undefine while domain managed save image exists
  # virsh undefine instance-0038 --managed-save
  Domain instance-0038 has been undefined
  # virsh define libvirt.xml
  Domain instance-0038 defined from libvirt.xml
  # virsh start instance-0038
  Domain instance-0038 started

Then I updated the database with the new instance status:

  # mysql -unova -p nova -e "update instances set power_state='1', vm_state='active', task_state=NULL where uuid='7e732b31-2ff8-4cf2-a7ac-f1562070cfb3'"

I can now connect to the instance. That is a great improvement over my original problem.

But there are still some serious issues to fix. The instance cannot be rebooted (hard reboot); it will not start, with the same errors as before. Also, we cannot attach the volume back to the instance:

  # nova volume-attach 7e732b31-2ff8-4cf2-a7ac-f1562070cfb3 647db677-aa48-4d1e-b875-80be73469cb5 /dev/vdc
  ERROR: The supplied device path (/dev/vdb) is in use.
  ...
  The error is: DevicePathInUse: The supplied device path (/dev/vdb) is in use.

/dev/vdb is one ephemeral disk. Why is nova trying to use /dev/vdb when I specified /dev/vdc?
Re: [Openstack] Can't access Dashboard
Hello, After my initial installs I saw 404s several times, but most of the time it was not due to the dashboard itself. The dashboard refuses to open when you have some critical errors, such as nova-api not starting. Also, when glance does not work, the Images panel on the dashboard will throw an error. This information is here only to illustrate what might be happening with your setup. So, go to nova.conf and verify that everything is how it should be.

Try this also:

  cd /etc/init.d
  for i in openstack*; do
      service $i restart
  done

What is the output? If you ran the above commands a second time, can you confirm that all services were still running, or did any of them die? Can you run "keystone user-list" successfully? Can you run "glance index" successfully? Are you opening the dashboard using a browser on the same box where you installed OpenStack, or are you using a remote browser? If so, did you open port 80 in the firewall? (iptables)

Cheers.

On 9 November 2012 12:19, Daniel Oliveira dvalbr...@gmail.com wrote:

  Hello. Anyone?

  2012/11/7 Daniel Oliveira dvalbr...@gmail.com

    My bad. This is the tutorial I talked about: http://www.hastexo.com/resources/docs/installing-openstack-essex-20121-ubuntu-1204-precise-pangolin

    2012/11/7 Daniel Oliveira dvalbr...@gmail.com

      Hello, I've been following this tutorial to install openstack on a machine running Ubuntu Server 12.04, and at the step regarding the installation of Horizon, there must be something wrong either with the tutorial or with the configuration files on my machine. The point is, whenever I try to access the GUI via browser I get a 404, and I have no clue as to where to look for the error. Thanks in advance.

      P.S.: I would like to say a special thanks to everybody in this community. You have been helping me A LOT. I've learned much from you people.

  --
  My best regards,
  Daniel Oliveira.
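The checks suggested above can be run in one quick sweep. A sketch, assuming the Apache layout of the Essex/Ubuntu guide being followed (the /horizon URL path and log location are assumptions; adjust for your install):

```shell
# 1. Is Apache actually serving the dashboard, or returning the 404 itself?
curl -sI http://localhost/horizon | head -1

# 2. Any Django import/config errors behind the 404?
tail -20 /var/log/apache2/error.log

# 3. Do the backing API services answer at all?
keystone user-list >/dev/null && echo "keystone: ok"
glance index      >/dev/null && echo "glance: ok"
```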
Re: [Openstack] Imposible terminate instance in essex
Hi,

Try setting the instance to Active/None/Running in the database, then terminate the instance. That works for me... :)

Cheers,
Roni.

On 25 October 2012 01:27, Daniel Vázquez daniel2d2...@gmail.com wrote:

  Hi here! I can't terminate an instance in the essex version. I tried from horizon and from the nova delete command. I tried killall and restarting nova-network. Restarting the host too. I re-tried setting task_state to null by SQL query. I re-tried nova.conf with dhcp release set to false... good work, this instance is indestructible ;) I don't want to delete the instance folder, because openstack needs to release the IPs, and update and synchronize some data in the database. What can I do? Thx!

--
Ronivon C. Costa
IBM Certified for Tivoli Software
ITIL V3 Certified
Tlm: (+351) 96 676 4458
Skype: ronivon.costa
Web presence:
https://sites.google.com/site/ronivoncosta/
https://sites.google.com/site/z80soc/
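The state reset I mean can be sketched like this; the UUID below is a placeholder, and the database credentials are whatever your nova.conf points at:

```shell
# Build the SQL that forces a stuck instance back to Active/None/Running
# (power_state 1 = RUNNING, task_state cleared), so nova delete can proceed.
build_reset_sql() {
    printf "update instances set power_state='1', vm_state='active', task_state=NULL where uuid='%s';" "$1"
}

SQL=$(build_reset_sql "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee")
echo "$SQL"

# Then run it against the nova database and terminate again:
#   mysql -unova -p nova -e "$SQL"
#   nova delete aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
```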