Hi Wayne,

Thanks for your reply.

On my slave node, if I run the following command, I am able to see the
instances, but their state is "shut off":

root@openstack3:~# virsh list --all
 Id Name                 State
----------------------------------
  - instance-0000002a    shut off
  - instance-0000003c    shut off
  - instance-0000003e    shut off
  - instance-00000040    shut off
root@openstack3:~#

I went ahead and started the instances manually, and got the following
errors:

virsh # start instance-0000002a
*error: Failed to start domain instance-0000002a
error: operation failed: failed to retrieve chardev info in qemu with 'info chardev'*
virsh #

I have also checked the log files:

2011-08-18 11:03:06.324: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.14 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name instance-0000003c -uuid d846d3ab-e7c4-fe41-7e6f-769e899943d7 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-0000003c.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc -boot c -kernel /var/lib/nova/instances/instance-0000003c/kernel -append root=/dev/vda console=ttyS0 -drive file=/var/lib/nova/instances/instance-0000003c/disk,if=none,id=drive-virtio-disk0,format=qcow2 -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,fd=19,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=02:16:3e:51:69:8a,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/instance-0000003c/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -usb -vnc 0.0.0.0:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
char device redirected to /dev/pts/2
*qemu: could not load kernel '/var/lib/nova/instances/instance-0000003c/kernel': Inappropriate ioctl for device
2011-08-18 11:03:06.625: shutting down*

Could anyone help me here?

--Thanks and regards,
Praveen GK.

On Thu, Aug 18, 2011 at 10:03 AM, Wayne A. Walls <[email protected]> wrote:

> Greetings, Praveen!
>
> On your compute nodes, the only required service is nova-compute.
> There has been a semi-recent update to nova-network that adds HA to the
> service, and this requires nova-network to be installed on all the
> compute nodes as well. As far as api and scheduler go, they are not
> required on every node.
>
> For more information on nova-network's HA work, check out Vish's blog:
> http://www.unchainyourbrain.com/openstack/13-networking-in-nova
>
> Cheers,
>
> Wayne
>
> From: praveen_kumar girir <[email protected]>
> Date: Thu, 18 Aug 2011 09:28:32 +0530
> To: Dan Wendlandt <[email protected]>
> Cc: openstack <[email protected]>
> Subject: Re: [Openstack] Unable to run instances , instance status is
> networking,
>
> Dear Dan,
>
> I have changed my /etc/nova/nova.conf file to reflect the proper
> fixed_ip range, and then it started to launch.
>
> I have one more question:
>
> Do the compute nodes need to have all the nova software installed? On my
> slave nodes initially there was only nova-compute; later I installed
> nova-scheduler, api, network and the rest.
>
> I started multiple instances from the master node and was able to create
> VMs on the slave nodes, but their status is shutdown. No idea why.
>
> To access these from the slave nodes, EC2_ACCESS_KEY is required. By
> default it is not set; do I need to set it manually?
>
> Could anyone please clarify my doubts?
>
> --Thanks and regards,
> Praveen GK.
>
> On Wed, Aug 17, 2011 at 8:43 PM, Dan Wendlandt <[email protected]> wrote:
>
>> Hi Praveen,
>>
>> The error you are seeing is because there is no 'network' record in the
>> nova database corresponding to 'br100' (which is the default value for
>> the bridge). Spawning a VM requires finding the appropriate network(s)
>> for that VM in the database, and assigning the VM an IP address from
>> the associated network subnet.
>>
>> Did you run nova-manage to create a network? If so, can you send out
>> the command you ran?
>>
>> For example, the Running Nova wiki (http://wiki.openstack.org/RunningNova)
>> includes the line:
>>
>> sudo nova-manage network create novanetwork 10.0.0.0/8 1 64
>>
>> Dan
>>
>> On Tue, Aug 16, 2011 at 9:45 PM, praveen_kumar girir
>> <[email protected]> wrote:
>>
>>> Dear Mandell,
>>>
>>> The nova-network process is running.
>>> But I am not able to see any log file under the /var/log/libvirt/qemu/
>>> directory.
>>> When I run the describe-instances command, I see this output:
>>>
>>> root@openstack2:~# euca-describe-instances
>>> RESERVATION r-l4zr8lyg bexar default
>>> INSTANCE i-0000000c ami-2b84327c networking test (bexar, openstack2) 0 m1.tiny 2011-08-12T11:29:42Z nova
>>> RESERVATION r-s8zj5yje bexar default
>>> INSTANCE i-0000000a ami-2b84327c networking test (bexar, openstack2) 0 m1.tiny 2011-08-12T11:29:33Z nova
>>>
>>> Logs output:
>>>
>>> nova-compute.log:
>>>
>>> 2011-08-17 10:00:46,016 ERROR nova [-] Exception during message handling
>>> (nova): TRACE: Traceback (most recent call last):
>>> (nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/rpc.py", line 188, in _receive
>>> (nova): TRACE:     rval = node_func(context=ctxt, **node_args)
>>> (nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/exception.py", line 120, in _wrap
>>> (nova): TRACE:     return f(*args, **kw)
>>> (nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/compute/manager.py", line 219, in run_instance
>>> (nova): TRACE:     self.get_network_topic(context),
>>> (nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/compute/manager.py", line 173, in get_network_topic
>>> (nova): TRACE:     host = self.network_manager.get_network_host(context)
>>> (nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/network/manager.py", line 276, in get_network_host
>>> (nova): TRACE:     FLAGS.flat_network_bridge)
>>> (nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/db/api.py", line 620, in network_get_by_bridge
>>> (nova): TRACE:     return IMPL.network_get_by_bridge(context, bridge)
>>> (nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/db/sqlalchemy/api.py", line 98, in wrapper
>>> (nova): TRACE:     return f(*args, **kwargs)
>>> (nova): TRACE:   File "/usr/lib/pymodules/python2.7/nova/db/sqlalchemy/api.py", line 1294, in network_get_by_bridge
>>> (nova): TRACE:     raise exception.NotFound(_('No network for bridge %s') % bridge)
>>> *(nova): TRACE: NotFound: No network for bridge br100*
>>> (nova): TRACE:
>>> 2011-08-17 10:01:45,446 INFO nova.compute.manager [-] Found instance 'instance-0000000b' in DB but no VM. State=0, so assuming spawn is in progress.
>>>
>>> I have highlighted the error above. Here are the br100 details:
>>>
>>> root@openstack2:~# cat /etc/network/interfaces
>>> # The loopback network interface
>>> auto lo
>>> iface lo inet loopback
>>>
>>> auto br100
>>> iface br100 inet static
>>>         bridge_ports eth0
>>>         bridge_stp off
>>>         bridge_maxwait 0
>>>         bridge_fd 0
>>>         address 10.223.84.45
>>>         netmask 255.255.255.0
>>>         broadcast 10.223.84.255
>>>         gateway 10.223.84.251
>>>         dns-nameservers 10.223.45.36
>>> root@openstack2:~#
>>>
>>> Could anyone help me out here?
>>>
>>> --Thanks and regards,
>>> Praveen GK.
>>>
>>> On Tue, Aug 16, 2011 at 8:24 PM, Mandell Degerness
>>> <[email protected]> wrote:
>>>
>>>> Check first that the network process is running and not producing
>>>> errors. Then check for errors in
>>>> /var/log/libvirt/qemu/instance-00000001.log. I suspect the issue lies
>>>> with either the network configuration or a missing file for qemu
>>>> (kvm-pxe).
>>>>
>>>> -Mandell
>>>>
>>>> On Mon, Aug 15, 2011 at 9:48 PM, praveen_kumar girir
>>>> <[email protected]> wrote:
>>>> > Dear All,
>>>> >
>>>> > I am facing an issue while running instances under Ubuntu 11.04
>>>> > server edition.
>>>> >
>>>> > Steps followed:
>>>> >
>>>> > Installed OpenStack Nova (Bexar edition) on my cluster.
>>>> > Checked all the running processes.
>>>> > Able to publish the image.
>>>> > Able to describe the images, which changed the state from .gz to
>>>> > untarring.
>>>> >
>>>> > Ran the command: euca-run-instances $emi -k my_key -t m1.tiny
>>>> >
>>>> > After this, I checked the status using euca-describe-instances; the
>>>> > status shows "NETWORKING" rather than RUNNING.
>>>> >
>>>> > When I checked /var/log/nova/nova-manage.log, I see this entry:
>>>> >
>>>> > 2011-08-12 17:36:22,305 INFO nova.compute.manager [-] Found instance
>>>> > 'instance-0000000e' in DB but no VM. State=0, so assuming spawn is
>>>> > in progress.
>>>> >
>>>> > Could anyone shed some light on this?
>>>> >
>>>> > --Thanks and regards,
>>>> > Praveen GK,
>>>>
>>>> --
>>>> Regards,
>>>> Mandell Degerness
>>>>
>>>> "True glory consists in doing what deserves to be written; in writing
>>>> what deserves to be read; and in so living as to make the world
>>>> happier for our living in it."
>>>> Pliny the Elder
>>>
>>
>> --
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> Dan Wendlandt
>> Nicira Networks, Inc.
>> www.nicira.com | www.openvswitch.org
>> Sr. Product Manager
>> cell: 650-906-2650
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>
>
_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
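A note on the qemu failure quoted at the top of the thread: "Inappropriate ioctl for device" is not a qemu-specific message but the standard Linux text for errno ENOTTY, which qemu appends when its attempt to load the -kernel image fails. In practice that often points at the kernel file under /var/lib/nova/instances being unreadable or not a valid kernel image, rather than a libvirt- or nova-level problem. A minimal check of the errno mapping (assuming a glibc-based Linux system):

```python
import errno
import os

# "Inappropriate ioctl for device" is the strerror text for ENOTTY
# (25 on Linux); qemu is surfacing a raw errno from the kernel load,
# not an error of its own.
print(errno.ENOTTY, "->", os.strerror(errno.ENOTTY))
```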

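Dan's example line, sudo nova-manage network create novanetwork 10.0.0.0/8 1 64, asks nova to carve 1 network of 64 addresses out of the 10.0.0.0/8 fixed range; the VM that failed with "No network for bridge br100" was missing exactly such a record. The subnet arithmetic behind those arguments can be sketched with plain CIDR math (a sketch only; carve_fixed_range is a made-up helper, not Nova's allocator):

```python
import ipaddress
from itertools import islice

def carve_fixed_range(cidr, num_networks, network_size):
    """Return the first num_networks subnets of network_size addresses
    each, taken from the front of the given fixed range (plain CIDR
    math; hypothetical helper, not part of nova)."""
    base = ipaddress.ip_network(cidr)
    # A subnet holding network_size addresses needs a prefix of
    # 32 - ceil(log2(network_size)) bits; 64 addresses -> /26.
    prefix = base.max_prefixlen - (network_size - 1).bit_length()
    return list(islice(base.subnets(new_prefix=prefix), num_networks))

# Mirrors the arguments of: nova-manage network create novanetwork 10.0.0.0/8 1 64
nets = carve_fixed_range("10.0.0.0/8", 1, 64)
print(nets[0], nets[0].num_addresses, nets[0].broadcast_address)
```

With these arguments the single resulting network is 10.0.0.0/26, i.e. 64 addresses with broadcast 10.0.0.63, which is the subnet nova would then hand out fixed IPs from via br100.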
