Re: [Openstack] [openstack-dev] booting VM with customized kernel and rootfs image

2014-05-16 Thread sonia verma
 Hi


I'm getting the following repeated nova-compute log output when trying to boot the VM:


2014-05-16 05:34:19.503 26935 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /opt/stack/nova/nova/openstack/common/periodic_task.py:176
2014-05-16 05:34:19.504 26935 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /opt/stack/nova/nova/openstack/common/periodic_task.py:176
2014-05-16 05:34:19.504 26935 DEBUG nova.openstack.common.periodic_task [-] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /opt/stack/nova/nova/openstack/common/periodic_task.py:176
2014-05-16 05:34:19.505 26935 DEBUG nova.openstack.common.lockutils [-] Got semaphore compute_resources lock /opt/stack/nova/nova/openstack/common/lockutils.py:166
2014-05-16 05:34:19.505 26935 DEBUG nova.openstack.common.lockutils [-] Got semaphore / lock update_available_resource inner /opt/stack/nova/nova/openstack/common/lockutils.py:245
2014-05-16 05:34:19.505 26935 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2014-05-16 05:34:19.506 26935 DEBUG nova.virt.libvirt.driver [-] Updating host stats update_status /opt/stack/nova/nova/virt/libvirt/driver.py:4865
2014-05-16 05:34:19.566 26935 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/961b0fcd-60e3-488f-93df-5b852d93ede2/disk execute /opt/stack/nova/nova/openstack/common/processutils.py:147
2014-05-16 05:34:19.612 26935 DEBUG nova.openstack.common.processutils [-] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img info /opt/stack/data/nova/instances/961b0fcd-60e3-488f-93df-5b852d93ede2/disk execute /opt/stack/nova/nova/openstack/common/processutils.py:147
2014-05-16 05:34:19.703 26935 DEBUG nova.compute.resource_tracker [-] Hypervisor: free ram (MB): 5565 _report_hypervisor_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:388
2014-05-16 05:34:19.705 26935 DEBUG nova.compute.resource_tracker [-] Hypervisor: free disk (GB): 95 _report_hypervisor_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:389
2014-05-16 05:34:19.705 26935 DEBUG nova.compute.resource_tracker [-] Hypervisor: free VCPUs: 24 _report_hypervisor_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:394
2014-05-16 05:34:19.706 26935 DEBUG nova.compute.resource_tracker [-] Hypervisor: assignable PCI devices: [] _report_hypervisor_resource_view /opt/stack/nova/nova/compute/resource_tracker.py:401
2014-05-16 05:34:19.708 26935 DEBUG nova.openstack.common.rpc.amqp [-] Making synchronous call on conductor ... multicall /opt/stack/nova/nova/openstack/common/rpc/amqp.py:553
2014-05-16 05:34:19.709 26935 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 7435553a261b4f3eb61f985017441333 multicall /opt/stack/nova/nova/openstack/common/rpc/amqp.py:556
2014-05-16 05:34:19.709 26935 DEBUG nova.openstack.common.rpc.amqp [-] UNIQUE_ID is f2dd9f9fc517406bbe82366085de5523. _add_unique_id /opt/stack/nova/nova/openstack/common/rpc/amqp.py:341
2014-05-16 05:34:19.716 26935 DEBUG nova.openstack.common.rpc.amqp [-] Making synchronous call on conductor ... multicall /opt/stack/nova/nova/openstack/common/rpc/amqp.py:553
2014-05-16 05:34:19.717 26935 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 965b77a6b9da47c884bd22a2d47de23c multicall /opt/stack/nova/nova/openstack/com

Please help regarding this.

Thanks



On Tue, May 13, 2014 at 5:51 PM, Parthipan, Loganathan parthi...@hp.com wrote:

  You can upload your custom kernel/rootdisk pair to Glance and use the
 rootdisk UUID to boot an instance.



 http://docs.openstack.org/user-guide/content/cli_manage_images.html





 *From:* sonia verma [mailto:soniaverma9...@gmail.com]
 *Sent:* 13 May 2014 06:33
 *To:* OpenStack Development Mailing List (not for usage questions);
 openstack@lists.openstack.org
 *Subject:* [openstack-dev] booting VM with customized kernel and rootfs
 image



 Hi all

 I have installed OpenStack using DevStack, and I'm able to boot a VM from the
 OpenStack dashboard onto the compute node.

 Now I need to boot a VM from the OpenStack dashboard (controller node) onto the
 compute node using a customized kernel image and rootfs.

 So my question is: can we boot a VM from the controller node onto the
 compute node using a customized kernel and rootfs image?

 Please help regarding this.


  Thanks

 Sonia




Re: [Openstack] [openstack-dev] booting VM with customized kernel and rootfs image

2014-05-13 Thread Parthipan, Loganathan
You can upload your custom kernel/rootdisk pair to Glance and use the rootdisk
UUID to boot an instance.

http://docs.openstack.org/user-guide/content/cli_manage_images.html
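As a rough sketch (assuming the python-glanceclient and python-novaclient CLIs;
the image names, file paths and flavor below are placeholders to substitute):

  # register the custom kernel (AKI) and note the UUID Glance returns
  glance image-create --name custom-kernel --disk-format aki \
      --container-format aki --file /path/to/custom-vmlinuz

  # register the rootfs image, pointing it at that kernel UUID
  glance image-create --name custom-rootfs --disk-format ami \
      --container-format ami --property kernel_id=<kernel-uuid> \
      --file /path/to/custom-rootfs.img

  # boot an instance from the rootfs image
  nova boot --flavor m1.small --image <rootfs-image-uuid> my-vm

If you also have a custom ramdisk, register it the same way with
--disk-format ari and add --property ramdisk_id=<ramdisk-uuid> to the
rootfs image.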


From: sonia verma [mailto:soniaverma9...@gmail.com]
Sent: 13 May 2014 06:33
To: OpenStack Development Mailing List (not for usage questions); 
openstack@lists.openstack.org
Subject: [openstack-dev] booting VM with customized kernel and rootfs image

Hi all
I have installed OpenStack using DevStack, and I'm able to boot a VM from the
OpenStack dashboard onto the compute node.
Now I need to boot a VM from the OpenStack dashboard (controller node) onto the
compute node using a customized kernel image and rootfs.
So my question is: can we boot a VM from the controller node onto the
compute node using a customized kernel and rootfs image?
Please help regarding this.


Thanks
Sonia
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack