1) PV-bootloader should be "" (empty string). To fix back - 'pygrub',
but to use external kernels - empty.
2) Here sample of settings for VM with external kernel:
HVM-boot-policy ( RW):
HVM-boot-params (MRW):
HVM-shadow-multiplier ( RW): 1.000
PV-kernel ( RW):
/boot/guest/64/vmlinux-2.6.34-12-xen.gz
PV-ramdisk ( RW): /boot/guest/64/initrd-2.6.34-12-xen
PV-args ( RW): CPUFREQ=no root=/dev/xvda1
console=xvc0
PV-legacy-args ( RW):
PV-bootloader ( RW):
PV-bootloader-args ( RW):
... And you need to put those files manually, of cause.
PS If you using multihost pool, use xe vm-start uuid=.... on=hostname to
start vm on host with kernels in /boot/guest.
On 05.09.2012 23:47, Nathanial Byrnes wrote:
I've tried the PV-* options below and am surprised to find no change in
behavior. Is there some place in the dom0 logs where I should see references to
the dom0-provided kernel and initrd being loaded or provided to the guest?
(I've tried with no path and with /boot/guest, and no change....)
On Sep 5, 2012, at 11:43 AM, George Shuklin wrote:
Okay, I don't know anything about HVM, but PV is much more interesting.
You need to check whether the VM is actually running or not (i.e. whether that
message comes from the virtual machine itself or from some component of xapi).
There is one dirty but very nice way:
xe vm-start vm=... on host=(here); /etc/init.d/xapi stop
after that dying domain will stay in list_domains with -d- status.
If not, that means domain dying instantly or do not start at all.
Other trick is to try to boot with external kernel (PV-bootloader="",
PV-kernel=..., PV-ramdisk=..., and kernel/ramdisk somewhere in /boot/guest in dom0).
05.09.2012 18:13, Nathanial Byrnes пишет:
These are PV guests. The appropriate VBD (in some cases (that work) there are
more than one VBD) is set to bootable. The HVM-boot-{policy,params} are the
same for working and non-working pv domU's for what it's worth.
Thanks,
Nate
On Sep 5, 2012, at 10:00 AM, George Shuklin wrote:
Your are talking about HVM or PV guests?
Not sure if this somehow related to that problem, but here some vm/vbd
attributes to play with:
vbd:
bootable=true/false
vm:
HVM-boot-policy (separate PV from HVM)
HVM-boot-params
05.09.2012 16:37, Nathanial Byrnes пишет:
Hello,
I have recently done a number of bad things to my XCP 1.0 environment.
I believed most of them sorted. Then I upgraded from XCP 1.0 to 1.5 by way of
1.1. The bad things involved moving the shared storage backend from NFS to
Glusterfs, monkeying with the SR and its PBD's, losing all the vm vbd's in the
process having to manually find and remap the VDI's to the correct VM. Once I
survived all of that self induced unpleasantness, I decided to upgrade to
1.5.... (obviously a genius behind this keyboard) After the upgrade some VM's
boot and run as before, but others attempt to boot, then the console shows the
subject message and shut down after 30 seconds. Please note that the
functioning VM's are from the name SR/PBD as the non-functioning ones. Also, I
can attach the non-booting vdi's to Dom0 and mount/fdisk them without issue. My
question is: how do I further interrogate / investigate this boot process
failure and success to ID the source of the issue?
Thanks very much in advance.
Regards,
Nate
_______________________________________________
Xen-api mailing list
Xen-api@lists.xen.org
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api
_______________________________________________
Xen-api mailing list
Xen-api@lists.xen.org
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api
_______________________________________________
Xen-api mailing list
Xen-api@lists.xen.org
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api