The weird thing is that I have some volumes that were created from a snapshot that actually do boot. They complain about not being able to connect to the metadata server (which I guess is a totally different problem), but in the end they come up.
I haven't been able to see the difference between the volumes… I re-snapshotted the instance whose volume wouldn't boot, made a volume out of that snapshot, and an instance booted nicely from this volume. Weirder and weirder…

/jc

--
SWITCH
Jens-Christian Fischer, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 15 71
jens-christian.fisc...@switch.ch
http://www.switch.ch
http://www.switch.ch/socialmedia

On 21.11.2013, at 15:05, Jens-Christian Fischer <jens-christian.fisc...@switch.ch> wrote:

> Hi all
>
> I'm playing with the boot-from-volume options in Havana and have run into
> problems:
>
> (OpenStack Havana, Ceph Dumpling (0.67.4), rbd for glance, cinder and
> experimental ephemeral disk support)
>
> The following things do work:
> - glance images are in rbd
> - cinder volumes are in rbd
> - creating a VM from an image works
> - creating a VM from a snapshot works
>
> However, booting from a volume fails.
>
> Steps to reproduce:
>
> Boot from image
> Create a snapshot from the running instance
> Create a volume from this snapshot
> Start a new instance with "boot from volume" and the volume just created
>
> The boot process hangs after around 3 seconds, and the console.log of the
> instance shows this:
>
> [ 0.000000] Linux version 3.11.0-12-generic (buildd@allspice) (gcc version 4.8.1 (Ubuntu/Linaro 4.8.1-10ubuntu7) ) #19-Ubuntu SMP Wed Oct 9 16:20:46 UTC 2013 (Ubuntu 3.11.0-12.19-generic 3.11.3)
> [ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-3.11.0-12-generic root=LABEL=cloudimg-rootfs ro console=tty1 console=ttyS0
> ...
> [ 0.098221] Brought up 1 CPUs
> [ 0.098964] smpboot: Total of 1 processors activated (4588.94 BogoMIPS)
> [ 0.100408] NMI watchdog: enabled on all CPUs, permanently consumes one hw-PMU counter.
> [ 0.102667] devtmpfs: initialized
> …
> [ 0.560202] Linux agpgart interface v0.103
> [ 0.562276] brd: module loaded
> [ 0.563599] loop: module loaded
> [ 0.565315] vda: vda1
> [ 0.568386] scsi0 : ata_piix
> [ 0.569217] scsi1 : ata_piix
> [ 0.569972] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0a0 irq 14
> [ 0.571289] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0a8 irq 15
> …
> [ 0.742082] Freeing unused kernel memory: 1040K (ffff8800016fc000 - ffff880001800000)
> [ 0.746153] Freeing unused kernel memory: 836K (ffff880001b2f000 - ffff880001c00000)
> Loading, please wait...
> [ 0.764177] systemd-udevd[95]: starting version 204
> [ 0.787913] floppy: module verification failed: signature and/or required key missing - tainting kernel
> [ 0.825174] FDC 0 is a S82078B
> …
> [ 1.448178] tsc: Refined TSC clocksource calibration: 2294.376 MHz
> error: unexpectedly disconnected from boot status daemon
> Begin: Loading essential drivers ... done.
> Begin: Running /scripts/init-premount ... done.
> Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
> Begin: Running /scripts/local-premount ... done.
> [ 2.384452] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: (null)
> Begin: Running /scripts/local-bottom ... done.
> done.
> Begin: Running /scripts/init-bottom ... done.
> [ 3.021268] init: mountall main process (193) killed by FPE signal
> General error mounting filesystems.
> A maintenance shell will now be started.
> CONTROL-D will terminate this shell and reboot the system.
> root@box-web1:~#
>
> The console is stuck; I can't get to the rescue shell.
>
> I can "rbd map" the volume and mount it from a physical host - the filesystem etc. is all in good order.
>
> Any ideas?
>
> cheers
> jc
>
> --
> SWITCH
> Jens-Christian Fischer, Peta Solutions
> Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
> phone +41 44 268 15 15, direct +41 44 268 15 71
> jens-christian.fisc...@switch.ch
> http://www.switch.ch
> http://www.switch.ch/socialmedia
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
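For anyone trying to reproduce this, the steps from the quoted message can be sketched with the Havana-era CLI roughly as follows. All names, IDs, and the "volumes" pool name are assumptions for illustration (the pool is whatever `rbd_pool` is set to in cinder.conf); this is a sketch of the workflow, not a tested script:

```shell
# 1. Boot an instance from a Glance image (image/flavor IDs are placeholders)
nova boot --image <image-id> --flavor m1.small testvm

# 2. Snapshot the running instance; this creates a new Glance image
nova image-create testvm testvm-snap

# 3. Create a 10 GB Cinder volume from that snapshot image
cinder create --image-id <snapshot-image-id> --display-name testvm-vol 10

# 4. Boot a new instance from the volume (classic block-device-mapping syntax)
nova boot --flavor m1.small \
  --block-device-mapping vda=<volume-id>:::0 testvm-from-vol

# Sanity check from a physical host, as described in the message:
# map the RBD image backing the volume and inspect the filesystem read-only.
# Cinder names its RBD images volume-<uuid>; "volumes" is a common pool name.
rbd map volumes/volume-<volume-id>
fsck -n /dev/rbd0p1   # non-destructive check
mount -o ro /dev/rbd0p1 /mnt
```

The `fsck -n` / read-only mount step matches the observation in the thread that the filesystem itself is intact when inspected outside the VM.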