I wrote:
>   migrate "exec:dd of=STATEFILE"
> ... I was having trouble getting this to work (loading gave
> "Migration failed rc=233") and discovered that not all of the data
> was being saved, probably because of some buffering/pipe issues.

Actually, maybe that isn't the issue.  I guess the migration data
might be different but still valid in both cases.  Where I'm really
running into trouble is when trying to add migrate-to-file support
in libvirt.  I'm hitting problems that I can't reproduce outside of
libvirt (i.e. running kvm manually).  For example, I'm getting the
segfault below -- any ideas?  It's almost as if migrate_write() is
being called after migrate_finish()?
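
For reference, this is roughly the teardown ordering I'd expect.  The
helper name below is made up, and I'm only assuming qemu_set_fd_handler()
is the right call for dropping the write handler that main_loop_wait()
uses, so treat it as a sketch rather than actual kvm-33 code:

    /* Hypothetical cleanup -- names are guesses, not kvm-33 code. */
    static void migrate_teardown(MigrationState *s)
    {
        /* 1. stop main_loop_wait() from selecting our fd for writes,
         *    so migrate_write() can't be invoked again */
        qemu_set_fd_handler(s->fd, NULL, NULL, NULL);

        /* 2. close the migration fd */
        close(s->fd);

        /* 3. only now free s and any dirty-log state that
         *    kvm_update_dirty_pages_log() still touches */
        qemu_free(s);
    }

If migrate_finish() releases state before the handler is removed, one
more pass through main_loop_wait() calling migrate_write() with stale
state would look a lot like the backtrace below.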

-jim

This is kvm-33:

Core was generated by `/usr/local/bin/qemu-system-x86_64 -M pc -m 256 -smp 1 
-monitor pty -boot c -hda'.
Program terminated with signal 11, Segmentation fault.
#0  0x00002b1b0f786794 in memset () from /lib/libc.so.6
(gdb) bt full 8
#0  0x00002b1b0f786794 in memset () from /lib/libc.so.6
No symbol table info available.
#1  0x000000000047fd56 in kvm_get_dirty_pages_log_slot (slot=3, bitmap=0x0, 
offset=0, len=24)
    at /usr/local/src/kvm/kvm-33/qemu/qemu-kvm.c:1119
        r = 3
        i = 2620318256
        j = 0
        n = 3
        c = 3 '\003'
        page_number = 0
#2  0x000000000047fdf9 in kvm_update_dirty_pages_log () at 
/usr/local/src/kvm/kvm-33/qemu/qemu-kvm.c:1152
        r = 3
        len = 0
#3  0x00000000004162c5 in migrate_write (opaque=0x0) at 
/usr/local/src/kvm/kvm-33/qemu/migration.c:328
        s = (MigrationState *) 0x2b58010
#4  0x000000000040c45d in main_loop_wait (timeout=2) at 
/usr/local/src/kvm/kvm-33/qemu/vl.c:6236
        pioh = (IOHandlerRecord **) 0x1
        ioh = (IOHandlerRecord *) 0x2993f70
        rfds = {fds_bits = {0 <repeats 16 times>}}
        wfds = {fds_bits = {49152, 0 <repeats 15 times>}}
        xfds = {fds_bits = {0 <repeats 16 times>}}
        ret = 2
        nfds = 15
        tv = {tv_sec = 0, tv_usec = 0}
        pe = (PollingEntry *) 0x2993f70
#5  0x000000000047f1c5 in kvm_main_loop_wait (env=0x296b670, timeout=10) at 
/usr/local/src/kvm/kvm-33/qemu/qemu-kvm.c:604
No locals.
#6  0x000000000047f39f in kvm_main_loop_cpu (env=0x296b670) at 
/usr/local/src/kvm/kvm-33/qemu/qemu-kvm.c:667
No locals.
#7  0x000000000040c6aa in main_loop () at 
/usr/local/src/kvm/kvm-33/qemu/vl.c:6289
        ret = 3
        timeout = 0
        env = (CPUX86State *) 0x0
(More stack frames follow...)
(gdb) p *(MigrationState *) 0x2b58010
$1 = {fd = 14, throttle_count = 533900, bps = 13271376, updated_pages = 0, 
last_updated_pages = 11167, iteration = 1, 
  n_buffer = 9, l_buffer = 9, throttled = 0, has_error = 0x296b640, 
  buffer = 
"\020\204�000\001\000\000\000\000\000\004\016\a\021\vv\fv\rv\016v\022\004\023\000\027
 \0334\0344\0354\0364��\021)[EMAIL 
PROTECTED]"\035\000��_\000\200\002�001\b\000\000\000a|G|\205|\b\004\000\000\000\000\000\000\000\000d\000\200\002�001\020\000�000�|G|�\020\006\005\v\006\005\005\000\000\000f\000\200\002�001\017\000�000�|G|�\020\006\005\n\005\005\005\000\001\017q\000\200\002�001\030\000�000\t}G|-}\030\006\b\020\b\b\b\000\000\000\\\000
 \003X\002\b\000\000\000"..., addr = 0, timer = 0x2993f40, 
  opaque = 0x2993fc0, detach = 0, release = 0x4168c0 <cmd_release>, 
rapid_writes = 0}
(gdb) p *((MigrationState *) 0x2b58010)->has_error
$2 = 0
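
What jumps out at me is bitmap=0x0 in frame #1: memset() is handed a
NULL dirty bitmap for slot 3.  Purely as a guard (not a fix for whatever
frees or never allocates that bitmap), a hypothetical check along these
lines -- the helper name and return convention are mine, not kvm-33's --
would at least turn the crash into an error return:

    #include <errno.h>
    #include <string.h>

    /* Hypothetical guard: refuse to clear a dirty bitmap that has
     * already been torn down (bitmap=0x0, as in frame #1 above). */
    static int clear_dirty_bitmap(unsigned char *bitmap, size_t len)
    {
        if (bitmap == NULL)
            return -EINVAL;
        memset(bitmap, 0, len);
        return 0;
    }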

> I ran the following commands:
> 
>   (qemu) stop
>   (qemu) migrate "exec:dd of=/tmp/jr1"
>   (qemu) migrate "exec:cat > /tmp/jr2" 
>   (qemu) migrate "exec:dd bs=1 of=/tmp/jr3"
> 
> And the file sizes:
> 
>   $ ls -al /tmp/jr[123]
>   -rw-r--r--  1 root root    86061424 2007-08-02 16:52 jr1
>   -rw-r--r--  1 root root    86220963 2007-08-02 16:53 jr2
>   -rw-r--r--  1 root root    86220963 2007-08-02 16:56 jr3
> 
> Sometimes the "cat" gives a filesize similar to "dd", depending on
> image size.  Only "dd bs=1" appears to always give me all of the data.
> Sometimes the truncated images work fine for resume, other times they
> cause a "migration failed".
> 
> I haven't had a chance yet to dig too deep into the source to find the
> cause.  I haven't seen whether this truncation also happens over TCP.
> 
> This was tested with kvm-28 modules and both kvm-28 and kvm-33
> userspace.  Has anyone else seen this?
> 
> -jim
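
On the quoted size differences: whatever the real cause turns out to be,
anything feeding the "exec:" pipe has to handle short writes and EINTR
on the pipe fd, or bytes can silently go missing.  A generic helper
along these lines (just a sketch of the pattern, not what qemu/kvm
actually does) is what I'd expect on the writer side:

    #include <errno.h>
    #include <stddef.h>
    #include <unistd.h>

    /* Write exactly 'len' bytes to 'fd', retrying on short writes and
     * EINTR.  Returns 0 on success, -1 on a real write error. */
    static int write_full(int fd, const void *buf, size_t len)
    {
        const char *p = buf;

        while (len > 0) {
            ssize_t n = write(fd, p, len);
            if (n < 0) {
                if (errno == EINTR)
                    continue;   /* interrupted -- just retry */
                return -1;      /* genuine error (EPIPE, EIO, ...) */
            }
            p += n;
            len -= n;
        }
        return 0;
    }

If the pipe fd is ever set non-blocking, EAGAIN would also need handling
(wait for writability rather than dropping the data).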
