On Thu, May 22, 2008 at 5:11 AM, Rusty Russell <[EMAIL PROTECTED]> wrote:
> OK, patch is nice. It only works for small disks unfortunately, but it's
> simpler than using a (sparse) temporary file and a bitmap (as I did in qemu).

Do you want a more standard patch format, and where do I send it? I'd like to
get this in for real, as I keep forward-porting it :-)

> This seems really odd. I haven't stress tested here, but what are the
> symptoms?

Well, this is getting interesting. I am on a 2.6.25 release kernel from the
ericvh-v9fs repo, and I can now pretty reliably panic the host -- to the point
of locking it up so badly that only the power switch can save the day. But the
oops output is long enough that I can't see it all. Note that the ericvh repo
is really all about net/9p and fs/9p, so I'm not ready to assume it's all
Eric's fault :-)

On 2.6.23 I could reliably start and run 100 or more guests, which allowed me
to prototype cluster code on my laptop.

One part of this picture *seems* to be that different tapX devices end up with
the same MAC address, but that didn't happen last night with 50 guests, so
that can't be all of it. The oops was (apparently) in a switch_to, and it
happened after all the guests had booted and I pinged one of them, so I feel
the network is in this picture somehow. More as I find it.

I am going to try to set up a serial-console machine when I get back to CA and
try to get a reasonable diagnostic. Sorry I can't give more useful info, but
the failure mode makes it hard to do much but say "fallen and can't get up".

BTW, I'm intrigued by your idea of mapping the disk directly into the guest.
I'd like to preserve the copy-on-write semantics of the mmap'd block device.
I wonder if we could do the following:

1. set up a special E820 "copy on write" map entry
2. the kernel boots and maps that E820 segment write-protected
3. write faults on that segment result in copy-on-write behavior for the page
   in question

Probably more trouble than it's worth, but intriguing. The E820 map could be
kind of useful for this sort of thing.

Thanks

ron

_______________________________________________
Lguest mailing list
[email protected]
https://ozlabs.org/mailman/listinfo/lguest
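For context on the "(sparse) temporary file and a bitmap" scheme Rusty mentions
in the quote above, here is a minimal sketch of that kind of copy-on-write
block backend. It is not the actual qemu or lguest code; the struct and
function names (cow_disk, cow_read, cow_write) and the 512-byte sector
granularity are assumptions made for illustration.

/*
 * Copy-on-write block backend sketch: reads come from the read-only base
 * image unless the sector has been written, in which case they come from a
 * sparse overlay file; a bitmap records which sectors have been copied.
 */
#define _XOPEN_SOURCE 700
#include <stdint.h>
#include <unistd.h>

#define SECTOR_SIZE 512

struct cow_disk {
	int base_fd;        /* read-only base image */
	int overlay_fd;     /* sparse temp file holding modified sectors */
	uint8_t *bitmap;    /* one bit per sector: 1 = sector lives in overlay */
	uint64_t nr_sectors;
};

static int sector_is_dirty(struct cow_disk *d, uint64_t s)
{
	return d->bitmap[s / 8] & (1u << (s % 8));
}

static void mark_sector_dirty(struct cow_disk *d, uint64_t s)
{
	d->bitmap[s / 8] |= 1u << (s % 8);
}

/* Read one sector, preferring the overlay if it has been written. */
static ssize_t cow_read(struct cow_disk *d, uint64_t s, void *buf)
{
	int fd = sector_is_dirty(d, s) ? d->overlay_fd : d->base_fd;
	return pread(fd, buf, SECTOR_SIZE, (off_t)(s * SECTOR_SIZE));
}

/* Writes always go to the overlay; the base image is never touched. */
static ssize_t cow_write(struct cow_disk *d, uint64_t s, const void *buf)
{
	ssize_t r = pwrite(d->overlay_fd, buf, SECTOR_SIZE, (off_t)(s * SECTOR_SIZE));
	if (r == SECTOR_SIZE)
		mark_sector_dirty(d, s);
	return r;
}

The overlay file stays sparse because only sectors that have actually been
written are ever touched with pwrite(); everything else remains a hole.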

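Steps 2 and 3 of Ron's E820 idea (map the segment write-protected, then turn
the first write fault into a private copy of the page) can be illustrated with
a userspace analogue. This is only a sketch of the mechanism, not the proposed
implementation: the real version would sit in the guest kernel's page-fault
path and be keyed off the custom E820 entry type from step 1, which is not
shown here, and calling mmap()/memcpy() from a SIGSEGV handler is not strictly
async-signal-safe (it works for this synchronous-fault demo only).

#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static char *disk;                 /* start of the write-protected "disk" */
static size_t disk_size;
static long page_size;
static char saved[65536];          /* scratch copy of one page (<= 64K pages) */

static void cow_fault(int sig, siginfo_t *si, void *ctx)
{
	(void)sig; (void)ctx;
	char *fault = (char *)si->si_addr;

	if (fault < disk || fault >= disk + disk_size)
		_exit(1);          /* fault outside our region: give up */

	char *page = (char *)((uintptr_t)fault & ~(uintptr_t)(page_size - 1));

	memcpy(saved, page, page_size);    /* copy the original page aside */

	/* Replace it with a private, writable anonymous page... */
	if (mmap(page, page_size, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED)
		_exit(1);

	memcpy(page, saved, page_size);    /* ...and restore the old contents */
}

int main(void)
{
	page_size = sysconf(_SC_PAGESIZE);
	disk_size = 4 * page_size;

	/* Stand-in for the mapped disk image: read-only after setup. */
	disk = mmap(NULL, disk_size, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	memset(disk, 'A', disk_size);
	mprotect(disk, disk_size, PROT_READ);

	struct sigaction sa = { 0 };
	sa.sa_sigaction = cow_fault;
	sa.sa_flags = SA_SIGINFO;
	sigaction(SIGSEGV, &sa, NULL);

	disk[0] = 'B';                          /* faults once, then succeeds */
	printf("%c %c\n", disk[0], disk[1]);    /* "B A": page copied, rest intact */
	return 0;
}

On the host side, a plain MAP_PRIVATE mmap of the image already gives exactly
this page-level copy-on-write behavior; the interesting part of the idea is
exporting that behavior to the guest by describing the region in the E820 map.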