On 12.07.2010 18:03, Antti Kantee wrote:
> On Mon Jul 12 2010 at 17:36:10 +0200, Jean-Yves Migeon wrote:
>> - routing (the same way you pass messages back and forth with puffs/pud);
>> for Xen, this cannot be solved through the hypervisor API; you have to
>> pull in Xen ring I/O and the Xenstore, which acts as a key:value storage
>> facility so that domains can share information ("if you want to issue
>> block requests for xbd0, contact domain 'foo'").
>
> Just to make sure everyone (else) understands: forget puffs/pud.
> They are *completely* orthogonal to rump -- one does not require nor
> imply the other.
>
> Now, why do you expect this to not work like it does with the current
> style of dom0? ... aah, currently everything just goes to dom0, and
> now you'd need to pick which server can actually service your request?
> While I'm not familiar with the Xen ecosystem (buzzword!!), from the
> technical perspective it looks like this falls under "SMOP".
Huh? On what grounds? My initial step would be to have a block backend
driver running in userland, so I can make it run as a standalone server.
The purpose is to help diagnose errors in a more graceful manner. I lost
tremendous time tracking a DoS issue between the frontend and backend
drivers during domain migration, for one stupid reason: the backend
caused a DoS to the rest of the *dom0* kernel, without even being
capable of breaking into ddb...

>> - drivers (how could it work with passthrough, what is attainable/what
>> is not)
>
> "passthrough"? You mean exporting hardware directly to guests? How would
> it be different? As you say below: it needs a bus_space implementation.
>
> Hmm, I assume you'd want to mostly run hardware drivers for dom0.
> You might have to write a bus_space implementation for rump (some others
> have been interested in this as well). The good news is that the x86
> ports are clear of the macro madness (thanks ad) and I assume you're
> mostly interested in them.

I don't even know how Xen deals with DMA.

-- 
Jean-Yves Migeon
jeanyves.mig...@free.fr