Re: [kvm-devel] portability layer?

2007-03-28 Thread Avi Kivity
Hollis Blanchard wrote:
>> No, I'm saying that some #ifdeffery in both libkvm and the ioctl 
>> interface is unavoidable.
>> 
>
> If by #ifdeffery you mean having per-architecture definitions of
> structures like kvm_regs, absolutely. If you mean literal #ifdefs in the
> middle of a header file, I believe that can and should be avoided.
>
>   

If it can be avoided I'm all for it.

>> Right now this is handled by qemu, which means our higher level tools 
>> are _already_ nonportable.
>> 
>
> Yes, but not *all* the higher level tools are. At some point you have a
> common interface, and at this point I think I've answered my own
> question: the qemu monitor connection is the portable interface.
>
> That means everything layered above qemu, such as libvirt and thus
> virt-manager, should work on all architectures more or less without changes.
> Lower-level software, such as GDB, would need per-architecture support.
>
>   

Ah, _those_ higher layer tools.

Each of these interfaces needs to be stabilized for different reasons:

- the kernel ABI allows the kernel and userspace to be upgraded 
independently
- libkvm is mainly for when we've merged all our changes into mainline 
qemu, and for the theoretical second user
- the qemu monitor is for the higher level tools

Note that the qemu monitor (and command-line) interface is under the 
control of the qemu maintainers, not us.  So far it has been steadily 
improving.

>> [I have a feeling we're talking a little past each other, probably due 
>> to me not knowing ppc at any level of detail.  No doubt things will 
>> become clearer when the code arrives]
>> 
>
> I don't have any code for you, but you will be the first to know when I
> do. :) Right now I'm just trying to make sure we don't accidentally
> paint ourselves into a corner with a stable ABI.
>   

The stable ABI here is just the support baseline, not a freeze.  We know 
for certain that changes are needed for smp, paravirt drivers, new 
hardware virtualization extensions, and new archs.  And of course it 
only holds for x86; other archs will stabilize when they are ready.


-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.




Re: [kvm-devel] portability layer?

2007-03-28 Thread Hollis Blanchard
On Wed, 2007-03-28 at 17:48 +0200, Avi Kivity wrote:
> Hollis Blanchard wrote:
> > On Tue, 2007-03-27 at 08:57 +0200, Avi Kivity wrote:
> >>
> >> I don't think we should be aiming at full source portability.  
> >> Virtualization is inherently nonportable, and as it is mostly done in 
> >> hardware, software gets to do the quirky stuff that the hardware people 
> >> couldn't be bothered with :)  Instead we should be aiming at code reuse.
> >> 
> >
> > I'm not sure I see the distinction you're making. Operating systems
> > could also be considered "inherently nonportable", yet Linux and the
> > BSDs support an enormous range of platforms. If you're saying that we
> > shouldn't try to run x86 MMU code on a PowerPC then I can't agree
> > more. :)
> 
> No, I'm saying that some #ifdeffery in both libkvm and the ioctl 
> interface is unavoidable.

If by #ifdeffery you mean having per-architecture definitions of
structures like kvm_regs, absolutely. If you mean literal #ifdefs in the
middle of a header file, I believe that can and should be avoided.
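
To make the distinction concrete, here is the kind of split I have in
mind. The file names and register fields below are invented for the
example, not taken from the current tree:

    /* include/asm-x86/kvm.h (hypothetical) */
    struct kvm_regs {
            __u64 rax, rbx, rcx, rdx, rsi, rdi, rsp, rbp;
            __u64 rip, rflags;
    };

    /* include/asm-powerpc/kvm.h (hypothetical) */
    struct kvm_regs {
            __u64 gpr[32];
            __u64 pc, msr, lr, ctr;
    };

    /* include/linux/kvm.h stays arch-clean and just pulls in the arch header */
    #include <asm/kvm.h>

Each architecture supplies its own definition, but no #ifdef ever
appears in the shared header.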

> Right now this is handled by qemu, which means our higher level tools 
> are _already_ nonportable.

Yes, but not *all* the higher level tools are. At some point you have a
common interface, and at this point I think I've answered my own
question: the qemu monitor connection is the portable interface.

That means everything layered above qemu, such as libvirt and thus
virt-manager, should work on all architectures more or less without changes.
Lower-level software, such as GDB, would need per-architecture support.

> [I have a feeling we're talking a little past each other, probably due 
> to me not knowing ppc at any level of detail.  No doubt things will 
> become clearer when the code arrives]

I don't have any code for you, but you will be the first to know when I
do. :) Right now I'm just trying to make sure we don't accidentally
paint ourselves into a corner with a stable ABI.

-Hollis




Re: [kvm-devel] portability layer?

2007-03-28 Thread Avi Kivity
Hollis Blanchard wrote:
> On Tue, 2007-03-27 at 08:57 +0200, Avi Kivity wrote:
>   
>> Hollis Blanchard wrote:
>> 
>>> Hi Avi, I was wondering what you think is the right abstraction layer to
>>> target for porting KVM to non-x86 architectures? To me it looks like
>>> libkvm is the answer.
>>>
>>> The kernel/userland interface is heavily x86-specific, including things
>>> like struct kvm_run. So it looks like the higher-level API of
>>> kvm_init(), kvm_create(), etc would be the right cut? struct
>>> kvm_callbacks is even reasonably portable, especially if cpuid is hidden
>>> behind an "arch" callback.
>>>   
>>>   
>> Disclaimer: I know little about powerpc (or ia64).  What I say may or 
>> may not have any connection with reality.
>>
>> I don't think we should be aiming at full source portability.  
>> Virtualization is inherently nonportable, and as it is mostly done in 
>> hardware, software gets to do the quirky stuff that the hardware people 
>> couldn't be bothered with :)  Instead we should be aiming at code reuse.
>> 
>
> I'm not sure I see the distinction you're making. Operating systems
> could also be considered "inherently nonportable", yet Linux and the
> BSDs support an enormous range of platforms. If you're saying that we
> shouldn't try to run x86 MMU code on a PowerPC then I can't agree
> more. :)
>   

No, I'm saying that some #ifdeffery in both libkvm and the ioctl 
interface is unavoidable.

A trivial example is kvm_get_regs().  If you want to do anything other 
than memcpy() the result, the caller has to be nonportable. 
kvm_setup_cpuid() doesn't make sense on ppc, as you said.  The in*/out* 
callbacks don't belong, and there will probably be a few callbacks that 
will leave me puzzled when you add them.
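
To spell that out, assuming a libkvm-style wrapper around a KVM_GET_REGS
ioctl (a sketch; the exact names may differ):

    struct kvm_regs regs;

    kvm_get_regs(kvm, vcpu, &regs);   /* the wrapper itself is portable */

    /* ...but the moment the caller looks at a field, it is x86-only */
    printf("guest rip = 0x%llx\n", (unsigned long long)regs.rip);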

The fact is that the "higher level tools" will emulate a powerpc when 
running on a powerpc, and an x86 when running on an x86.  That's 
different from a web server, which implements the HTTP protocol no 
matter what the underlying platform is.  That's what I meant by 
"inherently nonportable".

> Aside from code reuse though (on which I absolutely agree), it's
> critical that the interface be the same, i.e. each architecture
> implements the same interface in different ways. With that, all the
> higher-level tools will work with minimal modification. (This is
> analogous to an OS interface like POSIX.)
>
>   

A function like sys_read() can be made reasonably portable, but 
injecting an interrupt into an x86 requires peeking into a register 
which is aliased to an mmio location (cr8/tpr).  No doubt ppc has its 
own weirdnesses, but they'll be different.
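
For instance, the injection itself is a one-field ioctl on x86, but
deciding whether it can be delivered means userspace has to track
x86-specific state (a sketch, field names from memory):

    struct kvm_interrupt irq = { .irq = 32 };   /* vector picked by userspace */

    /* delivery depends on guest state (interrupt flag, TPR/cr8, ...)
     * that userspace has to understand on x86 */
    if (ioctl(vcpu_fd, KVM_INTERRUPT, &irq) < 0)
            perror("KVM_INTERRUPT");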

Right now this is handled by qemu, which means our higher level tools 
are _already_ nonportable.

>> I think there's some potential there:
>>
>> - memory slot management, including the dirty log, could be mostly 
>> reused (possibly updated for multiple page sizes). possibly msrs as well.
>> 
>
> I'm not familiar with KVM's memory slots or dirty log. My first
> impression was that the dirty log is tied to the x86 shadow pagetable
> implementation, but I admit I haven't investigated further.
>   

The implementation is, but the interface and its use are generic.  The dirty 
log is used for two purposes:

- minimization of screen updates on framebuffer changes
- tracking pages which need to be re-copied during live migration

Hopefully the interface and some parts of the kernel code can be reused.

The memory slots thing is just a way for userspace to specify physically 
discontiguous memory.  Each slot is contiguous within itself, but 
different slots may be discontiguous.  It is used for the framebuffer, 
and for various memory holes on x86 (640KB-1MB and the PCI hole).
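
Roughly, the userspace side looks like this (a sketch of the current
ioctl interface; flags and field names are approximate):

    /* one contiguous slot of guest RAM, with dirty tracking enabled */
    struct kvm_memory_region ram = {
            .slot            = 0,
            .flags           = KVM_MEM_LOG_DIRTY_PAGES,
            .guest_phys_addr = 0,
            .memory_size     = 128 << 20,
    };
    ioctl(vm_fd, KVM_SET_MEMORY_REGION, &ram);

    /* later: which pages in that slot were written since the last call? */
    struct kvm_dirty_log log = {
            .slot         = 0,
            .dirty_bitmap = bitmap,   /* one bit per guest page */
    };
    ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);

Nothing in that is inherently x86, which is why I hope it can be reused.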

>   
>> I don't see a big difference between the ioctl layer and libkvm.  In 
>> general, a libkvm function is an ioctl, and kvm_callback members are a 
>> decoding of kvm_run fields.  If you edit kvm_run to suit your needs, you 
>> can probably reuse some of it.
>> 
>
> kvm_run as it stands is 100% x86-specific. (I doubt it could even be
> easily adapted for ia64, which is more similar to x86 than PowerPC is.) So
> right now the kernel ioctl interface has an architecture-specific
> component, which violates the principle of identical interfaces I
> described earlier.
>   

Just #ifdef the x86-specific parts away, and add your own magic where 
necessary.
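
Something like this is what I have in mind (the fields are purely
illustrative, not a proposal):

    struct kvm_run {
            /* common part */
            __u32 exit_reason;
            __u8  ready_for_interrupt_injection;
    #if defined(__i386__) || defined(__x86_64__)
            struct {                  /* x86-only exit data */
                    __u8  direction;
                    __u16 port;
                    __u32 count;
            } io;
    #elif defined(__powerpc__)
            __u64 fault_addr;         /* whatever the ppc exit path needs (hypothetical) */
    #endif
    };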

> That means we either a) need to change the kernel interface or b) define
> a higher-level interface that *is* identical. That higher-level
> interface would be libkvm, hence my original question.
>
> Does my original question make more sense now? If you make libkvm the
> official interface, you would at least need to hide the "cpuid"
> callback, since it is intimately tied to an x86 instruction.
>   

Well, libkvm is _an_ official interface.  Any changes needed to make it 
portable are welcome.

[I have a feeling we're talking a little past each other, probably due 
to me not knowing ppc at any level of detail.  No doubt things will 
become clearer when the code arrives]

Re: [kvm-devel] portability layer?

2007-03-28 Thread Arnd Bergmann
On Wednesday 28 March 2007, Hollis Blanchard wrote:
> > I don't see a big difference between the ioctl layer and libkvm.  In 
> > general, a libkvm function is an ioctl, and kvm_callback members are a 
> > decoding of kvm_run fields.  If you edit kvm_run to suit your needs, you 
> > can probably reuse some of it.
> 
> kvm_run as it stands is 100% x86-specific. (I doubt it could even be
> easily adapted for ia64, which is more similar to x86 than PowerPC is.) So
> right now the kernel ioctl interface has an architecture-specific
> component, which violates the principle of identical interfaces I
> described earlier.

Remember that there _is_ an equivalent of kvm_run on powerpc (not powerpc64)
inside MacOnLinux, though I could not find it just now when looking through
the source.

> That means we either a) need to change the kernel interface or b) define
> a higher-level interface that *is* identical. That higher-level
> interface would be libkvm, hence my original question.
> 
> Does my original question make more sense now? If you make libkvm the
> official interface, you would at least need to hide the "cpuid"
> callback, since it is intimately tied to an x86 instruction.

If there is going to be an architecture-independent interface, it
should really be able to cover s390 as well, which has yet other
requirements. It's probably closer to amd64 than to powerpc64 though.

Arnd <><



Re: [kvm-devel] portability layer?

2007-03-28 Thread Hollis Blanchard
On Tue, 2007-03-27 at 08:57 +0200, Avi Kivity wrote:
> Hollis Blanchard wrote:
> > Hi Avi, I was wondering what you think is the right abstraction layer to
> > target for porting KVM to non-x86 architectures? To me it looks like
> > libkvm is the answer.
> >
> > The kernel/userland interface is heavily x86-specific, including things
> > like struct kvm_run. So it looks like the higher-level API of
> > kvm_init(), kvm_create(), etc would be the right cut? struct
> > kvm_callbacks is even reasonably portable, especially if cpuid is hidden
> > behind an "arch" callback.
> >   
> 
> Disclaimer: I know little about powerpc (or ia64).  What I say may or 
> may not have any connection with reality.
> 
> I don't think we should be aiming at full source portability.  
> Virtualization is inherently nonportable, and as it is mostly done in 
> hardware, software gets to do the quirky stuff that the hardware people 
> couldn't be bothered with :)  Instead we should be aiming at code reuse.

I'm not sure I see the distinction you're making. Operating systems
could also be considered "inherently nonportable", yet Linux and the
BSDs support an enormous range of platforms. If you're saying that we
shouldn't try to run x86 MMU code on a PowerPC then I can't agree
more. :)

Aside from code reuse though (on which I absolutely agree), it's
critical that the interface be the same, i.e. each architecture
implements the same interface in different ways. With that, all the
higher-level tools will work with minimal modification. (This is
analogous to an OS interface like POSIX.)

> I think there's some potential there:
> 
> - memory slot management, including the dirty log, could be mostly 
> reused (possibly updated for multiple page sizes). possibly msrs as well.

I'm not familiar with KVM's memory slots or dirty log. My first
impression was that the dirty log is tied to the x86 shadow pagetable
implementation, but I admit I haven't investigated further.

> - the vcpu management calls (get regs/set regs,  vcpu_run) can be 
> reused, but only as wrappers.  The actual contents (including the 
> kvm_run structure) would be very different.

Right, each architecture would define its own, and all code that touches
these data structures would be moved out of common code.

> I don't see a big difference between the ioctl layer and libkvm.  In 
> general, a libkvm function is an ioctl, and kvm_callback members are a 
> decoding of kvm_run fields.  If you edit kvm_run to suit your needs, you 
> can probably reuse some of it.

kvm_run as it stands is 100% x86-specific. (I doubt it could even be
easily adapted for ia64, which is more similar to x86 than PowerPC is.) So
right now the kernel ioctl interface has an architecture-specific
component, which violates the principle of identical interfaces I
described earlier.

That means we either a) need to change the kernel interface or b) define
a higher-level interface that *is* identical. That higher-level
interface would be libkvm, hence my original question.

Does my original question make more sense now? If you make libkvm the
official interface, you would at least need to hide the "cpuid"
callback, since it is intimately tied to an x86 instruction.
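
For instance, something along these lines, with the names invented just
for illustration: the portable callback table carries only
architecture-neutral operations and points at an arch-specific table for
the rest:

    struct kvm_arch_callbacks;        /* defined per architecture, opaque here */

    struct kvm_callbacks {
            int (*mmio_read)(void *opaque, __u64 addr, __u8 *data, int len);
            int (*mmio_write)(void *opaque, __u64 addr, __u8 *data, int len);
            int (*halt)(void *opaque, int vcpu);
            /* x86 would hang cpuid and in/out here; ppc its own hooks */
            struct kvm_arch_callbacks *arch;
    };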

-Hollis




Re: [kvm-devel] portability layer?

2007-03-26 Thread Avi Kivity
Hollis Blanchard wrote:
> Hi Avi, I was wondering what you think is the right abstraction layer to
> target for porting KVM to non-x86 architectures? To me it looks like
> libkvm is the answer.
>
> The kernel/userland interface is heavily x86-specific, including things
> like struct kvm_run. So it looks like the higher-level API of
> kvm_init(), kvm_create(), etc would be the right cut? struct
> kvm_callbacks is even reasonably portable, especially if cpuid is hidden
> behind an "arch" callback.
>   

Disclaimer: I know little about powerpc (or ia64).  What I say may or 
may not have any connection with reality.

I don't think we should be aiming at full source portability.  
Virtualization is inherently nonportable, and as it is mostly done in 
hardware, software gets to do the quirky stuff that the hardware people 
couldn't be bothered with :)  Instead we should be aiming at code reuse.  I 
think there's some potential there:

- memory slot management, including the dirty log, could be mostly 
reused (possibly updated for multiple page sizes). possibly msrs as well.
- the vcpu management calls (get regs/set regs,  vcpu_run) can be 
reused, but only as wrappers.  The actual contents (including the 
kvm_run structure) would be very different.

I don't see a big difference between the ioctl layer and libkvm.  In 
general, a libkvm function is an ioctl, and kvm_callback members are a 
decoding of kvm_run fields.  If you edit kvm_run to suit your needs, you 
can probably reuse some of it.
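
To illustrate, the run loop is little more than this (a sketch, with
approximate kvm_run field and callback names):

    static int run_once(int vcpu_fd, struct kvm_run *run,
                        struct kvm_callbacks *cb, void *opaque)
    {
            if (ioctl(vcpu_fd, KVM_RUN, run) < 0)
                    return -errno;

            switch (run->exit_reason) {
            case KVM_EXIT_MMIO:       /* decoded straight into callbacks */
                    if (run->mmio.is_write)
                            return cb->mmio_write(opaque, run->mmio.phys_addr,
                                                  run->mmio.data, run->mmio.len);
                    return cb->mmio_read(opaque, run->mmio.phys_addr,
                                         run->mmio.data, run->mmio.len);
            case KVM_EXIT_HLT:
                    return cb->halt(opaque, 0);
            default:                  /* KVM_EXIT_IO and friends elided */
                    return -EINVAL;
            }
    }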




-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.

