Re: [PATCH] lguest: disable SYSENTER for guests

2007-07-12 Thread Rusty Russell
On Thu, 2007-07-12 at 10:47 +0300, Avi Kivity wrote:
> Rusty Russell wrote:
> > But what kind of daredevil coder would propose such a thing?)
>
> Ah, so this is why you want ->next in preempt hooks.  Well, my plan for
> this sort of thing (kvm has the same issues with the *STAR family of
> MSRs) is to add a new hook on switching from kernel to userspace, and
> swap those MSRs there.  This allows not only the guest1->guest2 case to
> be optimized, but also guest->kthread->guest, which is a common pattern
> with I/O (and very common with -rt, which runs interrupts in threads).

Adding instructions to the syscall path is not going to make you
popular.  But if you do it, I'll use it 8)

Thanks,
Rusty.
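
(To make Avi's lazy-swap idea concrete, here is a minimal sketch; the
hook name, helpers, and per-cpu flag are illustrative assumptions, not
kvm's actual code.  Guest MSRs are loaded eagerly, but the host values
are only restored in a hypothetical return-to-userspace hook, so a
guest->kthread->guest switch never touches the MSRs at all.)

    #include <linux/types.h>
    #include <linux/percpu.h>
    #include <asm/msr.h>

    struct syscall_msrs {
            u64 star, lstar, cstar;
    };

    static struct syscall_msrs host_msrs;          /* saved once at setup */
    static DEFINE_PER_CPU(int, guest_msrs_loaded);

    static void save_host_syscall_msrs(void)
    {
            rdmsrl(MSR_STAR,  host_msrs.star);
            rdmsrl(MSR_LSTAR, host_msrs.lstar);
            rdmsrl(MSR_CSTAR, host_msrs.cstar);
    }

    /* Switching to a guest: load its syscall MSRs eagerly. */
    static void load_guest_syscall_msrs(const struct syscall_msrs *g)
    {
            wrmsrl(MSR_STAR,  g->star);
            wrmsrl(MSR_LSTAR, g->lstar);
            wrmsrl(MSR_CSTAR, g->cstar);
            __get_cpu_var(guest_msrs_loaded) = 1;
    }

    /* Hypothetical hook, run only just before the kernel returns to
     * host userspace.  Kernel->kernel switches (guest->kthread->guest)
     * never reach it, so they pay no wrmsr cost. */
    void return_to_user_msr_hook(void)
    {
            if (__get_cpu_var(guest_msrs_loaded)) {
                    wrmsrl(MSR_STAR,  host_msrs.star);
                    wrmsrl(MSR_LSTAR, host_msrs.lstar);
                    wrmsrl(MSR_CSTAR, host_msrs.cstar);
                    __get_cpu_var(guest_msrs_loaded) = 0;
            }
    }

This is the cost Rusty objects to: even the "nothing to restore" path
adds a test to every return to userspace.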




Re: lguest over qemu

2007-07-12 Thread Matias Zabaljauregui

Rusty,

You were right, it was the kernel configuration.
Changing the subject: next month I will be studying the possibility
of implementing the idea behind xenoprof for lguest.
Do you think this can be useful?
Any advice?

Thanks
Matias


2007/7/12, Rusty Russell [EMAIL PROTECTED]:


> On Wed, 2007-07-11 at 11:51 +0200, Matias Zabaljauregui wrote:
> > Hi,
> >
> > I'm setting up my lguest playing environment with qemu, but didn't
> > have a good start... maybe because my modest laptop only has 512MB
> > of RAM.
> >
> > This is my qemu command:
> >
> >   qemu -s -no-kqemu -m 400 -hda linux26.img \
> >        -net nic,model=rtl8139 -net tap
> >
> > (linux26.img includes a 2.6.21.5 kernel with the lguest-2.6.21-307
> > patch)
> >
> > This is my lguest command (executed within the qemu VM):
> >
> >   ./lguest 64m /boot/kernel-2.6.21.5 \
> >        --block=initrd-1.1-i386.img root=/dev/lbga
> >
> > I always get this message (I have kept trying with different virtual
> > RAM amounts):
> >
> > lguest: failed to get page 167304
>
> I did most of my development under qemu, so I don't think that's the
> issue.  Can you send me your .config?
>
> Thanks,
> Rusty.



Re: lguest over qemu

2007-07-12 Thread Rusty Russell
On Thu, 2007-07-12 at 17:02 +0200, Matias Zabaljauregui wrote:
> Rusty,
>
> You were right, it was the kernel configuration.

Hi Matias,

Any chance you could tell me what configuration option broke lguest?  I
should either fix it or document it...

> Changing the subject: next month I will be studying the possibility
> of implementing the idea behind xenoprof for lguest.
> Do you think this can be useful?
> Any advice?

One purpose of lguest is to demonstrate how virtualization should work,
so including profiling support is useful.  But the other purpose of
lguest is to be simple, so it depends on how simple the patch to lguest
is...

(Note that lguest doesn't support NMIs, but Steven has code for NMI
support for lguest-x86-64 which could be ported across).

Cheers!
Rusty.




Re: lguest over qemu

2007-07-12 Thread Steven Rostedt


> (Note that lguest doesn't support NMIs, but Steven has code for NMI
> support for lguest-x86-64 which could be ported across).

Rusty,

About that: is there a way to get an NMI-only stack on i386?  On x86_64
it's part of the TSS, so I can always know I have a good stack for the
NMI.  I'm not sure i386 has the same thing.  Or do we always have a good
stack whenever we are in ring 0?
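
For context, the x86_64 mechanism Steve refers to is the TSS's IST
(interrupt stack table): the NMI gate names an IST slot, and the CPU
unconditionally switches to that stack on delivery.  Roughly what the
2.6-era x86_64 setup looks like (a sketch with approximate names, not
lguest64 code):

    /* Point an IST slot at a dedicated, known-good NMI stack, then
     * route vector 2 (NMI) through it.  Gate IST values are 1-based;
     * tss->ist[] is 0-based, hence the -1. */
    tss->ist[NMI_STACK - 1] = (unsigned long)nmi_stack + NMI_STACK_SIZE;
    set_intr_gate_ist(2, &nmi, NMI_STACK);

i386 has no IST; the nearest guarantee of a fresh stack there is a task
gate with its own TSS, which is how the i386 kernel already handles
double faults.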

Oh, and btw, I've just rewritten all of the Lguest64 page table handling.
I'm just going over one design change that is really bothering me.  On
x86_64 we can have 2M or 4K pages (like PSE on i386).  But since the
shadow page tables use 4K pages, I have to map 2M guest pages as 4K
shadow pages.  This means the same guest address can appear as both a
PMD and a PTE, which is breaking some of my code.  I'm working on a fix
as I write this.
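
In sketch form (illustrative names, not the actual lguest64 code),
fanning a 2M guest PMD out into 4K shadow PTEs looks something like
this:

    #include <linux/types.h>
    #include <asm/pgtable.h>

    /* Shadow a 2M guest PMD (with _PAGE_PSE set) as 512 4K shadow
     * PTEs.  The PSE bit must be dropped: bit 7 in a PTE means PAT,
     * not "large page". */
    static void shadow_large_pmd(u64 *shadow_pte_page, u64 guest_pmd)
    {
            u64 frame = guest_pmd & ~((1ULL << 21) - 1); /* 2M-aligned base */
            u64 flags = (guest_pmd & 0xfff) & ~(u64)_PAGE_PSE;
            int i;

            for (i = 0; i < 512; i++)
                    shadow_pte_page[i] = (frame + ((u64)i << 12)) | flags;
    }

The headache follows directly: the guest sees one PMD-level mapping
while the shadow holds PTE-level entries, so code that assumes a guest
address lives at a unique level of the paging tree no longer holds.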

To bring you up to date on where I'm at: I've implemented a way to have
the HV mapped uniquely for each VCPU.  So there's an HV text section (the
same for all VCPUs), an HV VCPU data section (read-only in all rings with
the guest cr3), and an HV VCPU scratch pad section (read/write in rings
0, 1, and 2).  The guest kernel now runs in ring 1.  With this change,
I've already implemented a syscall trampoline that no longer needs to
switch to the host, and an iretq by the guest kernel goes directly to
guest user space (or the guest kernel).  The next version of lguest64
will be much cleaner and faster.
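
A sketch of that protection scheme (map_range(), the section names, and
struct lguest_vcpu are hypothetical):

    /* Map the three HV sections into a guest cr3.  map_range() is a
     * hypothetical helper: (pgd, virt, phys, size, pte flags). */
    static void map_hv_sections(u64 *guest_pgd, struct lguest_vcpu *vcpu)
    {
            /* HV text: one shared copy, executable, read-only. */
            map_range(guest_pgd, HV_TEXT_VA, hv_text_pa,
                      HV_TEXT_SIZE, _PAGE_PRESENT);

            /* Per-VCPU data: visible under the guest cr3 but read-only
             * (read-only even for rings 0-2 assumes CR0.WP is set). */
            map_range(guest_pgd, HV_DATA_VA, vcpu->data_pa,
                      HV_DATA_SIZE, _PAGE_PRESENT);

            /* Per-VCPU scratch pad: writable from rings 0-2.  Leaving
             * _PAGE_USER clear makes a page supervisor-only, and x86
             * paging treats rings 0-2 all as supervisor -- which is why
             * "read/write in rings 0, 1, and 2" is a single protection
             * setting, with ring-3 guest userspace locked out. */
            map_range(guest_pgd, HV_SCRATCH_VA, vcpu->scratch_pa,
                      HV_SCRATCH_SIZE, _PAGE_PRESENT | _PAGE_RW);
    }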

-- Steve
