can kvm tool load vmlinux?

2011-12-14 Thread David Evensky

I've been trying to see if I can use a vmlinux kernel binary image with kvm tool.
If I use a bzImage as

lkvm run -k .../bzImage --console serial -d /dev/null

which runs until it dies, as expected, since I didn't give it an initramfs or disk.
However, when I try:

lkvm run -k .../vmlinux --console serial -d /dev/null

I get the following printed and then nothing:

  # kvm run -k .../vmlinux -m 448 -c 4 --name guest-12289
  Warning: .../vmlinux is not a bzImage. Trying to load it as a flat binary...
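
(For context, a minimal sketch of the kind of signature check behind that warning, using the standard x86 boot-protocol constants: the 0xAA55 boot flag at offset 0x1fe and the "HdrS" magic at 0x202. It only illustrates why an ELF vmlinux falls through to the flat-binary path; it is not kvm tool's actual loader code.)

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Sketch: a bzImage carries the boot-protocol signatures; an ELF vmlinux does not. */
static bool looks_like_bzimage(FILE *f)
{
	unsigned char hdr[0x206];

	if (fread(hdr, sizeof(hdr), 1, f) != 1)
		return false;
	return hdr[0x1fe] == 0x55 && hdr[0x1ff] == 0xaa &&	/* boot flag 0xAA55 */
	       memcmp(&hdr[0x202], "HdrS", 4) == 0;		/* setup header magic */
}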


Is this supposed to work?

Thanks,
\dae



Re: [PATCH V3 1/2] kvm tools: Add ability to map guest RAM from hugetlbfs

2011-12-13 Thread David Evensky


On an x86 32-bit system (and using the 32-bit CodeSourcery toolchain on an x86_64
system) I get:

evensky@machine:~/.../linux-kvm/tools/kvm$ make
  CC   util/util.o
util/util.c: In function 'mmap_hugetlbfs':
util/util.c:93:17: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
util/util.c:99:7: error: format '%ld' expects argument of type 'long int', but argument 2 has type 'int' [-Werror=format]
cc1: all warnings being treated as errors

make: *** [util/util.o] Error 1
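
(For reference, a sketch of the kind of 32-bit-clean code these two errors call for: the statfs block size widened before the comparison, and printed with an explicit cast and a matching format. The field names, messages and temp-file handling are illustrative, not the exact util.c code.)

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/statfs.h>
#include <unistd.h>

/* Sketch: map 'size' bytes of guest RAM from a temp file on a hugetlbfs mount. */
static void *mmap_hugetlbfs_sketch(const char *htlbfs_path, unsigned long long size)
{
	char tmpname[256];
	struct statfs sfs;
	unsigned long long blk_size;
	void *addr;
	int fd;

	if (statfs(htlbfs_path, &sfs) < 0)
		return MAP_FAILED;
	blk_size = (unsigned long long)sfs.f_bsize;	/* widen once; avoids the sign-compare warning */
	if (blk_size == 0 || size % blk_size) {
		/* explicit cast plus %llu keeps -Wformat happy on 32-bit and 64-bit alike */
		fprintf(stderr, "hugetlbfs page size is %llu; size must be a multiple of it\n",
			blk_size);
		return MAP_FAILED;
	}
	snprintf(tmpname, sizeof(tmpname), "%s/kvmtool.XXXXXX", htlbfs_path);
	fd = mkstemp(tmpname);
	if (fd < 0)
		return MAP_FAILED;
	unlink(tmpname);				/* keep the mapping, drop the name */
	if (ftruncate(fd, size) < 0) {
		close(fd);
		return MAP_FAILED;
	}
	addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	close(fd);
	return addr;
}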

Thanks,
\dae

On Tue, Dec 13, 2011 at 05:21:46PM +1100, Matt Evans wrote:
> Add a --hugetlbfs commandline option to give a path to hugetlbfs-map guest
> memory (down in kvm__arch_init()).  For x86, guest memory is a normal
> ANON mmap() if this option is not provided, otherwise a hugetlbfs mmap.
> 
> This maps directly from a hugetlbfs temp file rather than using something
> like MADV_HUGEPAGES so that, if the user asks for hugepages, we definitely
> are using hugepages.  (This is particularly useful for architectures that
> don't yet support KVM without hugepages, so we definitely need to use
> them for the whole of guest RAM.)
> 
> Signed-off-by: Matt Evans 
> ---
>  tools/kvm/builtin-run.c  |4 +++-
>  tools/kvm/include/kvm/kvm.h  |4 ++--
>  tools/kvm/include/kvm/util.h |4 
>  tools/kvm/kvm.c  |4 ++--
>  tools/kvm/util.c |   38 ++
>  tools/kvm/x86/kvm.c  |   20 +---
>  6 files changed, 66 insertions(+), 8 deletions(-)
> 
> diff --git a/tools/kvm/builtin-run.c b/tools/kvm/builtin-run.c
> index 4411c9e..ab05f8c 100644
> --- a/tools/kvm/builtin-run.c
> +++ b/tools/kvm/builtin-run.c
> @@ -85,6 +85,7 @@ static const char *host_mac;
>  static const char *script;
>  static const char *guest_name;
>  static const char *sandbox;
> +static const char *hugetlbfs_path;
>  static struct virtio_net_params *net_params;
>  static bool single_step;
>  static bool readonly_image[MAX_DISK_IMAGES];
> @@ -437,6 +438,7 @@ static const struct option options[] = {
>tty_parser),
>   OPT_STRING('\0', "sandbox", &sandbox, "script",
>   "Run this script when booting into custom rootfs"),
> + OPT_STRING('\0', "hugetlbfs", &hugetlbfs_path, "path", "Hugetlbfs 
> path"),
>  
>   OPT_GROUP("Kernel options:"),
>   OPT_STRING('k', "kernel", &kernel_filename, "kernel",
> @@ -924,7 +926,7 @@ int kvm_cmd_run(int argc, const char **argv, const char *prefix)
>   guest_name = default_name;
>   }
>  
> - kvm = kvm__init(dev, ram_size, guest_name);
> + kvm = kvm__init(dev, hugetlbfs_path, ram_size, guest_name);
>  
>   kvm->single_step = single_step;
>  
> diff --git a/tools/kvm/include/kvm/kvm.h b/tools/kvm/include/kvm/kvm.h
> index 5fe6e75..7159952 100644
> --- a/tools/kvm/include/kvm/kvm.h
> +++ b/tools/kvm/include/kvm/kvm.h
> @@ -30,7 +30,7 @@ struct kvm_ext {
>  void kvm__set_dir(const char *fmt, ...);
>  const char *kvm__get_dir(void);
>  
> -struct kvm *kvm__init(const char *kvm_dev, u64 ram_size, const char *name);
> +struct kvm *kvm__init(const char *kvm_dev, const char *hugetlbfs_path, u64 ram_size, const char *name);
>  int kvm__recommended_cpus(struct kvm *kvm);
>  int kvm__max_cpus(struct kvm *kvm);
>  void kvm__init_ram(struct kvm *kvm);
> @@ -54,7 +54,7 @@ int kvm__enumerate_instances(int (*callback)(const char *name, int pid));
>  void kvm__remove_socket(const char *name);
>  
>  void kvm__arch_set_cmdline(char *cmdline, bool video);
> -void kvm__arch_init(struct kvm *kvm, const char *kvm_dev, u64 ram_size, const char *name);
> +void kvm__arch_init(struct kvm *kvm, const char *kvm_dev, const char *hugetlbfs_path, u64 ram_size, const char *name);
>  void kvm__arch_setup_firmware(struct kvm *kvm);
>  bool kvm__arch_cpu_supports_vm(void);
>  void kvm__arch_periodic_poll(struct kvm *kvm);
> diff --git a/tools/kvm/include/kvm/util.h b/tools/kvm/include/kvm/util.h
> index dc2e0b9..1f6fbbd 100644
> --- a/tools/kvm/include/kvm/util.h
> +++ b/tools/kvm/include/kvm/util.h
> @@ -20,6 +20,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #ifdef __GNUC__
>  #define NORETURN __attribute__((__noreturn__))
> @@ -75,4 +76,7 @@ static inline void msleep(unsigned int msecs)
>  {
>   usleep(MSECS_TO_USECS(msecs));
>  }
> +
> +void *mmap_hugetlbfs(const char *htlbfs_path, u64 size);
> +
>  #endif /* KVM__UTIL_H */
> diff --git a/tools/kvm/kvm.c b/tools/kvm/kvm.c
> index c54f886..35ca2c5 100644
> --- a/tools/kvm/kvm.c
> +++ b/tools/kvm/kvm.c
> @@ -306,7 +306,7 @@ int kvm__max_cpus(struct kvm *kvm)
>   return ret;
>  }
>  
> -struct kvm *kvm__init(const char *kvm_dev, u64 ram_size, const char *name)
> +struct kvm *kvm__init(const char *kvm_dev, const char *hugetlbfs_path, u64 ram_size, const char *name)
>  {
>   struct kvm *kvm;
>   int ret;
> @@ -339,7 +339,7 @@ struct kvm *kvm__init(const char *kvm_dev, u6

Re: kvm-tools: can't seem to set guest_mac and KVM_GET_SUPPORTED_CPUID failed.

2011-11-17 Thread David Evensky

Avi, sure:

evensky@waltz:~$ gcc supported-cpuid.c -o supported-cpuid
evensky@waltz:~$ ./supported-cpuid 
Returned entries: 37
func  ind  flags  -> 000d 756e6547 6c65746e 49656e69
func 0001 ind  flags  -> 000206a7 01100800 16b82203 0f8bfbff
func 0002 ind  flags 0006 -> 76035a01 00f0b2ff  00ca
func 0003 ind  flags  ->    
func 0004 ind  flags 0001 -> 1c004121 01c0003f 003f 
func 0004 ind 0001 flags 0001 -> 1c004122 01c0003f 003f 
func 0004 ind 0002 flags 0001 -> 1c004143 01c0003f 01ff 
func 0004 ind 0003 flags 0001 -> 1c03c163 03c0003f 0fff 0006
func 0004 ind 0004 flags 0001 ->    
func 0005 ind  flags  -> 0040 0040 0003 00021120
func 0006 ind  flags  -> 0077 0002 0009 
func 0007 ind  flags  ->    
func 0008 ind  flags  ->    
func 0009 ind  flags  ->    
func 000a ind  flags  -> 07300403   0603
func 000b ind  flags 0001 -> 0001 0002 0100 0001
func 000b ind 0001 flags 0001 -> 0004 0004 0201 0001
func 000b ind 0002 flags 0001 ->   0002 0001
func 000c ind  flags  ->    
func 000d ind  flags 0001 -> 0007 0340 0340 
func 000d ind 0001 flags 0001 -> 0001   
func 8001 ind  flags  ->   0001 28100800
func 000d ind 0003 flags 0001 ->    
func 000d ind 0004 flags 0001 ->    
func 000d ind 0005 flags 0001 ->    
func 8005 ind  flags  ->    
func 8000 ind  flags  -> 8008   
func 8001 ind  flags  ->   0001 28100800
func 8002 ind  flags  -> 20202020 49202020 6c65746e 20295228
func 8003 ind  flags  -> 65726f43 294d5428 2d376920 30323632
func 8004 ind  flags  -> 5043204d 20402055 30372e32 007a4847
func 8005 ind  flags  ->    
func 8006 ind  flags  ->   01006040 
func 8007 ind  flags  ->    0100
func 8008 ind  flags  -> 3024   
func 4000 ind  flags  ->  4b4d564b 564b4d56 004d
func 4001 ind  flags  -> 011b  0000 

\dae

On Thu, Nov 17, 2011 at 06:49:16PM +0200, Avi Kivity wrote:
> On 11/17/2011 06:29 PM, David Evensky wrote:
> > Avi,
> >
> > evensky@waltz:~/megatux/vmatic$ perl -e 'for $cnt (1..1){ $o=`taskset 
> > 0x01 ./4sasha`; chomp($o); $histogram{$o}++}; for $o (sort keys 
> > %histogram){print "$o: $histogram{$o}\n"}'
> > KVM_GET_SUPPORTED_CPUID returned -1 with errno 7: 3
> > Returned entries: 31: 9995
> > Returned entries: 32: 1
> > Returned entries: 64: 1
> > evensky@waltz:~/megatux/vmatic$ perl -e 'for $cnt (1..1){ $o=`taskset 
> > 0x02 ./4sasha`; chomp($o); $histogram{$o}++}; for $o (sort keys 
> > %histogram){print "$o: $histogram{$o}\n"}'
> > KVM_GET_SUPPORTED_CPUID returned -1 with errno 7: 1
> > Returned entries: 31: 
> > evensky@waltz:~/megatux/vmatic$ perl -e 'for $cnt (1..1){ $o=`taskset 
> > 0x03 ./4sasha`; chomp($o); $histogram{$o}++}; for $o (sort keys 
> > %histogram){print "$o: $histogram{$o}\n"}'
> > KVM_GET_SUPPORTED_CPUID returned -1 with errno 7: 3
> > Returned entries: 31: 9995
> > Returned entries: 57: 1
> > Returned entries: 58: 1
> > evensky@waltz:~/megatux/vmatic$ perl -e 'for $cnt (1..1){ $o=`taskset 
> > 0x04 ./4sasha`; chomp($o); $histogram{$o}++}; for $o (sort keys 
> > %histogram){print "$o: $histogram{$o}\n"}'
> > Returned entries: 31: 1
> > evensky@waltz:~/megatux/vmatic$ perl -e 'for $cnt (1..1){ $o=`taskset 
> > 0x08 ./4sasha`; chomp($o); $histogram{$o}++}; for $o (sort keys 
> > %histogram){p

Re: kvm-tools: can't seem to set guest_mac and KVM_GET_SUPPORTED_CPUID failed.

2011-11-17 Thread David Evensky

Avi,

evensky@waltz:~/megatux/vmatic$ perl -e 'for $cnt (1..1){ $o=`taskset 0x01 
./4sasha`; chomp($o); $histogram{$o}++}; for $o (sort keys %histogram){print 
"$o: $histogram{$o}\n"}'
KVM_GET_SUPPORTED_CPUID returned -1 with errno 7: 3
Returned entries: 31: 9995
Returned entries: 32: 1
Returned entries: 64: 1
evensky@waltz:~/megatux/vmatic$ perl -e 'for $cnt (1..1){ $o=`taskset 0x02 
./4sasha`; chomp($o); $histogram{$o}++}; for $o (sort keys %histogram){print 
"$o: $histogram{$o}\n"}'
KVM_GET_SUPPORTED_CPUID returned -1 with errno 7: 1
Returned entries: 31: 
evensky@waltz:~/megatux/vmatic$ perl -e 'for $cnt (1..1){ $o=`taskset 0x03 
./4sasha`; chomp($o); $histogram{$o}++}; for $o (sort keys %histogram){print 
"$o: $histogram{$o}\n"}'
KVM_GET_SUPPORTED_CPUID returned -1 with errno 7: 3
Returned entries: 31: 9995
Returned entries: 57: 1
Returned entries: 58: 1
evensky@waltz:~/megatux/vmatic$ perl -e 'for $cnt (1..1){ $o=`taskset 0x04 
./4sasha`; chomp($o); $histogram{$o}++}; for $o (sort keys %histogram){print 
"$o: $histogram{$o}\n"}'
Returned entries: 31: 1
evensky@waltz:~/megatux/vmatic$ perl -e 'for $cnt (1..1){ $o=`taskset 0x08 
./4sasha`; chomp($o); $histogram{$o}++}; for $o (sort keys %histogram){print 
"$o: $histogram{$o}\n"}'
KVM_GET_SUPPORTED_CPUID returned -1 with errno 7: 1
Returned entries: 31: 9998
Returned entries: 54: 1

\dae

On Thu, Nov 17, 2011 at 06:20:33PM +0200, Avi Kivity wrote:
> On 11/17/2011 06:12 PM, David Evensky wrote:
> >
> > evensky@waltz:~/megatux/vmatic$ perl -e 'for $cnt (1..10){ 
> > $o=`./4sasha`; chomp($o); $histogram{$o}++}; for $o (keys %histogram){print 
> > "$o: $histogram{$o}\n"}'
> > Returned entries: 31: 99987
> > Returned entries: 56: 1
> > KVM_GET_SUPPORTED_CPUID returned -1 with errno 7: 8
> > Returned entries: 37: 4
> >
> >
> 
> So it seems to be cpu migration related.  But there's a get_cpu() in
> do_cpuid_ent().
> 
> What happens if you change `./4sasha` to `taskset 1 ./4sasha`? or 2 4 8
> 10 20 etc?
> 
> -- 
> error compiling committee.c: too many arguments to function
> 



Re: kvm-tools: can't seem to set guest_mac and KVM_GET_SUPPORTED_CPUID failed.

2011-11-17 Thread David Evensky


evensky@waltz:~/megatux/vmatic$ perl -e 'for $cnt (1..10){ $o=`./4sasha`; 
chomp($o); $histogram{$o}++}; for $o (keys %histogram){print "$o: 
$histogram{$o}\n"}'
Returned entries: 31: 99987
Returned entries: 56: 1
KVM_GET_SUPPORTED_CPUID returned -1 with errno 7: 8
Returned entries: 37: 4

\dae

On Thu, Nov 17, 2011 at 05:53:50PM +0200, Avi Kivity wrote:
> On 11/17/2011 05:52 PM, Sasha Levin wrote:
> > On Thu, 2011-11-17 at 07:50 -0800, David Evensky wrote:
> > > It prints 'Returned entries: 31'
> > > \dae
> >
> > That's the OK scenario, could you run it several times to see if you can
> > trigger it to print something else?
> 
> Maybe with 'taskset' to get it to run on different cpus?
> 
> In a loop please.
> 
> -- 
> error compiling committee.c: too many arguments to function
> 



Re: kvm-tools: can't seem to set guest_mac and KVM_GET_SUPPORTED_CPUID failed.

2011-11-17 Thread David Evensky
On Thu, Nov 17, 2011 at 08:07:35AM +0200, Sasha Levin wrote:
> On Wed, 2011-11-16 at 16:42 -0800, David Evensky wrote:
> > 
> > ...
> This should be '-n mode=tap,guest_mac=00:11:11:11:11:11'
> ...
Thanks!

> > 
> > Also, when I start the guest I sometimes get the following error message:
> > 
> >   # kvm run -k /path/to/bzImage-3.0.8 -m 256 -c 1 --name guest-15757
> > KVM_GET_SUPPORTED_CPUID failed: Argument list too long
> 
> Heh, we were talking about it a couple of weeks ago, but since I couldn't
> reproduce it here (it was happening to me before, but now it's gone) the
> discussion died.
> 
> Could you please provide some statistics on how often it happens to you?
> Also, can you try wrapping the ioctl with a 'while (1)' (there's only 1
> ioctl call to KVM_GET_SUPPORTED_CPUID) and see if it would happen at
> some point?

Last night I was getting it > 10% of the time; but now that the sun has risen, of
course I can't reproduce it in 10 tries with the 3 sets of args that I was
using. I did upgrade a few packages last night, but nothing that
should affect this.
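
(Aside: "Argument list too long" is E2BIG, which KVM_GET_SUPPORTED_CPUID also returns when the table passed in is too small. Below is a minimal standalone sketch of a retry loop around the single ioctl call, growing the table each time, in the spirit of the while (1) suggestion quoted above; it is illustrative, not what kvm tool did at the time.)

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	struct kvm_cpuid2 *cpuid = NULL;
	int nent = 100;
	int kvm, r;

	kvm = open("/dev/kvm", O_RDWR);
	if (kvm < 0)
		return 1;
	do {
		free(cpuid);
		cpuid = calloc(1, sizeof(*cpuid) + nent * sizeof(struct kvm_cpuid_entry2));
		cpuid->nent = nent;
		r = ioctl(kvm, KVM_GET_SUPPORTED_CPUID, cpuid);
		nent *= 2;			/* grow and retry if the kernel said E2BIG */
	} while (r < 0 && errno == E2BIG);

	if (r < 0)
		printf("KVM_GET_SUPPORTED_CPUID returned %d with errno %d\n", r, errno);
	else
		printf("Returned entries: %d\n", cpuid->nent);
	return 0;
}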

> Thanks!
> 
> > I haven't seen that before.
> > 
> > Thanks,
> > \dae
> > ...



Re: kvm-tools: can't seem to set guest_mac and KVM_GET_SUPPORTED_CPUID failed.

2011-11-17 Thread David Evensky

It prints 'Returned entries: 31'
\dae

On Thu, Nov 17, 2011 at 05:43:49PM +0200, Sasha Levin wrote:
> On Thu, 2011-11-17 at 07:38 -0800, David Evensky wrote:
> > On Thu, Nov 17, 2011 at 08:56:38AM +0200, Sasha Levin wrote:
> > > On Thu, 2011-11-17 at 08:53 +0200, Pekka Enberg wrote:
> > > > On Thu, Nov 17, 2011 at 8:07 AM, Sasha Levin  
> > > > wrote:
> > > > >> Also, when I start the guest I sometimes get the following error 
> > > > >> message:
> > > 
> > > 
> > > David, which host kernel do you use?
> > 
> > I'm using the kernel that ships with Debian Sid, which I last booted as 
> > 3.0.0-2-amd64.
> > My guest kernel is a 32bit kernel built from kernel.org's linux-3.0.8.
> 
> Hm... This should be new enough...
> 
> Could you please try compiling and running the code below several times
> and see if you get an error message? This should help us understand if
> it's a usermode or a kernel issue.
> 
> Thanks!
> 
>  cut here---
> 
> #include <stdio.h>
> #include <stdlib.h>
> #include <fcntl.h>
> #include <unistd.h>
> #include <errno.h>
> #include <sys/ioctl.h>
> #include <linux/kvm.h>
> 
> int main(void)
> {
>   struct kvm_cpuid2 *cpuid;
>   int kvm, r = 0;
> 
>   kvm = open("/dev/kvm", O_RDWR);
>   cpuid = malloc(sizeof(*cpuid) + sizeof(struct kvm_cpuid_entry2) * 100);
>   cpuid->nent = 100;
> 
>   r = ioctl(kvm, KVM_GET_SUPPORTED_CPUID, cpuid);
>   if (r)
>   printf("KVM_GET_SUPPORTED_CPUID returned %d with errno %d\n", 
> r, errno);
>   else
>   printf("Returned entries: %d\n", cpuid->nent);
> 
>   free(cpuid);
>   close(kvm);
> 
>   return 0;
> }
> 
> -- 
> 
> Sasha.
> 


Re: kvm-tools: can't seem to set guest_mac and KVM_GET_SUPPORTED_CPUID failed.

2011-11-17 Thread David Evensky
On Thu, Nov 17, 2011 at 08:56:38AM +0200, Sasha Levin wrote:
> On Thu, 2011-11-17 at 08:53 +0200, Pekka Enberg wrote:
> > On Thu, Nov 17, 2011 at 8:07 AM, Sasha Levin  
> > wrote:
> > >> Also, when I start the guest I sometimes get the following error message:
> 
> 
> David, which host kernel do you use?

I'm using the kernel that ships with Debian Sid, which I last booted as 
3.0.0-2-amd64.
My guest kernel is a 32bit kernel built from kernel.org's linux-3.0.8.

> 
> -- 
> 
> Sasha.
> 


kvm-tools: can't seem to set guest_mac and KVM_GET_SUPPORTED_CPUID failed.

2011-11-16 Thread David Evensky


There was a patch (quoted below) that changed networking at the end of September.
When I try to set the guest_mac following the usage in that patch (and an admittedly
too-brief look at the code), the guest's MAC address isn't being set. I'm using:

sudo /path/to/linux-kvm/tools/kvm/kvm run -c 1 -m 256 -k /path/to/bzImage-3.0.8 \
   -i /path/to/initramfs-host.img --console serial -p ' console=ttyS0  ' -n tap,guest_mac=00:11:11:11:11:11

In the guest I get:

# ifconfig eth0
eth0  Link encap:Ethernet  HWaddr 02:15:15:15:15:15  
  inet addr:192.168.122.237  Bcast:192.168.122.255  Mask:255.255.255.0
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:24 errors:0 dropped:2 overruns:0 frame:0
  TX packets:2 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:1874 (1.8 KiB)  TX bytes:656 (656.0 B)

which is the default.

Also, when I start the guest I sometimes get the following error message:

  # kvm run -k /path/to/bzImage-3.0.8 -m 256 -c 1 --name guest-15757
KVM_GET_SUPPORTED_CPUID failed: Argument list too long

I haven't seen that before.

Thanks,
\dae

On Sat, Sep 24, 2011 at 12:17:51PM +0300, Sasha Levin wrote:
> This patch adds support for multiple network devices. The command line syntax
> changes to the following:
> 
>   --network/-n [mode=[tap/user/none]] [guest_ip=[guest ip]] [host_ip=[host_ip]] [guest_mac=[guest_mac]] [script=[script]]
> 
> Each of the parameters is optional, and the config defaults to a TAP based
> networking with a random MAC.
> ...
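
(For reference, an example invocation using that syntax; the paths and addresses are illustrative, and the mode=/guest_mac= spelling is the one Sasha confirms earlier in this thread:)

  kvm run -k /path/to/bzImage -m 256 -c 1 \
      -n mode=tap,guest_mac=00:11:11:11:11:11,guest_ip=192.168.33.15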



Re: [PATCH] kvm tools: Allow remapping guest TTY into host PTS

2011-09-15 Thread David Evensky
>'kvm run --tty [id] [other options]'
> > >
> > > The tty will be mapped to a pts and will be printed on the screen:
> > >'  Info: Assigned terminal 1 to pty /dev/pts/X'
> > >
> > > At this point, it is possible to communicate with the guest using that 
> > > pty.
> > >
> > > This is useful for debugging guest kernel using KGDB:
> > >
> > > 1. Run the guest:
> > >'kvm run -k [vmlinuz] -p "kgdboc=ttyS1 kgdbwait" --tty 1'
> > >
> > > And see which PTY got assigned to ttyS1.
> > >
> > > 2. Run GDB on the host:
> > >'gdb [vmlinuz]'
> > >
> > > 3. Connect to the guest (from within GDB):
> > >'target remote /dev/pts/X'
> > >
> > > 4. Start debugging! (enter 'continue' to continue boot).
> > >
> > > Cc: David Evensky 
> > > Signed-off-by: Sasha Levin 
> > 
> > Neat! Would a tools/kvm/Documentation/debugging.txt be helpful for
> > people who want to do kernel debugging with kvmtool?
> 
> I'll write a basic doc with the details provided above.
> 
> David, does this patch allows you to properly debug guest kernels? If
> so, could you mail back any issues or hacks you had to do to set it up
> so I could add it to the doc and move it into 'Documentation/'?
> 
> -- 
> 
> Sasha.
> 


Re: kgdb hooks and kvm-tool

2011-09-14 Thread David Evensky

Thanks!
\dae

On Thu, Sep 15, 2011 at 08:39:03AM +0300, Sasha Levin wrote:
> On Thu, 2011-09-15 at 08:32 +0300, Pekka Enberg wrote:
> > On Thu, Sep 15, 2011 at 2:17 AM, David Evensky
> >  wrote:
> > > Hi. Is it possible to use kvm-tool with a kernel compiled with kgdb?
> > > I've tried adding 'kgdbwait kgdboc=ttyS0' to -p, but that doesn't seem
> > > to work.
> > 
> > I've never tried kgdb myself but I'm rather surprised it doesn't just
> > work. Sasha, Cyrill, Asias, have you guys ever tried kvmtool with
> > kgdb?
> 
> You can either use 'kgdboc=kbd' to use it over the keyboard. I also have
> a patch which uses forktty() to spawn serial consoles and redirect guest
> tty's into them, but it's somewhat ugly.
> 
> Give me a day or two to make it nicer and I'll send it over.
> 
> -- 
> 
> Sasha.
> 


kgdb hooks and kvm-tool

2011-09-14 Thread David Evensky

Hi. Is it possible to use kvm-tool with a kernel compiled with kgdb?
I've tried adding 'kgdbwait kgdboc=ttyS0' to -p, but that doesn't seem
to work.

Thanks,
\dae


Re: [PATCH] kvm tools: adds a PCI device that exports a host shared segment as a PCI BAR in the guest

2011-08-28 Thread David Evensky
On Sun, Aug 28, 2011 at 10:34:45AM +0300, Avi Kivity wrote:
> On 08/26/2011 01:03 AM, David Evensky wrote:
> >I need to specify the physical address because I need to ioremap the
> >memory during boot.
> 
> Did you consider pci_ioremap_bar()?

No, the code needs a physical memory address, not a PCI device. I
suppose we could do something special for PCI devices, but that wasn't
our intent. That said, this was also one of those things you learn every
day. :-)
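
(For reference, a minimal guest-kernel sketch of the two approaches being contrasted here; the BAR index, fixed address and size are illustrative assumptions, not values from this thread.)

#include <linux/pci.h>
#include <linux/io.h>

/* Avi's suggestion: let the PCI core report where the BAR actually landed. */
static void __iomem *map_via_bar(struct pci_dev *pdev)
{
	return pci_ioremap_bar(pdev, 0);	/* needs a struct pci_dev, not a fixed address */
}

/* The use case described above: a physical address known at early boot,
 * mapped directly without a struct pci_dev in hand. */
static void __iomem *map_fixed(void)
{
	return ioremap(0xc8000000UL, 16 << 20);	/* illustrative address and size */
}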

> >The production issue I think is a memory limitation. We certainly do
> >use QEMU a lot; but for this the kvm tool is a better fit.
> >
> 
> What is this memory limitation?

It isn't a hard and fast number; just trying to maximize the number of
guests we can have.

> -- 
> I have a truly marvellous patch that fixes the bug which this
> signature is too narrow to contain.
> 


Re: [PATCH] kvm tools: adds a PCI device that exports a host shared segment as a PCI BAR in the guest

2011-08-26 Thread David Evensky

Sasha,
That is wonderful. It sounds like it should be OK, and I will be happy
to test.

\dae

On Fri, Aug 26, 2011 at 09:33:58AM +0300, Sasha Levin wrote:
> On Thu, 2011-08-25 at 08:08 -0700, David Evensky wrote:
> > Adding in the rest of what ivshmem does shouldn't affect our use, *I
> > think*.  I hadn't intended this to do everything that ivshmem does,
> > but I can see how that would be useful. It would be cool if it could
> > grow into that. 
> 
> David,
> 
> I've added most of ivshmem on top of your driver (still working on fully
> understanding the client-server protocol).
> 
> The changes that might affect your use have been simple:
>  * The shared memory BAR is now 2 instead of 0.
>  * Vendor and device IDs changed.
>  * The device now has MSI-X capability in the header and supporting code
> to run it.
> 
> If these points won't affect your use I think there shouldn't be any
> other issues.
> 
> -- 
> 
> Sasha.
> 


Re: [PATCH] kvm tools: adds a PCI device that exports a host shared segment as a PCI BAR in the guest

2011-08-25 Thread David Evensky


Thanks. My initial version did use the E820 map (thus the reason I
want to have an 'address family'), but it was suggested that PCI would
be a better way to go. When I get the rest of the project going, I
will certainly test against that. I am going to have to do a LOT of
ioremap's so that might be the bottleneck. That said, I don't think
it will end up as an issue.

\dae

On Thu, Aug 25, 2011 at 03:08:52PM -0700, Eric Northup wrote:
> Just FYI, one issue that I found with exposing host memory regions as
> a PCI BAR (including via a very old version of the ivshmem driver...
> haven't tried a newer one) is that x86's pci_mmap_page_range doesn't
> want to set up a write-back cacheable mapping of a BAR.
> 
> It may not matter for your requirements, but the uncached access
> reduced guest<->host bandwidth via the shared memory driver by a lot.
> 
> 
> If you need the physical address to be fixed, you might be better off
> by reserving a memory region in the e820 map rather than a PCI BAR,
> since BARs can move around.
> 
> 
> On Thu, Aug 25, 2011 at 8:08 AM, David Evensky
>  wrote:
> >
> > Adding in the rest of what ivshmem does shouldn't affect our use, *I
> > think*. I hadn't intended this to do everything that ivshmem does,
> > but I can see how that would be useful. It would be cool if it could
> > grow into that.
> >
> > Our requirements for the driver in kvm tool are that another program
> > on the host can create a shared segment (anonymous, non-file backed)
> > with a specified handle, size, and contents. That this segment is
> > available to the guest at boot time at a specified address and that no
> > driver will change the contents of the memory except under direct user
> > action. Also, when the guest goes away the shared memory segment
> > shouldn't be affected (e.g. contents changed). Finally, we cannot
> > change the lightweight nature of kvm tool.
> >
> > This is the feature of ivshmem that I need to check today. I did some
> > testing a month ago, but it wasn't detailed enough to check this out.
> >
> > \dae
> >
> >
> >
> >
> > On Thu, Aug 25, 2011 at 02:25:48PM +0300, Sasha Levin wrote:
> > > On Thu, 2011-08-25 at 11:59 +0100, Stefan Hajnoczi wrote:
> > > > On Thu, Aug 25, 2011 at 11:37 AM, Pekka Enberg  
> > > > wrote:
> > > > > Hi Stefan,
> > > > >
> > > > > On Thu, Aug 25, 2011 at 1:31 PM, Stefan Hajnoczi  
> > > > > wrote:
> > > > >>> It's obviously not competing. One thing you might want to consider 
> > > > >>> is
> > > > >>> making the guest interface compatible with ivshmem. Is there any 
> > > > >>> reason
> > > > >>> we shouldn't do that? I don't consider that a requirement, just 
> > > > >>> nice to
> > > > >>> have.
> > > > >>
> > > > >> The point of implementing the same interface as ivshmem is that users
> > > > >> don't need to rejig guests or applications in order to switch between
> > > > >> hypervisors. A different interface also prevents same-to-same
> > > > >> benchmarks.
> > > > >>
> > > > >> There is little benefit to creating another virtual device interface
> > > > >> when a perfectly good one already exists. The question should be: 
> > > > >> how
> > > > >> is this shmem device different and better than ivshmem? If there is
> > > > >> no justification then implement the ivshmem interface.
> > > > >
> > > > > So which interface are we actually taking about? Userspace/kernel in 
> > > > > the
> > > > > guest or hypervisor/guest kernel?
> > > >
> > > > The hardware interface. Same PCI BAR layout and semantics.
> > > >
> > > > > Either way, while it would be nice to share the interface but it's 
> > > > > not a
> > > > > *requirement* for tools/kvm unless ivshmem is specified in the virtio
> > > > > spec or the driver is in mainline Linux. We don't intend to require 
> > > > > people
> > > > > to implement non-standard and non-Linux QEMU interfaces. OTOH,
> > > > > ivshmem would make the PCI ID problem go away.
> > > >
> > > > Introducing yet another non-standard and non-Linux interface doesn't
> > > > help though

Re: [PATCH] kvm tools: adds a PCI device that exports a host shared segment as a PCI BAR in the guest

2011-08-25 Thread David Evensky

I need to specify the physical address because I need to ioremap the
memory during boot.

The production issue I think is a memory limitation. We certainly do
use QEMU a lot; but for this the kvm tool is a better fit.

\dae

On Fri, Aug 26, 2011 at 12:11:03AM +0300, Avi Kivity wrote:
> On 08/26/2011 12:00 AM, David Evensky wrote:
> >I've tested ivshmem with the latest git pull (had minor trouble
> >building on debian sid, vnc and unused var, but trivial to work
> >around).
> >
> >QEMU's  -device ivshmem,size=16,shm=/kvm_shmem
> >
> >seems to function as my proposed
> >
> > --shmem pci:0xfd00:16M:handle=/kvm_shmem
> >
> >except that I can't specify the BAR. I am able to read what
> >I'm given, 0xfd00, from lspci -vvv; but for our application
> >we need to be able to specify the address on the command line.
> >
> >If folks are open, I would like to request this feature in the
> >ivshmem.
> 
> It's not really possible. Qemu does not lay out the BARs, the guest
> does (specifically the bios).  You might be able to re-arrange the
> layout after the guest boots.
> 
> Why do you need the BAR at a specific physical address?
> 
> >It would be cool to test our application with QEMU,
> >even if we can't use it in production.
> 
> Why can't you use qemu in production?
> 
> -- 
> I have a truly marvellous patch that fixes the bug which this
> signature is too narrow to contain.
> 


Re: [PATCH] kvm tools: adds a PCI device that exports a host shared segment as a PCI BAR in the guest

2011-08-25 Thread David Evensky
On Thu, Aug 25, 2011 at 04:35:29PM -0500, Anthony Liguori wrote:
> 
> >--- linux-kvm/tools/kvm/include/kvm/virtio-pci-dev.h 2011-08-09 15:38:48.760120973 -0700
> >+++ linux-kvm_pci_shmem/tools/kvm/include/kvm/virtio-pci-dev.h 2011-08-18 10:06:12.171539230 -0700
> >@@ -15,10 +15,13 @@
> >  #define PCI_DEVICE_ID_VIRTIO_BLN   0x1005
> >  #define PCI_DEVICE_ID_VIRTIO_P9   0x1009
> >  #define PCI_DEVICE_ID_VESA 0x2000
> >+#define PCI_DEVICE_ID_PCI_SHMEM 0x0001
> >
> >  #define PCI_VENDOR_ID_REDHAT_QUMRANET  0x1af4
> >+#define PCI_VENDOR_ID_PCI_SHMEM 0x0001
> >  #define PCI_SUBSYSTEM_VENDOR_ID_REDHAT_QUMRANET   0x1af4
> 
> FYI, that's not a valid vendor and device ID.
> 
> Perhaps the RH folks would be willing to reserve a portion of the
> device ID space in their vendor ID for ya'll to play around with.

That would be cool! I've started asking around some folks at my
place to see if we have such a thing; but so far, I've heard nothing.



> 
> Regards,
> 
> Anthony Liguori


Re: [PATCH] kvm tools: adds a PCI device that exports a host shared segment as a PCI BAR in the guest

2011-08-25 Thread David Evensky

I've tested ivshmem with the latest git pull (had minor trouble
building on debian sid, vnc and unused var, but trivial to work
around).

QEMU's  -device ivshmem,size=16,shm=/kvm_shmem

seems to function as my proposed

--shmem pci:0xfd00:16M:handle=/kvm_shmem

except that I can't specify the BAR. I am able to read what
I'm given, 0xfd00, from lspci -vvv; but for our application
we need to be able to specify the address on the command line.

If folks are open, I would like to request this feature in the
ivshmem. It would be cool to test our application with QEMU,
even if we can't use it in production.

I didn't check the case where QEMU must create the shared
segment from scratch, etc. so I didn't test what differences
there are with my proposed 'create' flag or not, but I did look
at the ivshmem source and looks like it does the right thing.
(Makes me want to steal code to make mine better :-))


\dae

On Thu, Aug 25, 2011 at 08:08:06AM -0700, David Evensky wrote:
> 
> Adding in the rest of what ivshmem does shouldn't affect our use, *I
> think*.  I hadn't intended this to do everything that ivshmem does,
> but I can see how that would be useful. It would be cool if it could
> grow into that.
> 
> Our requirements for the driver in kvm tool are that another program
> on the host can create a shared segment (anonymous, non-file backed)
> with a specified handle, size, and contents. That this segment is
> available to the guest at boot time at a specified address and that no
> driver will change the contents of the memory except under direct user
> action. Also, when the guest goes away the shared memory segment
> shouldn't be affected (e.g. contents changed). Finally, we cannot
> change the lightweight nature of kvm tool.
> 
> This is the feature of ivshmem that I need to check today. I did some
> testing a month ago, but it wasn't detailed enough to check this out.
> 
> \dae
> 
> 
> 
> 
> On Thu, Aug 25, 2011 at 02:25:48PM +0300, Sasha Levin wrote:
> > On Thu, 2011-08-25 at 11:59 +0100, Stefan Hajnoczi wrote:
> > > On Thu, Aug 25, 2011 at 11:37 AM, Pekka Enberg  wrote:
> > > > Hi Stefan,
> > > >
> > > > On Thu, Aug 25, 2011 at 1:31 PM, Stefan Hajnoczi  
> > > > wrote:
> > > >>> It's obviously not competing. One thing you might want to consider is
> > > >>> making the guest interface compatible with ivshmem. Is there any 
> > > >>> reason
> > > >>> we shouldn't do that? I don't consider that a requirement, just nice 
> > > >>> to
> > > >>> have.
> > > >>
> > > >> The point of implementing the same interface as ivshmem is that users
> > > >> don't need to rejig guests or applications in order to switch between
> > > >> hypervisors.  A different interface also prevents same-to-same
> > > >> benchmarks.
> > > >>
> > > >> There is little benefit to creating another virtual device interface
> > > >> when a perfectly good one already exists.  The question should be: how
> > > >> is this shmem device different and better than ivshmem?  If there is
> > > >> no justification then implement the ivshmem interface.
> > > >
> > > > So which interface are we actually taking about? Userspace/kernel in the
> > > > guest or hypervisor/guest kernel?
> > > 
> > > The hardware interface.  Same PCI BAR layout and semantics.
> > > 
> > > > Either way, while it would be nice to share the interface but it's not a
> > > > *requirement* for tools/kvm unless ivshmem is specified in the virtio
> > > > spec or the driver is in mainline Linux. We don't intend to require 
> > > > people
> > > > to implement non-standard and non-Linux QEMU interfaces. OTOH,
> > > > ivshmem would make the PCI ID problem go away.
> > > 
> > > Introducing yet another non-standard and non-Linux interface doesn't
> > > help though.  If there is no significant improvement over ivshmem then
> > > it makes sense to let ivshmem gain critical mass and more users
> > > instead of fragmenting the space.
> > 
> > I support doing it ivshmem-compatible, though it doesn't have to be a
> > requirement right now (that is, use this patch as a base and build it
> > towards ivshmem - which shouldn't be an issue since this patch provides
> > the PCI+SHM parts which are required by ivshmem anyway).
> > 
> > ivshmem is a good, documented, stable in

Re: [PATCH] kvm tools: adds a PCI device that exports a host shared segment as a PCI BAR in the guest

2011-08-25 Thread David Evensky

Adding in the rest of what ivshmem does shouldn't affect our use, *I
think*.  I hadn't intended this to do everything that ivshmem does,
but I can see how that would be useful. It would be cool if it could
grow into that.

Our requirements for the driver in kvm tool are that another program
on the host can create a shared segment (anonymous, non-file backed)
with a specified handle, size, and contents. That this segment is
available to the guest at boot time at a specified address and that no
driver will change the contents of the memory except under direct user
action. Also, when the guest goes away the shared memory segment
shouldn't be affected (e.g. contents changed). Finally, we cannot
change the lightweight nature of kvm tool.
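
(For reference, a minimal sketch of the host-side program described in the first requirement: create a named shared segment with shm_open, size it, and fill it. The handle, size and fill byte are illustrative; link with -lrt on older glibc.)

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const char *handle = "/newmem";		/* illustrative shm_open handle */
	size_t size = 16 << 20;			/* illustrative: 16MB */
	void *mem;
	int fd;

	fd = shm_open(handle, O_CREAT | O_RDWR, 0600);
	if (fd < 0)
		return 1;
	if (ftruncate(fd, size) < 0)
		return 1;
	mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (mem == MAP_FAILED)
		return 1;
	memset(mem, 0x42, size);		/* the guest will see these contents */
	/* the segment outlives this process until shm_unlink(handle) is called */
	return 0;
}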

This is the feature of ivshmem that I need to check today. I did some
testing a month ago, but it wasn't detailed enough to check this out.

\dae




On Thu, Aug 25, 2011 at 02:25:48PM +0300, Sasha Levin wrote:
> On Thu, 2011-08-25 at 11:59 +0100, Stefan Hajnoczi wrote:
> > On Thu, Aug 25, 2011 at 11:37 AM, Pekka Enberg  wrote:
> > > Hi Stefan,
> > >
> > > On Thu, Aug 25, 2011 at 1:31 PM, Stefan Hajnoczi  
> > > wrote:
> > >>> It's obviously not competing. One thing you might want to consider is
> > >>> making the guest interface compatible with ivshmem. Is there any reason
> > >>> we shouldn't do that? I don't consider that a requirement, just nice to
> > >>> have.
> > >>
> > >> The point of implementing the same interface as ivshmem is that users
> > >> don't need to rejig guests or applications in order to switch between
> > >> hypervisors.  A different interface also prevents same-to-same
> > >> benchmarks.
> > >>
> > >> There is little benefit to creating another virtual device interface
> > >> when a perfectly good one already exists.  The question should be: how
> > >> is this shmem device different and better than ivshmem?  If there is
> > >> no justification then implement the ivshmem interface.
> > >
> > > So which interface are we actually taking about? Userspace/kernel in the
> > > guest or hypervisor/guest kernel?
> > 
> > The hardware interface.  Same PCI BAR layout and semantics.
> > 
> > > Either way, while it would be nice to share the interface but it's not a
> > > *requirement* for tools/kvm unless ivshmem is specified in the virtio
> > > spec or the driver is in mainline Linux. We don't intend to require people
> > > to implement non-standard and non-Linux QEMU interfaces. OTOH,
> > > ivshmem would make the PCI ID problem go away.
> > 
> > Introducing yet another non-standard and non-Linux interface doesn't
> > help though.  If there is no significant improvement over ivshmem then
> > it makes sense to let ivshmem gain critical mass and more users
> > instead of fragmenting the space.
> 
> I support doing it ivshmem-compatible, though it doesn't have to be a
> requirement right now (that is, use this patch as a base and build it
> towards ivshmem - which shouldn't be an issue since this patch provides
> the PCI+SHM parts which are required by ivshmem anyway).
> 
> ivshmem is a good, documented, stable interface backed by a lot of
> research and testing behind it. Looking at the spec it's obvious that
> Cam had KVM in mind when designing it and thats exactly what we want to
> have in the KVM tool.
> 
> David, did you have any plans to extend it to become ivshmem-compatible?
> If not, would turning it into such break any code that depends on it
> horribly?
> 
> -- 
> 
> Sasha.
> 


Re: [PATCH] kvm tools: adds a PCI device that exports a host shared segment as a PCI BAR in the guest

2011-08-24 Thread David Evensky
On Thu, Aug 25, 2011 at 09:02:56AM +0300, Pekka Enberg wrote:
> On Thu, Aug 25, 2011 at 1:25 AM, David Evensky  wrote:
> > +       if (*next == '\0')
> > +               p = next;
> > +       else
> > +               p = next + 1;
> > +       /* parse out size */
> > +       base = 10;
> > +       if (strcasestr(p, "0x"))
> > +               base = 16;
> > +       size = strtoll(p, &next, base);
> > +       if (next == p && size == 0) {
> > +               pr_info("shmem: no size specified, using default.");
> > +               size = default_size;
> > +       }
> > +       /* look for [KMGkmg][Bb]*  uses base 2. */
> > +       int skip_B = 0;
> > +       if (strspn(next, "KMGkmg")) {   /* might have a prefix */
> > +               if (*(next + 1) == 'B' || *(next + 1) == 'b')
> > +                       skip_B = 1;
> > +               switch (*next) {
> > +               case 'K':
> > +               case 'k':
> > +                       size = size << KB_SHIFT;
> > +                       break;
> > +               case 'M':
> > +               case 'm':
> > +                       size = size << MB_SHIFT;
> > +                       break;
> > +               case 'G':
> > +               case 'g':
> > +                       size = size << GB_SHIFT;
> > +                       break;
> > +               default:
> > +                       die("shmem: bug in detecting size prefix.");
> > +                       break;
> > +               }
> 
> There's some nice code in perf to parse sizes like this. We could just
> steal that.

That sounds good to me.

> > +inline void fill_mem(void *buf, size_t buf_size, char *fill, size_t fill_len)
> > +{
> > +       size_t i;
> > +
> > +       if (fill_len == 1) {
> > +               memset(buf, fill[0], buf_size);
> > +       } else {
> > +               if (buf_size > fill_len) {
> > +                       for (i = 0; i < buf_size - fill_len; i += fill_len)
> > +                               memcpy(((char *)buf) + i, fill, fill_len);
> > +                       memcpy(buf + i, fill, buf_size - i);
> > +               } else {
> > +                       memcpy(buf, fill, buf_size);
> > +               }
> > +       }
> > +}
> 
> Can we do a memset_pattern4() type of interface instead? I think it's
> mostly pointless to try to support arbitrary-length 'fill'.

Yeah, I can see how the arbitrary fill thing might be too cute. It
certainly isn't necessary.
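
(For what it's worth, a minimal sketch of the memset_pattern4()-style interface suggested above, with a fixed 4-byte pattern instead of an arbitrary-length fill; the name and types are illustrative.)

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Repeat a 4-byte pattern across buf; any tail shorter than 4 bytes gets a partial copy. */
static void fill_mem_pattern4(void *buf, size_t buf_size, uint32_t pattern)
{
	size_t i;

	for (i = 0; i + 4 <= buf_size; i += 4)
		memcpy((char *)buf + i, &pattern, 4);
	if (i < buf_size)
		memcpy((char *)buf + i, &pattern, buf_size - i);
}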


Re: [PATCH] kvm tools: adds a PCI device that exports a host shared segment as a PCI BAR in the guest

2011-08-24 Thread David Evensky

I don't know if there is a PCI card that only provides a region
of memory. I'm not really trying to provide emulation for a known
piece of hardware, so I picked values that weren't being used since
there didn't appear to be an 'unknown'. I'll ask around.

\dae

On Thu, Aug 25, 2011 at 08:41:43AM +0300, Avi Kivity wrote:
> On 08/25/2011 01:25 AM, David Evensky wrote:
> >  #define PCI_DEVICE_ID_VIRTIO_BLN   0x1005
> >  #define PCI_DEVICE_ID_VIRTIO_P9   0x1009
> >  #define PCI_DEVICE_ID_VESA 0x2000
> >+#define PCI_DEVICE_ID_PCI_SHMEM 0x0001
> >
> >  #define PCI_VENDOR_ID_REDHAT_QUMRANET  0x1af4
> >+#define PCI_VENDOR_ID_PCI_SHMEM 0x0001
> >  #define PCI_SUBSYSTEM_VENDOR_ID_REDHAT_QUMRANET   0x1af4
> >
> >
> 
> Please use a real life vendor ID from http://www.pcidatabase.com.
> If you're following an existing spec, you should pick the vendor ID
> matching the device you're emulating.  If not, as seems to be the
> case here, you need your own, or permission from an existing owner
> of a vendor ID.
> 
> -- 
> I have a truly marvellous patch that fixes the bug which this
> signature is too narrow to contain.
> 


Re: [PATCH] kvm tools: adds a PCI device that exports a host shared segment as a PCI BAR in the guest

2011-08-24 Thread David Evensky
On Thu, Aug 25, 2011 at 08:06:34AM +0300, Pekka Enberg wrote:
> On Wed, 2011-08-24 at 21:49 -0700, David Evensky wrote:
> > On Wed, Aug 24, 2011 at 10:27:18PM -0500, Alexander Graf wrote:
> > > 
> > > On 24.08.2011, at 17:25, David Evensky wrote:
> > > 
> > > > 
> > > > 
> > > > This patch adds a PCI device that provides PCI device memory to the
> > > > guest. This memory in the guest exists as a shared memory segment in
> > > > the host. This is similar memory sharing capability of Nahanni
> > > > (ivshmem) available in QEMU. In this case, the shared memory segment
> > > > is exposed as a PCI BAR only.
> > > > 
> > > > A new command line argument is added as:
> > > >--shmem pci:0xc800:16MB:handle=/newmem:create
> > > > 
> > > > which will set the PCI BAR at 0xc800, the shared memory segment
> > > > and the region pointed to by the BAR will be 16MB. On the host side
> > > > the shm_open handle will be '/newmem', and the kvm tool will create
> > > > the shared segment, set its size, and initialize it. If the size,
> > > > handle, or create flag are absent, they will default to 16MB,
> > > > handle=/kvm_shmem, and create will be false. The address family,
> > > > 'pci:' is also optional as it is the only address family currently
> > > > supported. Only a single --shmem is supported at this time.
> > > 
> > > Did you have a look at ivshmem? It does that today, but also gives
> > you an IRQ line so the guests can poke each other. For something as
> > simple as this, I don't see why we'd need two competing
> > implementations.
> > 
> > Isn't ivshmem in QEMU? If so, then I don't think there is any
> > competition. How do you feel that these are competing?
> 
> It's obviously not competing. One thing you might want to consider is
> making the guest interface compatible with ivshmem. Is there any reason
> we shouldn't do that? I don't consider that a requirement, just nice to
> have.

I think it depends on what the goal is. For us, just having a hunk of
memory shared between the host and guests that the guests can ioremap
provides a lot. Having the rest of ivshmem's guest interface I don't
think would impact our use above, but I haven't tested things with
QEMU to verify that.

\dae


> 
>   Pekka
> 


Re: [PATCH] kvm tools: adds a PCI device that exports a host shared segment as a PCI BAR in the guest

2011-08-24 Thread David Evensky
On Wed, Aug 24, 2011 at 10:27:18PM -0500, Alexander Graf wrote:
> 
> On 24.08.2011, at 17:25, David Evensky wrote:
> 
> > 
> > 
> > This patch adds a PCI device that provides PCI device memory to the
> > guest. This memory in the guest exists as a shared memory segment in
> > the host. This is similar memory sharing capability of Nahanni
> > (ivshmem) available in QEMU. In this case, the shared memory segment
> > is exposed as a PCI BAR only.
> > 
> > A new command line argument is added as:
> >--shmem pci:0xc800:16MB:handle=/newmem:create
> > 
> > which will set the PCI BAR at 0xc800, the shared memory segment
> > and the region pointed to by the BAR will be 16MB. On the host side
> > the shm_open handle will be '/newmem', and the kvm tool will create
> > the shared segment, set its size, and initialize it. If the size,
> > handle, or create flag are absent, they will default to 16MB,
> > handle=/kvm_shmem, and create will be false. The address family,
> > 'pci:' is also optional as it is the only address family currently
> > supported. Only a single --shmem is supported at this time.
> 
> Did you have a look at ivshmem? It does that today, but also gives you an IRQ 
> line so the guests can poke each other. For something as simple as this, I 
> don't see why we'd need two competing implementations.

Isn't ivshmem in QEMU? If so, then I don't think there is any
competition. How do you feel that these are competing?

\dae

> 
> 
> Alex
> 
> 


[PATCH] kvm tools: adds a PCI device that exports a host shared segment as a PCI BAR in the guest

2011-08-24 Thread David Evensky


This patch adds a PCI device that provides PCI device memory to the
guest. This memory in the guest exists as a shared memory segment in
the host. This is similar memory sharing capability of Nahanni
(ivshmem) available in QEMU. In this case, the shared memory segment
is exposed as a PCI BAR only.

A new command line argument is added as:
--shmem pci:0xc800:16MB:handle=/newmem:create

which will set the PCI BAR at 0xc800, the shared memory segment
and the region pointed to by the BAR will be 16MB. On the host side
the shm_open handle will be '/newmem', and the kvm tool will create
the shared segment, set its size, and initialize it. If the size,
handle, or create flag are absent, they will default to 16MB,
handle=/kvm_shmem, and create will be false. The address family,
'pci:' is also optional as it is the only address family currently
supported. Only a single --shmem is supported at this time.

Signed-off-by: David Evensky 

diff -uprN -X linux-kvm/Documentation/dontdiff linux-kvm/tools/kvm/builtin-run.c linux-kvm_pci_shmem/tools/kvm/builtin-run.c
--- linux-kvm/tools/kvm/builtin-run.c   2011-08-24 10:21:22.342077674 -0700
+++ linux-kvm_pci_shmem/tools/kvm/builtin-run.c 2011-08-24 14:17:33.190451297 -0700
@@ -28,6 +28,8 @@
 #include "kvm/sdl.h"
 #include "kvm/vnc.h"
 #include "kvm/guest_compat.h"
+#include "shmem-util.h"
+#include "kvm/pci-shmem.h"
 
 #include 
 
@@ -52,6 +54,8 @@
 #define DEFAULT_SCRIPT "none"
 
 #define MB_SHIFT   (20)
+#define KB_SHIFT   (10)
+#define GB_SHIFT   (30)
 #define MIN_RAM_SIZE_MB(64ULL)
 #define MIN_RAM_SIZE_BYTE  (MIN_RAM_SIZE_MB << MB_SHIFT)
 
@@ -151,6 +155,130 @@ static int virtio_9p_rootdir_parser(cons
return 0;
 }
 
+static int shmem_parser(const struct option *opt, const char *arg, int unset)
+{
+   const uint64_t default_size = SHMEM_DEFAULT_SIZE;
+   const uint64_t default_phys_addr = SHMEM_DEFAULT_ADDR;
+   const char *default_handle = SHMEM_DEFAULT_HANDLE;
+   enum { PCI, UNK } addr_type = PCI;
+   uint64_t phys_addr;
+   uint64_t size;
+   char *handle = NULL;
+   int create = 0;
+   const char *p = arg;
+   char *next;
+   int base = 10;
+   int verbose = 0;
+
+   const int skip_pci = strlen("pci:");
+   if (verbose)
+   pr_info("shmem_parser(%p,%s,%d)", opt, arg, unset);
+   /* parse out optional addr family */
+   if (strcasestr(p, "pci:")) {
+   p += skip_pci;
+   addr_type = PCI;
+   } else if (strcasestr(p, "mem:")) {
+   die("I can't add to E820 map yet.\n");
+   }
+   /* parse out physical addr */
+   base = 10;
+   if (strcasestr(p, "0x"))
+   base = 16;
+   phys_addr = strtoll(p, &next, base);
+   if (next == p && phys_addr == 0) {
+   pr_info("shmem: no physical addr specified, using default.");
+   phys_addr = default_phys_addr;
+   }
+   if (*next != ':' && *next != '\0')
+   die("shmem: unexpected chars after phys addr.\n");
+   if (*next == '\0')
+   p = next;
+   else
+   p = next + 1;
+   /* parse out size */
+   base = 10;
+   if (strcasestr(p, "0x"))
+   base = 16;
+   size = strtoll(p, &next, base);
+   if (next == p && size == 0) {
+   pr_info("shmem: no size specified, using default.");
+   size = default_size;
+   }
+   /* look for [KMGkmg][Bb]*  uses base 2. */
+   int skip_B = 0;
+   if (strspn(next, "KMGkmg")) {   /* might have a prefix */
+   if (*(next + 1) == 'B' || *(next + 1) == 'b')
+   skip_B = 1;
+   switch (*next) {
+   case 'K':
+   case 'k':
+   size = size << KB_SHIFT;
+   break;
+   case 'M':
+   case 'm':
+   size = size << MB_SHIFT;
+   break;
+   case 'G':
+   case 'g':
+   size = size << GB_SHIFT;
+   break;
+   default:
+   die("shmem: bug in detecting size prefix.");
+   break;
+   }
+   next += 1 + skip_B;
+   }
+   if (*next != ':' && *next != '\0') {
+   die("shmem: unexpected chars after phys size. <%c><%c>\n",
+   *next, *p);
+   }
+   if (*next == '\0')
+ 

Re: [PATCH resend] kvm tools: Use correct size for VESA memory bar

2011-08-11 Thread David Evensky

Sasha,

I've done a pull to get the patch below and otherwise sync up, and got an
ioremap error. It is a little different from the patch I got from you
previously that worked. The patch that worked changed pci.c as:

-   u8 bar = offset - PCI_BAR_OFFSET(0);
+   u8 bar = (offset - PCI_BAR_OFFSET(0)) / sizeof(u32);

vs the patch below which drops the () around the numerator:

- u8 bar = offset - PCI_BAR_OFFSET(0);
+ u8 bar = offset - PCI_BAR_OFFSET(0) / (sizeof(u32));

I've put back the () in my local copy, and that works for me.
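
(The parentheses matter because '/' binds tighter than '-'. Assuming PCI_BAR_OFFSET(0) is 0x10, the standard config-space offset of BAR0, an access at offset 0x14 gives (0x14 - 0x10) / sizeof(u32) = 1, the correct BAR index, with the parentheses; without them it evaluates as 0x14 - (0x10 / sizeof(u32)) = 0x10, which indexes far past bar_size[].)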

Thanks,
\dae

On Thu, Aug 11, 2011 at 02:56:51PM +0300, Sasha Levin wrote:
> This patch makes BAR 1 16k, instead of BAR0 - which is the PIO bar.
> 
> This fixes wrong output on lspci command and ioremap warnings during boot.
> 
> Signed-off-by: Sasha Levin 
> ---
>  tools/kvm/hw/vesa.c |2 +-
>  tools/kvm/pci.c |2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/kvm/hw/vesa.c b/tools/kvm/hw/vesa.c
> index 2af08df..9caa6c4 100644
> --- a/tools/kvm/hw/vesa.c
> +++ b/tools/kvm/hw/vesa.c
> @@ -39,6 +39,7 @@ static struct pci_device_header vesa_pci_device = {
>   .subsys_vendor_id   = PCI_SUBSYSTEM_VENDOR_ID_REDHAT_QUMRANET,
>   .subsys_id  = PCI_SUBSYSTEM_ID_VESA,
>   .bar[1] = VESA_MEM_ADDR | PCI_BASE_ADDRESS_SPACE_MEMORY,
> + .bar_size[1]= VESA_MEM_SIZE,
>  };
>  
>  static struct framebuffer vesafb;
> @@ -56,7 +57,6 @@ struct framebuffer *vesa__init(struct kvm *kvm)
>   vesa_pci_device.irq_line= line;
>   vesa_base_addr  = ioport__register(IOPORT_EMPTY, &vesa_io_ops, IOPORT_SIZE, NULL);
>   vesa_pci_device.bar[0]  = vesa_base_addr | PCI_BASE_ADDRESS_SPACE_IO;
> - vesa_pci_device.bar_size[0] = VESA_MEM_SIZE;
>   pci__register(&vesa_pci_device, dev);
>  
>   mem = mmap(NULL, VESA_MEM_SIZE, PROT_RW, MAP_ANON_NORESERVE, -1, 0);
> diff --git a/tools/kvm/pci.c b/tools/kvm/pci.c
> index fd19b73..0449aca 100644
> --- a/tools/kvm/pci.c
> +++ b/tools/kvm/pci.c
> @@ -95,7 +95,7 @@ static bool pci_config_data_out(struct ioport *ioport, struct kvm *kvm, u16 port
>   offset = start + (pci_config_address.register_number << 2);
>   if (offset < sizeof(struct pci_device_header)) {
>   void *p = pci_devices[dev_num];
> - u8 bar = offset - PCI_BAR_OFFSET(0);
> + u8 bar = offset - PCI_BAR_OFFSET(0) / (sizeof(u32));
>   u32 sz = PCI_IO_SIZE;
>  
>   if (bar < 6 && pci_devices[dev_num]->bar_size[bar])
> -- 
> 1.7.6
> 


Re: [PATCH] kvm tools: Use correct size for VESA memory BAR

2011-08-10 Thread David Evensky

I don't know if there were any other motivations for this patch, but it,
along with another patch (maybe integrated elsewhere, for the 32-bit vs
8-bit BAR handling), certainly helped me out a lot. These patches fixed
ioremap errors I was seeing (I had a 16MB PCI memory region, but it
appeared to be only 256 bytes in size; the kernel complained bitterly
about that on ioremap). It also fixed the mismatch between expected and
actual output from lspci -vvv.

I'm working on my out-of-tree PCI driver to see if it can become an
in-tree driver. I have more cleanup to do, and I'm seeing how close I can
come to the target coding standards.

\dae

On Wed, Aug 10, 2011 at 09:13:35AM +0300, Pekka Enberg wrote:
> On 8/9/11 7:46 PM, Avi Kivity wrote:
> >On 08/09/2011 06:39 PM, Ingo Molnar wrote:
> >>* Sasha Levin  wrote:
> >>
> >>>  This patch makes BAR 1 16k, instead of BAR0 - which is the PIO bar.
> >>>
> >>
> >>This changelog is missing some key information:
> >>
> >>  - how did you find the bug (by chance via code review or did you see
> >>some actual badness?)
> >>
> >>  - what practical effect (if any) did you see from this patch?
> >>
> >>  - what practical effect (if any) do you expect others to see
> >>from this patch?
> >>
> >>I suspect this patch is only for completeness/correctness but has no
> >>practical effect - but that's a guess.
> >>
> >
> >My guess would be that seabios tried to lay out the BARs and had
> >trouble fitting a 16k pio bar in the small PCI pio region.
> 
> Sasha? IIRC this fixed some issue with David's out-of-tree PCI driver?



coding standards for kvm tools?

2011-08-08 Thread David Evensky


Is there a pointer to the coding standards used for the kvm tools?
Thanks,
\dae



memory zones and the KVM guest kernel

2011-05-23 Thread David Evensky

Hi,

When I boot my guest kernel with KVM, the dmesg output says that:

...
[0.00] Zone PFN ranges:
[0.00]   DMA  0x0010 -> 0x1000
[0.00]   DMA32    0x1000 -> 0x0010
[0.00]   Normal   empty
[0.00] Movable zone start PFN for each node
[0.00] early_node_map[2] active PFN ranges
[0.00] 0: 0x0010 -> 0x009f
[0.00] 0: 0x0100 -> 0x0007fffd
...

Why is the Normal Zone empty? Is it possible to have some of the
guest's memory mapped in the Normal zone?

Is there a good reference that talks about the normal, movable,
etc. memory zones?

Thanks,
\dae
