Re: [Qemu-devel] Re: [PATCH] PCI: Bus number from the bridge, not the device

2010-11-21 Thread Gleb Natapov
On Sat, Nov 20, 2010 at 10:17:09PM +0200, Michael S. Tsirkin wrote:
 On Fri, Nov 19, 2010 at 10:38:42PM +0200, Gleb Natapov wrote:
  On Fri, Nov 19, 2010 at 06:02:58PM +0100, Markus Armbruster wrote:
   Michael S. Tsirkin m...@redhat.com writes:
   
On Tue, Nov 09, 2010 at 11:41:43AM +0900, Isaku Yamahata wrote:
On Mon, Nov 08, 2010 at 06:26:33PM +0200, Michael S. Tsirkin wrote:
 Replace bus number with slot numbers of parent bridges up to the 
 root.
 This works for root bridge in a compatible way because bus number 
 there
 is hard-coded to 0.
 IMO nested bridges are broken anyway, no way to be compatible there.
 
 
 Gleb, Markus, I think the following should be sufficient for PCI.  
 What
 do you think?  Also - do we need to update QMP/monitor to teach them 
 to
 work with these paths?
 
 This is on top of Alex's patch, completely untested.
 
 
 pci: fix device path for devices behind nested bridges
 
 We were using bus number in the device path, which is clearly
 broken as this number is guest-assigned for all devices
 except the root.
 
 Fix by using hierarchical list of slots, walking the path
 from root down to device, instead. Add :00 as bus number
 so that if there are no nested bridges, this is compatible
 with what we have now.

This format, Domain:00:Slot:Slot:Slot.Function, doesn't work
because pci-to-pci bridge is pci function.
So the format should be
Domain:00:Slot.Function:Slot.Function:Slot.Function

thanks,
   
Hmm, interesting. If we do this we aren't backwards compatible
though, so maybe we could try using openfirmware paths, just as well.
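For illustration, the hierarchical Slot.Function path discussed above can be built by walking from the device up through its parent bridges. A minimal sketch with invented structures (this is not qemu's actual PCIDevice API; domain and bus are hard-coded to 0000:00 at the root, as in the patch):

```c
#include <stdio.h>
#include <string.h>

/* Invented stand-in for a PCI device with a chain of parent bridges. */
struct fake_pci_dev {
    int slot, fn;
    struct fake_pci_dev *parent_bridge;   /* NULL when on the root bus */
};

static void build_dev_path(char *buf, size_t len, struct fake_pci_dev *dev)
{
    if (dev->parent_bridge) {
        /* recurse first so the root bridge ends up leftmost */
        build_dev_path(buf, len, dev->parent_bridge);
        snprintf(buf + strlen(buf), len - strlen(buf), ":%02x.%x",
                 dev->slot, dev->fn);
    } else {
        /* hard-coded domain 0000 and bus 00 for root-bus devices */
        snprintf(buf, len, "0000:00:%02x.%x", dev->slot, dev->fn);
    }
}
```

A bridge in slot 1 with a device in slot 3 behind it yields `0000:00:01.0:03.0`, matching the corrected Domain:00:Slot.Function:Slot.Function form.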
   
   Whatever we do, we need to make it work for all (qdevified) devices and
   buses.
   
   It should also be possible to use canonical addressing with device_add and
   friends.  I.e. permit naming a device by (a unique abbreviation of) its
   canonical address in addition to naming it by its user-defined ID.  For
   instance, something like
   
  device_del /pci/@1,1
   
  FWIW openbios allows this kind of abbreviation.
  
   in addition to
   
  device_del ID
   
   Open Firmware is a useful source of inspiration there, but should it
   come into conflict with usability, we should let usability win.
  
  --
  Gleb.
 
 
 I think that the domain (PCI segment group), bus, slot, function way to
 address pci devices is still the most familiar and the easiest to map to
Most familiar to whom? It looks like you identify yourself with most of
qemu users, but if most qemu users were like you then qemu would not have
enough users :) Most users who consider themselves advanced may know
what eth1 or /dev/sdb means. That doesn't mean we should provide a
device_del eth1 or device_add /dev/sdb command, though.

More important is that the domain (encoded as a number, the way you used it)
and the bus number have no meaning from inside qemu. So while I have said many
times that I don't care too much about the exact CLI syntax, it should at least
make sense. It can use an id to specify the PCI bus in the CLI, like this:
device_del pci.0:1.1. Or it can even use a device id too, like this:
device_del pci.0:ide.0. Or it can use HW topology as in an OF device
path. But doing ad-hoc device enumeration inside qemu and then using it
for the CLI is not it.
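As a sketch of the id-based CLI addressing suggested here, a parser for a hypothetical busid:slot.fn form (the function name and exact syntax are invented for illustration; qemu's real device_del addressing is not defined by this thread):

```c
#include <stdio.h>
#include <string.h>

/* Split "pci.0:1.1" into a bus id ("pci.0"), slot (1) and function (1).
 * Returns 0 on success, -1 on malformed input. */
static int parse_dev_addr(const char *addr, char *bus_id, size_t bus_len,
                          unsigned *slot, unsigned *fn)
{
    const char *colon = strrchr(addr, ':');
    if (!colon || (size_t)(colon - addr) + 1 > bus_len) {
        return -1;                     /* no separator, or bus id too long */
    }
    memcpy(bus_id, addr, colon - addr);
    bus_id[colon - addr] = '\0';
    if (sscanf(colon + 1, "%u.%u", slot, fn) != 2) {
        return -1;                     /* expected slot.function */
    }
    return 0;
}
```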

 functionality in the guests.  Qemu is buggy at the moment in that it
 uses the bus addresses assigned by the guest and not the ones in ACPI,
 but that can be fixed.
It looks like you have confused ACPI _SEG with something it isn't. The ACPI
spec says that a PCI segment group is a purely software concept managed by
system firmware. In fact one segment may include multiple PCI host bridges.
_SEG is not what the OSPM uses to tie a HW resource to an ACPI resource; it
uses _CRS (Current Resource Settings) for that, just like OF. No surprise there.

 
 That should be enough for e.g. device_del. We do have the need to
 describe the topology when we interface with firmware, e.g. to describe
 the ACPI tables themselves to qemu (this is what Gleb's patches deal
 with), but that's probably the only case.
 
Describing the HW topology is the only way to unambiguously describe a device
to something or someone outside qemu and to have persistent device naming
across different HW configurations.

--
Gleb.



Re: [Qemu-devel] Re: [PATCH] PCI: Bus number from the bridge, not the device

2010-11-21 Thread Michael S. Tsirkin
On Sun, Nov 21, 2010 at 10:32:11AM +0200, Gleb Natapov wrote:
 On Sat, Nov 20, 2010 at 10:17:09PM +0200, Michael S. Tsirkin wrote:
  On Fri, Nov 19, 2010 at 10:38:42PM +0200, Gleb Natapov wrote:
   On Fri, Nov 19, 2010 at 06:02:58PM +0100, Markus Armbruster wrote:
Michael S. Tsirkin m...@redhat.com writes:

 On Tue, Nov 09, 2010 at 11:41:43AM +0900, Isaku Yamahata wrote:
 On Mon, Nov 08, 2010 at 06:26:33PM +0200, Michael S. Tsirkin wrote:
  Replace bus number with slot numbers of parent bridges up to the 
  root.
  This works for root bridge in a compatible way because bus number 
  there
  is hard-coded to 0.
  IMO nested bridges are broken anyway, no way to be compatible 
  there.
  
  
  Gleb, Markus, I think the following should be sufficient for PCI.  
  What
  do you think?  Also - do we need to update QMP/monitor to teach 
  them to
  work with these paths?
  
  This is on top of Alex's patch, completely untested.
  
  
  pci: fix device path for devices behind nested bridges
  
  We were using bus number in the device path, which is clearly
  broken as this number is guest-assigned for all devices
  except the root.
  
  Fix by using hierarchical list of slots, walking the path
  from root down to device, instead. Add :00 as bus number
  so that if there are no nested bridges, this is compatible
  with what we have now.
 
 This format, Domain:00:Slot:Slot:Slot.Function, doesn't work
 because pci-to-pci bridge is pci function.
 So the format should be
 Domain:00:Slot.Function:Slot.Function:Slot.Function
 
 thanks,

 Hmm, interesting. If we do this we aren't backwards compatible
 though, so maybe we could try using openfirmware paths, just as well.

Whatever we do, we need to make it work for all (qdevified) devices and
buses.

It should also be possible to use canonical addressing with device_add 
friends.  I.e. permit naming a device by (a unique abbreviation of) its
canonical address in addition to naming it by its user-defined ID.  For
instance, something like

   device_del /pci/@1,1

   FWIW openbios allows this kind of abbreviation.
   
in addition to

   device_del ID

Open Firmware is a useful source of inspiration there, but should it
come into conflict with usability, we should let usability win.
   
   --
 Gleb.
  
  
  I think that the domain (PCI segment group), bus, slot, function way to
  address pci devices is still the most familiar and the easiest to map to
 Most familiar to whom?

The guests.
For CLI, we need an easy way to map a device in guest to the
device in qemu and back.

 It looks like you identify yourself with most of
 qemu users, but if most qemu users are like you then qemu has not enough
 users :) Most users that consider themselves to be advanced may know
 what eth1 or /dev/sdb means. This doesn't mean we should provide
 device_del eth1 or device_add /dev/sdb command though. 
 
 More important is that the domain (encoded as a number, the way you used it)
 and the bus number have no meaning from inside qemu.
 So while I have said many
 times that I don't care too much about the exact CLI syntax, it should at
 least make sense. It can use an id to specify the PCI bus in the CLI, like
 this: device_del pci.0:1.1. Or it can even use a device id too, like this:
 device_del pci.0:ide.0. Or it can use HW topology as in an OF device
 path. But doing ad-hoc device enumeration inside qemu and then using it
 for the CLI is not it.
 
  functionality in the guests.  Qemu is buggy in the moment in that is
  uses the bus addresses assigned by guest and not the ones in ACPI,
  but that can be fixed.
 It looks like you confused ACPI _SEG for something it isn't.

Maybe I did. This is what linux does:

struct pci_bus * __devinit pci_acpi_scan_root(struct acpi_pci_root *root)
{
        struct acpi_device *device = root->device;
        int domain = root->segment;
        int busnum = root->secondary.start;

And I think this is consistent with the spec.

 ACPI spec
 says that PCI segment group is purely software concept managed by system
 firmware. In fact one segment may include multiple PCI host bridges.

It can't, I think:
Multiple Host Bridges

A platform may have multiple PCI Express or PCI-X host bridges. The base
address for the MMCONFIG space for these host bridges may need to be
allocated at different locations. In such cases, using the MCFG table and
_CBA method as defined in this section means that each of these host
bridges must be in its own PCI Segment Group.


 _SEG
 is not what the OSPM uses to tie a HW resource to an ACPI resource. It uses
 _CRS (Current Resource Settings) for that, just like OF. No surprise there.

OSPM uses both I think.

All I see linux do with CRS is get the bus number range.
And the spec says, 

Re: [Qemu-devel] Re: [PATCH] PCI: Bus number from the bridge, not the device

2010-11-21 Thread Gleb Natapov
On Sun, Nov 21, 2010 at 11:50:18AM +0200, Michael S. Tsirkin wrote:
 On Sun, Nov 21, 2010 at 10:32:11AM +0200, Gleb Natapov wrote:
  On Sat, Nov 20, 2010 at 10:17:09PM +0200, Michael S. Tsirkin wrote:
   On Fri, Nov 19, 2010 at 10:38:42PM +0200, Gleb Natapov wrote:
On Fri, Nov 19, 2010 at 06:02:58PM +0100, Markus Armbruster wrote:
 Michael S. Tsirkin m...@redhat.com writes:
 
  On Tue, Nov 09, 2010 at 11:41:43AM +0900, Isaku Yamahata wrote:
  On Mon, Nov 08, 2010 at 06:26:33PM +0200, Michael S. Tsirkin wrote:
   Replace bus number with slot numbers of parent bridges up to the 
   root.
   This works for root bridge in a compatible way because bus 
   number there
   is hard-coded to 0.
   IMO nested bridges are broken anyway, no way to be compatible 
   there.
   
   
   Gleb, Markus, I think the following should be sufficient for 
   PCI.  What
   do you think?  Also - do we need to update QMP/monitor to teach 
   them to
   work with these paths?
   
   This is on top of Alex's patch, completely untested.
   
   
   pci: fix device path for devices behind nested bridges
   
   We were using bus number in the device path, which is clearly
   broken as this number is guest-assigned for all devices
   except the root.
   
   Fix by using hierarchical list of slots, walking the path
   from root down to device, instead. Add :00 as bus number
   so that if there are no nested bridges, this is compatible
   with what we have now.
  
  This format, Domain:00:Slot:Slot:Slot.Function, doesn't work
  because pci-to-pci bridge is pci function.
  So the format should be
  Domain:00:Slot.Function:Slot.Function:Slot.Function
  
  thanks,
 
  Hmm, interesting. If we do this we aren't backwards compatible
  though, so maybe we could try using openfirmware paths, just as 
  well.
 
 Whatever we do, we need to make it work for all (qdevified) devices 
 and
 buses.
 
 It should also be possible to use canonical addressing with 
 device_add 
 friends.  I.e. permit naming a device by (a unique abbreviation of) 
 its
 canonical address in addition to naming it by its user-defined ID.  
 For
 instance, something like
 
device_del /pci/@1,1
 
FWIW openbios allows this kind of abbreviation.

 in addition to
 
device_del ID
 
 Open Firmware is a useful source of inspiration there, but should it
 come into conflict with usability, we should let usability win.

--
Gleb.
   
   
   I think that the domain (PCI segment group), bus, slot, function way to
   address pci devices is still the most familiar and the easiest to map to
  Most familiar to whom?
 
 The guests.
Which one? There are many guests. Your favorite?

 For CLI, we need an easy way to map a device in guest to the
 device in qemu and back.
Then use eth0, /dev/sdb, or even C:. Your way is no less broken, since what
you are saying is: let's use the name that the guest assigned to a device.

 
  It looks like you identify yourself with most of
  qemu users, but if most qemu users are like you then qemu has not enough
  users :) Most users that consider themselves to be advanced may know
  what eth1 or /dev/sdb means. This doesn't mean we should provide
  device_del eth1 or device_add /dev/sdb command though. 
  
  More important is that domain (encoded as number like you used to)
  and bus number has no meaning from inside qemu.
  So while I said many
  times I don't care about exact CLI syntax to much it should make sense
  at least. It can use id to specify PCI bus in CLI like this:
  device_del pci.0:1.1. Or it can even use device id too like this:
  device_del pci.0:ide.0. Or it can use HW topology like in FO device
  path. But doing ah-hoc device enumeration inside qemu and then using it
  for CLI is not it.
  
   functionality in the guests.  Qemu is buggy in the moment in that is
   uses the bus addresses assigned by guest and not the ones in ACPI,
   but that can be fixed.
  It looks like you confused ACPI _SEG for something it isn't.
 
 Maybe I did. This is what linux does:
 
 struct pci_bus * __devinit pci_acpi_scan_root(struct acpi_pci_root *root)
 {
         struct acpi_device *device = root->device;
         int domain = root->segment;
         int busnum = root->secondary.start;
 
 And I think this is consistent with the spec.
 
It means that one domain may include several host bridges. At that level
a domain is defined as something in which each device has a unique name,
thus no two buses in one segment/domain can have the same bus
number. This is what the PCI spec tells you.

And this further shows that using the domain as defined by the guest is a
very bad idea.

  ACPI spec
  says that PCI segment group is purely software concept managed by system
  firmware. In 

[Qemu-devel] Outdated vgabios.git on git.qemu.org

2010-11-21 Thread Avi Kivity
qemu.git points to vgabios.git 19ea12c230ded95928ecaef0db47a82231c2e485, 
which isn't available on git.qemu.org/vgabios.git.


--
error compiling committee.c: too many arguments to function




Re: [Qemu-devel] Re: [PATCH] PCI: Bus number from the bridge, not the device

2010-11-21 Thread Michael S. Tsirkin
On Sun, Nov 21, 2010 at 12:19:03PM +0200, Gleb Natapov wrote:
 On Sun, Nov 21, 2010 at 11:50:18AM +0200, Michael S. Tsirkin wrote:
  On Sun, Nov 21, 2010 at 10:32:11AM +0200, Gleb Natapov wrote:
   On Sat, Nov 20, 2010 at 10:17:09PM +0200, Michael S. Tsirkin wrote:
On Fri, Nov 19, 2010 at 10:38:42PM +0200, Gleb Natapov wrote:
 On Fri, Nov 19, 2010 at 06:02:58PM +0100, Markus Armbruster wrote:
  Michael S. Tsirkin m...@redhat.com writes:
  
   On Tue, Nov 09, 2010 at 11:41:43AM +0900, Isaku Yamahata wrote:
   On Mon, Nov 08, 2010 at 06:26:33PM +0200, Michael S. Tsirkin 
   wrote:
Replace bus number with slot numbers of parent bridges up to 
the root.
This works for root bridge in a compatible way because bus 
number there
is hard-coded to 0.
IMO nested bridges are broken anyway, no way to be compatible 
there.


Gleb, Markus, I think the following should be sufficient for 
PCI.  What
do you think?  Also - do we need to update QMP/monitor to 
teach them to
work with these paths?

This is on top of Alex's patch, completely untested.


pci: fix device path for devices behind nested bridges

We were using bus number in the device path, which is clearly
broken as this number is guest-assigned for all devices
except the root.

Fix by using hierarchical list of slots, walking the path
from root down to device, instead. Add :00 as bus number
so that if there are no nested bridges, this is compatible
with what we have now.
   
   This format, Domain:00:Slot:Slot:Slot.Function, doesn't work
   because pci-to-pci bridge is pci function.
   So the format should be
   Domain:00:Slot.Function:Slot.Function:Slot.Function
   
   thanks,
  
   Hmm, interesting. If we do this we aren't backwards compatible
   though, so maybe we could try using openfirmware paths, just as 
   well.
  
  Whatever we do, we need to make it work for all (qdevified) devices 
  and
  buses.
  
  It should also be possible to use canonical addressing with 
  device_add 
  friends.  I.e. permit naming a device by (a unique abbreviation of) 
  its
  canonical address in addition to naming it by its user-defined ID.  
  For
  instance, something like
  
 device_del /pci/@1,1
  
 FWIW openbios allows this kind of abbreviation.
 
  in addition to
  
 device_del ID
  
  Open Firmware is a useful source of inspiration there, but should it
  come into conflict with usability, we should let usability win.
 
 --
   Gleb.


I think that the domain (PCI segment group), bus, slot, function way to
address pci devices is still the most familiar and the easiest to map to
   Most familiar to whom?
  
  The guests.
 Which one? There are many guests. Your favorite?
 
  For CLI, we need an easy way to map a device in guest to the
  device in qemu and back.
 Then use eth0, /dev/sdb, or even C:. Your way is not less broken since what
 you are saying is lets use name that guest assigned to a device. 

No, I am saying let's use the name that our ACPI tables assigned.

  
   It looks like you identify yourself with most of
   qemu users, but if most qemu users are like you then qemu has not enough
   users :) Most users that consider themselves to be advanced may know
   what eth1 or /dev/sdb means. This doesn't mean we should provide
   device_del eth1 or device_add /dev/sdb command though. 
   
   More important is that domain (encoded as number like you used to)
   and bus number has no meaning from inside qemu.
   So while I said many
   times I don't care about exact CLI syntax to much it should make sense
   at least. It can use id to specify PCI bus in CLI like this:
   device_del pci.0:1.1. Or it can even use device id too like this:
   device_del pci.0:ide.0. Or it can use HW topology like in FO device
   path. But doing ah-hoc device enumeration inside qemu and then using it
   for CLI is not it.
   
functionality in the guests.  Qemu is buggy in the moment in that is
uses the bus addresses assigned by guest and not the ones in ACPI,
but that can be fixed.
   It looks like you confused ACPI _SEG for something it isn't.
  
  Maybe I did. This is what linux does:
  
  struct pci_bus * __devinit pci_acpi_scan_root(struct acpi_pci_root *root)
  {
          struct acpi_device *device = root->device;
          int domain = root->segment;
          int busnum = root->secondary.start;
  
  And I think this is consistent with the spec.
  
 It means that one domain may include several host bridges.
 At that level
 domain is defined as something that have unique name for each device
 inside it thus no two buses in one 

Re: [Qemu-devel] [PATCH 1/1] NBD isn't used by qemu-img, so don't link qemu-img against NBD objects

2010-11-21 Thread Jes Sorensen
On 11/20/10 19:31, Stefan Hajnoczi wrote:
 On Sat, Nov 20, 2010 at 6:04 PM, Andreas Färber andreas.faer...@web.de 
 wrote:
 http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man3/daemon.3.html
 
 Deprecated in favor of using launchd.
 
 Removing qemu-nbd from the build because there is a warning isn't a good
 strategy.  You may not use qemu-io either, but it is always built.
 That's important because otherwise it could bitrot, no one would
 notice, and one day qemu-tools wouldn't work on Mac OS X at all
 anymore.  Instead we should fix the code that causes the warning.
 
 Is it cheating much to daemonize manually in qemu-nbd? ;)

Maybe it would be feasible to write an os_daemonize() implementation
based on launchd on OS X?
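Short of launchd integration, "daemonizing manually" as Stefan suggests avoids the deprecated daemon(3) entirely. A classic fork/setsid sketch (illustrative only; this is not qemu's actual os_daemonize(), and a real launchd-based version would look quite different):

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

/* Detach from the terminal: fork, let the parent exit, start a new
 * session, and point stdio at /dev/null.  Returns 0 in the daemonized
 * child, -1 on error. */
static int my_daemonize(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        return -1;
    }
    if (pid > 0) {
        _exit(0);               /* parent exits, child carries on */
    }
    if (setsid() < 0) {         /* become session leader, lose the tty */
        return -1;
    }
    if (chdir("/") < 0) {
        return -1;
    }
    int fd = open("/dev/null", O_RDWR);
    if (fd >= 0) {
        dup2(fd, STDIN_FILENO);
        dup2(fd, STDOUT_FILENO);
        dup2(fd, STDERR_FILENO);
        if (fd > STDERR_FILENO) {
            close(fd);
        }
    }
    return 0;
}
```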




Re: [Qemu-devel] [PATCH 1/1] NBD isn't used by qemu-img, so don't link qemu-img against NBD objects

2010-11-21 Thread Jes Sorensen
On 11/20/10 18:22, Andreas Färber wrote:
 Am 19.11.2010 um 17:30 schrieb jes.soren...@redhat.com:
 
 From: Jes Sorensen jes.soren...@redhat.com

 Signed-off-by: Jes Sorensen jes.soren...@redhat.com
 ---
 Makefile  |2 +-
 Makefile.objs |   12 ++--
 2 files changed, 11 insertions(+), 3 deletions(-)
 
 Tested-by: Andreas Färber andreas.faer...@web.de
 
 Looks good to me and a clean build works okay.
 
 Any plans for a way to disable NBD build completely? There are warnings
 about use of daemon() on Mac OS X and possibly Solaris, and there's
 little point in building qemu-nbd if one does not use it.

I think it would be worth adding as an option, but letting it
default to 'on' to make sure it gets at least build-tested unless a user
explicitly disables it.

Cheers,
Jes



Re: [Qemu-devel] Re: [PATCH] PCI: Bus number from the bridge, not the device

2010-11-21 Thread Gleb Natapov
On Sun, Nov 21, 2010 at 01:53:26PM +0200, Michael S. Tsirkin wrote:
   The guests.
  Which one? There are many guests. Your favorite?
  
   For CLI, we need an easy way to map a device in guest to the
   device in qemu and back.
  Then use eth0, /dev/sdb, or even C:. Your way is not less broken since what
  you are saying is lets use name that guest assigned to a device. 
 
 No I am saying let's use the name that our ACPI tables assigned.
 
ACPI does not assign any name. At best, ACPI tables describe the resources
used by a device. And not all guests qemu supports have support for ACPI. Qemu
even supports machine types that do not support ACPI.
 
   
   
It looks like you identify yourself with most of
qemu users, but if most qemu users are like you then qemu has not enough
users :) Most users that consider themselves to be advanced may know
what eth1 or /dev/sdb means. This doesn't mean we should provide
device_del eth1 or device_add /dev/sdb command though. 

More important is that domain (encoded as number like you used to)
and bus number has no meaning from inside qemu.
So while I said many
times I don't care about exact CLI syntax to much it should make sense
at least. It can use id to specify PCI bus in CLI like this:
device_del pci.0:1.1. Or it can even use device id too like this:
device_del pci.0:ide.0. Or it can use HW topology like in FO device
path. But doing ah-hoc device enumeration inside qemu and then using it
for CLI is not it.

 functionality in the guests.  Qemu is buggy in the moment in that is
 uses the bus addresses assigned by guest and not the ones in ACPI,
 but that can be fixed.
It looks like you confused ACPI _SEG for something it isn't.
   
   Maybe I did. This is what linux does:
   
   struct pci_bus * __devinit pci_acpi_scan_root(struct acpi_pci_root *root)
   {
           struct acpi_device *device = root->device;
           int domain = root->segment;
           int busnum = root->secondary.start;
   
   And I think this is consistent with the spec.
   
  It means that one domain may include several host bridges.
  At that level
  domain is defined as something that have unique name for each device
  inside it thus no two buses in one segment/domain can have same bus
  number. This is what PCI spec tells you. 
 
 And that really is enough for CLI because all we need is locate the
 specific slot in a unique way.
 
At the qemu level we do not have bus numbers. They are assigned by the guest.
So inside a guest, domain:bus:slot.func points you to a device, but
qemu does not enumerate buses.

  And this further shows that using domain as defined by guest is very
  bad idea. 
 
 As defined by ACPI, really.
 
ACPI is part of the guest software and may not even be present in the
guest. How is it relevant?

ACPI spec
says that PCI segment group is purely software concept managed by system
firmware. In fact one segment may include multiple PCI host bridges.
   
   It can't I think:
  Read _BBN definition:
   The _BBN object is located under a PCI host bridge and must be unique for
   every host bridge within a segment since it is the PCI bus number.
  
  Clearly above speaks about multiple host bridge within a segment.
 
 Yes, it looks like the firmware spec allows that.
It even has an explicit example that shows it.

 
 Multiple Host Bridges
   
 A platform may have multiple PCI Express or PCI-X host bridges. The base
 address for the
 MMCONFIG space for these host bridges may need to be allocated at
 different locations. In such
 cases, using MCFG table and _CBA method as defined in this section means
 that each of these host
 bridges must be in its own PCI Segment Group.
   
  This is not from ACPI spec,
 
 PCI Firmware Specification 3.0
 
  but without going too deep into it, the above
  paragraph talks about a particular case in which each host bridge must
  be in its own PCI Segment Group, which is definite proof that in other
  cases multiple host bridges can be in one segment group.
 
 I stand corrected. I think you are right. But note that if they are,
 they must have distinct bus numbers assigned by ACPI.
ACPI does not assign any numbers. The BIOS enumerates buses and assigns
numbers. ACPI, in the base case, describes to the OSPM what the BIOS did. Qemu
sits one layer below all this and does not enumerate PCI buses. Even if we made
it do so, there is no way to guarantee that the guest will enumerate them
in the same order, since there is more than one way to do the enumeration. I
have repeated this to you numerous times already.

 
   
_SEG
is not what OSPM uses to tie HW resource to ACPI resource. It used _CRS
(Current Resource Settings) for that just like OF. No surprise there.
   
   OSPM uses both I think.
   
   All I see linux do with CRS is get the bus number range.
  So lets assume that HW has two PCI host bridges and ACPI has:
  Device(PCI0) {
  Name 

Re: [Qemu-devel] [PATCH 10/11] config: Add header file for device config options

2010-11-21 Thread Blue Swirl
On Sun, Nov 21, 2010 at 12:45 PM, Alexander Graf ag...@suse.de wrote:

 On 21.11.2010, at 13:37, Blue Swirl wrote:

 On Fri, Nov 19, 2010 at 2:56 AM, Alexander Graf ag...@suse.de wrote:
 So far we have C preprocessor defines for target and host config
 options, but we're lacking any information on which devices are
 available.

 We do need that information at times though, for example in the
 ahci patch where we need to call a legacy init function depending
 on whether we have support compiled in or not.

 That does not seem right. Devices should not care about what other
 devices may exist. Perhaps a stub-style approach would be better.

 Well, for the -drive parameter we need to know what devices we can create and 
 I'd like to keep that code as close as possible to the actual device code.

 So the stub alternative would be to create a stub .c file for each device 
 that could get created during -drive. I'm not sure that is a good idea :).

 Another alternative would be to move the instantiation code to somewhere 
 generic. But that sounds rather ugly to me.

 Also, devices really shouldn't care about other devices' availability. 
 Machine descriptions should care, and that's what this patch is there for :).

Then only machine descriptions should include config-devices.h.
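The point can be illustrated with a toy version of that split: only machine-description code keys off the generated CONFIG_* device options, so device models never inspect each other's availability. CONFIG_AHCI is defined inline here as a stand-in for what a generated config-devices.h would provide, and the function name is invented:

```c
/* Stand-in for the generated config-devices.h; in qemu this would come
 * from the build system, per machine. */
#define CONFIG_AHCI 1

/* Machine-description code chooses a storage path based on what was
 * compiled in; the device models themselves never check this. */
static const char *machine_init_storage(void)
{
#ifdef CONFIG_AHCI
    return "ahci";          /* AHCI support compiled in: use it */
#else
    return "legacy-ide";    /* otherwise fall back to the legacy path */
#endif
}
```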



[Qemu-devel] Re: Outdated vgabios.git on git.qemu.org

2010-11-21 Thread Anthony Liguori

On 11/21/2010 04:23 AM, Avi Kivity wrote:
qemu.git points to vgabios.git 
19ea12c230ded95928ecaef0db47a82231c2e485, which isn't available on 
git.qemu.org/vgabios.git.


As I mentioned in another mail, I had to switch the repo from mirror to 
push mode.  It's all now fixed up.


Regards,

Anthony Liguori





Re: [Qemu-devel] Re: [PATCH] PCI: Bus number from the bridge, not the device

2010-11-21 Thread Michael S. Tsirkin
On Sun, Nov 21, 2010 at 02:50:14PM +0200, Gleb Natapov wrote:
 On Sun, Nov 21, 2010 at 01:53:26PM +0200, Michael S. Tsirkin wrote:
The guests.
   Which one? There are many guests. Your favorite?
   
For CLI, we need an easy way to map a device in guest to the
device in qemu and back.
   Then use eth0, /dev/sdb, or even C:. Your way is not less broken since 
   what
   you are saying is lets use name that guest assigned to a device. 
  
  No I am saying let's use the name that our ACPI tables assigned.
  
 ACPI does not assign any name. In a best case ACPI tables describe resources
 used by a device.

Not only that: bus number and segment aren't resources as such.
They describe addressing.

 And not all guests qemu supports has support for ACPI. Qemu
 even support machines types that do not support ACPI.

So? Different machines - different names.



 It looks like you identify yourself with most of
 qemu users, but if most qemu users are like you then qemu has not 
 enough
 users :) Most users that consider themselves to be advanced may know
 what eth1 or /dev/sdb means. This doesn't mean we should provide
 device_del eth1 or device_add /dev/sdb command though. 
 
 More important is that domain (encoded as number like you used to)
 and bus number has no meaning from inside qemu.
 So while I said many
 times I don't care about exact CLI syntax to much it should make sense
 at least. It can use id to specify PCI bus in CLI like this:
 device_del pci.0:1.1. Or it can even use device id too like this:
 device_del pci.0:ide.0. Or it can use HW topology like in FO device
 path. But doing ah-hoc device enumeration inside qemu and then using 
 it
 for CLI is not it.
 
  functionality in the guests.  Qemu is buggy in the moment in that is
  uses the bus addresses assigned by guest and not the ones in ACPI,
  but that can be fixed.
 It looks like you confused ACPI _SEG for something it isn't.

Maybe I did. This is what linux does:

struct pci_bus * __devinit pci_acpi_scan_root(struct acpi_pci_root *root)
{
        struct acpi_device *device = root->device;
        int domain = root->segment;
        int busnum = root->secondary.start;

And I think this is consistent with the spec.

   It means that one domain may include several host bridges.
   At that level
   domain is defined as something that have unique name for each device
   inside it thus no two buses in one segment/domain can have same bus
   number. This is what PCI spec tells you. 
  
  And that really is enough for CLI because all we need is locate the
  specific slot in a unique way.
  
 At the qemu level we do not have bus numbers. They are assigned by the guest.
 So inside a guest, domain:bus:slot.func points you to a device, but
 qemu does not enumerate buses.
 
   And this further shows that using domain as defined by guest is very
   bad idea. 
  
  As defined by ACPI, really.
  
 ACPI is part of the guest software and may not even be present in the
 guest. How is it relevant?

It's relevant because this is what guests use. To access the root
device with cf8/cfc you need to know the bus number assigned to it
by firmware. How that was assigned is of interest to BIOS/ACPI but not
really interesting to the user or, I suspect, the guest OS.
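For context, the classic 0xCF8/0xCFC mechanism referred to here works by writing an address word to port 0xCF8 that encodes bus, slot (device), function and register, then reading or writing data through 0xCFC, which is why the firmware-assigned bus number matters to the guest. A sketch of the address arithmetic only (no port I/O is performed):

```c
#include <stdint.h>

/* Build the 32-bit value written to port 0xCF8 for a config-space
 * access to bus/slot.fn at register offset reg. */
static uint32_t pci_cf8_addr(uint8_t bus, uint8_t slot, uint8_t fn,
                             uint8_t reg)
{
    return 0x80000000u                       /* enable bit */
         | ((uint32_t)bus << 16)             /* bus number, 8 bits */
         | ((uint32_t)(slot & 0x1f) << 11)   /* device/slot, 5 bits */
         | ((uint32_t)(fn & 0x7) << 8)       /* function, 3 bits */
         | (uint32_t)(reg & 0xfc);           /* dword-aligned offset */
}
```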

 ACPI spec
 says that PCI segment group is purely software concept managed by 
 system
 firmware. In fact one segment may include multiple PCI host bridges.

It can't I think:
   Read _BBN definition:
The _BBN object is located under a PCI host bridge and must be unique for
every host bridge within a segment since it is the PCI bus number.
   
   Clearly above speaks about multiple host bridge within a segment.
  
  Yes, it looks like the firmware spec allows that.
 It even has an explicit example that shows it.
 
  
Multiple Host Bridges

A platform may have multiple PCI Express or PCI-X host bridges. 
The base
address for the
MMCONFIG space for these host bridges may need to be allocated 
at
different locations. In such
cases, using MCFG table and _CBA method as defined in this 
section means
that each of these host
bridges must be in its own PCI Segment Group.

   This is not from ACPI spec,
  
  PCI Firmware Specification 3.0
  
   but without going too deep into it, the above
   paragraph talks about a particular case in which each host bridge must
   be in its own PCI Segment Group, which is definite proof that in other
   cases multiple host bridges can be in one segment group.
  
  I stand corrected. I think you are right. But note that if they are,
  they must have distinct bus numbers assigned by ACPI.
 ACPI does not assign any numbers.

For all PCI root devices the firmware must supply a _BBN number. That is the
bus number, isn't it?

Re: [Qemu-devel] [PATCH] configure: Add compiler option -Wmissing-format-attribute

2010-11-21 Thread Anthony Liguori

On 11/15/2010 02:22 PM, Stefan Weil wrote:

With the previous patches, hopefully all functions with
printf like arguments use gcc's format checking.

This was tested with default build configuration on linux
and windows hosts (including some cross compilations),
so chances are good that there remain few (if any) functions
without format checking.

Cc: Blue Swirl <blauwir...@gmail.com>
Signed-off-by: Stefan Weil <w...@mail.berlios.de>
   


This breaks the build for me.  disas.c doesn't build.

Regards,

Anthony Liguori


---
  HACKING   |3 ---
  configure |1 +
  2 files changed, 1 insertions(+), 3 deletions(-)

diff --git a/HACKING b/HACKING
index 6ba9d7e..3af53fd 100644
--- a/HACKING
+++ b/HACKING
@@ -120,6 +120,3 @@ gcc's printf attribute directive in the prototype.
  This makes it so gcc's -Wformat and -Wformat-security options can do
  their jobs and cross-check format strings with the number and types
  of arguments.
-
-Currently many functions in QEMU are not following this rule but
-patches to add the attribute would be very much appreciated.
diff --git a/configure b/configure
index 7025d2b..d4c983a 100755
--- a/configure
+++ b/configure
@@ -140,6 +140,7 @@ windres="${cross_prefix}${windres}"
 QEMU_CFLAGS="-fno-strict-aliasing $QEMU_CFLAGS"
 CFLAGS="-g $CFLAGS"
 QEMU_CFLAGS="-Wall -Wundef -Wendif-labels -Wwrite-strings -Wmissing-prototypes $QEMU_CFLAGS"
+QEMU_CFLAGS="-Wmissing-format-attribute $QEMU_CFLAGS"
 QEMU_CFLAGS="-Wstrict-prototypes -Wredundant-decls $QEMU_CFLAGS"
 QEMU_CFLAGS="-D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE $QEMU_CFLAGS"
 QEMU_CFLAGS="-D_FORTIFY_SOURCE=2 $QEMU_CFLAGS"
   





Re: [Qemu-devel] [PATCH] vgabios update: handle compatibility with older qemu versions

2010-11-21 Thread Anthony Liguori

On 11/17/2010 05:06 AM, Gerd Hoffmann wrote:

As pointed out by avi the vgabios update is guest-visible and thus has
migration implications.

One change is that the vga has a valid pci rom bar now.  We already have
a pci bus property to enable/disable the rom bar and we'll load the bios
via fw_cfg as fallback for the no-rom-bar case.  So we just have to add
compat properties to handle this case.

A second change is that the magic bochs lfb @ 0xe0000000 is gone.  When
live-migrating a guest from an older qemu version it might be using the
lfb though, so we have to keep it for the old machine types.  The patch
enables the bochs lfb in case we don't have the pci rom bar enabled
(i.e. we are in 0.13+older compat mode).

This patch depends on these patches which add (and use) the pc-0.13
machine type:
   http://patchwork.ozlabs.org/patch/70797/
   http://patchwork.ozlabs.org/patch/70798/

Signed-off-by: Gerd Hoffmann <kra...@redhat.com>
   


Applied.  Thanks.

Regards,

Anthony Liguori


Cc: a...@redhat.com
---
  hw/pc_piix.c|   16 
  hw/vga-pci.c|5 +
  hw/vmware_vga.c |5 +
  3 files changed, 26 insertions(+), 0 deletions(-)

diff --git a/hw/pc_piix.c b/hw/pc_piix.c
index e9752db..a85d58e 100644
--- a/hw/pc_piix.c
+++ b/hw/pc_piix.c
@@ -231,6 +231,14 @@ static QEMUMachine pc_machine_v0_13 = {
             .driver   = "virtio-9p-pci",
             .property = "vectors",
             .value    = stringify(0),
+        },{
+            .driver   = "VGA",
+            .property = "rombar",
+            .value    = stringify(0),
+        },{
+            .driver   = "vmware-svga",
+            .property = "rombar",
+            .value    = stringify(0),
         },
         { /* end of list */ }
     },
@@ -250,6 +258,14 @@ static QEMUMachine pc_machine_v0_12 = {
             .driver   = "virtio-serial-pci",
             .property = "vectors",
             .value    = stringify(0),
+        },{
+            .driver   = "VGA",
+            .property = "rombar",
+            .value    = stringify(0),
+        },{
+            .driver   = "vmware-svga",
+            .property = "rombar",
+            .value    = stringify(0),
         },
         { /* end of list */ }
     }
diff --git a/hw/vga-pci.c b/hw/vga-pci.c
index 28b174b..4931eee 100644
--- a/hw/vga-pci.c
+++ b/hw/vga-pci.c
@@ -96,6 +96,11 @@ static int pci_vga_initfn(PCIDevice *dev)
     pci_register_bar(&d->dev, 0, VGA_RAM_SIZE,
                      PCI_BASE_ADDRESS_MEM_PREFETCH, vga_map);

+    if (!dev->rom_bar) {
+        /* compatibility with pc-0.13 and older */
+        vga_init_vbe(s);
+    }
+
     return 0;
 }

diff --git a/hw/vmware_vga.c b/hw/vmware_vga.c
index e96b7db..e852620 100644
--- a/hw/vmware_vga.c
+++ b/hw/vmware_vga.c
@@ -1305,6 +1305,11 @@ static int pci_vmsvga_initfn(PCIDevice *dev)

     vmsvga_init(&s->chip, VGA_RAM_SIZE);

+    if (!dev->rom_bar) {
+        /* compatibility with pc-0.13 and older */
+        vga_init_vbe(&s->chip.vga);
+    }
+
     return 0;
 }

   





Re: [Qemu-devel] [PATCH][RESEND] pcnet: Do not receive external frames in loopback mode

2010-11-21 Thread Anthony Liguori

On 10/19/2010 10:03 AM, Jan Kiszka wrote:

While not explicitly stated in the spec, it was observed on real systems
that enabling loopback testing on the pcnet controller disables
reception of external frames. And some legacy software relies on it, so
provide this behavior.

Signed-off-by: Jan Kiszka <jan.kis...@siemens.com>
   


Applied.  Thanks.

Regards,

Anthony Liguori

---
  hw/pcnet.c |5 +++--
  1 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/pcnet.c b/hw/pcnet.c
index b52935a..f970bda 100644
--- a/hw/pcnet.c
+++ b/hw/pcnet.c
@@ -1048,9 +1048,10 @@ ssize_t pcnet_receive(VLANClientState *nc, const uint8_t *buf, size_t size_)
     int crc_err = 0;
     int size = size_;

-    if (CSR_DRX(s) || CSR_STOP(s) || CSR_SPND(s) || !size)
+    if (CSR_DRX(s) || CSR_STOP(s) || CSR_SPND(s) || !size ||
+        (CSR_LOOP(s) && !s->looptest)) {
         return -1;
-
+    }
 #ifdef PCNET_DEBUG
     printf("pcnet_receive size=%d\n", size);
 #endif
   





Re: [Qemu-devel] [PATCH 0/2] msi support for virtfs

2010-11-21 Thread Anthony Liguori

On 11/11/2010 05:59 AM, Gerd Hoffmann wrote:

   Hi,

This tiny patch series adds msi support for virtfs.  It's two patches
only because we need a compat property to stay compatible with -stable
and we don't have a pc-0.14 machine type yet, so this is added first.
   


Applied all.  Thanks.

Regards,

Anthony Liguori


please apply,
   Gerd

Gerd Hoffmann (2):
   pc: add 0.13 pc machine type
   virtfs: enable MSI-X

  hw/pc_piix.c|   18 +-
  hw/virtio-pci.c |5 -
  2 files changed, 21 insertions(+), 2 deletions(-)



   





Re: [Qemu-devel] [PATCH] trace: Trace vm_start()/vm_stop()

2010-11-21 Thread Anthony Liguori

On 11/16/2010 06:20 AM, Stefan Hajnoczi wrote:

VM state change notifications are invoked from vm_start()/vm_stop().
Trace these state changes so we can reason about the state of the VM
from trace output.

Signed-off-by: Stefan Hajnoczi <stefa...@linux.vnet.ibm.com>
   


Applied.  Thanks.

Regards,

Anthony Liguori


---
  trace-events |3 +++
  vl.c |3 +++
  2 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/trace-events b/trace-events
index 947f8b0..da03d4b 100644
--- a/trace-events
+++ b/trace-events
@@ -189,3 +189,6 @@ disable sun4m_iommu_mem_writel_pgflush(uint32_t val) "page flush %x"
 disable sun4m_iommu_page_get_flags(uint64_t pa, uint64_t iopte, uint32_t ret) "get flags addr %"PRIx64" => pte %"PRIx64", *pte = %x"
 disable sun4m_iommu_translate_pa(uint64_t addr, uint64_t pa, uint32_t iopte) "xlate dva %"PRIx64" => pa %"PRIx64" iopte = %x"
 disable sun4m_iommu_bad_addr(uint64_t addr) "bad addr %"PRIx64
+
+# vl.c
+disable vm_state_notify(int running, int reason) "running %d reason %d"
diff --git a/vl.c b/vl.c
index c58583d..87e76ad 100644
--- a/vl.c
+++ b/vl.c
@@ -158,6 +158,7 @@ int main(int argc, char **argv)

 #include "slirp/libslirp.h"

+#include "trace.h"
 #include "qemu-queue.h"
 #include "cpus.h"
 #include "arch_init.h"
@@ -1074,6 +1075,8 @@ void vm_state_notify(int running, int reason)
  {
  VMChangeStateEntry *e;

+    trace_vm_state_notify(running, reason);
+
     for (e = vm_change_state_head.lh_first; e; e = e->entries.le_next) {
         e->cb(e->opaque, running, reason);
  }
   





Re: [Qemu-devel] [PATCH 0/4] virtio: Convert fprintf() to error_report()

2010-11-21 Thread Anthony Liguori

On 11/15/2010 02:44 PM, Stefan Hajnoczi wrote:

The virtio hardware emulation code uses fprintf(stderr, ...) for error messages.
Improve things slightly by moving to error_report() so error messages will be
printed to the monitor, if present.

We want to handle device error states properly instead of bailing out with
exit(1) but this series does not attempt to fix that yet.

Leave virtio-9p for now where there are many cases of fprintf(stderr, ...) and
development is still very active.
   


Applied all.  Thanks.

Regards,

Anthony Liguori



   





Re: [Qemu-devel] [PATCH] trace: Use fprintf_function (format checking)

2010-11-21 Thread Anthony Liguori

On 11/15/2010 02:17 PM, Stefan Weil wrote:

fprintf_function adds format checking with GCC_FMT_ATTR.

Cc: Blue Swirl <blauwir...@gmail.com>
Signed-off-by: Stefan Weil <w...@mail.berlios.de>
   


Applied.  Thanks.

Regards,

Anthony Liguori


---
  simpletrace.h |6 +++---
  1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/simpletrace.h b/simpletrace.h
index 72614ec..2f44ed3 100644
--- a/simpletrace.h
+++ b/simpletrace.h
@@ -29,10 +29,10 @@ void trace3(TraceEventID event, uint64_t x1, uint64_t x2, uint64_t x3);
 void trace4(TraceEventID event, uint64_t x1, uint64_t x2, uint64_t x3, uint64_t x4);
 void trace5(TraceEventID event, uint64_t x1, uint64_t x2, uint64_t x3, uint64_t x4, uint64_t x5);
 void trace6(TraceEventID event, uint64_t x1, uint64_t x2, uint64_t x3, uint64_t x4, uint64_t x5, uint64_t x6);
-void st_print_trace(FILE *stream, int (*stream_printf)(FILE *stream, const char *fmt, ...));
-void st_print_trace_events(FILE *stream, int (*stream_printf)(FILE *stream, const char *fmt, ...));
+void st_print_trace(FILE *stream, fprintf_function stream_printf);
+void st_print_trace_events(FILE *stream, fprintf_function stream_printf);
 bool st_change_trace_event_state(const char *tname, bool tstate);
-void st_print_trace_file_status(FILE *stream, int (*stream_printf)(FILE *stream, const char *fmt, ...));
+void st_print_trace_file_status(FILE *stream, fprintf_function stream_printf);
  void st_set_trace_file_enabled(bool enable);
  bool st_set_trace_file(const char *file);
  void st_flush_trace_buffer(void);
   





Re: [Qemu-devel] [PATCH] slirp: Remove unused code for bad sprintf

2010-11-21 Thread Anthony Liguori

On 11/15/2010 02:15 PM, Stefan Weil wrote:

Neither DECLARE_SPRINTF nor BAD_SPRINTF are needed for QEMU.

QEMU won't support systems with missing or bad declarations
for sprintf. The unused code was detected while looking for
functions with missing format checking. Instead of adding
GCC_FMT_ATTR, the unused code was removed.

Cc: Blue Swirl <blauwir...@gmail.com>
Signed-off-by: Stefan Weil <w...@mail.berlios.de>
   


Applied.  Thanks.

Regards,

Anthony Liguori


---
  slirp/misc.c |   42 --
  slirp/slirp.h|   14 --
  slirp/slirp_config.h |6 --
  3 files changed, 0 insertions(+), 62 deletions(-)

diff --git a/slirp/misc.c b/slirp/misc.c
index 1aeb401..19dbec4 100644
--- a/slirp/misc.c
+++ b/slirp/misc.c
@@ -264,48 +264,6 @@ void lprint(const char *format, ...)
  va_end(args);
  }

-#ifdef BAD_SPRINTF
-
-#undef vsprintf
-#undef sprintf
-
-/*
- * Some BSD-derived systems have a sprintf which returns char *
- */
-
-int
-vsprintf_len(string, format, args)
-   char *string;
-   const char *format;
-   va_list args;
-{
-   vsprintf(string, format, args);
-   return strlen(string);
-}
-
-int
-#ifdef __STDC__
-sprintf_len(char *string, const char *format, ...)
-#else
-sprintf_len(va_alist) va_dcl
-#endif
-{
-   va_list args;
-#ifdef __STDC__
-   va_start(args, format);
-#else
-   char *string;
-   char *format;
-   va_start(args);
-   string = va_arg(args, char *);
-   format = va_arg(args, char *);
-#endif
-   vsprintf(string, format, args);
-   return strlen(string);
-}
-
-#endif
-
  void
  u_sleep(int usec)
  {
diff --git a/slirp/slirp.h b/slirp/slirp.h
index 462292d..dfd977a 100644
--- a/slirp/slirp.h
+++ b/slirp/slirp.h
@@ -237,20 +237,6 @@ void if_start(Slirp *);
  void if_start(struct ttys *);
  #endif

-#ifdef BAD_SPRINTF
-# define vsprintf vsprintf_len
-# define sprintf sprintf_len
- extern int vsprintf_len(char *, const char *, va_list);
- extern int sprintf_len(char *, const char *, ...);
-#endif
-
-#ifdef DECLARE_SPRINTF
-# ifndef BAD_SPRINTF
- extern int vsprintf(char *, const char *, va_list);
-# endif
- extern int vfprintf(FILE *, const char *, va_list);
-#endif
-
  #ifndef HAVE_STRERROR
   extern char *strerror(int error);
  #endif
diff --git a/slirp/slirp_config.h b/slirp/slirp_config.h
index f19c703..18db45c 100644
--- a/slirp/slirp_config.h
+++ b/slirp/slirp_config.h
@@ -85,9 +85,6 @@
  /* Define if the machine is big endian */
  //#undef HOST_WORDS_BIGENDIAN

-/* Define if your sprintf returns char * instead of int */
-#undef BAD_SPRINTF
-
  /* Define if you have readv */
  #undef HAVE_READV

@@ -97,9 +94,6 @@
  #define DECLARE_IOVEC
  #endif

-/* Define if a declaration of sprintf/fprintf is needed */
-#undef DECLARE_SPRINTF
-
  /* Define if you have a POSIX.1 sys/wait.h */
  #undef HAVE_SYS_WAIT_H

   





Re: [Qemu-devel] [PATCH v3] virtio-9p: fix build on !CONFIG_UTIMENSAT

2010-11-21 Thread Anthony Liguori

On 11/14/2010 08:15 PM, Hidetoshi Seto wrote:

This patch introduce a fallback mechanism for old systems that do not
support utimensat().  This fix build failure with following warnings:

hw/virtio-9p-local.c: In function 'local_utimensat':
hw/virtio-9p-local.c:479: warning: implicit declaration of function 'utimensat'
hw/virtio-9p-local.c:479: warning: nested extern declaration of 'utimensat'

and:

hw/virtio-9p.c: In function 'v9fs_setattr_post_chmod':
hw/virtio-9p.c:1410: error: 'UTIME_NOW' undeclared (first use in this function)
hw/virtio-9p.c:1410: error: (Each undeclared identifier is reported only once
hw/virtio-9p.c:1410: error: for each function it appears in.)
hw/virtio-9p.c:1413: error: 'UTIME_OMIT' undeclared (first use in this function)
hw/virtio-9p.c: In function 'v9fs_wstat_post_chmod':
hw/virtio-9p.c:2905: error: 'UTIME_OMIT' undeclared (first use in this function)

v3:
   - Use better alternative handling for UTIME_NOW/OMIT
   - Move qemu_utimensat() to cutils.c
V2:
   - Introduce qemu_utimensat()

Signed-off-by: Hidetoshi Seto <seto.hideto...@jp.fujitsu.com>
   


Applied.  Thanks.

Regards,

Anthony Liguori


---
  cutils.c |   43 +++
  hw/virtio-9p-local.c |4 ++--
  qemu-common.h|   10 ++
  3 files changed, 55 insertions(+), 2 deletions(-)

diff --git a/cutils.c b/cutils.c
index 536ee93..3c18941 100644
--- a/cutils.c
+++ b/cutils.c
@@ -288,3 +288,46 @@ int fcntl_setfl(int fd, int flag)
  }
  #endif

+int qemu_utimensat(int dirfd, const char *path, const struct timespec *times,
+                   int flags)
+{
+#ifdef CONFIG_UTIMENSAT
+    return utimensat(dirfd, path, times, flags);
+#else
+    /* Fallback: use utimes() instead of utimensat() */
+    struct timeval tv[2], tv_now;
+    struct stat st;
+    int i;
+
+    /* happy if special cases */
+    if (times[0].tv_nsec == UTIME_OMIT && times[1].tv_nsec == UTIME_OMIT) {
+        return 0;
+    }
+    if (times[0].tv_nsec == UTIME_NOW && times[1].tv_nsec == UTIME_NOW) {
+        return utimes(path, NULL);
+    }
+
+    /* prepare for hard cases */
+    if (times[0].tv_nsec == UTIME_NOW || times[1].tv_nsec == UTIME_NOW) {
+        gettimeofday(&tv_now, NULL);
+    }
+    if (times[0].tv_nsec == UTIME_OMIT || times[1].tv_nsec == UTIME_OMIT) {
+        stat(path, &st);
+    }
+
+    for (i = 0; i < 2; i++) {
+        if (times[i].tv_nsec == UTIME_NOW) {
+            tv[i].tv_sec = tv_now.tv_sec;
+            tv[i].tv_usec = 0;
+        } else if (times[i].tv_nsec == UTIME_OMIT) {
+            tv[i].tv_sec = (i == 0) ? st.st_atime : st.st_mtime;
+            tv[i].tv_usec = 0;
+        } else {
+            tv[i].tv_sec = times[i].tv_sec;
+            tv[i].tv_usec = times[i].tv_nsec / 1000;
+        }
+    }
+
+    return utimes(path, &tv[0]);
+#endif
+}
diff --git a/hw/virtio-9p-local.c b/hw/virtio-9p-local.c
index 0d52020..41603ea 100644
--- a/hw/virtio-9p-local.c
+++ b/hw/virtio-9p-local.c
@@ -480,9 +480,9 @@ static int local_chown(FsContext *fs_ctx, const char *path, 
FsCred *credp)
  }

  static int local_utimensat(FsContext *s, const char *path,
-  const struct timespec *buf)
+   const struct timespec *buf)
  {
-return utimensat(AT_FDCWD, rpath(s, path), buf, AT_SYMLINK_NOFOLLOW);
+return qemu_utimensat(AT_FDCWD, rpath(s, path), buf, AT_SYMLINK_NOFOLLOW);
  }

  static int local_remove(FsContext *ctx, const char *path)
diff --git a/qemu-common.h b/qemu-common.h
index 2fbc27f..7fe4c16 100644
--- a/qemu-common.h
+++ b/qemu-common.h
@@ -146,6 +146,16 @@ time_t mktimegm(struct tm *tm);
  int qemu_fls(int i);
  int qemu_fdatasync(int fd);
  int fcntl_setfl(int fd, int flag);
+#ifndef CONFIG_UTIMENSAT
+#ifndef UTIME_NOW
+# define UTIME_NOW ((1l << 30) - 1l)
+#endif
+#ifndef UTIME_OMIT
+# define UTIME_OMIT ((1l << 30) - 2l)
+#endif
+#endif
+int qemu_utimensat(int dirfd, const char *path, const struct timespec *times,
+                   int flags);

  /* path.c */
  void init_paths(const char *prefix);
   





Re: [Qemu-devel] [patch 0/3] block migration fixes

2010-11-21 Thread Anthony Liguori

On 11/08/2010 01:02 PM, Marcelo Tosatti wrote:

Following patchset fixes block migration corruption issues
   


Applied all.  Thanks.

Regards,

Anthony Liguori



   





Re: [Qemu-devel] [PATCH] Makefile: Fix check dependency breakage

2010-11-21 Thread Anthony Liguori

On 11/12/2010 08:55 AM, Luiz Capitulino wrote:

Commit b152aa84d52882bb1846485a89baf13aa07c86bc broke the unit-tests
build, fix it.

Signed-off-by: Luiz Capitulino <lcapitul...@redhat.com>
   


Applied.  Thanks.

Regards,

Anthony Liguori


---
  Makefile |   14 --
  1 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/Makefile b/Makefile
index 02698e9..719aca9 100644
--- a/Makefile
+++ b/Makefile
@@ -142,12 +142,14 @@ qemu-img-cmds.h: $(SRC_PATH)/qemu-img-cmds.hx

 check-qint.o check-qstring.o check-qdict.o check-qlist.o check-qfloat.o check-qjson.o: $(GENERATED_HEADERS)

-check-qint: check-qint.o qint.o qemu-malloc.o $(trace-obj-y)
-check-qstring: check-qstring.o qstring.o qemu-malloc.o $(trace-obj-y)
-check-qdict: check-qdict.o qdict.o qfloat.o qint.o qstring.o qbool.o qemu-malloc.o qlist.o $(trace-obj-y)
-check-qlist: check-qlist.o qlist.o qint.o qemu-malloc.o $(trace-obj-y)
-check-qfloat: check-qfloat.o qfloat.o qemu-malloc.o $(trace-obj-y)
-check-qjson: check-qjson.o qfloat.o qint.o qdict.o qstring.o qlist.o qbool.o qjson.o json-streamer.o json-lexer.o json-parser.o qemu-malloc.o $(trace-obj-y)
+CHECK_PROG_DEPS = qemu-malloc.o $(oslib-obj-y) $(trace-obj-y)
+
+check-qint: check-qint.o qint.o $(CHECK_PROG_DEPS)
+check-qstring: check-qstring.o qstring.o $(CHECK_PROG_DEPS)
+check-qdict: check-qdict.o qdict.o qfloat.o qint.o qstring.o qbool.o qlist.o $(CHECK_PROG_DEPS)
+check-qlist: check-qlist.o qlist.o qint.o $(CHECK_PROG_DEPS)
+check-qfloat: check-qfloat.o qfloat.o $(CHECK_PROG_DEPS)
+check-qjson: check-qjson.o qfloat.o qint.o qdict.o qstring.o qlist.o qbool.o qjson.o json-streamer.o json-lexer.o json-parser.o $(CHECK_PROG_DEPS)

  clean:
  # avoid old build problems by removing potentially incorrect old files
   





[Qemu-devel] [PATCH vgabios] Add 1280x768 mode

2010-11-21 Thread Avi Kivity
Signed-off-by: Avi Kivity <a...@redhat.com>
---
 vbetables-gen.c |3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/vbetables-gen.c b/vbetables-gen.c
index 550935a..76b8842 100644
--- a/vbetables-gen.c
+++ b/vbetables-gen.c
@@ -55,6 +55,9 @@ ModeInfo modes[] = {
 { 1152, 864, 16  , 0x14a},
 { 1152, 864, 24  , 0x14b},
 { 1152, 864, 32  , 0x14c},
+{ 1280, 768, 16  , 0x175},
+{ 1280, 768, 24  , 0x176},
+{ 1280, 768, 32  , 0x177},
 { 1280, 800, 16  , 0x178},
 { 1280, 800, 24  , 0x179},
 { 1280, 800, 32  , 0x17a},
-- 
1.7.1




[Qemu-devel] Re: [PATCH vgabios] Add 1280x768 mode

2010-11-21 Thread Avi Kivity

On 11/21/2010 05:33 PM, Avi Kivity wrote:

Signed-off-by: Avi Kivity <a...@redhat.com>
---
  vbetables-gen.c |3 +++
  1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/vbetables-gen.c b/vbetables-gen.c
index 550935a..76b8842 100644
--- a/vbetables-gen.c
+++ b/vbetables-gen.c
@@ -55,6 +55,9 @@ ModeInfo modes[] = {
  { 1152, 864, 16  , 0x14a},
  { 1152, 864, 24  , 0x14b},
  { 1152, 864, 32  , 0x14c},
+{ 1280, 768, 16  , 0x175},
+{ 1280, 768, 24  , 0x176},
+{ 1280, 768, 32  , 0x177},


This is of course from qemu-kvm, and was added at a user's request.

--
error compiling committee.c: too many arguments to function




[Qemu-devel] Re: Oudated vgabios.git on git.qemu.org

2010-11-21 Thread Avi Kivity

On 11/21/2010 04:36 PM, Anthony Liguori wrote:

On 11/21/2010 04:23 AM, Avi Kivity wrote:
qemu.git points to vgabios.git 
19ea12c230ded95928ecaef0db47a82231c2e485, which isn't available on 
git.qemu.org/vgabios.git.


As I mentioned in another mail, I had to switch the repo from mirror 
to push mode.  It's all now fixed up.




 It is, thanks.

--
error compiling committee.c: too many arguments to function




Re: [Qemu-devel] Re: [PATCH] PCI: Bus number from the bridge, not the device

2010-11-21 Thread Gleb Natapov
On Sun, Nov 21, 2010 at 04:48:44PM +0200, Michael S. Tsirkin wrote:
 On Sun, Nov 21, 2010 at 02:50:14PM +0200, Gleb Natapov wrote:
  On Sun, Nov 21, 2010 at 01:53:26PM +0200, Michael S. Tsirkin wrote:
 The guests.
Which one? There are many guests. Your favorite?

 For CLI, we need an easy way to map a device in guest to the
 device in qemu and back.
Then use eth0, /dev/sdb, or even C:. Your way is no less broken, since what
you are saying is let's use the name that the guest assigned to a device. 
   
   No I am saying let's use the name that our ACPI tables assigned.
   
  ACPI does not assign any name. In the best case ACPI tables describe the
  resources used by a device.
 
 Not only that. bus number and segment aren't resources as such.
 They describe addressing.
 
  And not all guests qemu supports have support for ACPI. Qemu
  even supports machine types that do not support ACPI.
 
 So? Different machines - different names.
 
You want to have different cli for different type of machines qemu
supports?

 
 
  It looks like you identify yourself with most of
  qemu users, but if most qemu users are like you then qemu has not 
  enough
  users :) Most users that consider themselves to be advanced may 
  know
  what eth1 or /dev/sdb means. This doesn't mean we should provide
  device_del eth1 or device_add /dev/sdb command though. 
  
  More important is that the domain (encoded as a number, as you used it)
  and the bus number have no meaning from inside qemu.
  So while I said many times I don't care about the exact CLI syntax
  too much, it should make sense at least. It can use an id to specify
  a PCI bus in CLI like this: device_del pci.0:1.1. Or it can even use
  a device id too, like this: device_del pci.0:ide.0. Or it can use HW
  topology like in an OF device path. But doing ad-hoc device
  enumeration inside qemu and then using it for CLI is not it.
  
   functionality in the guests.  Qemu is buggy in the moment in that 
   is
   uses the bus addresses assigned by guest and not the ones in ACPI,
   but that can be fixed.
  It looks like you confused ACPI _SEG for something it isn't.
 
 Maybe I did. This is what linux does:
 
 struct pci_bus * __devinit pci_acpi_scan_root(struct acpi_pci_root
 *root)
 {
 struct acpi_device *device = root->device;
 int domain = root->segment;
 int busnum = root->secondary.start;
 
 And I think this is consistent with the spec.
 
It means that one domain may include several host bridges.
At that level
a domain is defined as something that has a unique name for each device
inside it, thus no two buses in one segment/domain can have the same bus
number. This is what the PCI spec tells you. 
   
   And that really is enough for CLI because all we need is locate the
   specific slot in a unique way.
   
  At the qemu level we do not have bus numbers. They are assigned by the guest.
  So inside a guest domain:bus:slot.func points you to a device, but
  qemu itself does not enumerate buses.
  
And this further shows that using domain as defined by guest is very
bad idea. 
   
   As defined by ACPI, really.
   
  ACPI is a part of the guest software that may not even be present in the
  guest. How is it relevant?
 
 It's relevant because this is what guests use. To access the root
 device with cf8/cfc you need to know the bus number assigned to it
 by firmware. How that was assigned is of interest to BIOS/ACPI but not
 really interesting to the user or, I suspect, guest OS.
 
Of course this is incorrect. The OS can re-enumerate PCI if it wishes. Linux
has a command-line option just for that. And saying that ACPI is relevant
because it is what guest software uses, in reply to a sentence stating that
not all guests even use ACPI, is, well, strange.

And ACPI describes only HW that is present at boot time. What if you
hot-plugged a root pci bridge? How does non-existent PCI naming help you?

  The ACPI spec
  says that a PCI segment group is a purely software concept managed by system
  firmware. In fact one segment may include multiple PCI host bridges.
 
 It can't I think:
Read _BBN definition:
 The _BBN object is located under a PCI host bridge and must be unique 
for
 every host bridge within a segment since it is the PCI bus number.

Clearly the above speaks about multiple host bridges within a segment.
   
   Yes, it looks like the firmware spec allows that.
  It even has an explicit example that shows it.
  
   
   Multiple Host Bridges

   A platform may have multiple PCI Express or PCI-X host bridges. The base
   address for the MMCONFIG space for these host bridges may need to be
   allocated at different locations. In such cases, using the MCFG table and
   _CBA method as defined in this section means
  

Re: [Qemu-devel] Re: [PATCH] PCI: Bus number from the bridge, not the device

2010-11-21 Thread Michael S. Tsirkin
On Sun, Nov 21, 2010 at 06:01:11PM +0200, Gleb Natapov wrote:
 On Sun, Nov 21, 2010 at 04:48:44PM +0200, Michael S. Tsirkin wrote:
  On Sun, Nov 21, 2010 at 02:50:14PM +0200, Gleb Natapov wrote:
   On Sun, Nov 21, 2010 at 01:53:26PM +0200, Michael S. Tsirkin wrote:
  The guests.
 Which one? There are many guests. Your favorite?
 
  For CLI, we need an easy way to map a device in guest to the
  device in qemu and back.
 Then use eth0, /dev/sdb, or even C:. Your way is not less broken 
 since what
 you are saying is lets use name that guest assigned to a device. 

No I am saying let's use the name that our ACPI tables assigned.

   ACPI does not assign any name. In a best case ACPI tables describe 
   resources
   used by a device.
  
  Not only that. bus number and segment aren't resources as such.
  They describe addressing.
  
   And not all guests qemu supports has support for ACPI. Qemu
   even support machines types that do not support ACPI.
  
  So? Different machines - different names.
  
 You want to have different cli for different type of machines qemu
 supports?

Different device names.

  
  
   It looks like you identify yourself with most of
   qemu users, but if most qemu users are like you then qemu has not 
   enough
   users :) Most users that consider themselves to be advanced may 
   know
   what eth1 or /dev/sdb means. This doesn't mean we should provide
   device_del eth1 or device_add /dev/sdb command though. 
   
   More important is that domain (encoded as number like you used 
   to)
   and bus number has no meaning from inside qemu.
   So while I said many
   times I don't care about exact CLI syntax to much it should make 
   sense
   at least. It can use id to specify PCI bus in CLI like this:
   device_del pci.0:1.1. Or it can even use device id too like this:
   device_del pci.0:ide.0. Or it can use HW topology like in FO 
   device
   path. But doing ah-hoc device enumeration inside qemu and then 
   using it
   for CLI is not it.
   
functionality in the guests.  Qemu is buggy in the moment in 
that is
uses the bus addresses assigned by guest and not the ones in 
ACPI,
but that can be fixed.
   It looks like you confused ACPI _SEG for something it isn't.
  
  Maybe I did. This is what linux does:
  
  struct pci_bus * __devinit pci_acpi_scan_root(struct acpi_pci_root
  *root)
  {
  struct acpi_device *device = root->device;
  int domain = root->segment;
  int busnum = root->secondary.start;
  
  And I think this is consistent with the spec.
  
 It means that one domain may include several host bridges.
 At that level
 domain is defined as something that have unique name for each device
 inside it thus no two buses in one segment/domain can have same bus
 number. This is what PCI spec tells you. 

And that really is enough for CLI because all we need is locate the
specific slot in a unique way.

   At qemu level we do not have bus numbers. They are assigned by a guest.
   So inside a guest domain:bus:slot.func points you to a device, but in
   qemu does not enumerate buses.
   
 And this further shows that using domain as defined by guest is very
 bad idea. 

As defined by ACPI, really.

   ACPI is a part of a guest software that may not event present in the
   guest. How is it relevant?
  
  It's relevant because this is what guests use. To access the root
  device with cf8/cfc you need to know the bus number assigned to it
  by firmware. How that was assigned is of interest to BIOS/ACPI but not
  really interesting to the user or, I suspect, guest OS.
  
 Of course this is incorrect. The OS can re-enumerate PCI if it wishes. Linux
 has a command-line option just for that.

I haven't looked, but I suspect linux will simply assume cf8/cfc
and start doing it from there. If that doesn't get you the root
device you wanted, tough.

 And saying that ACPI is relevant because this is
 what guest software use in a reply to sentence that states that not all
 guest even use ACPI is, well, strange.
 
 And ACPI describes only HW that is present at boot time. What if you
 hot-plugged a root pci bridge? How does non-existent PCI naming help you?

that's described by ACPI as well.

   ACPI spec
   says that PCI segment group is purely software concept managed by 
   system
   firmware. In fact one segment may include multiple PCI host 
   bridges.
  
  It can't I think:
 Read _BBN definition:
  The _BBN object is located under a PCI host bridge and must be 
 unique for
  every host bridge within a segment since it is the PCI bus number.
 
 Clearly above speaks about multiple host bridge within a segment.

Yes, it looks like the firmware spec allows that.

Re: [Qemu-devel] Re: [PATCH] PCI: Bus number from the bridge, not the device

2010-11-21 Thread Gleb Natapov
On Sun, Nov 21, 2010 at 06:38:31PM +0200, Michael S. Tsirkin wrote:
 On Sun, Nov 21, 2010 at 06:01:11PM +0200, Gleb Natapov wrote:
  On Sun, Nov 21, 2010 at 04:48:44PM +0200, Michael S. Tsirkin wrote:
   On Sun, Nov 21, 2010 at 02:50:14PM +0200, Gleb Natapov wrote:
On Sun, Nov 21, 2010 at 01:53:26PM +0200, Michael S. Tsirkin wrote:
   The guests.
  Which one? There are many guests. Your favorite?
  
   For CLI, we need an easy way to map a device in guest to the
   device in qemu and back.
  Then use eth0, /dev/sdb, or even C:. Your way is not less broken 
  since what
  you are saying is lets use name that guest assigned to a device. 
 
 No I am saying let's use the name that our ACPI tables assigned.
 
ACPI does not assign any name. In a best case ACPI tables describe 
resources
used by a device.
   
   Not only that. bus number and segment aren't resources as such.
   They describe addressing.
   
And not all guests qemu supports has support for ACPI. Qemu
even support machines types that do not support ACPI.
   
   So? Different machines - different names.
   
  You want to have different cli for different type of machines qemu
  supports?
 
 Different device names.
 
You mean that on qemu-sparc, for deleting a PCI device, you will use a
different device naming scheme?!

   
   
It looks like you identify yourself with most of
qemu users, but if most qemu users are like you then qemu has 
not enough
users :) Most users that consider themselves to be advanced 
may know
what eth1 or /dev/sdb means. This doesn't mean we should provide
device_del eth1 or device_add /dev/sdb command though. 

More important is that the domain (encoded as a number like you used to)
and the bus number have no meaning inside qemu. So while I have said many
times that I don't care too much about the exact CLI syntax, it should
at least make sense. It can use an id to specify the PCI bus in the CLI
like this: device_del pci.0:1.1. Or it can even use a device id too, like
this: device_del pci.0:ide.0. Or it can use HW topology like in an OF
device path. But doing ad-hoc device enumeration inside qemu and then
using it for the CLI is not it.

 functionality in the guests.  Qemu is buggy at the moment in that it
 uses the bus addresses assigned by the guest and not the ones in ACPI,
 but that can be fixed.
It looks like you confused ACPI _SEG for something it isn't.
   
   Maybe I did. This is what linux does:
   
   struct pci_bus * __devinit pci_acpi_scan_root(struct acpi_pci_root
   *root)
   {
   struct acpi_device *device = root->device;
   int domain = root->segment;
   int busnum = root->secondary.start;
   
   And I think this is consistent with the spec.
   
  It means that one domain may include several host bridges. At that level
  a domain is defined as something that has a unique name for each device
  inside it, thus no two buses in one segment/domain can have the same bus
  number. This is what the PCI spec tells you.
 
 And that really is enough for CLI because all we need is locate the
 specific slot in a unique way.
 
At the qemu level we do not have bus numbers. They are assigned by a guest.
So inside a guest domain:bus:slot.func points you to a device, but
qemu does not enumerate buses.

  And this further shows that using the domain as defined by the guest is a
  very bad idea.
 
 As defined by ACPI, really.
 
ACPI is part of the guest software and may not even be present in the
guest. How is it relevant?
   
   It's relevant because this is what guests use. To access the root
   device with cf8/cfc you need to know the bus number assigned to it
   by firmware. How that was assigned is of interest to BIOS/ACPI but not
   really interesting to the user or, I suspect, guest OS.
   
  Of course this is incorrect. An OS can re-enumerate PCI if it wishes. Linux
  has a command-line option just for that.
 
 I haven't looked but I suspect linux will simply assume cf8/cfc
 and start doing it from there. If that doesn't get you the root
 device you wanted, tough.
 
Linux runs on many platforms and on most of them there is no such thing
as IO space. Also I believe Linux on x86 supported multi-root without
mmconfig in the past. It may be enough to describe in ACPI the IO resource
the second pci host bridge uses for Linux to use it.

  And saying that ACPI is relevant because this is what guest software
  uses, in reply to a sentence stating that not all guests even use ACPI,
  is, well, strange.
  
  And ACPI describes only HW present at boot time. What if you
  hot-plugged a root pci bridge? How does non-existent PCI naming help you?
 
 that's described by 

Re: [Qemu-devel] Re: [PATCH] PCI: Bus number from the bridge, not the device

2010-11-21 Thread Michael S. Tsirkin
On Sun, Nov 21, 2010 at 06:01:11PM +0200, Gleb Natapov wrote:
  This FW is given to the guest by qemu. It only assigns bus numbers
  because qemu told it to do so.
 Seabios is just a guest that qemu ships. There are other FWs for qemu.

We don't support them though, do we?

 Bochs
 bios, openfirmware, efi. All of them were developed outside of the qemu
 project and all of them are usable without qemu. You can't consider them
 to be part of qemu any more than Linux/Windows with virtio drivers.

You can also burn linuxbios onto your motherboard. If you do you
void your warranty.


  

  And the spec says, e.g.:
  
  the memory mapped configuration base address (always corresponds to bus
  number 0) for the PCI Segment Group of the host bridge is provided by
  _CBA and the bus range covered by the base address is indicated by the
  corresponding bus range specified in _CRS.
  
 Don't see how it is relevant. And _CBA is defined only for PCI Express.
 Let's solve the problem for PCI first and then move to PCI Express.
 Jumping from one to the other distracts us from the main discussion.

I think this is what confuses us.  As long as you are using cf8/cfc
there's no concept of a domain really.
Thus:
/p...@i0cf8

is probably enough for BIOS boot because we'll need to make root bus
numbers unique for legacy guests/option ROMs.  But this is not a hardware
requirement and might become easier to ignore with EFI.

   You do not need MMCONFIG to have multiple PCI domains. You can have one
   configured via standard cf8/cfc and another one on ef8/efc and one more
   at mmio fce0 and you can address all of them:
   /p...@i0cf8
   /p...@i0ef8
   /p...@fce0

Isn't the mmio one relocatable?

   
   And each one of those PCI domains can have 256 subbridges.
  
  Will common guests such as windows or linux be able to use them? This
 With proper drivers, yes. There is HW with more than one PCI bus and I
 think qemu emulates some of it (PPC MAC for instance).

MAC is probably just poking in a couple of predefined places.  That's
not enumeration.

  seems to be outside the scope of the PCI Firmware specification, which
  says that bus numbers must be unique.
 They must be unique per PCI segment group.

We've come full circle, haven't we? I am saying we should let users
specify PCI Segment group+bus as opposed to the io port, which they
don't use.

  
  

That should be enough for e.g. device_del. We do have the need to
describe the topology when we interface with firmware, e.g. to describe
the ACPI tables themselves to qemu (this is what Gleb's patches deal
with), but that's probably the only case.

   Describing HW topology is the only way to unambiguously describe a
   device to something or someone outside qemu and have persistent device
   naming between different HW configurations.
  
  Not really, since ACPI is a binary blob programmed by qemu.
  
 ACPI is part of the guest, not qemu.

Yes it runs in the guest but it's generated by qemu. On real hardware,
it's supplied by the motherboard.

   It is not generated by qemu. Parts of it depend on HW and other parts
   depend on how the BIOS configures HW. _BBN for instance is clearly
   defined to return the address assigned by the BIOS.
  
  BIOS is supplied on the motherboard and in our case by qemu as well.
 You can replace MB bios by coreboot+seabios on some of them.
 Manufacturers don't want you to do it and make it hard to do, but
 otherwise this is just software, not some magic dust.

They support common hardware but not all features will automatically work.

  There's no standard way for BIOS to assign bus number to the pci root,
  so it does it in device-specific way. Why should a management tool
  or a CLI user care about these? As far as they are concerned
  we could use some PV scheme to find root devices and assign bus
  numbers, and it would be exactly the same.
  
Go write KVM userspace that does that. AFAIK there is a project out there
that tries to do that. No luck so far. Your world view is very x86/Linux
centric. You need to broaden it a little bit. Next time you propose
something, ask yourself whether it will work with qemu-sparc, qemu-ppc,
qemu-amd.

Your view is very qemu centric. Ask yourself whether what you propose
will work with libvirt. Better yet, ask libvirt developers.

 Just saying "not really" doesn't prove much. I still haven't seen any
 proposition from you that actually solves the problem. And no, "let's use
 guest naming" is not it. There is no such thing as The Guest.
 
 --
   Gleb.

I am sorry if I didn't make this clear.  I think we should use the
domain:bus
pair to name the root device. As these are unique 

[Qemu-devel] [PATCHv3 RFC] qemu-kvm: stop devices on vmstop

2010-11-21 Thread Michael S. Tsirkin
Stop running devices on vmstop, so that VM does not interact with
outside world at that time.

Whitelist system handlers which run even when VM is stopped.
These are specific handlers like monitor, gdbstub, migration.
I'm not really sure about ui: spice and vnc: do they need to run?

Untested.

Signed-off-by: Michael S. Tsirkin m...@redhat.com
---

Change from previous versions:

reversed the approach, block all handlers except very specific ones.


 aio.c  |   32 -
 cmd.c  |8 +-
 gdbstub.c  |2 +-
 migration-exec.c   |4 +-
 migration-fd.c |4 +-
 migration-tcp.c|8 +-
 migration-unix.c   |8 +-
 migration.c|8 +-
 qemu-aio.h |7 ++
 qemu-char.c|   10 +++
 qemu-char.h|8 ++
 qemu-kvm.c |2 +-
 qemu-tool.c|9 +++
 ui/spice-core.c|2 +-
 ui/vnc-auth-sasl.c |2 +-
 ui/vnc-auth-vencrypt.c |6 +-
 ui/vnc.c   |   14 ++--
 vl.c   |  174 +++-
 18 files changed, 210 insertions(+), 98 deletions(-)

diff --git a/aio.c b/aio.c
index 2f08655..0d50c87 100644
--- a/aio.c
+++ b/aio.c
@@ -52,14 +52,15 @@ static AioHandler *find_aio_handler(int fd)
 return NULL;
 }
 
-int qemu_aio_set_fd_handler(int fd,
+static int qemu_aio_assign_fd_handler(int fd,
 IOHandler *io_read,
 IOHandler *io_write,
 AioFlushHandler *io_flush,
 AioProcessQueue *io_process_queue,
-void *opaque)
+void *opaque, bool system)
 {
 AioHandler *node;
+int r;
 
 node = find_aio_handler(fd);
 
@@ -93,11 +94,34 @@ int qemu_aio_set_fd_handler(int fd,
node->opaque = opaque;
 }
 
-qemu_set_fd_handler2(fd, NULL, io_read, io_write, opaque);
-
+r = system ? qemu_set_fd_handler2(fd, NULL, io_read, io_write, opaque) :
+qemu_set_system_fd_handler(fd, NULL, io_read, io_write, opaque);
+assert(!r);
 return 0;
 }
 
+int qemu_aio_set_fd_handler(int fd,
+IOHandler *io_read,
+IOHandler *io_write,
+AioFlushHandler *io_flush,
+AioProcessQueue *io_process_queue,
+void *opaque)
+{
+return qemu_aio_assign_fd_handler(fd, io_read, io_write, io_flush,
+  io_process_queue, opaque, false);
+}
+
+int qemu_aio_set_system_fd_handler(int fd,
+IOHandler *io_read,
+IOHandler *io_write,
+AioFlushHandler *io_flush,
+AioProcessQueue *io_process_queue,
+void *opaque)
+{
+return qemu_aio_assign_fd_handler(fd, io_read, io_write, io_flush,
+  io_process_queue, opaque, true);
+}
+
 void qemu_aio_flush(void)
 {
 AioHandler *node;
diff --git a/cmd.c b/cmd.c
index db2c9c4..26dcfe4 100644
--- a/cmd.c
+++ b/cmd.c
@@ -154,7 +154,7 @@ static void prep_fetchline(void *opaque)
 {
 int *fetchable = opaque;
 
-qemu_aio_set_fd_handler(STDIN_FILENO, NULL, NULL, NULL, NULL, NULL);
+qemu_aio_set_system_fd_handler(STDIN_FILENO, NULL, NULL, NULL, NULL, NULL);
 *fetchable= 1;
 }
 
@@ -202,8 +202,8 @@ command_loop(void)
 if (!prompted) {
printf("%s", get_prompt());
 fflush(stdout);
-qemu_aio_set_fd_handler(STDIN_FILENO, prep_fetchline, NULL, NULL,
-NULL, fetchable);
+qemu_aio_set_system_fd_handler(STDIN_FILENO, prep_fetchline,
+   NULL, NULL, NULL, fetchable);
 prompted = 1;
 }
 
@@ -228,7 +228,7 @@ command_loop(void)
 prompted = 0;
 fetchable = 0;
}
-qemu_aio_set_fd_handler(STDIN_FILENO, NULL, NULL, NULL, NULL, NULL);
+qemu_aio_set_system_fd_handler(STDIN_FILENO, NULL, NULL, NULL, NULL, NULL);
 }
 
 /* from libxcmd/input.c */
diff --git a/gdbstub.c b/gdbstub.c
index eb8465d..1dd53d9 100644
--- a/gdbstub.c
+++ b/gdbstub.c
@@ -2653,7 +2653,7 @@ int gdbserver_start(const char *device)
 sigaction(SIGINT, act, NULL);
 }
 #endif
-chr = qemu_chr_open("gdb", device, NULL);
+chr = qemu_chr_open_system("gdb", device, NULL);
 if (!chr)
 return -1;
 
diff --git a/migration-exec.c b/migration-exec.c
index 14718dd..a8026bf 100644
--- a/migration-exec.c
+++ b/migration-exec.c
@@ -123,7 +123,7 @@ static void exec_accept_incoming_migration(void *opaque)
 QEMUFile *f = opaque;
 
 process_incoming_migration(f);
-qemu_set_fd_handler2(qemu_stdio_fd(f), NULL, NULL, NULL, NULL);
+qemu_set_system_fd_handler(qemu_stdio_fd(f), NULL, 

Re: [Qemu-devel] Re: [PATCH] PCI: Bus number from the bridge, not the device

2010-11-21 Thread Gleb Natapov
On Sun, Nov 21, 2010 at 08:22:03PM +0200, Michael S. Tsirkin wrote:
 On Sun, Nov 21, 2010 at 06:01:11PM +0200, Gleb Natapov wrote:
   This FW is given to guest by qemu. It only assigns bus numbers
   because qemu told it to do so.
  Seabios is just a guest qemu ships. There are other FW for qemu.
 
 We don't support them though, do we?
 
Who is "we"? As the qemu community we support any guest code.

  Bochs
  bios, openfirmware, efi. All of them were developed outside of the qemu
  project and all of them are usable without qemu. You can't consider them
  to be part of qemu any more than Linux/Windows with virtio drivers.
 
 You can also burn linuxbios onto your motherboard. If you do you
 void your warranty.
And? What is your point?

 
 
   
 
   And the spec says, e.g.:
   
   the memory mapped configuration base address (always corresponds to bus
   number 0) for the PCI Segment Group of the host bridge is provided by
   _CBA and the bus range covered by the base address is indicated by the
   corresponding bus range specified in _CRS.
   
  Don't see how it is relevant. And _CBA is defined only for PCI Express.
  Let's solve the problem for PCI first and then move to PCI Express.
  Jumping from one to the other distracts us from the main discussion.
 
 I think this is what confuses us.  As long as you are using cf8/cfc
 there's no concept of a domain really.
 Thus:
   /p...@i0cf8
 
 is probably enough for BIOS boot because we'll need to make root bus
 numbers unique for legacy guests/option ROMs.  But this is not a hardware
 requirement and might become easier to ignore with EFI.
 
You do not need MMCONFIG to have multiple PCI domains. You can have one
configured via standard cf8/cfc and another one on ef8/efc and one more
at mmio fce0 and you can address all of them:
/p...@i0cf8
/p...@i0ef8
/p...@fce0
 
 Isn't the mmio one relocatable?
 
No. If it is, there is a way to specify so.



And each one of those PCI domains can have 256 subbridges.
   
   Will common guests such as windows or linux be able to use them? This
  With proper drivers, yes. There is HW with more than one PCI bus and I
  think qemu emulates some of it (PPC MAC for instance).
 
 MAC is probably just poking in a couple of predefined places.  That's
 not enumeration.
 
The system bus is not enumerable on a PC either. That is why you need ACPI
to describe resources, or you just poke into predefined places (0cf8-0cff
in the case of PCI).
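For reference, the "predefined place" on a PC is configuration mechanism #1: the guest builds a CONFIG_ADDRESS dword, writes it to I/O port 0xcf8, and then accesses the data at 0xcfc. A sketch of just the address encoding, pure arithmetic with no hardware access, following the field layout in the PCI spec:

```c
#include <assert.h>
#include <stdint.h>

/* CONFIG_ADDRESS dword for PCI configuration mechanism #1 (port 0xcf8):
 * bit 31 enable, bits 23-16 bus, 15-11 device, 10-8 function,
 * bits 7-2 the dword-aligned register offset. */
static uint32_t config_address(unsigned bus, unsigned dev,
                               unsigned fn, unsigned reg)
{
    return 0x80000000u | ((bus & 0xff) << 16) | ((dev & 0x1f) << 11)
                       | ((fn & 0x7) << 8) | (reg & 0xfc);
}
```

A guest reading the vendor id of bus 0, device 1, function 1 would write config_address(0, 1, 1, 0) to 0xcf8 and read a word from 0xcfc. Note there is no domain field anywhere in this dword, which is exactly the point above: a second host bridge needs its own port pair or MMIO window.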

   seems to be outside the scope of the PCI Firmware specification, which
   says that bus numbers must be unique.
  They must be unique per PCI segment group.
 
 We've come full circle, didn't we? i am saying we should let users
 specify PCI Segment group+bus as opposed to the io port, which they
 don't use.
 
When qemu creates the device model there is no bus number assigned to
pci bridges, and thus the bus number is meaningless for qemu.  The PCI
Segment group has no HW meaning at all (you seem to ignore this
completely). So let's say you run qemu with -S and want to delete or add
a device. Where do you get the bus number and PCI Segment group number to
specify it?

   
   
 
 That should be enough for e.g. device_del. We do have the 
 need to
 describe the topology when we interface with firmware, e.g. 
 to describe
 the ACPI tables themselves to qemu (this is what Gleb's 
 patches deal
 with), but that's probably the only case.
 
Describing HW topology is the only way to unambiguously describe a device
to something or someone outside qemu and have persistent device naming
between different HW configurations.
   
   Not really, since ACPI is a binary blob programmed by qemu.
   
  ACPI is part of the guest, not qemu.
 
 Yes it runs in the guest but it's generated by qemu. On real hardware,
 it's supplied by the motherboard.
 
It is not generated by qemu. Parts of it depend on HW and other parts
depend on how the BIOS configures HW. _BBN for instance is clearly defined
to return the address assigned by the BIOS.
   
   BIOS is supplied on the motherboard and in our case by qemu as well.
  You can replace MB bios by coreboot+seabios on some of them.
  Manufacturers don't want you to do it and make it hard to do, but
  otherwise this is just software, not some magic dust.
 
 They support common hardware but not all features will automatically work.
So what? The same can be said about Linux on a lot of HW.

 
   There's no standard way for BIOS to assign bus number to the pci root,
   so it does it in device-specific way. Why should a management tool
   or a CLI user care about these? As far as they are concerned
   we could use some PV scheme to find root devices and assign bus
   numbers, and it 

Re: [Qemu-devel] Re: [PATCH] PCI: Bus number from the bridge, not the device

2010-11-21 Thread Michael S. Tsirkin
On Sun, Nov 21, 2010 at 09:29:59PM +0200, Gleb Natapov wrote:
  We can control the bus numbers that the BIOS will assign to pci root.
 And if you need to insert a device that sits behind a pci-to-pci bridge? Do
 you want to control that bus address too? And a bus number does not
 unambiguously describe a pci root since two roots can have the same bus
 number just fine.

They have to have different segment numbers.

  It's probably required to make them stable anyway.
  
 Why?

To avoid bus renumbering on reboot after you add a pci-to-pci bridge.

-- 
MST



[Qemu-devel] [PATCH 1/1] iscsi: add iSCSI block device support

2010-11-21 Thread ronnie sahlberg
List,

Please find attached a gzipped patch against master that adds support for
iSCSI. It is sent in gz format because of its uncompressed size, 100kb.


This patch adds support for attaching directly to iSCSI resources,
both DISK/SBC and CDROM/MMC, without the need to make the devices
visible to the host.

There are two parts to this support :

./block/iscsi/* : This is a fully async iscsi initiator client
library. This directory does not contain any QEMU specific code with
the hope it can later be re-used by other projects that also need
iscsi-initiator functionality.

./block/iscsi.c : The QEMU block driver that integrates the
iscsi-initiator library with QEMU.


The iscsi library is incomplete, but sufficient to interoperate with
TGTD. With trivial enhancements (target NOP) it should work with most
other targets too.

The goal is to expand the library to add more features over time, so
that over time it will become a full-blown iscsi initiator
implementation as a library.

Some features that should be added are
* support for target-initiated NOP.  TGTD does not use these but some
other targets do.
* support for uni- and bi-directional CHAP authentication.
* additional iscsi functionality

The syntax to specify an iscsi resource is
   iscsi://host[:port]/target-iqn-name/lun

Example :
   -drive file=iscsi://127.0.0.1:3260/iqn.ronnie.test/1

  -cdrom iscsi://127.0.0.1:3260/iqn.ronnie.test/2
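As a sketch of how this syntax could be decomposed, here is a hypothetical stand-alone parser for the iscsi://host[:port]/target-iqn-name/lun form. It is illustrative only, not the code from the attached patch, and it does no bounds checking on the output buffers:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Split "iscsi://host[:port]/target-iqn-name/lun" into its components.
 * Returns 0 on success, -1 on malformed input. Hypothetical helper. */
static int parse_iscsi_url(const char *url, char *host, int *port,
                           char *iqn, int *lun)
{
    if (strncmp(url, "iscsi://", 8) != 0)
        return -1;
    const char *p = url + 8;
    const char *slash = strchr(p, '/');
    if (!slash)
        return -1;

    /* host[:port] part; the default iSCSI port is 3260 */
    const char *colon = memchr(p, ':', slash - p);
    size_t hlen = colon ? (size_t)(colon - p) : (size_t)(slash - p);
    memcpy(host, p, hlen);
    host[hlen] = '\0';
    *port = colon ? atoi(colon + 1) : 3260;

    /* target-iqn-name part */
    const char *slash2 = strchr(slash + 1, '/');
    if (!slash2)
        return -1;
    size_t ilen = slash2 - (slash + 1);
    memcpy(iqn, slash + 1, ilen);
    iqn[ilen] = '\0';

    *lun = atoi(slash2 + 1);
    return 0;
}
```

For the first example above this yields host "127.0.0.1", port 3260, target "iqn.ronnie.test" and lun 1.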


Please review and/or commit.


regards
ronnie sahlberg



Re: [Qemu-devel] [PATCH 1/1] iscsi: add iSCSI block device support

2010-11-21 Thread FUJITA Tomonori
On Mon, 22 Nov 2010 09:17:50 +1100
ronnie sahlberg ronniesahlb...@gmail.com wrote:

 This patch adds support for attaching directly to iSCSI resources,
 both DISK/SBC and CDROM/MMC, without the need to make the devices
 visible to the host.

This adds the iscsi initiator feature to qemu (as you did to dbench),
right?


 The syntax to specify an iscsi resource is
iscsi://host[:port]/target-iqn-name/lun
 
 Example :
-drive file=iscsi://127.0.0.1:3260/iqn.ronnie.test/1
 
   -cdrom iscsi://127.0.0.1:3260/iqn.ronnie.test/2

Specifying lun looks odd (from the perspective of SCSI initiator)?

btw, how do we specify initiator's configuration (iscsi params, chap,
initiator iqn, etc)?



Re: [Qemu-devel] [PATCH 1/1] iscsi: add iSCSI block device support

2010-11-21 Thread ronnie sahlberg
On Mon, Nov 22, 2010 at 10:13 AM, FUJITA Tomonori
fujita.tomon...@lab.ntt.co.jp wrote:
 On Mon, 22 Nov 2010 09:17:50 +1100
 ronnie sahlberg ronniesahlb...@gmail.com wrote:

 This patch adds support for attaching directly to iSCSI resources,
 both DISK/SBC and CDROM/MMC, without the need to make the devices
 visible to the host.

 This adds the iscsi initiator feature to qemu (as you did to dbench),
 right?

Yes.



 The syntax to specify an iscsi resource is
    iscsi://host[:port]/target-iqn-name/lun

 Example :
    -drive file=iscsi://127.0.0.1:3260/iqn.ronnie.test/1

   -cdrom iscsi://127.0.0.1:3260/iqn.ronnie.test/2

 Specifying lun looks odd (from the perspective of SCSI initiator)?


Ok.
I can easily change this.
How would you prefer the iscsi:// url to look?


 btw, how do we specify initiator's configuration (iscsi params, chap,
 initiator iqn, etc)?


You can't yet. :-(

Right now they are hardcoded to reasonable defaults that I think will work
for most targets.
I want to add more of these features and make them settable from the
command line, but that will take a lot of time.

Hopefully having a basic/non-flexible implementation as a first step
may even encourage others to contribute to improving it until it is
full-featured.


regards
ronnie sahlberg



[Qemu-devel] [Bug 678363] [NEW] Swapping caps lock and control causes qemu confusion

2010-11-21 Thread Ralph Loader
Public bug reported:

Running Fedora14 [host], I have caps-lock and control swapped over in my
keyboard preferences.

Qemu doesn't cope very well with this; running an OS inside Qemu (again
fedora, suspect that it doesn't matter):

The physical caps-lock key [which the host uses as control] toggles
caps-lock on both press and release.

The physical control key [which the host uses as caps-lock], acts as
both a caps-lock and control key.

Qemu should either respect my keyboard layout or else ignore it
completely.

** Affects: qemu
 Importance: Undecided
 Status: New

-- 
Swapping caps lock and control causes qemu confusion
https://bugs.launchpad.net/bugs/678363
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.






Re: [Qemu-devel] [PATCH] stop the iteration when too many pages is transferred

2010-11-21 Thread Wen Congyang
On 2010-11-20 10:23, Anthony Liguori wrote:
 On 11/17/2010 08:32 PM, Wen Congyang wrote:
 When the total sent page size is larger than max_factor
 times the size of the guest OS's memory, stop the
 iteration.
 The default value of max_factor is 3.

 This is similar to XEN.


 Signed-off-by: Wen Congyang
   
 
 I'm strongly opposed to doing this. I think Xen gets this totally wrong.
 
 Migration is a contract. When you set the stop time, you're saying that
 you only want the guest to experience a fixed amount of downtime.
 Stopping the guest after some arbitrary number of iterations makes the
 downtime non-deterministic. With a very large guest, this could wreak
 havoc causing dropped networking connections, etc.
 

Thanks for your comment.
As a developer, I know the downtime.
But as a user, he does not know the downtime.
When he migrates a very large guest live without setting the stop time,
he is not saying "I want the guest to experience a fixed amount of
downtime"; he only wants to migrate the guest in a short time. The
migration should be done within some minutes, not run forever.

If we set the stop time too large, this could also wreak havoc causing
dropped networking connections, etc.

I think we can do it as the following:
1. If the user does not set the stop time, we should complete the
   migration in a short time.
2. If the user sets the stop time, we do it as now.
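A schematic of how the two policies could coexist in the sender's iteration loop. This is a sketch with hypothetical names, not qemu's actual migration code: when the user set a downtime budget we keep today's behaviour, otherwise a Xen-like volume cap bounds the total transfer:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Decide whether to enter the final stop-and-copy phase.
 * Hypothetical policy helper: stop either when the remaining dirty
 * data fits the user's downtime budget, or (if no budget was given)
 * once we have already sent max_factor times the guest RAM. */
static bool should_stop_iteration(uint64_t remaining_bytes,
                                  uint64_t bandwidth_bytes_per_s,
                                  uint64_t max_downtime_ns,
                                  uint64_t total_sent, uint64_t ram_size,
                                  unsigned max_factor)
{
    if (max_downtime_ns > 0) {
        /* Contract mode: projected downtime must fit the budget. */
        uint64_t eta_ns = remaining_bytes * 1000000000ull
                          / (bandwidth_bytes_per_s ? bandwidth_bytes_per_s : 1);
        return eta_ns <= max_downtime_ns;
    }
    /* Volume-cap mode: the Xen-like behaviour proposed above. */
    return total_sent >= (uint64_t)max_factor * ram_size;
}
```

With max_downtime_ns == 0 and max_factor == 3, iteration stops once roughly three times the guest RAM has been (re)sent, which bounds the total migration time at the cost of a non-deterministic final downtime, which is exactly the trade-off being debated here.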


 It's totally unsafe.
 
 If a management tool wants this behavior, they can set a timeout and
 explicitly stop the guest during the live migration. IMHO, such a
 management tool is not doing its job properly but it still can be
 implemented.
 
 Regards,
 
 Anthony Liguori
 




[Qemu-devel] snapshots structure

2010-11-21 Thread chandra shekar
Hi everyone, I have applied a patch and copied the snapshot out of the qcow2
image file, and I tried to open it with many editors but it does not open.
Is there any way that we can analyse the snapshot file, or any source which
explains the snapshot file format? Thanks.


[Qemu-devel] Weird Issue with IDE

2010-11-21 Thread Adhyas Avasthi
I am doing something specific and slightly different from normal use.
I would appreciate it if someone could give me a pointer as to what I
can do next. I am kinda stuck at this point.

My experiment is trying to move the PIIX4 chipset (which currently
resides on dev 1, func 0/1/2/3) to dev 2 (same function sequence). I
have modified the ACPI tables to report the IRQ mapping accordingly,
which is probably not useful for IDE anyway as it just uses the legacy
IRQ14/15 interrupts.

Unfortunately, my Linux kernel fails to find the IDE device when I do
this simple change. I have compiled the kernel to dump code from its
ATA driver and it does tell me that the interrupt handler is invoked
for IRQ14 and IRQ15, so this may not be an interrupt routing issue
(which is what I first assumed it to be). Is there some other code in
qemu that resides on PCI slot location of the IDE controller? My
current modification to change the PCI slot is very simple (I just
change the pci_create_simple_multifunction call in i440fx_init to put
that on devfn 0x10, instead of default).

I am comparing the dmesg dump for normal kernel boot (with default
PIIX4 on dev 1) vs modified qemu boot (which just lands me to
initramfs shell if I choose console to ttyS0, as it complains it could
not find the IDE disk). I tried to attach the console output and the
dmesg output for both cases, but the mail bounced back on me;
perhaps the list does not accept a 250K attachment.
In the latter case, the ports are disabled after recovery and validate routines
are called in ata driver code in the kernel.

I would appreciate it if someone could give any pointers on what may be
going wrong with this simple change, and what more may be required in qemu
to change the PCI slot of the PIIX4 controller. I assumed this would
be a simple change, but apparently it is not so simple :-(

-- 
Adhyas

Two types have compatible type if their types are the same.
    — ANSI C Standard, 3.1.2.6.




[Qemu-devel] Re: [PATCH 1/2] pci: Automatically patch PCI vendor id and device id in PCI ROM

2010-11-21 Thread Michael S. Tsirkin
On Tue, Oct 19, 2010 at 11:08:21PM +0200, Stefan Weil wrote:
 PCI devices with different vendor or device ids sometimes share
 the same rom code. Only the ids and the checksum
 differ in a boot rom for such devices.
 
 The i825xx ethernet controller family is a typical example
 which is implemented in hw/eepro100.c. It uses at least
 3 different device ids, so normally 3 boot roms would be needed.
 
 By automatically patching vendor id and device id (and the checksum)
 in qemu, all emulated family members can share the same boot rom.
 
 VGA bios roms are another example with different vendor and device ids.
 
 Only qemu's built-in default rom files will be patched.
 
 v2:
 * Patch also the vendor id (and remove the sanity check for vendor id).
 
 v3:
 * Don't patch a rom file when its name was set by the user.
   Thus we avoid modifications of unknown rom data.
 
 Cc: Gerd Hoffmann kra...@redhat.com
 Cc: Markus Armbruster arm...@redhat.com
 Cc: Michael S. Tsirkin m...@redhat.com
 Signed-off-by: Stefan Weil w...@mail.berlios.de

Looks safe enough for me.
Applied.

 ---
  hw/pci.c |   73 ++---
  1 files changed, 69 insertions(+), 4 deletions(-)
 
 diff --git a/hw/pci.c b/hw/pci.c
 index 1280d4d..74cbea5 100644
 --- a/hw/pci.c
 +++ b/hw/pci.c
 @@ -78,7 +78,7 @@ static struct BusInfo pci_bus_info = {
  
  static void pci_update_mappings(PCIDevice *d);
  static void pci_set_irq(void *opaque, int irq_num, int level);
 -static int pci_add_option_rom(PCIDevice *pdev);
 +static int pci_add_option_rom(PCIDevice *pdev, bool is_default_rom);
  static void pci_del_option_rom(PCIDevice *pdev);
  
  static uint16_t pci_default_sub_vendor_id = PCI_SUBVENDOR_ID_REDHAT_QUMRANET;
 @@ -1672,6 +1672,7 @@ static int pci_qdev_init(DeviceState *qdev, DeviceInfo 
 *base)
  PCIDeviceInfo *info = container_of(base, PCIDeviceInfo, qdev);
  PCIBus *bus;
  int devfn, rc;
 +bool is_default_rom;
  
  /* initialize cap_present for pci_is_express() and pci_config_size() */
  if (info->is_express) {
 @@ -1692,9 +1693,12 @@ static int pci_qdev_init(DeviceState *qdev, DeviceInfo 
 *base)
  }
  
  /* rom loading */
 -if (pci_dev->romfile == NULL && info->romfile != NULL)
 +is_default_rom = false;
 +if (pci_dev->romfile == NULL && info->romfile != NULL) {
 pci_dev->romfile = qemu_strdup(info->romfile);
 -pci_add_option_rom(pci_dev);
 +is_default_rom = true;
 +}
 +pci_add_option_rom(pci_dev, is_default_rom);
  
 if (qdev->hotplugged) {
 rc = bus->hotplug(bus->hotplug_qdev, pci_dev, 1);
 @@ -1797,8 +1801,64 @@ static void pci_map_option_rom(PCIDevice *pdev, int 
 region_num, pcibus_t addr, p
 cpu_register_physical_memory(addr, size, pdev->rom_offset);
  }
  
 +/* Patch the PCI vendor and device ids in a PCI rom image if necessary.
 +   This is needed for an option rom which is used for more than one device. 
 */
 +static void pci_patch_ids(PCIDevice *pdev, uint8_t *ptr, int size)
 +{
 +uint16_t vendor_id;
 +uint16_t device_id;
 +uint16_t rom_vendor_id;
 +uint16_t rom_device_id;
 +uint16_t rom_magic;
 +uint16_t pcir_offset;
 +uint8_t checksum;
 +
 +/* Words in rom data are little endian (like in PCI configuration),
 +   so they can be read / written with pci_get_word / pci_set_word. */
 +
 +/* Only a valid rom will be patched. */
 +rom_magic = pci_get_word(ptr);
 +if (rom_magic != 0xaa55) {
 +PCI_DPRINTF("Bad ROM magic %04x\n", rom_magic);
 +return;
 +}
 +pcir_offset = pci_get_word(ptr + 0x18);
 +if (pcir_offset + 8 >= size || memcmp(ptr + pcir_offset, "PCIR", 4)) {
 +PCI_DPRINTF("Bad PCIR offset 0x%x or signature\n", pcir_offset);
 +return;
 +}
 +
 +vendor_id = pci_get_word(pdev->config + PCI_VENDOR_ID);
 +device_id = pci_get_word(pdev->config + PCI_DEVICE_ID);
 +rom_vendor_id = pci_get_word(ptr + pcir_offset + 4);
 +rom_device_id = pci_get_word(ptr + pcir_offset + 6);
 +
 +PCI_DPRINTF("%s: ROM id %04x%04x / PCI id %04x%04x\n", pdev->romfile,
 +vendor_id, device_id, rom_vendor_id, rom_device_id);
 +
 +checksum = ptr[6];
 +
 +if (vendor_id != rom_vendor_id) {
 +/* Patch vendor id and checksum (at offset 6 for etherboot roms). */
 +checksum += (uint8_t)rom_vendor_id + (uint8_t)(rom_vendor_id >> 8);
 +checksum -= (uint8_t)vendor_id + (uint8_t)(vendor_id >> 8);
 +PCI_DPRINTF("ROM checksum %02x / %02x\n", ptr[6], checksum);
 +ptr[6] = checksum;
 +pci_set_word(ptr + pcir_offset + 4, vendor_id);
 +}
 +
 +if (device_id != rom_device_id) {
 +/* Patch device id and checksum (at offset 6 for etherboot roms). */
 +checksum += (uint8_t)rom_device_id + (uint8_t)(rom_device_id >> 8);
 +checksum -= (uint8_t)device_id + (uint8_t)(device_id >> 8);
 +PCI_DPRINTF("ROM checksum %02x / %02x\n", ptr[6], checksum);
 +

Re: [Qemu-devel] [PATCH v3] virtio-9p: fix build on !CONFIG_UTIMENSAT

2010-11-21 Thread Jes Sorensen
On 11/21/10 16:22, Anthony Liguori wrote:
 On 11/14/2010 08:15 PM, Hidetoshi Seto wrote:
 This patch introduces a fallback mechanism for old systems that do not
 support utimensat().  This fixes build failures with the following warnings:

 hw/virtio-9p-local.c: In function 'local_utimensat':
 hw/virtio-9p-local.c:479: warning: implicit declaration of function
 'utimensat'
 hw/virtio-9p-local.c:479: warning: nested extern declaration of
 'utimensat'

 and:

 hw/virtio-9p.c: In function 'v9fs_setattr_post_chmod':
 hw/virtio-9p.c:1410: error: 'UTIME_NOW' undeclared (first use in this
 function)
 hw/virtio-9p.c:1410: error: (Each undeclared identifier is reported
 only once
 hw/virtio-9p.c:1410: error: for each function it appears in.)
 hw/virtio-9p.c:1413: error: 'UTIME_OMIT' undeclared (first use in this
 function)
 hw/virtio-9p.c: In function 'v9fs_wstat_post_chmod':
 hw/virtio-9p.c:2905: error: 'UTIME_OMIT' undeclared (first use in this
 function)

 v3:
- Use better alternative handling for UTIME_NOW/OMIT
- Move qemu_utimensat() to cutils.c
 V2:
- Introduce qemu_utimensat()

 Signed-off-by: Hidetoshi Seto seto.hideto...@jp.fujitsu.com

 
 Applied.  Thanks.
 
 Regards,
 
 Anthony Liguori

Anthony,

Did you actually apply this one? I don't see it in the git tree.

However, if you did, that was a mistake: as I pointed out earlier,
qemu_utimensat() should not have gone into cutils.c, since it is inconsistent there.

Cheers,
Jes




Re: [Qemu-devel] [PATCH] stop the iteration when too many pages is transferred

2010-11-21 Thread KAMEZAWA Hiroyuki
On Fri, 19 Nov 2010 20:23:55 -0600
Anthony Liguori anth...@codemonkey.ws wrote:

 On 11/17/2010 08:32 PM, Wen Congyang wrote:
  When the total sent page size is larger than max_factor
  times the size of the guest OS's memory, stop the
  iteration.
  The default value of max_factor is 3.
 
  This is similar to XEN.
 
 
  Signed-off-by: Wen Congyang

 
 I'm strongly opposed to doing this. I think Xen gets this totally wrong.
 
 Migration is a contract. When you set the stop time, you're saying that
 you only want the guest to experience a fixed amount of downtime.
 Stopping the guest after some arbitrary number of iterations makes the
 downtime non-deterministic. With a very large guest, this could wreak
 havoc causing dropped networking connections, etc.
 
 It's totally unsafe.
 
 If a management tool wants this behavior, it can set a timeout and
 explicitly stop the guest during the live migration. IMHO, such a
 management tool is not doing its job properly, but the behavior can still
 be implemented that way.
 

Hmm, is there any information available to management tools about why a
migration failed, e.g. that it never converged because of newly dirtied
pages, or some such?

I'd be glad to know, before stopping the machine, that a cold migration is
likely to succeed even when the live migration failed by timeout. If the
failure happened because the network or target node is too busy, cold
migration will also be in trouble and we'll see longer downtime than
expected. I think it's helpful to show how the transfer went, e.g. "sent 3x
the guest's pages but failed".

Any ideas?

Thanks,
-Kame
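
The disagreement above (a downtime contract versus a Xen-style transfer cap) is easy to see in a toy pre-copy model. All numbers and parameter names here are invented for illustration:

```python
def precopy_migrate(guest_pages, dirty_per_round_ratio, max_downtime_pages,
                    max_factor=None, max_rounds=1000):
    """Toy pre-copy loop: each round resends the pages dirtied during
    the previous round. Returns (converged, total_pages_sent)."""
    dirty = guest_pages          # the first round sends all of RAM
    sent = 0
    for _ in range(max_rounds):
        if dirty <= max_downtime_pages:
            return True, sent    # remainder fits in the downtime budget
        sent += dirty
        if max_factor is not None and sent > max_factor * guest_pages:
            return False, sent   # Xen-style cap: stop iterating
        # pages the guest dirtied while this round was on the wire
        dirty = int(dirty * dirty_per_round_ratio)
    return False, sent

# A guest that re-dirties 10% of what each round sends converges quickly.
assert precopy_migrate(1000, 0.1, max_downtime_pages=5) == (True, 1110)

# One that re-dirties everything never converges; the cap fires at ~3x RAM.
ok, sent = precopy_migrate(1000, 1.0, max_downtime_pages=5, max_factor=3)
assert not ok and sent == 4000
```

The second case illustrates Anthony's objection: when the cap fires, the remaining dirty set (still 1000 pages here) bounds nothing, so stopping the guest at that point produces arbitrary downtime.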




Re: [Qemu-devel] [PATCH 12/16] scsi-generic: use plain ioctl

2010-11-21 Thread Hannes Reinecke
On 11/20/2010 02:25 AM, adq wrote:
 On 20 November 2010 00:41, Nicholas A. Bellinger n...@linux-iscsi.org wrote:
 On Fri, 2010-11-19 at 19:39 +0100, Christoph Hellwig wrote:
 On Thu, Nov 18, 2010 at 03:47:36PM +0100, Hannes Reinecke wrote:

 aio_ioctl is emulated anyway and currently broken.

 What's broken about it currently?

 Mm, I do not recall this being broken in the first place..?  There
 was a single issue with megasas+bdrv_aio_ioctl() with WinXP (that did
 not appear with lsi53c895a) that was mentioned on the list earlier in
 the year that required a patch to use bdev_ioctl(), but last I recall
 Hannes had already fixed this in recent megasas.c code w/ 32-bit MSFT
 guests.  Also, this is what I have been with scsi_generic.c and
 scsi_bsg.c into TCM_loop in my v0.12.5 megasas tree, and I am not
 observing any obvious issues with AIO IOCTLs for SG_IO/BSG into Linux
 guests.

 I will give AIO IOCTL ops a run with these on v2.6.37-rc2 lock-less KVM
 host mode - TCM_Loop to verify against the v0.12.5 megasas tree.
 
 Could this AIO ioctl breakage perhaps be the one I fixed here?
 http://web.archiveorange.com/archive/v/1XS1vROmfC7dN9wYxsmt
 
 The patch is definitely in the latest git... it works fine for me with
 my scsi-generic MMC command patches.

Ah. Yes, this looks like it. I'll give it a spin; I made the
original patch against an older git rev, so I might've missed that one.

Cheers,

Hannes
-- 
Dr. Hannes Reinecke   zSeries  Storage
h...@suse.de  +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Markus Rex, HRB 16746 (AG Nürnberg)



Re: [Qemu-devel] Re: [PATCH] PCI: Bus number from the bridge, not the device

2010-11-21 Thread Gleb Natapov
On Sun, Nov 21, 2010 at 10:39:31PM +0200, Michael S. Tsirkin wrote:
 On Sun, Nov 21, 2010 at 09:29:59PM +0200, Gleb Natapov wrote:
   We can control the bus numbers that the BIOS will assign to pci root.
  And what if you need to insert a device that sits behind a pci-to-pci bridge? Do
  you want to control that bus address too? And a bus number does not
  unambiguously describe a pci root, since two roots can have the same bus
  number just fine.
 
 They have to have different segment numbers.
 
AKA PCI domains AKA one more number assigned by a guest.

   It's probably required to make them stable anyway.
   
  Why?
 
 To avoid bus renumbering on reboot after you add a pci-to-pci bridge.
 
Why should qemu care?

--
Gleb.
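
For context, the slot-chain path format debated in this thread (Domain:00:Slot.Fn:...:Slot.Fn, walking from the root bridge down so that no guest-assigned bus numbers appear) could be derived roughly like this. The data model is invented for illustration, not QEMU's:

```python
class PCIDev:
    """Minimal stand-in for a PCI device or bridge: just what a path needs."""
    def __init__(self, slot, fn, parent_bridge=None):
        self.slot, self.fn, self.parent = slot, fn, parent_bridge

def device_path(dev, domain=0):
    """Walk parent bridges up to the root and emit the hierarchical
    slot.function chain; the only bus number used is the root's,
    hard-coded to 00, so the path is stable across guest enumeration."""
    hops = []
    while dev is not None:
        hops.append("%02x.%x" % (dev.slot, dev.fn))
        dev = dev.parent
    return ("%04x:00:" % domain) + ":".join(reversed(hops))

bridge = PCIDev(0x1e, 0)          # pci-to-pci bridge in slot 0x1e, function 0
nic = PCIDev(0x03, 0, bridge)     # a device behind the bridge
assert device_path(bridge) == "0000:00:1e.0"
assert device_path(nic) == "0000:00:1e.0:03.0"
```

This also shows Yamahata's point: each hop must carry the function number, because a pci-to-pci bridge is itself a PCI function.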



[Qemu-devel] Re: [PATCH v2 0/6] qdev reset refactoring and pci bus reset

2010-11-21 Thread Michael S. Tsirkin
On Fri, Nov 19, 2010 at 06:55:57PM +0900, Isaku Yamahata wrote:
 Here is v2. I updated the comments, and dropped the pci qdev reset patch.
 
 Patch description:
 The goal of this patch series is to implement secondary bus reset
 emulation in pci-to-pci bridge.
 At first, this patch series refactors qdev reset,
 and then cleans up pci bus reset. Lastly implements pci bridge control
 secondary bus reset bit.
 
 This patch series is for pci bus reset, which is ported
 from the following repo.
 git://repo.or.cz/qemu/aliguori.git qdev-refactor

I've put the series on my pci branch, tweaking patches 5 and 6 in the
process.  I'm out of time, so it's compile-tested only for now.
We do need to fix up secondary bus reset, so it's a needed bugfix.
Gerd, Anthony - any comments, especially on the qdev part?

 Changes v1 -> v2:
 - update comment
 
 Anthony Liguori (2):
   qbus: add functions to walk both devices and busses
   qdev: reset qdev along with qdev tree
 
 Isaku Yamahata (4):
   qdev: introduce reset call back for qbus level
   qdev: introduce a helper function which triggers reset from a given
 device
   pci: make use of qdev reset framework for pci bus reset.
   pci bridge: implement secondary bus reset
 
  hw/pci.c|   33 ++--
  hw/pci.h|3 ++
  hw/pci_bridge.c |   12 +++-
 hw/qdev.c   |   87 +--
  hw/qdev.h   |   18 +++
  vl.c|1 +
  6 files changed, 140 insertions(+), 14 deletions(-)
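
The walk this series introduces (reset a bus, then each device on it, then recurse into each device's child buses) is a plain pre-order traversal of the qdev tree. A toy model of that shape, with invented names rather than the qdev API:

```python
class Bus:
    def __init__(self, name):
        self.name = name
        self.devices = []

class Device:
    def __init__(self, name):
        self.name = name
        self.child_buses = []

def qbus_reset_all(bus, order):
    """Pre-order walk: visit the bus itself, then each device on it,
    then recurse into the device's secondary buses (a sketch of the
    'qbus: add functions to walk both devices and busses' idea)."""
    order.append(("bus", bus.name))
    for dev in bus.devices:
        order.append(("dev", dev.name))
        for child in dev.child_buses:
            qbus_reset_all(child, order)

root = Bus("pci.0")
bridge = Device("pci-bridge")
secondary = Bus("pci.1")
nic = Device("e1000")
root.devices.append(bridge)
bridge.child_buses.append(secondary)
secondary.devices.append(nic)

order = []
qbus_reset_all(root, order)
assert order == [("bus", "pci.0"), ("dev", "pci-bridge"),
                 ("bus", "pci.1"), ("dev", "e1000")]
```

With per-node reset callbacks attached at each visit, a secondary bus reset becomes just this walk started from the bridge's secondary bus instead of the root.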