[Qemu-devel] [PATCH] ide: Add model=s option, allowing the user to override the default disk model name "QEMU HARDDISK"
Some Linux distributions use the /dev/disk/by-id/scsi-SATA_name-of-disk-model_serial addressing scheme when referring to partitions in /etc/fstab and elsewhere. This causes problems when starting a disk image taken from an existing physical server under qemu, because when running under qemu the name-of-disk-model is always "QEMU HARDDISK". This patch introduces a model=s option which, in combination with the existing serial=s option, can be used to fake the disk the operating system was previously on, allowing the OS to boot properly.

Cc: kw...@redhat.com
Signed-off-by: Floris Bos d...@noc-ps.com
---
 blockdev.c        |  4
 blockdev.h        |  2 ++
 hw/ide/core.c     | 27 ++-
 hw/ide/internal.h |  4 +++-
 hw/ide/qdev.c     | 17 +++--
 qemu-config.c     |  4
 qemu-options.hx   |  4 +++-
 7 files changed, 53 insertions(+), 9 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index d78aa51..66fcc14 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -277,6 +277,7 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
     const char *file = NULL;
     char devname[128];
     const char *serial;
+    const char *model;
     const char *mediastr = "";
     BlockInterfaceType type;
     enum { MEDIA_DISK, MEDIA_CDROM } media;
@@ -313,6 +314,7 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
     file = qemu_opt_get(opts, "file");
     serial = qemu_opt_get(opts, "serial");
+    model = qemu_opt_get(opts, "model");

     if ((buf = qemu_opt_get(opts, "if")) != NULL) {
         pstrcpy(devname, sizeof(devname), buf);
@@ -534,6 +536,8 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
     dinfo->refcount = 1;
     if (serial)
         strncpy(dinfo->serial, serial, sizeof(dinfo->serial) - 1);
+    if (model)
+        strncpy(dinfo->model, model, sizeof(dinfo->model) - 1);
     QTAILQ_INSERT_TAIL(&drives, dinfo, next);

     bdrv_set_on_error(dinfo->bdrv, on_read_error, on_write_error);
diff --git a/blockdev.h b/blockdev.h
index 260e16b..21eb4b5 100644
--- a/blockdev.h
+++ b/blockdev.h
@@ -18,6 +18,7 @@ void blockdev_mark_auto_del(BlockDriverState *bs);
 void blockdev_auto_del(BlockDriverState *bs);

 #define BLOCK_SERIAL_STRLEN 20
+#define BLOCK_MODEL_STRLEN 40

 typedef enum {
     IF_DEFAULT = -1,            /* for use with drive_add() only */
@@ -37,6 +38,7 @@ struct DriveInfo {
     int media_cd;
     QemuOpts *opts;
     char serial[BLOCK_SERIAL_STRLEN + 1];
+    char model[BLOCK_MODEL_STRLEN + 1];
     QTAILQ_ENTRY(DriveInfo) next;
     int refcount;
 };
diff --git a/hw/ide/core.c b/hw/ide/core.c
index 4d568ac..2a38030 100644
--- a/hw/ide/core.c
+++ b/hw/ide/core.c
@@ -101,7 +101,7 @@ static void ide_identify(IDEState *s)
     put_le16(p + 21, 512); /* cache size in sectors */
     put_le16(p + 22, 4); /* ecc bytes */
     padstr((char *)(p + 23), s->version, 8); /* firmware version */
-    padstr((char *)(p + 27), "QEMU HARDDISK", 40); /* model */
+    padstr((char *)(p + 27), s->drive_model_str, 40); /* model */
 #if MAX_MULT_SECTORS > 1
     put_le16(p + 47, 0x8000 | MAX_MULT_SECTORS);
 #endif
@@ -189,7 +189,7 @@ static void ide_atapi_identify(IDEState *s)
     put_le16(p + 21, 512); /* cache size in sectors */
     put_le16(p + 22, 4); /* ecc bytes */
     padstr((char *)(p + 23), s->version, 8); /* firmware version */
-    padstr((char *)(p + 27), "QEMU DVD-ROM", 40); /* model */
+    padstr((char *)(p + 27), s->drive_model_str, 40); /* model */
     put_le16(p + 48, 1); /* dword I/O (XXX: should not be set on CDROM) */
 #ifdef USE_DMA_CDROM
     put_le16(p + 49, 1 << 9 | 1 << 8); /* DMA and LBA supported */
@@ -246,7 +246,7 @@ static void ide_cfata_identify(IDEState *s)
     padstr((char *)(p + 10), s->drive_serial_str, 20); /* serial number */
     put_le16(p + 22, 0x0004); /* ECC bytes */
     padstr((char *)(p + 23), s->version, 8); /* Firmware Revision */
-    padstr((char *)(p + 27), "QEMU MICRODRIVE", 40); /* Model number */
+    padstr((char *)(p + 27), s->drive_model_str, 40); /* Model number */
 #if MAX_MULT_SECTORS > 1
     put_le16(p + 47, 0x8000 | MAX_MULT_SECTORS);
 #else
@@ -1834,7 +1834,7 @@ static const BlockDevOps ide_cd_block_ops = {
 };

 int ide_init_drive(IDEState *s, BlockDriverState *bs, IDEDriveKind kind,
-                   const char *version, const char *serial)
+                   const char *version, const char *serial, const char *model)
 {
     int cylinders, heads, secs;
     uint64_t nb_sectors;
@@ -1885,6 +1885,22 @@ int ide_init_drive(IDEState *s, BlockDriverState *bs, IDEDriveKind kind,
         snprintf(s->drive_serial_str, sizeof(s->drive_serial_str),
                  "QM%05d", s->drive_serial);
     }
+    if (model) {
+        strncpy(s->drive_model_str, model, sizeof(s->drive_model_str));
+    } else {
+        switch (kind) {
+        case IDE_CD:
+            strcpy(s->drive_model_str, "QEMU DVD-ROM");
+            break;
+        case IDE_CFATA:
[Qemu-devel] How to trace all the guest OS instructions and the micro-ops
Hi! I am doing some research based on QEMU. Does anyone know how to get (trace) all the instructions of the guest OS, and all the intermediate micro-ops? (Not in the 0.9.1 version.)

Additionally, how can I get the whole memory, or each process's memory data, of the guest OS?

I really appreciate your help.
Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt
----- Original Message -----
From: Anthony Liguori anth...@codemonkey.ws
To: Daniel P. Berrange berra...@redhat.com, libvir-l...@redhat.com, qemu-devel@nongnu.org, Gleb Natapov g...@redhat.com, Jiri Denemark jdene...@redhat.com, Avi Kivity a...@redhat.com, a...@ovirt.org
Sent: Saturday, March 10, 2012 1:24:47 PM
Subject: Re: [libvirt] [Qemu-devel] Modern CPU models cannot be used with libvirt

On 03/10/2012 09:58 AM, Eduardo Habkost wrote:

On Sat, Mar 10, 2012 at 12:42:46PM +0000, Daniel P. Berrange wrote:

I could have sworn we had this discussion a year ago or so, and had decided that the default CPU models would be in something like /usr/share/qemu/cpu-x86_64.conf and loaded regardless of the -nodefconfig setting. /etc/qemu/target-x86_64.conf would be solely for end user configuration changes, not for QEMU builtin defaults. But looking at the code in QEMU, it doesn't seem we ever implemented this?

Arrrgggh. It seems this was implemented as a patch in the RHEL-6 qemu RPMs but, contrary to our normal RHEL development practice, it was not based on a cherry-pick of an upstream patch :-( For the sake of reference, I'm attaching the two patches from the RHEL6 source RPM that do what I'm describing. NB, I'm not necessarily advocating these patches for upstream.

I still maintain that libvirt should write out a config file containing the exact CPU model description it desires and specify that with -readconfig. The end result would be identical from QEMU's POV and it would avoid playing games with QEMU's config loading code.

I agree that libvirt should just write the config somewhere. The problem here is to define: 1) what information should be mandatory in that config data; 2) who should be responsible for testing and maintaining sane defaults (and where they should be maintained). The current cpudef definitions are simply too low-level to require them to be written from scratch.
Lots of testing has to be done to make sure we have working combinations of CPUID bits defined, so they can be used as defaults or templates. Not facilitating reuse of those tested defaults/templates by libvirt is a duplication of effort. Really, if we expect libvirt to define all the CPU bits from scratch in a config file, we could as well just expect libvirt to open /dev/kvm itself and call all the CPUID setup ioctl()s itself. That's how low-level some of the cpudef bits are.

Let's step back here. Why are you writing these patches? It's probably not because you have a desire to say -cpu Westmere when you run QEMU on your laptop. I'd wager to say that no human has ever done that, or that if they had, they did so by accident because they read documentation and thought they had to. Humans probably do one of two things: 1) no cpu option, or 2) -cpu host.

So then why are you introducing -cpu Westmere? Because ovirt-engine has a concept of datacenters and the entire datacenter has to use a compatible CPU model to allow migration compatibility. Today, the interface that ovirt-engine exposes is based on CPU codenames. Presumably ovirt-engine wants to add a Westmere CPU group and as such has levied a requirement down the stack to QEMU. But there's no intrinsic reason why it uses CPU model names.

VMware doesn't do this. It has a concept of compatibility groups[1].

s/has/had -- that was back in the 3.5 days and it was hit and miss; it relied on a user putting the same kind of machines in the resource groups and often caused issues. Now they've moved up to a model very similar to what we're using: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003212

oVirt could just as well define compatibility groups like GroupA, GroupB, GroupC, etc., and then the -cpu option we would be discussing would be -cpu GroupA. This is why it's a configuration option and not built into QEMU.
It's a user interface, and as such it should be defined at a higher level. Perhaps it really should be VDSM that is providing the model info to libvirt? Then they can add whatever groups they want, whenever they want, as long as we have the appropriate feature bits.

I think the real (model specific) names are the best place to start. But if a user wants to override those with their own specific types then it should be allowed.

P.S. I spent 30 minutes the other day helping a user who was attempting to figure out whether his processor was a Conroe, Penryn, etc. Making this determination is fairly difficult and it makes me wonder whether having CPU code names is even the best interface for oVirt.

I think that was more about a bad choice in UI than a bad choice in the architecture. It should be made clear to a user what kind of machine they have and what its capabilities are. This bug was born out of that issue.
Re: [Qemu-devel] [PATCH 1/2] Support @documentencoding in scripts/texi2pod.pl
Ping? It's been more than a month since this patch was posted. Maybe it is a good candidate for the -trivial queue?

Thanks,
/mjt

On 02.02.2012 18:16, Michael Tokarev wrote:

Currently our texi2pod ignores @documentencoding even if it is set properly in *.texi files. This results in mojibake in documents generated from qemu.pod (which is generated from qemu-doc.texi by texi2pod), because the rest of the tools assume ASCII encoding. This patch recognizes the first @documentencoding in the input and places it at the beginning of the output as an =encoding directive.

Signed-Off-By: Michael Tokarev m...@tls.msk.ru
---
 scripts/texi2pod.pl | 9 +
 1 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/scripts/texi2pod.pl b/scripts/texi2pod.pl
index 9ed056a..94097fb 100755
--- a/scripts/texi2pod.pl
+++ b/scripts/texi2pod.pl
@@ -36,6 +36,7 @@ $fnno = 1;
 $inf = "";
 $ibase = "";
 @ipath = ();
+$encoding = undef;

 while ($_ = shift) {
     if (/^-D(.*)$/) {
@@ -97,6 +98,12 @@ while($inf) {
     /^\@setfilename\s+([^.]+)/ and $fn = $1, next;
     /^\@settitle\s+([^.]+)/ and $tl = postprocess($1), next;

+    # Look for document encoding
+    /^\@documentencoding\s+([^.]+)/ and do {
+        $encoding = $1 unless defined $encoding;
+        next;
+    };
+
     # Identify a man title but keep only the one we are interested in.
     /^\@c\s+man\s+title\s+([A-Za-z0-9-]+)\s+(.+)/ and do {
         if (exists $defs{$1}) {
@@ -336,6 +343,8 @@
 $inf = pop @instack;

 die "No filename or title\n" unless defined $fn && defined $tl;

+print "=encoding $encoding\n\n" if defined $encoding;
+
 $sects{NAME} = "$fn \- $tl\n";

 $sects{FOOTNOTES} .= "=back\n" if exists $sects{FOOTNOTES};
Re: [Qemu-devel] [PATCH 2/2] Run pod2man with --utf8 option to enable utf8 in manpages
Ping? It's been more than a month since this patch was posted. Maybe it is a good candidate for the -trivial queue?

Thanks,
/mjt

On 02.02.2012 18:16, Michael Tokarev wrote:

This option makes no difference for manpages which contain only ASCII chars. But for manpages with actual UTF-8 characters (the qemu docs contain these), this change allows real characters to be seen instead of mojibake or substitutes.

Signed-off-By: Michael Tokarev m...@tls.msk.ru
---
 Makefile | 9 +
 1 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/Makefile b/Makefile
index 2560b59..737cda2 100644
--- a/Makefile
+++ b/Makefile
@@ -337,28 +337,29 @@ QMP/qmp-commands.txt: $(SRC_PATH)/qmp-commands.hx
 qemu-img-cmds.texi: $(SRC_PATH)/qemu-img-cmds.hx
 	$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -t $< > $@,"  GEN   $@")

+POD2MAN = pod2man --utf8

 qemu.1: qemu-doc.texi qemu-options.texi qemu-monitor.texi
 	$(call quiet-command, \
 	  perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu.pod && \
-	  pod2man --section=1 --center=" " --release=" " qemu.pod > $@, \
+	  $(POD2MAN) --section=1 --center=" " --release=" " qemu.pod > $@, \
 	  "  GEN   $@")

 qemu-img.1: qemu-img.texi qemu-img-cmds.texi
 	$(call quiet-command, \
 	  perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu-img.pod && \
-	  pod2man --section=1 --center=" " --release=" " qemu-img.pod > $@, \
+	  $(POD2MAN) --section=1 --center=" " --release=" " qemu-img.pod > $@, \
 	  "  GEN   $@")

 fsdev/virtfs-proxy-helper.1: fsdev/virtfs-proxy-helper.texi
 	$(call quiet-command, \
 	  perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< fsdev/virtfs-proxy-helper.pod && \
-	  pod2man --section=1 --center=" " --release=" " fsdev/virtfs-proxy-helper.pod > $@, \
+	  $(POD2MAN) --section=1 --center=" " --release=" " fsdev/virtfs-proxy-helper.pod > $@, \
 	  "  GEN   $@")

 qemu-nbd.8: qemu-nbd.texi
 	$(call quiet-command, \
 	  perl -Ww -- $(SRC_PATH)/scripts/texi2pod.pl $< qemu-nbd.pod && \
-	  pod2man --section=8 --center=" " --release=" " qemu-nbd.pod > $@, \
+	  $(POD2MAN) --section=8 --center=" " --release=" " qemu-nbd.pod > $@, \
 	  "  GEN   $@")

 dvi: qemu-doc.dvi qemu-tech.dvi
Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt
On Sat, Mar 10, 2012 at 12:24 PM, Anthony Liguori anth...@codemonkey.ws wrote:

On 03/10/2012 09:58 AM, Eduardo Habkost wrote:

On Sat, Mar 10, 2012 at 12:42:46PM +0000, Daniel P. Berrange wrote:

I could have sworn we had this discussion a year ago or so, and had decided that the default CPU models would be in something like /usr/share/qemu/cpu-x86_64.conf and loaded regardless of the -nodefconfig setting. /etc/qemu/target-x86_64.conf would be solely for end user configuration changes, not for QEMU builtin defaults. But looking at the code in QEMU, it doesn't seem we ever implemented this?

Arrrgggh. It seems this was implemented as a patch in the RHEL-6 qemu RPMs but, contrary to our normal RHEL development practice, it was not based on a cherry-pick of an upstream patch :-( For the sake of reference, I'm attaching the two patches from the RHEL6 source RPM that do what I'm describing. NB, I'm not necessarily advocating these patches for upstream.

I still maintain that libvirt should write out a config file containing the exact CPU model description it desires and specify that with -readconfig. The end result would be identical from QEMU's POV and it would avoid playing games with QEMU's config loading code.

I agree that libvirt should just write the config somewhere. The problem here is to define: 1) what information should be mandatory in that config data; 2) who should be responsible for testing and maintaining sane defaults (and where they should be maintained). The current cpudef definitions are simply too low-level to require them to be written from scratch. Lots of testing has to be done to make sure we have working combinations of CPUID bits defined, so they can be used as defaults or templates. Not facilitating reuse of those tested defaults/templates by libvirt is a duplication of effort. Really, if we expect libvirt to define all the CPU bits from scratch in a config file, we could as well just expect libvirt to open /dev/kvm itself and call all the CPUID setup ioctl()s itself.
That's how low-level some of the cpudef bits are.

Let's step back here. Why are you writing these patches? It's probably not because you have a desire to say -cpu Westmere when you run QEMU on your laptop. I'd wager to say that no human has ever done that, or that if they had, they did so by accident because they read documentation and thought they had to. Humans probably do one of two things: 1) no cpu option, or 2) -cpu host.

So then why are you introducing -cpu Westmere? Because ovirt-engine has a concept of datacenters and the entire datacenter has to use a compatible CPU model to allow migration compatibility. Today, the interface that ovirt-engine exposes is based on CPU codenames. Presumably ovirt-engine wants to add a Westmere CPU group and as such has levied a requirement down the stack to QEMU. But there's no intrinsic reason why it uses CPU model names. VMware doesn't do this. It has a concept of compatibility groups[1].

oVirt could just as well define compatibility groups like GroupA, GroupB, GroupC, etc., and then the -cpu option we would be discussing would be -cpu GroupA. This is why it's a configuration option and not built into QEMU. It's a user interface, and as such it should be defined at a higher level. Perhaps it really should be VDSM that is providing the model info to libvirt? Then they can add whatever groups they want, whenever they want, as long as we have the appropriate feature bits.

P.S. I spent 30 minutes the other day helping a user who was attempting to figure out whether his processor was a Conroe, Penryn, etc. Making this determination is fairly difficult and it makes me wonder whether having CPU code names is even the best interface for oVirt.

[1] http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1991

Regards,

Anthony Liguori

FWIW, as a user this would be a good improvement.
As it stands right now, when a cluster of machines is established as being redundant migratable machines for each other, I must do the following for each machine:

virsh -c qemu://machine/system capabilities | xpath /capabilities/host/cpu > machine-cpu.xml

Once I have that data I combine the results together and use virsh cpu-baseline, which is a handy addition compared to the past of doing it manually, but still not optimal. This gives me a model which is mostly meaningless and uninteresting to me, but I know all the guests must use Penryn, for example. If oVirt, and by extension libvirt, let me know that guest X is running on CPU-A, I would know I could migrate it to any other machine supporting CPU-A or CPU-B (assuming B is a superset of A).

--
Doug Goldstein
Re: [Qemu-devel] Support for Nested Paging
Hi,

Thanks for your reply. I am a graduate student at Stony Brook University and am working on the design and implementation of hypervisors for the OSCAR lab (http://oscar.cs.stonybrook.edu/). Currently I am working on implementing emulation of Nested Page Tables in QEMU as present in AMD-V architectures. I would like to know if the QEMU team would be interested in having a patch which emulates Nested Page Tables and other hardware virtualization techniques supported by the AMD-V or Intel VT architectures. I would love to help with maintenance of my patch or any other issues in QEMU in the future as well. I would also like to know if there is any chance that this can become a part of Google Summer of Code 2012.

Thanks,
Ankur

2012/3/3 陳韋任 che...@iis.sinica.edu.tw

Does QEMU emulate the Nested Page Tables implemented by the AMD-V architecture or Intel VT? I think the answer is no.

Also I am trying to understand the QEMU source with an objective of participating in the Google Summer of Code and contributing to QEMU. I have tried tracing through the code, but it seems this link http://repo.or.cz/w/qemu/stefanha.git/blob_plain/refs/heads/tracing:/docs/tracing.txt is not updated, because many of the options do not work here. I would be very happy if someone could provide me links to a good starting point for understanding the QEMU source code.

The tracing you mentioned is not intended to help with reading the code. Depending on which part of QEMU you're trying to play with, you need some background knowledge of it. See "Getting to know the code" on the QEMU wiki [1]. And the slides mentioned in the mail below are a good start.

http://www.mail-archive.com/qemu-devel@nongnu.org/msg99864.html

HTH,
chenwj

[1] http://wiki.qemu.org/Documentation/GettingStartedDevelopers#Getting_to_know_the_code

--
Wei-Ren Chen (陳韋任)
Computer Systems Lab, Institute of Information Science, Academia Sinica, Taiwan (R.O.C.)
Tel: 886-2-2788-3799 #1667
Homepage: http://people.cs.nctu.edu.tw/~chenwj
[Qemu-devel] [PATCH 1/2] ioport: use INT64_MAX for IO ranges
Expression UINT64_MAX + 1 will make the range bigger than what can be represented with a 64 bit type. This would trigger an assert in int128_get64() after the next patch.

Signed-off-by: Blue Swirl blauwir...@gmail.com
---
 ioport.c | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/ioport.c b/ioport.c
index 78a3b89..6e4ca0d 100644
--- a/ioport.c
+++ b/ioport.c
@@ -385,7 +385,7 @@ static void portio_list_add_1(PortioList *piolist,
      * rather than an offset relative to to start + off_low.  */
     memory_region_init_io(region, ops, piolist->opaque, piolist->name,
-                          UINT64_MAX);
+                          INT64_MAX);
     memory_region_init_alias(alias, piolist->name, region,
                              start + off_low, off_high - off_low);
     memory_region_add_subregion(piolist->address_space,
--
1.7.9
[Qemu-devel] [PATCH 2/2] memory: print aliased IO ranges in info mtree
Print also I/O ports behind bridges and other aliases.

Signed-off-by: Blue Swirl blauwir...@gmail.com
---
 memory.c | 13 +
 1 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/memory.c b/memory.c
index 4c3dc49..0201392 100644
--- a/memory.c
+++ b/memory.c
@@ -1639,7 +1639,20 @@ void mtree_info(fprintf_function mon_printf, void *f)
     if (address_space_io.root &&
         !QTAILQ_EMPTY(&address_space_io.root->subregions)) {
         QTAILQ_INIT(&ml_head);
+        mon_printf(f, "I/O\n");
         mtree_print_mr(mon_printf, f, address_space_io.root, 0, 0, &ml_head);
+
+        /* print aliased I/O regions */
+        QTAILQ_FOREACH(ml, &ml_head, queue) {
+            if (!ml->printed) {
+                mon_printf(f, "%s\n", ml->mr->name);
+                mtree_print_mr(mon_printf, f, ml->mr, 0, 0, &ml_head);
+            }
+        }
+    }
+
+    QTAILQ_FOREACH_SAFE(ml, &ml_head, queue, ml2) {
+        g_free(ml);
     }
 }
--
1.7.9
[Qemu-devel] [PATCH 1/3] apb: use normal PCI device header for PBM device
PBM has a normal PCI device header, fix.

Signed-off-by: Blue Swirl blauwir...@gmail.com
---
 hw/apb_pci.c | 1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/hw/apb_pci.c b/hw/apb_pci.c
index 1d25da8..b10f31e 100644
--- a/hw/apb_pci.c
+++ b/hw/apb_pci.c
@@ -444,7 +444,6 @@ static void pbm_pci_host_class_init(ObjectClass *klass, void *data)
     k->vendor_id = PCI_VENDOR_ID_SUN;
     k->device_id = PCI_DEVICE_ID_SUN_SABRE;
     k->class_id = PCI_CLASS_BRIDGE_HOST;
-    k->is_bridge = 1;
 }

 static TypeInfo pbm_pci_host_info = {
--
1.7.9
[Qemu-devel] [PATCH 2/3] sparc: reset CPU state on reset
Not strictly accurate for Sparc64, but avoids confusing Valgrind.

Reported-by: Michael S. Tsirkin m...@redhat.com
Signed-off-by: Blue Swirl blauwir...@gmail.com
---
 target-sparc/cpu.h      | 5 +++--
 target-sparc/cpu_init.c | 1 +
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/target-sparc/cpu.h b/target-sparc/cpu.h
index 38a7074..b81779b 100644
--- a/target-sparc/cpu.h
+++ b/target-sparc/cpu.h
@@ -413,14 +413,15 @@ typedef struct CPUSPARCState {
 #if !defined(TARGET_SPARC64)
     int psref;    /* enable fpu */
 #endif
-    target_ulong version;
     int interrupt_index;
-    uint32_t nwindows;
     /* NOTE: we allow 8 more registers to handle wrapping */
     target_ulong regbase[MAX_NWINDOWS * 16 + 8];

     CPU_COMMON

+    target_ulong version;
+    uint32_t nwindows;
+
     /* MMU regs */
 #if defined(TARGET_SPARC64)
     uint64_t lsu;
diff --git a/target-sparc/cpu_init.c b/target-sparc/cpu_init.c
index c7269b5..bd4ab6a 100644
--- a/target-sparc/cpu_init.c
+++ b/target-sparc/cpu_init.c
@@ -30,6 +30,7 @@ void cpu_reset(CPUSPARCState *env)
         log_cpu_state(env, 0);
     }

+    memset(env, 0, offsetof(CPUSPARCState, breakpoints));
     tlb_flush(env, 1);
     env->cwp = 0;
 #ifndef TARGET_SPARC64
--
1.7.9
[Qemu-devel] [PATCH 3/3] sparc64: implement PCI and ISA irqs
Generate correct trap for external interrupts. Map PCI and ISA IRQs to
RIC/UltraSPARC-IIi interrupt vectors.

Signed-off-by: Blue Swirl <blauwir...@gmail.com>
---
 hw/apb_pci.c               | 48 +++++--
 hw/apb_pci.h               |  3 +-
 hw/sun4u.c                 | 57 ++--
 target-sparc/cpu.h         |  3 ++
 target-sparc/ldst_helper.c | 20 ++
 5 files changed, 93 insertions(+), 38 deletions(-)

diff --git a/hw/apb_pci.c b/hw/apb_pci.c
index b10f31e..7e28808 100644
--- a/hw/apb_pci.c
+++ b/hw/apb_pci.c
@@ -66,6 +66,8 @@
 do { printf("APB: " fmt , ## __VA_ARGS__); } while (0)
 #define RESET_WCMASK 0x9800
 #define RESET_WMASK 0x6000
 
+#define MAX_IVEC 0x30
+
 typedef struct APBState {
     SysBusDevice busdev;
     PCIBus *bus;
@@ -77,7 +79,8 @@ typedef struct APBState {
     uint32_t pci_control[16];
     uint32_t pci_irq_map[8];
     uint32_t obio_irq_map[32];
-    qemu_irq pci_irqs[32];
+    qemu_irq *pbm_irqs;
+    qemu_irq *ivec_irqs;
     uint32_t reset_control;
     unsigned int nr_resets;
 } APBState;
@@ -87,7 +90,7 @@ static void apb_config_writel (void *opaque, target_phys_addr_t addr,
 {
     APBState *s = opaque;
 
-    APB_DPRINTF("%s: addr " TARGET_FMT_lx " val %x\n", __func__, addr, val);
+    APB_DPRINTF("%s: addr " TARGET_FMT_lx " val %" PRIx64 "\n", __func__, addr, val);
 
     switch (addr & 0x) {
     case 0x30 ... 0x4f: /* DMA error registers */
@@ -104,6 +107,12 @@ static void apb_config_writel (void *opaque, target_phys_addr_t addr,
             s->pci_irq_map[(addr & 0x3f) >> 3] |= val & ~PBM_PCI_IMR_MASK;
         }
         break;
+    case 0x1000 ... 0x1080: /* OBIO interrupt control */
+        if (addr & 4) {
+            s->obio_irq_map[(addr & 0xff) >> 3] &= PBM_PCI_IMR_MASK;
+            s->obio_irq_map[(addr & 0xff) >> 3] |= val & ~PBM_PCI_IMR_MASK;
+        }
+        break;
     case 0x2000 ... 0x202f: /* PCI control */
         s->pci_control[(addr & 0x3f) >> 2] = val;
         break;
@@ -154,6 +163,13 @@ static uint64_t apb_config_readl (void *opaque,
             val = 0;
         }
         break;
+    case 0x1000 ... 0x1080: /* OBIO interrupt control */
+        if (addr & 4) {
+            val = s->obio_irq_map[(addr & 0xff) >> 3];
+        } else {
+            val = 0;
+        }
+        break;
     case 0x2000 ... 0x202f: /* PCI control */
         val = s->pci_control[(addr & 0x3f) >> 2];
         break;
@@ -190,7 +206,7 @@ static void apb_pci_config_write(void *opaque, target_phys_addr_t addr,
     APBState *s = opaque;
 
     val = qemu_bswap_len(val, size);
-    APB_DPRINTF("%s: addr " TARGET_FMT_lx " val %x\n", __func__, addr, val);
+    APB_DPRINTF("%s: addr " TARGET_FMT_lx " val %" PRIx64 "\n", __func__, addr, val);
     pci_data_write(s->bus, addr, val, size);
 }
@@ -280,10 +296,19 @@ static void pci_apb_set_irq(void *opaque, int irq_num, int level)
     if (irq_num < 32) {
         if (s->pci_irq_map[irq_num >> 2] & PBM_PCI_IMR_ENABLED) {
             APB_DPRINTF("%s: set irq %d level %d\n", __func__, irq_num, level);
-            qemu_set_irq(s->pci_irqs[irq_num], level);
+            qemu_set_irq(s->ivec_irqs[irq_num], level);
+        } else {
+            APB_DPRINTF("%s: not enabled: lower irq %d\n", __func__, irq_num);
+            qemu_irq_lower(s->ivec_irqs[irq_num]);
+        }
+    } else {
+        /* OBIO IRQ map onto the next 16 INO.  */
+        if (s->obio_irq_map[irq_num - 32] & PBM_PCI_IMR_ENABLED) {
+            APB_DPRINTF("%s: set irq %d level %d\n", __func__, irq_num, level);
+            qemu_set_irq(s->ivec_irqs[irq_num], level);
         } else {
             APB_DPRINTF("%s: not enabled: lower irq %d\n", __func__, irq_num);
-            qemu_irq_lower(s->pci_irqs[irq_num]);
+            qemu_irq_lower(s->ivec_irqs[irq_num]);
         }
     }
 }
@@ -316,12 +341,12 @@ static int apb_pci_bridge_initfn(PCIDevice *dev)
 
 PCIBus *pci_apb_init(target_phys_addr_t special_base,
                      target_phys_addr_t mem_base,
-                     qemu_irq *pic, PCIBus **bus2, PCIBus **bus3)
+                     qemu_irq *ivec_irqs, PCIBus **bus2, PCIBus **bus3,
+                     qemu_irq **pbm_irqs)
 {
     DeviceState *dev;
     SysBusDevice *s;
     APBState *d;
-    unsigned int i;
     PCIDevice *pci_dev;
     PCIBridge *br;
@@ -346,9 +371,8 @@ PCIBus *pci_apb_init(target_phys_addr_t special_base,
                              get_system_io(), 0, 32);
 
-    for (i = 0; i < 32; i++) {
-        sysbus_connect_irq(s, i, pic[i]);
-    }
+    *pbm_irqs = d->pbm_irqs;
+    d->ivec_irqs = ivec_irqs;
 
     pci_create_simple(d->bus, 0, "pbm-pci");
@@ -402,9 +426,7 @@ static int pci_pbm_init_device(SysBusDevice *dev)
     for (i = 0; i < 8; i++) {
         s->pci_irq_map[i] = (0x1f << 6) | (i << 2);
     }
-    for (i = 0; i < 32; i++) {
-        sysbus_init_irq(dev, s->pci_irqs[i]);
-    }
+    s->pbm_irqs = qemu_allocate_irqs(pci_apb_set_irq, s, MAX_IVEC);
 
     /* apb_config */
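The masking logic this patch adds to pci_apb_set_irq() can be modeled in isolation: the first 32 interrupt numbers are PCI and gated by pci_irq_map, higher numbers are OBIO and gated by obio_irq_map. A minimal sketch (the IMR_ENABLED value and function name here are stand-ins, not QEMU's real definitions):

```c
#include <assert.h>
#include <stdint.h>

#define IMR_ENABLED 0x20u   /* stand-in for PBM_PCI_IMR_ENABLED */

/* Simplified model of the dispatch in pci_apb_set_irq(): returns the
 * level driven onto the ivec line; 0 means the line is lowered because
 * the source is masked in the relevant interrupt map. */
static int ivec_level(const uint32_t *pci_irq_map,
                      const uint32_t *obio_irq_map,
                      int irq_num, int level)
{
    if (irq_num < 32) {
        /* PCI IRQs: four interrupts share one map entry */
        if (pci_irq_map[irq_num >> 2] & IMR_ENABLED) {
            return level;
        }
        return 0;
    }
    /* OBIO IRQs map onto the next INOs, one map entry each */
    if (obio_irq_map[irq_num - 32] & IMR_ENABLED) {
        return level;
    }
    return 0;
}
```

The point of the sketch is the asymmetry the patch introduces: PCI and OBIO sources are gated by different maps, but both now target the same ivec_irqs array.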
Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt
On Fri, Mar 09, 2012 at 03:15:26PM -0600, Anthony Liguori wrote: On 03/09/2012 03:04 PM, Daniel P. Berrange wrote: On Fri, Mar 09, 2012 at 05:56:52PM -0300, Eduardo Habkost wrote: Resurrecting an old thread: I didn't see any clear conclusion in this thread (this is why I am resurrecting it), except that many were arguing that libvirt should simply copy and/or generate the CPU model definitions from Qemu. I really don't think it's reasonable to expect that. On Thu, Dec 15, 2011 at 03:54:15PM +0100, Jiri Denemark wrote: Hi, Recently I realized that all modern CPU models defined in /etc/qemu/target-x86_64.conf are useless when qemu is used through libvirt. That's because we start qemu with -nodefconfig which results in qemu ignoring that file with CPU model definitions. We have a very good reason for using -nodefconfig because we need to control the ABI presented to a guest OS and we don't want any configuration file that can contain lots of things including device definitions to be read by qemu. However, we would really like the new CPU models to be understood by qemu even if used through libvirt. What would be the best way to solve this? I suspect this could have been already discussed in the past but obviously a workable solution was either not found or just not implemented. So, our problem today is basically: A) libvirt uses -nodefconfig; B) -nodefconfig makes Qemu not load the config file containing the CPU model definitions; and C) libvirt expects the full CPU model list from Qemu to be available. I could have sworn we had this discussion a year ago or so, and had decided that the default CPU models would be in something like /usr/share/qemu/cpu-x86_64.conf and loaded regardless of the -nodefconfig setting. /etc/qemu/target-x86_64.conf would be solely for end user configuration changes, not for QEMU builtin defaults. But looking at the code in QEMU, it doesn't seem we ever implemented this ? 
> I don't remember that discussion and really don't think I agree with
> the conclusion.  If libvirt wants to define CPU models on their own,
> they can.  If

It can't without knowing qemu/host cpu/host kernel capabilities and
knowing the logic that qemu uses to combine them.

> libvirt wants to use the user's definitions, don't use -nodefconfig.
>
> CPU models aren't a QEMU concept.  The reason it's in the

I do not know what you mean by that, but CPU capabilities (and a CPU
model is only a name for a group of them) are a KVM/TCG concept and, by
inclusion, a QEMU concept. If QEMU will not have built-in support for
CPU models (as a name for a group of CPU capabilities) then how do you
start a guest without specifying the full set of CPU capabilities on the
command line?

> configuration file is to allow a user to add their own as they see
> fit.  There is no right set of model names.  It's strictly a policy.

So you think it should be the user's responsibility to check what his
qemu/host cpu/host kernel combo can support?

--
			Gleb.
Re: [Qemu-devel] [PATCH] ide: Adds model=s option, allowing the user to override the default disk model name QEMU HARDDISK
Am 10.03.2012 20:56, schrieb Floris Bos:
> Some Linux distributions use the
> /dev/disk/by-id/scsi-SATA_name-of-disk-model_serial addressing scheme
> when referring to partitions in /etc/fstab and elsewhere.
> This causes problems when starting a disk image taken from an existing
> physical server under qemu, because when running under qemu
> name-of-disk-model is always "QEMU HARDDISK".
>
> This patch introduces a model=s option which in combination with the
> existing serial=s option can be used to fake the disk the operating
> system was previously on, allowing the OS to boot properly.
>
> Cc: kw...@redhat.com
> Signed-off-by: Floris Bos <d...@noc-ps.com>

Patch looks good to me, except for some formal issues
scripts/checkpatch.pl should warn about:

> diff --git a/blockdev.c b/blockdev.c
> index d78aa51..66fcc14 100644
> --- a/blockdev.c
> +++ b/blockdev.c
> @@ -534,6 +536,8 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
>      dinfo->refcount = 1;
>      if (serial)
>          strncpy(dinfo->serial, serial, sizeof(dinfo->serial) - 1);
> +    if (model)
> +        strncpy(dinfo->model, model, sizeof(dinfo->model) - 1);

Please use braces for new ifs.

>      QTAILQ_INSERT_TAIL(&drives, dinfo, next);
>      bdrv_set_on_error(dinfo->bdrv, on_read_error, on_write_error);
> diff --git a/hw/ide/core.c b/hw/ide/core.c
> index 4d568ac..2a38030 100644
> --- a/hw/ide/core.c
> +++ b/hw/ide/core.c
> @@ -1977,7 +1993,8 @@ void ide_init2_with_non_qdev_drives(IDEBus *bus, DriveInfo *hd0,
>          if (dinfo) {
>              if (ide_init_drive(bus->ifs[i], dinfo->bdrv,
>                                 dinfo->media_cd ? IDE_CD : IDE_HD, NULL,
> -                               *dinfo->serial ? dinfo->serial : NULL) < 0) {
> +                               *dinfo->serial ? dinfo->serial : NULL,
> +                               *dinfo->model ? dinfo->model : NULL) < 0) {

Indentation uses tabs here; please use spaces.

>                  error_report("Can't set up IDE drive %s", dinfo->id);
>                  exit(1);
>              }

Andreas

--
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
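The naming scheme the patch targets can be illustrated with a small sketch of how a `scsi-SATA_<model>_<serial>` link name is composed (this helper is illustrative only; the real name is derived by udev from the device's inquiry data, with spaces mapped to underscores):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: compose a /dev/disk/by-id style name from the
 * IDE model and serial strings, replacing spaces with underscores the
 * way udev's persistent naming does. */
static void scsi_sata_by_id(char *out, size_t outsz,
                            const char *model, const char *serial)
{
    size_t i;

    snprintf(out, outsz, "scsi-SATA_%s_%s", model, serial);
    for (i = 0; out[i] != '\0'; i++) {
        if (out[i] == ' ') {
            out[i] = '_';
        }
    }
}
```

With the default model string, every guest disk ends up named `scsi-SATA_QEMU_HARDDISK_<serial>`, which is why /etc/fstab entries written on physical hardware stop resolving; model= lets the two strings match the original disk again.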
Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt
On Sat, Mar 10, 2012 at 12:58:43PM -0300, Eduardo Habkost wrote: On Sat, Mar 10, 2012 at 12:42:46PM +, Daniel P. Berrange wrote: I could have sworn we had this discussion a year ago or so, and had decided that the default CPU models would be in something like /usr/share/qemu/cpu-x86_64.conf and loaded regardless of the -nodefconfig setting. /etc/qemu/target-x86_64.conf would be solely for end user configuration changes, not for QEMU builtin defaults. But looking at the code in QEMU, it doesn't seem we ever implemented this ? Arrrgggh. It seems this was implemented as a patch in RHEL-6 qemu RPMs but, contrary to our normal RHEL development practice, it was not based on a cherry-pick of an upstream patch :-( For sake of reference, I'm attaching the two patches from the RHEL6 source RPM that do what I'm describing NB, I'm not neccessarily advocating these patches for upstream. I still maintain that libvirt should write out a config file containing the exact CPU model description it desires and specify that with -readconfig. The end result would be identical from QEMU's POV and it would avoid playing games with QEMU's config loading code. I agree that libvirt should just write the config somewhere. The problem here is to define: 1) what information should be mandatory on that config data; 2) who should be responsible to test and maintain sane defaults (and where should they be maintained). The current cpudef definitions are simply too low-level to require it to be written from scratch. Lots of testing have to be done to make sure we have working combinations of CPUID bits defined, so they can be used as defaults or templates. Not facilitating reuse of those tested defauls/templates by libvirt is duplication of efforts. Really, if we expect libvirt to define all the CPU bits from scratch on a config file, we could as well just expect libvirt to open /dev/kvm itself and call the all CPUID setup ioctl()s itself. That's how low-level some of the cpudef bits are. 
s/some/all If libvirt assumes anything about what kvm actually supports it is working only by sheer luck. (Also, there are additional low-level bits that really have to be maintained somewhere, just to have sane defaults. Currently many CPUID leafs are exposed to the guest without letting the user control them, and worse: without keeping stability of guest-visible bits when upgrading Qemu or the host kernel. And that's what machine-types are for: to have sane defaults to be used as base.) Let me give you a practical example: I had a bug report about improper CPU topology information[1]. After investigating it, I have found out that the level cpudef field is too low; CPU core topology information is provided on CPUID leaf 4, and most of the Intel CPU models on Qemu have level=2 today (I don't know why). So, Qemu is responsible for exposing CPU topology information set using '-smp' to the guest OS, but libvirt would have to be responsible for choosing a proper level value that makes that information visible to the guest. We can _allow_ libvirt to fiddle with these low-level bits, of course, but requiring every management layer to build this low-level information from scratch is just a recipe to waste developer time. And QEMU become even less usable from a command line. One more point to kvm-tool I guess. (And I really hope that there's no plan to require all those low-level bits to appear as-is on the libvirt XML definitions. Because that would require users to read the Intel 64 and IA-32 Architectures Software Developer's Manual, or the AMD64 Architecture Programmer's Manual and BIOS and Kernel Developer's Guides, just to understand why something is not working on his Virtual Machine.) [1] https://bugzilla.redhat.com/show_bug.cgi?id=689665 -- Eduardo -- Gleb.
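The "level" problem Eduardo describes is mechanical: CPUID.0.EAX advertises the highest standard leaf, and a spec-following guest will never query leaf 4 (core topology) if the advertised level is below 4. A minimal model of that visibility rule (the function is illustrative, not QEMU code):

```c
#include <stdbool.h>
#include <stdint.h>

/* A guest that follows the CPUID spec only reads standard leaves up to
 * the maximum advertised in CPUID.0.EAX ("level" in the cpudef). */
static bool cpuid_leaf_visible(uint32_t level, uint32_t leaf)
{
    return leaf <= level;
}
```

So with the level=2 found in several Qemu Intel models, the -smp topology encoded in leaf 4 is simply invisible to the guest, which matches the bug report.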
Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt
On Sat, Mar 10, 2012 at 12:24:47PM -0600, Anthony Liguori wrote: Let's step back here. Why are you writing these patches? It's probably not because you have a desire to say -cpu Westmere when you run QEMU on your laptop. I'd wager to say that no human has ever done that or that if they had, they did so by accident because they read documentation and thought they had to. I'd be glad if QEMU will chose -cpu Westmere for me if it detects Westmere host CPU as a default. Humans probably do one of two things: 1) no cpu option or 2) -cpu host. And both are not optimal. Actually both are bad. First one because default cpu is very conservative and the second because there is no guaranty that guest will continue to work after qemu or kernel upgrade. Let me elaborate about the later. Suppose host CPU has kill_guest feature and at the time a guest was installed it was not implemented by kvm. Since it was not implemented by kvm it was not present in vcpu during installation and the guest didn't install workaround kill_guest module. Now unsuspecting user upgrades the kernel and tries to restart the guest and fails. He writes angry letter to qemu-devel and is asked to reinstall his guest and move along. So then why are you introducing -cpu Westmere? Because ovirt-engine has a concept of datacenters and the entire datacenter has to use a compatible CPU model to allow migration compatibility. Today, the interface that ovirt-engine exposes is based on CPU codenames. Presumably ovirt-engine wants to add a Westmere CPU group and as such have levied a requirement down the stack to QEMU. First of all this is not about live migration only. Guest visible vcpu should not change after guest reboot (or hibernate/resume) too. And second this concept exists with only your laptop and single guest on it too. There are three inputs into a CPU model module: 1) host cpu, 2) qemu capabilities, 3) kvm capabilities. 
With datacenters scenario all three can change, with your laptop only last two can change (first one can change too when you'll get new laptop) , but the net result is that guest visible cpuid can change and it shouldn't. This is the goal of introducing -cpu Westmere, to prevent it from happening. But there's no intrinsic reason why it uses CPU model names. VMware doesn't do this. It has a concept of compatibility groups[1]. As Andrew noted, not any more. There is no intrinsic reason, but people are more familiar with Intel terminology than random hypervisor terminology. oVirt could just as well define compatibility groups like GroupA, GroupB, GroupC, etc. and then the -cpu option we would be discussing would be -cpu GroupA. It could, but I can't see why this is less confusing. This is why it's a configuration option and not builtin to QEMU. It's a user interface as as such, should be defined at a higher level. This is not the only configuration that is builtin in QEMU. As it stands now QEMU does not even allow configuring cpuid enough to define those compatibility groups outside of QEMU. And after the work is done to allow enough configurability there is no much left to provide compatibility groups in QEMU itself. Perhaps it really should be VDSM that is providing the model info to libvirt? Then they can add whatever groups then want whenever they want as long as we have the appropriate feature bits. P.S. I spent 30 minutes the other day helping a user who was attempting to figure out whether his processor was a Conroe, Penryn, etc. Making this determination is fairly difficult and it makes me wonder whether having CPU code names is even the best interface for oVirt.. [1] http://kb.vmware.com/selfservice/microsites/search.do?language=en_UScmd=displayKCexternalId=1991 Regards, Anthony Liguori (Also, there are additional low-level bits that really have to be maintained somewhere, just to have sane defaults. 
Currently many CPUID leafs are exposed to the guest without letting the user control them, and worse: without keeping stability of guest-visible bits when upgrading Qemu or the host kernel. And that's what machine-types are for: to have sane defaults to be used as base.) Let me give you a practical example: I had a bug report about improper CPU topology information[1]. After investigating it, I have found out that the level cpudef field is too low; CPU core topology information is provided on CPUID leaf 4, and most of the Intel CPU models on Qemu have level=2 today (I don't know why). So, Qemu is responsible for exposing CPU topology information set using '-smp' to the guest OS, but libvirt would have to be responsible for choosing a proper level value that makes that information visible to the guest. We can _allow_ libvirt to fiddle with these low-level bits, of course, but requiring every management layer to build this low-level information from scratch is just a recipe to waste developer time. (And I really hope that there's no
[Qemu-devel] SeaBIOS v1.6.3.2 release
A new stable release of SeaBIOS (version 1.6.3.2) has been tagged. This
release has some minor bug fixes, mostly build related.

The release is available via git:

 git clone git://git.seabios.org/seabios -b 1.6.3-stable

-Kevin

Kevin O'Connor (6):
      Add PYTHON definition to Makefile.
      Permit .rodata.__PRETTY_FUNCTION__. sections in roms.
      BCVs should inherrit the legacy harddrive priority.
      Fix missing NULL pointer checks causing boot failure on 1meg machines.
      Use #!/bin/sh instead of : in tools/gen-offsets.sh.
      Update version to 1.6.3.2

 Makefile             |  9 +
 src/boot.c           |  2 +-
 src/pmm.c            |  3 ++-
 tools/gen-offsets.sh |  2 +-
 tools/layoutrom.py   | 12
 5 files changed, 17 insertions(+), 11 deletions(-)
[Qemu-devel] [PATCH] virtio-serial-bus: use correct lengths in control_out() message
In case of more than one control message, the code will use the size of
the largest message so far for all subsequent messages, instead of using
the size of the current one. Fix it.

Signed-off-by: Michael Tokarev <m...@tls.msk.ru>
---
 hw/virtio-serial-bus.c | 6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/virtio-serial-bus.c b/hw/virtio-serial-bus.c
index e22940e..abe48ec 100644
--- a/hw/virtio-serial-bus.c
+++ b/hw/virtio-serial-bus.c
@@ -451,28 +451,28 @@ static void control_out(VirtIODevice *vdev, VirtQueue *vq)
     vser = DO_UPCAST(VirtIOSerial, vdev, vdev);
 
     len = 0;
     buf = NULL;
     while (virtqueue_pop(vq, &elem)) {
-        size_t cur_len, copied;
+        size_t cur_len;
 
         cur_len = iov_size(elem.out_sg, elem.out_num);
         /*
          * Allocate a new buf only if we didn't have one previously or
          * if the size of the buf differs
          */
         if (cur_len > len) {
             g_free(buf);
 
             buf = g_malloc(cur_len);
             len = cur_len;
         }
-        copied = iov_to_buf(elem.out_sg, elem.out_num, buf, 0, len);
+        iov_to_buf(elem.out_sg, elem.out_num, buf, 0, cur_len);
 
-        handle_control_message(vser, buf, copied);
+        handle_control_message(vser, buf, cur_len);
         virtqueue_push(vq, &elem, 0);
     }
     g_free(buf);
     virtio_notify(vdev, vq);
 }
-- 
1.7.9.1
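The bug being fixed is easy to see if the loop is reduced to just its length bookkeeping: `len` tracks the largest buffer allocated so far, and the old code passed that stale value to the handler instead of the current message's size. A stand-alone model of the two behaviors (names are mine, not from the patch):

```c
#include <stddef.h>

/* Model of the control_out() loop: for each message size in msg[],
 * record the length the handler would receive before the fix (always
 * "len", the largest size so far) and after the fix ("cur_len"). */
static void control_lengths(const size_t *msg, size_t n,
                            size_t *buggy, size_t *fixed)
{
    size_t len = 0, i;

    for (i = 0; i < n; i++) {
        size_t cur_len = msg[i];
        if (cur_len > len) {
            len = cur_len;      /* the shared buffer only ever grows */
        }
        buggy[i] = len;         /* pre-fix: stale largest size */
        fixed[i] = cur_len;     /* post-fix: current message size */
    }
}
```

With messages of 16, 8 and 4 bytes, the old code would hand the handler 16 bytes every time, i.e. trailing garbage from the previous, larger message.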
Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt
On 03/11/2012 08:27 AM, Gleb Natapov wrote: On Sat, Mar 10, 2012 at 12:24:47PM -0600, Anthony Liguori wrote: Let's step back here. Why are you writing these patches? It's probably not because you have a desire to say -cpu Westmere when you run QEMU on your laptop. I'd wager to say that no human has ever done that or that if they had, they did so by accident because they read documentation and thought they had to. I'd be glad if QEMU will chose -cpu Westmere for me if it detects Westmere host CPU as a default. This is -cpu best that Alex proposed FWIW. Humans probably do one of two things: 1) no cpu option or 2) -cpu host. And both are not optimal. Actually both are bad. First one because default cpu is very conservative and the second because there is no guaranty that guest will continue to work after qemu or kernel upgrade. Let me elaborate about the later. Suppose host CPU has kill_guest feature and at the time a guest was installed it was not implemented by kvm. Since it was not implemented by kvm it was not present in vcpu during installation and the guest didn't install workaround kill_guest module. Now unsuspecting user upgrades the kernel and tries to restart the guest and fails. He writes angry letter to qemu-devel and is asked to reinstall his guest and move along. -cpu best wouldn't solve this. You need a read/write configuration file where QEMU probes the available CPU and records it to be used for the lifetime of the VM. So then why are you introducing -cpu Westmere? Because ovirt-engine has a concept of datacenters and the entire datacenter has to use a compatible CPU model to allow migration compatibility. Today, the interface that ovirt-engine exposes is based on CPU codenames. Presumably ovirt-engine wants to add a Westmere CPU group and as such have levied a requirement down the stack to QEMU. First of all this is not about live migration only. Guest visible vcpu should not change after guest reboot (or hibernate/resume) too. 
And second this concept exists with only your laptop and single guest on it too. There are three inputs into a CPU model module: 1) host cpu, 2) qemu capabilities, 3) kvm capabilities. With datacenters scenario all three can change, with your laptop only last two can change (first one can change too when you'll get new laptop) , but the net result is that guest visible cpuid can change and it shouldn't. This is the goal of introducing -cpu Westmere, to prevent it from happening. This discussion isn't about whether QEMU should have a Westmere processor definition. In fact, I think I already applied that patch. It's a discussion about how we handle this up and down the stack. The question is who should define and manage CPU compatibility. Right now QEMU does to a certain degree, libvirt discards this and does it's own thing, and VDSM/ovirt-engine assume that we're providing something and has built a UI around it. What I'm proposing we consider: have VDSM manage CPU definitions in order to provide a specific user experience in ovirt-engine. We would continue to have Westmere/etc in QEMU exposed as part of the user configuration. But I don't think it makes a lot of sense to have to modify QEMU any time a new CPU comes out. Regards, Anthony Liguori
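The datacenter-wide "compatible CPU model" that ovirt-engine needs reduces to a set intersection: a migratable guest may only be shown features present on every host in the cluster. A sketch of that computation over feature bitmasks (a simplification; real CPUID state spans several leaves and registers):

```c
#include <stddef.h>
#include <stdint.h>

/* What a management layer (ovirt-engine/VDSM) conceptually does when
 * picking a cluster-wide CPU baseline: AND together the feature masks
 * of all hosts, so only universally available features survive. */
static uint32_t cluster_baseline(const uint32_t *host_feats, size_t n)
{
    uint32_t base = ~0u;
    size_t i;

    for (i = 0; i < n; i++) {
        base &= host_feats[i];
    }
    return n ? base : 0;
}
```

Whether that intersection is then named "Westmere" or "GroupA" is exactly the policy question being argued here; the arithmetic is the same either way.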
Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt
On 03/11/2012 07:41 AM, Gleb Natapov wrote: On Sat, Mar 10, 2012 at 12:58:43PM -0300, Eduardo Habkost wrote: On Sat, Mar 10, 2012 at 12:42:46PM +, Daniel P. Berrange wrote: I could have sworn we had this discussion a year ago or so, and had decided that the default CPU models would be in something like /usr/share/qemu/cpu-x86_64.conf and loaded regardless of the -nodefconfig setting. /etc/qemu/target-x86_64.conf would be solely for end user configuration changes, not for QEMU builtin defaults. But looking at the code in QEMU, it doesn't seem we ever implemented this ? Arrrgggh. It seems this was implemented as a patch in RHEL-6 qemu RPMs but, contrary to our normal RHEL development practice, it was not based on a cherry-pick of an upstream patch :-( For sake of reference, I'm attaching the two patches from the RHEL6 source RPM that do what I'm describing NB, I'm not neccessarily advocating these patches for upstream. I still maintain that libvirt should write out a config file containing the exact CPU model description it desires and specify that with -readconfig. The end result would be identical from QEMU's POV and it would avoid playing games with QEMU's config loading code. I agree that libvirt should just write the config somewhere. The problem here is to define: 1) what information should be mandatory on that config data; 2) who should be responsible to test and maintain sane defaults (and where should they be maintained). The current cpudef definitions are simply too low-level to require it to be written from scratch. Lots of testing have to be done to make sure we have working combinations of CPUID bits defined, so they can be used as defaults or templates. Not facilitating reuse of those tested defauls/templates by libvirt is duplication of efforts. Really, if we expect libvirt to define all the CPU bits from scratch on a config file, we could as well just expect libvirt to open /dev/kvm itself and call the all CPUID setup ioctl()s itself. 
That's how low-level some of the cpudef bits are. s/some/all If libvirt assumes anything about what kvm actually supports it is working only by sheer luck. Well the simple answer for libvirt is don't use -nodefconfig and then it can reuse the CPU definitions (including any that the user adds). Really, what's the point of having a layer of management if we're saying that doing policy management is too complicated for that layer? What does that layer exist to provide then? (Also, there are additional low-level bits that really have to be maintained somewhere, just to have sane defaults. Currently many CPUID leafs are exposed to the guest without letting the user control them, and worse: without keeping stability of guest-visible bits when upgrading Qemu or the host kernel. And that's what machine-types are for: to have sane defaults to be used as base.) Let me give you a practical example: I had a bug report about improper CPU topology information[1]. After investigating it, I have found out that the level cpudef field is too low; CPU core topology information is provided on CPUID leaf 4, and most of the Intel CPU models on Qemu have level=2 today (I don't know why). So, Qemu is responsible for exposing CPU topology information set using '-smp' to the guest OS, but libvirt would have to be responsible for choosing a proper level value that makes that information visible to the guest. We can _allow_ libvirt to fiddle with these low-level bits, of course, but requiring every management layer to build this low-level information from scratch is just a recipe to waste developer time. And QEMU become even less usable from a command line. One more point to kvm-tool I guess. I'm not sure what your point is. We're talking about an option that humans don't use. How is this a discussion about QEMU usability? Regards, Anthony Liguori
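For concreteness, Daniel's suggestion amounts to libvirt writing a stanza like the following and passing it with -readconfig. This is sketched from memory of the qemu-1.0-era sysconfigs/target-x86_64.conf [cpudef] format; the feature lists are abridged and illustrative, not a vetted Westmere definition:

```
[cpudef]
   name = "Westmere"
   level = "11"
   vendor = "GenuineIntel"
   family = "6"
   model = "44"
   stepping = "1"
   feature_edx = "sse2 sse fxsr mmx pat cmov pge apic cx8 mce pae msr tsc fpu"
   feature_ecx = "aes popcnt sse4.2 sse4.1 cx16 ssse3 sse3"
   extfeature_ecx = "lahf_lm"
   model_id = "Westmere E56xx/L56xx/X56xx (Nehalem-C)"
```

The invocation would then be along the lines of `qemu-system-x86_64 -nodefconfig -readconfig /path/to/cpu.conf -cpu Westmere`, which keeps -nodefconfig semantics intact while still giving QEMU a full model definition. The dispute above is about who maintains and validates the bit lists in such a file, not about whether the mechanism works.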
Re: [Qemu-devel] seamless migration with spice
On 03/11/2012 08:16 AM, Yonit Halperin wrote:
> Hi,
> We would like to implement seamless migration for Spice, i.e., keeping
> the currently opened spice client session valid after migration.
> Today, the spice client establishes the connection to the destination
> before migration starts, and when migration completes, the client's
> session is moved to the destination, but all the session data is being
> reset.
> We face 2 main challenges when coming to implement seamless migration:
> (1) Spice client must establish the connection to the destination
> before the spice password expires. However, during migration, qemu
> main loop is not processed, and when migration completes, the password
> might have already expired. Today we solve this by the async command
> client_migrate_info, which is expected to be called before migration
> starts. The command is completed once spice client has connected to
> the destination (or a timeout). Since async monitor commands are no
> longer supported, we are looking for a new solution.

We need to fix async monitor commands.  Luiz sent a note out to
qemu-devel recently on this topic.

I'm not sure we'll get there for 1.1 but if we do a 3 month release
cycle for 1.2, then that's a pretty reasonable target IMHO.

Regards,

Anthony Liguori

> The straightforward solution would be to process the main loop on the
> destination side during migration.
> (2) In order to restore the source-client spice session in the
> destination, we need to pass data from the source to the destination.
> Example for such data: in flight copy paste data, in flight usb data.
> We want to pass the data from the source spice server to the
> destination, via Spice client. This introduces a possible race: after
> migration completes, the source qemu can be killed before the
> spice-server completes transferring the migration data to the client.
> Possible solutions:
> - Have async migration state notifiers. The migration state will
>   change only after all the notifiers' completion callbacks are called.
> - libvirt will wait for a qmp event corresponding to spice completing
>   its migration, and only then kill the source qemu process.
>
> Any thoughts?
>
> Thanks,
> Yonit.
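The "async migration state notifiers" option boils down to a completion gate: migration may only declare itself finished once every registered notifier (e.g. spice's data transfer) has reported back. A minimal sketch of that mechanism (types and names are mine, not a proposed QEMU API):

```c
#include <stdbool.h>

/* Gate that holds the migration state transition until every
 * registered async notifier has completed its work. */
typedef struct MigrationGate {
    int pending;   /* notifiers that have not completed yet */
} MigrationGate;

static void gate_register(MigrationGate *g)
{
    g->pending++;          /* e.g. spice registers before migration ends */
}

static void gate_complete(MigrationGate *g)
{
    g->pending--;          /* called from the notifier's completion cb */
}

static bool gate_may_finish(const MigrationGate *g)
{
    return g->pending == 0;   /* only now may the source qemu be killed */
}
```

The alternative in the mail, a QMP event that libvirt waits on, moves the same gate one layer up the stack instead of into the migration state machine.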
Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt
On Sun, Mar 11, 2012 at 09:12:58AM -0500, Anthony Liguori wrote: On 03/11/2012 08:27 AM, Gleb Natapov wrote: On Sat, Mar 10, 2012 at 12:24:47PM -0600, Anthony Liguori wrote: Let's step back here. Why are you writing these patches? It's probably not because you have a desire to say -cpu Westmere when you run QEMU on your laptop. I'd wager to say that no human has ever done that or that if they had, they did so by accident because they read documentation and thought they had to. I'd be glad if QEMU will chose -cpu Westmere for me if it detects Westmere host CPU as a default. This is -cpu best that Alex proposed FWIW. I didn't look at exact implementation but I doubt it does exactly what we need because currently we do not have infrastructure for that. If qemu is upgraded with support for new cpuid bits and -cpu best will pass them to a guest on next boot then this is not the same. -cpu Westmere can mean different thing for different machine types with proper infrastructure in place. Humans probably do one of two things: 1) no cpu option or 2) -cpu host. And both are not optimal. Actually both are bad. First one because default cpu is very conservative and the second because there is no guaranty that guest will continue to work after qemu or kernel upgrade. Let me elaborate about the later. Suppose host CPU has kill_guest feature and at the time a guest was installed it was not implemented by kvm. Since it was not implemented by kvm it was not present in vcpu during installation and the guest didn't install workaround kill_guest module. Now unsuspecting user upgrades the kernel and tries to restart the guest and fails. He writes angry letter to qemu-devel and is asked to reinstall his guest and move along. -cpu best wouldn't solve this. You need a read/write configuration file where QEMU probes the available CPU and records it to be used for the lifetime of the VM. That what I thought too, but this shouldn't be the case (Avi's idea). 
We need two things: 1) CPU model config should be per machine type. 2) QEMU should refuse to start if it cannot create cpu exactly as specified by model config. With two conditions above if user creates VM with qemu 1.0 and cpu model Westmere which has no kill_guest feature he will still be able to run it in QEMU 1.1 (where kill_guest is added to Westmere model) and new kvm that support kill_guest by providing -M pc-1.0 flag (old definition of Westmere will be used). If user will try to create VM with QEMU 1.1 on a kernel that does not support kill_guest QEMU will refuse to start. So then why are you introducing -cpu Westmere? Because ovirt-engine has a concept of datacenters and the entire datacenter has to use a compatible CPU model to allow migration compatibility. Today, the interface that ovirt-engine exposes is based on CPU codenames. Presumably ovirt-engine wants to add a Westmere CPU group and as such have levied a requirement down the stack to QEMU. First of all this is not about live migration only. Guest visible vcpu should not change after guest reboot (or hibernate/resume) too. And second this concept exists with only your laptop and single guest on it too. There are three inputs into a CPU model module: 1) host cpu, 2) qemu capabilities, 3) kvm capabilities. With datacenters scenario all three can change, with your laptop only last two can change (first one can change too when you'll get new laptop) , but the net result is that guest visible cpuid can change and it shouldn't. This is the goal of introducing -cpu Westmere, to prevent it from happening. This discussion isn't about whether QEMU should have a Westmere processor definition. In fact, I think I already applied that patch. It's a discussion about how we handle this up and down the stack. The question is who should define and manage CPU compatibility. 
Right now QEMU does to a certain degree, libvirt discards this and does it's own thing, and VDSM/ovirt-engine assume that we're providing something and has built a UI around it. If we want QEMU to be usable without management layer then QEMU should provide stable CPU models. Stable in a sense that qemu, kernel or CPU upgrade does not change what guest sees. If libvirt wants to override QEMU we should have a way to allow that, but than compatibility becomes libvirt problem. Figuring out what minimal CPU model that can be used across a cluster of different machines should be ovirt task. What I'm proposing we consider: have VDSM manage CPU definitions in order to provide a specific user experience in ovirt-engine. We would continue to have Westmere/etc in QEMU exposed as part of the user configuration. But I don't think it makes a lot of sense to have to modify QEMU any time a new CPU comes out. If new cpu does not provide any new instruction set or capability that can be passed to a guest then there is no point creating CPU model for it in QEMU. If it does it is just a
Re: [Qemu-devel] [PATCHv2 3/7] consolidate qemu_iovec_copy() and qemu_iovec_concat() and make them consistent
Il 11/03/2012 02:49, Michael Tokarev ha scritto:
> qemu_iovec_concat() is currently a wrapper for qemu_iovec_copy(); use
> the former (with an extra 0 arg) in the few places where it is used.
>
> Change the skip argument of qemu_iovec_copy() from uint64_t to size_t,
> since the size of a qiov itself is size_t, so there's no way to skip
> larger sizes. Rename it to soffset, to make it clear that the offset
> is applied to src. Also change the only usage of uint64_t in
> hw/9pfs/virtio-9p.c, in v9fs_init_qiov_from_pdu() - all callers of it
> actually use size_t too, not uint64_t.
>
> Semantic change in the meaning of the `count' (now renamed to
> `sbytes') argument. The initial comment said that src is copied to
> dst until the _total_ size is less than specified, so it might be
> interpreted as the maximum size of the _dst_ vector. The actual
> meaning was that the total amount of skipped and copied bytes should
> not exceed `count'. Make it just the amount of bytes to _copy_,
> without counting skipped bytes. This makes it consistent with other
> iovec functions, and also matches the actual _usage_ of this function.
>
> Order of arguments is already good:
>
>  qemu_iovec_memset(QEMUIOVector *qiov, size_t offset, int c, size_t bytes)
>
> vs:
>
>  qemu_iovec_concat(QEMUIOVector *dst, QEMUIOVector *src,
>                    size_t soffset, size_t sbytes)
>
> (note soffset is after _src_, not dst, since it applies to src; for
> memset it applies to qiov).
>
> Note that in many places where this function is used, the previous
> call is qemu_iovec_reset(), which means many callers actually want
> copy (replacing dst content), not concat. So we may want to add a
> parameter to allow resetting dst in one go.

Yes, this initially left me a bit confused.  Let's add a new function
qemu_iovec_copy that does reset+concat.

Paolo
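The semantic change to `sbytes` is the subtle part: it now counts only the bytes copied, not skipped-plus-copied. Over flat buffers (rather than real iovec arrays) the revised contract can be sketched as follows; the function name and clamping behavior are illustrative, not the actual qemu_iovec_concat() code:

```c
#include <stddef.h>
#include <string.h>

/* Flat-buffer model of the revised concat semantics: copying starts at
 * soffset within src, and sbytes is the number of bytes to copy
 * (clamped to what src actually has past soffset).  Returns the new
 * write position in dst. */
static size_t concat_model(char *dst, size_t dpos,
                           const char *src, size_t srclen,
                           size_t soffset, size_t sbytes)
{
    if (soffset >= srclen) {
        return dpos;                    /* nothing to copy */
    }
    if (sbytes > srclen - soffset) {
        sbytes = srclen - soffset;      /* clamp to available bytes */
    }
    memcpy(dst + dpos, src + soffset, sbytes);
    return dpos + sbytes;
}
```

Under the old reading, asking for 3 bytes at offset 2 could have returned only 1 byte (3 total minus 2 skipped); under the new reading it returns 3, which is what every existing caller actually wanted.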
Re: [Qemu-devel] [PATCHv2 5/7] Export qemu_sendv_recvv() and use it in qemu_sendv() and qemu_recvv()
Il 11/03/2012 02:49, Michael Tokarev ha scritto: Rename do_sendv_recvv() to qemu_sendv_recvv(), change its last arg (do_send) from int to bool, export it in qemu-common.h, and make the two callers of it (qemu_sendv() and qemu_recvv()) trivial #defines that just add the 5th arg. GCC is smart and knows how to do tail calls in many cases. Thus, I don't see much point in this patch. Paolo
Re: [Qemu-devel] [PATCHv2 6/7] cleanup qemu_co_sendv(), qemu_co_recvv() and friends
Il 11/03/2012 02:49, Michael Tokarev ha scritto: The same as for the non-coroutine versions in previous patches: rename arguments to be more obvious, change the type of arguments from int to size_t where appropriate, and use common code for the send and receive paths (with one extra argument) since these are exactly the same. Use the common qemu_sendv_recvv() directly. Also constify the buf arg of qemu_co_send(). qemu_co_sendv(), qemu_co_recvv(), and qemu_co_recv() are now trivial #define's merely adding one extra arg. qemu_co_send() is an inline function due to `buf' arg de-constification. Again, I don't see the point in using #defines. Either leave the function static, or you can export it, but then inlines are preferable. qemu_co_sendv() and qemu_co_recvv() callers are converted to the different argument order. Paolo
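The consolidation pattern discussed in this thread - one worker parameterized by a do_send flag, with the send and receive entry points reduced to trivial wrappers that merely supply that flag - can be sketched as follows. The names are illustrative, not QEMU's; a real worker would loop over writev()/readv():

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for the common worker.  Instead of touching a socket, it
 * just reports the byte range that would be transferred and the
 * direction, so the wrappers can be exercised in isolation. */
static long demo_sendv_recvv(int sockfd, const char *buf, size_t offset,
                             size_t bytes, bool do_send)
{
    (void)sockfd; (void)buf;
    return (long)(offset + bytes) * (do_send ? 1 : -1);
}

/* The wrappers the series turns into #defines: same argument list,
 * one extra constant supplied. */
#define demo_sendv(fd, buf, offset, bytes) \
    demo_sendv_recvv(fd, buf, offset, bytes, true)
#define demo_recvv(fd, buf, offset, bytes) \
    demo_sendv_recvv(fd, buf, offset, bytes, false)
```

This also shows why the #define-vs-inline debate is cosmetic: either form compiles to a direct call of the worker with the flag folded in.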
Re: [Qemu-devel] IRQ number, interrupt number, interrupt line GPIO[in/out]
IRQ number is actually a term from older hardware. When the 8259 was common, there were only interrupts 0-15, with two 8259s cascaded. The IRQ number mattered in that time, because the 8259 put its vector number on the bus for the CPU after the interrupt was delivered. The number did have a special meaning to the CPU, used as an index into the interrupt table. In modern times, however, especially since we have the APIC and MSI, the number doesn't have much meaning to the CPU, because the interrupt targets a special PCI address to notify the CPU, so the APIC doesn't need to put a vector number on the bus. In a modern OS it is usually called an auto vector, used by the OS to walk the interrupt routine table. Interrupt line is similar to IRQ number and interrupt number. On 2012-3-2 20:38, Zhi Yong Wu wrote: HI, Can anyone explain their relationship and difference among them? It is very appreciated if you can make some comments. thanks. -- Shu Ming shum...@linux.vnet.ibm.com IBM China Systems and Technology Laboratory
Re: [Qemu-devel] [PATCHv2 0/7] cleanup/consolidate some iovec functions
Il 11/03/2012 02:49, Michael Tokarev ha scritto: This is a little cleanup/consolidation for some iovec-related low-level routines in qemu. The plan is to make the library functions more understandable, consistent and useful. The patch changes the prototypes of several iov and qiov functions to match each other, changes the types of arguments for some functions, _swaps_ some function arguments with each other, and makes use of common code in the r/w path. The result of all these changes: 1. Most qiov-related (qemu_iovec_*) functions now accept an 'offset' parameter to specify from which (byte) position to start the operation. This is added for _memset (removing _memset_skip) and _from_buffer (allowing to copy a bounce buffer to the middle of a qiov). Typical: void qemu_iovec_memset(QEMUIOVector *qiov, size_t offset, int c, size_t bytes); 2. All functions that accept this `offset' argument do it in a similar manner, following the iov,fromwhere,bytes pattern. This is consistent with the (updated) qemu_sendv() and qemu_recvv() and friends, where the `offset' and `bytes' arguments were _renamed_, with the following prototypes: int qemu_sendv(sockfd, iov, size_t offset, size_t bytes) instead of int qemu_sendv(sockfd, iov, int len, int iov_offset) See how offset and bytes are used in the same way as for qemu_iovec_*. A few callers of these are verified and converted. 3. Used size_t instead of various variations for byte counts, including qemu_iovec_copy which used uint64_t(!) type. 4. Function arguments are renamed to better match their actual meaning. Compare the new and original prototype of qemu_sendv() above: the old prototype with `len' does not tell whether `len' refers to the number of iov elements (as in a regular writev() call) or to the number of data bytes. Ditto for several usages of `count' for some qemu_iovec_*, which is also replaced with `bytes'. The resulting function usage is much more consistent, the functions themselves are nice and understandable, which means they're easier to use and less error-prone. 
This patchset also consolidates a few low-level send/recv functions into one, since both versions were exactly the same (and were finally calling a common function anyway). This is done by exporting a common send_recv function with one extra bool argument, and making the current send/recv functions just #defines. And while at it all, also made some implementations shorter, cleaner and much easier to read/understand, and added some code comments. The read/write consolidation has great potential for the block layer, as has been demonstrated before. Unification and generalization of the qemu_iovec_* functions will let us optimize/simplify some more code in block/*, especially qemu_iovec_memset() and _from_buffer() (this optimization/simplification isn't done in this series). Michael Tokarev (7): Consolidate qemu_iovec_memset{,_skip}() into single, simplified function allow qemu_iovec_from_buffer() to specify offset from which to start copying consolidate qemu_iovec_copy() and qemu_iovec_concat() and make them consistent change prototypes of qemu_sendv() and qemu_recvv() Export qemu_sendv_recvv() and use it in qemu_sendv() and qemu_recvv() cleanup qemu_co_sendv(), qemu_co_recvv() and friends rewrite and comment qemu_sendv_recvv() block.c |8 +- block/curl.c|4 +- block/nbd.c |4 +- block/qcow.c|2 +- block/qcow2.c | 14 ++-- block/qed.c | 10 +- block/sheepdog.c|6 +- block/vdi.c |2 +- cutils.c| 275 +-- hw/9pfs/virtio-9p.c |8 +- linux-aio.c |4 +- posix-aio-compat.c |2 +- qemu-common.h | 64 +++- qemu-coroutine-io.c | 83 --- 14 files changed, 205 insertions(+), 281 deletions(-) Looks good, except that I don't like #defines. (I also don't like exporting qemu_sendv_recvv, but I can live with it. :)) I commented on the single patches. I'm happy to take 4-7 via the NBD tree. Paolo
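The "(iov, offset, ..., bytes)" calling convention that the series standardizes on can be sketched for the memset case. This is a toy over a bare iovec array, not QEMU's qemu_iovec_memset() (which operates on a QEMUIOVector), but the offset-then-bytes semantics are the ones described above:

```c
#include <assert.h>
#include <string.h>
#include <sys/uio.h>

/* Toy version of the iov,fromwhere,bytes pattern: set `bytes' bytes
 * to `c', starting `offset' bytes into the vector.  Elements wholly
 * inside the offset are skipped; `bytes' counts only bytes written. */
static void toy_iov_memset(struct iovec *iov, int cnt,
                           size_t offset, int c, size_t bytes)
{
    int i;
    for (i = 0; i < cnt && bytes > 0; i++) {
        size_t len = iov[i].iov_len;
        if (offset >= len) {
            offset -= len;               /* element entirely skipped */
            continue;
        }
        size_t n = len - offset;
        if (n > bytes) {
            n = bytes;
        }
        memset((char *)iov[i].iov_base + offset, c, n);
        bytes -= n;
        offset = 0;
    }
}
```

A caller fills a region spanning two elements with a single call, which is exactly what the old _memset/_memset_skip pair needed two entry points for.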
Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt
On Sun, Mar 11, 2012 at 09:16:49AM -0500, Anthony Liguori wrote: If libvirt assumes anything about what kvm actually supports it is working only by sheer luck. Well the simple answer for libvirt is don't use -nodefconfig and then it can reuse the CPU definitions (including any that the user adds). CPU models should be usable even with -nodefconfig. A CPU model is more like a device. By -cpu Nehalem I am saying I want a Nehalem device in my machine. Really, what's the point of having a layer of management if we're saying that doing policy management is too complicated for that layer? What does that layer exist to provide then? I was always against libvirt configuring low-level details of the CPU. What it should do IMO is choose the best CPU model for the host cpu (one can argue that fiddling with /proc/cpuinfo is not QEMU's business). (Also, there are additional low-level bits that really have to be maintained somewhere, just to have sane defaults. Currently many CPUID leafs are exposed to the guest without letting the user control them, and worse: without keeping stability of guest-visible bits when upgrading Qemu or the host kernel. And that's what machine-types are for: to have sane defaults to be used as a base.) Let me give you a practical example: I had a bug report about improper CPU topology information[1]. After investigating it, I found out that the level cpudef field is too low; CPU core topology information is provided on CPUID leaf 4, and most of the Intel CPU models in Qemu have level=2 today (I don't know why). So, Qemu is responsible for exposing the CPU topology information set using '-smp' to the guest OS, but libvirt would have to be responsible for choosing a proper level value that makes that information visible to the guest. We can _allow_ libvirt to fiddle with these low-level bits, of course, but requiring every management layer to build this low-level information from scratch is just a recipe for wasting developer time. 
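As background for the level discussion above: the core-topology information lives in CPUID leaf 4, where bits 31:26 of EAX encode the maximum number of addressable core IDs minus one (per Intel's CPUID documentation), and a guest only gets to see the leaf at all if the advertised CPUID level is at least 4. A minimal decoding sketch (the function name and sample values are made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Decode the core-topology field of CPUID leaf 4 EAX (bits 31:26):
 * the field holds "maximum addressable core IDs minus 1".  A vCPU
 * whose CPUID level is below 4 never exposes this leaf, which is why
 * level=2 hides the -smp cores value from the guest, as described in
 * the bug report above. */
static unsigned cores_per_package(uint32_t leaf4_eax)
{
    return ((leaf4_eax >> 26) & 0x3f) + 1;
}
```

So a 4-core -smp configuration needs the value 3 encoded in that field, and a level of at least 4 for the guest to ever read it.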
And QEMU becomes even less usable from the command line. One more point for kvm-tool, I guess. I'm not sure what your point is. We're talking about an option that humans don't use. How is this a discussion about QEMU usability? If we require libvirt for a user to have a stable guest environment, then QEMU by itself is less usable. We do have machine types in QEMU to expose a stable machine to a guest. CPU models should be part of it. -- Gleb.
Re: [Qemu-devel] [PATCH v3] VMXNET3 paravirtual NIC device implementation
Anthony, Thanks for your review. We'll go over it and prepare fixes and explanations soon. Best Regards, Dmitry Fleytman.
Re: [Qemu-devel] [PATCHv2 5/7] Export qemu_sendv_recvv() and use it in qemu_sendv() and qemu_recvv()
On 11.03.2012 19:00, Paolo Bonzini wrote: Il 11/03/2012 02:49, Michael Tokarev ha scritto: Rename do_sendv_recvv() to qemu_sendv_recvv(), change its last arg (do_send) from int to bool, export it in qemu-common.h, and make the two callers of it (qemu_sendv() and qemu_recvv()) trivial #defines that just add the 5th arg. GCC is smart and knows how to do tail calls in many cases. Thus, I don't see much point in this patch. The point is to allow qemu_sendv_recvv() to be used directly; see the next patch, [PATCHv2 6/7] cleanup qemu_co_sendv(), qemu_co_recvv() and friends, for an example, and see my previous attempt to address all this with bdrv_* methods, where reads and writes are implemented in common functions and are split back on the layer boundary, just to go to a common routine on the next layer. Or worse yet, repeating exactly the same code, like in this 6/7 patch, for qemu_co_recvv() and qemu_co_sendv(). It is not about tail calls at all. Thanks, /mjt
Re: [Qemu-devel] seamless migration with spice
On Sun, Mar 11, 2012 at 09:18:17AM -0500, Anthony Liguori wrote: On 03/11/2012 08:16 AM, Yonit Halperin wrote: Hi, We would like to implement seamless migration for Spice, i.e., keeping the currently opened spice client session valid after migration. Today, the spice client establishes the connection to the destination before migration starts, and when migration completes, the client's session is moved to the destination, but all the session data is being reset. We face 2 main challenges when coming to implement seamless migration: (1) Spice client must establish the connection to the destination before the spice password expires. However, during migration, the qemu main loop is not processed, and when migration completes, the password might have already expired. Today we solve this by the async command client_migrate_info, which is expected to be called before migration starts. The command is completed once the spice client has connected to the destination (or a timeout). Since async monitor commands are no longer supported, we are looking for a new solution. We need to fix async monitor commands. Luiz sent a note out to qemu-devel recently on this topic. I'm not sure we'll get there for 1.1, but if we do a 3 month release cycle for 1.2, then that's a pretty reasonable target IMHO. What about the second part? It's independent of the async issue. Regards, Anthony Liguori The straightforward solution would be to process the main loop on the destination side during migration. (2) In order to restore the source-client spice session in the destination, we need to pass data from the source to the destination. Example for such data: in-flight copy-paste data, in-flight usb data. We want to pass the data from the source spice server to the destination, via the Spice client. This introduces a possible race: after migration completes, the source qemu can be killed before the spice-server completes transferring the migration data to the client. 
Possible solutions: - Have async migration state notifiers. The migration state will change after all the notifiers' complete callbacks are called. - libvirt will wait for a qmp event corresponding to spice completing its migration, and only then will kill the source qemu process. Any thoughts? Thanks, Yonit.
Re: [Qemu-devel] [PATCHv2 6/7] cleanup qemu_co_sendv(), qemu_co_recvv() and friends
On 11.03.2012 19:01, Paolo Bonzini wrote: Il 11/03/2012 02:49, Michael Tokarev ha scritto: The same as for the non-coroutine versions in previous patches: rename arguments to be more obvious, change the type of arguments from int to size_t where appropriate, and use common code for the send and receive paths (with one extra argument) since these are exactly the same. Use the common qemu_sendv_recvv() directly. Also constify the buf arg of qemu_co_send(). qemu_co_sendv(), qemu_co_recvv(), and qemu_co_recv() are now trivial #define's merely adding one extra arg. qemu_co_send() is an inline function due to `buf' arg de-constification. Again, I don't see the point in using #defines. Either leave the function static, or you can export it, but then inlines are preferable. When you're debugging in gdb, it always enters all inline functions. For a very simple #define there's nothing to enter -- hence the #define. Note that - I still hope - in the end there will be no sendv or recv calls at all, only a common sendv_recvv with is_write passed as an argument from the upper layer. It will be easier to remove that #define - just two lines of code instead of a minimum of 5 :) Also, in cases like these, #defines are more compact and do not clutter the header too much. qemu_co_sendv() and qemu_co_recvv() callers are converted to the different argument order. Paolo Thanks, /mjt
Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt
On 03/11/2012 09:56 AM, Gleb Natapov wrote: On Sun, Mar 11, 2012 at 09:12:58AM -0500, Anthony Liguori wrote: -cpu best wouldn't solve this. You need a read/write configuration file where QEMU probes the available CPU and records it to be used for the lifetime of the VM. That's what I thought too, but this shouldn't be the case (Avi's idea). We need two things: 1) CPU model config should be per machine type. 2) QEMU should refuse to start if it cannot create a cpu exactly as specified by the model config. This would either mean: A. pc-1.1 uses -cpu best with a fixed mask for 1.1 B. pc-1.1 hardcodes Westmere or some other family (A) would imply a different CPU if you moved the machine from one system to another. I would think this would be very problematic from a user's perspective. (B) would imply that we had to choose the least common denominator, which is essentially what we do today with qemu64. If you want to just switch qemu64 to Conroe, I don't think that's a huge difference from what we have today. It's a discussion about how we handle this up and down the stack. The question is who should define and manage CPU compatibility. Right now QEMU does to a certain degree, libvirt discards this and does its own thing, and VDSM/ovirt-engine assume that we're providing something and have built a UI around it. If we want QEMU to be usable without a management layer then QEMU should provide stable CPU models. Stable in the sense that a qemu, kernel or CPU upgrade does not change what the guest sees. We do this today by exposing -cpu qemu64 by default. If all you're advocating is doing -cpu Conroe by default, that's fine. But I fail to see where this fits into the larger discussion here. The problem to solve is: I want to use the largest possible subset of CPU features available uniformly throughout my datacenter. QEMU and libvirt have single node views so they cannot solve this problem on their own. 
Whether that subset is a generic Westmere-like processor that never existed IRL or a specific Westmere processor seems like a decision that should be made by the datacenter-level manager with the node-level view. If I have a homogeneous environment of Xeon 7540s, I would probably like to see a Xeon 7540 in my guest. Doesn't it make sense to enable the management tool to make this decision? Regards, Anthony Liguori
Re: [Qemu-devel] seamless migration with spice
On 03/11/2012 10:25 AM, Alon Levy wrote: On Sun, Mar 11, 2012 at 09:18:17AM -0500, Anthony Liguori wrote: On 03/11/2012 08:16 AM, Yonit Halperin wrote: Hi, We would like to implement seamless migration for Spice, i.e., keeping the currently opened spice client session valid after migration. Today, the spice client establishes the connection to the destination before migration starts, and when migration completes, the client's session is moved to the destination, but all the session data is being reset. We face 2 main challenges when coming to implement seamless migration: (1) Spice client must establish the connection to the destination before the spice password expires. However, during migration, the qemu main loop is not processed, and when migration completes, the password might have already expired. Today we solve this by the async command client_migrate_info, which is expected to be called before migration starts. The command is completed once the spice client has connected to the destination (or a timeout). Since async monitor commands are no longer supported, we are looking for a new solution. We need to fix async monitor commands. Luiz sent a note out to qemu-devel recently on this topic. I'm not sure we'll get there for 1.1, but if we do a 3 month release cycle for 1.2, then that's a pretty reasonable target IMHO. What about the second part? It's independent of the async issue. Isn't this a client problem? The client has this state, no? If the state is stored in the server, wouldn't it be marshaled as part of the server's migration state? I read that as the client needs to marshal its own local state in the session and restore it in the new session. Regards, Anthony Liguori Regards, Anthony Liguori The straightforward solution would be to process the main loop on the destination side during migration. (2) In order to restore the source-client spice session in the destination, we need to pass data from the source to the destination. 
Example for such data: in flight copy paste data, in flight usb data We want to pass the data from the source spice server to the destination, via Spice client. This introduces a possible race: after migration completes, the source qemu can be killed before the spice-server completes transferring the migration data to the client. Possible solutions: - Have an async migration state notifiers. The migration state will change after all the notifiers complete callbacks are called. - libvirt will wait for qmp event corresponding to spice completing its migration, and only then will kill the source qemu process. Any thoughts? Thanks, Yonit.
Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt
On 03/11/2012 10:12 AM, Gleb Natapov wrote: On Sun, Mar 11, 2012 at 09:16:49AM -0500, Anthony Liguori wrote: If libvirt assumes anything about what kvm actually supports it is working only by sheer luck. Well the simple answer for libvirt is don't use -nodefconfig and then it can reuse the CPU definitions (including any that the user adds). CPU models should be usable even with -nodefconfig. A CPU model is more like a device. By -cpu Nehalem I am saying I want a Nehalem device in my machine. Let's say we moved CPU definitions to /usr/share/qemu/cpu-models.xml. Obviously, we'd want a command line option to be able to change that location, so we'd introduce -cpu-models PATH. But we want all of our command line options to be settable by the global configuration file, so we would add a cpu-models=PATH option to the configuration file. But why hard code a path when we can just set the default path in the configuration file, so let's avoid hard coding and just put cpu-models=/usr/share/qemu/cpu-models.xml in the default configuration file. But now when libvirt uses -nodefconfig, those models go away. -nodefconfig means start QEMU in the most minimal state possible. You get what you pay for if you use it. We'll have the same problem with machine configuration files. At some point in time, -nodefconfig will make machine models disappear. Regards, Anthony Liguori
Re: [Qemu-devel] [PATCH v2] target-i386: Mask NX bit from cpu_get_phys_page_debug result
Thanks, applied. On Tue, Mar 6, 2012 at 14:22, Jan Kiszka jan.kis...@siemens.com wrote: This was a long pending bug, now revealed by the assert in phys_page_find that stumbled over the large page index returned by cpu_get_phys_page_debug for NX-marked pages: We need to mask out NX and all user-definable bits 52..62 from PDEs and the final PTE to avoid corrupting physical addresses. Signed-off-by: Jan Kiszka jan.kis...@siemens.com --- Changes in v2 (as suggested by Avi): - Mask PDEs as well - Mask user-definable bits target-i386/cpu.h | 1 + target-i386/helper.c | 13 +++-- 2 files changed, 8 insertions(+), 6 deletions(-) diff --git a/target-i386/cpu.h b/target-i386/cpu.h index 196b0c5..36e3d29 100644 --- a/target-i386/cpu.h +++ b/target-i386/cpu.h @@ -241,6 +241,7 @@ #define PG_DIRTY_MASK (1 << PG_DIRTY_BIT) #define PG_PSE_MASK (1 << PG_PSE_BIT) #define PG_GLOBAL_MASK (1 << PG_GLOBAL_BIT) +#define PG_HI_USER_MASK 0x7ff0000000000000LL #define PG_NX_MASK (1LL << PG_NX_BIT) #define PG_ERROR_W_BIT 1 diff --git a/target-i386/helper.c b/target-i386/helper.c index af6bba2..f4f3c27 100644 --- a/target-i386/helper.c +++ b/target-i386/helper.c @@ -885,8 +885,8 @@ target_phys_addr_t cpu_get_phys_page_debug(CPUState *env, target_ulong addr) if (!(pml4e & PG_PRESENT_MASK)) return -1; - pdpe_addr = ((pml4e & ~0xfff) + (((addr >> 30) & 0x1ff) << 3)) & - env->a20_mask; + pdpe_addr = ((pml4e & ~0xfff & ~(PG_NX_MASK | PG_HI_USER_MASK)) + + (((addr >> 30) & 0x1ff) << 3)) & env->a20_mask; pdpe = ldq_phys(pdpe_addr); if (!(pdpe & PG_PRESENT_MASK)) return -1; @@ -900,8 +900,8 @@ target_phys_addr_t cpu_get_phys_page_debug(CPUState *env, target_ulong addr) return -1; } - pde_addr = ((pdpe & ~0xfff) + (((addr >> 21) & 0x1ff) << 3)) & - env->a20_mask; + pde_addr = ((pdpe & ~0xfff & ~(PG_NX_MASK | PG_HI_USER_MASK)) + + (((addr >> 21) & 0x1ff) << 3)) & env->a20_mask; pde = ldq_phys(pde_addr); if (!(pde & PG_PRESENT_MASK)) { return -1; @@ -912,11 +912,12 @@ target_phys_addr_t cpu_get_phys_page_debug(CPUState *env, target_ulong addr) pte = pde & ~((page_size - 1) & ~0xfff); /* align
to page_size */ } else { /* 4 KB page */ - pte_addr = ((pde & ~0xfff) + (((addr >> 12) & 0x1ff) << 3)) & - env->a20_mask; + pte_addr = ((pde & ~0xfff & ~(PG_NX_MASK | PG_HI_USER_MASK)) + + (((addr >> 12) & 0x1ff) << 3)) & env->a20_mask; page_size = 4096; pte = ldq_phys(pte_addr); } + pte &= ~(PG_NX_MASK | PG_HI_USER_MASK); if (!(pte & PG_PRESENT_MASK)) return -1; } else { -- 1.7.3.4
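The masking arithmetic in this patch can be checked in isolation. In the sketch below the constants are spelled out from the commit message (NX is bit 63; the user-definable bits are 52..62), using unsigned literals; the function name is illustrative, not from the patch:

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_PG_NX_MASK      (1ULL << 63)
#define DEMO_PG_HI_USER_MASK 0x7ff0000000000000ULL   /* bits 52..62 */

/* Strip NX and the user-definable high bits from a PTE/PDE before
 * using it as a physical address -- the fix applied at each paging
 * level in the patch above. */
static uint64_t pte_phys_bits(uint64_t pte)
{
    return pte & ~(DEMO_PG_NX_MASK | DEMO_PG_HI_USER_MASK);
}
```

Without this masking, an NX-marked entry looks like a huge physical address, which is exactly what tripped the assert in phys_page_find.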
Re: [Qemu-devel] [PATCH] gdbstub: Do not kill target in system emulation mode
Thanks, applied. I've been an accidental killer myself countless times. On Tue, Mar 6, 2012 at 17:32, Jan Kiszka jan.kis...@siemens.com wrote: Too many VM kittens were killed since 7d03f82f81. Another one just died under my fat fingers. When you quit a kgdb session, does the Linux kernel power off? Or when you terminate gdb attached to a hardware debugger, does your board vanish in space? No. So let's stop terminating QEMU when the gdbstub receives a kill command in system emulation mode. Real termination can still be achieved via monitor quit. We keep the behavior for user mode emulation, which is arguably more like a gdbserver scenario. Signed-off-by: Jan Kiszka jan.kis...@siemens.com --- gdbstub.c | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) diff --git a/gdbstub.c b/gdbstub.c index 7d470b6..ef95ac2 100644 --- a/gdbstub.c +++ b/gdbstub.c @@ -2062,9 +2062,11 @@ static int gdb_handle_packet(GDBState *s, const char *line_buf) goto unknown_command; } case 'k': +#ifdef CONFIG_USER_ONLY /* Kill the target */ fprintf(stderr, "\nQEMU: Terminated via GDBstub\n"); exit(0); +#endif case 'D': /* Detach packet */ gdb_breakpoint_remove_all(); -- 1.7.3.4
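The control flow this patch creates - in system-emulation builds the 'k' (kill) case no longer exits but falls through to 'D' (detach) - can be sketched with a stand-in macro (EMULATE_USER plays the role of CONFIG_USER_ONLY; the handler and its return values are illustrative):

```c
#include <assert.h>
#include <string.h>

/* Sketch of the gdbstub dispatch after the patch: with EMULATE_USER
 * undefined (system emulation), 'k' falls through into the 'D'
 * detach handling instead of terminating the emulator. */
static const char *handle_packet(char cmd)
{
    switch (cmd) {
    case 'k':
#ifdef EMULATE_USER
        return "terminated";      /* user-mode: kill really exits */
#endif
        /* fall through: system mode treats kill like detach */
    case 'D':
        return "detached";
    default:
        return "unknown";
    }
}
```

So after the patch, a stray kill packet from gdb detaches the debugger and leaves the VM (and its kittens) running.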
Re: [Qemu-devel] [PATCH] build: Include config-host.mak as soon as possible
Thanks, applied. On Tue, Mar 6, 2012 at 18:50, Lluís Vilanova vilan...@ac.upc.edu wrote: Current code depends on variables defined in config-host.mak before it is actually included. Signed-off-by: Lluís Vilanova vilan...@ac.upc.edu Cc: Anthony Liguori aligu...@us.ibm.com Cc: Paul Brook p...@codesourcery.com --- Makefile | 15 --- 1 files changed, 8 insertions(+), 7 deletions(-) diff --git a/Makefile b/Makefile index 49c775b..408065e 100644 --- a/Makefile +++ b/Makefile @@ -3,13 +3,7 @@ # Always point to the root of the build tree (needs GNU make). BUILD_DIR=$(CURDIR) -GENERATED_HEADERS = config-host.h trace.h qemu-options.def -ifeq ($(TRACE_BACKEND),dtrace) -GENERATED_HEADERS += trace-dtrace.h -endif -GENERATED_HEADERS += qmp-commands.h qapi-types.h qapi-visit.h -GENERATED_SOURCES += qmp-marshal.c qapi-types.c qapi-visit.c - +# All following code might depend on configuration variables ifneq ($(wildcard config-host.mak),) # Put the all: rule here so that config-host.mak can contain dependencies. all: build-all @@ -24,6 +18,13 @@ config-host.mak: @exit 1 endif +GENERATED_HEADERS = config-host.h trace.h qemu-options.def +ifeq ($(TRACE_BACKEND),dtrace) +GENERATED_HEADERS += trace-dtrace.h +endif +GENERATED_HEADERS += qmp-commands.h qapi-types.h qapi-visit.h +GENERATED_SOURCES += qmp-marshal.c qapi-types.c qapi-visit.c + # Don't try to regenerate Makefile or configure # We don't generate any of them Makefile: ;
Re: [Qemu-devel] [PATCH] cache-utils: Add missing include file for uintptr_t
Thanks, applied. On Mon, Mar 5, 2012 at 20:15, Stefan Weil s...@weilnetz.de wrote: Commit 021ecd8b9db37927059f5d3234b51ed766706437 breaks the build for PPC hosts because it uses uintptr_t without the necessary include file. uintptr_t is defined in stdint.h, so add this include. Cc: Alexander Graf ag...@suse.de Signed-off-by: Stefan Weil s...@weilnetz.de --- Hi Alex, could you please test whether my patch fixes the build problem? Thanks, Stefan cache-utils.h | 3 +++ 1 files changed, 3 insertions(+), 0 deletions(-) diff --git a/cache-utils.h b/cache-utils.h index 04a6e2e..2c57f78 100644 --- a/cache-utils.h +++ b/cache-utils.h @@ -2,6 +2,9 @@ #define QEMU_CACHE_UTILS_H #if defined(_ARCH_PPC) + +#include <stdint.h> /* uintptr_t */ + struct qemu_cache_conf { unsigned long dcache_bsize; unsigned long icache_bsize; -- 1.7.9
Re: [Qemu-devel] [PATCH] w64: Don't redefine lseek, ftruncate
Thanks, applied. On Sat, Mar 10, 2012 at 10:14, Stefan Weil s...@weilnetz.de wrote: MinGW-w64 already defines lseek and ftruncate (and uses the 64 bit variants). The conditional compilation avoids redefinitions (which would be wrong) and compiler warnings. Signed-off-by: Stefan Weil s...@weilnetz.de --- qemu-common.h | 8 ++-- 1 files changed, 6 insertions(+), 2 deletions(-) diff --git a/qemu-common.h b/qemu-common.h index dbfce6f..b0fdf5c 100644 --- a/qemu-common.h +++ b/qemu-common.h @@ -93,9 +93,13 @@ typedef int (*fprintf_function)(FILE *f, const char *fmt, ...) #ifdef _WIN32 #define fsync _commit -#define lseek _lseeki64 +#if !defined(lseek) +# define lseek _lseeki64 +#endif int qemu_ftruncate64(int, int64_t); -#define ftruncate qemu_ftruncate64 +#if !defined(ftruncate) +# define ftruncate qemu_ftruncate64 +#endif static inline char *realpath(const char *path, char *resolved_path) { -- 1.7.9
Re: [Qemu-devel] [PATCH] configure: Test for libiberty.a (mingw32)
Thanks, applied. On Sat, Mar 10, 2012 at 10:14, Stefan Weil s...@weilnetz.de wrote: MinGW-w64 and some versions of MinGW32 don't provide libiberty.a, so add this library only if it was found. Signed-off-by: Stefan Weil s...@weilnetz.de --- configure | 8 +++- 1 files changed, 7 insertions(+), 1 deletions(-) diff --git a/configure b/configure index ca25250..bb16498 100755 --- a/configure +++ b/configure @@ -511,7 +511,13 @@ if test $mingw32 = yes ; then QEMU_CFLAGS=-DWIN32_LEAN_AND_MEAN -DWINVER=0x501 $QEMU_CFLAGS # enable C99/POSIX format strings (needs mingw32-runtime 3.15 or later) QEMU_CFLAGS=-D__USE_MINGW_ANSI_STDIO=1 $QEMU_CFLAGS - LIBS=-lwinmm -lws2_32 -liberty -liphlpapi $LIBS + LIBS=-lwinmm -lws2_32 -liphlpapi $LIBS +cat > $TMPC <<EOF +int main(void) { return 0; } +EOF + if compile_prog -liberty ; then + LIBS=-liberty $LIBS + fi prefix=c:/Program Files/Qemu mandir=\${prefix} datadir=\${prefix} -- 1.7.9
Re: [Qemu-devel] [PATCH] tcg: Improve tcg_out_label and fix its usage for w64
Thanks, applied. On Sat, Mar 10, 2012 at 18:59, Stefan Weil s...@weilnetz.de wrote: tcg_out_label is always called with a third argument of pointer type which was cast to tcg_target_long. These casts can be avoided by changing the prototype of tcg_out_label. There was also a cast to long. For most hosts with sizeof(long) == sizeof(tcg_target_long) == sizeof(void *) this did not matter, but for w64 it was wrong. This is fixed now. Cc: Blue Swirl blauwir...@gmail.com Cc: Richard Henderson r...@twiddle.net Signed-off-by: Stefan Weil s...@weilnetz.de --- tcg/hppa/tcg-target.c | 8 tcg/i386/tcg-target.c | 8 tcg/sparc/tcg-target.c | 6 +++--- tcg/tcg.c | 6 +++--- 4 files changed, 14 insertions(+), 14 deletions(-) diff --git a/tcg/hppa/tcg-target.c b/tcg/hppa/tcg-target.c index 59d4d12..71f4a8a 100644 --- a/tcg/hppa/tcg-target.c +++ b/tcg/hppa/tcg-target.c @@ -1052,7 +1052,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, int opc) /* TLB Miss. */ /* label1: */ - tcg_out_label(s, lab1, (tcg_target_long)s->code_ptr); + tcg_out_label(s, lab1, s->code_ptr); argreg = TCG_REG_R26; tcg_out_mov(s, TCG_TYPE_I32, argreg--, addrlo_reg); @@ -1089,7 +1089,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, int opc) } /* label2: */ - tcg_out_label(s, lab2, (tcg_target_long)s->code_ptr); + tcg_out_label(s, lab2, s->code_ptr); #else tcg_out_qemu_ld_direct(s, datalo_reg, datahi_reg, addrlo_reg, (GUEST_BASE ? TCG_GUEST_BASE_REG : TCG_REG_R0), opc); @@ -1171,7 +1171,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, int opc) /* TLB Miss.
*/ /* label1: */ - tcg_out_label(s, lab1, (tcg_target_long)s->code_ptr); + tcg_out_label(s, lab1, s->code_ptr); argreg = TCG_REG_R26; tcg_out_mov(s, TCG_TYPE_I32, argreg--, addrlo_reg); @@ -1215,7 +1215,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, int opc) tcg_out_call(s, qemu_st_helpers[opc]); /* label2: */ - tcg_out_label(s, lab2, (tcg_target_long)s->code_ptr); + tcg_out_label(s, lab2, s->code_ptr); #else /* There are no indexed stores, so if GUEST_BASE is set we must do the add explicitly. Careful to avoid R20, which is used for the bswaps to follow. */ diff --git a/tcg/i386/tcg-target.c b/tcg/i386/tcg-target.c index dc81572..1dbe240 100644 --- a/tcg/i386/tcg-target.c +++ b/tcg/i386/tcg-target.c @@ -875,7 +875,7 @@ static void tcg_out_brcond2(TCGContext *s, const TCGArg *args, default: tcg_abort(); } - tcg_out_label(s, label_next, (tcg_target_long)s->code_ptr); + tcg_out_label(s, label_next, s->code_ptr); } #endif @@ -917,10 +917,10 @@ static void tcg_out_setcond2(TCGContext *s, const TCGArg *args, tcg_out_movi(s, TCG_TYPE_I32, args[0], 0); tcg_out_jxx(s, JCC_JMP, label_over, 1); - tcg_out_label(s, label_true, (tcg_target_long)s->code_ptr); + tcg_out_label(s, label_true, s->code_ptr); tcg_out_movi(s, TCG_TYPE_I32, args[0], 1); - tcg_out_label(s, label_over, (tcg_target_long)s->code_ptr); + tcg_out_label(s, label_over, s->code_ptr); } else { /* When the destination does not overlap one of the arguments, clear the destination first, jump if cond false, and emit an @@ -934,7 +934,7 @@ static void tcg_out_setcond2(TCGContext *s, const TCGArg *args, tcg_out_brcond2(s, new_args, const_args+1, 1); tgen_arithi(s, ARITH_ADD, args[0], 1, 0); - tcg_out_label(s, label_over, (tcg_target_long)s->code_ptr); + tcg_out_label(s, label_over, s->code_ptr); } } #endif diff --git a/tcg/sparc/tcg-target.c b/tcg/sparc/tcg-target.c index 5cd5a3b..4461fb4 100644 --- a/tcg/sparc/tcg-target.c +++ b/tcg/sparc/tcg-target.c @@ -582,7 +582,7 @@ static void
tcg_out_brcond2_i32(TCGContext *s, TCGCond cond, } tcg_out_nop(s); - tcg_out_label(s, label_next, (tcg_target_long)s->code_ptr); + tcg_out_label(s, label_next, s->code_ptr); } #endif @@ -628,7 +628,7 @@ static void tcg_out_setcond_i32(TCGContext *s, TCGCond cond, TCGArg ret, tcg_out_branch_i32(s, INSN_COND(tcg_cond_to_bcond[cond], 1), t); tcg_out_movi_imm13(s, ret, 1); tcg_out_movi_imm13(s, ret, 0); - tcg_out_label(s, t, (tcg_target_long)s->code_ptr); + tcg_out_label(s, t, s->code_ptr); #endif return; } @@ -683,7 +683,7 @@ static void tcg_out_setcond2_i32(TCGContext *s, TCGCond cond, TCGArg ret, tcg_out_setcond_i32(s, tcg_unsigned_cond(cond), ret, al, bl, blconst); - tcg_out_label(s, lab, (tcg_target_long)s->code_ptr); + tcg_out_label(s, lab, s->code_ptr); break; } } diff --git a/tcg/tcg.c b/tcg/tcg.c index cd2db3c..531db55 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@
Re: [Qemu-devel] [PATCH] Add missing const attributes for MemoryRegionOps
Thanks, applied. On Sat, Mar 10, 2012 at 19:15, Stefan Weil s...@weilnetz.de wrote: Am 05.02.2012 21:19, schrieb Stefan Weil: Most MemoryRegionOps already had the const attribute. This patch adds it to the remaining ones. Signed-off-by: Stefan Weils...@weilnetz.de --- hw/cuda.c | 2 +- hw/ide/ahci.c | 4 ++-- hw/ide/cmd646.c | 6 +++--- hw/ide/macio.c | 2 +- hw/ide/piix.c | 2 +- hw/ide/via.c | 2 +- hw/mipsnet.c | 2 +- hw/opencores_eth.c | 4 ++-- hw/spapr_pci.c | 2 +- 9 files changed, 13 insertions(+), 13 deletions(-) diff --git a/hw/cuda.c b/hw/cuda.c index 4077436..233ab66 100644 --- a/hw/cuda.c +++ b/hw/cuda.c @@ -634,7 +634,7 @@ static uint32_t cuda_readl (void *opaque, target_phys_addr_t addr) return 0; } -static MemoryRegionOps cuda_ops = { +static const MemoryRegionOps cuda_ops = { .old_mmio = { .write = { cuda_writeb, diff --git a/hw/ide/ahci.c b/hw/ide/ahci.c index 630d572..cc54590 100644 --- a/hw/ide/ahci.c +++ b/hw/ide/ahci.c @@ -365,7 +365,7 @@ static void ahci_mem_write(void *opaque, target_phys_addr_t addr, } -static MemoryRegionOps ahci_mem_ops = { +static const MemoryRegionOps ahci_mem_ops = { .read = ahci_mem_read, .write = ahci_mem_write, .endianness = DEVICE_LITTLE_ENDIAN, @@ -401,7 +401,7 @@ static void ahci_idp_write(void *opaque, target_phys_addr_t addr, } } -static MemoryRegionOps ahci_idp_ops = { +static const MemoryRegionOps ahci_idp_ops = { .read = ahci_idp_read, .write = ahci_idp_write, .endianness = DEVICE_LITTLE_ENDIAN, diff --git a/hw/ide/cmd646.c b/hw/ide/cmd646.c index d78ed69..0aac94a 100644 --- a/hw/ide/cmd646.c +++ b/hw/ide/cmd646.c @@ -65,7 +65,7 @@ static void cmd646_cmd_write(void *opaque, target_phys_addr_t addr, ide_cmd_write(cmd646bar-bus, addr + 2, data); } -static MemoryRegionOps cmd646_cmd_ops = { +static const MemoryRegionOps cmd646_cmd_ops = { .read = cmd646_cmd_read, .write = cmd646_cmd_write, .endianness = DEVICE_LITTLE_ENDIAN, @@ -104,7 +104,7 @@ static void cmd646_data_write(void *opaque, target_phys_addr_t addr, } } 
-static MemoryRegionOps cmd646_data_ops = { +static const MemoryRegionOps cmd646_data_ops = { .read = cmd646_data_read, .write = cmd646_data_write, .endianness = DEVICE_LITTLE_ENDIAN, @@ -193,7 +193,7 @@ static void bmdma_write(void *opaque, target_phys_addr_t addr, } } -static MemoryRegionOps cmd646_bmdma_ops = { +static const MemoryRegionOps cmd646_bmdma_ops = { .read = bmdma_read, .write = bmdma_write, }; diff --git a/hw/ide/macio.c b/hw/ide/macio.c index a827d81..2c4027d 100644 --- a/hw/ide/macio.c +++ b/hw/ide/macio.c @@ -291,7 +291,7 @@ static uint32_t pmac_ide_readl (void *opaque,target_phys_addr_t addr) return retval; } -static MemoryRegionOps pmac_ide_ops = { +static const MemoryRegionOps pmac_ide_ops = { .old_mmio = { .write = { pmac_ide_writeb, diff --git a/hw/ide/piix.c b/hw/ide/piix.c index a472bff..c524f55 100644 --- a/hw/ide/piix.c +++ b/hw/ide/piix.c @@ -79,7 +79,7 @@ static void bmdma_write(void *opaque, target_phys_addr_t addr, } } -static MemoryRegionOps piix_bmdma_ops = { +static const MemoryRegionOps piix_bmdma_ops = { .read = bmdma_read, .write = bmdma_write, }; diff --git a/hw/ide/via.c b/hw/ide/via.c index 2771f0c..ad6f302 100644 --- a/hw/ide/via.c +++ b/hw/ide/via.c @@ -82,7 +82,7 @@ static void bmdma_write(void *opaque, target_phys_addr_t addr, } } -static MemoryRegionOps via_bmdma_ops = { +static const MemoryRegionOps via_bmdma_ops = { .read = bmdma_read, .write = bmdma_write, }; diff --git a/hw/mipsnet.c b/hw/mipsnet.c index a0e6c9f..1b49a79 100644 --- a/hw/mipsnet.c +++ b/hw/mipsnet.c @@ -224,7 +224,7 @@ static NetClientInfo net_mipsnet_info = { .cleanup = mipsnet_cleanup, }; -static MemoryRegionOps mipsnet_ioport_ops = { +static const MemoryRegionOps mipsnet_ioport_ops = { .read = mipsnet_ioport_read, .write = mipsnet_ioport_write, .impl.min_access_size = 1, diff --git a/hw/opencores_eth.c b/hw/opencores_eth.c index 09f2757..6f3f5fc 100644 --- a/hw/opencores_eth.c +++ b/hw/opencores_eth.c @@ -692,12 +692,12 @@ static void 
open_eth_desc_write(void *opaque, } -static MemoryRegionOps open_eth_reg_ops = { +static const MemoryRegionOps open_eth_reg_ops = { .read = open_eth_reg_read, .write = open_eth_reg_write, }; -static MemoryRegionOps open_eth_desc_ops = { +static const MemoryRegionOps open_eth_desc_ops = { .read = open_eth_desc_read, .write = open_eth_desc_write, }; diff --git a/hw/spapr_pci.c b/hw/spapr_pci.c index ed2e4b3..3c08d57 100644 --- a/hw/spapr_pci.c +++ b/hw/spapr_pci.c @@ -281,7 +281,7 @@ static void spapr_io_write(void
[Qemu-devel] [PATCH 1/2] console: add some trace events
Signed-off-by: Alon Levy al...@redhat.com --- console.h|3 +++ trace-events |4 2 files changed, 7 insertions(+), 0 deletions(-) diff --git a/console.h b/console.h index a95b581..4334db5 100644 --- a/console.h +++ b/console.h @@ -5,6 +5,7 @@ #include qdict.h #include notify.h #include monitor.h +#include trace.h /* keyboard/mouse support */ @@ -202,11 +203,13 @@ static inline DisplaySurface* qemu_create_displaysurface(DisplayState *ds, int w static inline DisplaySurface* qemu_resize_displaysurface(DisplayState *ds, int width, int height) { +trace_displaysurface_resize(ds, ds-surface, width, height); return ds-allocator-resize_displaysurface(ds-surface, width, height); } static inline void qemu_free_displaysurface(DisplayState *ds) { +trace_displaysurface_free(ds, ds-surface); ds-allocator-free_displaysurface(ds-surface); } diff --git a/trace-events b/trace-events index c5d0f0f..94c4a6f 100644 --- a/trace-events +++ b/trace-events @@ -658,3 +658,7 @@ dma_aio_cancel(void *dbs) dbs=%p dma_complete(void *dbs, int ret, void *cb) dbs=%p ret=%d cb=%p dma_bdrv_cb(void *dbs, int ret) dbs=%p ret=%d dma_map_wait(void *dbs) dbs=%p + +# console.h +displaysurface_free(void *display_state, void *display_surface) state=%p surface=%p +displaysurface_resize(void *display_state, void *display_surface, int width, int height) state=%p surface=%p %dx%d -- 1.7.9.1
[Qemu-devel] [PATCH 2/2] vga: add trace event for ppm_save
Signed-off-by: Alon Levy al...@redhat.com --- hw/vga.c |2 ++ trace-events |3 +++ 2 files changed, 5 insertions(+), 0 deletions(-) diff --git a/hw/vga.c b/hw/vga.c index 5994f43..6dc98f6 100644 --- a/hw/vga.c +++ b/hw/vga.c @@ -30,6 +30,7 @@ #include pixel_ops.h #include qemu-timer.h #include xen.h +#include trace.h //#define DEBUG_VGA //#define DEBUG_VGA_MEM @@ -2372,6 +2373,7 @@ int ppm_save(const char *filename, struct DisplaySurface *ds) int ret; char *linebuf, *pbuf; +trace_ppm_save(filename, ds); f = fopen(filename, wb); if (!f) return -1; diff --git a/trace-events b/trace-events index 94c4a6f..dfe28ed 100644 --- a/trace-events +++ b/trace-events @@ -662,3 +662,6 @@ dma_map_wait(void *dbs) dbs=%p # console.h displaysurface_free(void *display_state, void *display_surface) state=%p surface=%p displaysurface_resize(void *display_state, void *display_surface, int width, int height) state=%p surface=%p %dx%d + +# vga.c +ppm_save(const char *filename, void *display_surface) %s surface=%p -- 1.7.9.1
Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt
On Sun, Mar 11, 2012 at 10:33:15AM -0500, Anthony Liguori wrote: On 03/11/2012 09:56 AM, Gleb Natapov wrote: On Sun, Mar 11, 2012 at 09:12:58AM -0500, Anthony Liguori wrote: -cpu best wouldn't solve this. You need a read/write configuration file where QEMU probes the available CPU and records it to be used for the lifetime of the VM. That's what I thought too, but this shouldn't be the case (Avi's idea). We need two things: 1) CPU model config should be per machine type. 2) QEMU should refuse to start if it cannot create the CPU exactly as specified by the model config. This would either mean: A. pc-1.1 uses -cpu best with a fixed mask for 1.1 B. pc-1.1 hardcodes Westmere or some other family This would mean neither A nor B. Maybe it wasn't clear, but I didn't talk about -cpu best above. I am talking about any CPU model with a fixed meaning (not host or best, which are host-CPU dependent). Let's take Nehalem for example (just to move on from Westmere :)). Currently it has level=2. Eduardo wants to fix it to be 11, but old guests, installed with -cpu Nehalem, should see exactly the same CPU. How do you do it? Have a different Nehalem definition for pc-1.0 (with level=2) and pc-1.1 (with level=11). Let's get back to Westmere. It actually has level=11, but that only exposes another problem. The Kernel 3.3 and qemu-1.1 combo will support the architectural PMU, which is exposed in cpuid leaf 10. We do not want guests installed with -cpu Westmere and qemu-1.0 to see an architectural PMU after an upgrade. How do you do it? Have different Westmere definitions for pc-1.0 (does not report PMU) and pc-1.1 (reports PMU). What happens if you try to run qemu-1.1 -cpu Westmere on a kernel < 3.3 (without PMU support)? Qemu will fail to start. (A) would imply a different CPU if you moved the machine from one system to another. I would think this would be very problematic from a user's perspective. (B) would imply that we had to choose the least common denominator, which is essentially what we do today with qemu64.
If you want to just switch qemu64 to Conroe, I don't think that's a huge difference from what we have today. It's a discussion about how we handle this up and down the stack. The question is who should define and manage CPU compatibility. Right now QEMU does to a certain degree, libvirt discards this and does its own thing, and VDSM/ovirt-engine assume that we're providing something and have built a UI around it. If we want QEMU to be usable without a management layer then QEMU should provide stable CPU models. Stable in the sense that a qemu, kernel, or CPU upgrade does not change what the guest sees. We do this today by exposing -cpu qemu64 by default. If all you're advocating is doing -cpu Conroe by default, that's fine. I am not advocating that. I am saying we should be able to amend the qemu64 definition without breaking older guests that use it. But I fail to see where this fits into the larger discussion here. The problem to solve is: I want to use the largest possible subset of CPU features available uniformly throughout my datacenter. QEMU and libvirt have single-node views, so they cannot solve this problem on their own. Whether that subset is a generic Westmere-like processor that never existed IRL or a specific Westmere processor seems like a decision that should be made by the datacenter-level manager rather than at the node level. If I have a homogeneous environment of Xeon 7540, I would probably like to see a Xeon 7540 in my guest. Doesn't it make sense to enable the management tool to make this decision? Of course neither QEMU nor libvirt can make a cluster-wide decision. If QEMU provides sane CPU model definitions (usable even with -nodefconfig) it would always be possible to find the model that fits best. If the oldest CPU in the data center is Nehalem then probably -cpu Nehalem will do.
But our CPU model definitions have a lot of shortcomings, and we were talking with Eduardo about how to fix them when he brought this thread back to life, so maybe I stirred the discussion a little bit in the wrong direction, but I do think those things are connected. If QEMU's CPU model definitions are not stable across upgrades, how can we tell management that it is safe to use them? Instead they insist on reimplementing the same logic in the management layer and do it badly (because of the lack of information). -- Gleb.
Re: [Qemu-devel] [PATCH 1/1] vmware_vga: stop crashing
Can confirm that this patch fixes a crash which also occurred here. Since the window was out of the VNC window, the crash was reproducible, and with the patch it is reproducibly gone. Tested-by: Gerhard Wiesinger li...@wiesinger.com Please apply ASAP. Ciao, Gerhard -- http://www.wiesinger.com/ On Mon, 5 Mar 2012, Serge Hallyn wrote: if x or y < 0, set them to 0 (and decrement width/height accordingly). I don't know where the best place to catch this would be, but with vnc and vmware_vga it's possible to get set_bit called on a negative index, crashing qemu. See https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/918791 for details. This patch prevents that. It's possible this should be caught earlier, but this patch works for me. Changelog: Mar 5: As Ryan Harper pointed out, don't mix tabs+spaces, and put {} around all conditionals. Signed-off-by: Serge Hallyn serge.hal...@canonical.com --- hw/vmware_vga.c | 18 ++ 1 files changed, 18 insertions(+), 0 deletions(-)
diff --git a/hw/vmware_vga.c b/hw/vmware_vga.c
index 142d9f4..c94f9f3 100644
--- a/hw/vmware_vga.c
+++ b/hw/vmware_vga.c
@@ -298,6 +298,24 @@ static inline void vmsvga_update_rect(struct vmsvga_state_s *s,
     uint8_t *src;
     uint8_t *dst;
 
+    if (x < 0) {
+        fprintf(stderr, "%s: update x was < 0 (%d, w %d)\n",
+                __FUNCTION__, x, w);
+        w += x;
+        if (w < 0) {
+            return;
+        }
+        x = 0;
+    }
+    if (y < 0) {
+        fprintf(stderr, "%s: update y was < 0 (%d, h %d)\n",
+                __FUNCTION__, y, h);
+        h += y;
+        if (h < 0) {
+            return;
+        }
+        y = 0;
+    }
     if (x + w > s->width) {
         fprintf(stderr, "%s: update width too large x: %d, w: %d\n",
                 __FUNCTION__, x, w);
-- 
1.7.9
Re: [Qemu-devel] GSoC - Tracepoint support for the gdbstub
On 2012-03-11 15:04, Lyu Mitnick wrote: Hello all, I am Mitnick Lyu, who wants to contribute to QEMU and participate in Google Summer of Code this year. I have some experience with tool-chain development and I am highly interested in the project: Tracepoint support for the gdbstub. I am wondering whether someone is already working on this project. Great to hear! You are the first one interested in it this year. But QEMU is not yet even confirmed as a participating organization. Feel invited to apply once that phase has started! If you have questions or suggestions regarding the task beforehand, just let us know. Jan
Re: [Qemu-devel] [PATCH 0/8] Add GTK UI to enable basic accessibility (v2)
Am 27.02.2012 00:46, schrieb Anthony Liguori: I realize UIs are the third rail of QEMU development, but over the years I've gotten a lot of feedback from users about our UI. I think everyone struggles with the SDL interface and its lack of discoverability but it's worse than I think most people realize for users that rely on accessibility tools. The two pieces of feedback I've gotten the most re: accessibility are the lack of QEMU's enablement for screen readers and the lack of configurable accelerators. Since we render our own terminal using a fixed sized font, we don't respect system font settings which means we ignore if the user has configured large print. We also don't integrate at all with screen readers which means that for blind users, the virtual consoles may as well not even exist. We also don't allow any type of configuration of accelerators. For users with limited dexterity (this is actually more common than you would think), they may use an input device that only inputs one key at a time. Holding down two keys at once is not possible for these users. These are solved problems though and while we could reinvent all of this ourselves with SDL, we would be crazy if we did. Modern toolkits, like GTK, solve these problems. By using GTK, we can leverage VteTerminal for screen reader integration and font configuration. We can also use GTK's accelerator support to make accelerators configurable (Gnome provides a global accelerator configuration interface). I'm not attempting to make a pretty desktop virtualization UI. Maybe we'll go there eventually but that's not what this series is about. This is just attempting to use a richer toolkit such that we can enable basic accessibility support. As a consequence, the UI is much more usable even for a user without accessibility requirements so it's a win-win. Also available at: https://github.com/aliguori/qemu/tree/gtk.2 --- v1 - v2 - Add internationalization support. 
I don't actually speak any other languages so I added a placeholder for a German translation. This can be tested with LANGUAGE=de_DE.UTF-8 qemu-system-x86_64 - Fixed the terminal size for VteTerminal widgets. I think the behavior makes sense now. - Fixed lots of issues raised in review comments (see individual patches) Known Issues: - I saw the X crash once. I think it has to do with widget sizes. I need to work harder to reproduce. - I've not recreated the reported memory leak yet. - I haven't added backwards compatibility code for older VteTerminal widgets yet. Hi Anthony, are you still working on a new version of this patch series? I suggest to commit a slightly modified version of v2 which adds the GTK UI as an optional user interface (only enabled by a configure option). This makes testing easier and allows developers to send patches which improve the new UI. As soon as the GTK UI is considered stable and usable, the default could be changed from SDL to GTK. Regards, Stefan Weil PS. Of course the committed patches should pass checkpatch.pl without errors.
[Qemu-devel] [PATCH 0/4] fix qxl screendump using monitor_suspend
This patchset starts and ends with trace event additions that make it easier to see the change. It applies on top of http://lists.gnu.org/archive/html/qemu-devel/2012-03/msg01784.html due to trace-events.

The problem addressed by this patchset is that after recent fixes (81fb6f) screendump with a qxl device in native mode saves a stale screen dump. The solution is to use monitor_suspend and monitor_release in qxl's implementation. This is done by:

1. introducing an extra parameter to vga_hw_screen_dump/hw_vga_dump:
   console: pass Monitor to vga_hw_screen_dump/hw_vga_dump
2. using it in qxl via a bh:
   qxl-render: call ppm_save on bh

Additional patches add trace events to qxl and qxl_render, making it easy to see the difference.

events setup (using the stderr backend):
(qemu) trace-event ppm_save on
(qemu) trace-event qxl* on
(qemu) trace-event qxl_interface_get_command_enter off
(qemu) trace-event qxl_interface_release_resource off
(qemu) trace-event qxl_interface_get_command_ret off

before (ppm_save done before update):
(qemu) screendump /tmp/a.ppm
ppm_save /tmp/a.ppm surface=0x7fc0267b3ad0
qxl_interface_update_area_complete surface=0 [152,160,464,480] #=1
qxl_interface_update_area_complete_schedule_bh #dirty=1
qxl_render_update_area_done 0x7fc02b603db0
(qemu) qxl_blit stride=-2560 [152, 160, 464, 480]

after:
(qemu) screendump /tmp/a.ppm
qxl_interface_update_area_complete surface=0 [152,160,464,480] #=1
qxl_interface_update_area_complete_schedule_bh #dirty=1
qxl_render_update_area_done 0x7f407af72210
qxl_render_ppm_save_bh 0x7f407f845b60 (primary 0x7f401bc0)
qxl_blit stride=-2560 [152, 160, 464, 480]
ppm_save /tmp/a.ppm surface=0x7f4077204ad0
(qemu)

Note: This doesn't address a possible libvirt problem that was mentioned at length before, but since it has not been reproduced it will be fixed when it is. Meanwhile other users like autotest will be fixed by this patch (by "fix" I mean screendump will produce the correct output).
Alon Levy (4): qxl: switch qxl.c to trace-events qxl/qxl_render.c: add trace events console: pass Monitor to vga_hw_screen_dump/hw_vga_dump qxl-render: call ppm_save on bh console.c |4 +- console.h |5 +- hw/blizzard.c |2 +- hw/g364fb.c|3 +- hw/omap_dss.c |4 +- hw/omap_lcdc.c |3 +- hw/qxl-render.c| 95 +++-- hw/qxl.c | 150 --- hw/qxl.h |2 +- hw/sm501.c |4 +- hw/tcx.c | 12 +++-- hw/vga.c |6 ++- hw/vmware_vga.c|5 +- monitor.c |2 +- trace-events | 55 +++ ui/spice-display.h |3 + 16 files changed, 240 insertions(+), 115 deletions(-) -- 1.7.9.1
[Qemu-devel] [PATCH 2/4] qxl/qxl_render.c: add trace events
Signed-off-by: Alon Levy al...@redhat.com --- hw/qxl-render.c | 13 - trace-events|6 ++ 2 files changed, 10 insertions(+), 9 deletions(-) diff --git a/hw/qxl-render.c b/hw/qxl-render.c index 25857f6..74e7ea3 100644 --- a/hw/qxl-render.c +++ b/hw/qxl-render.c @@ -31,11 +31,10 @@ static void qxl_blit(PCIQXLDevice *qxl, QXLRect *rect) return; } if (!qxl-guest_primary.data) { -dprint(qxl, 1, %s: initializing guest_primary.data\n, __func__); +trace_qxl_blit_guest_primary_initialized(); qxl-guest_primary.data = memory_region_get_ram_ptr(qxl-vga.vram); } -dprint(qxl, 2, %s: stride %d, [%d, %d, %d, %d]\n, __func__, -qxl-guest_primary.qxl_stride, +trace_qxl_blit(qxl-guest_primary.qxl_stride, rect-left, rect-right, rect-top, rect-bottom); src = qxl-guest_primary.data; if (qxl-guest_primary.qxl_stride 0) { @@ -107,8 +106,7 @@ static void qxl_render_update_area_unlocked(PCIQXLDevice *qxl) qxl-guest_primary.data = memory_region_get_ram_ptr(qxl-vga.vram); qxl_set_rect_to_surface(qxl, qxl-dirty[0]); qxl-num_dirty_rects = 1; -dprint(qxl, 1, %s: %dx%d, stride %d, bpp %d, depth %d\n, - __FUNCTION__, +trace_qxl_guest_primary_resized( qxl-guest_primary.surface.width, qxl-guest_primary.surface.height, qxl-guest_primary.qxl_stride, @@ -118,8 +116,6 @@ static void qxl_render_update_area_unlocked(PCIQXLDevice *qxl) if (surface-width != qxl-guest_primary.surface.width || surface-height != qxl-guest_primary.surface.height) { if (qxl-guest_primary.qxl_stride 0) { -dprint(qxl, 1, %s: using guest_primary for displaysurface\n, - __func__); qemu_free_displaysurface(vga-ds); qemu_create_displaysurface_from(qxl-guest_primary.surface.width, qxl-guest_primary.surface.height, @@ -127,8 +123,6 @@ static void qxl_render_update_area_unlocked(PCIQXLDevice *qxl) qxl-guest_primary.abs_stride, qxl-guest_primary.data); } else { -dprint(qxl, 1, %s: resizing displaysurface to guest_primary\n, - __func__); qemu_resize_displaysurface(vga-ds, qxl-guest_primary.surface.width, qxl-guest_primary.surface.height); @@ 
-187,6 +181,7 @@ void qxl_render_update_area_bh(void *opaque) void qxl_render_update_area_done(PCIQXLDevice *qxl, QXLCookie *cookie) { qemu_mutex_lock(qxl-ssd.lock); +trace_qxl_render_update_area_done(cookie); qemu_bh_schedule(qxl-update_area_bh); qxl-render_update_cookie_num--; qemu_mutex_unlock(qxl-ssd.lock); diff --git a/trace-events b/trace-events index 0853a1b..a66aee8 100644 --- a/trace-events +++ b/trace-events @@ -712,3 +712,9 @@ qxl_post_load_enter(void) qxl_post_load_restore_mode(const char *mode) %s qxl_post_load_exit(void) qxl_interface_update_area_complete(uint32_t surface_id, uint32_t dirty_left, uint32_t dirty_right, uint32_t dirty_top, uint32_t dirty_bottom, uint32_t num_updated_rects) surface=%d [%d,%d,%d,%d] #=%d + +# hw/qxl-render.c +qxl_blit_guest_primary_initialized(void) +qxl_blit(int32_t stride, int32_t left, int32_t right, int32_t top, int32_t bottom) stride=%d [%d, %d, %d, %d] +qxl_guest_primary_resized(int32_t width, int32_t height, int32_t stride, int32_t bytes_pp, int32_t bits_pp) %dx%d, stride %d, bpp %d, depth %d +qxl_render_update_area_done(void *cookie) %p -- 1.7.9.1
[Qemu-devel] [PATCH 3/4] console: pass Monitor to vga_hw_screen_dump/hw_vga_dump
Passes the Monitor ptr to the screendump implementation to all for monitor suspend and resume for qxl to fix screendump regression. graphics_console_init signature change required touching every implemented of screen_dump. There is no change other then an added parameter. qxl will make use of it in the next patch. compiles with ./configure Signed-off-by: Alon Levy al...@redhat.com --- console.c |4 ++-- console.h |5 +++-- hw/blizzard.c |2 +- hw/g364fb.c |3 ++- hw/omap_dss.c |4 +++- hw/omap_lcdc.c |3 ++- hw/qxl.c|5 +++-- hw/sm501.c |4 ++-- hw/tcx.c| 12 hw/vga.c|6 -- hw/vmware_vga.c |5 +++-- monitor.c |2 +- 12 files changed, 34 insertions(+), 21 deletions(-) diff --git a/console.c b/console.c index 6a463f5..3e386fc 100644 --- a/console.c +++ b/console.c @@ -173,7 +173,7 @@ void vga_hw_invalidate(void) active_console-hw_invalidate(active_console-hw); } -void vga_hw_screen_dump(const char *filename) +void vga_hw_screen_dump(const char *filename, Monitor *mon) { TextConsole *previous_active_console; bool cswitch; @@ -187,7 +187,7 @@ void vga_hw_screen_dump(const char *filename) console_select(0); } if (consoles[0] consoles[0]-hw_screen_dump) { -consoles[0]-hw_screen_dump(consoles[0]-hw, filename, cswitch); +consoles[0]-hw_screen_dump(consoles[0]-hw, filename, cswitch, mon); } else { error_report(screen dump not implemented); } diff --git a/console.h b/console.h index 4334db5..0d2cf30 100644 --- a/console.h +++ b/console.h @@ -343,7 +343,8 @@ static inline void console_write_ch(console_ch_t *dest, uint32_t ch) typedef void (*vga_hw_update_ptr)(void *); typedef void (*vga_hw_invalidate_ptr)(void *); -typedef void (*vga_hw_screen_dump_ptr)(void *, const char *, bool cswitch); +typedef void (*vga_hw_screen_dump_ptr)(void *, const char *, bool cswitch, + Monitor *mon); typedef void (*vga_hw_text_update_ptr)(void *, console_ch_t *); DisplayState *graphic_console_init(vga_hw_update_ptr update, @@ -354,7 +355,7 @@ DisplayState *graphic_console_init(vga_hw_update_ptr update, void 
vga_hw_update(void); void vga_hw_invalidate(void); -void vga_hw_screen_dump(const char *filename); +void vga_hw_screen_dump(const char *filename, Monitor *mon); void vga_hw_text_update(console_ch_t *chardata); int is_graphic_console(void); diff --git a/hw/blizzard.c b/hw/blizzard.c index c7d844d..8ccea7f 100644 --- a/hw/blizzard.c +++ b/hw/blizzard.c @@ -933,7 +933,7 @@ static void blizzard_update_display(void *opaque) } static void blizzard_screen_dump(void *opaque, const char *filename, - bool cswitch) + bool cswitch, Monitor *mon) { BlizzardState *s = (BlizzardState *) opaque; diff --git a/hw/g364fb.c b/hw/g364fb.c index 3a0b68f..f89000c 100644 --- a/hw/g364fb.c +++ b/hw/g364fb.c @@ -289,7 +289,8 @@ static void g364fb_reset(G364State *s) g364fb_invalidate_display(s); } -static void g364fb_screen_dump(void *opaque, const char *filename, bool cswitch) +static void g364fb_screen_dump(void *opaque, const char *filename, bool cswitch, + Monitor *mon) { G364State *s = opaque; int y, x; diff --git a/hw/omap_dss.c b/hw/omap_dss.c index 86ed6ea..b4a1a93 100644 --- a/hw/omap_dss.c +++ b/hw/omap_dss.c @@ -1072,7 +1072,9 @@ struct omap_dss_s *omap_dss_init(struct omap_target_agent_s *ta, #if 0 s-state = graphic_console_init(omap_update_display, -omap_invalidate_display, omap_screen_dump, s); +omap_invalidate_display, +omap_screen_dump, +NULL, s); #endif return s; diff --git a/hw/omap_lcdc.c b/hw/omap_lcdc.c index f172093..ed2325d 100644 --- a/hw/omap_lcdc.c +++ b/hw/omap_lcdc.c @@ -264,7 +264,8 @@ static int ppm_save(const char *filename, uint8_t *data, return 0; } -static void omap_screen_dump(void *opaque, const char *filename, bool cswitch) +static void omap_screen_dump(void *opaque, const char *filename, bool cswitch, + Monitor *mon) { struct omap_lcd_panel_s *omap_lcd = opaque; if (cswitch) { diff --git a/hw/qxl.c b/hw/qxl.c index 7857731..cabea3b 100644 --- a/hw/qxl.c +++ b/hw/qxl.c @@ -1486,7 +1486,8 @@ static void qxl_hw_invalidate(void *opaque) vga-invalidate(vga); 
} -static void qxl_hw_screen_dump(void *opaque, const char *filename, bool cswitch) +static void qxl_hw_screen_dump(void *opaque, const char *filename, bool cswitch, + Monitor *mon) { PCIQXLDevice *qxl = opaque; VGACommonState *vga = qxl-vga; @@ -1498,7 +1499,7 @@ static void qxl_hw_screen_dump(void *opaque, const char *filename, bool
[Qemu-devel] [PATCH 4/4] qxl-render: call ppm_save on bh
Uses the passed Monitor* to suspend and resume the monitor. Signed-off-by: Alon Levy al...@redhat.com --- hw/qxl-render.c| 82 +++ hw/qxl.c |5 +-- hw/qxl.h |2 +- trace-events |2 + ui/spice-display.h |3 ++ 5 files changed, 83 insertions(+), 11 deletions(-) diff --git a/hw/qxl-render.c b/hw/qxl-render.c index 74e7ea3..16340d0 100644 --- a/hw/qxl-render.c +++ b/hw/qxl-render.c @@ -19,6 +19,7 @@ * along with this program; if not, see http://www.gnu.org/licenses/. */ +#include console.h #include qxl.h static void qxl_blit(PCIQXLDevice *qxl, QXLRect *rect) @@ -142,12 +143,74 @@ static void qxl_render_update_area_unlocked(PCIQXLDevice *qxl) } /* + * struct used just for ppm save bh. We don't actually support multiple qxl + * screendump yet, but a) we will, and b) exporting qxl0 from qxl.c looks + * uglier imo. + */ +typedef struct QXLPPMSaveBHData { +PCIQXLDevice *qxl; +QXLCookie *cookie; +} QXLPPMSaveBHData; + +static void qxl_render_ppm_save_bh(void *opaque); + +static QXLCookie *qxl_cookie_render_new(PCIQXLDevice *qxl, const char *filename, +Monitor *mon) +{ +QXLPPMSaveBHData *ppm_save_bh_data; +QEMUBH *ppm_save_bh; +QXLCookie *cookie = qxl_cookie_new(QXL_COOKIE_TYPE_RENDER_UPDATE_AREA, + 0); + +qxl_set_rect_to_surface(qxl, cookie-u.render.area); +if (filename) { +ppm_save_bh_data = g_malloc0(sizeof(*ppm_save_bh_data)); +ppm_save_bh_data-qxl = qxl; +ppm_save_bh_data-cookie = cookie; +ppm_save_bh = qemu_bh_new(qxl_render_ppm_save_bh, ppm_save_bh_data); +cookie-u.render.filename = g_strdup(filename); +cookie-u.render.ppm_save_bh = ppm_save_bh; +cookie-u.render.mon = mon; +monitor_suspend(mon); +} +return cookie; +} + +static void qxl_cookie_render_free(PCIQXLDevice *qxl, QXLCookie *cookie) +{ +g_free(cookie-u.render.filename); +if (cookie-u.render.mon) { +monitor_resume(cookie-u.render.mon); +} +g_free(cookie); +--qxl-render_update_cookie_num; +} + +static void qxl_render_ppm_save_bh(void *opaque) +{ +QXLPPMSaveBHData *data = opaque; +PCIQXLDevice *qxl = data-qxl; 
+QXLCookie *cookie = data-cookie; +QEMUBH *bh = cookie-u.render.ppm_save_bh; + +qemu_mutex_lock(qxl-ssd.lock); +trace_qxl_render_ppm_save_bh( + qxl-ssd.ds-surface-data, qxl-guest_primary.data); +qxl_render_update_area_unlocked(qxl); +ppm_save(cookie-u.render.filename, qxl-ssd.ds-surface); +qxl_cookie_render_free(qxl, cookie); +qemu_mutex_unlock(qxl-ssd.lock); +g_free(data); +qemu_bh_delete(bh); +} + +/* * use ssd.lock to protect render_update_cookie_num. * qxl_render_update is called by io thread or vcpu thread, and the completion * callbacks are called by spice_server thread, defering to bh called from the * io thread. */ -void qxl_render_update(PCIQXLDevice *qxl) +void qxl_render_update(PCIQXLDevice *qxl, const char *filename, Monitor *mon) { QXLCookie *cookie; @@ -155,6 +218,10 @@ void qxl_render_update(PCIQXLDevice *qxl) if (!runstate_is_running() || !qxl-guest_primary.commands) { qxl_render_update_area_unlocked(qxl); +if (filename) { +trace_qxl_render_update_screendump_no_update(); +ppm_save(filename, qxl-ssd.ds-surface); +} qemu_mutex_unlock(qxl-ssd.lock); return; } @@ -162,9 +229,7 @@ void qxl_render_update(PCIQXLDevice *qxl) qxl-guest_primary.commands = 0; qxl-render_update_cookie_num++; qemu_mutex_unlock(qxl-ssd.lock); -cookie = qxl_cookie_new(QXL_COOKIE_TYPE_RENDER_UPDATE_AREA, -0); -qxl_set_rect_to_surface(qxl, cookie-u.render.area); +cookie = qxl_cookie_render_new(qxl, filename, mon); qxl_spice_update_area(qxl, 0, cookie-u.render.area, NULL, 0, 1 /* clear_dirty_region */, QXL_ASYNC, cookie); } @@ -182,10 +247,13 @@ void qxl_render_update_area_done(PCIQXLDevice *qxl, QXLCookie *cookie) { qemu_mutex_lock(qxl-ssd.lock); trace_qxl_render_update_area_done(cookie); -qemu_bh_schedule(qxl-update_area_bh); -qxl-render_update_cookie_num--; +if (cookie-u.render.filename) { +qemu_bh_schedule(cookie-u.render.ppm_save_bh); +} else { +qemu_bh_schedule(qxl-update_area_bh); +qxl_cookie_render_free(qxl, cookie); +} qemu_mutex_unlock(qxl-ssd.lock); -g_free(cookie); } 
static QEMUCursor *qxl_cursor(PCIQXLDevice *qxl, QXLCursor *cursor) diff --git a/hw/qxl.c b/hw/qxl.c index cabea3b..fae5be8 100644 --- a/hw/qxl.c +++ b/hw/qxl.c @@ -1471,7 +1471,7 @@ static void qxl_hw_update(void *opaque) break; case QXL_MODE_COMPAT: case QXL_MODE_NATIVE: -qxl_render_update(qxl); +qxl_render_update(qxl, NULL, NULL);
[Qemu-devel] [PATCH 1/4] qxl: switch qxl.c to trace-events
dprint is still used for qxl_init_common one time prints. Signed-off-by: Alon Levy al...@redhat.com --- hw/qxl.c | 140 +++-- trace-events | 47 +++ 2 files changed, 113 insertions(+), 74 deletions(-) diff --git a/hw/qxl.c b/hw/qxl.c index e17b0e3..7857731 100644 --- a/hw/qxl.c +++ b/hw/qxl.c @@ -23,6 +23,7 @@ #include qemu-queue.h #include monitor.h #include sysemu.h +#include trace.h #include qxl.h @@ -409,7 +410,7 @@ static void interface_attach_worker(QXLInstance *sin, QXLWorker *qxl_worker) { PCIQXLDevice *qxl = container_of(sin, PCIQXLDevice, ssd.qxl); -dprint(qxl, 1, %s:\n, __FUNCTION__); +trace_qxl_interface_attach_worker(); qxl-ssd.worker = qxl_worker; } @@ -417,7 +418,7 @@ static void interface_set_compression_level(QXLInstance *sin, int level) { PCIQXLDevice *qxl = container_of(sin, PCIQXLDevice, ssd.qxl); -dprint(qxl, 1, %s: %d\n, __FUNCTION__, level); +trace_qxl_interface_set_compression_level(level); qxl-shadow_rom.compression_level = cpu_to_le32(level); qxl-rom-compression_level = cpu_to_le32(level); qxl_rom_set_dirty(qxl); @@ -436,7 +437,7 @@ static void interface_get_init_info(QXLInstance *sin, QXLDevInitInfo *info) { PCIQXLDevice *qxl = container_of(sin, PCIQXLDevice, ssd.qxl); -dprint(qxl, 1, %s:\n, __FUNCTION__); +trace_qxl_interface_get_init_info(); info-memslot_gen_bits = MEMSLOT_GENERATION_BITS; info-memslot_id_bits = MEMSLOT_SLOT_BITS; info-num_memslots = NUM_MEMSLOTS; @@ -505,9 +506,10 @@ static int interface_get_command(QXLInstance *sin, struct QXLCommandExt *ext) QXLCommand *cmd; int notify, ret; +trace_qxl_interface_get_command_enter(qxl_mode_to_string(qxl-mode)); + switch (qxl-mode) { case QXL_MODE_VGA: -dprint(qxl, 2, %s: vga\n, __FUNCTION__); ret = false; qemu_mutex_lock(qxl-ssd.lock); if (qxl-ssd.update != NULL) { @@ -518,19 +520,18 @@ static int interface_get_command(QXLInstance *sin, struct QXLCommandExt *ext) } qemu_mutex_unlock(qxl-ssd.lock); if (ret) { -dprint(qxl, 2, %s %s\n, __FUNCTION__, qxl_mode_to_string(qxl-mode)); 
+trace_qxl_interface_get_command_ret(qxl_mode_to_string(qxl-mode)); qxl_log_command(qxl, vga, ext); } return ret; case QXL_MODE_COMPAT: case QXL_MODE_NATIVE: case QXL_MODE_UNDEFINED: -dprint(qxl, 4, %s: %s\n, __FUNCTION__, qxl_mode_to_string(qxl-mode)); ring = qxl-ram-cmd_ring; if (SPICE_RING_IS_EMPTY(ring)) { return false; } -dprint(qxl, 2, %s: %s\n, __FUNCTION__, qxl_mode_to_string(qxl-mode)); +trace_qxl_interface_get_command_ret(qxl_mode_to_string(qxl-mode)); SPICE_RING_CONS_ITEM(ring, cmd); ext-cmd = *cmd; ext-group_id = MEMSLOT_GROUP_GUEST; @@ -592,7 +593,7 @@ static inline void qxl_push_free_res(PCIQXLDevice *d, int flush) } SPICE_RING_PUSH(ring, notify); -dprint(d, 2, free: push %d items, notify %s, ring %d/%d [%d,%d]\n, +trace_qxl_push_free_res( d-num_free_res, notify ? yes : no, ring-prod - ring-cons, ring-num_items, ring-prod, ring-cons); @@ -642,7 +643,7 @@ static void interface_release_resource(QXLInstance *sin, } qxl-last_release = ext.info; qxl-num_free_res++; -dprint(qxl, 3, %4d\r, qxl-num_free_res); +trace_qxl_interface_release_resource(qxl-num_free_res); qxl_push_free_res(qxl, 0); } @@ -716,7 +717,7 @@ static int interface_flush_resources(QXLInstance *sin) PCIQXLDevice *qxl = container_of(sin, PCIQXLDevice, ssd.qxl); int ret; -dprint(qxl, 1, free: guest flush (have %d)\n, qxl-num_free_res); +trace_qxl_interface_flush_resources(qxl-num_free_res); ret = qxl-num_free_res; if (ret) { qxl_push_free_res(qxl, 1); @@ -736,7 +737,7 @@ static void interface_async_complete_io(PCIQXLDevice *qxl, QXLCookie *cookie) qxl-current_async = QXL_UNDEFINED_IO; qemu_mutex_unlock(qxl-async_lock); -dprint(qxl, 2, async_complete: %d (%p) done\n, current_async, cookie); +trace_qxl_interface_async_complete_io(current_async, cookie); if (!cookie) { fprintf(stderr, qxl: %s: error, cookie is NULL\n, __func__); return; @@ -782,11 +783,13 @@ static void interface_update_area_complete(QXLInstance *sin, qemu_mutex_unlock(qxl-ssd.lock); return; } 
+    trace_qxl_interface_update_area_complete(surface_id, dirty->left,
+        dirty->right, dirty->top, dirty->bottom, num_updated_rects);
     if (qxl->num_dirty_rects + num_updated_rects > QXL_NUM_DIRTY_RECTS) {
         /*
          * overflow - treat this as a full update. Not expected to be common.
          */
-        dprint(qxl, 1, "%s: overflow of dirty rects\n", __func__);
+
Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt
On Sun, Mar 11, 2012 at 10:41:32AM -0500, Anthony Liguori wrote:
On 03/11/2012 10:12 AM, Gleb Natapov wrote:
On Sun, Mar 11, 2012 at 09:16:49AM -0500, Anthony Liguori wrote:

If libvirt assumes anything about what kvm actually supports, it is working only by sheer luck.

Well, the simple answer for libvirt is: don't use -nodefconfig, and then it can reuse the CPU definitions (including any that the user adds).

CPU models should be usable even with -nodefconfig. A CPU model is more like a device. By -cpu Nehalem I am saying I want a Nehalem device in my machine.

Let's say we moved CPU definitions to /usr/share/qemu/cpu-models.xml. Obviously, we'd want a command line option to be able to change that location, so we'd introduce -cpu-models PATH. But we want all of our command line options to be settable by the global configuration file, so we would have a cpu-models=PATH entry in the configuration file. But why hard code a path when we can just set the default path in the configuration file? So let's avoid hard coding and just put cpu-models=/usr/share/qemu/cpu-models.xml in the default configuration file.

We have two places where we define cpu models: hardcoded in target-i386/cpuid.c and in target-x86_64.conf. We moved them out to a conf file because this way it is easier to add, update, examine and compare CPU models. But they should still be treated as an essential part of qemu. Given this, I do not see the step above as a logical one. CPU models are not part of the machine config; -cpu Nehalem,-sse,level=3,model=5 is part of the machine config. What if we introduce a way to write devices in Lua? Should -nodefconfig drop devices implemented as Lua scripts too?

But now when libvirt uses -nodefconfig, those models go away. -nodefconfig means start QEMU in the most minimal state possible. You get what you pay for if you use it. We'll have the same problem with machine configuration files. At some point in time, -nodefconfig will make machine models disappear.

--
Gleb.
Re: [Qemu-devel] [PATCH 0/8] Add GTK UI to enable basic accessibility (v2)
On 11.03.2012 19:24, François Revol wrote:
On -10/01/-28163 20:59, Stefan Weil wrote:
On 27.02.2012 00:46, Anthony Liguori wrote:

I realize UIs are the third rail of QEMU development, but over the years I've gotten a lot of feedback from users about our UI. I think everyone struggles with the SDL interface and its lack of discoverability, but it's worse than I think most people realize for users that rely on accessibility tools. [...]

While I do think accessibility is important...

These are solved problems, though, and while we could reinvent all of this ourselves with SDL, we would be crazy if we did. Modern toolkits, like GTK, solve these problems.

GTK itself causes problems because it's not ported, and thus not available, to all platforms QEMU can run on. It's certainly not available on Haiku, at least. Of course, SDL itself is not really a good candidate for adding a11y features, due to its framebuffer-based design...

By using GTK, we can leverage VteTerminal for screen reader integration and font configuration. We can also use GTK's accelerator support to make accelerators configurable (Gnome provides a global accelerator configuration interface).

Hmm, the thing using libvte that uses /tmp to insecurely store terminal backlogs? ;-)

[snip]

As soon as the GTK UI is considered stable and usable, the default could be changed from SDL to GTK.

Due to GTK not being as universally available as SDL, I'd really like not to.

François.

Agreed. I should have been more precise. The default could be changed from SDL to GTK when GTK is available and working (this implies not for Haiku, not for compilation environments without the necessary GTK installation, and also not for Windows as long as the GTK UI freezes QEMU).

Stefan
Re: [Qemu-devel] [PATCH 2/2] Expose tsc deadline timer cpuid to guest
Jan Kiszka wrote:
On 2012-03-09 20:09, Liu, Jinsong wrote:
Jan Kiszka wrote:
On 2012-03-09 19:27, Liu, Jinsong wrote:
Jan Kiszka wrote:
On 2012-03-06 08:49, Liu, Jinsong wrote:

Jan, any comments? I feel somewhat confused about your point 'disable cpuid feature for older machine types by default': are you planning a common approach for this common issue, or are you just asking me for a specific solution for the tsc deadline timer case?

I think a generic solution for this can be as simple as passing a feature exclusion mask to cpu_init. You could simply append a string of -feature1,-feature2 to the cpu model that is specified on creation. And that string could be defined in the compat machine descriptions. Does this make sense?

Jan, to prevent misunderstanding, I elaborate my understanding of your points below (if there is any misunderstanding, please point it out to me):

=====
Your target is to migrate from A (old qemu) to B (new qemu) by
1. at A: qemu-version-A [-cpu whatever] // currently the default machine type is pc-A
2. at B: qemu-version-B -machine pc-A [-cpu whatever,-feature1,-feature2]

B runs the new qemu-version-B (w/ new features 'feature1' and 'feature2'), but when B runs w/ the compat '-machine pc-A', the vm should not see 'feature1' and 'feature2', so the command line appends the string to the cpu model ('-cpu whatever,-feature1,-feature2') to hide the new feature1 and feature2 from the vm. Hence the vm sees the same cpuid features at B as those at A (which means no feature1, no feature2).
=====

If my understanding of your thoughts is right, I think qemu currently satisfies your target; the code path is pc_cpus_init(cpu_model) .. cpu_init(cpu_model) -> cpu_x86_register(*env, cpu_model) -> cpu_x86_find_by_name(*def, cpu_model) // parses '+/- features' and generates the feature masks plus_features... and minus_features... (this is the feature exclusion mask you want). I think your point 'define in the compat machine description' is unnecessary.
The user would have to specify the new feature as exclusions *manually* on the command line if -machine pc-A doesn't inject them *automatically*. So it is necessary to enhance qemu in this regard. ...

You suggested 'append a string of -feature1,-feature2 to the cpu model that is specified on creation' in your last email. Could you tell me another way for the user to exclude features? I only know the qemu command line :-(

I was thinking of something like

diff --git a/hw/boards.h b/hw/boards.h
index 667177d..2bae071 100644
--- a/hw/boards.h
+++ b/hw/boards.h
@@ -28,6 +28,7 @@ typedef struct QEMUMachine {
     int is_default;
     const char *default_machine_opts;
     GlobalProperty *compat_props;
+    const char *compat_cpu_features;
     struct QEMUMachine *next;
 } QEMUMachine;
diff --git a/hw/pc.c b/hw/pc.c
index bb9867b..4d11559 100644
--- a/hw/pc.c
+++ b/hw/pc.c
@@ -949,8 +949,9 @@ static CPUState *pc_new_cpu(const char *cpu_model)
     return env;
 }
 
-void pc_cpus_init(const char *cpu_model)
+void pc_cpus_init(const char *cpu_model, const char *append_features)
 {
+    char *model_and_features;
     int i;
 
     /* init CPUs */
@@ -961,10 +962,13 @@ void pc_cpus_init(const char *cpu_model)
         cpu_model = "qemu32";
 #endif
     }
+    model_and_features = g_strconcat(cpu_model, ",", append_features, NULL);
     for(i = 0; i < smp_cpus; i++) {
-        pc_new_cpu(cpu_model);
+        pc_new_cpu(model_and_features);
     }
+
+    g_free(model_and_features);
 }
 
 void pc_memory_init(MemoryRegion *system_memory,

However, getting machine.compat_cpu_features to pc_cpus_init is rather ugly. And we will have CPU devices with real properties soon. Then the compat feature string could be passed that way, without changing any machine init function. Andreas, do you expect CPU devices to be ready for qemu 1.1? We would need them to pass a feature exclusion mask from machine.compat_props to the (x86) CPU init code.

CPU devices are just another format of the current cpu_model. They do not help with our problem.
Again, the point is: by what method can the feature exclusion mask be generated, if the user does not give a hint manually?

Thanks,
Jinsong

Well, given that introducing some intermediate solution for this would be complex and hacky, and that there is a way to configure tsc_deadline away for old machines, though only an explicit one, I could live with postponing the feature mask until after the CPU device conversion. But the maintainers will have the last word.

Jan
[Qemu-devel] [PATCH] alpha-user: Initialize FPCR with round-to-nearest.
Signed-off-by: Richard Henderson <r...@twiddle.net>
---
 target-alpha/translate.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/target-alpha/translate.c b/target-alpha/translate.c
index 1d2142b..fe78630 100644
--- a/target-alpha/translate.c
+++ b/target-alpha/translate.c
@@ -3513,7 +3513,8 @@ CPUAlphaState * cpu_alpha_init (const char *cpu_model)
 #if defined (CONFIG_USER_ONLY)
     env->ps = PS_USER_MODE;
     cpu_alpha_store_fpcr(env, (FPCR_INVD | FPCR_DZED | FPCR_OVFD
-                               | FPCR_UNFD | FPCR_INED | FPCR_DNOD));
+                               | FPCR_UNFD | FPCR_INED | FPCR_DNOD
+                               | FPCR_DYN_NORMAL));
 #endif
     env->lock_addr = -1;
     env->fen = 1;
-- 
1.7.7.6
[Qemu-devel] [PATCH v2 2/5] qxl/qxl_render.c: add trace events
Signed-off-by: Alon Levy al...@redhat.com --- hw/qxl-render.c | 13 - trace-events|6 ++ 2 files changed, 10 insertions(+), 9 deletions(-) diff --git a/hw/qxl-render.c b/hw/qxl-render.c index 25857f6..74e7ea3 100644 --- a/hw/qxl-render.c +++ b/hw/qxl-render.c @@ -31,11 +31,10 @@ static void qxl_blit(PCIQXLDevice *qxl, QXLRect *rect) return; } if (!qxl-guest_primary.data) { -dprint(qxl, 1, %s: initializing guest_primary.data\n, __func__); +trace_qxl_blit_guest_primary_initialized(); qxl-guest_primary.data = memory_region_get_ram_ptr(qxl-vga.vram); } -dprint(qxl, 2, %s: stride %d, [%d, %d, %d, %d]\n, __func__, -qxl-guest_primary.qxl_stride, +trace_qxl_blit(qxl-guest_primary.qxl_stride, rect-left, rect-right, rect-top, rect-bottom); src = qxl-guest_primary.data; if (qxl-guest_primary.qxl_stride 0) { @@ -107,8 +106,7 @@ static void qxl_render_update_area_unlocked(PCIQXLDevice *qxl) qxl-guest_primary.data = memory_region_get_ram_ptr(qxl-vga.vram); qxl_set_rect_to_surface(qxl, qxl-dirty[0]); qxl-num_dirty_rects = 1; -dprint(qxl, 1, %s: %dx%d, stride %d, bpp %d, depth %d\n, - __FUNCTION__, +trace_qxl_guest_primary_resized( qxl-guest_primary.surface.width, qxl-guest_primary.surface.height, qxl-guest_primary.qxl_stride, @@ -118,8 +116,6 @@ static void qxl_render_update_area_unlocked(PCIQXLDevice *qxl) if (surface-width != qxl-guest_primary.surface.width || surface-height != qxl-guest_primary.surface.height) { if (qxl-guest_primary.qxl_stride 0) { -dprint(qxl, 1, %s: using guest_primary for displaysurface\n, - __func__); qemu_free_displaysurface(vga-ds); qemu_create_displaysurface_from(qxl-guest_primary.surface.width, qxl-guest_primary.surface.height, @@ -127,8 +123,6 @@ static void qxl_render_update_area_unlocked(PCIQXLDevice *qxl) qxl-guest_primary.abs_stride, qxl-guest_primary.data); } else { -dprint(qxl, 1, %s: resizing displaysurface to guest_primary\n, - __func__); qemu_resize_displaysurface(vga-ds, qxl-guest_primary.surface.width, qxl-guest_primary.surface.height); @@ 
-187,6 +181,7 @@ void qxl_render_update_area_bh(void *opaque) void qxl_render_update_area_done(PCIQXLDevice *qxl, QXLCookie *cookie) { qemu_mutex_lock(qxl-ssd.lock); +trace_qxl_render_update_area_done(cookie); qemu_bh_schedule(qxl-update_area_bh); qxl-render_update_cookie_num--; qemu_mutex_unlock(qxl-ssd.lock); diff --git a/trace-events b/trace-events index 0853a1b..a66aee8 100644 --- a/trace-events +++ b/trace-events @@ -712,3 +712,9 @@ qxl_post_load_enter(void) qxl_post_load_restore_mode(const char *mode) %s qxl_post_load_exit(void) qxl_interface_update_area_complete(uint32_t surface_id, uint32_t dirty_left, uint32_t dirty_right, uint32_t dirty_top, uint32_t dirty_bottom, uint32_t num_updated_rects) surface=%d [%d,%d,%d,%d] #=%d + +# hw/qxl-render.c +qxl_blit_guest_primary_initialized(void) +qxl_blit(int32_t stride, int32_t left, int32_t right, int32_t top, int32_t bottom) stride=%d [%d, %d, %d, %d] +qxl_guest_primary_resized(int32_t width, int32_t height, int32_t stride, int32_t bytes_pp, int32_t bits_pp) %dx%d, stride %d, bpp %d, depth %d +qxl_render_update_area_done(void *cookie) %p -- 1.7.9.1
[Qemu-devel] [PATCH v2 1/5] qxl: switch qxl.c to trace-events
dprint is still used for qxl_init_common one time prints. Signed-off-by: Alon Levy al...@redhat.com --- hw/qxl.c | 140 +++-- trace-events | 47 +++ 2 files changed, 113 insertions(+), 74 deletions(-) diff --git a/hw/qxl.c b/hw/qxl.c index e17b0e3..7857731 100644 --- a/hw/qxl.c +++ b/hw/qxl.c @@ -23,6 +23,7 @@ #include qemu-queue.h #include monitor.h #include sysemu.h +#include trace.h #include qxl.h @@ -409,7 +410,7 @@ static void interface_attach_worker(QXLInstance *sin, QXLWorker *qxl_worker) { PCIQXLDevice *qxl = container_of(sin, PCIQXLDevice, ssd.qxl); -dprint(qxl, 1, %s:\n, __FUNCTION__); +trace_qxl_interface_attach_worker(); qxl-ssd.worker = qxl_worker; } @@ -417,7 +418,7 @@ static void interface_set_compression_level(QXLInstance *sin, int level) { PCIQXLDevice *qxl = container_of(sin, PCIQXLDevice, ssd.qxl); -dprint(qxl, 1, %s: %d\n, __FUNCTION__, level); +trace_qxl_interface_set_compression_level(level); qxl-shadow_rom.compression_level = cpu_to_le32(level); qxl-rom-compression_level = cpu_to_le32(level); qxl_rom_set_dirty(qxl); @@ -436,7 +437,7 @@ static void interface_get_init_info(QXLInstance *sin, QXLDevInitInfo *info) { PCIQXLDevice *qxl = container_of(sin, PCIQXLDevice, ssd.qxl); -dprint(qxl, 1, %s:\n, __FUNCTION__); +trace_qxl_interface_get_init_info(); info-memslot_gen_bits = MEMSLOT_GENERATION_BITS; info-memslot_id_bits = MEMSLOT_SLOT_BITS; info-num_memslots = NUM_MEMSLOTS; @@ -505,9 +506,10 @@ static int interface_get_command(QXLInstance *sin, struct QXLCommandExt *ext) QXLCommand *cmd; int notify, ret; +trace_qxl_interface_get_command_enter(qxl_mode_to_string(qxl-mode)); + switch (qxl-mode) { case QXL_MODE_VGA: -dprint(qxl, 2, %s: vga\n, __FUNCTION__); ret = false; qemu_mutex_lock(qxl-ssd.lock); if (qxl-ssd.update != NULL) { @@ -518,19 +520,18 @@ static int interface_get_command(QXLInstance *sin, struct QXLCommandExt *ext) } qemu_mutex_unlock(qxl-ssd.lock); if (ret) { -dprint(qxl, 2, %s %s\n, __FUNCTION__, qxl_mode_to_string(qxl-mode)); 
+trace_qxl_interface_get_command_ret(qxl_mode_to_string(qxl-mode)); qxl_log_command(qxl, vga, ext); } return ret; case QXL_MODE_COMPAT: case QXL_MODE_NATIVE: case QXL_MODE_UNDEFINED: -dprint(qxl, 4, %s: %s\n, __FUNCTION__, qxl_mode_to_string(qxl-mode)); ring = qxl-ram-cmd_ring; if (SPICE_RING_IS_EMPTY(ring)) { return false; } -dprint(qxl, 2, %s: %s\n, __FUNCTION__, qxl_mode_to_string(qxl-mode)); +trace_qxl_interface_get_command_ret(qxl_mode_to_string(qxl-mode)); SPICE_RING_CONS_ITEM(ring, cmd); ext-cmd = *cmd; ext-group_id = MEMSLOT_GROUP_GUEST; @@ -592,7 +593,7 @@ static inline void qxl_push_free_res(PCIQXLDevice *d, int flush) } SPICE_RING_PUSH(ring, notify); -dprint(d, 2, free: push %d items, notify %s, ring %d/%d [%d,%d]\n, +trace_qxl_push_free_res( d-num_free_res, notify ? yes : no, ring-prod - ring-cons, ring-num_items, ring-prod, ring-cons); @@ -642,7 +643,7 @@ static void interface_release_resource(QXLInstance *sin, } qxl-last_release = ext.info; qxl-num_free_res++; -dprint(qxl, 3, %4d\r, qxl-num_free_res); +trace_qxl_interface_release_resource(qxl-num_free_res); qxl_push_free_res(qxl, 0); } @@ -716,7 +717,7 @@ static int interface_flush_resources(QXLInstance *sin) PCIQXLDevice *qxl = container_of(sin, PCIQXLDevice, ssd.qxl); int ret; -dprint(qxl, 1, free: guest flush (have %d)\n, qxl-num_free_res); +trace_qxl_interface_flush_resources(qxl-num_free_res); ret = qxl-num_free_res; if (ret) { qxl_push_free_res(qxl, 1); @@ -736,7 +737,7 @@ static void interface_async_complete_io(PCIQXLDevice *qxl, QXLCookie *cookie) qxl-current_async = QXL_UNDEFINED_IO; qemu_mutex_unlock(qxl-async_lock); -dprint(qxl, 2, async_complete: %d (%p) done\n, current_async, cookie); +trace_qxl_interface_async_complete_io(current_async, cookie); if (!cookie) { fprintf(stderr, qxl: %s: error, cookie is NULL\n, __func__); return; @@ -782,11 +783,13 @@ static void interface_update_area_complete(QXLInstance *sin, qemu_mutex_unlock(qxl-ssd.lock); return; } 
+    trace_qxl_interface_update_area_complete(surface_id, dirty->left,
+        dirty->right, dirty->top, dirty->bottom, num_updated_rects);
     if (qxl->num_dirty_rects + num_updated_rects > QXL_NUM_DIRTY_RECTS) {
         /*
          * overflow - treat this as a full update. Not expected to be common.
          */
-        dprint(qxl, 1, "%s: overflow of dirty rects\n", __func__);
+
[Qemu-devel] [PATCH v2 5/5] qxl: screendump: use provided Monitor
This fixes the hmp loose end by suspending the monitor and resuming it after ppm_save has been called. For qmp this is redundant, and actually wrong, since a qmp command ends up suspending the hmp monitor and then resuming it. On the other hand I'm not sure how much of a problem this is. The real problem is that qmp users still end up with a completed screendump before ppm_save has completed. Signed-off-by: Alon Levy al...@redhat.com --- hw/qxl-render.c| 12 +--- hw/qxl.c |4 ++-- hw/qxl.h |2 +- ui/spice-display.h |1 + 4 files changed, 13 insertions(+), 6 deletions(-) diff --git a/hw/qxl-render.c b/hw/qxl-render.c index b281766..16340d0 100644 --- a/hw/qxl-render.c +++ b/hw/qxl-render.c @@ -154,7 +154,8 @@ typedef struct QXLPPMSaveBHData { static void qxl_render_ppm_save_bh(void *opaque); -static QXLCookie *qxl_cookie_render_new(PCIQXLDevice *qxl, const char *filename) +static QXLCookie *qxl_cookie_render_new(PCIQXLDevice *qxl, const char *filename, +Monitor *mon) { QXLPPMSaveBHData *ppm_save_bh_data; QEMUBH *ppm_save_bh; @@ -169,6 +170,8 @@ static QXLCookie *qxl_cookie_render_new(PCIQXLDevice *qxl, const char *filename) ppm_save_bh = qemu_bh_new(qxl_render_ppm_save_bh, ppm_save_bh_data); cookie-u.render.filename = g_strdup(filename); cookie-u.render.ppm_save_bh = ppm_save_bh; +cookie-u.render.mon = mon; +monitor_suspend(mon); } return cookie; } @@ -176,6 +179,9 @@ static QXLCookie *qxl_cookie_render_new(PCIQXLDevice *qxl, const char *filename) static void qxl_cookie_render_free(PCIQXLDevice *qxl, QXLCookie *cookie) { g_free(cookie-u.render.filename); +if (cookie-u.render.mon) { +monitor_resume(cookie-u.render.mon); +} g_free(cookie); --qxl-render_update_cookie_num; } @@ -204,7 +210,7 @@ static void qxl_render_ppm_save_bh(void *opaque) * callbacks are called by spice_server thread, defering to bh called from the * io thread. 
*/ -void qxl_render_update(PCIQXLDevice *qxl, const char *filename) +void qxl_render_update(PCIQXLDevice *qxl, const char *filename, Monitor *mon) { QXLCookie *cookie; @@ -223,7 +229,7 @@ void qxl_render_update(PCIQXLDevice *qxl, const char *filename) qxl-guest_primary.commands = 0; qxl-render_update_cookie_num++; qemu_mutex_unlock(qxl-ssd.lock); -cookie = qxl_cookie_render_new(qxl, filename); +cookie = qxl_cookie_render_new(qxl, filename, mon); qxl_spice_update_area(qxl, 0, cookie-u.render.area, NULL, 0, 1 /* clear_dirty_region */, QXL_ASYNC, cookie); } diff --git a/hw/qxl.c b/hw/qxl.c index d21b508..fae5be8 100644 --- a/hw/qxl.c +++ b/hw/qxl.c @@ -1471,7 +1471,7 @@ static void qxl_hw_update(void *opaque) break; case QXL_MODE_COMPAT: case QXL_MODE_NATIVE: -qxl_render_update(qxl, NULL); +qxl_render_update(qxl, NULL, NULL); break; default: break; @@ -1495,7 +1495,7 @@ static void qxl_hw_screen_dump(void *opaque, const char *filename, bool cswitch, switch (qxl-mode) { case QXL_MODE_COMPAT: case QXL_MODE_NATIVE: -qxl_render_update(qxl, filename); +qxl_render_update(qxl, filename, mon); break; case QXL_MODE_VGA: vga-screen_dump(vga, filename, cswitch, mon); diff --git a/hw/qxl.h b/hw/qxl.h index 417ab28..219e149 100644 --- a/hw/qxl.h +++ b/hw/qxl.h @@ -147,7 +147,7 @@ void qxl_log_command(PCIQXLDevice *qxl, const char *ring, QXLCommandExt *ext); /* qxl-render.c */ void qxl_render_resize(PCIQXLDevice *qxl); -void qxl_render_update(PCIQXLDevice *qxl, const char *filename); +void qxl_render_update(PCIQXLDevice *qxl, const char *filename, Monitor *mon); void qxl_render_cursor(PCIQXLDevice *qxl, QXLCommandExt *ext); void qxl_render_update_area_done(PCIQXLDevice *qxl, QXLCookie *cookie); void qxl_render_update_area_bh(void *opaque); diff --git a/ui/spice-display.h b/ui/spice-display.h index ec1fc24..2d01f51 100644 --- a/ui/spice-display.h +++ b/ui/spice-display.h @@ -64,6 +64,7 @@ typedef struct QXLCookie { int redraw; char *filename; QEMUBH *ppm_save_bh; +Monitor *mon; } 
render; } u; } QXLCookie; -- 1.7.9.1
[Qemu-devel] [PATCH v2 3/5] qxl-render: call ppm_save on bh
With this change ppm_save is called after rendering, and not before. There are two loose ends: hmp: monitor will be active before ppm_save is complete. qmp: return will be emitted before ppm_save is complete. Signed-off-by: Alon Levy al...@redhat.com --- hw/qxl-render.c| 76 +++- hw/qxl.c |5 +-- hw/qxl.h |2 +- trace-events |2 + ui/spice-display.h |2 + 5 files changed, 76 insertions(+), 11 deletions(-) diff --git a/hw/qxl-render.c b/hw/qxl-render.c index 74e7ea3..b281766 100644 --- a/hw/qxl-render.c +++ b/hw/qxl-render.c @@ -19,6 +19,7 @@ * along with this program; if not, see http://www.gnu.org/licenses/. */ +#include console.h #include qxl.h static void qxl_blit(PCIQXLDevice *qxl, QXLRect *rect) @@ -142,12 +143,68 @@ static void qxl_render_update_area_unlocked(PCIQXLDevice *qxl) } /* + * struct used just for ppm save bh. We don't actually support multiple qxl + * screendump yet, but a) we will, and b) exporting qxl0 from qxl.c looks + * uglier imo. + */ +typedef struct QXLPPMSaveBHData { +PCIQXLDevice *qxl; +QXLCookie *cookie; +} QXLPPMSaveBHData; + +static void qxl_render_ppm_save_bh(void *opaque); + +static QXLCookie *qxl_cookie_render_new(PCIQXLDevice *qxl, const char *filename) +{ +QXLPPMSaveBHData *ppm_save_bh_data; +QEMUBH *ppm_save_bh; +QXLCookie *cookie = qxl_cookie_new(QXL_COOKIE_TYPE_RENDER_UPDATE_AREA, + 0); + +qxl_set_rect_to_surface(qxl, cookie-u.render.area); +if (filename) { +ppm_save_bh_data = g_malloc0(sizeof(*ppm_save_bh_data)); +ppm_save_bh_data-qxl = qxl; +ppm_save_bh_data-cookie = cookie; +ppm_save_bh = qemu_bh_new(qxl_render_ppm_save_bh, ppm_save_bh_data); +cookie-u.render.filename = g_strdup(filename); +cookie-u.render.ppm_save_bh = ppm_save_bh; +} +return cookie; +} + +static void qxl_cookie_render_free(PCIQXLDevice *qxl, QXLCookie *cookie) +{ +g_free(cookie-u.render.filename); +g_free(cookie); +--qxl-render_update_cookie_num; +} + +static void qxl_render_ppm_save_bh(void *opaque) +{ +QXLPPMSaveBHData *data = opaque; +PCIQXLDevice *qxl = 
data-qxl; +QXLCookie *cookie = data-cookie; +QEMUBH *bh = cookie-u.render.ppm_save_bh; + +qemu_mutex_lock(qxl-ssd.lock); +trace_qxl_render_ppm_save_bh( + qxl-ssd.ds-surface-data, qxl-guest_primary.data); +qxl_render_update_area_unlocked(qxl); +ppm_save(cookie-u.render.filename, qxl-ssd.ds-surface); +qxl_cookie_render_free(qxl, cookie); +qemu_mutex_unlock(qxl-ssd.lock); +g_free(data); +qemu_bh_delete(bh); +} + +/* * use ssd.lock to protect render_update_cookie_num. * qxl_render_update is called by io thread or vcpu thread, and the completion * callbacks are called by spice_server thread, defering to bh called from the * io thread. */ -void qxl_render_update(PCIQXLDevice *qxl) +void qxl_render_update(PCIQXLDevice *qxl, const char *filename) { QXLCookie *cookie; @@ -155,6 +212,10 @@ void qxl_render_update(PCIQXLDevice *qxl) if (!runstate_is_running() || !qxl-guest_primary.commands) { qxl_render_update_area_unlocked(qxl); +if (filename) { +trace_qxl_render_update_screendump_no_update(); +ppm_save(filename, qxl-ssd.ds-surface); +} qemu_mutex_unlock(qxl-ssd.lock); return; } @@ -162,9 +223,7 @@ void qxl_render_update(PCIQXLDevice *qxl) qxl-guest_primary.commands = 0; qxl-render_update_cookie_num++; qemu_mutex_unlock(qxl-ssd.lock); -cookie = qxl_cookie_new(QXL_COOKIE_TYPE_RENDER_UPDATE_AREA, -0); -qxl_set_rect_to_surface(qxl, cookie-u.render.area); +cookie = qxl_cookie_render_new(qxl, filename); qxl_spice_update_area(qxl, 0, cookie-u.render.area, NULL, 0, 1 /* clear_dirty_region */, QXL_ASYNC, cookie); } @@ -182,10 +241,13 @@ void qxl_render_update_area_done(PCIQXLDevice *qxl, QXLCookie *cookie) { qemu_mutex_lock(qxl-ssd.lock); trace_qxl_render_update_area_done(cookie); -qemu_bh_schedule(qxl-update_area_bh); -qxl-render_update_cookie_num--; +if (cookie-u.render.filename) { +qemu_bh_schedule(cookie-u.render.ppm_save_bh); +} else { +qemu_bh_schedule(qxl-update_area_bh); +qxl_cookie_render_free(qxl, cookie); +} qemu_mutex_unlock(qxl-ssd.lock); -g_free(cookie); } static 
QEMUCursor *qxl_cursor(PCIQXLDevice *qxl, QXLCursor *cursor) diff --git a/hw/qxl.c b/hw/qxl.c index 7857731..bcfd661 100644 --- a/hw/qxl.c +++ b/hw/qxl.c @@ -1471,7 +1471,7 @@ static void qxl_hw_update(void *opaque) break; case QXL_MODE_COMPAT: case QXL_MODE_NATIVE: -qxl_render_update(qxl); +qxl_render_update(qxl, NULL); break; default: break; @@ -1494,8 +1494,7 @@ static void
[Qemu-devel] [PATCH v2 4/5] console: pass Monitor to vga_hw_screen_dump/hw_vga_dump
Passes the Monitor ptr to the screendump implementation to allow for monitor suspend and resume for qxl to fix the screendump regression. The graphics_console_init signature change required touching every implementer of screen_dump. There is no change other than an added parameter. qxl will make use of it in the next patch. compiles with ./configure Signed-off-by: Alon Levy al...@redhat.com --- console.c |4 ++-- console.h |5 +++-- hw/blizzard.c |2 +- hw/g364fb.c |3 ++- hw/omap_dss.c |4 +++- hw/omap_lcdc.c |3 ++- hw/qxl.c|5 +++-- hw/sm501.c |4 ++-- hw/tcx.c| 12 hw/vga.c|6 -- hw/vmware_vga.c |5 +++-- monitor.c |2 +- 12 files changed, 34 insertions(+), 21 deletions(-) diff --git a/console.c b/console.c index 6a463f5..3e386fc 100644 --- a/console.c +++ b/console.c @@ -173,7 +173,7 @@ void vga_hw_invalidate(void) active_console-hw_invalidate(active_console-hw); } -void vga_hw_screen_dump(const char *filename) +void vga_hw_screen_dump(const char *filename, Monitor *mon) { TextConsole *previous_active_console; bool cswitch; @@ -187,7 +187,7 @@ void vga_hw_screen_dump(const char *filename) console_select(0); } if (consoles[0] consoles[0]-hw_screen_dump) { -consoles[0]-hw_screen_dump(consoles[0]-hw, filename, cswitch); +consoles[0]-hw_screen_dump(consoles[0]-hw, filename, cswitch, mon); } else { error_report(screen dump not implemented); } diff --git a/console.h b/console.h index 4334db5..0d2cf30 100644 --- a/console.h +++ b/console.h @@ -343,7 +343,8 @@ static inline void console_write_ch(console_ch_t *dest, uint32_t ch) typedef void (*vga_hw_update_ptr)(void *); typedef void (*vga_hw_invalidate_ptr)(void *); -typedef void (*vga_hw_screen_dump_ptr)(void *, const char *, bool cswitch); +typedef void (*vga_hw_screen_dump_ptr)(void *, const char *, bool cswitch, + Monitor *mon); typedef void (*vga_hw_text_update_ptr)(void *, console_ch_t *); DisplayState *graphic_console_init(vga_hw_update_ptr update, @@ -354,7 +355,7 @@ DisplayState *graphic_console_init(vga_hw_update_ptr update, void 
vga_hw_update(void); void vga_hw_invalidate(void); -void vga_hw_screen_dump(const char *filename); +void vga_hw_screen_dump(const char *filename, Monitor *mon); void vga_hw_text_update(console_ch_t *chardata); int is_graphic_console(void); diff --git a/hw/blizzard.c b/hw/blizzard.c index c7d844d..8ccea7f 100644 --- a/hw/blizzard.c +++ b/hw/blizzard.c @@ -933,7 +933,7 @@ static void blizzard_update_display(void *opaque) } static void blizzard_screen_dump(void *opaque, const char *filename, - bool cswitch) + bool cswitch, Monitor *mon) { BlizzardState *s = (BlizzardState *) opaque; diff --git a/hw/g364fb.c b/hw/g364fb.c index 3a0b68f..f89000c 100644 --- a/hw/g364fb.c +++ b/hw/g364fb.c @@ -289,7 +289,8 @@ static void g364fb_reset(G364State *s) g364fb_invalidate_display(s); } -static void g364fb_screen_dump(void *opaque, const char *filename, bool cswitch) +static void g364fb_screen_dump(void *opaque, const char *filename, bool cswitch, + Monitor *mon) { G364State *s = opaque; int y, x; diff --git a/hw/omap_dss.c b/hw/omap_dss.c index 86ed6ea..b4a1a93 100644 --- a/hw/omap_dss.c +++ b/hw/omap_dss.c @@ -1072,7 +1072,9 @@ struct omap_dss_s *omap_dss_init(struct omap_target_agent_s *ta, #if 0 s-state = graphic_console_init(omap_update_display, -omap_invalidate_display, omap_screen_dump, s); +omap_invalidate_display, +omap_screen_dump, +NULL, s); #endif return s; diff --git a/hw/omap_lcdc.c b/hw/omap_lcdc.c index f172093..ed2325d 100644 --- a/hw/omap_lcdc.c +++ b/hw/omap_lcdc.c @@ -264,7 +264,8 @@ static int ppm_save(const char *filename, uint8_t *data, return 0; } -static void omap_screen_dump(void *opaque, const char *filename, bool cswitch) +static void omap_screen_dump(void *opaque, const char *filename, bool cswitch, + Monitor *mon) { struct omap_lcd_panel_s *omap_lcd = opaque; if (cswitch) { diff --git a/hw/qxl.c b/hw/qxl.c index bcfd661..d21b508 100644 --- a/hw/qxl.c +++ b/hw/qxl.c @@ -1486,7 +1486,8 @@ static void qxl_hw_invalidate(void *opaque) vga-invalidate(vga); 
} -static void qxl_hw_screen_dump(void *opaque, const char *filename, bool cswitch) +static void qxl_hw_screen_dump(void *opaque, const char *filename, bool cswitch, + Monitor *mon) { PCIQXLDevice *qxl = opaque; VGACommonState *vga = qxl-vga; @@ -1497,7 +1498,7 @@ static void qxl_hw_screen_dump(void *opaque, const char *filename, bool
[Qemu-devel] [PATCH v2 0/5] fix qxl screendump using monitor_suspend
v2 changes: rearranged and split the last patch: the console change to add
Monitor to vga_hw_screen_dump can be moved past the qxl change, making it
easier to see the changes, and adding an intermediate point where ppm_save
happens after the update, but the do_screen_dump returns before ppm_save.

The title of the patchset is wrong: only hmp is fixed, but qmp is now broken
in a different way: there is no file saved when it expects it. The solution
to this can be a change to libvirt to use the hmp command for screendump
when a qxl device is used, until the QAPI async monitor commands change
lands.

This patchset starts and ends with trace event additions that make it easier
to see the change. It applies on top of
http://lists.gnu.org/archive/html/qemu-devel/2012-03/msg01784.html due to
trace-events.

The problem addressed by this patchset is that after recent fixes (81fb6f)
screendump with a qxl device in native mode saves a stale screen dump. The
solution is to use monitor_suspend and monitor_release in qxl's
implementation. This is done by:

1. introducing an extra parameter to vga_hw_screen_dump / hw_vga_dump
   (console: pass Monitor to vga_hw_screen_dump/hw_vga_dump)
2. using it in qxl via a bh
   (qxl-render: call ppm_save on bh)

Additional patches add trace events to qxl and qxl_render, making it easy to
see the difference.

events setup: (using stderr backend)

(qemu) trace-event ppm_save on
(qemu) trace-event qxl* on
(qemu) trace-event qxl_interface_get_command_enter off
(qemu) trace-event qxl_interface_release_resource off
(qemu) trace-event qxl_interface_get_command_ret off

before: ppm_save done before update:

(qemu) screendump /tmp/a.ppm
ppm_save /tmp/a.ppm surface=0x7fc0267b3ad0
qxl_interface_update_area_complete surface=0 [152,160,464,480] #=1
qxl_interface_update_area_complete_schedule_bh #dirty=1
qxl_render_update_area_done 0x7fc02b603db0
(qemu) qxl_blit stride=-2560 [152, 160, 464, 480]

after:

(qemu) screendump /tmp/a.ppm
qxl_interface_update_area_complete surface=0 [152,160,464,480] #=1
qxl_interface_update_area_complete_schedule_bh #dirty=1
qxl_render_update_area_done 0x7f407af72210
qxl_render_ppm_save_bh 0x7f407f845b60 (primary 0x7f401bc0)
qxl_blit stride=-2560 [152, 160, 464, 480]
ppm_save /tmp/a.ppm surface=0x7f4077204ad0
(qemu)

Note: This doesn't address a possible libvirt problem that was mentioned at
length before, but since it has not been reproduced it will be fixed when it
is. Meanwhile other users like autotest will be fixed by this patch (by fix
I mean screendump will produce the correct output).

Alon Levy (5):
  qxl: switch qxl.c to trace-events
  qxl/qxl_render.c: add trace events
  qxl-render: call ppm_save on bh
  console: pass Monitor to vga_hw_screen_dump/hw_vga_dump
  qxl: screendump: use provided Monitor

 console.c          |   4 +-
 console.h          |   5 +-
 hw/blizzard.c      |   2 +-
 hw/g364fb.c        |   3 +-
 hw/omap_dss.c      |   4 +-
 hw/omap_lcdc.c     |   3 +-
 hw/qxl-render.c    |  95 +++--
 hw/qxl.c           | 150 ---
 hw/qxl.h           |   2 +-
 hw/sm501.c         |   4 +-
 hw/tcx.c           |  12 +++--
 hw/vga.c           |   6 ++-
 hw/vmware_vga.c    |   5 +-
 monitor.c          |   2 +-
 trace-events       |  55 +++
 ui/spice-display.h |   3 +
 16 files changed, 240 insertions(+), 115 deletions(-)

-- 
1.7.9.1
Re: [Qemu-devel] [PATCH 0/4] fix qxl screendump using monitor_suspend
On Sun, Mar 11, 2012 at 06:39:33PM +0200, Alon Levy wrote:

This patchset starts and ends with trace event additions that make it easier
to see the change.

Self NACK to v1, see v2 on the same thread.

It applies on top of
http://lists.gnu.org/archive/html/qemu-devel/2012-03/msg01784.html due to
trace-events.

The problem addressed by this patchset is that after recent fixes (81fb6f)
screendump with a qxl device in native mode saves a stale screen dump. The
solution is to use monitor_suspend and monitor_release in qxl's
implementation. This is done by:

1. introducing an extra parameter to vga_hw_screen_dump / hw_vga_dump
   (console: pass Monitor to vga_hw_screen_dump/hw_vga_dump)
2. using it in qxl via a bh
   (qxl-render: call ppm_save on bh)

Additional patches add trace events to qxl and qxl_render, making it easy to
see the difference.

events setup: (using stderr backend)

(qemu) trace-event ppm_save on
(qemu) trace-event qxl* on
(qemu) trace-event qxl_interface_get_command_enter off
(qemu) trace-event qxl_interface_release_resource off
(qemu) trace-event qxl_interface_get_command_ret off

before: ppm_save done before update:

(qemu) screendump /tmp/a.ppm
ppm_save /tmp/a.ppm surface=0x7fc0267b3ad0
qxl_interface_update_area_complete surface=0 [152,160,464,480] #=1
qxl_interface_update_area_complete_schedule_bh #dirty=1
qxl_render_update_area_done 0x7fc02b603db0
(qemu) qxl_blit stride=-2560 [152, 160, 464, 480]

after:

(qemu) screendump /tmp/a.ppm
qxl_interface_update_area_complete surface=0 [152,160,464,480] #=1
qxl_interface_update_area_complete_schedule_bh #dirty=1
qxl_render_update_area_done 0x7f407af72210
qxl_render_ppm_save_bh 0x7f407f845b60 (primary 0x7f401bc0)
qxl_blit stride=-2560 [152, 160, 464, 480]
ppm_save /tmp/a.ppm surface=0x7f4077204ad0
(qemu)

Note: This doesn't address a possible libvirt problem that was mentioned at
length before, but since it has not been reproduced it will be fixed when it
is. Meanwhile other users like autotest will be fixed by this patch (by fix
I mean screendump will produce the correct output).

Alon Levy (4):
  qxl: switch qxl.c to trace-events
  qxl/qxl_render.c: add trace events
  console: pass Monitor to vga_hw_screen_dump/hw_vga_dump
  qxl-render: call ppm_save on bh

 console.c          |   4 +-
 console.h          |   5 +-
 hw/blizzard.c      |   2 +-
 hw/g364fb.c        |   3 +-
 hw/omap_dss.c      |   4 +-
 hw/omap_lcdc.c     |   3 +-
 hw/qxl-render.c    |  95 +++--
 hw/qxl.c           | 150 ---
 hw/qxl.h           |   2 +-
 hw/sm501.c         |   4 +-
 hw/tcx.c           |  12 +++--
 hw/vga.c           |   6 ++-
 hw/vmware_vga.c    |   5 +-
 monitor.c          |   2 +-
 trace-events       |  55 +++
 ui/spice-display.h |   3 +
 16 files changed, 240 insertions(+), 115 deletions(-)

-- 
1.7.9.1
[Qemu-devel] [PATCH] Tracing documentation changes
I was trying out the tracing feature of QEMU after checking out the git tree
at git://git.qemu.org/qemu.git, and managed to generate some traces, but the
following changes to the documentation were needed in order to generate
traces successfully.

Some comments: qemu-system-i386 was used because the present git tree does
not generate any qemu binary at all.

Comments?

-- 
Regards,
Peter Teoh

diff --git a/docs/tracing.txt b/docs/tracing.txt
index ea29f2c..ca5022a 100644
--- a/docs/tracing.txt
+++ b/docs/tracing.txt
@@ -9,7 +9,7 @@ for debugging, profiling, and observing execution.
 
 1. Build with the 'simple' trace backend:
 
-./configure --trace-backend=simple
+./configure --enable-trace-backend=simple
 make
 
 2. Create a file with the events you want to trace:
@@ -19,11 +19,19 @@ for debugging, profiling, and observing execution.
 
 3. Run the virtual machine to produce a trace file:
 
-qemu -trace events=/tmp/events ... # your normal QEMU invocation
+qemu-system-i386 -trace events=/tmp/events ... # your normal QEMU invocation
+
+ For example:
+
+qemu-system-i386 -trace events=/tmp/events -hda ./linux-0.2.img -kernel ./vmlinuz-2.6.35-22-generic -append root=/dev/sda -initrd ./initrd.img-2.6.35-22-generic
+
+where linux-0.2.img is the dd image containing the root filesystem, vmlinuz-* is the kernel file, and initrd-* is the initrd file.
 
 4. Pretty-print the binary trace file:
 
-./simpletrace.py trace-events trace-*
+./scripts/simpletrace.py trace-events trace-1958
+
+ where trace-1958 is one of the local files produced from earlier tracing in Step 3.
 
 == Trace events ==
[Qemu-devel] seamless migration with spice
Hi,

We would like to implement seamless migration for Spice, i.e., keeping the currently opened spice client session valid after migration. Today, the spice client establishes the connection to the destination before migration starts, and when migration completes, the client's session is moved to the destination, but all the session data is reset.

We face two main challenges when implementing seamless migration:

(1) The Spice client must establish the connection to the destination before the spice password expires. However, during migration, the qemu main loop is not processed, and when migration completes, the password might have already expired. Today we solve this with the async command client_migrate_info, which is expected to be called before migration starts. The command is completed once the spice client has connected to the destination (or a timeout). Since async monitor commands are no longer supported, we are looking for a new solution. The straightforward solution would be to process the main loop on the destination side during migration.

(2) In order to restore the source-client spice session on the destination, we need to pass data from the source to the destination. Examples of such data: in-flight copy-paste data, in-flight usb data. We want to pass the data from the source spice server to the destination, via the Spice client. This introduces a possible race: after migration completes, the source qemu can be killed before the spice server completes transferring the migration data to the client.

Possible solutions:
- Have async migration state notifiers. The migration state will change after all the notifiers' completion callbacks are called.
- libvirt will wait for a qmp event corresponding to spice completing its migration, and only then kill the source qemu process.

Any thoughts?

Thanks,
Yonit.
Re: [Qemu-devel] [PATCH 0/8] Add GTK UI to enable basic accessibility (v2)
Stefan Weil wrote:
On 27.02.2012 00:46, Anthony Liguori wrote:

I realize UIs are the third rail of QEMU development, but over the years I've gotten a lot of feedback from users about our UI. I think everyone struggles with the SDL interface and its lack of discoverability, but it's worse than I think most people realize for users that rely on accessibility tools.

[...]

While I do think accessibility is important... These are solved problems though, and while we could reinvent all of this ourselves with SDL, we would be crazy if we did. Modern toolkits, like GTK, solve these problems.

GTK itself causes problems, because it's not ported, and thus not available, to all platforms QEMU can run on. It's certainly not available on Haiku at least. Of course, SDL itself is not really a good candidate to add a11y features, due to its framebuffer-based design...

By using GTK, we can leverage VteTerminal for screen reader integration and font configuration. We can also use GTK's accelerator support to make accelerators configurable (Gnome provides a global accelerator configuration interface).

Hmm, the thing using libvte that uses /tmp to insecurely store terminal backlogs? ;-)

[snip]

As soon as the GTK UI is considered stable and usable, the default could be changed from SDL to GTK.

Due to GTK not being as universally available as SDL, I'd really like not to.

François.
Re: [Qemu-devel] seamless migration with spice
Hi.

On 03/11/2012 05:36 PM, Anthony Liguori wrote:
On 03/11/2012 10:25 AM, Alon Levy wrote:
On Sun, Mar 11, 2012 at 09:18:17AM -0500, Anthony Liguori wrote:
On 03/11/2012 08:16 AM, Yonit Halperin wrote:

Hi, We would like to implement seamless migration for Spice, i.e., keeping the currently opened spice client session valid after migration. Today, the spice client establishes the connection to the destination before migration starts, and when migration completes, the client's session is moved to the destination, but all the session data is being reset. We face 2 main challenges when coming to implement seamless migration: (1) Spice client must establish the connection to the destination before the spice password expires. However, during migration, qemu main loop is not processed, and when migration completes, the password might have already expired. Today we solve this by the async command client_migrate_info, which is expected to be called before migration starts. The command is completed once spice client has connected to the destination (or a timeout). Since async monitor commands are no longer supported, we are looking for a new solution.

We need to fix async monitor commands. Luiz sent a note out to qemu-devel recently on this topic. I'm not sure we'll get there for 1.1, but if we do a 3-month release cycle for 1.2, then that's a pretty reasonable target IMHO.

What about the second part? It's independent of the async issue.

Isn't this a client problem? The client has this state, no?

No, part of the data is server specific.

If the state is stored in the server, wouldn't it be marshaled as part of the server's migration state?

We currently don't restore the server state. That is the problem we want to solve.

I meant that the server state can be marshaled from the source to the client, and from the client to the destination. The client serves as the mediator. Another option that we thought about was using save/load vmstate.

Regards,
Yonit.
I read that as the client needs to marshal its own local state in the session and restore it in the new session.

Regards,

Anthony Liguori

The straightforward solution would be to process the main loop on the destination side during migration.

(2) In order to restore the source-client spice session in the destination, we need to pass data from the source to the destination. Examples of such data: in-flight copy-paste data, in-flight usb data. We want to pass the data from the source spice server to the destination, via the Spice client. This introduces a possible race: after migration completes, the source qemu can be killed before the spice-server completes transferring the migration data to the client.

Possible solutions:
- Have async migration state notifiers. The migration state will change after all the notifiers' completion callbacks are called.
- libvirt will wait for a qmp event corresponding to spice completing its migration, and only then kill the source qemu process.

Any thoughts?

Thanks,
Yonit.
Re: [Qemu-devel] [PATCH 1/5] block: Virtual Bridges VERDE GOW disk image format documentation
On Fri, Mar 9, 2012 at 4:50 AM, Stefan Hajnoczi stefa...@gmail.com wrote:

The mmap(2) approach doesn't support QEMU's protocol concept, where an image format block driver is independent of the underlying storage (host file system, NBD, HTTP, etc). In QEMU block layer terminology, NBD, HTTP, and the host file system block drivers are protocols in that they give access to data. It's not possible to mmap(2) over NBD or HTTP. (I'm doing a linear code review, so perhaps your later patches avoid using mmap. But at this point I wanted to comment on this.)

Indeed, mmap() is used in the code. It is unfortunate that it cannot be used: it's a really high-performance way to achieve what we want here, and very safe for the use case. Of course, the only medium we support in the product that uses this is the filesystem, so I see your point. I'll see about using some different mechanism.

This is a good overview. It would be nice to see a structure-level specification of the file format on disk, but given this explanation it doesn't seem critical unless you wish to do that.

Thanks - I'd rather not. The format is actually quite obvious from the code. It's very simple and doesn't involve any sort of clustering, etc. There's not much more than the overview that is not quickly understood from the code itself (even the .h file).

This has been raised in similar situations in the past: you have BSD licensed this but then say All Rights Reserved. What does that mean? You have just given rights to distribute, modify, etc. through the BSD license, so I'm not sure it makes sense to reserve all rights. Your copyright is fine, but you cannot restrict rights; that would conflict with QEMU's license (which overall is GPL).

I'm happy to hack off the All Rights Reserved. Our main goal is to get this accepted upstream. We provide value to customers with our knowledge and our higher level frameworks, not with this disk image format by itself.

Also, as far as image formats go, as you can see, it's pretty trivial. We chose BSD because 1) QEMU was all BSD a few years back when we originated this, and 2) it plays nice with both open source and closed source. If someone wants to take this and do what they want with it, that's fine with me (and my company). We have been shipping these patches for years with our commercial product, so it's not new to the market.

I have to post a v2 of the patches anyway - I'll make sure to hack off the All Rights Reserved clause in those.

Thanks for your time on this,

- Leo

Stefan
[Qemu-devel] GSoC - Tracepoint support for the gdbstub
Hello all,

I am Mitnick Lyu, and I want to contribute to QEMU and participate in Google Summer of Code this year. I have some experience with toolchain development and I am highly interested in the project: Tracepoint support for the gdbstub. I would like to know whether someone is already working on this project.

Thanks a lot
Re: [Qemu-devel] How to trace all the guest OS instructions and the micro-ops
Hi

On Sun, Mar 11, 2012 at 10:12, Yue Chen ycyc...@gmail.com wrote:

I am doing some research based on QEMU. Does anyone know how to get (trace) all the instructions of the guest OS, and get all the intermediate micro-ops? (Not in the 0.9.1 version)

I believe it's the -d option you're looking for. Please read the qemu manual for further clarification and info.

Additionally, how to get the whole memory or each process' memory data of the guest OS?

You wanna do that simply from Qemu's monitor? I don't think that's doable... or at least not easily. Qemu sees guest RAM like your physical RAM. It doesn't differentiate which pages belong to which process. You need to hook or go straight inside the guest OS, maybe using gdb or another tool to get the core dumps of those processes.

I really appreciate your help.

Hope it helps...

-- 
regards,
Mulyadi Santosa
Freelance Linux trainer and consultant
blog: the-hydra.blogspot.com
training: mulyaditraining.blogspot.com
Re: [Qemu-devel] [Bug 950692] Re: High CPU usage in Host (revisited)
On Sun, Mar 11, 2012 at 05:30, PetaMem i...@petamem.com wrote:

*Newsflash* We do have a well-behaving KVM host with a 3.2.9 kernel on machine C

Note: I am not a Qemu developer :)

OK, I read your bug report many times. I think you need deeper profiling here. Perhaps perf top is the best bet. Just make sure the kernel has debug symbols included, so perf has as few difficulties as possible interpreting addresses into symbol names. Once you find the culprit, it could be easier to fix.

NB: -smp triggered it? Hmm, bad locking somewhere perhaps? Anyway, I am not sure Qemu/KVM could really flawlessly implement SMP. So maybe it points to a hidden bug in the vCPU parallel execution code somewhere.

-- 
regards,
Mulyadi Santosa
Freelance Linux trainer and consultant
blog: the-hydra.blogspot.com
training: mulyaditraining.blogspot.com
Re: [Qemu-devel] [PATCH 4/6] acpi_piix4: Track PCI hotplug status and allow non-ACPI remove path
On Tue, Mar 06, 2012 at 05:14:51PM -0700, Alex Williamson wrote: When a guest probes a device, clear the up bit in the hotplug register. This allows us to enable a non-ACPI remove path for devices added, but never accessed by the guest. This is useful when a guest does not have ACPI PCI hotplug support to avoid losing devices to a guest. We also now individually track bits for up and down rather than clearing both on each PCI hotplug action. Signed-off-by: Alex Williamson alex.william...@redhat.com There are two features here: 1. Fixing up/down handling 2. non ACPI removal I think 1 is done correctly here. But 2. seems something completely unrelated to acpi. How about tracking access in pci core? --- hw/acpi_piix4.c | 58 --- 1 files changed, 46 insertions(+), 12 deletions(-) diff --git a/hw/acpi_piix4.c b/hw/acpi_piix4.c index 4d88e23..7e766e5 100644 --- a/hw/acpi_piix4.c +++ b/hw/acpi_piix4.c @@ -27,6 +27,7 @@ #include sysemu.h #include range.h #include ioport.h +#include pci_host.h //#define DEBUG @@ -75,6 +76,7 @@ typedef struct PIIX4PMState { qemu_irq smi_irq; int kvm_enabled; Notifier machine_ready; +Notifier device_probe; /* for pci hotplug */ ACPIGPE gpe; @@ -336,6 +338,16 @@ static void piix4_pm_machine_ready(Notifier *n, void *opaque) } +static void piix4_pm_device_probe(Notifier *n, void *opaque) +{ +PIIX4PMState *s = container_of(n, PIIX4PMState, device_probe); +PCIDevice *pdev = opaque; + +if (pci_find_domain(pdev-bus) == 0 pci_bus_num(pdev-bus) == 0) { +s-pci0_status.up = ~(1U PCI_SLOT(pdev-devfn)); +} Seems ugly. How about we register notifiers per bus? 
+} + static PIIX4PMState *global_piix4_pm_state; /* cpu hotadd */ static int piix4_pm_initfn(PCIDevice *dev) @@ -383,6 +395,8 @@ static int piix4_pm_initfn(PCIDevice *dev) qemu_add_machine_init_done_notifier(s-machine_ready); qemu_register_reset(piix4_reset, s); piix4_acpi_system_hot_add_init(dev-bus, s); +s-device_probe.notify = piix4_pm_device_probe; +pci_host_add_dev_probe_notifier(s-device_probe); return 0; } @@ -502,6 +516,7 @@ static void pciej_write(void *opaque, uint32_t addr, uint32_t val) PCIDeviceClass *pc = PCI_DEVICE_GET_CLASS(dev); if (PCI_SLOT(dev-devfn) == slot !pc-no_hotplug) { qdev_free(qdev); +s-pci0_status.down = ~(1U slot); } } @@ -594,16 +609,41 @@ void qemu_system_cpu_hot_add(int cpu, int state) } #endif -static void enable_device(PIIX4PMState *s, int slot) +static int enable_device(PIIX4PMState *s, int slot) { +uint32_t mask = 1U slot; + +if ((s-pci0_status.up | s-pci0_status.down) mask) { +return -1; +} + s-gpe.sts[0] |= PIIX4_PCI_HOTPLUG_STATUS; -s-pci0_status.up |= (1 slot); +s-pci0_status.up |= mask; + +pm_update_sci(s); +return 0; } -static void disable_device(PIIX4PMState *s, int slot) +static int disable_device(PIIX4PMState *s, int slot) { +uint32_t mask = 1U slot; + +if (s-pci0_status.up mask) { +s-pci0_status.up = ~mask; +pciej_write(s, PCI_EJ_BASE, mask); + +/* Clear GPE PCI hotplug status if nothing left pending */ +if (!(s-pci0_status.up | s-pci0_status.down)) { +s-gpe.sts[0] = ~PIIX4_PCI_HOTPLUG_STATUS; +} +return 0; +} + s-gpe.sts[0] |= PIIX4_PCI_HOTPLUG_STATUS; -s-pci0_status.down |= (1 slot); +s-pci0_status.down |= mask; + +pm_update_sci(s); +return 0; } static int piix4_device_hotplug(DeviceState *qdev, PCIDevice *dev, @@ -620,15 +660,9 @@ static int piix4_device_hotplug(DeviceState *qdev, PCIDevice *dev, return 0; } -s-pci0_status.up = 0; -s-pci0_status.down = 0; if (state == PCI_HOTPLUG_ENABLED) { -enable_device(s, slot); +return enable_device(s, slot); } else { -disable_device(s, slot); +return disable_device(s, slot); } 
- -pm_update_sci(s); - -return 0; }
[Qemu-devel] [PATCH 0/5] AREG0 patches v6
In this version I rebased the series on REGPARM removal, without splitting i386 and x86_64.

I've also made some simple performance tests on i386. It looks like REGPARM removal accounts for a 2.5% performance loss and the full series 7.5%, in total a 10% loss in this test. I'd like to move on with the series, so if nobody produces figures with other targets that show such a loss, I'll commit the series next weekend.

URL:
git://repo.or.cz/qemu/blueswirl.git
http://repo.or.cz/r/qemu/blueswirl.git

Blue Swirl (5):
  i386: Remove REGPARM
  softmmu templates: optionally pass CPUState to memory access functions
  TCG: add 5 arg helpers to def-helper.h
  Sparc: avoid AREG0 for memory access helpers
  Sparc: avoid AREG0 wrappers for memory access helpers

 Makefile.target            |  12 +-
 configure                  |   7 +
 cpu-all.h                  |   9 +
 def-helper.h               |  26 +++
 exec-all.h                 |   2 +
 exec.c                     |   4 +
 osdep.h                    |   6 -
 softmmu_defs.h             |  60 +--
 softmmu_header.h           |  60 +--
 softmmu_template.h         |  86 ++---
 target-sparc/cpu.h         |   3 +-
 target-sparc/helper.h      |  20 +-
 target-sparc/ldst_helper.c | 415 
 target-sparc/op_helper.c   |  74 
 target-sparc/translate.c   |  62 ---
 tcg/arm/tcg-target.c       |  53 ++
 tcg/hppa/tcg-target.c      |  44 +
 tcg/i386/tcg-target.c      | 169 --
 tcg/ia64/tcg-target.c      |  46 +
 tcg/mips/tcg-target.c      |  44 +
 tcg/ppc/tcg-target.c       |  45 +
 tcg/ppc/tcg-target.h       |   2 +-
 tcg/ppc64/tcg-target.c     |  44 +
 tcg/s390/tcg-target.c      |  44 +
 tcg/sparc/tcg-target.c     |  50 +-
 tcg/tcg.c                  |  14 --
 tcg/tcg.h                  |   7 +-
 tcg/tci/tcg-target.c       |   6 +
 28 files changed, 966 insertions(+), 448 deletions(-)
 delete mode 100644 target-sparc/op_helper.c

-- 
1.7.9
[Qemu-devel] [PATCH 3/5] TCG: add 5 arg helpers to def-helper.h
Signed-off-by: Blue Swirl blauwir...@gmail.com --- def-helper.h | 26 ++ 1 files changed, 26 insertions(+), 0 deletions(-) diff --git a/def-helper.h b/def-helper.h index 8a822c7..a13310e 100644 --- a/def-helper.h +++ b/def-helper.h @@ -118,6 +118,8 @@ DEF_HELPER_FLAGS_3(name, 0, ret, t1, t2, t3) #define DEF_HELPER_4(name, ret, t1, t2, t3, t4) \ DEF_HELPER_FLAGS_4(name, 0, ret, t1, t2, t3, t4) +#define DEF_HELPER_5(name, ret, t1, t2, t3, t4, t5) \ +DEF_HELPER_FLAGS_5(name, 0, ret, t1, t2, t3, t4, t5) #endif /* DEF_HELPER_H */ @@ -140,6 +142,10 @@ dh_ctype(ret) HELPER(name) (dh_ctype(t1), dh_ctype(t2), dh_ctype(t3)); dh_ctype(ret) HELPER(name) (dh_ctype(t1), dh_ctype(t2), dh_ctype(t3), \ dh_ctype(t4)); +#define DEF_HELPER_FLAGS_5(name, flags, ret, t1, t2, t3, t4, t5) \ +dh_ctype(ret) HELPER(name) (dh_ctype(t1), dh_ctype(t2), dh_ctype(t3), \ +dh_ctype(t4), dh_ctype(t5)); + #undef GEN_HELPER #define GEN_HELPER -1 @@ -203,6 +209,22 @@ static inline void glue(gen_helper_, name)(dh_retvar_decl(ret) dh_arg_decl(t1, 1 tcg_gen_helperN(HELPER(name), flags, sizemask, dh_retvar(ret), 4, args); \ } +#define DEF_HELPER_FLAGS_5(name, flags, ret, t1, t2, t3, t4, t5) \ +static inline void glue(gen_helper_, name)(dh_retvar_decl(ret) \ +dh_arg_decl(t1, 1), dh_arg_decl(t2, 2), dh_arg_decl(t3, 3), \ +dh_arg_decl(t4, 4), dh_arg_decl(t5, 5)) \ +{ \ + TCGArg args[5]; \ + int sizemask = 0; \ + dh_sizemask(ret, 0); \ + dh_arg(t1, 1); \ + dh_arg(t2, 2); \ + dh_arg(t3, 3); \ + dh_arg(t4, 4); \ + dh_arg(t5, 5); \ + tcg_gen_helperN(HELPER(name), flags, sizemask, dh_retvar(ret), 5, args); \ +} + #undef GEN_HELPER #define GEN_HELPER -1 @@ -224,6 +246,9 @@ DEF_HELPER_FLAGS_0(name, flags, ret) #define DEF_HELPER_FLAGS_4(name, flags, ret, t1, t2, t3, t4) \ DEF_HELPER_FLAGS_0(name, flags, ret) +#define DEF_HELPER_FLAGS_5(name, flags, ret, t1, t2, t3, t4, t5) \ +DEF_HELPER_FLAGS_0(name, flags, ret) + #undef GEN_HELPER #define GEN_HELPER -1 @@ -235,6 +260,7 @@ DEF_HELPER_FLAGS_0(name, flags, ret) #undef 
DEF_HELPER_FLAGS_2 #undef DEF_HELPER_FLAGS_3 #undef DEF_HELPER_FLAGS_4 +#undef DEF_HELPER_FLAGS_5 #undef GEN_HELPER #endif -- 1.7.9
[Qemu-devel] [PATCH 1/5] i386: Remove REGPARM
Use stack based calling convention (GCC default) for interfacing with generated code instead of register based convention (regparm(3)). Signed-off-by: Blue Swirl blauwir...@gmail.com --- osdep.h |6 --- softmmu_defs.h| 32 +++--- softmmu_template.h|8 +-- tcg/i386/tcg-target.c | 112 tcg/ppc/tcg-target.h |2 +- tcg/tcg.c | 14 -- tcg/tcg.h |7 +--- 7 files changed, 77 insertions(+), 104 deletions(-) diff --git a/osdep.h b/osdep.h index 0350383..15e 100644 --- a/osdep.h +++ b/osdep.h @@ -70,12 +70,6 @@ #define inline always_inline #endif -#ifdef __i386__ -#define REGPARM __attribute((regparm(3))) -#else -#define REGPARM -#endif - #define qemu_printf printf int qemu_daemon(int nochdir, int noclose); diff --git a/softmmu_defs.h b/softmmu_defs.h index c5a2bcd..d47d30d 100644 --- a/softmmu_defs.h +++ b/softmmu_defs.h @@ -9,22 +9,22 @@ #ifndef SOFTMMU_DEFS_H #define SOFTMMU_DEFS_H -uint8_t REGPARM __ldb_mmu(target_ulong addr, int mmu_idx); -void REGPARM __stb_mmu(target_ulong addr, uint8_t val, int mmu_idx); -uint16_t REGPARM __ldw_mmu(target_ulong addr, int mmu_idx); -void REGPARM __stw_mmu(target_ulong addr, uint16_t val, int mmu_idx); -uint32_t REGPARM __ldl_mmu(target_ulong addr, int mmu_idx); -void REGPARM __stl_mmu(target_ulong addr, uint32_t val, int mmu_idx); -uint64_t REGPARM __ldq_mmu(target_ulong addr, int mmu_idx); -void REGPARM __stq_mmu(target_ulong addr, uint64_t val, int mmu_idx); +uint8_t __ldb_mmu(target_ulong addr, int mmu_idx); +void __stb_mmu(target_ulong addr, uint8_t val, int mmu_idx); +uint16_t __ldw_mmu(target_ulong addr, int mmu_idx); +void __stw_mmu(target_ulong addr, uint16_t val, int mmu_idx); +uint32_t __ldl_mmu(target_ulong addr, int mmu_idx); +void __stl_mmu(target_ulong addr, uint32_t val, int mmu_idx); +uint64_t __ldq_mmu(target_ulong addr, int mmu_idx); +void __stq_mmu(target_ulong addr, uint64_t val, int mmu_idx); -uint8_t REGPARM __ldb_cmmu(target_ulong addr, int mmu_idx); -void REGPARM __stb_cmmu(target_ulong addr, uint8_t val, int 
mmu_idx); -uint16_t REGPARM __ldw_cmmu(target_ulong addr, int mmu_idx); -void REGPARM __stw_cmmu(target_ulong addr, uint16_t val, int mmu_idx); -uint32_t REGPARM __ldl_cmmu(target_ulong addr, int mmu_idx); -void REGPARM __stl_cmmu(target_ulong addr, uint32_t val, int mmu_idx); -uint64_t REGPARM __ldq_cmmu(target_ulong addr, int mmu_idx); -void REGPARM __stq_cmmu(target_ulong addr, uint64_t val, int mmu_idx); +uint8_t __ldb_cmmu(target_ulong addr, int mmu_idx); +void __stb_cmmu(target_ulong addr, uint8_t val, int mmu_idx); +uint16_t __ldw_cmmu(target_ulong addr, int mmu_idx); +void __stw_cmmu(target_ulong addr, uint16_t val, int mmu_idx); +uint32_t __ldl_cmmu(target_ulong addr, int mmu_idx); +void __stl_cmmu(target_ulong addr, uint32_t val, int mmu_idx); +uint64_t __ldq_cmmu(target_ulong addr, int mmu_idx); +void __stq_cmmu(target_ulong addr, uint64_t val, int mmu_idx); #endif diff --git a/softmmu_template.h b/softmmu_template.h index 97020f8..40fcf58 100644 --- a/softmmu_template.h +++ b/softmmu_template.h @@ -89,8 +89,7 @@ static inline DATA_TYPE glue(io_read, SUFFIX)(target_phys_addr_t physaddr, } /* handle all cases except unaligned access which span two pages */ -DATA_TYPE REGPARM glue(glue(__ld, SUFFIX), MMUSUFFIX)(target_ulong addr, - int mmu_idx) +DATA_TYPE glue(glue(__ld, SUFFIX), MMUSUFFIX)(target_ulong addr, int mmu_idx) { DATA_TYPE res; int index; @@ -232,9 +231,8 @@ static inline void glue(io_write, SUFFIX)(target_phys_addr_t physaddr, #endif /* SHIFT 2 */ } -void REGPARM glue(glue(__st, SUFFIX), MMUSUFFIX)(target_ulong addr, - DATA_TYPE val, - int mmu_idx) +void glue(glue(__st, SUFFIX), MMUSUFFIX)(target_ulong addr, DATA_TYPE val, + int mmu_idx) { target_phys_addr_t ioaddr; unsigned long addend; diff --git a/tcg/i386/tcg-target.c b/tcg/i386/tcg-target.c index 1dbe240..9776203 100644 --- a/tcg/i386/tcg-target.c +++ b/tcg/i386/tcg-target.c @@ -116,17 +116,7 @@ static inline int tcg_target_get_call_iarg_regs_count(int flags) return 6; } -flags = 
TCG_CALL_TYPE_MASK; -switch(flags) { -case TCG_CALL_TYPE_STD: -return 0; -case TCG_CALL_TYPE_REGPARM_1: -case TCG_CALL_TYPE_REGPARM_2: -case TCG_CALL_TYPE_REGPARM: -return flags - TCG_CALL_TYPE_REGPARM_1 + 1; -default: -tcg_abort(); -} +return 0; } /* parse target specific constraints */ @@ -1148,7 +1138,12 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, int data_reg, data_reg2 = 0; int addrlo_idx; #if defined(CONFIG_SOFTMMU) -int mem_index, s_bits, arg_idx; +int mem_index, s_bits; +#if TCG_TARGET_REG_BITS == 64 +
[Qemu-devel] [PATCH 5/5] Sparc: avoid AREG0 wrappers for memory access helpers
Adjust generation of load and store templates so that the functions take a
parameter for CPUState instead of relying on global env. Remove wrappers.
Move remaining memory helpers to ldst_helper.c.

Signed-off-by: Blue Swirl blauwir...@gmail.com
---
 Makefile.target            |   12 ++-
 configure                  |    7 ++
 target-sparc/cpu.h         |   85 +-
 target-sparc/ldst_helper.c |   73 +-
 target-sparc/op_helper.c   |  174
 target-sparc/translate.c   |   10 ++-
 6 files changed, 93 insertions(+), 268 deletions(-)
 delete mode 100644 target-sparc/op_helper.c

diff --git a/Makefile.target b/Makefile.target
index de61c6b..4941171 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -80,7 +80,10 @@ libobj-y = exec.o translate-all.o cpu-exec.o translate.o
 libobj-y += tcg/tcg.o tcg/optimize.o
 libobj-$(CONFIG_TCG_INTERPRETER) += tci.o
 libobj-y += fpu/softfloat.o
-libobj-y += op_helper.o helper.o
+ifneq ($(TARGET_BASE_ARCH), sparc)
+libobj-y += op_helper.o
+endif
+libobj-y += helper.o
 ifeq ($(TARGET_BASE_ARCH), i386)
 libobj-y += cpuid.o
 endif
@@ -101,9 +104,12 @@ tci-dis.o: QEMU_CFLAGS += -I$(SRC_PATH)/tcg -I$(SRC_PATH)/tcg/tci

 $(libobj-y): $(GENERATED_HEADERS)

-# HELPER_CFLAGS is used for all the code compiled with static register
+# HELPER_CFLAGS is used for all the legacy code compiled with static register
 # variables
-op_helper.o user-exec.o: QEMU_CFLAGS += $(HELPER_CFLAGS)
+ifneq ($(TARGET_BASE_ARCH), sparc)
+op_helper.o: QEMU_CFLAGS += $(HELPER_CFLAGS)
+endif
+user-exec.o: QEMU_CFLAGS += $(HELPER_CFLAGS)

 # Note: this is a workaround. The real fix is to avoid compiling
 # cpu_signal_handler() in user-exec.c.

diff --git a/configure b/configure
index 39d2b54..e055ec9 100755
--- a/configure
+++ b/configure
@@ -3580,6 +3580,13 @@ case "$target_arch2" in
     exit 1
   ;;
 esac
+
+case "$target_arch2" in
+  sparc*)
+    echo "CONFIG_TCG_PASS_AREG0=y" >> $config_target_mak
+  ;;
+esac
+
 echo "TARGET_SHORT_ALIGNMENT=$target_short_alignment" >> $config_target_mak
 echo "TARGET_INT_ALIGNMENT=$target_int_alignment" >> $config_target_mak
 echo "TARGET_LONG_ALIGNMENT=$target_long_alignment" >> $config_target_mak

diff --git a/target-sparc/cpu.h b/target-sparc/cpu.h
index 143db17..71a890c 100644
--- a/target-sparc/cpu.h
+++ b/target-sparc/cpu.h
@@ -581,89 +581,6 @@ void cpu_unassigned_access(CPUState *env1, target_phys_addr_t addr,
 target_phys_addr_t cpu_get_phys_page_nofault(CPUState *env, target_ulong addr,
                                              int mmu_idx);
 #endif
-
-#define WRAP_LD(rettype, fn)                                    \
-    rettype cpu_ ## fn (CPUState *env1, target_ulong addr)
-
-WRAP_LD(uint32_t, ldub_kernel);
-WRAP_LD(uint32_t, lduw_kernel);
-WRAP_LD(uint32_t, ldl_kernel);
-WRAP_LD(uint64_t, ldq_kernel);
-
-WRAP_LD(uint32_t, ldub_user);
-WRAP_LD(uint32_t, lduw_user);
-WRAP_LD(uint32_t, ldl_user);
-WRAP_LD(uint64_t, ldq_user);
-
-WRAP_LD(uint64_t, ldfq_kernel);
-WRAP_LD(uint64_t, ldfq_user);
-
-#ifdef TARGET_SPARC64
-WRAP_LD(uint32_t, ldub_hypv);
-WRAP_LD(uint32_t, lduw_hypv);
-WRAP_LD(uint32_t, ldl_hypv);
-WRAP_LD(uint64_t, ldq_hypv);
-
-WRAP_LD(uint64_t, ldfq_hypv);
-
-WRAP_LD(uint32_t, ldub_nucleus);
-WRAP_LD(uint32_t, lduw_nucleus);
-WRAP_LD(uint32_t, ldl_nucleus);
-WRAP_LD(uint64_t, ldq_nucleus);
-
-WRAP_LD(uint32_t, ldub_kernel_secondary);
-WRAP_LD(uint32_t, lduw_kernel_secondary);
-WRAP_LD(uint32_t, ldl_kernel_secondary);
-WRAP_LD(uint64_t, ldq_kernel_secondary);
-
-WRAP_LD(uint32_t, ldub_user_secondary);
-WRAP_LD(uint32_t, lduw_user_secondary);
-WRAP_LD(uint32_t, ldl_user_secondary);
-WRAP_LD(uint64_t, ldq_user_secondary);
-#endif
-#undef WRAP_LD
-
-#define WRAP_ST(datatype, fn)                                   \
-    void cpu_ ## fn (CPUState *env1, target_ulong addr, datatype val)
-
-WRAP_ST(uint32_t, stb_kernel);
-WRAP_ST(uint32_t, stw_kernel);
-WRAP_ST(uint32_t, stl_kernel);
-WRAP_ST(uint64_t, stq_kernel);
-
-WRAP_ST(uint32_t, stb_user);
-WRAP_ST(uint32_t, stw_user);
-WRAP_ST(uint32_t, stl_user);
-WRAP_ST(uint64_t, stq_user);
-
-WRAP_ST(uint64_t, stfq_kernel);
-WRAP_ST(uint64_t, stfq_user);
-
-#ifdef TARGET_SPARC64
-WRAP_ST(uint32_t, stb_hypv);
-WRAP_ST(uint32_t, stw_hypv);
-WRAP_ST(uint32_t, stl_hypv);
-WRAP_ST(uint64_t, stq_hypv);
-
-WRAP_ST(uint64_t, stfq_hypv);
-
-WRAP_ST(uint32_t, stb_nucleus);
-WRAP_ST(uint32_t, stw_nucleus);
-WRAP_ST(uint32_t, stl_nucleus);
-WRAP_ST(uint64_t, stq_nucleus);
-
-WRAP_ST(uint32_t, stb_kernel_secondary);
-WRAP_ST(uint32_t, stw_kernel_secondary);
-WRAP_ST(uint32_t, stl_kernel_secondary);
-WRAP_ST(uint64_t, stq_kernel_secondary);
-
-WRAP_ST(uint32_t, stb_user_secondary);
-WRAP_ST(uint32_t, stw_user_secondary);
-WRAP_ST(uint32_t, stl_user_secondary);
-WRAP_ST(uint64_t, stq_user_secondary);
-#endif
-
-#undef WRAP_ST
 #endif

 int cpu_sparc_signal_handler(int host_signum, void *pinfo, void *puc);
@@ -776,6 +693,8 @@ uint64_t cpu_tick_get_count(CPUTimer *timer);
[Qemu-devel] Windows boot is waiting for keypress
Hello, I am virtualizing a Windows 2000 machine (a bit-by-bit copy of the physical machine). It apparently works fine except for one strange thing: Windows 2000 stops at the black screen (first step of boot) where it asks me whether I want to load Windows 2000 or the previous operating system. When the machine was physical there was a short timeout on that screen, after which it would proceed with the default choice of Windows 2000, but under qemu it waits forever, as if I had pressed a key to interrupt the timeout. Note that there is another bootloader *before* that one (it chainloads the Windows bootloader), and that one also has a timeout which can be interrupted with a keypress, but it does not show the problem, i.e. it goes ahead after its timeout without my needing to press a key. That's a problem because I cannot start the Windows VM from Linux scripting, as it stops at boot. Sending a keypress to a VM programmatically is not so easy, methinks... or do you know how to do that? This is with qemu-kvm 1.0. Thanks for any ideas.
Re: [Qemu-devel] Re : Regression: more 0.12 regression (SeaBIOS related?)
On Wed, Mar 07, 2012 at 06:31:31AM -0800, Alain Ribière wrote:

I ran qemu 1.0.1 and the latest SeaBIOS (from the git) with the following options:

qemu-system-i386 -L git/bios -fda disk.img -no-fd-bootchk -boot a -m 16

Here is the log: https://docs.google.com/open?id=0B7mz0vq6Rpb7UE1ibjJDcEhTRWlNV050QnMyMWwtZw
Here is the floppy disk image I used: https://docs.google.com/open?id=0B7mz0vq6Rpb7bHpYaEt2SnVUUi1KaWE3a3lBQUJpQQ

The floppy disk is simply a C-DOS 720 KB floppy created by "format a: /s", so it's quite empty. Qemu doesn't crash or freeze, but I can just type a single character and then nothing else. The system is still running, though (there is a clock at the bottom right of the screen).

I tracked this down. Looks like the image takes over the PS2 irq and keyboard handling, but then occasionally calls into the BIOS. When it does call the BIOS irq handler (manually), it expects the irq handler to enable the keyboard. Weird. Anyway, the patch below fixes it for me.

-Kevin

From 90ce89f8953da0e89c311aa34116b59aac1c6c5e Mon Sep 17 00:00:00 2001
From: Kevin O'Connor ke...@koconnor.net
Date: Sun, 11 Mar 2012 20:45:56 -0400
Subject: [PATCH] ps2: Enable keyboard at end of PS2 port irq.
To: seab...@seabios.org

Looks like some old programs expect the keyboard irq to enable the keyboard port at the end of the irq. This behavior was seen on an image of Concurrent DOS.

Signed-off-by: Kevin O'Connor ke...@koconnor.net
---
 src/ps2port.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/src/ps2port.c b/src/ps2port.c
index 1f04299..4b27b7a 100644
--- a/src/ps2port.c
+++ b/src/ps2port.c
@@ -404,6 +404,9 @@ handle_09(void)

     process_key(v);

+    // Some old programs expect ISR to turn keyboard back on.
+    i8042_command(I8042_CMD_KBD_ENABLE, NULL);
+
 done:
     eoi_pic1();
 }
--
1.7.6.5
Re: [Qemu-devel] [RESEND][PATCH 2/2 v3] deal with guest panicked event
At 03/08/2012 07:56 PM, Daniel P. Berrange Wrote:
On Thu, Mar 08, 2012 at 01:52:45PM +0200, Avi Kivity wrote:
On 03/08/2012 01:36 PM, Daniel P. Berrange wrote:
On Thu, Mar 08, 2012 at 01:28:56PM +0200, Avi Kivity wrote:
On 03/08/2012 12:15 PM, Wen Congyang wrote:

When the host knows the guest is panicked, it will set exit_reason to KVM_EXIT_GUEST_PANICKED. So if qemu receives this exit_reason, we can send an event to tell the management application that the guest is panicked and set the guest status to RUN_STATE_PANICKED.

Signed-off-by: Wen Congyang we...@cn.fujitsu.com
---
 kvm-all.c        |    5 +
 monitor.c        |    3 +++
 monitor.h        |    1 +
 qapi-schema.json |    2 +-
 qmp.c            |    3 ++-
 vl.c             |    1 +
 6 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/kvm-all.c b/kvm-all.c
index 77eadf6..b3c9a83 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -1290,6 +1290,11 @@ int kvm_cpu_exec(CPUState *env)
                     (uint64_t)run->hw.hardware_exit_reason);
             ret = -1;
             break;
+        case KVM_EXIT_GUEST_PANICKED:
+            monitor_protocol_event(QEVENT_GUEST_PANICKED, NULL);
+            vm_stop(RUN_STATE_PANICKED);
+            ret = -1;
+            break;

If the management application is not aware of this event, then it will never resume the guest, so it will appear hung.

Even if the mgmt app doesn't know about the QEVENT_GUEST_PANICKED, it should still see a QEVENT_STOP event emitted by vm_stop() surely? So it will know the guest CPUs have been stopped, even if it isn't aware of the reason why, which seems fine to me.

No. The guest is stopped, and there's no reason to suppose that the management app will restart it. Behaviour has changed. Suppose the guest has reboot_on_panic set; now the behaviour change is even more visible: service will stop completely instead of being interrupted for a bit while the guest reboots.

Hmm, so this calls for a new command line argument to control behaviour, similar to what we do for disk werror, eg something like --onpanic report|pause|stop|...
where

  report - emit QEVENT_GUEST_PANICKED only

If the guest is panicked when libvirt is stopped, and we only emit an event, we cannot know the guest is panicked when libvirt starts. So I add a new RunState to solve this problem. If the guest is stopped when it is panicked, it will change the behaviour. So I think the new RunState should be a running state.

Thanks
Wen Congyang

  pause - emit QEVENT_GUEST_PANICKED and pause VM
  stop  - emit QEVENT_GUEST_PANICKED and quit VM

This would map fairly well into libvirt, where we already have config parameters for controlling what to do with a guest when it panics.

Regards,
Daniel
Re: [Qemu-devel] [RFC][PATCH 05/16 v8] Add API to get memory mapping
At 03/09/2012 06:05 PM, Jan Kiszka Wrote:
On 2012-03-09 10:57, Wen Congyang wrote:
At 03/09/2012 05:41 PM, Jan Kiszka Wrote:
On 2012-03-09 03:53, HATAYAMA Daisuke wrote:
From: Wen Congyang we...@cn.fujitsu.com
Subject: Re: [RFC][PATCH 05/16 v8] Add API to get memory mapping
Date: Fri, 09 Mar 2012 10:26:56 +0800
At 03/09/2012 10:05 AM, HATAYAMA Daisuke Wrote:
From: Wen Congyang we...@cn.fujitsu.com
Subject: Re: [RFC][PATCH 05/16 v8] Add API to get memory mapping
Date: Fri, 09 Mar 2012 09:46:31 +0800
At 03/09/2012 08:40 AM, HATAYAMA Daisuke Wrote:
From: Wen Congyang we...@cn.fujitsu.com
Subject: Re: [RFC][PATCH 05/16 v8] Add API to get memory mapping
Date: Thu, 08 Mar 2012 16:52:29 +0800
At 03/07/2012 11:27 PM, HATAYAMA Daisuke Wrote:
From: Wen Congyang we...@cn.fujitsu.com
Subject: [RFC][PATCH 05/16 v8] Add API to get memory mapping
Date: Fri, 02 Mar 2012 18:18:23 +0800

How does the memory portion referenced by PT_LOAD program headers with p_vaddr == 0 look through gdb? If we cannot access such portions, the part not referenced by the page table CR3 has is unnecessary, isn't it?

The part is unnecessary if you use gdb. But it is necessary if you use crash.

crash users would not use the paging option because even without using it, we can see all memory well, so the paging option is only for gdb users.

Yes, the paging option is only for gdb users. The default value is off.

It looks to me that the latter part only complicates the logic. If, instead, we collect virtual addresses only, the handling of PT_LOAD entries becomes simpler; for example, they no longer need to be physically contiguous in a single entry, and reviewing and maintenance become easier.

Sorry, I do not understand what you want to say.

The processing that adds the part not referenced by the page table to the vmcore is meaningless for gdb. crash doesn't require it. So, it only complicates the current logic.

If the paging mode is on, we can also use crash to analyze the vmcore.
As the comment mentioned, the memory used by the 1st kernel may not be referenced by the page table, so we need this logic.

As I said several times, crash users don't use paging mode. The users of paging mode are gdb users only, just as you say. So, the paging path needs to collect only the part referenced by the page table, since the other part is invisible to gdb.

If crash can work both with and without paging, it should default to *on* to avoid writing cores that can later on only be analyzed with that tool. Still not sure, though, if that changes the requirement on which memory regions should be written in that mode.

If this logic is not removed, crash can work both with and without paging. But the default value is 'off' now, because the option is '-p'.

And this would be unfortunate if you do not want to use crash for analyzing (I'm working on gdb python scripts which will make gdb - one day - at least as powerful as crash). If paging mode has the same information that non-paging mode has, I would even suggest dropping it.

I do not have any knowledge about gdb python scripts. But is it OK to work without a virtual address in PT_LOAD?

Thanks
Wen Congyang

Jan
Re: [Qemu-devel] [PATCH 0/8] Add GTK UI to enable basic accessibility (v2)
On 03/11/2012 12:29 PM, Stefan Weil wrote: Hi Anthony, are you still working on a new version of this patch series? Yeah, but this is purely a free time project which is in short supply these days :-) I suggest to commit a slightly modified version of v2 which adds the GTK UI as an optional user interface (only enabled by a configure option). The big thing I have on my TODO list is understanding what's happening with resizing. I'm actually in the process of upgrading to GTK3 so I hope I'll see the issues other people are having. I'll try to get a new series out as soon as I have some free time. Regards, Anthony Liguori This makes testing easier and allows developers to send patches which improve the new UI. As soon as the GTK UI is considered stable and usable, the default could be changed from SDL to GTK. Regards, Stefan Weil PS. Of course the committed patches should pass checkpatch.pl without errors.
Re: [Qemu-devel] [PATCH 0/8] Add GTK UI to enable basic accessibility (v2)
On 03/11/2012 01:24 PM, François Revol wrote: GTK itself causes problems, because, it's not ported, thus not available, to all platforms QEMU can run on. It's certainly not available on Haiku at least. There is no perfect solution here. I think GTK is the best that's out there. By using GTK, we can leverage VteTerminal for screen reader integration and font configuration. We can also use GTK's accelerator support to make accelerators configurable (Gnome provides a global accelerator configuration interface). Hmm the thing using libvte that uses /tmp to insecurely store terminal backlogs ? ;-) Yeah, I saw it on a blog, it must be true! Regards, Anthony Liguori As soon as the GTK UI is considered stable and usable, the default could be changed from SDL to GTK. Due to GTK not being as universally available as SDL, I'd really like not to. François.
[Qemu-devel] change the default value of timeout
Hi all,

Currently, if not using nonblocking mode, the default timeout of select() in main_loop_wait is 1000 ms. There is no problem if you run a few VMs, but when running more VMs, like 32 or 64, the problem appears. Our experience shows that when running 64 idle VMs, the package C6 residency is 88% by default, and it goes to 90% when I change the timeout to 10 s. And 2% means 1 watt on my box.

Since this is only a timeout value for select, I suggest setting it in a more reasonable way rather than using a fixed value, for example by letting the user set it via an argument. But I am not sure whether something else depends on the timeout.

best regards
yang
Re: [Qemu-devel] regarding qcow2metadata
Yes, of course. Here is the output:

[root@t06 p]# ls -lsh
total 1.4M
1.4M -rw-r--r-- 1 root root 8.1G Mar 12 09:10 guest

On Wed, Mar 7, 2012 at 10:00 PM, Mulyadi Santosa mulyadi.sant...@gmail.com wrote:

have you double checked by using the ls -lsh command? :)

--
*Pankaj Rawat*