Stefan Weil writes:
> +
> +    switch (opc) {
> +    case INDEX_op_end:
> +    case INDEX_op_nop:
> +        break;
You could probably get some more speed out of this by using a threaded
interpreter with gcc's computed goto extension. That's typically
significantly faster than a p
> I think it also improves branch target prediction - if you have a tight
> loop of a few opcodes the predictor can guess where you're headed (since
> there is a separate lookup key for each opcode), whereas with the
> original code, there's a single key which cannot be used to predict the
> branch.
> You generally want CSE, yes? So you can't blame gcc for getting it
> wrong sometimes.
There are cases where CSE pessimizes the code, e.g. when it increases
memory pressure too much or caches something that is cheaper to
recompute. This is just another one.
BTW I checked again and the problem see
Peter Maydell writes:
>
> The answer is that the edge cases basically never match. No CPU
> architecture does handling of NaNs and input denormals and output
> denormals and underflow checks and all the rest of it in exactly
> the same way as anybody else. (In particular x86 is pretty crazy,
Can
Running LTP testcases/kernel/io/direct_io/test_dma_thread_diotest7
causes IO errors in the guest. There are no IO errors on the host.
Kernel Linux 3.0.0-rc*
Using a standard emulated IDE -hda image.
I tried a few qemu versions, it happens at least with the one
in FC14 and with 0.14. qemu master
On Mon, Jun 27, 2011 at 05:59:41PM +0200, Kevin Wolf wrote:
> Am 23.06.2011 01:36, schrieb Andi Kleen:
> >
> > Running LTP testcases/kernel/io/direct_io/test_dma_thread_diotest7
> > causes IO errors in the guest. There are no IO errors on the host.
> >
> > K
Michael Roth writes:
> +
> +int64_t qmp_guest_file_open(const char *filename, const char *mode,
> +                            Error **err)
> +{
> +    FILE *fh;
> +    int fd, ret;
> +    int64_t id = -1;
> +
> +    if (!logging_enabled()) {
> +        error_set(err, QERR_QGA_LOGGING_FAILED);
> +        goto out;
> +    }
>
Hi,
Is the linux-user qemu for x86-64/i386 supposed to work?
For example running it with a simple hello world on FC14 in gdb:
/home/ak/tsrc/hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV),
dynamically linked (uses shared libs), for GNU/Linux 2.6.32, not stripped
[Thread debugging us
I don't have any problems running a statically linked x86_64
helloworld program in an i386 chroot. Dynamically linked programs try
to use wrong libraries, but at least running
/lib64/ld-linux-x86-64.so.2 directly works.
A static binary segfaults too. I wonder if it's some setup issue on my system.
I
On Sat, Feb 12, 2011 at 10:08:26AM +0200, Blue Swirl wrote:
> On Sat, Feb 12, 2011 at 12:27 AM, Andi Kleen wrote:
> >
> >> I don't have any problems running a statically linked x86_64
> >> helloworld program in an i386 chroot. Dynamically linked programs try
>
Don't declare XSAVE as supported
i386 cpuid.c currently claims XSAVE is supported in the CPUID filter,
but that's not true: Only FXSAVE is supported. Remove that bit
from the filter.
Signed-off-by: Andi Kleen
diff --git a/target-i386/cpuid.c b/target-i386/cpuid.c
index 6a0f7ca..4251
Add more boundary checking to sse3/4 parsing
The ssse3/sse4 decoder uses tables with only two entries per op, but
they are indexed with b1, which can contain values up to 3. This
happens when ssse3 or sse4 instructions are used with REP* prefixes.
Add boundary checking for this case.
Signed-off-by: Andi Kleen
diff --git a
Peter Lieven writes:
> Starting mail service (Postfix)
> NMI Watchdog detected LOCKUP on CPU 0
You could simply turn off the NMI watchdog (nmi_watchdog=0 on the kernel
command line). Perhaps the PMU emulation is not complete and the NMI
watchdog needs the PMU; it's not really needed normally.
-Andi
Juan Quintela writes:
>
> - networking: man, setting networking is a mess, libvirt just does it
> for you.
Agreed it's messy, but isn't this something that the standard qemu
command line tool could potentially do better by itself? I don't see why you
need a wrapper for that.
-Andi
On Wed, Aug 22, 2007 at 10:03:32AM +0300, Avi Kivity wrote:
> Maybe the kernel is using the timer, so userspace can't. Just a guess.
HPET has multiple timers (variable, but typically 2 or 4). The kernel
only uses timer 0. It's possible someone else in user space is using
it though. Try lsof /dev/
> $ dmesg |grep -i hpet
> ACPI: HPET 7D5B6AE0, 0038 (r1 A M I OEMHPET 5000708 MSFT 97)
> ACPI: HPET id: 0x8086a301 base: 0xfed0
> hpet0: at MMIO 0xfed0, IRQs 2, 8, 0, 0
> hpet0: 4 64-bit timers, 14318180 Hz
> hpet_resources: 0xfed0 is busy
What kernel version was that? There w