Re: [Qemu-devel] [PATCH] ARM - Remove fixed map code buffer restriction
On 12 December 2011 18:10, andrzej zaborowski balr...@gmail.com wrote:
> On 12 December 2011 19:03, Peter Maydell peter.mayd...@linaro.org wrote:
>> On 12 December 2011 17:24, andrzej zaborowski balr...@gmail.com wrote:
>>> BTW: I think we can also use the ld branch when we see the goto
>>> target is in Thumb mode.
>>
>> The target of a goto is currently never Thumb (because gotos are
>> always to other TCG generated code and we only generate ARM insns).
>
> I'm aware of that, I just like functions that can do what their name
> says well. :)

It does have an assert which will catch it if you try, so no one should
get caught out by it, and on ARMv7 the add is apparently an interworking
branch, so I think it might even work.

Dave
Re: [Qemu-devel] [ICON] QEMU Mascot Contest v.2
Yes, the Q with the emu head looks nice - I kind of think the middle of
the Q starts to look nicely like an egg; but perhaps that's just me.

Dave
Re: [Qemu-devel] [PATCH 2/5] linux-user: add open() hijack infrastructure
On 2 November 2011 19:23, Alexander Graf ag...@suse.de wrote:
> There are a number of files in /proc that expose host information to
> the guest program. This patch adds infrastructure to override the
> open() syscall for guest programs to enable us to on the fly generate
> guest sensible files.
>
> Signed-off-by: Alexander Graf ag...@suse.de
> ---
>  linux-user/syscall.c |   52 +++--
>  1 files changed, 49 insertions(+), 3 deletions(-)
>
> diff --git a/linux-user/syscall.c b/linux-user/syscall.c
> index 9f5da36..38953ba 100644
> --- a/linux-user/syscall.c
> +++ b/linux-user/syscall.c
> @@ -4600,6 +4600,52 @@ int get_osversion(void)
>      return osversion;
>  }
>
> +static int do_open(void *cpu_env, const char *pathname, int flags, mode_t mode)

Once you open the Pandora's box that is emulating /proc, I think you'll
probably need to hook it in more places and be more general; although
you may well get away with it for this particular case.

Isn't it better to put the filename interception code somewhere more
general, so that it can also be (mis)used by other calls - e.g. stat()?
I guess you're also going to need to be able to do /proc/pid/* instead
of /proc/self; something is bound to use that.

Dave
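The general interception point being suggested might look something like
the sketch below (illustrative only, not the patch's actual code; the
table contents and function names are made up). The point is that a
single path-matching helper could then be called from open(), stat(),
and any other path-taking syscall:

```c
#include <string.h>

/* Hypothetical helper: decide whether a guest-supplied path should be
 * redirected to a QEMU-generated replacement.  A single table like this
 * could be consulted from open(), stat(), access(), etc., rather than
 * hard-coding the check inside do_open() alone. */
static const char *intercepted_paths[] = {
    "/proc/cpuinfo",
    "/proc/self/maps",
    NULL,
};

static int is_intercepted_path(const char *pathname)
{
    for (int i = 0; intercepted_paths[i] != NULL; i++) {
        if (strcmp(pathname, intercepted_paths[i]) == 0) {
            return 1;
        }
    }
    return 0;
}
```

A real version would also need prefix matching to handle the
/proc/pid/* case mentioned above, rather than exact string comparison.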
Re: [Qemu-devel] [PATCH] Make cpu_single_env thread local (Linux only for now)
On 5 October 2011 10:21, Paolo Bonzini pbonz...@redhat.com wrote:
> If interested people can test the patches more and submit them more
> formally, I'd be very glad. I wrote it for RCU, but of course that one
> is not really going to be 1.0 material (even for 9p).

Hmm, this got a bit more complex than the original patch; still, it
covers a lot more bases.

Should this also replace the THREAD that's defined in linux-user/qemu.h
and bsd-user/qemu.h (that is __thread if built with NPTL)? It seems to
only be there for 'thread_env', which is also a CPUState* (hmm - what
state does that contain that cpu_single_env doesn't?)

Dave
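For readers unfamiliar with the mechanism under discussion: a `__thread`
variable gives every thread its own private copy, so a per-thread
"current CPU state" pointer can never be clobbered by another thread. A
minimal standalone illustration (not QEMU code; the variable name is a
stand-in for cpu_single_env):

```c
#include <pthread.h>

/* Illustrative stand-in for cpu_single_env: with __thread, each thread
 * gets a private copy of this pointer, so concurrent vCPU threads
 * cannot overwrite each other's value. */
static __thread int *current_env;

/* Each thread stores its own argument into its own copy of current_env
 * and returns what it sees there. */
static void *worker(void *arg)
{
    current_env = arg;   /* writes this thread's copy only */
    return current_env;
}
```

If `current_env` were a plain global, the worker's assignment would be
visible in (and race with) every other thread, which is exactly the
hazard the patch avoids.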
Re: [Qemu-devel] How to run realview-pbx-a9 image in qemu
On 27 September 2011 14:01, loody milo...@gmail.com wrote:
> hi:
<snip>
> Would you mind to let me know which configs you use to compile for a9
> running on qemu?

Kernel configs? I mostly use prebuilt kernels from the Linaro images.

Dave
Re: [Qemu-devel] [PATCH 0/8] tcg/interpreter: Add TCG + interpreter for bytecode (virtual machine)
On 18 September 2011 16:13, Stefan Weil w...@mail.berlios.de wrote:
> On 18.09.2011 17:02, Mulyadi Santosa wrote:
>> Hi :)
>>
>> On Sun, Sep 18, 2011 at 02:59, Stefan Weil w...@mail.berlios.de wrote:
>>> Hello,
>>> these patches add a new code generator (TCG target) to qemu.
>>
>> I personally congratulate you for your hard work. So, here's a
>> question from someone not so keen on Qemu internals: what is the
>> biggest advantage of using TCI instead of directly using TCG?
>
> TCG with native code support is much faster (6x to 10x), so for
> emulation on a supported host, TCI has no advantage for normal users.

Is it possible to dynamically switch between the two? The two cases I'm
thinking of are:

1) Using the interpreter to execute one or two instructions in an
   exception handling case
2) Avoiding TCG code generation on the first few runs of a piece of
   code that might only be init code, and only bothering with TCG for
   hotter code.

Dave
Re: [Qemu-devel] [PATCH 0/8] tcg/interpreter: Add TCG + interpreter for bytecode (virtual machine)
On 19 September 2011 11:20, Stefan Hajnoczi stefa...@gmail.com wrote:
> On Mon, Sep 19, 2011 at 9:40 AM, David Gilbert
> david.gilb...@linaro.org wrote:
<snip>
>> Is it possible to dynamically switch between the two? The two cases
>> I'm thinking of are:
>> 1) Using the interpreter to execute one or two instructions in an
>>    exception handling case
>> 2) Avoiding TCG code generation on the first few runs of a piece of
>>    code that might only be init code, and only bothering with TCG for
>>    hotter code.
>
> The tricky thing with using the interpreter for lesser-run code is
> that it has a bunch of machinery in front of it which probably makes
> it relatively similar to actually emitting native code.
>
> The interesting benchmark would be to translate blocks but never cache
> them for future executions - compare this with TCI to see how much
> difference there is between executing with interpretation vs
> translation. If the interpreter is almost as expensive as the
> translator then it's not worth it.

Right; the trick is that if you have a passably fast interpreter you
can afford to do some more expensive optimisations in the code
generator, which would be interesting. It's not unusual to find an
awful lot of executed once-or-twice code.

Dave
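The idea in point 2 above - interpret cold code and only generate
native code once a block proves hot - can be sketched as a simple
counter-based tier-up policy. This is a toy model, not QEMU's actual
mechanism; the struct, function name, and threshold are all invented
for illustration:

```c
#include <stdbool.h>

#define HOT_THRESHOLD 3   /* arbitrary: tier up on the 3rd execution */

struct block {
    int exec_count;   /* how many times this block has been executed */
    bool compiled;    /* has native code been generated for it? */
};

/* Returns true if this execution would go through the (hypothetical)
 * native code path, false if it would be interpreted.  Blocks that run
 * only once or twice never pay the code-generation cost. */
static bool execute_block(struct block *b)
{
    if (!b->compiled && ++b->exec_count >= HOT_THRESHOLD) {
        b->compiled = true;   /* tier up: generate native code once */
    }
    return b->compiled;
}
```

The payoff depends on the ratio Stefan describes: this only wins if
interpreting a block is meaningfully cheaper than translating it.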
Re: [Qemu-devel] emulated ARM performance vs real processor ?
On 1 September 2011 08:32, Julien Heyman bidsom...@gmail.com wrote:
> Hi,
>
> I was wondering if anyone had some data regarding the relative
> performance of any given ARM board emulated in QEMU versus the real
> thing. Yes, I do know this depends a lot on the host PC running qemu,
> but some ballpark/example figures would help. Say, I emulate a 400 MHz
> ARM9 processor on a Core2Duo laptop @ 2 GHz, what kind of
> performance/timing ratio should I expect, one way or the other? For
> example, for boot time. I have no idea whether the overhead of
> emulation is over-compensated by the huge processing power of the host
> compared to the real HW target, and by which factor.

Comparing performance is always a bit tricky, and I've not really got a
solid set of benchmarks ready to run to try it, but to give some
numbers:

1) Boot times

Comparing the Linaro 11.08 ubuntu desktop images, time to boot to
desktop:

  Real Panda board (dual core A9 at 1GHz, 1GB RAM, running off SD card)
  - 2 minutes to desktop

  QEMU vexpress (2xA9 core, 1GB RAM, emulated SD card, running on a
  Core2 Duo T9400 2.53GHz laptop) - 3 minutes to desktop

(The times are scarily close to exact minutes - timeout somewhere?)

Now, QEMU system mode only ever uses one host core when emulating
multiple cores, so there is a factor of 2 disadvantage there, but on
the plus side the memory bandwidth and disk speed of the host are
probably much higher than the Panda's.

2) Simple md5sum benchmark

As a really simple benchmark, the test:

  time (dd if=/dev/zero bs=1024k count=1000 | md5sum)

  Panda board - 14.5s real, 10.7s user, 3.8s system
  Emulated Overo board (single A8 processor, same laptop as above)
  - 41s real, 24.7s user, 16.4s system
  User mode emulated - 14.2s real, 14s user, 0.5s system
  Native on x86 host - 3.2s real, 2.5s user, 1.2s system

So, that's two sets of pretty bogus dummy simple benchmarks!

I suppose one observation is that the boot time isn't that bad compared
to the real (different) hardware, and the user mode emulation was
comparable to the Panda, but the system emulation on a simple test
seems a lot slower.

These things will vary wildly depending on what your benchmark is; but
as a summary I'd say that the ARM system mode emulation is fast enough
to use interactively, but CPU-wise is noticeably slower than user mode
emulation.

Dave
Re: [Qemu-devel] emulated ARM performance vs real processor ?
On 2 September 2011 17:04, Julien Heyman bidsom...@gmail.com wrote:
> Thanks Dave. I use system emulation, and my main concern is just to
> know that the actual board will run faster than the emulation. So
> based on your example, and even though my target board (mini2440) is
> nowhere as fast as a Panda board, this should be the case by a
> comfortable margin.

OK, but be careful - you will occasionally trip over something where
the emulation of it is particularly dire and the real board might be
faster; for example, with the default flags SD card writes can be a
factor of 10 slower than real hardware, so relying on the real hardware
always being faster is dangerous. You'll probably get similar CPU
emulation artefacts where there are some instructions that are
particularly nasty to emulate but really cheap on the hardware.

Dave
Re: [Qemu-devel] [PATCH] tcg: Reload local variables after return from longjmp
On 11 August 2011 15:10, Paolo Bonzini pbonz...@redhat.com wrote:
>> I'm not sure about what to read from there: If I make cpu_single_env
>> thread local with __thread and leave 0d101... in, then again it works
>> reliably on 32bit Lucid, and is flaky on 64bit Oneiric (5/10: 2
>> hangs, 3 segs). I've also tried using a volatile local variable in
>> cpu_exec to hold a copy of env and restore that rather than
>> cpu_single_env. With this it's solid on 32bit Lucid and flaky on
>> 64bit Oneiric; these failures on 64bit OO look like it running off
>> the end of the code buffer (all 0 code), jumping to non-existent code
>> addresses, and a seg in tb_reset_jump_recursive2.
>
> It looks like neither a thread-local cpu_single_env nor a volatile
> copy fix the bug?!?

As I say at the bottom of that bug, I'm assuming I'm hitting multiple
bugs. Although it's not clear to me why I don't hit them on 32bit
Lucid.

Dave
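For background on why the volatile copy was tried: per the C standard,
after a longjmp the values of non-volatile locals that were modified
since the setjmp are indeterminate (the compiler may have cached them
in registers that the longjmp does not restore). A minimal illustration
of the well-defined pattern (illustrative only, not QEMU's cpu_exec):

```c
#include <setjmp.h>

static jmp_buf env_buf;

/* Demonstrates the volatile-local rule: because 'state' is volatile,
 * the C standard guarantees it holds the last value stored before the
 * longjmp.  A plain (non-volatile) local modified between setjmp and
 * longjmp would have an indeterminate value at the return statement. */
static int run_with_longjmp(void)
{
    volatile int state = 0;
    if (setjmp(env_buf) == 0) {
        state = 42;            /* modified after setjmp... */
        longjmp(env_buf, 1);   /* ...then control jumps back */
    }
    return state;              /* well-defined: 42 */
}
```

A cached copy of env living in a register across the TCG longjmp is the
same hazard in miniature, which is what the original patch was
reloading to avoid.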
[Qemu-devel] Multithreaded ARM regression from 0d101938
Hi,
  I've just filed bug 823902 ( https://bugs.launchpad.net/qemu/+bug/823902 )
which is a multithreaded user mode ARM crash that goes away if I revert
0d101938 ( tcg: Reload local variables after return from longjmp ).

It's actually a bit more complicated than that, in that:
  1) It fails reliably on 32bit Lucid with that commit
  2) It works reliably on 32bit Lucid without that commit
  3) It fails reliably on 64bit Oneiric or Natty with that commit
  4) It works mostly on 64bit Oneiric without that commit.
     (By 'mostly' I mean run it 10 times and it fails a couple.)

Peter Maydell has suggested a few things, and we've tried using a local
volatile copy of env in cpu_exec, which seems to work fine on 32bit
Lucid and helps a bit on 64bit Oneiric but has a much higher failure
rate than just reverting 0d101938 - which is all a bit confusing. My
guess is I'm seeing multiple bugs but haven't quite nailed why/how.

Dave
[Qemu-devel] Defaulting SD_OPTS to cache=writeback
Hi,
  Write performance of the SD emulation on ARM is rather painful with
the default empty SD_OPTS; it's getting something like 130KB/s on the
vexpress model. With cache=writeback this goes up to a sensible 8MB/s.
(This is with the file on a LUKS-encrypted LVM partition that is my
home directory on an Ubuntu Lucid install on my Core 2 laptop.)

The impact is probably so bad because most of the writes are done using
the SD card's native 512-byte block size, which turns into a
read/modify/write.

Is it reasonable to set SD_OPTS to cache=writeback? If people want
other options they can always use -drive, but having the default give
130KB/s is nasty.

Dave
Re: [Qemu-devel] How to run realview-pbx-a9 image in qemu
On 12 July 2011 07:04, Xiao Jiang jgq...@gmail.com wrote:
> It looks like I am not in luck, qemu still can't run successfully. I
> recompiled the qemu from the linaro qemu tree and executed the below
> instructions in order.
>
> 1. open window A, run below cmd:
> xjiang@xjiang-desktop:~/work/qemu$ sudo qemu-system-arm -M
> realview-pbx-a9 -m 1024 -kernel zImage-cortex-a9 -serial
> telnet::,server -append

I think you have a bad kernel image. There are some prebuilt ones (and
config files) linked off:
http://www.arm.com/community/software-enablement/linux.php
but unfortunately the webserver that holds them isn't happy today.

Dave
Re: [Qemu-devel] How to run realview-pbx-a9 image in qemu
On 11 July 2011 09:21, Xiao Jiang jgq...@gmail.com wrote:
> Hello,
>
> I downloaded the latest qemu, 0.14.1; it should support the
> realview-pbx-a9 board now, judging from the below cmd:
>
>   $ qemu-system-arm -M ?|grep Cortex-A9
>   realview-pbx-a9  ARM RealView Platform Baseboard Explore for Cortex-A9
>
> Then I compiled a zImage from latest mainline with
> realview-smp_defconfig as the config file, but unfortunately qemu
> can't run with the image. I tried the below instructions:
>
> 1. qemu-system-arm -M realview-pbx-a9 -kernel zImage -initrd
>    initrd.img -nographic -append console=ttyAMA0
> 2. qemu-system-arm -M realview-pbx-a9 -m 1024M -kernel zImage
>    -nographic -append "root=/dev/nfs
>    nfsroot=128.225.160.22:/home/work/rootfs rw console=ttyAMA0
>    rdinit=/sbin/init" -net nic -net tap,ifname=tap0,script=no
>
> Neither works; the console hangs and no information appears. So I am
> wondering if there is something wrong with the kernel - did I choose
> the wrong config file for the realview-pbx-a9 board? Or can the native
> mainline kernel not boot on qemu? Any suggestions? Thanks a lot!

The command I use is:

  sudo ./arm-softmmu/qemu-system-arm -M realview-pbx-a9 -m 1024 \
    -kernel thekernel -serial telnet::,server \
    -append "console=earlycon console=ttyS0 console=ttyAMA0 earlyprintk
      root=/dev/nfs nfsroot=10.1.1.1:/armroot rw
      ip=10.1.1.2:10.1.1.1:10.1.1.1:255.255.255.0:armqemu nfsrootdebug" \
    -net nic -net tap,ifname=tap0,script=no,downscript=no

and then telnet to the port to get the console. (I'm using the linaro
qemu tree.)

Dave