Re: [Qemu-devel] the problem of booting linux kernel from NFS
Hello. A newbie wrote: "... when I boot the new kernel, there is the message: IP-Config: No network device available. Looking up port of RPC 13/2 on 192.168.15.1 ... and the kernel cannot mount the root fs. But the kernel has the network device driver. What is the reason, and how should I solve it?" The reason is that your kernel doesn't contain the right driver module for your network card. Rgds, Anton
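For what it's worth, an NFS root usually requires both the NIC driver and IP autoconfiguration to be built into the kernel rather than as modules, since there is no root filesystem to load modules from yet. A sketch of the relevant options, assuming qemu's default ne2k_pci NIC and a 2.6-era kernel (adjust the driver option to match your emulated card):

```
CONFIG_NET_PCI=y
CONFIG_NE2K_PCI=y        # driver for qemu's default NIC, =y not =m
CONFIG_IP_PNP=y          # kernel-level IP autoconfiguration
CONFIG_IP_PNP_DHCP=y
CONFIG_NFS_FS=y
CONFIG_ROOT_NFS=y        # allow / to be mounted over NFS
```

and then boot with a command line along the lines of `root=/dev/nfs nfsroot=192.168.15.1:/path/to/rootfs ip=dhcp` (the server path here is only a placeholder).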
Re: [Qemu-devel] [PATCH] fix possible NULL pointer use in hw/ptimer.c
We currently don't check the return value in the init function where the new timer is created, but we do check it wherever it is used, which is backwards and wasteful. You would prefer that qemu just segfaults rather than dying gracefully? I think qemu should die before it returns from qemu_malloc. Having to check every return value is extremely tedious and (as you've proved) easy to miss. If the allocation fails we don't have any viable alternatives, so we may as well stop right there. Paul
Re: [Qemu-devel] qemu cpu-all.h exec.c
On 1/3/08, Paul Brook [EMAIL PROTECTED] wrote: On Wednesday 02 January 2008, Blue Swirl wrote: On 1/2/08, Paul Brook [EMAIL PROTECTED] wrote: Also the opaque parameter may need to be different for each function, it just didn't matter for the unassigned memory case. Do you really have systems where independent devices need to respond to different sized accesses to the same address? I don't think so. But one day unassigned or even normal RAM memory access may need an opaque parameter, so passing the device's opaque to the unassigned memory handler is wrong. I'm not convinced. Your current implementation seems to introduce an extra level of indirection without any plausible benefit. If you're treating unassigned memory differently it needs to be handled much earlier so that you can raise CPU exceptions. Earlier, where's that? Another approach could be conditional stacked handlers, where a higher-level handler could pass the access request to a lower one (possibly modifying it in flight) or handle it completely. Maybe this solves the longstanding generic DMA issue if taken in the device-to-memory direction.
Re: [Qemu-devel] qemu cpu-all.h exec.c
Blue Swirl wrote: On 1/3/08, Paul Brook [EMAIL PROTECTED] wrote: On Wednesday 02 January 2008, Blue Swirl wrote: On 1/2/08, Paul Brook [EMAIL PROTECTED] wrote: Also the opaque parameter may need to be different for each function, it just didn't matter for the unassigned memory case. Do you really have systems where independent devices need to respond to different sized accesses to the same address? I don't think so. But one day unassigned or even normal RAM memory access may need an opaque parameter, so passing the device's opaque to the unassigned memory handler is wrong. I'm not convinced. Your current implementation seems to introduce an extra level of indirection without any plausible benefit. If you're treating unassigned memory differently it needs to be handled much earlier so that you can raise CPU exceptions. Earlier, where's that? Another approach could be conditional stacked handlers, where a higher-level handler could pass the access request to a lower one (possibly modifying it in flight) or handle it completely. Maybe this solves the longstanding generic DMA issue if taken in the device-to-memory direction. As I said earlier, the only correct way to handle memory accesses is to be able to consider a memory range and its associated I/O callbacks as an object which can be installed _and_ removed. It implies that there is a priority system close to what you described. It is essential to correct long-standing PCI bugs, for example. Regards, Fabrice.
Re: [Qemu-devel] qemu cpu-all.h exec.c
On 1/3/08, Fabrice Bellard [EMAIL PROTECTED] wrote: Blue Swirl wrote: On 1/3/08, Paul Brook [EMAIL PROTECTED] wrote: On Wednesday 02 January 2008, Blue Swirl wrote: On 1/2/08, Paul Brook [EMAIL PROTECTED] wrote: Also the opaque parameter may need to be different for each function, it just didn't matter for the unassigned memory case. Do you really have systems where independent devices need to respond to different sized accesses to the same address? I don't think so. But one day unassigned or even normal RAM memory access may need an opaque parameter, so passing the device's opaque to the unassigned memory handler is wrong. I'm not convinced. Your current implementation seems to introduce an extra level of indirection without any plausible benefit. If you're treating unassigned memory differently it needs to be handled much earlier so that you can raise CPU exceptions. Earlier, where's that? Another approach could be conditional stacked handlers, where a higher-level handler could pass the access request to a lower one (possibly modifying it in flight) or handle it completely. Maybe this solves the longstanding generic DMA issue if taken in the device-to-memory direction. As I said earlier, the only correct way to handle memory accesses is to be able to consider a memory range and its associated I/O callbacks as an object which can be installed _and_ removed. It implies that there is a priority system close to what you described. It is essential to correct long-standing PCI bugs, for example. This should be feasible, though it raises a few questions. Does this mean another API for stacked registration, or should stacking happen automatically with the current API? A new function is needed for removal. What could be the API for setting priorities? How would multiple layers be enabled for multiple devices at the same location? How can a higher-level handler pass the request to a lower one? Do we need a status return for the access handler?
A few use cases:
- Partial width device -> unassigned
- ROM -> RAM -> unassigned
- SBus controller -> EBus controller -> Device -> unassigned
Other direction (for future expansion):
- Device -> DMA controller -> SBus controller -> IOMMU -> RAM -> unassigned
Re: [Qemu-devel] [PATCH 2 of 3] Optionally link against libuuid if present
* Filip Navara [EMAIL PROTECTED] [2007-12-11 15:29]: Hi Ryan and others, now I have been holding an SMBIOS patch on my hard disk for way too long, it seems. I used a different approach from yours, so I decided to publish it for review or further ideas. What I did was to modify the bochs bios to produce the SMBIOS tables, and I get the UUID using the VMware backdoor port from the virtual machine. Attached are just the changed files; creating a patch will take a while because it's against a VERY OLD version of the sources. Filip, thanks for posting this. I agree with Fabrice that doing the SMBIOS tables in rombios is a better approach. The rombios32.c file you included didn't look that old to me; it has a CVS release tag of 'rombios32.c,v 1.11 2007/08/03 13:56:13'. AFAICT, it looks like a straightforward SMBIOS implementation. The only thing worth adding to yours is the BIOS release date string in the type 0 table. Setting this date to something newer than the typical CONFIG_ACPI_BLACKLIST_YEAR value (2000 in Gutsy's kernels) lets the kernel enable ACPI features (like power-off). Any idea on when you might have a patch that I can test? -- Ryan Harper Software Engineer; Linux Technology Center IBM Corp., Austin, Tx (512) 838-9253 T/L: 678-9253 [EMAIL PROTECTED]
Re: [Qemu-devel] qemu cpu-all.h exec.c
As I said earlier, the only correct way to handle memory accesses is to be able to consider a memory range and its associated I/O callbacks as an object which can be installed _and_ removed. It implies that there is a priority system close to what you described. It is essential to correct long-standing PCI bugs, for example. This should be feasible, though it raises a few questions. Does this mean another API for stacked registration, or should stacking happen automatically with the current API? A new function is needed for removal. What could be the API for setting priorities? How would multiple layers be enabled for multiple devices at the same location? How can a higher-level handler pass the request to a lower one? Do we need a status return for the access handler? I don't think passing through requests to the next handler is an interesting use case. Just consider a device to handle all accesses within its defined region. If an overlapping region is accessed then at best you're into highly machine-dependent behavior. The only interesting case I can think of is x86, where a PCI region may be overlaid on top of RAM. A single level of priority (ram/rom vs. everything else) is probably sufficient for practical purposes. The most important thing is that when one of the mappings is removed, subsequent accesses to the previously overlapped region hit the remaining device. A few use cases: Partial width device -> unassigned; ROM -> RAM -> unassigned; SBus controller -> EBus controller -> Device -> unassigned. Other direction (for future expansion): Device -> DMA controller -> SBus controller -> IOMMU -> RAM -> unassigned. I think these are different things: - Registering multiple devices within the same address space. - Mapping access from one address space to another. Currently qemu does neither. The former is what Fabrice is talking about. The latter depends on how general you want the solution to be. One possibility is for the device DMA+registration routines to map everything onto the CPU address space. Paul
[Qemu-devel] performance monitor
hi! has anyone ever used some real performance monitoring tools (like papiex, perfex, pfmon, etc.) on qemu? i'm running a debian linux and would like to time some applications inside qemu, and have tried the perfmon2 kernel patch (http://perfmon2.sourceforge.net/) for testing. sadly, it does not work... dmesg tells me that the CPU is not identified correctly (unsupported family=6). Now i am not really sure what type of hardware support the monitor relies on (i think PMU is the correct term, but I'm not sure about that) and what CPUs are supported (dmesg tells me that qemu simulates a Pentium M, but that's probably because I've compiled the kernel on my *real* Pentium M). ... Ok, to cut a long question short: Is there any hardware support in qemu for doing monitoring (that goes deeper than using time) and has anyone ever tested something that could work? Thanks! Clemens
[Qemu-devel] qemu/target-mips helper.c
CVSROOT:        /sources/qemu
Module name:    qemu
Changes by:     Thiemo Seufer <ths>  08/01/03 21:26:24

Modified files:
        target-mips: helper.c

Log message:
        Fix exception debug output.

CVSWeb URLs:
http://cvs.savannah.gnu.org/viewcvs/qemu/target-mips/helper.c?cvsroot=qemu&r1=1.61&r2=1.62
Re: [Qemu-devel] performance monitor
On Thursday 03 January 2008 22:29:06 Paul Brook wrote: ... Ok, to cut a long question short: Is there any hardware support in qemu for doing monitoring (that goes deeper than using time) and has anyone ever tested something that could work? Probably your application wants the performance counters. Qemu doesn't emulate those. Besides which, qemu is not cycle accurate. Any performance measurements you make are pretty much meaningless, and bear absolutely no relationship to real hardware. Thanks for the quick answer Paul! Not really what I wanted to hear, but probably true ;-) Does anyone have an idea on how I can measure performance in qemu to a somewhat accurate level? I have modified qemu (the memory handling) and the linux kernel and want to find out the penalty this introduced... does anyone have any comments / ideas on this? Thanks!
Re: [Qemu-devel] performance monitor
... Ok, to cut a long question short: Is there any hardware support in qemu for doing monitoring (that goes deeper than using time) and has anyone ever tested something that could work? Probably your application wants the performance counters. Qemu doesn't emulate those. Besides which, qemu is not cycle accurate. Any performance measurements you make are pretty much meaningless, and bear absolutely no relationship to real hardware. Paul
Re: [Qemu-devel] performance monitor
Does anyone have an idea on how I can measure performance in qemu to a somewhat accurate level? I have modified qemu (the memory handling) and the linux kernel and want to find out the penalty this introduced... does anyone have any comments / ideas on this? Short answer is you probably can't. And even if you can I won't believe your results unless you've verified them on real hardware :-) With the exception of some very small embedded cores, modern CPUs have complex out-of-order execution pipelines and multi-level cache hierarchies. It's common for performance to be dominated by these secondary factors rather than raw instruction throughput. Exactly what features dominate performance is very application specific. Determining which factor dominates is unlikely to be something qemu can help with. However if e.g. you know that for your application there's a good correlation between performance and L2 cache misses, you could instrument qemu to add an L1/L2 cache model. The overhead will be fairly severe (easily 10x slower), and completely screw up any realtime measurements. However it would produce some useful cache use statistics that you could use to guesstimate actual performance. This is similar to how cachegrind works. Obviously if your application isn't cache bound then these figures will be meaningless. Paul
Re: [Qemu-devel] performance monitor
On Thursday 03 January 2008 23:07:07 you wrote: Does anyone have an idea on how I can measure performance in qemu to a somewhat accurate level? I have modified qemu (the memory handling) and the linux kernel and want to find out the penalty this introduced... does anyone have any comments / ideas on this? Short answer is you probably can't. And even if you can I won't believe your results unless you've verified them on real hardware :-) With the exception of some very small embedded cores, modern CPUs have complex out-of-order execution pipelines and multi-level cache hierarchies. It's common for performance to be dominated by these secondary factors rather than raw instruction throughput. Exactly what features dominate performance is very application specific. Determining which factor dominates is unlikely to be something qemu can help with. However if e.g. you know that for your application there's a good correlation between performance and L2 cache misses, you could instrument qemu to add an L1/L2 cache model. The overhead will be fairly severe (easily 10x slower), and completely screw up any realtime measurements. However it would produce some useful cache use statistics that you could use to guesstimate actual performance. This is similar to how cachegrind works. Obviously if your application isn't cache bound then these figures will be meaningless. Well, the measuring I had in mind partly concentrates on TLB misses, page faults, etc. (in addition to the cycle measuring). guess i'll have to implement something for myself in qemu :-/ But thanks a lot for helping me out!
Re: [Qemu-devel] [PATCH v3] Add cache parameter to -drive
On Monday 24 December 2007 at 15:34 +0100, andrzej zaborowski wrote: [...] -#define BLK_READ_BLOCK(a, len) sd_blk_read(sd->bdrv, sd->data, a, len) -#define BLK_WRITE_BLOCK(a, len) sd_blk_write(sd->bdrv, sd->data, a, len) I committed the patch but I retained the use of these macros in sd.c because they make sense to me. No problem for me. Thank you, Laurent -- - [EMAIL PROTECTED] -- "Perfection is achieved not when there is nothing more to add, but when there is nothing left to take away." (Saint Exupéry)
Re: [Qemu-devel] performance monitor
On Jan 3, 2008 11:11 PM, Clemens Kolbitsch [EMAIL PROTECTED] wrote: Well, the measuring I had in mind partly concentrates on TLB misses, page faults, etc. (in addition to the cycle measuring). guess i'll have to implement something for myself in qemu :-/ There's something not clear here: do you want to measure your kernel changes, or do you want to profile Qemu? As Paul clearly explained, you can't do both :) If you want to measure kernel performance, oprofile is probably worth looking at. But you will need the real hardware. Another option, though much more intrusive, would be to add explicit performance counters in the places you need to look at (this method can be applied to Qemu too). And to say it again: nobody can expect to measure OS performance on a simulator, unless the simulator is directly derived from the HDL code written by the designers. At least I would never trust such a result ;) Laurent
Re: [Qemu-devel] performance monitor
On Thursday 03 January 2008 23:18:58 Paul Brook wrote: Well, the measuring I had in mind partly concentrates on TLB misses, page faults, etc. (in addition to the cycle measuring). guess i'll have to implement something for myself in qemu :-/ Be aware that the TLB qemu uses behaves very differently to a real CPU TLB. If you want to get TLB miss statistics you'll need to model a real TLB for that separately. Sure, yes. But I don't even care what it would be like on a real CPU. I just want to know the impact it has on the emulated CPU ;-) Page faults should be straightforward, but any half-decent guest OS would be able to tell you those anyway. True *g*
Re: [Qemu-devel] performance monitor
Well, the measuring I had in mind partly concentrates on TLB misses, page faults, etc. (in addition to the cycle measuring). guess i'll have to implement something for myself in qemu :-/ Be aware that the TLB qemu uses behaves very differently to a real CPU TLB. If you want to get TLB miss statistics you'll need to model a real TLB for that separately. Page faults should be straightforward, but any half-decent guest OS would be able to tell you those anyway. Paul
[Qemu-devel] Multiple Ethernet interfaces for Gumstix connex (NetDUO-mmc)
QEMU Development Team, I've been playing with the latest snapshot of QEMU (qemu-snapshot-2008-01-03_05.tar.bz2) to experiment with the Gumstix connex machine (the latest stable version 0.9.0 doesn't seem to have it). I noticed that this architecture supports only one ethernet device (SMC91C111). I would like to emulate something like a NetDUO-mmc (http://gumstix.com/store/catalog/product_info.php?products_id=156), where the PXA255a can utilize two SMC91C111 devices via GPIO.

Example: running the following only allows me to use eth0:

./arm-softmmu/qemu-system-arm -M connex -pflash ~/flash -nographic \
    -net nic,vlan=0 -net tap,vlan=0,ifname=tap0,script=no \
    -net nic,vlan=1 -net tap,vlan=1,ifname=tap1,script=no

Eth0 on the virtualized gumstix can receive and transmit information with the tap0 host interface. On the other hand, eth1 is never even discovered by the Linux kernel. Since eth1 was not being recognized, I started digging into the code. I modified hw/gumstix.c to add the following lines:

[EMAIL PROTECTED] qemu-snapshot-2008-01-03_05]$ svn diff hw/gumstix.c
Index: hw/gumstix.c
===================================================================
--- hw/gumstix.c    (revision 1259)
+++ hw/gumstix.c    (working copy)
@@ -78,7 +78,10 @@
     /* Interrupt line of NIC is connected to GPIO line 36 */
     smc91c111_init(nd_table[0], 0x04000300,
-                   pxa2xx_gpio_in_get(cpu->gpio)[36]);
+                   pxa2xx_gpio_in_get(cpu->gpio)[36]);
+    smc91c111_init(nd_table[1], 0x08000300,
+                   pxa2xx_gpio_in_get(cpu->gpio)[37]);
+
 }

 static void verdex_init(int ram_size, int vga_ram_size,

As for the arguments, I noticed the three following items: 1. nd_table[1] gets initialized in vl.c when parsing the arguments. 2. 0x08000300 I got from looking at a real gumstix with a NetDUO-mmc.
# cat /proc/iomem
04000300-040f : smc91x-regs
04000300-0400030f : smc91x
08000300-080f : smc91x-regs
08000300-0800030f : smc91x
2000-2fff : PCMCIA socket 0
2000-23ff : io
2800-2bff : attribute
2c00-2fff : memory
40301680-403016a3 : pxa2xx-i2c.0
4040-40400083 : pxa2xx-i2s
4060-4060 : pxa2xx-udc
4110-41100fff : pxa2xx-mci
4400-4400 : pxa2xx-fb
a000-a3ff : System RAM
a0018000-a015b0c7 : Kernel text
a015c000-a019a877 : Kernel data

3. GPIO line 37 was a stab in the dark. With this change, eth0 seemed to continue to work perfectly. As for eth1:
1. The Linux kernel seemed to ALSO recognize eth1 (example: ifconfig eth1 seemed to work fine).
2. Sending packets out the eth1 interface seemed okay, since I could run tcpdump on the tap1 host interface and see packets coming from the virtualized connex eth1.
3. Unfortunately, the eth1 device seems to have problems receiving packets due to some interrupt conflict. I get a number of the following errors: NETDEV WATCHDOG: eth1: transmit timed out

I was wondering if:
1. anyone else out there is working on adding support for another ethernet interface for the Gumstix connex (or Gumstix verdex), OR
2. anyone could suggest some information on trying to add another ethernet interface.

Thanks, John M. Woo
Re: [Qemu-devel] Multiple Ethernet interfaces for Gumstix connex (NetDUO-mmc)
Hi, On 04/01/2008, John W [EMAIL PROTECTED] wrote: 3. GPIO line 37 was a stab in the dark. With this change, eth0 seemed to continue to work perfectly. As for eth1: 1. The Linux kernel seemed to ALSO recognize eth1 (example: ifconfig eth1 seemed to work fine). 2. Sending packets out the eth1 interface seemed okay, since I could run tcpdump on the tap1 host interface and see packets coming from the virtualized connex eth1. 3. Unfortunately, the eth1 device seems to have problems receiving packets due to some interrupt conflict. I get a number of the following errors: NETDEV WATCHDOG: eth1: transmit timed out. I was wondering if: 1. anyone else out there is working on adding support for another ethernet interface for the Gumstix connex (or Gumstix verdex), OR 2. anyone could suggest some information on adding another ethernet interface. cat /proc/interrupts may yield some information on which pin the second NIC is connected to. The distance between the interrupt numbers for eth0 and eth1 should be the same as between the eth0 GPIO and eth1 GPIO. In particular, they may be using the same GPIO with the two signals being ORed or ANDed, or some other combination; this can be done in qemu too. Regards
[Qemu-devel] Windows Vista 64 bit on QEMU
Hello All, Has anyone had success installing (and running) Vista 64-bit on QEMU? I tried it and ran into a variety of Windows blue screen errors. The EFI BIOS also does not seem to work with the QEMU version in CVS. Thanks for the help. Regards, Anup