Re: [kvm-devel] [Qemu-devel] [PATCH 3/4] Add support for HPET periodic timer.
On Tue, Aug 21, 2007 at 01:15:22PM -0700, Matthew Kent wrote:
> On Tue, 2007-21-08 at 21:40 +0200, Luca wrote:
> > On 8/21/07, Matthew Kent <[EMAIL PROTECTED]> wrote:
> > > On Sat, 2007-18-08 at 01:11 +0200, Luca Tettamanti wrote:
> > > > plain text document attachment (clock-hpet)
> > > > Linux operates the HPET timer in legacy replacement mode, which means that
> > > > the periodic interrupt of the CMOS RTC is not delivered (qemu won't be able
> > > > to use /dev/rtc). Add support for HPET (/dev/hpet) as a replacement for the
> > > > RTC; the periodic interrupt is delivered via SIGIO and is handled in the
> > > > same way as the RTC timer.
> > > >
> > > > Signed-off-by: Luca Tettamanti <[EMAIL PROTECTED]>
> > >
> > > I must be missing something silly here.. should I be able to open more
> > > than one instance of qemu with -clock hpet? Because upon invoking a
> > > second instance of qemu HPET_IE_ON fails.
> >
> > It depends on your hardware. Theoretically it's possible, but I've yet
> > to see a motherboard with more than one periodic timer.
>
> Ah thank you, after re-reading the docs I think I better understand
> this.

At the risk of being off-topic, maybe you can help me try the hpet support. When I try the hpet Documentation demo I get

# ./hpet poll /dev/hpet 1 1000
-hpet: executing poll
hpet_poll: info.hi_flags 0x0
hpet_poll, HPET_IE_ON failed

while I have

$ dmesg | grep -i HPET
ACPI: HPET 7D5B6AE0, 0038 (r1 A M I OEMHPET 5000708 MSFT 97)
ACPI: HPET id: 0x8086a301 base: 0xfed0
hpet0: at MMIO 0xfed0, IRQs 2, 8, 0, 0
hpet0: 4 64-bit timers, 14318180 Hz
hpet_resources: 0xfed0 is busy
Time: hpet clocksource has been installed.

Any idea what I am misconfiguring?

Thanks,
Dan.
[Qemu-devel] Re: [kvm-devel] [PATCH 0/4] Rework alarm timer infrastructure - take2
Luca Tettamanti wrote:
> Actually I'm having troubles with cyclesoak (probably it's calibration),
> numbers are not very stable across multiple runs...

I've had good results with cyclesoak; maybe you need to run it in runlevel 3 so the load generated by moving the mouse or breathing doesn't affect measurements.

> I've also tried APC which was suggested by malc[1] and:
> - readings are far more stable
> - the gap between dynticks and non-dynticks seems not significant
>
>> Can you verify this by running
>>
>>    strace -c -p `pgrep qemu` & sleep 10; pkill strace
>>
>> for all 4 cases, and posting the results?
>
> Plain QEMU:
>
> With dynticks:
>
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  57.97    0.000469           0     13795           clock_gettime
>  32.88    0.000266           0      1350           gettimeofday
>   7.42    0.000060           0      1423      1072 sigreturn
>   1.73    0.000014           0      5049           timer_gettime
>   0.00    0.000000           0      1683      1072 select
>   0.00    0.000000           0      2978           timer_settime
> ------ ----------- ----------- --------- --------- ----------------
> 100.00    0.000809                 26278      2144 total

The 1072 select() errors are the delivered ticks (EINTR). But why only ~1000? I would have expected 10000 for a 1000Hz guest in a 10 sec period.

> HPET:
>
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  87.48    0.010459           1     10381     10050 select
>   8.45    0.001010           0     40736           clock_gettime
>   2.73    0.000326           0     10049           gettimeofday
>   1.35    0.000161           0     10086     10064 sigreturn
> ------ ----------- ----------- --------- --------- ----------------
> 100.00    0.011956                 71252     20114 total

This is expected. 1 tick per millisecond.

> Unix (SIGALRM):
>
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  90.36    0.011663           1     10291      9959 select
>   7.38    0.000953           0     40355           clock_gettime
>   2.05    0.000264           0      9960           gettimeofday
>   0.21    0.000027           0      9985      9969 sigreturn
> ------ ----------- ----------- --------- --------- ----------------
> 100.00    0.012907                 70591     19928 total

Same here.
> And KVM:
>
> dynticks:
>
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  78.90    0.004001           1      6681      5088 rt_sigtimedwait
>  10.87    0.000551           0     27901           clock_gettime
>   4.93    0.000250           0      7622           timer_settime
>   4.30    0.000218           0     10078           timer_gettime
>   0.39    0.000020           0      3863           gettimeofday
>   0.35    0.000018           0      6054           ioctl
>   0.26    0.000013           0      4196           select
>   0.00    0.000000           0      1593           rt_sigaction
> ------ ----------- ----------- --------- --------- ----------------
> 100.00    0.005071                 67988      5088 total

kvm uses sigtimedwait() to wait for signals. Here, an error (ETIMEDOUT) indicates we did _not_ get a wakeup, so there are ~1500 wakeups in a 10 second period. Strange. Some calibration error?

> HPET:
>
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  90.20    0.011029           0     32437     22244 rt_sigtimedwait
>   4.46    0.000545           0     44164           clock_gettime
>   2.59    0.000317           0     12128           gettimeofday
>   1.50    0.000184           0     10193           rt_sigaction
>   1.10    0.000134           0     12461           select
>   0.15    0.000018           0      6060           ioctl
> ------ ----------- ----------- --------- --------- ----------------
> 100.00    0.012227                117443     22244 total

~1K wakeups per second. The code is not particularly efficient (11 syscalls per tick), but overhead is still low.

> Unix:
>
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  83.29    0.012522           0     31652     21709 rt_sigtimedwait
>   6.91    0.001039           0     43125           clock_gettime
>   3.50    0.000526           0      6042           ioctl
>   2.74    0.000412           0      9943           rt_sigaction
>   1.98    0.000298           0     12183           select
>   1.58    0.000238           0     11850           gettimeofday
> ------ ----------- ----------- --------- --------- ----------------
> 100.00    0.015035                114795     21709 total
[Qemu-devel] Re: [kvm-devel] linux verify_pmtmr_rate() issue
Matthew Kent wrote:
> Issue here that's beyond my skill set to resolve:
>
> I've been starting multiple linux 2.6.23-rc3 x86 guests up in parallel
> with qemu/kvm and noticed pm-timer is being disabled in some of them
> with
>
> PM-Timer running at invalid rate: 126% of normal - aborting.
>
> in dmesg when I start about 6 at a time. Unfortunately without the timer
> a tickless kernel in my guests is disabled.
>
> I also replicated the issue by starting a single vm when the host system
> was busy enough.
>
> After some amateurish debugging added to verify_pmtmr_rate() in the
> kernel acpi_pm driver and get_pmtmr() in qemu acpi I can indeed see it
> returning just slowly enough to throw off the sanity check.
>
> [ 10.264772] DEBUG: PM-Timer running value1: 2925874 value2: 3058371
> expected_rate: 107385 delta: 132497 count: 2269
> [ 10.270766] PM-Timer running at invalid rate: 123% of normal -
> aborting.
>
> For now I've just disabled verify_pmtmr_rate() in the kernel for my
> guests and they seem to be keeping time just fine.
>
> Not sure if a patch for the linux kernel making the sanity check
> optional with a kernel parameter would make sense or there's something
> else that can be done at the qemu level.

You can try implementing qemu's cpu_get_real_ticks() using gettimeofday() instead of using the time stamp counter (which can go back or jump forward if the time stamp counter is not synced across cpus). Not sure if that's the problem though.

-- Do not meddle in the internals of kernels, for they are subtle and quick to panic.
[Qemu-devel] Re: ANN: DetaolB v0.5 is released
On Tuesday 21 August 2007 3:04:12 am Christian MICHON wrote:
> If your tcc fork can compile the kernel, uclibc, I'll gladly remove
> binutils and gcc :)

I'm working on that. Currently I'm trying to strip down an "allnoconfig" kernel build to something I can build from the command line via three or four lines of shell script (a gcc invocation and whatever postprocessing the image requires), boot it in qemu, and have it say "hello world". Then I'll try to get tcc to do that (tccboot did, but with an extremely old kernel), and work my way up from there... But that's fodder for the tcc list, not here. :)

Rob

-- "One of my most productive days was throwing away 1000 lines of code." - Ken Thompson.
[Qemu-devel] linux verify_pmtmr_rate() issue
Issue here that's beyond my skill set to resolve:

I've been starting multiple linux 2.6.23-rc3 x86 guests up in parallel with qemu/kvm and noticed pm-timer is being disabled in some of them with

PM-Timer running at invalid rate: 126% of normal - aborting.

in dmesg when I start about 6 at a time. Unfortunately without the timer a tickless kernel in my guests is disabled.

I also replicated the issue by starting a single vm when the host system was busy enough.

After some amateurish debugging added to verify_pmtmr_rate() in the kernel acpi_pm driver and get_pmtmr() in qemu acpi I can indeed see it returning just slowly enough to throw off the sanity check.

[ 10.264772] DEBUG: PM-Timer running value1: 2925874 value2: 3058371 expected_rate: 107385 delta: 132497 count: 2269
[ 10.270766] PM-Timer running at invalid rate: 123% of normal - aborting.

For now I've just disabled verify_pmtmr_rate() in the kernel for my guests and they seem to be keeping time just fine.

Not sure if a patch for the linux kernel making the sanity check optional with a kernel parameter would make sense or there's something else that can be done at the qemu level.

Thanks.
-- Matthew Kent <[EMAIL PROTECTED]> http://magoazul.com
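For reference, the arithmetic behind the guest's sanity check can be reproduced from the debug printk above. This is a sketch, not the kernel's actual verify_pmtmr_rate() code: the 24-bit wraparound mask and the exact tolerance are assumptions, while the counter values and expected_rate come straight from the log. (expected_rate 107385 corresponds to roughly 30 ms of wall time at the PM timer's 3579545 Hz.)

```python
# Sketch of the verify_pmtmr_rate()-style check, using the numbers from the
# debug output above. The 24-bit mask and the notion of a ~10% tolerance are
# assumptions for illustration; the counter values are from the guest's log.

PMTMR_TICKS_PER_SEC = 3579545  # ACPI PM timer frequency, Hz

def pmtmr_rate_percent(value1, value2, expected_rate):
    """Measured PM-timer advance as a percentage of the expected advance."""
    delta = (value2 - value1) & 0xFFFFFF  # PM timer counter is (at least) 24-bit
    return delta * 100 // expected_rate

# Values from the guest's debug printk:
pct = pmtmr_rate_percent(2925874, 3058371, 107385)
print(pct)  # 123 -> "PM-Timer running at invalid rate: 123% of normal"
```

On a loaded host, get_pmtmr() in qemu returns just slowly enough that delta overshoots expected_rate, so the guest sees 123% and aborts even though timekeeping would otherwise be fine.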
RE: [Qemu-devel] [PATCH] Share Vmware communication port between devices
>> Hi,
>>
>> Some more information about the VMware backdoor can be found at:
>> http://chitchat.at.infoseek.co.jp/vmware/backdoor.html
>
> Are there interesting apps that make use of this? I really don't like
> the idea of supporting this PV protocol if we're not going to get
> interesting apps out of it. The project you referenced earlier didn't
> have source code available (you had to request it privately) and I don't
> think it's being generally used.
>
> The problem with this particular protocol is that it's inherently
> x86-specific b/c it depends on doing PIO in userspace. If we're just
> looking to get this functionality, it would be better to do it as a PCI
> device or something that could actually work on non-x86 architectures.

Actually we have implemented such a device + driver in qemu for kvm. You can check out the code; currently it uses an ioport too, but it is accessed by a guest pci driver. It is called the vmchannel device. From the guest view it's a pci device. From the host view it's a qemu device that can be contacted as a socket/file/.. It is used for mgmt purposes.

--Dor
Re: [Qemu-devel] [PATCH] Share Vmware communication port between devices
On Tue, 2007-08-21 at 22:10 +0200, Hervé Poussineau wrote:
> Hi,
>
> Some more information about the VMware backdoor can be found at:
> http://chitchat.at.infoseek.co.jp/vmware/backdoor.html

Are there interesting apps that make use of this? I really don't like the idea of supporting this PV protocol if we're not going to get interesting apps out of it. The project you referenced earlier didn't have source code available (you had to request it privately) and I don't think it's being generally used.

The problem with this particular protocol is that it's inherently x86-specific b/c it depends on doing PIO in userspace. If we're just looking to get this functionality, it would be better to do it as a PCI device or something that could actually work on non-x86 architectures.

In my mind, vmmouse was worth implementing since the driver already exists and was packaged in a number of distros.

Regards,
Anthony Liguori

> Hervé
Re: [Qemu-devel] [PATCH 3/4] Add support for HPET periodic timer.
On Tue, 2007-21-08 at 21:40 +0200, Luca wrote:
> On 8/21/07, Matthew Kent <[EMAIL PROTECTED]> wrote:
> > On Sat, 2007-18-08 at 01:11 +0200, Luca Tettamanti wrote:
> > > plain text document attachment (clock-hpet)
> > > Linux operates the HPET timer in legacy replacement mode, which means that
> > > the periodic interrupt of the CMOS RTC is not delivered (qemu won't be able
> > > to use /dev/rtc). Add support for HPET (/dev/hpet) as a replacement for the
> > > RTC; the periodic interrupt is delivered via SIGIO and is handled in the
> > > same way as the RTC timer.
> > >
> > > Signed-off-by: Luca Tettamanti <[EMAIL PROTECTED]>
> >
> > I must be missing something silly here.. should I be able to open more
> > than one instance of qemu with -clock hpet? Because upon invoking a
> > second instance of qemu HPET_IE_ON fails.
>
> It depends on your hardware. Theoretically it's possible, but I've yet
> to see a motherboard with more than one periodic timer.

Ah thank you, after re-reading the docs I think I better understand this.

-- Matthew Kent <[EMAIL PROTECTED]> http://magoazul.com
[Qemu-devel] [PATCH] Share Vmware communication port between devices
Hi, Some more information about the VMware backdoor can be found at: http://chitchat.at.infoseek.co.jp/vmware/backdoor.html Hervé
[Qemu-devel] Re: [kvm-devel] [PATCH 0/4] Rework alarm timer infrastructure - take2
On Tue, 21 Aug 2007, Luca Tettamanti wrote:
> Avi Kivity ha scritto:
>> Luca Tettamanti wrote:
>>> At 1000Hz:
>>>
>>> QEMU
>>> hpet        5.5%
>>> dynticks   11.7%
>>>
>>> KVM
>>> hpet        3.4%
>>> dynticks    7.3%
>>>
>>> No surprises here, you can see the additional 1k syscalls per second.
>>
>> This is very surprising to me. The 6.2% difference for the qemu case
>> translates to 62ms per second, or 62us per tick at 1000Hz. That's more
>> than a hundred simple syscalls on modern processors. We shouldn't have
>> to issue a hundred syscalls per guest clock tick.
>
> [..snip prelude..]
>
> I've also tried APC which was suggested by malc[1] and:
> - readings are far more stable
> - the gap between dynticks and non-dynticks seems not significant
>
> [..dont snip the obvious fact and snip the numbers..]
>
> Luca
>
> [1] copy_to_user inside spinlock is a big no-no ;)

[..notice a projectile targeting at you and rush to see the code..]

Mixed feelings about this... But in principle the code of course is dangerous, thank you kindly for pointing this out. I see two ways out of this:

a. moving the lock/unlock inside the loop with unlock preceding the
   sometimes sleep deprived copy_to_user
b. fill temporaries and after the loop is done copy it in one go

Too late, too hot, i wouldn't mind being on the receiving side of a good advice.

-- vale
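Option (b) above — fill a temporary inside the critical section and do the possibly-sleeping copy only after the lock is dropped — can be sketched outside kernel C. In this illustrative model, threading.Lock stands in for the spinlock and `copy_out` for copy_to_user; all names are invented for the sketch.

```python
# Illustrative model of option (b): snapshot the shared data while holding
# the lock, then perform the (potentially sleeping) copy with the lock
# released. threading.Lock stands in for the kernel spinlock and copy_out
# for copy_to_user; this is not APC's actual code.

import threading

lock = threading.Lock()
shared_samples = [1, 2, 3, 4]  # data normally updated under the lock

def copy_out(dst, src):
    # Stand-in for copy_to_user(): may sleep, so it must never run under a spinlock.
    dst.extend(src)

def read_samples(dst):
    with lock:                 # critical section: only take a snapshot
        snapshot = list(shared_samples)
    copy_out(dst, snapshot)    # the sleeping copy happens after unlock

out = []
read_samples(out)
print(out)  # [1, 2, 3, 4]
```

The trade-off versus option (a) is one extra buffer's worth of memory in exchange for never re-taking the lock per element.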
Re: [Qemu-devel] [PATCH 3/4] Add support for HPET periodic timer.
On 8/21/07, Matthew Kent <[EMAIL PROTECTED]> wrote:
> On Sat, 2007-18-08 at 01:11 +0200, Luca Tettamanti wrote:
> > plain text document attachment (clock-hpet)
> > Linux operates the HPET timer in legacy replacement mode, which means that
> > the periodic interrupt of the CMOS RTC is not delivered (qemu won't be able
> > to use /dev/rtc). Add support for HPET (/dev/hpet) as a replacement for the
> > RTC; the periodic interrupt is delivered via SIGIO and is handled in the
> > same way as the RTC timer.
> >
> > Signed-off-by: Luca Tettamanti <[EMAIL PROTECTED]>
>
> I must be missing something silly here.. should I be able to open more
> than one instance of qemu with -clock hpet? Because upon invoking a
> second instance of qemu HPET_IE_ON fails.

It depends on your hardware. Theoretically it's possible, but I've yet to see a motherboard with more than one periodic timer.

"dmesg | grep hpet" should tell you something like:

hpet0: 3 64-bit timers, 14318180 Hz

Luca
[Qemu-devel] Re: [kvm-devel] [PATCH 0/4] Rework alarm timer infrastructure - take2
Avi Kivity ha scritto:
> Luca Tettamanti wrote:
>> At 1000Hz:
>>
>> QEMU
>> hpet        5.5%
>> dynticks   11.7%
>>
>> KVM
>> hpet        3.4%
>> dynticks    7.3%
>>
>> No surprises here, you can see the additional 1k syscalls per second.
>
> This is very surprising to me. The 6.2% difference for the qemu case
> translates to 62ms per second, or 62us per tick at 1000Hz. That's more
> than a hundred simple syscalls on modern processors. We shouldn't have to
> issue a hundred syscalls per guest clock tick.

APIC or PIT interrupts are delivered using the timer, which will be re-armed after each tick, so I'd expect 1k timer_settime per second. But according to strace it's not happening; maybe I'm misreading the code?

> The difference with kvm is smaller (just 3.9%), which is not easily
> explained as the time for the extra syscalls should be about the same. My
> guess is that guest behavior is different; with dynticks the guest does
> about twice as much work as with hpet.

Actually I'm having troubles with cyclesoak (probably it's calibration), numbers are not very stable across multiple runs...

I've also tried APC which was suggested by malc[1] and:
- readings are far more stable
- the gap between dynticks and non-dynticks seems not significant

> Can you verify this by running
>
>    strace -c -p `pgrep qemu` & sleep 10; pkill strace
>
> for all 4 cases, and posting the results?
Plain QEMU:

With dynticks:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 57.97    0.000469           0     13795           clock_gettime
 32.88    0.000266           0      1350           gettimeofday
  7.42    0.000060           0      1423      1072 sigreturn
  1.73    0.000014           0      5049           timer_gettime
  0.00    0.000000           0      1683      1072 select
  0.00    0.000000           0      2978           timer_settime
------ ----------- ----------- --------- --------- ----------------
100.00    0.000809                 26278      2144 total

HPET:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 87.48    0.010459           1     10381     10050 select
  8.45    0.001010           0     40736           clock_gettime
  2.73    0.000326           0     10049           gettimeofday
  1.35    0.000161           0     10086     10064 sigreturn
------ ----------- ----------- --------- --------- ----------------
100.00    0.011956                 71252     20114 total

Unix (SIGALRM):

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 90.36    0.011663           1     10291      9959 select
  7.38    0.000953           0     40355           clock_gettime
  2.05    0.000264           0      9960           gettimeofday
  0.21    0.000027           0      9985      9969 sigreturn
------ ----------- ----------- --------- --------- ----------------
100.00    0.012907                 70591     19928 total

And KVM:

dynticks:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 78.90    0.004001           1      6681      5088 rt_sigtimedwait
 10.87    0.000551           0     27901           clock_gettime
  4.93    0.000250           0      7622           timer_settime
  4.30    0.000218           0     10078           timer_gettime
  0.39    0.000020           0      3863           gettimeofday
  0.35    0.000018           0      6054           ioctl
  0.26    0.000013           0      4196           select
  0.00    0.000000           0      1593           rt_sigaction
------ ----------- ----------- --------- --------- ----------------
100.00    0.005071                 67988      5088 total

HPET:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 90.20    0.011029           0     32437     22244 rt_sigtimedwait
  4.46    0.000545           0     44164           clock_gettime
  2.59    0.000317           0     12128           gettimeofday
  1.50    0.000184           0     10193           rt_sigaction
  1.10    0.000134           0     12461           select
  0.15    0.000018           0      6060           ioctl
------ ----------- ----------- --------- --------- ----------------
100.00    0.012227                117443     22244 total

Unix:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 83.29    0.012522           0     31652     21709 rt_sigtimedwait
  6.91    0.001039           0     43125           clock_gettime
  3.50    0.000526           0      6042           ioctl
  2.74    0.000412           0      9943           rt_sigaction
  1.98    0.000298           0     12183           select
  1.58    0.000238           0     11850           gettimeofday
------ ----------- ----------- --------- --------- ----------------
100.00    0.015035                114795     21709 total
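A quick back-of-the-envelope reading of these 10-second traces: for -clock hpet the select() EINTR errors are the delivered ticks, and dividing the trace's total syscall count by the tick count gives the per-tick syscall overhead discussed in the thread. The numbers below are copied from the tables; the helper itself is just illustrative arithmetic.

```python
# Illustrative arithmetic over the 10-second strace summaries above.
# For the plain-QEMU -clock hpet run: 10050 select() EINTRs (one per tick)
# and 71252 syscalls in total.

TRACE_SECONDS = 10

def analyze(tick_wakeups, total_syscalls):
    """Return (ticks per second, syscalls per delivered tick)."""
    ticks_per_sec = tick_wakeups / TRACE_SECONDS
    syscalls_per_tick = total_syscalls / tick_wakeups
    return ticks_per_sec, syscalls_per_tick

rate, per_tick = analyze(10050, 71252)
print(round(rate))      # ~1005 ticks/sec, i.e. the expected 1 tick per ms
print(round(per_tick))  # ~7 syscalls per delivered tick
```

The same arithmetic on the KVM HPET run (117443 syscalls, roughly 10K delivered wakeups) gives the ~11-syscalls-per-tick figure mentioned in the follow-up.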
Re: [Qemu-devel] [PATCH 3/4] Add support for HPET periodic timer.
On Sat, 2007-18-08 at 01:11 +0200, Luca Tettamanti wrote:
> plain text document attachment (clock-hpet)
> Linux operates the HPET timer in legacy replacement mode, which means that
> the periodic interrupt of the CMOS RTC is not delivered (qemu won't be able
> to use /dev/rtc). Add support for HPET (/dev/hpet) as a replacement for the
> RTC; the periodic interrupt is delivered via SIGIO and is handled in the
> same way as the RTC timer.
>
> Signed-off-by: Luca Tettamanti <[EMAIL PROTECTED]>

I must be missing something silly here.. should I be able to open more than one instance of qemu with -clock hpet? Because upon invoking a second instance of qemu HPET_IE_ON fails.

I also tried running the example in the kernel docs under Documentation/hpet.txt

[EMAIL PROTECTED] [/home/mkent]# ./demo poll /dev/hpet 1 1000
-hpet: executing poll
hpet_poll: info.hi_flags 0x0
hpet_poll: expired time = 0x8
hpet_poll: revents = 0x1
hpet_poll: data 0x1
[EMAIL PROTECTED] [/home/mkent]# ./demo poll /dev/hpet 1 1000
-hpet: executing poll
hpet_poll: info.hi_flags 0x0
hpet_poll, HPET_IE_ON failed

This is on 2.6.23-rc3 x86_64 with the patch-2.6.23-rc3-hrt2.patch hrtimers patch.

-- Matthew Kent <[EMAIL PROTECTED]> http://magoazul.com
Re: [Qemu-devel] [PATCH] Share Vmware communication port between devices
On 8/21/07, Anthony Liguori <[EMAIL PROTECTED]> wrote:
> On Tue, 2007-08-21 at 20:17 +0200, Hervé Poussineau wrote:
> > Hi,
> >
> > VMware registers the port 0x5658 to communicate between guest and host.
> > At the moment, vmmouse.c is the only one to use this communication channel,
> > so it registers the port. IMO, this design is not right because it will be
> > hard to implement other functionalities of VMware.
> >
> > I extracted non-mouse part from this file and created a framework for VMware
> > communication in a new file. Devices can then register for specific
> > commands, so communication port will be shared between devices.
> > I also added support for "Get RAM size" command. More commands will be added
> > later.
>
> What other things are used for this port and where is it documented?
> What is the "Get RAM size" command used by?
>
> AFAIK, the vmware tools have a EULA that prevents them from being used
> in QEMU guests. Unless there's an open source driver that uses these
> commands, I don't see the use of supporting them if the drivers are
> restricted from being used within QEMU.

There are open source implementations for this interface. E.g. the Bluebottle OS (ETH Zuerich) has such an implementation. (In recent builds there are some installation issues on QEmu with ATA detection; ask if you want to try. I can extract and upload/mail the respective source if it is of any help.)

--Thomas
Re: [Qemu-devel] [PATCH] Share Vmware communication port between devices
On Tue, 2007-08-21 at 20:17 +0200, Hervé Poussineau wrote:
> Hi,
>
> VMware registers the port 0x5658 to communicate between guest and host.
> At the moment, vmmouse.c is the only one to use this communication channel,
> so it registers the port. IMO, this design is not right because it will be
> hard to implement other functionalities of VMware.
>
> I extracted non-mouse part from this file and created a framework for VMware
> communication in a new file. Devices can then register for specific
> commands, so communication port will be shared between devices.
> I also added support for "Get RAM size" command. More commands will be added
> later.

What other things are used for this port and where is it documented? What is the "Get RAM size" command used by?

AFAIK, the vmware tools have a EULA that prevents them from being used in QEMU guests. Unless there's an open source driver that uses these commands, I don't see the use of supporting them if the drivers are restricted from being used within QEMU.

Regards,
Anthony Liguori

> Attached files:
> 0 - vmmouse-formatting.diff
>     Replace tabs by 8 spaces. No code change
> 1 - adding-vmport.diff
>     Add a generic framework for VMware communication port
> 2 - vmmouse-using-vmport.diff
>     Use the framework for the VMware mouse emulation
>
> Hervé
Re: [Qemu-devel] Current state of SMP?
On 8/21/07, Simon Peter <[EMAIL PROTECTED]> wrote:
> Does anybody know what's going on? Is SMP support working at the moment?

SMP works fine on Sparc32. Performance isn't great because the CPUs are not halted in SMP mode but they busy loop when idle.
[Qemu-devel] [PATCH] Share Vmware communication port between devices
Hi,

VMware registers the port 0x5658 to communicate between guest and host. At the moment, vmmouse.c is the only one to use this communication channel, so it registers the port. IMO, this design is not right because it will be hard to implement other functionalities of VMware.

I extracted the non-mouse part from this file and created a framework for VMware communication in a new file. Devices can then register for specific commands, so the communication port will be shared between devices. I also added support for a "Get RAM size" command. More commands will be added later.

Attached files:
0 - vmmouse-formatting.diff
    Replace tabs by 8 spaces. No code change
1 - adding-vmport.diff
    Add a generic framework for VMware communication port
2 - vmmouse-using-vmport.diff
    Use the framework for the VMware mouse emulation

Hervé
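The registration/dispatch idea behind adding-vmport.diff — devices register a handler per backdoor command, and a single handler for port 0x5658 routes each command to its owner — can be sketched as below. This is not the patch's actual C API: the function names and the command number are illustrative (the magic 0x564D5868 is the well-known 'VMXh' backdoor magic).

```python
# Sketch of a shared backdoor-command dispatcher in the spirit of
# adding-vmport.diff. All names here are invented for illustration;
# the port number and magic come from the protocol description.

VMPORT_IOPORT = 0x5658
VMPORT_MAGIC = 0x564D5868  # 'VMXh', the VMware backdoor magic

CMD_GETRAMSIZE = 0x14      # illustrative number for the "Get RAM size" command

_handlers = {}

def vmport_register(command, handler):
    """A device registers a callback for one backdoor command."""
    _handlers[command] = handler

def vmport_ioport_read(magic, command):
    """Shared port handler: check the magic, then dispatch to the owner."""
    if magic != VMPORT_MAGIC:
        return 0  # not a backdoor access
    handler = _handlers.get(command)
    return handler() if handler else 0

vmport_register(CMD_GETRAMSIZE, lambda: 512)  # pretend the guest has 512 MB
print(vmport_ioport_read(VMPORT_MAGIC, CMD_GETRAMSIZE))  # 512
```

With this shape, vmmouse becomes just one more registered handler instead of owning port 0x5658 outright, which is the point of the refactoring.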
[Qemu-devel] Re: [kvm-devel] [PATCH 0/4] Rework alarm timer infrastructure - take2
Luca Tettamanti wrote:
>> Run a 100Hz guest, measure cpu usage using something accurate like
>> cyclesoak, with and without dynticks, with and without kvm.
>
> Ok, here I've measured the CPU usage on the host when running an idle
> guest. At 100Hz:
>
> QEMU
> hpet        4.8%
> dynticks    5.1%
>
> Note: I've taken the mean over a period of 20 secs, but the difference
> between hpet and dynticks is well inside the variability of the test.
>
> KVM
> hpet        2.2%
> dynticks    1.0%
>
> Hum... here the numbers jump a bit, but dynticks is always below hpet.
> The differences here are small, so I'll focus on the 1000Hz case.
>
> At 1000Hz:
>
> QEMU
> hpet        5.5%
> dynticks   11.7%
>
> KVM
> hpet        3.4%
> dynticks    7.3%
>
> No surprises here, you can see the additional 1k syscalls per second.

This is very surprising to me. The 6.2% difference for the qemu case translates to 62ms per second, or 62us per tick at 1000Hz. That's more than a hundred simple syscalls on modern processors. We shouldn't have to issue a hundred syscalls per guest clock tick.

The difference with kvm is smaller (just 3.9%), which is not easily explained as the time for the extra syscalls should be about the same. My guess is that guest behavior is different; with dynticks the guest does about twice as much work as with hpet.

Can you verify this by running

   strace -c -p `pgrep qemu` & sleep 10; pkill strace

for all 4 cases, and posting the results?

-- error compiling committee.c: too many arguments to function
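The 62us-per-tick figure in the reply above follows directly from the measured CPU percentages; the small helper below just spells out that arithmetic (1% of a CPU is 10 ms of CPU time per second).

```python
# The arithmetic behind "6.2% difference -> 62ms per second -> 62us per tick
# at 1000Hz": a percentage-point of host CPU is 10 ms of CPU time per second,
# spread across tick_hz guest ticks.

def overhead_per_tick_us(cpu_pct_dynticks, cpu_pct_hpet, tick_hz):
    extra_ms_per_sec = (cpu_pct_dynticks - cpu_pct_hpet) * 10  # 1% = 10 ms/s
    return extra_ms_per_sec * 1000 / tick_hz                   # -> us per tick

print(round(overhead_per_tick_us(11.7, 5.5, 1000), 1))  # 62.0 us per tick
```

Running the same numbers for the KVM case (7.3% vs 3.4%) gives 39 us per tick, the "just 3.9%" gap mentioned above.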
[Qemu-devel] Current state of SMP?
Hi, I'd like to emulate an SMP x86 system using QEMU, running on Linux-x86. For a first test, I tried to boot Debian Live with it using the following command line: qemu -cdrom debian-live-sid-i386-standard.iso -smp 2 It gets to the bootloader (LILO), but the keyboard is not responding. Adding -no-acpi to the command line does not solve the problem. I tried both the current release version 0.9.0 and the current CVS version of today. Does anybody know what's going on? Is SMP support working at the moment? Thanks! Simon
[Qemu-devel] Re: ANN: DetaolB v0.5 is released
On 8/21/07, Rob Landley <[EMAIL PROTECTED]> wrote:
> On Friday 17 August 2007 3:23:04 pm Christian MICHON wrote:
> > DetaolB aimed to be a "much-less-than-a-floppy" x86 linux live distro.
> > Now, it's evolving more into "a-la-slax" type of distro.
>
> As did Puppy Linux before it.
>
> Rob

I actually intend to keep uclibc at least, and later on propose, like slax, different types of iso/editions.

If your tcc fork can compile the kernel, uclibc, I'll gladly remove binutils and gcc :)

-- Christian
-- http://detaolb.sourceforge.net/, a linux distribution for Qemu
[Qemu-devel] Re: ANN: DetaolB v0.5 is released
On Friday 17 August 2007 3:23:04 pm Christian MICHON wrote:
> DetaolB aimed to be a "much-less-than-a-floppy" x86 linux live distro.
> Now, it's evolving more into "a-la-slax" type of distro.

As did Puppy Linux before it.

Rob

-- "One of my most productive days was throwing away 1000 lines of code." - Ken Thompson.