On 2012-09-19 16:43, Abel Gordon wrote:
>
>> It's imperfect as you need to dedicate a core to pure guest-mode load
>> and cannot run userspace on that core (cannot walk through
>> userspace-based device models e.g.).
>
> That's not correct.
> For the evaluation, we dedicated a core to each guest to maximize
> performance, but this is not a requirement. You can over-commit cores
> and share them across multiple VMs. In that case you share the cycles
> among all the VMs, and the performance per VM will be degraded. In
> addition, if you share a core, an interrupt for a guest may be raised
> while the corresponding VCPU is not running; whether most interrupts
> arrive while the VCPU is running depends on the workload.
>
>> and cannot run userspace on that core (cannot walk through
>> userspace-based device models e.g.).
>
> That's not correct.
> We can run any thread (including user-space threads) on the same core
> on which we run the VMs (VCPU threads). In fact, we did that for the
> ELI evaluation: we shared a single core between the VCPU thread and
> ALL the host threads (including the qemu I/O thread).
>
>> And it requires that magic BAR to
>> map the shadow IDT into the guest (hmm, I think Hitachi avoided this).
>
> Hitachi uses a different technique which seems to have the two
> disadvantages you mentioned previously; ELI does not have them.
>
>> It's invasive as it has to change Linux to maintain those isolated slave
>> CPUs. That is, of course, based on code that was published by Hitachi.
>> Yours may differ but will still have to solve the same problems.
>
> ELI does not require isolated/slave CPUs.
OK. Show patches.

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SDP-DE
Corporate Competence Center Embedded Linux