and ELI is now available on GitHub:
https://github.com/abelg/virtual_io_acceleration/commits/ibm-io-acceleration-3.9-github
Source-code contributors (in alphabetical order):
Nadav Amit nadav.a...@gmail.com
Muli Ben-Yehuda mu...@mulix.org
Abel Gordon ab...@il.ibm.com
Nadav Har'El n
Zhangleiqiang zhangleiqi...@huawei.com wrote on 08/04/2013 12:06:17 PM:
I think maybe Anthony is right. In the previous benchmarks, maybe the
non-dataplane setup already reached the physical disk's IOPS upper limit.
Yep, agreed. Try running the same benchmark on the host to see
what the bare-metal performance is.
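A minimal sketch of that sanity check in Python, assuming a Linux host and a throwaway block device; the device path, block size, and run time are placeholder assumptions, and a single-threaded queue-depth-1 loop only gives a rough lower bound on the disk's real IOPS ceiling:

    # Hypothetical bare-metal random-read probe; run on the host, not in a guest.
    # /dev/sdb, the 4 KiB block size, and the 10 s duration are assumptions.
    import mmap, os, random, time

    DEV, BLOCK, DURATION = "/dev/sdb", 4096, 10

    fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)   # O_DIRECT bypasses the page cache
    size = os.lseek(fd, 0, os.SEEK_END)
    buf = mmap.mmap(-1, BLOCK)                     # mmap gives the alignment O_DIRECT needs
    blocks = size // BLOCK

    ops, deadline = 0, time.monotonic() + DURATION
    while time.monotonic() < deadline:
        os.preadv(fd, [buf], random.randrange(blocks) * BLOCK)
        ops += 1
    os.close(fd)
    print(f"~{ops / DURATION:.0f} IOPS at queue depth 1")

If the guest numbers with and without dataplane both sit near the host figure, the disk, not the virtualization path, is the bottleneck.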
Zhangleiqiang zhangleiqi...@huawei.com wrote on 08/04/2013 02:13:50 PM:
I think running multiple benchmarks under the same conditions and
averaging the results will eliminate the side effects.
Calculating the average of multiple benchmarks may not solve the issue.
For example, if for the
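The concrete example is cut off in the archive, but the general point can be shown numerically: averaging repeated runs cancels random noise, while a systematic effect that shifts every run in the same direction survives the average intact (all numbers below are invented for illustration):

    # Averaging helps with noise, not with bias: both data sets have the same
    # spread, but the second is shifted by a constant "side effect".
    from statistics import mean, stdev

    noisy = [41800, 42100, 41950, 42050, 41900]   # IOPS, random noise only
    biased = [x - 5000 for x in noisy]            # same noise + constant penalty

    print(mean(noisy), stdev(noisy))    # ~41960 +/- ~119
    print(mean(biased), stdev(biased))  # ~36960 +/- ~119: still 5000 off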
qemu-devel-bounces+abelg=il.ibm@nongnu.org wrote on 07/04/2013 02:31:20 PM:
From: Zhangleiqiang zhangleiqi...@huawei.com
To: Stefan Hajnoczi stefa...@redhat.com,
Cc: Zhangleiqiang zhangleiqi...@huawei.com, Stefan Hajnoczi
stefa...@gmail.com, Luohao (brian) brian.luo...@huawei.com,
Zhangleiqiang zhangleiqi...@huawei.com wrote on 07/04/2013 04:34:45 PM:
Hi, Abel Gordon:
The host CPU configuration is as follows:
Physical CPUs: 2
Cores per physical CPU: 6
HT: enabled
(i.e., 2 sockets × 6 cores × 2 hyperthreads = 24 logical CPUs)
Following your advice, I have finished another benchmark which
Zhang Leiqiang (张磊强) leiqzh...@gmail.com wrote on 07/04/2013 07:10:24 PM:
Hi Abel, Stefan:
After giving the benchmarks and the idea of dataplane more
thought, I am still confused.
Please note that while I am familiar with the documentation and architecture
of dataplane, I didn't contribute to the
qemu-devel-bounces+abelg=il.ibm@nongnu.org wrote on 03/03/2013 11:35:27 AM:
Also, I wonder if you have time to do a presentation/discussion session
so we can get the ball rolling and expose more people to your
approach.
There is a weekly QEMU Community Call which we can use as the
Stefan Hajnoczi stefa...@gmail.com wrote on 01/03/2013 12:54:54 PM:
On Thu, Feb 28, 2013 at 08:20:08PM +0200, Abel Gordon wrote:
Stefan Hajnoczi stefa...@gmail.com wrote on 28/02/2013 04:43:04 PM:
I think extending and tuning the existing mechanisms is the way to go.
I don't see
Stefan Hajnoczi stefa...@gmail.com wrote on 28/02/2013 04:43:04 PM:
I see your point, but the shared process only needs access to
the virtio rings/buffers (not necessarily the entire memory of
all the guests), the network sockets, and the image files opened by
all the qemu user-space processes. So,
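A minimal sketch of that isolation argument, assuming a vhost-style setup in which the I/O process receives a file descriptor for guest RAM over a control channel; the helper and its parameters are hypothetical, but they show that only the pages backing one ring need to be mapped, not all of guest memory:

    # Map only the guest pages containing one virtio ring (hypothetical helper).
    import mmap

    PAGE = mmap.PAGESIZE

    def map_vring(guest_mem_fd, ring_gpa, ring_size):
        start = ring_gpa & ~(PAGE - 1)                         # round down to a page
        end = (ring_gpa + ring_size + PAGE - 1) & ~(PAGE - 1)  # round up to a page
        return mmap.mmap(guest_mem_fd, end - start, offset=start)

A bug in such a process could still corrupt the mapped rings and the sockets/images it holds open, but not arbitrary guest memory.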
Stefan Hajnoczi stefa...@gmail.com wrote on 26/02/2013 06:45:30 PM:
But is this significantly different from any other security bug in the
host, qemu, or kvm? If you perform the I/O virtualization in a separate
(non-qemu) process, you have a significantly smaller, self-contained and
Stefan Hajnoczi stefa...@gmail.com wrote on 21/02/2013 10:11:12 AM:
From: Stefan Hajnoczi stefa...@gmail.com
To: Loic Dachary l...@dachary.org,
Cc: qemu-devel qemu-devel@nongnu.org
Date: 21/02/2013 10:11 AM
Subject: Re: [Qemu-devel] Block I/O optimizations
Sent by:
Stefan Hajnoczi stefa...@gmail.com wrote on 25/02/2013 02:50:56 PM:
However, I am concerned that dataplane may not solve the scalability
problem because QEMU will still be running 1 thread
per VCPU plus 1 per virtual device to handle I/O for each VM.
Assuming we run N VMs with 1 VCPU
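Back-of-the-envelope arithmetic for that concern; the per-VM thread counts follow the 1-per-VCPU plus 1-per-device model described above, and the helper itself is purely illustrative:

    # Host threads grow linearly with the number of VMs under the per-VM model.
    def qemu_threads(vms, vcpus=1, virtio_devices=1):
        return vms * (vcpus + virtio_devices)   # 1 thread per VCPU + 1 per device

    for n in (10, 50, 100):
        print(n, "VMs ->", qemu_threads(n), "threads contending for host cores")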
GaoYi gaoyi...@gmail.com wrote on 20/09/2012 08:42:51 AM:
The CPU isolation in the Hitachi patches is just to improve the real-time
performance of the guest. The core of it, direct IRQ delivery, is
very similar to that of ELI.
For the ELI patches,
(1) Since the EOI part of ELI is already
It's imperfect, as you need to dedicate a core to a pure guest-mode load
and cannot run userspace on that core (e.g., you cannot go through
userspace-based device models).
That's not correct.
For the evaluation, we dedicated a core to each guest to maximize the
performance, but this
is not a
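As an aside, a hedged sketch of what dedicating a core meant operationally in that Linux-based evaluation; the thread id and core number are placeholders, and as the reply notes, ELI itself does not require this:

    # Pin a guest's VCPU thread to one core (placeholders, evaluation setup only).
    import os

    VCPU_TID = 12345        # hypothetical VCPU thread id on the host
    DEDICATED_CORE = 3      # hypothetical core reserved for this guest
    os.sched_setaffinity(VCPU_TID, {DEDICATED_CORE})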