[Expired for QEMU because there has been no activity for 60 days.]
** Changed in: qemu
Status: Incomplete => Expired
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/965867
Title:
9p virtual file system on qemu slow
[Expired for qemu-kvm (Ubuntu) because there has been no activity for 60
days.]
** Changed in: qemu-kvm (Ubuntu)
Status: Incomplete => Expired
** Changed in: qemu-kvm (Ubuntu)
Status: Confirmed => Incomplete
Title:
9p virtual file system on qemu slow
Status in QEMU:
Incomplete
Can you still reproduce this problem with the latest version of QEMU
(currently version 2.9.0)?
** Changed in: qemu
Status: New => Incomplete
On 04/26/2012 07:25 AM, M. Mohan Kumar wrote:
Hi Max,
Could you try passing msize=262144 for 9p mount point and post the
results?
Indeed the default value of 4096 for msize is unreasonably low.
As I wrote in January on this mailing list: 128k would be a much
more appropriate default value - p
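For reference, the mount invocation being suggested would look roughly like the following. This is a sketch: the share tag `hostshare` and the mount point `/srv/shared` are placeholders, not taken from the bug report.

```shell
# Mount a virtio 9p export with a larger msize.
# "hostshare" is the mount_tag configured on the QEMU command line
# (-virtfs or -device virtio-9p-pci,mount_tag=hostshare,...).
mount -t 9p -o trans=virtio,version=9p2000.L,msize=262144 hostshare /srv/shared

# Or persistently via /etc/fstab:
# hostshare  /srv/shared  9p  trans=virtio,version=9p2000.L,msize=262144  0  0
```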
Hi Mohan,
this parameter provides a significant improvement in big-file access/write:
VirtFS on /srv/shared type 9p (rw,trans=virtio,version=9p2000.L,msize=262144)
$ dd if=/dev/zero of=test count=10 bs=
Hi Max,
Could you try passing msize=262144 for 9p mount point and post the
results?
Host:
[root@llm116 media]# ls -lhas file
1.1G -rw-r--r-- 1 root root 1.0G Apr 26 11:05 file
[root@llm116 media]# dd if=/dev/zero of=file bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied
>>> Can you try with security_model=passthrough?
It provides the same results, see below:
$ dd if=/dev/zero of=test count=10
10+0 records in
10+0 records out
5120 bytes (51 MB) copied, 19.8581 s, 2.6 MB/s
$ dd if=/dev/zero of=test count=10 bs=16384
10+0 records in
10+0 records out
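As a sanity check on the dd numbers in this thread: the total transferred is simply record count times block size, and throughput is bytes over elapsed time. A tiny illustrative helper (not from the thread) makes the arithmetic explicit:

```python
def dd_bytes(count: int, bs: int = 512) -> int:
    """Total bytes dd transfers: record count times block size (dd's default bs is 512)."""
    return count * bs

def throughput_mb_s(nbytes: int, seconds: float) -> float:
    """Throughput in MB/s, using dd's decimal megabytes (1 MB = 10**6 bytes)."""
    return nbytes / seconds / 1e6

# With the default 512-byte blocks, count=10 moves only 5120 bytes --
# far too little data for a meaningful bandwidth figure.
print(dd_bytes(10))          # 5120
print(dd_bytes(10, 16384))   # 163840
```

This also shows why the count=10 runs above are mostly useful for comparing relative latency, not sustained bandwidth.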
One possible problem could be the block size. In this case I am using
ZFS with a raidz array of 4+1 drives. Each drive has a 4 KB block, so the
optimal block size is 16384 bytes. By optimizing the block size it is
possible to improve performance tenfold, but 9p consistently performs
ten times worse than native writes
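The arithmetic behind that 16384-byte figure: a raidz 4+1 vdev stripes data across 4 data drives (the fifth holds parity), so a full-width write is 4 x 4096 bytes. A one-line sketch, under the assumption that only the data (non-parity) disks count:

```python
def raidz_full_stripe_bytes(data_disks: int, sector_bytes: int) -> int:
    """Full-stripe write size: number of data (non-parity) disks
    times the per-disk sector size."""
    return data_disks * sector_bytes

# raidz 4+1: five drives, one of which is parity -> 4 data disks, 4 KiB sectors
print(raidz_full_stripe_bytes(4, 4096))  # 16384
```

Writes smaller than a full stripe force read-modify-write cycles on the array, which is why the reporter sees roughly a tenfold swing from block size alone.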
Can you try with security_model=passthrough?
Title:
9p virtual file system on qemu slow
Status in QEMU:
New
Status in “qemu-kvm” package in Ubuntu:
Thanks, Max. Marked as affecting upstream QEMU per the last comment.
** Changed in: qemu-kvm (Ubuntu)
Status: Incomplete => Confirmed
** Also affects: qemu
Importance: Undecided
Status: New