On Wed, 21 Mar 2018 06:41:46 +0700
Robert Elz wrote:
> Date: Tue, 20 Mar 2018 14:18:31 +0000
> From: Chavdar Ivanov
> Message-ID:
>
>
> | Anyway, nothing so far explains Martin's results being just a tad
> | below those of Linux and everyone else getting speeds 5-6 times slower.
Date: Tue, 20 Mar 2018 14:18:31 +0000
From: Chavdar Ivanov
Message-ID:
| Anyway, nothing so far explains Martin's results being just a tad below
| those of Linux and everyone else getting speeds 5-6 times slower.
What are the file system parameters? It is easy
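(One easy way to check the file system parameters on NetBSD, assuming the disk under test is wd0a, is to dump the ffs superblock and the current mount options:

dumpfs /dev/rwd0a | head -n 20
mount -v | grep wd0a

The block and fragment sizes reported there are the kind of parameters that can move sequential write numbers significantly.)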
On Tue, 20 Mar 2018, 12:30 Sad Clouds wrote:
> Hello, a few comments on your tests:
>
> - Reading from /dev/urandom could be a bottleneck, depending on how that
> random data is generated. Best to avoid this; if you need random data, try
> to use a bench tool that can quickly generate dynamic random data.
Hello, a few comments on your tests:
- Reading from /dev/urandom could be a bottleneck, depending on how that
random data is generated. Best to avoid this; if you need random data, try
to use a bench tool that can quickly generate dynamic random data (a
workaround is sketched below).
- Writing to ZFS can give all sorts of result
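(A simple workaround sketch for the /dev/urandom point above: generate the random payload once, untimed, then reuse it for the timed run so the generator stays out of the measurement. The file name rand.bin is made up for this example:

dd if=/dev/urandom of=rand.bin bs=1m count=1000
dd if=rand.bin of=out bs=1m count=1000

bs=1m is the NetBSD dd spelling; GNU dd wants bs=1M.)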
On 2018-03-20 00:05, m...@netbsd.org wrote:
On Mon, Mar 19, 2018 at 02:58:06PM +0100, Fekete Zoltán wrote:
> Is there any setting that influences the test which I didn't apply?
yes, we need to figure out how to make GNU dd behave the same.
It has different defaults.
Ok, I installed a precompiled
Well, testing with a file of zeroes is not a very good benchmark - see the
result for OmniOS/CE below:
➜ xci dd if=/dev/zero of=out bs=1000000 count=1000
1000+0 records in
1000+0 records out
1000000000 bytes transferred in 0.685792 secs (1458168149 bytes/sec)
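(That output works out to 1000000000 / 0.685792 ≈ 1.46 GB/s, far beyond what a single virtual disk can sustain, so the zeroes are evidently being compressed or absorbed by a cache rather than written.)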
So I decided to switch to p
I ran my tests with our dd and also with /usr/pkg/gnu/bin/dd, supposedly
the same as, or similar enough to, the one in CentOS; there was no
significant difference between the two. The fastest figure came on the
system disk when it was attached to an IDE controller with the ICH6
chipset: about 180MB/sec.
All
On Mon, 19 Mar 2018 22:44:44 +0000
Chavdar Ivanov wrote:
> I managed to get mine to about 180MB/sec; host I/O cache didn't make
> much difference, but I switched to ICH9 chipset and ICH6 SATA
> controller... Hold on, I just realised my root device is on an IDE
> controller, not SATA, which must have been the default setting for
> NetBSD in VirtualBox.
On Mon, 19 Mar 2018 16:17:33 +0100
Martin Husemann wrote:
> On Mon, Mar 19, 2018 at 12:06:44PM +0000, Sad Clouds wrote:
> > Hello, which virtual controller do you use in VirtualBox and do you
> > have "Use Host I/O Cache" selected on that controller? If yes, then
> > you need to disable it before running I/O tests, otherwise it caches
> > loads of data in RAM instead of sending it to disk.
I managed to get mine to about 180MB/sec; host I/O cache didn't make much
difference, but I switched to ICH9 chipset and ICH6 SATA controller... Hold
on, I just realised my root device is on an IDE controller, not SATA, which
must have been the default setting for NetBSD in VirtualBox. I'll check
u
On Mon, Mar 19, 2018 at 02:58:06PM +0100, Fekete Zoltán wrote:
> Is there any setting that influences the test which I didn't apply?
yes, we need to figure out how to make GNU dd behave the same.
It has different defaults.
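(One difference worth ruling out, as an assumption rather than a confirmed cause: whether the flush is included in the timing. GNU dd can be forced to fsync(2) the output file before it reports its rate:

/usr/pkg/gnu/bin/dd if=/dev/zero of=out bs=1M count=1000 conv=fsync

Note also that GNU dd spells the block size bs=1M, whereas the native dd uses bs=1m.)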
On Mon, Mar 19, 2018 at 12:06:44PM +0000, Sad Clouds wrote:
> Hello, which virtual controller do you use in VirtualBox and do you have
> "Use Host I/O Cache" selected on that controller? If yes, then you need to
> disable it before running I/O tests, otherwise it caches loads of data in
> RAM instead of sending it to disk.
On 2018-03-19 13:06, Sad Clouds wrote:
Hello, which virtual controller do you use in VirtualBox and do you
have "Use Host I/O Cache" selected on that controller? If yes, then you
need to disable it before running I/O tests, otherwise it caches loads
of data in RAM instead of sending it to disk.
Hello, which virtual controller do you use in VirtualBox and do you have
"Use Host I/O Cache" selected on that controller? If yes, then you need to
disable it before running I/O tests, otherwise it caches loads of data in
RAM instead of sending it to disk.
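(With the VM powered off, that setting can be flipped per controller from the host; "netbsd-vm" and "SATA" below are placeholder names for your actual VM and controller:

VBoxManage storagectl "netbsd-vm" --name "SATA" --hostiocache off

Then re-run the dd test.)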
On Mon, Mar 19, 2018 at 8:59 AM, Martin Husemann wrote:
On Mon, Mar 19, 2018 at 08:54:12AM +0000, Chavdar Ivanov wrote:
> I'd also be interested in your setup - on a W10 hosted VBox (latest) on a
> fast M.2 disk I get approximately 5 times slower values, on -current amd64,
> having disks attached to SATA, SAS and NVMe controllers (almost the same,
> the SAS one is a little slower than the rest, but nowhere near your figures
I'd also be interested in your setup - on a W10 hosted VBox (latest) on a
fast M.2 disk I get approximately 5 times slower values, on -current amd64,
having disks attached to SATA, SAS and NVMe controllers (almost the same,
the SAS one is a little slower than the rest, but nowhere near your
figures
On Sun, Mar 18, 2018 at 03:45:48PM +0000, Sad Clouds wrote:
> Hello, using 'log' or both 'async, log' does not improve things much,
> i.e. it's around 30-50 MBytes/sec:
>
> localhost# mount | grep wd0a
> /dev/wd0a on / type ffs (asynchronous, log, local)
>
> localhost# dd if=/dev/zero of=out bs=1m count=1000
On Sun, 18 Mar 2018 16:05:05 +0100
Martin Husemann wrote:
> On Sun, Mar 18, 2018 at 02:08:53PM +0000, Sad Clouds wrote:
> > Hello, I tend to use dd to estimate I/O throughput
> >
> > dd if=/dev/zero of=out bs=1m count=1000
>
> Ok, so it is about in-filesystem writes.
>
> Assuming you use ffs, you could test with the "log" or the "async"
> mount options, and with no special mount option.
On Sun, 18 Mar 2018 09:44:49 -0500
D'Arcy Cain wrote:
> On 03/18/2018 08:41 AM, Sad Clouds wrote:
> > Hello, are there known I/O performance issues with NetBSD on
> > VirtualBox? I've set up two similar VMs, one Linux, another one
> > NetBSD, both use a SATA virtual controller with one disk.
> >
>
On Sun, 18 Mar 2018 15:38:40 +0100
Kamil Rytarowski wrote:
> On 18.03.2018 14:41, Sad Clouds wrote:
> > Hello, are there known I/O performance issues with NetBSD on
> > VirtualBox? I've set up two similar VMs, one Linux, another one
> > NetBSD, both use a SATA virtual controller with one disk.
> >
On Sun, 18 Mar 2018 15:00:40 +0100
Martin Husemann wrote:
> On Sun, Mar 18, 2018 at 01:41:46PM +0000, Sad Clouds wrote:
> > Hello, are there known I/O performance issues with NetBSD on
> > VirtualBox? I've set up two similar VMs, one Linux, another one
> > NetBSD, both use a SATA virtual controller with one disk.
On Sun, Mar 18, 2018 at 02:08:53PM +0000, Sad Clouds wrote:
> Hello, I tend to use dd to estimate I/O throughput
>
> dd if=/dev/zero of=out bs=1m count=1000
Ok, so it is about in-filesystem writes.
Assuming you use ffs, you could test with the "log" or the "async"
mount options, and with no special mount option. Al
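(On NetBSD those options can be toggled on a live mount with mount -u; a sketch, assuming the test file lives on the root file system:

mount -u -o log /      # WAPBL metadata journaling
mount -u -o async /    # fully asynchronous, unsafe across a crash

Re-run the same dd test under each configuration to compare.)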
On 18.03.2018 15:41, Sad Clouds wrote:
> On Sun, 18 Mar 2018 15:38:40 +0100
> Kamil Rytarowski wrote:
>
>> On 18.03.2018 14:41, Sad Clouds wrote:
>>> Hello, are there known I/O performance issues with NetBSD on
>>> VirtualBox? I've set up two similar VMs, one Linux, another one
>>> NetBSD, both use a SATA virtual controller with one disk.
On 03/18/2018 08:41 AM, Sad Clouds wrote:
> Hello, are there known I/O performance issues with NetBSD on VirtualBox?
> I've set up two similar VMs, one Linux, another one NetBSD, both use
> a SATA virtual controller with one disk.
>
> Writing 1GB file sequentially:
> - Linux gives 425MB/sec,
So abou
On 18.03.2018 14:41, Sad Clouds wrote:
> Hello, are there known I/O performance issues with NetBSD on VirtualBox?
> I've set up two similar VMs, one Linux, another one NetBSD, both use
> a SATA virtual controller with one disk.
>
> Writing 1GB file sequentially:
> - Linux gives 425MB/sec,
> - NetBSD gives 27MB/sec.
On Sun, Mar 18, 2018 at 01:41:46PM +0000, Sad Clouds wrote:
> Hello, are there known I/O performance issues with NetBSD on VirtualBox?
> I've set up two similar VMs, one Linux, another one NetBSD, both use
> a SATA virtual controller with one disk.
>
> Writing 1GB file sequentially:
> - Linux gives 425MB/sec,
Hello, are there known I/O performance issues with NetBSD on VirtualBox?
I've set up two similar VMs, one Linux, another one NetBSD, both use
a SATA virtual controller with one disk.
Writing 1GB file sequentially:
- Linux gives 425MB/sec,
- NetBSD gives 27MB/sec.
Repeated this several times, and got
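(425 / 27 ≈ 15.7, i.e. the NetBSD guest here was roughly 15x slower than the Linux one on the same host, a much larger gap than the 5-6x others report elsewhere in the thread.)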