Hello,
On Wed, Jun 17, 2020 at 12:17:58PM +0200, Albretch Mueller wrote:
> also, if in order to use RAID 10 you need 4 drives
Linux mdadm can do RAID-10 with two or more devices (the count doesn't
have to be even, either).
> (but the dollar per Gb is approaching $0.02) and you get 1.5
> faster performance [...]
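The two-device case mentioned above can be sketched with mdadm itself. This is a hypothetical setup fragment, not from the thread: it needs root and real block devices, and /dev/sdX1, /dev/sdY1, /dev/sdZ1 are placeholder names.

```shell
# Hypothetical: a 2-device mdadm RAID-10 (md's "near-2" layout,
# which behaves much like a mirror).
mdadm --create /dev/md0 --level=10 --raid-devices=2 /dev/sdX1 /dev/sdY1

# An odd device count is also accepted, e.g. three:
# mdadm --create /dev/md0 --level=10 --raid-devices=3 \
#     /dev/sdX1 /dev/sdY1 /dev/sdZ1

cat /proc/mdstat   # shows the array once it is assembling
```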
On Wed, Jun 17, 2020 at 11:45:53PM +0300, Reco wrote:
Long story short, if you need a primitive I/O benchmark, you're better
with both dsync and nocache.
Not unless that's your actual workload, IMO. Almost nothing does sync
i/o; simply using conv=fdatasync to make sure that the cache is flushed
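The difference between the variants under discussion can be sketched with dd itself; the scratch file and sizes below are arbitrary stand-ins, not from the thread:

```shell
# Scratch file standing in for the device under test (hypothetical).
TARGET=$(mktemp)

# Cache-only: dd may return once the data sits in the page cache,
# so the reported rate largely measures RAM, not the disk.
dd if=/dev/zero of="$TARGET" bs=1M count=8 2>/dev/null

# conv=fdatasync: a single fdatasync() before dd exits, so the
# elapsed time includes one flush of the cache to the device.
dd if=/dev/zero of="$TARGET" bs=1M count=8 conv=fdatasync 2>/dev/null

# oflag=dsync: O_DSYNC on the output, i.e. a synchronous write per
# block -- a worst case, and rarely how real applications write.
dd if=/dev/zero of="$TARGET" bs=1M count=8 oflag=dsync 2>/dev/null

stat -c %s "$TARGET"   # 8388608 either way; only the timing differs
rm -f "$TARGET"
```

On a real disk the three reported rates typically differ by an order of magnitude or more; only the latter two say anything about the device.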
On Wed, Jun 17, 2020 at 11:45:53PM +0300, Reco wrote:
[...]
> Long story short, if you need a primitive I/O benchmark, you're better
> with both dsync and nocache.
Thanks for actually looking over dd's shoulder :-)
Cheers
-- t
Hi.
On Wed, Jun 17, 2020 at 10:33:51PM +0200, to...@tuxteam.de wrote:
> So to test disk write speed, 'dsync' seems the way to go. When dumping
> to a device, there are no metadata (am I right there?), so probably
> again you want 'dsync'.
>
> I don't know what 'nocache' would do for writing.
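For writing, GNU dd's nocache flag asks the kernel to drop the written pages from the page cache (via posix_fadvise), which mostly matters for making benchmark runs repeatable. A sketch of the coreutils manual's idioms, applied to a throwaway file rather than a real device:

```shell
F=$(mktemp)
dd if=/dev/zero of="$F" bs=1M count=4 conv=fdatasync 2>/dev/null

# Drop any cached pages of the whole file without copying data
# (count=0); notrunc keeps the contents intact.
dd of="$F" oflag=nocache conv=notrunc,fdatasync count=0 2>/dev/null

# The read-side equivalent: advise the kernel to discard the
# file's cached pages.
dd if="$F" iflag=nocache count=0 2>/dev/null

rm -f "$F"
```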
On Wed, Jun 17, 2020 at 01:23:41PM -0700, David Christensen wrote:
[...]
> I was referring to the 'fdatasync', 'fsync', 'dsync', 'sync', and
> 'nocache' options to dd(1). Given the terse manual page, and an
> unwillingness to crawl the dd(1) and/or kernel code, I can only
> guess at my understanding.
Hi.
On Wed, Jun 17, 2020 at 12:10:51PM -0700, David Christensen wrote:
> 2. AIUI dd(1) uses asynchronous (buffered) I/O unless told otherwise.
You seem to confuse asynchronous and cached I/O too.
From the Linux kernel's POV, *asynchronous* I/O is a pair of
io_submit/io_getevents syscalls.
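dd itself never issues asynchronous I/O in that sense: its default is plain cached (buffered) I/O, and the closest it gets to the uncached side is O_DIRECT. A sketch of that distinction at the dd level; the scratch file is hypothetical, and O_DIRECT is not supported on every filesystem (tmpfs, for one):

```shell
F=$(mktemp)
dd if=/dev/zero of="$F" bs=4k count=256 conv=fdatasync 2>/dev/null

# Default read: cached I/O -- data may come straight from the page cache.
dd if="$F" of=/dev/null bs=4k 2>/dev/null

# O_DIRECT read: bypasses the page cache entirely, but is still
# *synchronous*; asynchronous I/O would mean io_submit/io_getevents
# (or io_uring), which dd does not use at all.
dd if="$F" of=/dev/null bs=4k iflag=direct 2>/dev/null \
  || echo "O_DIRECT not supported on this filesystem"
rm -f "$F"
```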
> Your test dataset is too small and you aren't flushing the cache before
> exiting dd, so you are largely seeing the time it takes to write to cache,
> not to disk.
> But that gives the RAID10 system 220 IOPS, still nowhere near the 100,000
> IOPS of a single SSD.
> I suggest that you google a
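The 220 figure quoted above is roughly what back-of-the-envelope seek arithmetic predicts. The numbers below (7200 rpm, ~4.5 ms average seek, a 4-drive array) are generic assumptions for illustration, not measurements from the thread:

```shell
# Per-request latency of a hypothetical 7200 rpm HDD:
#   avg seek ~4.5 ms + avg rotational latency (half a revolution,
#   60000/7200/2 ~ 4.17 ms) ~ 8.7 ms  =>  ~115 random IOPS per drive.
per_drive=$(awk 'BEGIN { printf "%d", 1000 / (4.5 + 60000/7200/2) }')

# In a 4-drive RAID-10 every write lands on one mirror pair, so
# random-write throughput is roughly two drives' worth.
echo $((per_drive * 2))   # 230 -- the same order of magnitude as 220
```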
Albretch Mueller writes:
[...]
does dd actually hit the bare metal drive or is it just reaching the
disk's cache?
This is what I am consistently getting from my code doing intensive IO
on the RAM drive:
// __ write speed test
# time dd if=/dev/zero of="${_RAM_MNT}"/zero bs=4k count=10
[...]
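Note that bs=4k count=10 writes only 40 KiB, which fits in the page cache many times over and exits before anything is flushed. A variant that writes a less trivial amount and flushes before exiting (a temp file stands in for the "${_RAM_MNT}"/zero target above):

```shell
# 40 KiB (bs=4k count=10) only times the page cache.
# A less trivial run: 64 MiB, flushed before dd exits.
OUT=$(mktemp)   # stand-in for the RAM-drive target
dd if=/dev/zero of="$OUT" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$OUT"
```

On a tmpfs mount conv=fdatasync is close to a no-op, but it keeps the methodology honest when the same command is later pointed at a real disk.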
also, if in order to use RAID 10 you need 4 drives (but the dollar
per GB is approaching $0.02) and you get 1.5× faster performance, what
is the economy of "buying more RAM" if it is so much more expensive?
Any comparison on HDD, SSD and RAM including pros and cons which is
worth reading?
lbrtch
HDDs have their internal caching mechanism and I have heard that the
Linux kernel uses RAM very efficiently, but to my understanding RAM
being only 3-4 times faster doesn't make much sense, so I may be doing
or understanding something not entirely right.
does dd actually hit the bare metal drive or is it just reaching the
disk's cache?