Re: Triggering non-integrity writeback from userspace

2015-10-29 Thread Dave Chinner
On Thu, Oct 29, 2015 at 05:23:56PM +0100, Andres Freund wrote:
> On 2015-10-29 12:54:22 +1100, Dave Chinner wrote:
> > On Thu, Oct 29, 2015 at 12:23:12AM +0100, Andres Freund wrote:
> > > By calling sync_file_range() over small ranges of pages shortly after
> > > they've been written we make it unlikely (but still possible) that much
> > > data has to be flushed at fsync() time.
> > 
> > Right, but you still need the fsync call, whereas with a async fsync
> > call you don't - when you gather the completion, no further action
> > needs to be taken on that dirty range.
> 
> I assume that the actual IOs issued by the async fsync and a plain fsync
> would be pretty similar. So the problem that an fsync of large amounts
> of dirty data causes latency increases for other issuers of IO wouldn't
> be gone, no?

Yes, they'd be the same if the async operation is not range limited.

> > > At the moment using fdatasync() instead of fsync() is a considerable
> > > performance advantage... If I understand the above proposal correctly,
> > > it'd allow specifying ranges, is that right?
> > 
> > Well, the patch I sent doesn't do ranges, but it could easily be
> > passed in as the iocb has offset/len parameters that are used by
> > IOCB_CMD_PREAD/PWRITE.
> 
> That'd be cool. Then we could issue those for asynchronous transaction
> commits, and have more wal writes concurrently in progress by the
> background wal writer.

Updated patch that allows ranged aio fsync below. In the
application, do this for a ranged fsync:

io_prep_fsync(iocb, fd);
iocb->u.c.offset = offset;  /* start of range */
iocb->u.c.nbytes = len; /* size (in bytes) to sync */
error = io_submit(ctx, 1, &iocb);
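
A self-contained sketch of driving this end to end might look like the
following (a hedged illustration, not part of the patch; it assumes the
patched kernel above and that io_prep_fsync() zeroes the rest of the
iocb so only the fields set below matter):

/*
 * Hedged sketch: submit one ranged aio fsync and wait for its
 * completion.  Build with -laio.
 */
#include <errno.h>
#include <libaio.h>

static int ranged_aio_fsync(io_context_t ctx, int fd,
                            long long offset, long long len)
{
        struct iocb iocb, *iocbp = &iocb;
        struct io_event event;
        int ret;

        io_prep_fsync(&iocb, fd);
        iocb.u.c.offset = offset;       /* start of range */
        iocb.u.c.nbytes = len;          /* bytes to sync; 0 = whole file */

        ret = io_submit(ctx, 1, &iocbp);
        if (ret != 1)
                return ret < 0 ? ret : -EIO;

        /* reap the completion; event.res carries the fsync result */
        ret = io_getevents(ctx, 1, 1, &event, NULL);
        if (ret != 1)
                return ret < 0 ? ret : -EIO;
        return (int)event.res;
}

In a real application the submission and the io_getevents() reap would
of course be decoupled, which is the whole point of making the fsync
asynchronous.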

> I'll try the patch from 20151028232641.GS8773@dastard and see whether I
> can make it advantageous for throughput (for WAL flushing, not the
> checkpointer process).  Wish I had a better storage system; my guess is
> it'll be more advantageous there. We'll see.

A $100 SATA ssd is all you need to get the IOPS rates in the
thousands for these sorts of tests...

Cheers,

Dave.
-- 
Dave Chinner
da...@fromorbit.com

aio: wire up generic aio_fsync method

From: Dave Chinner 

We've had plenty of requests for an asynchronous fsync over the past
few years, and we've got the infrastructure there to do it. But
nobody has wired it up to test it. The common request we get from
userspace storage applications is to do a post-write pass over a set
of files that were just written (i.e. bulk background fsync) for
point-in-time checkpointing or flushing purposes.

So, just to see if I could brute force an effective implementation,
wire up aio_fsync, add a workqueue and push all the fsync calls off
to the workqueue. The workqueue will allow parallel dispatch, switch
execution if a fsync blocks for any reason, etc. Brute force, and
very effective.

This also allows us to do ranged f(data)sync calls. The libaio
io_prep_fsync() function zeros the unused sections of the iocb
passed to the kernel, so the offset/byte count in the iocb should
always be zero. Hence if we get a non-zero byte count, we can treat
it as a ranged operation. This allows applications to commit ranges
of a file to stable storage, rather than just the entire file. To do
this, we need to be able to pass the length to ->aio_fsync(), but
this is trivial to change because no subsystem currently implements
this method.

So, I hacked up fs_mark to enable fsync via the libaio io_fsync()
interface to run some tests. The quick test is:

- write 1 4k files into the cache
- run a post write open-fsync-close pass (sync mode 5)
- run 5 iterations
- run a single thread, then 4 threads.
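
For reference, the "sync mode 5" pass above is conceptually nothing more
than the loop below (a paraphrase for illustration, not the fs_mark
source; filenames[] is a placeholder):

/* Post-write open-fsync-close pass over the files just written. */
#include <fcntl.h>
#include <unistd.h>

static void fsync_pass(char **filenames, int nfiles)
{
        int i;

        for (i = 0; i < nfiles; i++) {
                int fd = open(filenames[i], O_RDONLY);

                if (fd < 0)
                        continue;       /* skip files we cannot open */
                fsync(fd);              /* data + metadata + cache flush */
                close(fd);
        }
}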

First I ran it on a 500TB sparse filesystem on a SSD.

FSUse%  Count  Size  Files/sec  App Overhead
     0      1  4096      507.5        184435
     0      2  4096      527.2        184815
     0      3  4096      530.4        183798
     0      4  4096      531.0        189431
     0      5  4096      554.2        181557

real    1m34.548s
user    0m0.819s
sys     0m10.596s

Runs at around 500 log forces/s resulting in 500 log writes/s
giving a sustained IO load of about 1200 IOPS.

Using io_fsync():

FSUse%  Count  Size  Files/sec  App Overhead
     0      1  4096     4124.1        151359
     0      2  4096     5506.4        112704
     0      3  4096     7347.1         97967
     0      4  4096     7110.1         97089
     0      5  4096     7075.3         94942

real    0m8.554s
user    0m0.350s
sys     0m3.684s

Runs at around 7,000 log forces/s, which are mostly aggregated down
to around 700 log writes/s, for a total sustained load of ~8000 IOPS.
The parallel dispatch of fsync operations allows the log to
aggregate them effectively, reducing journal IO by a factor of 10.

Re: Triggering non-integrity writeback from userspace

2015-10-29 Thread Andres Freund
On 2015-10-29 12:54:22 +1100, Dave Chinner wrote:
> On Thu, Oct 29, 2015 at 12:23:12AM +0100, Andres Freund wrote:
> > The blocking/latency of the fsync doesn't actually matter at all *for
> > this callsite*. It's called from a dedicated background process - if
> > it's slowed down by a couple seconds it doesn't matter much.
> > The problem is that if you have a couple gigabytes of dirty data being
> > fsync()ed at once, latency for concurrent reads and writes often goes
> > absolutely apeshit. And those concurrent reads and writes might
> > actually be latency sensitive.
> 
> Right, but my point is with an async fsync/fdatasync you don't need
> this background process - you can just trickle out async fdatasync
> calls instead of trickling out calls to sync_file_range().

We don't want to do the checkpointing from normal backends that process
user queries, so there has to be a background process anyway. Depending
on settings we only do checkpoints at 5 to 60 minute intervals
(spread over that interval).


> > By calling sync_file_range() over small ranges of pages shortly after
> > they've been written we make it unlikely (but still possible) that much
> > data has to be flushed at fsync() time.
> 
> Right, but you still need the fsync call, whereas with a async fsync
> call you don't - when you gather the completion, no further action
> needs to be taken on that dirty range.

I assume that the actual IOs issued by the async fsync and a plain fsync
would be pretty similar. So the problem that an fsync of large amounts
of dirty data causes latency increases for other issuers of IO wouldn't
be gone, no?


> > At the moment using fdatasync() instead of fsync() is a considerable
> > performance advantage... If I understand the above proposal correctly,
> > it'd allow specifying ranges, is that right?
> 
> Well, the patch I sent doesn't do ranges, but it could easily be
> passed in as the iocb has offset/len parameters that are used by
> IOCB_CMD_PREAD/PWRITE.

That'd be cool. Then we could issue those for asynchronous transaction
commits, and have more wal writes concurrently in progress by the
background wal writer.



I'll try the patch from 20151028232641.GS8773@dastard and see whether I
can make it advantageous for throughput (for WAL flushing, not the
checkpointer process).  Wish I had a better storage system; my guess is
it'll be more advantageous there. We'll see.


Greetings,

Andres Freund


Re: Triggering non-integrity writeback from userspace

2015-10-28 Thread Dave Chinner
On Thu, Oct 29, 2015 at 12:23:12AM +0100, Andres Freund wrote:
> Hi,
> 
> On 2015-10-29 07:48:34 +1100, Dave Chinner wrote:
> > > The idea of using SYNC_FILE_RANGE_WRITE beforehand is that
> > > the fsync() will only have to do very little work. The language in
> > > sync_file_range(2) doesn't inspire enough confidence for using it as an
> > > actual integrity operation :/
> > 
> > So really you're trying to minimise the blocking/latency of fsync()?
> 
> The blocking/latency of the fsync doesn't actually matter at all *for
> this callsite*. It's called from a dedicated background process - if
> it's slowed down by a couple seconds it doesn't matter much.
> The problem is that if you have a couple gigabytes of dirty data being
> fsync()ed at once, latency for concurrent reads and writes often goes
> absolutely apeshit. And those concurrent reads and writes might
> actually be latency sensitive.

Right, but my point is with an async fsync/fdatasync you don't need
this background process - you can just trickle out async fdatasync
calls instead of trickling out calls to sync_file_range().

> By calling sync_file_range() over small ranges of pages shortly after
> they've been written we make it unlikely (but still possible) that much
> data has to be flushed at fsync() time.

Right, but you still need the fsync call, whereas with a async fsync
call you don't - when you gather the completion, no further action
needs to be taken on that dirty range.

> At the moment using fdatasync() instead of fsync() is a considerable
> performance advantage... If I understand the above proposal correctly,
> it'd allow specifying ranges, is that right?

Well, the patch I sent doesn't do ranges, but it could easily be
passed in as the iocb has offset/len parameters that are used by
IOCB_CMD_PREAD/PWRITE. io_prep_fsync/io_fsync both memset the iocb
to zero, so if we pass in a non-zero length, we could treat it as a
ranged f(d)sync quite easily.
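
For reference, libaio's helper is essentially just the following
(paraphrased from libaio.h; the exact body may differ slightly between
versions):

static inline void io_prep_fsync(struct iocb *iocb, int fd)
{
        /* zeroing the whole iocb is what leaves offset/nbytes at 0 today */
        memset(iocb, 0, sizeof(*iocb));
        iocb->aio_fildes = fd;
        iocb->aio_lio_opcode = IO_CMD_FSYNC;
}

so a non-zero u.c.nbytes can only come from an application that set it
deliberately after the prep call.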

> There'll be some concern about portability around this - issuing
> sync_file_range() every now and then isn't particularly invasive. Using
> aio might end up being more invasive, not sure.

It's still a non-portable/linux only solution, because it is using
the linux native aio interface, not the glibc one...

Cheers,

Dave.
-- 
Dave Chinner
da...@fromorbit.com


Re: Triggering non-integrity writeback from userspace

2015-10-28 Thread Dave Chinner
On Thu, Oct 29, 2015 at 07:48:34AM +1100, Dave Chinner wrote:
> Hi Andres,
> 
> On Wed, Oct 28, 2015 at 10:27:52AM +0100, Andres Freund wrote:
> > On 2015-10-25 08:39:12 +1100, Dave Chinner wrote:
> 
> > > Data integrity operations require related file metadata (e.g. block
> > > allocation transactions) to be forced to the journal/disk, and a
> > > device cache flush issued to ensure the data is on stable storage.
> > > SYNC_FILE_RANGE_WRITE does neither of these things, and hence while
> > > the IO might be the same pattern as a data integrity operation, it
> > > does not provide such guarantees.
> > 
> > Which is desired here - the actual integrity is still going to be done
> > via fsync().
> 
> OK, so you require data integrity, but
> 
> > The idea of using SYNC_FILE_RANGE_WRITE beforehand is that
> > the fsync() will only have to do very little work. The language in
> > sync_file_range(2) doesn't inspire enough confidence for using it as an
> > actual integrity operation :/
> 
> So really you're trying to minimise the blocking/latency of fsync()?
> 
> > > You don't want to do writeback from the syscall, right? i.e. you'd
> > > like to expire the inode behind the fd, and schedule background
> > > writeback to run on it immediately?
> > 
> > Yes, that's exactly what we want. Blocking if a process has done too
> > many writes is fine tho.
> 
> OK, so it's really the latency of the fsync() operation that is what
> you are trying to avoid? I've been meaning to get back to a generic
> implementation of an aio fsync operation:
> 
> http://oss.sgi.com/archives/xfs/2014-06/msg00214.html
> 
> Would that be a better approach to solving your need for a
> non-blocking data integrity flush of a file?

Which was relatively trivial to do. Numbers below come from XFS; I
smoke tested ext4 and it kinda worked, but behaviour was very
unpredictable and maxed out at about 25000 IOPS, with max
performance being at 4 threads @ an average of 2 files/s...

Cheers,

Dave.
-- 
Dave Chinner
da...@fromorbit.com

[RFC] aio: wire up generic aio_fsync method

From: Dave Chinner 

We've had plenty of requests for an asynchronous fsync over the past
few years, and we've got the infrastructure there to do it. But
nobody has wired it up to test it. The common request we get from
userspace storage applications is to do a post-write pass over a set
of files that were just written (i.e. bulk background fsync) for
point-in-time checkpointing or flushing purposes.

So, just to see if I could brute force an effective implementation,
wire up aio_fsync, add a workqueue and push all the fsync calls off
to the workqueue. The workqueue will allow parallel dispatch, switch
execution if a fsync blocks for any reason, etc. Brute force, and
very effective.

So, I hacked up fs_mark to enable fsync via the libaio io_fsync()
interface to run some tests. The quick test is:

- write 1 4k files into the cache
- run a post write open-fsync-close pass (sync mode 5)
- run 5 iterations
- run a single thread, then 4 threads.

First I ran it on a 500TB sparse filesystem on a SSD.

FSUse%  Count  Size  Files/sec  App Overhead
     0      1  4096      507.5        184435
     0      2  4096      527.2        184815
     0      3  4096      530.4        183798
     0      4  4096      531.0        189431
     0      5  4096      554.2        181557

real    1m34.548s
user    0m0.819s
sys     0m10.596s

Runs at around 500 log forces/s resulting in 500 log writes/s
giving a sustained IO load of about 1200 IOPS.

Using io_fsync():

FSUse%  Count  Size  Files/sec  App Overhead
     0      1  4096     4124.1        151359
     0      2  4096     5506.4        112704
     0      3  4096     7347.1         97967
     0      4  4096     7110.1         97089
     0      5  4096     7075.3         94942

real    0m8.554s
user    0m0.350s
sys     0m3.684s

Runs at around 7,000 log forces/s, which are mostly aggregated down
to around 700 log writes/s, for a total sustained load of ~8000 IOPS.
The parallel dispatch of fsync operations allows the log to
aggregate them effectively, reducing journal IO by a factor of 10.

Run the same workload, 4 threads at a time. Normal fsync:

FSUse%  Count  Size  Files/sec  App Overhead
     0      4  4096     2156.0        690185
     0      8  4096     1859.6        693849
     0     12  4096     1858.8        723889
     0     16  4096     1848.5        708657
     0     20  4096     1842.7        736587

Runs at ~2000 log forces/s, resulting in ~1000 log writes/s and
3,000 IOPS. We see the journal writes being aggregated, but nowhere
near the rate of the previous async fsync run.

Re: Triggering non-integrity writeback from userspace

2015-10-28 Thread Andres Freund
Hi,

On 2015-10-29 07:48:34 +1100, Dave Chinner wrote:
> > The idea of using SYNC_FILE_RANGE_WRITE beforehand is that
> > the fsync() will only have to do very little work. The language in
> > sync_file_range(2) doesn't inspire enough confidence for using it as an
> > actual integrity operation :/
> 
> So really you're trying to minimise the blocking/latency of fsync()?

The blocking/latency of the fsync doesn't actually matter at all *for
this callsite*. It's called from a dedicated background process - if
it's slowed down by a couple seconds it doesn't matter much.
The problem is that if you have a couple gigabytes of dirty data being
fsync()ed at once, latency for concurrent reads and writes often goes
absolutely apeshit. And those concurrent reads and writes might
actually be latency sensitive.

By calling sync_file_range() over small ranges of pages shortly after
they've been written we make it unlikely (but still possible) that much
data has to be flushed at fsync() time.
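
Roughly, the pattern we are moving to looks like the sketch below (a
simplified illustration; BLOCK_SIZE, FLUSH_BATCH and the single write
buffer are placeholders for what postgres actually does):

/*
 * Write blocks, kick off non-integrity writeback for each small batch
 * soon after, and rely on the final fsync() only to catch stragglers
 * and flush metadata/device caches.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

#define BLOCK_SIZE      8192
#define FLUSH_BATCH     32      /* blocks between sync_file_range() calls */

static void checkpoint_file(int fd, const char *buf, int nblocks)
{
        off_t batch_start = 0;
        int i;

        for (i = 0; i < nblocks; i++) {
                pwrite(fd, buf, BLOCK_SIZE, (off_t)i * BLOCK_SIZE);

                if ((i + 1) % FLUSH_BATCH == 0) {
                        /* start writeback of the batch, do not wait for it */
                        sync_file_range(fd, batch_start,
                                        (off_t)FLUSH_BATCH * BLOCK_SIZE,
                                        SYNC_FILE_RANGE_WRITE);
                        batch_start = (off_t)(i + 1) * BLOCK_SIZE;
                }
        }
        fsync(fd);      /* the actual integrity point */
}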


In case it's interesting: the relevant background process is the
"checkpointer" - it writes all dirty data from postgres' in-memory
shared buffer cache back to disk, then fsyncs all files that have been
touched since the last checkpoint (they might have been independently
flushed already). After that it can remove the old write-ahead-log/journal.


> > > You don't want to do writeback from the syscall, right? i.e. you'd
> > > like to expire the inode behind the fd, and schedule background
> > > writeback to run on it immediately?
> > 
> > Yes, that's exactly what we want. Blocking if a process has done too
> > many writes is fine tho.
> 
> OK, so it's really the latency of the fsync() operation that is what
> you are trying to avoid? I've been meaning to get back to a generic
> implementation of an aio fsync operation:
> 
> http://oss.sgi.com/archives/xfs/2014-06/msg00214.html
> 
> Would that be a better approach to solving your need for a
> non-blocking data integrity flush of a file?

So an async fsync() isn't all that interesting for the
checkpointer/the issue in this thread. But there's another process in
postgres where I could imagine it being useful. We have a "background"
process that regularly flushes the journal to disk. It currently uses
fdatasync() to do so for subsections of a preallocated/reused file. It
tries to sync the sections that will need to be flushed to disk in the
near future because a transaction commits.

I could imagine that it's good for throughput to issue multiple
asynchronous fsyncs in this background process. Might not be good for
latency sensitive workloads tho.
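
A hedged sketch of what that might look like with the ranged aio fsync
from this thread (wal_fd, the offsets/lengths and NR_RANGES are
hypothetical; this is not postgres code):

/*
 * Queue one ranged fsync per recently written WAL region and submit
 * them in a single io_submit() call, reaping completions afterwards.
 */
#include <libaio.h>

#define NR_RANGES 8

static int flush_wal_ranges(io_context_t ctx, int wal_fd,
                            const long long *offsets,
                            const long long *lens, int nr)
{
        struct iocb iocbs[NR_RANGES], *iocbps[NR_RANGES];
        struct io_event events[NR_RANGES];
        int i, ret;

        if (nr > NR_RANGES)
                nr = NR_RANGES;

        for (i = 0; i < nr; i++) {
                io_prep_fsync(&iocbs[i], wal_fd);
                iocbs[i].u.c.offset = offsets[i];
                iocbs[i].u.c.nbytes = lens[i];
                iocbps[i] = &iocbs[i];
        }

        ret = io_submit(ctx, nr, iocbps);       /* dispatch all ranges at once */
        if (ret < 0)
                return ret;

        /* reap what was submitted; could be overlapped with new work */
        return io_getevents(ctx, ret, ret, events, NULL);
}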

At the moment using fdatasync() instead of fsync() is a considerable
performance advantage... If I understand the above proposal correctly,
it'd allow specifying ranges, is that right?


There'll be some concern about portability around this - issuing
sync_file_range() every now and then isn't particularly invasive. Using
aio might end up being more invasive, not sure.

Greetings,

Andres Freund


Re: Triggering non-integrity writeback from userspace

2015-10-28 Thread Dave Chinner
Hi Andres,

On Wed, Oct 28, 2015 at 10:27:52AM +0100, Andres Freund wrote:
> On 2015-10-25 08:39:12 +1100, Dave Chinner wrote:

> > Data integrity operations require related file metadata (e.g. block
> > allocation transactions) to be forced to the journal/disk, and a
> > device cache flush issued to ensure the data is on stable storage.
> > SYNC_FILE_RANGE_WRITE does neither of these things, and hence while
> > the IO might be the same pattern as a data integrity operation, it
> > does not provide such guarantees.
> 
> Which is desired here - the actual integrity is still going to be done
> via fsync().

OK, so you require data integrity, but

> The idea of using SYNC_FILE_RANGE_WRITE beforehand is that
> the fsync() will only have to do very little work. The language in
> sync_file_range(2) doesn't inspire enough confidence for using it as an
> actual integrity operation :/

So really you're trying to minimise the blocking/latency of fsync()?

> > You don't want to do writeback from the syscall, right? i.e. you'd
> > like to expire the inode behind the fd, and schedule background
> > writeback to run on it immediately?
> 
> Yes, that's exactly what we want. Blocking if a process has done too
> many writes is fine tho.

OK, so it's really the latency of the fsync() operation that is what
you are trying to avoid? I've been meaning to get back to a generic
implementation of an aio fsync operation:

http://oss.sgi.com/archives/xfs/2014-06/msg00214.html

Would that be a better approach to solving your need for a
non-blocking data integrity flush of a file?

Cheers,

Dave.
-- 
Dave Chinner
da...@fromorbit.com


Re: Triggering non-integrity writeback from userspace

2015-10-28 Thread Andres Freund
Hi,

Thanks for looking into this.

On 2015-10-25 08:39:12 +1100, Dave Chinner wrote:
> WB_SYNC_ALL is simply a method of saying "writeback all dirty pages
> and don't skip any". That's part of a data integrity operation, but
> it's not what results in data integrity being provided. It may cause
> some latencies caused by blocking on locks or in the request queues,
> so that's what I'd be looking for.

It also means we'll wait for more:
int write_cache_pages(struct address_space *mapping,
                      struct writeback_control *wbc, writepage_t writepage,
                      void *data)
{
        ...
        if (wbc->sync_mode == WB_SYNC_ALL || wbc->tagged_writepages)
                tag = PAGECACHE_TAG_TOWRITE;
        else
                tag = PAGECACHE_TAG_DIRTY;
        ...
        if (PageWriteback(page)) {
                if (wbc->sync_mode != WB_SYNC_NONE)
                        wait_on_page_writeback(page);
                else
                        goto continue_unlock;
        }

> i.e. if the request queues are full, SYNC_FILE_RANGE_WRITE will
> block until all the IO it has been requested to write has been
> submitted to the request queues. Put simply: the IO is asynchronous
> in that we don't wait for completion, but the IO submission is still
> synchronous.

That's desirable in our case because there's a limit to how much
outstanding IO there is.

> Data integrity operations require related file metadata (e.g. block
> allocation transactions) to be forced to the journal/disk, and a
> device cache flush issued to ensure the data is on stable storage.
> SYNC_FILE_RANGE_WRITE does neither of these things, and hence while
> the IO might be the same pattern as a data integrity operation, it
> does not provide such guarantees.

Which is desired here - the actual integrity is still going to be done
via fsync(). The idea of using SYNC_FILE_RANGE_WRITE beforehand is that
the fsync() will only have to do very little work. The language in
sync_file_range(2) doesn't inspire enough confidence for using it as an
actual integrity operation :/

> > If I followed the code correctly - not a sure thing at all - that means
> > bios are submitted with WRITE_SYNC specified. Not really what's needed
> > in this case.
>
> That just allows the IO scheduler to classify them differently to
> bulk background writeback.

It also influences which writes are merged and which are not, at least
if I understand elv_rq_merge_ok() and the callbacks it calls..

> You don't want to do writeback from the syscall, right? i.e. you'd
> like to expire the inode behind the fd, and schedule background
> writeback to run on it immediately?

Yes, that's exactly what we want. Blocking if a process has done too
many writes is fine tho.

Greetings,

Andres Freund


Re: Triggering non-integrity writeback from userspace

2015-10-24 Thread Jan Kara
  Hi,

On Thu 22-10-15 15:15:55, Andres Freund wrote:
> postgres regularly has to checkpoint data to disk to be able to free
> data from its journal. We currently use buffered IO and that's not
> going to change short term.
> 
> In a busy database this checkpointing process can write out a lot of
> data. Currently that frequently leads to massive latency spikes
> (c.f. 20140326191113.gf9...@alap3.anarazel.de) for other processes doing
> IO. These happen either when the kernel starts writeback or when, at the
> end of the checkpoint, we issue an fsync() on the datafiles.
> 
> One odd issue there is that the kernel tends to do writeback in a very
> irregular manner. Even if we write data at a constant rate writeback
> very often happens in bulk - not a good idea for preserving
> interactivity.
> 
> What we're preparing to do now is to regularly issue
> sync_file_range(SYNC_FILE_RANGE_WRITE) on a few blocks shortly after
> we've written them to the OS. That way there's not too much dirty
> data in the page cache, so writeback won't cause latency spikes, and the
> fsync at the end doesn't have to write much if anything.
> 
> That improves things a lot.
> 
> But I still see latency spikes that shouldn't be there given the amount
> of IO. I'm wondering if that is related to the fact that
> SYNC_FILE_RANGE_WRITE ends up doing __filemap_fdatawrite_range with
> WB_SYNC_ALL specified. Given the documentation for
> SYNC_FILE_RANGE_WRITE I did not expect that:
>  * SYNC_FILE_RANGE_WRITE: start writeout of all dirty pages in the range which
>  * are not presently under writeout.  This is an asynchronous flush-to-disk
>  * operation.  Not suitable for data integrity operations.
> 
> If I followed the code correctly - not a sure thing at all - that means
> bios are submitted with WRITE_SYNC specified. Not really what's needed
> in this case.
> 
> Now I think the docs are somewhat clear that SYNC_FILE_RANGE_WRITE isn't
> there for data integrity, but it might be that people rely on it
> nonetheless, so I'm loath to suggest changing that. But I do wonder if
> there's a way non-integrity writeback triggering could be exposed to
> userspace. A new fadvise flags seems like a good way to do that -
> POSIX_FADV_DONTNEED actually does non-integrity writeback, but also does
> other things, so it's not suitable for us.

You are absolutely correct that sync_file_range() should issue writeback as
WB_SYNC_NONE and not wait for current writeback in progress. That was an
oversight introduced by commit ee53a891f474 (mm: do_sync_mapping_range
integrity fix) which changed do_sync_mapping_range() to use WB_SYNC_ALL
because it had other users which relied on WB_SYNC_ALL semantics. Later that
got copied over to the current sync_file_range() implementation.

I think we should just revert to the very explicitly documented behavior
of sync_file_range(). I'll send a patch for that. Thanks for the report.
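
Sketched out, that would presumably boil down to submitting the
SYNC_FILE_RANGE_WRITE writeback as WB_SYNC_NONE, i.e. something like
(a paraphrased sketch, not the actual patch):

        if (flags & SYNC_FILE_RANGE_WRITE) {
                /* submit writeback without waiting on pages that are
                 * already under writeback */
                ret = __filemap_fdatawrite_range(mapping, offset, endbyte,
                                                 WB_SYNC_NONE);
        }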

Honza
-- 
Jan Kara 
SUSE Labs, CR


Re: Triggering non-integrity writeback from userspace

2015-10-24 Thread Dave Chinner
On Thu, Oct 22, 2015 at 03:15:55PM +0200, Andres Freund wrote:
> Hi,
> 
> postgres regularly has to checkpoint data to disk to be able to free
> data from its journal. We currently use buffered IO and that's not
> going to change short term.
> 
> In a busy database this checkpointing process can write out a lot of
> data. Currently that frequently leads to massive latency spikes
> (c.f. 20140326191113.gf9...@alap3.anarazel.de) for other processes doing
> IO. These happen either when the kernel starts writeback or when, at the
> end of the checkpoint, we issue an fsync() on the datafiles.
> 
> One odd issue there is that the kernel tends to do writeback in a very
> irregular manner. Even if we write data at a constant rate writeback
> very often happens in bulk - not a good idea for preserving
> interactivity.
> 
> What we're preparing to do now is to regularly issue
> sync_file_range(SYNC_FILE_RANGE_WRITE) on a few blocks shortly after
> we've written them to the OS. That way there's not too much dirty
> data in the page cache, so writeback won't cause latency spikes, and the
> fsync at the end doesn't have to write much if anything.
> 
> That improves things a lot.
> 
> But I still see latency spikes that shouldn't be there given the amount
> of IO. I'm wondering if that is related to the fact that
> SYNC_FILE_RANGE_WRITE ends up doing __filemap_fdatawrite_range with
> WB_SYNC_ALL specified. Given the documentation for
> SYNC_FILE_RANGE_WRITE I did not expect that:
>  * SYNC_FILE_RANGE_WRITE: start writeout of all dirty pages in the range which
>  * are not presently under writeout.  This is an asynchronous flush-to-disk
>  * operation.  Not suitable for data integrity operations.

WB_SYNC_ALL is simply a method of saying "writeback all dirty pages
and don't skip any". That's part of a data integrity operation, but
it's not what results in data integrity being provided. It may cause
some latencies caused by blocking on locks or in the request queues,
so that's what I'd be looking for.

i.e. if the request queues are full, SYNC_FILE_RANGE_WRITE will
block until all the IO it has been requested to write has been
submitted to the request queues. Put simply: the IO is asynchronous
in that we don't wait for completion, but the IO submission is still
synchronous.

Data integrity operations require related file metadata (e.g. block
allocation transactions) to be forced to the journal/disk, and a
device cache flush issued to ensure the data is on stable storage.
SYNC_FILE_RANGE_WRITE does neither of these things, and hence while
the IO might be the same pattern as a data integrity operation, it
does not provide such guarantees.

> If I followed the code correctly - not a sure thing at all - that means
> bios are submitted with WRITE_SYNC specified. Not really what's needed
> in this case.

That just allows the IO scheduler to classify them differently to
bulk background writeback. 

> Now I think the docs are somewhat clear that SYNC_FILE_RANGE_WRITE isn't
> there for data integrity, but it might be that people rely on it
> nonetheless, so I'm loath to suggest changing that. But I do wonder if
> there's a way non-integrity writeback triggering could be exposed to
> userspace. A new fadvise flags seems like a good way to do that -
> POSIX_FADV_DONTNEED actually does non-integrity writeback, but also does
> other things, so it's not suitable for us.

You don't want to do writeback from the syscall, right? i.e. you'd
like to expire the inode behind the fd, and schedule background
writeback to run on it immediately?

Cheers,

Dave.
-- 
Dave Chinner
da...@fromorbit.com


Triggering non-integrity writeback from userspace

2015-10-22 Thread Andres Freund
Hi,

postgres regularly has to checkpoint data to disk to be able to free
data from its journal. We currently use buffered IO and that's not
going to change short term.

In a busy database this checkpointing process can write out a lot of
data. Currently that frequently leads to massive latency spikes
(c.f. 20140326191113.gf9...@alap3.anarazel.de) for other processes doing
IO. These happen either when the kernel starts writeback or when, at the
end of the checkpoint, we issue an fsync() on the datafiles.

One odd issue there is that the kernel tends to do writeback in a very
irregular manner. Even if we write data at a constant rate writeback
very often happens in bulk - not a good idea for preserving
interactivity.

What we're preparing to do now is to regularly issue
sync_file_range(SYNC_FILE_RANGE_WRITE) on a few blocks shortly after
we've written them to the OS. That way there's not too much dirty
data in the page cache, so writeback won't cause latency spikes, and the
fsync at the end doesn't have to write much if anything.

That improves things a lot.

But I still see latency spikes that shouldn't be there given the amount
of IO. I'm wondering if that is related to the fact that
SYNC_FILE_RANGE_WRITE ends up doing __filemap_fdatawrite_range with
WB_SYNC_ALL specified. Given the documentation for
SYNC_FILE_RANGE_WRITE I did not expect that:
 * SYNC_FILE_RANGE_WRITE: start writeout of all dirty pages in the range which
 * are not presently under writeout.  This is an asynchronous flush-to-disk
 * operation.  Not suitable for data integrity operations.

If I followed the code correctly - not a sure thing at all - that means
bios are submitted with WRITE_SYNC specified. Not really what's needed
in this case.

Now I think the docs are somewhat clear that SYNC_FILE_RANGE_WRITE isn't
there for data integrity, but it might be that people rely on it
nonetheless, so I'm loath to suggest changing that. But I do wonder if
there's a way non-integrity writeback triggering could be exposed to
userspace. A new fadvise flags seems like a good way to do that -
POSIX_FADV_DONTNEED actually does non-integrity writeback, but also does
other things, so it's not suitable for us.
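
For completeness, the closest existing knob looks like this (a minimal
sketch; the helper name and arguments are placeholders):

#include <fcntl.h>

/* Hint writeback of [offset, offset+len) without integrity guarantees. */
static int hint_writeback(int fd, off_t offset, off_t len)
{
        /*
         * POSIX_FADV_DONTNEED starts non-integrity writeback of dirty
         * pages in the range, but also tries to drop the now-clean pages
         * from the page cache - the "other things" that make it
         * unsuitable here.
         */
        return posix_fadvise(fd, offset, len, POSIX_FADV_DONTNEED);
}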

Greetings,

Andres Freund