On 2020-11-24 07:39, Matthew Wilcox wrote:
> On Mon, Nov 23, 2020 at 10:49:38PM -0800, Chris Goldsworthy wrote:
>> +static void __evict_bh_lru(void *arg)
>> +{
>> +	struct bh_lru *b = &get_cpu_var(bh_lrus);
>> +	struct buffer_head *bh = arg;
>> +	int i;
>> +
>> +	for (i = 0; i < BH_LRU_SIZE; i++) {
>> +		if (b->bhs[i] == bh) {
>> +			brelse(b->bhs[i]);
>> +			b->bhs[i] = NULL;
>> +			goto out;
>
> That's an odd way to spell 'break' ...
>
>> +		}
>> +	}
>> +out:
>> +	put_cpu_var(bh_lrus);
>> +}
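
For reference, with a plain break the search loop above collapses to
something like this (a sketch, not taken from the posted patch -- the
out: label and the goto disappear entirely):

	for (i = 0; i < BH_LRU_SIZE; i++) {
		if (b->bhs[i] == bh) {
			brelse(b->bhs[i]);	/* drop the LRU's reference */
			b->bhs[i] = NULL;
			break;			/* at most one slot can match */
		}
	}
	put_cpu_var(bh_lrus);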
...
>> @@ -3245,8 +3281,15 @@ drop_buffers(struct page *page, struct buffer_head **buffers_to_free)
>>  	bh = head;
>>  	do {
>> -		if (buffer_busy(bh))
>> -			goto failed;
>> +		if (buffer_busy(bh)) {
>> +			/*
>> +			 * Check if the busy failure was due to an
>> +			 * outstanding LRU reference
>> +			 */
>> +			evict_bh_lrus(bh);
>> +			if (buffer_busy(bh))
>> +				goto failed;
Hi Matthew,
Apologies for the delayed response.
> We might be better off just calling invalidate_bh_lrus() -- we'd flush
> the entire LRU, but we'd only need to do it once, not once per buffer
> head.
I'm concerned about emptying the whole cache, since those who might
benefit from it would be adversely affected.
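
For context, invalidate_bh_lrus() discards every entry on every CPU's
LRU, which is what that concern is about. Its shape in fs/buffer.c at
the time was roughly the following (a sketch from memory -- check the
tree for the authoritative version):

	static void invalidate_bh_lru(void *arg)
	{
		struct bh_lru *b = &get_cpu_var(bh_lrus);
		int i;

		for (i = 0; i < BH_LRU_SIZE; i++) {
			brelse(b->bhs[i]);	/* NULL-safe; drops ref on every cached bh */
			b->bhs[i] = NULL;
		}
		put_cpu_var(bh_lrus);
	}

	void invalidate_bh_lrus(void)
	{
		/* run the flush on each CPU, waiting for completion */
		on_each_cpu(invalidate_bh_lru, NULL, 1);
	}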
> We could have a more complex 'evict' that iterates each busy buffer on
> a page, so transforming:
>
> for_each_buffer
> 	for_each_cpu
> 		for_each_lru_entry
>
> to:
>
> for_each_cpu
> 	for_each_buffer
> 		for_each_lru_entry
>
> (and I suggest that way because it's more expensive to iterate the
> buffers than it is to iterate the lru entries)
I've gone ahead and done this in a follow-up patch:
https://lore.kernel.org/lkml/cover.1609829465.git.cgold...@codeaurora.org/
There might be room for improvement in the data structure used to track
the used entries - using an xarray gives the cleanest code, but
pre-allocating an array to hold up to page_size(page) / bh->b_size
entries might be faster, although it would be a bit uglier to do in a
way that doesn't reduce the performance of the case when evict_bh_lru()
doesn't need to be called.
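
A rough sketch of the restructured eviction under that ordering (names
hypothetical -- see the lore link above for the actual follow-up patch):
one callback runs per CPU and walks the page's buffer ring once,
checking each buffer against that CPU's LRU slots:

	/* Hypothetical sketch only, not the posted patch. */
	static void __evict_bh_lrus(void *arg)
	{
		struct bh_lru *b = &get_cpu_var(bh_lrus);
		struct buffer_head *head = arg;	/* first buffer on the page */
		struct buffer_head *bh = head;
		int i;

		do {	/* for_each_buffer */
			for (i = 0; i < BH_LRU_SIZE; i++) {	/* for_each_lru_entry */
				if (b->bhs[i] == bh) {
					brelse(b->bhs[i]);
					b->bhs[i] = NULL;
					break;	/* a bh is in the LRU at most once */
				}
			}
			bh = bh->b_this_page;
		} while (bh != head);
		put_cpu_var(bh_lrus);
	}

	/* Caller, e.g. from drop_buffers(); the on_each_cpu() supplies
	 * the outer for_each_cpu: */
	on_each_cpu(__evict_bh_lrus, head, 1);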
Regards,
Chris.
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project