On Thu, 10 May 2001, Andrea Arcangeli wrote:
> If some page wasn't yet visible in the dirty_pages list by the time
> __sync_one started, we'll find I_DIRTY_PAGES set. This is enforced by
> the locking order (sync_one first clears the I_DIRTY_PAGES and then
> it starts browsing the dirty_pages l
Well,
Here is the updated version of the patch to add the "priority" argument to
writepage(). All implementations have been fixed.
No referenced bit changes as I still think it's not worth passing this
information down to writepage().
Note: I've removed ramfs_writepage(). If there is no writep
On Thu, 10 May 2001, Andrea Arcangeli wrote:
> On Wed, May 09, 2001 at 07:02:16PM -0300, Marcelo Tosatti wrote:
> > Why don't you clean I_DIRTY_PAGES ?
>
> we don't have visibility on the inode_lock from there, we could make a
> function in fs/inode.c or export th
On Wed, 9 May 2001, Trond Myklebust wrote:
>
> In addition to the two changes I proposed to Andrea's new patch, I
> also realized we might want to do a fdatasync() when locking files. If
> we don't, then locking won't be atomic on mmap()...
>
> Here therefore is Andrea's patch with the changes
On Wed, 9 May 2001, David S. Miller wrote:
>
> Marcelo Tosatti writes:
> > > Let me state it a different way, how is the new writepage() framework
> > > going to do things like ignore the referenced bit during page_launder
> > > for dead swap pages?
>
On Wed, 9 May 2001, David S. Miller wrote:
>
> Marcelo Tosatti writes:
> > You want writepage() to check/clean the referenced bit and move the page
> > to the active list itself ?
>
> Well, that's the other part of what my patch was doing.
>
> Let me s
Ok, what prevents this from happening:
CPU0					CPU1

try_to_swap_out()
...
entry = get_swap_page();
if (!entry.val)
	goto out_unlock_restore;
					swapin_readahead()
					finds valid swap entry just allocate
On Tue, 8 May 2001, David S. Miller wrote:
>
> Marcelo Tosatti writes:
> > Ok, this patch implements the thing and also changes ext2+swap+shm
> > writepage operations (so I could test the thing).
> >
> > The performance is better with the patch on my restri
On Wed, 9 May 2001, Mark Hemment wrote:
>
> On Tue, 8 May 2001, David S. Miller wrote:
> > Actually, the change was made because it is illogical to try only
> > once on multi-order pages. Especially because we depend upon order
> > 1 pages so much (every task struct allocated). We depend up
On Tue, 8 May 2001, Linus Torvalds wrote:
>
>
> On Tue, 8 May 2001, Marcelo Tosatti wrote:
> >
> > There are two issues which I missed yesterday: we have to get a reference
> > on the page, mark it clean, drop the locks and then call writepage(). If
> > the
Hi,
I was just wondering how bad the current way of writing out dirty pages is
wrt multiple page_launder() users.
We don't remove a dirty page from the inactive dirty list when writing it
out (as opposed to "direct" page->buffers ll_rw_block() IO).
When we have multiple users inside page_la
On Tue, 8 May 2001, Mark Hemment wrote:
>
> In 2.4.3pre6, code in page_alloc.c:__alloc_pages(), changed from:
>
>	try_to_free_pages(gfp_mask);
>	wakeup_bdflush();
>	if (!order)
>		goto try_again;
>
> to
>
>	try_to_free_pages(gfp_mask);
>	wakeup_bdflush
On Mon, 7 May 2001, Linus Torvalds wrote:
> In fact, it might even clean stuff up. Who knows? At least
> page_launder() would not need to know about magic dead swap pages, because
> the decision would be entirely in writepage().
>
> And there aren't that many writepage() implementations in the
On Mon, 7 May 2001, David S. Miller wrote:
>
> Marcelo Tosatti writes:
> > I just thought about this case:
> >
> > We find a dead swap cache page, so dead_swap_page goes to 1.
> >
> > We call swap_writepage(), but in the meantime the swapin reada
On Mon, 7 May 2001, Linus Torvalds wrote:
>
> On Mon, 7 May 2001, Marcelo Tosatti wrote:
> >
> > So the "dead_swap_page" logic is _not_ buggy and you are full of shit when
> > telling Alan to revert the change. (sorry, I could not avoid this one)
>
>
On Mon, 7 May 2001, Linus Torvalds wrote:
>
> On Mon, 7 May 2001, Marcelo Tosatti wrote:
> >
> > On 7 May 2001, Linus Torvalds wrote:
> >
> > > But it is important to re-calculate the deadness after getting the
> > > lock. Before, it was jus
On 7 May 2001, Linus Torvalds wrote:
> But it is important to re-calculate the deadness after getting the
> lock. Before, it was just an informed guess. After the lock, it is
> knowledge. And you can use informed guesses for heuristics, but you
> must _not_ use them for any serious decisions.
On Wed, 2 May 2001, Jorge Nerin wrote:
> Short version:
> Under very heavy thrashing (about four hours) the system either lockups
> or OOM handler kills a task even when there is swap space left.
First of all, please try to reproduce the problem with 2.4.5-pre1.
If it still happens with pre
Hi,
The following patch implements a new super_operations "wait_inode"
operation on ext2 to fix the generic_osync_inode/fsync/fdatasync race I
mentioned sometime ago.
We still have to implement the wait_inode operation on _all_ block
filesystems to make them safe.
Comments?
diff --exclude-f
On Fri, 27 Apr 2001, Mike Galbraith wrote:
> On Thu, 26 Apr 2001, Rik van Riel wrote:
>
> > On Thu, 26 Apr 2001, Mike Galbraith wrote:
> >
> > > > > > No. It livelocked on me with almost all active pages exhausted.
> > > > > Misspoke.. I didn't try the two mixed. Rik's patch livelocked me.
>
Linus,
Currently __alloc_pages() does not allow PF_MEMALLOC tasks to free clean
inactive pages.
This is senseless --- if the allocation has __GFP_WAIT set, it's ok to grab
the pagemap_lru_lock/pagecache_lock/etc.
I checked all possible codepaths after reclaim_page() and they are ok.
The follow
On Thu, 26 Apr 2001, Mike Galbraith wrote:
> > (i cannot see how this chunk affects the VM, AFAICS this too makes the
> > zapping of the cache less aggressive.)
>
> (more folks get snagged on write.. they can't eat cache so fast)
What about GFP_BUFFER allocations ? :)
I suspect the jiffies hac
On Thu, 26 Apr 2001, Mike Galbraith wrote:
> On Thu, 26 Apr 2001, Marcelo Tosatti wrote:
>
> > > (I can get it to under 9 with MUCH extremely ugly tinkering. I've done
> > > this enough to know that I _should_ be able to do 8 1/2 minutes ~easily)
> >
> &
On Thu, 26 Apr 2001, Mike Galbraith wrote:
> > Comments?
>
> More of a question. Neither Ingo's nor your patch makes any difference
> on my UP box (128mb PIII/500) doing make -j30.
Well, my patch incorporates Ingo's patch.
It is now integrated into pre7, btw.
> It is taking me 11 1/2
>
On Wed, 25 Apr 2001, Feng Xian wrote:
> Hi,
>
> I am running linux-2.4.3 on a Dell dual PIII machine with 128M memory.
> After the machine runs a while, dmesg shows,
>
> __alloc_pages: 4-order allocation failed.
> __alloc_pages: 3-order allocation failed.
> __alloc_pages: 4-order allocation f
Resending...
-- Forwarded message --
Date: Tue, 24 Apr 2001 23:28:38 -0300 (BRT)
From: Marcelo Tosatti <[EMAIL PROTECTED]>
To: Linus Torvalds <[EMAIL PROTECTED]>
Cc: Ingo Molnar <[EMAIL PROTECTED]>, Alan Cox <[EMAIL PROTECTED]>,
Linux Kernel List <
On Tue, 24 Apr 2001, Linus Torvalds wrote:
> Basically, I don't want to mix synchronous and asynchronous
> interfaces. Everything should be asynchronous by default, and waiting
> should be explicit.
The following patch changes all swap IO functions to be asynchronous by
default and changes the
>
> To: [EMAIL PROTECTED]
> CC: [EMAIL PROTECTED]
> In-reply-to: <[EMAIL PROTECTED]>
> (message from Marcelo Tosatti on Mon, 23 Apr 2001 20:31:12 -0300
> (BRT))
> Subject: Re: 2.4.4-pre6 : THANKS! very snappy here [nt]
> Reply-to: [EMAIL PROTECTED]
> Refere
On Mon, 23 Apr 2001, Marcelo Tosatti wrote:
>
> Linus,
>
> With the prune_icache() modifications which were integrated in pre5 there
> is no more need to avoid non __GFP_IO allocations to go down to
> prune_icache().
>
> The following patch moves the __GFP_IO ch
On Mon, 23 Apr 2001, Linus Torvalds wrote:
>
> On Mon, 23 Apr 2001, Jonathan Morton wrote:
> > >There seems to be one more reason, take a look at the function
> > >read_swap_cache_async() in swap_state.c, around line 240:
> > >
> > >/*
> > > * Add it to the swap cache and read i
Linus,
With the prune_icache() modifications which were integrated in pre5 there
is no more need to avoid non __GFP_IO allocations to go down to
prune_icache().
The following patch moves the __GFP_IO check down to prune_icache(),
allowing !__GFP_IO allocations to free clean unused inodes.
dif
On Fri, 20 Apr 2001, Marcelo Tosatti wrote:
>
> Hi,
Argh. Silly.
Well...
Right now we can get a task killed by the OOM killer even if there is a
lot of _unused_ (but allocated) swap space. The reason for that is the
pre-allocation of swap.
Practical example (128MB swap, 960
Hi,
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Hi Linus,
The following patch fixes the OOM deadlock condition caused by
prune_icache(), and also improves its performance significantly.
The OOM deadlock can happen because prune_icache() tries to sync _all_
dirty inodes (under PF_MEMALLOC) on the system before trying to free a
portion of the
On Tue, 17 Apr 2001, Stephen C. Tweedie wrote:
> Hi,
>
> On Sat, Apr 14, 2001 at 07:24:42AM -0300, Marcelo Tosatti wrote:
> >
> > As described earlier, code which wants to write an inode cannot rely on
> > the I_DIRTY bits (on inode->i_state) being clean to gu
Kernel 2.4.4-pre3.
gcc -D__KERNEL__ -I/home/marcelo/rpm/BUILD/kernel-2.4.3/linux/include
-Wall -Wstrict-prototypes -O2 -fomit-frame-pointer -fno-strict-aliasing
-pipe -mpreferred-stack-boundary=2 -march=i386 -DMODULE -DMODVERSIONS
-include
/home/marcelo/rpm/BUILD/kernel-2.4.3/linux/include/linux
On Sat, 14 Apr 2001, Rik van Riel wrote:
> On Sat, 14 Apr 2001, Marcelo Tosatti wrote:
>
> > There is a nasty race between shmem_getpage_locked() and
> > swapin_readahead() with the new shmem code (introduced in
> > 2.4.3-ac3 and merged in the main tree in 2.4.4-pre3)
Hi,
There is a nasty race between shmem_getpage_locked() and
swapin_readahead() with the new shmem code (introduced in 2.4.3-ac3 and
merged in the main tree in 2.4.4-pre3):
shmem_getpage_locked() finds a page in the swapcache and moves it to the
pagecache as an shmem page, freeing the swapcac
Hi,
As described earlier, code which wants to write an inode cannot rely on
the I_DIRTY bits (on inode->i_state) being clean to guarantee that the
inode and its dirty pages, if any, are safely synced on disk.
The reason for that is sync_one() --- it cleans the I_DIRTY bits of an
inode, sets the
Hi,
The following patch should fix the OOM deadlock condition caused by
prune_icache(), and improve its performance significantly.
The OOM deadlock can happen because prune_icache() tries to sync _all_
dirty inodes (under PF_MEMALLOC) on the system before trying to free a
portion of the clean
On Thu, 12 Apr 2001, Linus Torvalds wrote:
>
>
> On Thu, 12 Apr 2001, Marcelo Tosatti wrote:
> >
> > Comments?
> >
> > --- fs/inode.c~ Thu Mar 22 16:04:13 2001
> > +++ fs/inode.c Thu Apr 12 15:18:22 2001
> > @@ -347,6 +347,11 @@
On Thu, 12 Apr 2001, Marcelo Tosatti wrote:
>
> On Thu, 12 Apr 2001, Linus Torvalds wrote:
>
> > On Thu, 12 Apr 2001, Marcelo Tosatti wrote:
> > >
> > > Comments?
> > >
> > > --- fs/inode.c~ Thu Mar 22 16:04:13 2001
> > > +++ fs
On Thu, 12 Apr 2001, Linus Torvalds wrote:
> On Thu, 12 Apr 2001, Marcelo Tosatti wrote:
> >
> > Comments?
> >
> > --- fs/inode.c~ Thu Mar 22 16:04:13 2001
> > +++ fs/inode.c Thu Apr 12 15:18:22 2001
> > @@ -347,6 +347,11 @@
> > #endif
>
Hi,
generic_osync_inode() (called by generic_file_write()) is not checking if
the inode being synced has the I_LOCK bit set before checking the I_DIRTY
bit.
AFAICS, the following problem can happen:
sync()
  ...
  sync_one()
    reset I_DIRTY, set I_LOCK
    filemap_fdatasync()  <-- #window
    write_inode()
On Thu, 12 Apr 2001, Marcelo Tosatti wrote:
>
> I did :)
>
> This should fix it
>
> --- mm/page_alloc.c.orig Thu Apr 12 13:47:53 2001
> +++ mm/page_alloc.c	Thu Apr 12 13:48:06 2001
> @@ -454,7 +454,7 @@
>
On Thu, 12 Apr 2001, Rik van Riel wrote:
> On Thu, 12 Apr 2001, Alan Cox wrote:
>
> > > 2.4.3-pre6 quietly made a very significant change there:
> > > it used to say "if (!order) goto try_again;" and now just
> > > says "goto try_again;". Which seems very sensible since
> > > __GFP_WAIT is se
On Fri, 23 Mar 2001, Adam J. Richter wrote:
> [Sorry for posting three messages to linux-kernel about this.
> Each time I was pretty sure I was done for the night. Anyhow, I
> hope this proposed patch makes up for it.]
>
> In linux-2.4.3-pre6, a call to vmalloc can result in a cal
On Wed, 21 Mar 2001, Linus Torvalds wrote:
> diff -u --recursive --new-file pre6/linux/mm/memory.c linux/mm/memory.c
> --- pre6/linux/mm/memory.c	Tue Mar 20 23:13:03 2001
> +++ linux/mm/memory.c Wed Mar 21 22:21:27 2001
> @@ -1031,18 +1031,20 @@
> struct vm_area_struct * vma, unsigned l
On Mon, 19 Mar 2001, Linus Torvalds wrote:
>
>
> On Tue, 20 Mar 2001, Marcelo Tosatti wrote:
> >
> > Could the IDE one cause corruption ?
>
> Only with broken disks, as far as we know right now. There's been so far
> just one report of this problem, an
On Mon, 19 Mar 2001, Linus Torvalds wrote:
>
> There is a 2.4.3-pre5 in the test-directory on ftp.kernel.org.
>
> The complete changelog is appended, but the biggest recent change is the
> mmap_sem change, which I updated with new locking rules for pte/pmd_alloc
> to avoid the race on the act
On Mon, 19 Mar 2001, David Raufeisen wrote:
> Getting oops every time I run rsync today.. happens after it receives
> file list and is starting to stat all the files.. filesystem is reiser.
>
> Linux prototype 2.4.3-pre1 #2 Thu Mar 15 00:24:43 PST 2001 i686 unknown
>
> 15:25:28 up 1 day, 20:0
On Fri, 16 Mar 2001, Shane Y. Gibson wrote:
> Marcelo Tosatti wrote:
> >
> > Can you please try to reproduce it with the following patch against 2.4.2?
>
> Marcelo (et al),
>
> I'll give it a whirl with the patch. Should I also
> try setting `nmi_watchdog=
Well, here it is.
I would like to know if this makes a big difference on highmem (2GB or
more) machines with heavy IO workloads.
I don't have such a machine here to be able to test it. (it works with a
1GB machine, but it needs to be tested with lots of highmem to show the
improvement)
diff
Ok.
Going to write a patch and send it to you to test RSN.
On Fri, 16 Mar 2001, Ingo Molnar wrote:
>
> On Thu, 15 Mar 2001, Marcelo Tosatti wrote:
>
> > The old create_bounce code used to set PF_MEMALLOC on the task flags
> > and call wakeup_bdflush(1) in case GFP_BUFFER p
On Wed, 14 Mar 2001, Donald J. Barry wrote:
> Hey kernel developers,
>
> I'm getting repeated oopses and occasional freezes on a server I've
> set up to host a giant (180G) reiserfs system atop lvm, served by nfs(v2).
> (I've applied the reiserfs and nfs patches to the vanilla kernel,
> which
On Thu, 15 Mar 2001, Shane Y. Gibson wrote:
> Marcelo Tosatti wrote:
> >
> > Didn't you get a message similar to
> >
> > "kernel BUG at page_alloc.c line xxx!"
>
> Marcelo,
>
> Yes there was. I'm pasting the total sum of the /va
Ingo,
Any comments?
-- Forwarded message --
Date: Wed, 28 Feb 2001 02:02:16 -0300 (BRT)
From: Marcelo Tosatti <[EMAIL PROTECTED]>
To: Ingo Molnar <[EMAIL PROTECTED]>
Cc: lkml <[EMAIL PROTECTED]>
Subject: Reserved memory for highmem bouncing
Hi Ingo,
On Thu, 15 Mar 2001, Shane Y. Gibson wrote:
>
> All,
>
> I just compiled 2.4.2 and installed it on a otherwise stock
> Redhat 7.0 platform. The system is a SuperMicro PIIISME,
> running dual PIII 750s, with 256 cache. It appears that about
> every 10 to 18 hours, the system is panicing, and
Linus,
I never got an answer from you, so I'm going to ask again.
Do you want these patches for 2.4 or not?
Yes, I tested them.
-- Forwarded message --
Date: Mon, 19 Feb 2001 23:05:23 -0200 (BRST)
From: Marcelo Tosatti <[EMAIL PROTECTED]>
To: Linus Torva
On Wed, 7 Mar 2001, Jesse Pollard wrote:
> On Wed, 07 Mar 2001, Tom Sightler wrote:
> >Hi All,
> >
> >I'm seeking information in regards to a large Linux implementation we are
> >planning. We have been evaluating many storage options and I've come up
> >with some questions that I have been unab
On Wed, 7 Mar 2001, Alexander Viro wrote:
>
>
> You are reinventing the wheel.
> man ptrace (see PTRACE_{PEEK,POKE}{TEXT,DATA} and PTRACE_{ATTACH,CONT,DETACH})
With ptrace data will be copied twice. As far as I understood, Jeremy
wants to avoid that.
On Fri, 2 Mar 2001, Mike Galbraith wrote:
> On Thu, 1 Mar 2001, Rik van Riel wrote:
>
> > > > The merging at the elevator level only works if the requests sent to
> > > > it are right next to each other on disk. This means that randomly
> > > > sending stuff to disk really DOES DESTROY PERFORM
Hi,
The following patch changes two things:
- Counts asynchronous ll_rw_block() IO in the flushed pages counter (page_launder)
- Limits the amount of scanned pte's _by user tasks_ inside swap_out()
diff --exclude-from=/home/marcelo/exclude -Nur linux.orig/fs/buffer.c linux/fs/buffer.c
---
On Thu, 1 Mar 2001, Chris Evans wrote:
>
> On Thu, 1 Mar 2001, Rik van Riel wrote:
>
> > True. I think we want something in-between our ideas...
> ^^^
> > a while. This should make it possible for the disk reads to
> ^^
>
> Oh dear.. not more "vm design by wavi
On Wed, 28 Feb 2001, Mike Galbraith wrote:
> > > Have you tried to use SWAP_SHIFT as 4 instead of 5 on a stock 2.4.2-ac5 to
> > > see if the system still swaps out too much?
> >
> > Not yet, but will do.
But what about swapping behaviour?
It still swaps too much?
On Wed, 28 Feb 2001, Mike Galbraith wrote:
> On Tue, 27 Feb 2001, Marcelo Tosatti wrote:
>
> > On Tue, 27 Feb 2001, Mike Galbraith wrote:
> >
> > > What the patch does is simply to push I/O as fast as we can.. we're
> > > by definition I/O bound an
Hi Ingo,
I have a question about the highmem page IO deadlock fix which is in
2.4.2-ac. (the emergency memory thing)
The old create_bounce code used to set PF_MEMALLOC on the task flags and
call wakeup_bdflush(1) in case GFP_BUFFER page allocation failed. That was
broken because flush_dirty_buf
On Tue, 27 Feb 2001, Robert Read wrote:
> Currently in brw_kiovec, iobuf->io_count is being incremented as each
> bh is submitted, and decremented in the bh->b_end_io(). This means
> io_count can go to zero before all the bhs have been submitted,
> especially during a large request. This causes
On Tue, 27 Feb 2001, Mike Galbraith wrote:
> On Tue, 27 Feb 2001, Rik van Riel wrote:
>
> > On Tue, 27 Feb 2001, Mike Galbraith wrote:
> >
> > > Attempting to avoid doing I/O has been harmful to throughput here
> > > ever since the queueing/elevator woes were fixed. Ever since then,
> > > tossi
Hi,
page_launder() is not counting direct ll_rw_block() IO correctly in the
flushed pages counter.
A page is only counted as flushed if it had its buffer_head's freed,
meaning that pages which have been queued but not freed are not counted.
The following patch against ac5 fixes the problem.
On Mon, 26 Feb 2001, Mordechai T. Abzug wrote:
> Why do I have 47MB of swap in use? I thought at first that it might
> be due to the minimum allowable cache size, but considering that there
> was only 48MB of RAM in use to begin with, that still seems
> suspicious. Even weirder, if I then turn
On Mon, 26 Feb 2001, Alan Cox wrote:
> > We can add an allocation flag (__GFP_NO_CRITICAL?) which can be used by
> > sg_low_malloc() (and other non critical allocations) to fail previously
> > and not print the message.
>
> It is just for debugging. The message can go. If anytbing it would be
On Sun, 25 Feb 2001, Mike Galbraith wrote:
> The way sg_low_malloc() tries to allocate, failure messages are
> pretty much guaranteed. It tries high order allocations (which
> are unreliable even when not stressed) and backs off until it
> succeeds.
>
> In other words, the messages are a red h
Linus,
refill_freelist() (fs/buffer.c) calls page_launder(GFP_BUFFER) after
syncing some of the oldest dirty buffers.
As far as I can see, that used to make sense because clean pages could be
freed with page_launder(GFP_BUFFER) -- this could avoid a potential sleep
on kswapd when trying to al
On Fri, 23 Feb 2001, Shawn Starr wrote:
> Feb 23 21:17:47 coredump kernel: __alloc_pages: 3-order allocation failed.
> Feb 23 21:17:47 coredump kernel: __alloc_pages: 2-order allocation failed.
> Feb 23 21:17:47 coredump kernel: __alloc_pages: 1-order allocation failed.
> Feb 23 21:17:47 coredum
On Thu, 22 Feb 2001, Andrea Arcangeli wrote:
> However if you have houndred of different queues doing I/O at the same
> time it may make a difference, but probably with tons of harddisks
> you'll also have tons of ram... In theory we could put a global limit
> on top of the the per-queue one.
On Thu, 22 Feb 2001, Andrea Arcangeli wrote:
> On Thu, Feb 22, 2001 at 10:59:20AM -0800, Linus Torvalds wrote:
> > I'd prefer for this check to be a per-queue one.
>
> I'm running this in my tree since a few weeks, however I never had the courage
> to post it publically because I didn't benchma
On Thu, 22 Feb 2001, Linus Torvalds wrote:
>
>
> On Thu, 22 Feb 2001, Jens Axboe wrote:
>
> > On Thu, Feb 22 2001, Marcelo Tosatti wrote:
> > > The following piece of code in ll_rw_block() aims to limit the number of
> > > locked buffers by making processe
Hi,
The following piece of code in ll_rw_block() aims to limit the number of
locked buffers by making processes throttle on IO if the number of in-flight
requests is bigger than a high watermark. IO will only start
again if we're under a low watermark.
if (atomic_read(&queued_
On Tue, 20 Feb 2001, Alan Cox wrote:
> > --- linux/include/linux/locks.h.orig	Mon Feb 19 23:16:50 2001
> > +++ linux/include/linux/locks.h Mon Feb 19 23:21:48 2001
> > @@ -13,6 +13,7 @@
> > * lock buffers.
> > */
> > extern void __wait_on_buffer(struct buffer_head *);
> > +exter
 		if (PageLocked(page)) {
-			run_task_queue(&tq_disk);
+			sync_page(page);
 			schedule();
 			continue;
 		}
On Mon, 19 Feb 2001, Marcelo Tosatti wrote:
>
> Hi Linus,
>
> Take a look at __lock_page:
&
On Mon, 19 Feb 2001, Linus Torvalds wrote:
>
>
> On Mon, 19 Feb 2001, Marcelo Tosatti wrote:
> >
> > The following patch makes lock_buffer() use the exclusive wakeup scheme
> > added in 2.3.
>
> Ugh, This is horrible.
>
> You should NOT have one func
Hi,
The following patch makes lock_buffer() use the exclusive wakeup scheme
added in 2.3.
Against 2.4.2pre4.
--- linux/fs/buffer.c.orig Mon Feb 19 23:13:08 2001
+++ linux/fs/buffer.c Mon Feb 19 23:16:26 2001
@@ -142,13 +142,18 @@
* if 'b_wait' is set before calling this, so that the
Hi Linus,
Take a look at __lock_page:
static void __lock_page(struct page *page)
{
	struct task_struct *tsk = current;
	DECLARE_WAITQUEUE(wait, tsk);

	add_wait_queue_exclusive(&page->wait, &wait);
	for (;;) {
		sync_page(page);
		set_task
Hi,
This patch makes page_launder() do actual disk IO
(run_task_queue(&tq_disk)) only if IO was queued in the page freeing
loop.
If we freed enough clean pages without needing do to any disk IO, there is
no need to call run_task_queue(&tq_disk).
--- linux/mm/vmscan.c.orig Mon Feb 19 20
On Tue, 13 Feb 2001, Rik van Riel wrote:
> On Tue, 13 Feb 2001, Mike Galbraith wrote:
> > On Mon, 12 Feb 2001, Marcelo Tosatti wrote:
> >
> > > Could you please try the attached patch on top of latest Rik's patch?
> >
> > Sure thing.. (few minutes l
On Wed, 14 Feb 2001, Steve Lord wrote:
> > However, we may still optimize readahead a bit on Linux 2.4 without too
> > much efforts: an IO read command which fails (and returns an error code
> > back to the caller) if merging with other requests fail.
> >
> > Using this command for readahead
On Wed, 14 Feb 2001, wrote:
> I have been performing some IO tests under Linux on SCSI disks.
ext2 filesystem?
> I noticed gaps between the commands and decided to investigate.
> I am new to the kernel and do not profess to understand what
> actually happens. My observations suggest that th
On Wed, 14 Feb 2001, NIIBE Yutaka wrote:
> Alan Cox wrote:
> > Ok we need to handle that case a bit more intelligently so those flushes dont
> > get into other ports code paths.
>
> Possibly at fs/buffer.c:end_buffer_io_async?
>
> We need to flush the cache when I/O was READ or READA.
Yet
On Mon, 12 Feb 2001, george anzinger wrote:
> Excuse me if I am off base here, but wouldn't an atomic operation be
> better here. There are atomic inc/dec and add/sub macros for this. It
> just seems that that is all that is needed here (from inspection of the
> patch).
Most functions which
Hi,
Niibe Yutaka noted (and added an entry on the MM bugzilla system) that
cache flushing on do_swap_page() is buggy. Here:
---
	struct page *page = lookup_swap_cache(entry);
	pte_t pte;

	if (!page) {
		lock_kernel();
		swapin_readahead(entry);
On Sun, 11 Feb 2001, Mike Galbraith wrote:
> On Sun, 11 Feb 2001, Rik van Riel wrote:
>
> > On Sun, 11 Feb 2001, Mike Galbraith wrote:
> > > On Sun, 11 Feb 2001, Mike Galbraith wrote:
> > >
> > > > Something else I see while watching it run: MUCH more swapout than
> > > > swapin. Does that
On Mon, 12 Feb 2001, Hans Reiser wrote:
> Marcelo Tosatti wrote:
> >
> > On Sun, 11 Feb 2001, Chris Mason wrote:
> >
> > >
> > >
> > > On Sunday, February 11, 2001 10:00:11 AM +0300 Hans Reiser
> > > <[EMAIL PROTECTED]> wrote:
>
On Sun, 11 Feb 2001, Chris Mason wrote:
>
>
> On Sunday, February 11, 2001 10:00:11 AM +0300 Hans Reiser
> <[EMAIL PROTECTED]> wrote:
>
> > Daniel Stone wrote:
> >>
> >> On 11 Feb 2001 02:02:00 +1300, Chris Wedgwood wrote:
> >> > On Thu, Feb 08, 2001 at 05:34:44PM +1100, Daniel Stone wrote:
I just tested it here and it seems to behave pretty well.
On Sat, 10 Feb 2001, Rik van Riel wrote:
> Hi,
>
> the patch below should make page_launder() more well-behaved
> than it is in -ac8 and -ac9 ... note, however, that this thing
> is still completely untested and only in theory makes pa
On Sat, 10 Feb 2001, Mike Galbraith wrote:
> Hi Rik,
>
> This change makes my box swap madly under load. It appears to be
> keeping more cache around than is really needed, and therefore
> having to resort to swap instead. The result is MUCH more I/O than
> previous kernels while doing the sa
On Thu, 8 Feb 2001, Mikulas Patocka wrote:
> > > The problem is that aio_read and aio_write are pretty useless for ftp or
> > > http server. You need aio_open.
> >
> > Could you explain this?
>
> If the server is sending many small files, disk spends huge amount time
> walking directory tree
On Thu, 8 Feb 2001, Mikulas Patocka wrote:
> > > > How do you write high-performance ftp server without threads if select
> > > > on regular file always returns "ready"?
> > >
> > > Select can work if the access is sequential, but async IO is a more
> > > general solution.
> >
> > Even async
On Thu, 8 Feb 2001, Marcelo Tosatti wrote:
>
> On Thu, 8 Feb 2001, Ben LaHaise wrote:
>
>
>
> > > (besides, latency would suck. I bet you're better off waiting for the
> > > requests if they are all used up. It takes too long to get deep into the
> &
On Thu, 8 Feb 2001, Ben LaHaise wrote:
> > (besides, latency would suck. I bet you're better off waiting for the
> > requests if they are all used up. It takes too long to get deep into the
> > kernel from user space, and you cannot use the exclusive waiters with its
> > anti-herd behaviour et