On Fri, 2007-01-26 at 00:51 -0800, Andrew Morton wrote:
> A patch against next -mm would suit, thanks.
>
> (But we already use atomic_long_t in generic code?)
but there is currently no atomic_long_{inc,dec}_return, or any
atomic_long_*_return function for that matter.
Mathieu adds these missing
On Fri, 2007-01-26 at 09:00 +0100, Peter Zijlstra wrote:
> On Thu, 2007-01-25 at 21:02 -0800, Andrew Morton wrote:
> > On Thu, 25 Jan 2007 16:32:28 +0100
> > Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> >
> > > +long congestion_wait_interruptible(int rw, long timeout)
> > > +{
> > > + long ret;
>
On Thu, 2007-01-25 at 22:04 -0800, Andrew Morton wrote:
> On Thu, 25 Jan 2007 21:31:43 -0800 (PST)
> Christoph Lameter <[EMAIL PROTECTED]> wrote:
>
> > On Thu, 25 Jan 2007, Andrew Morton wrote:
> >
> > > atomic_t is 32-bit. Put 16TB of memory under writeback and blam.
> >
> > We have systems with 8TB main memory and are able to get to 16TB.
On Thu, 2007-01-25 at 21:02 -0800, Andrew Morton wrote:
> On Thu, 25 Jan 2007 16:32:28 +0100
> Peter Zijlstra <[EMAIL PROTECTED]> wrote:
>
> > +long congestion_wait_interruptible(int rw, long timeout)
> > +{
> > + long ret;
> > + DEFINE_WAIT(wait);
> > + wait_queue_head_t *wqh = &congestion_wqh[rw];
>
On Thu, 25 Jan 2007, Andrew Morton wrote:
> > We have systems with 8TB main memory and are able to get to 16TB.
>
> But I bet you don't use 4k pages on 'em ;)
IA64 can be configured for 4k pagesize but yes 16k is the default. There
are plans to go much higher though. Plus there may be other reasons
On Thu, 25 Jan 2007 21:31:43 -0800 (PST)
Christoph Lameter <[EMAIL PROTECTED]> wrote:
> On Thu, 25 Jan 2007, Andrew Morton wrote:
>
> > atomic_t is 32-bit. Put 16TB of memory under writeback and blam.
>
> We have systems with 8TB main memory and are able to get to 16TB.
But I bet you don't use 4k pages on 'em ;)
On Thu, 25 Jan 2007, Andrew Morton wrote:
> atomic_t is 32-bit. Put 16TB of memory under writeback and blam.
We have systems with 8TB main memory and are able to get to 16TB.
Better change it now.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a
On Thu, 25 Jan 2007 16:32:28 +0100
Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> Hopefully the last version ;-)
>
>
> ---
> Subject: nfs: fix congestion control
>
> The current NFS client congestion logic is severely broken, it marks the
> backing
> device congested during each nfs_writepages()
On Thu, 25 Jan 2007 16:32:28 +0100
Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> +long congestion_wait_interruptible(int rw, long timeout)
> +{
> + long ret;
> + DEFINE_WAIT(wait);
> > + wait_queue_head_t *wqh = &congestion_wqh[rw];
> > +
> > + prepare_to_wait(wqh, &wait, TASK_INTERRUPTIBLE);
> + if
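The quoted snippets keep truncating this function; for context, it presumably continued along these lines (a hedged reconstruction from the fragments above, not a verbatim copy of the posted patch): a pending signal short-circuits the sleep with -ERESTARTSYS, otherwise the task does a timed io_schedule before leaving the congestion waitqueue.

```c
long congestion_wait_interruptible(int rw, long timeout)
{
	long ret;
	DEFINE_WAIT(wait);
	wait_queue_head_t *wqh = &congestion_wqh[rw];

	prepare_to_wait(wqh, &wait, TASK_INTERRUPTIBLE);
	if (signal_pending(current))
		ret = -ERESTARTSYS;	/* let the caller restart the syscall */
	else
		ret = io_schedule_timeout(timeout);
	finish_wait(wqh, &wait);
	return ret;
}
```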
On Sat, 20 Jan 2007, Peter Zijlstra wrote:
> Subject: nfs: fix congestion control
I am not sure if its too valuable since I have limited experience with NFS
but it looks fine to me.
Acked-by: Christoph Lameter <[EMAIL PROTECTED]>
On Sat, 2007-01-20 at 08:01 +0100, Peter Zijlstra wrote:
> Subject: nfs: fix congestion control
>
> The current NFS client congestion logic is severely broken, it marks the
> backing
> device congested during each nfs_writepages() call but doesn't mirror this in
> nfs_writepage() which makes for
Subject: nfs: fix congestion control
The current NFS client congestion logic is severely broken, it marks the backing
device congested during each nfs_writepages() call but doesn't mirror this in
nfs_writepage() which makes for deadlocks. Also it implements its own waitqueue.
Replace this by a
On Fri, 19 Jan 2007, Trond Myklebust wrote:
> That would be good as a default, but I've been thinking that we could
> perhaps also add a sysctl in /proc/sys/fs/nfs in order to make it a
> tunable?
Good idea.
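A tunable of the kind Trond suggests would, in 2.6.20-era style, be a small ctl_table entry hung under fs/nfs. A hedged sketch (the table name, the `nfs_congestion_kb` variable, and the use of CTL_UNNUMBERED are assumptions for illustration, not quoted from any posted patch):

```c
static struct ctl_table nfs_cb_sysctls[] = {
	{
		.ctl_name	= CTL_UNNUMBERED,	/* no binary sysctl number */
		.procname	= "nfs_congestion_kb",	/* /proc/sys/fs/nfs/nfs_congestion_kb */
		.data		= &nfs_congestion_kb,
		.maxlen		= sizeof(nfs_congestion_kb),
		.mode		= 0644,
		.proc_handler	= &proc_dointvec,
	},
	{ .ctl_name = 0 }
};
```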
On Fri, 2007-01-19 at 18:57 +0100, Peter Zijlstra wrote:
> On Fri, 2007-01-19 at 09:20 -0800, Christoph Lameter wrote:
> > On Fri, 19 Jan 2007, Peter Zijlstra wrote:
> >
> > > + /*
> > > + * NFS congestion size, scale with available memory.
> > > + *
> >
> > Well this all depends on the memory
On Fri, 19 Jan 2007, Peter Zijlstra wrote:
> Eeuh, right. Glad to have you around to remind how puny my boxens
> are :-)
Sorry about that but it was unavoidable if we want to get to reasonable
limits that will work in all situations.
On Fri, 2007-01-19 at 09:20 -0800, Christoph Lameter wrote:
> On Fri, 19 Jan 2007, Peter Zijlstra wrote:
>
> > + /*
> > + * NFS congestion size, scale with available memory.
> > + *
>
> Well this all depends on the memory available to the running process.
> If the process is just allowed
On Fri, 2007-01-19 at 11:51 -0500, Trond Myklebust wrote:
> > So with that out of the way I now have this
>
> Looks much better. Just one obvious buglet...
> > @@ -1565,6 +1579,23 @@ int __init nfs_init_writepagecache(void)
> > if (nfs_commit_mempool == NULL)
> > return -ENOMEM;
On Fri, 19 Jan 2007, Peter Zijlstra wrote:
> + /*
> + * NFS congestion size, scale with available memory.
> + *
Well this all depends on the memory available to the running process.
If the process is just allowed to allocate from a subset of memory
(cpusets) then this may need to
On Fri, 2007-01-19 at 14:07 +0100, Peter Zijlstra wrote:
> On Fri, 2007-01-19 at 10:34 +0100, Peter Zijlstra wrote:
> > On Thu, 2007-01-18 at 10:49 -0500, Trond Myklebust wrote:
> >
> > > After the dirty page has been written to unstable storage, it marks the
> > > inode using I_DIRTY_DATASYNC,
On Fri, 2007-01-19 at 10:34 +0100, Peter Zijlstra wrote:
> On Thu, 2007-01-18 at 10:49 -0500, Trond Myklebust wrote:
>
> > After the dirty page has been written to unstable storage, it marks the
> > inode using I_DIRTY_DATASYNC, which should then ensure that the VFS
> > calls write_inode() on the
On Thu, 2007-01-18 at 10:49 -0500, Trond Myklebust wrote:
> After the dirty page has been written to unstable storage, it marks the
> inode using I_DIRTY_DATASYNC, which should then ensure that the VFS
> calls write_inode() on the next pass through __sync_single_inode.
> I'd rather like to see
On Thu, 2007-01-18 at 14:27 +0100, Peter Zijlstra wrote:
> On Wed, 2007-01-17 at 16:54 -0500, Trond Myklebust wrote:
> > On Wed, 2007-01-17 at 22:52 +0100, Peter Zijlstra wrote:
> >
> > > >
> > > > > Index: linux-2.6-git/fs/inode.c
> > > > >
On Wed, 2007-01-17 at 16:54 -0500, Trond Myklebust wrote:
> On Wed, 2007-01-17 at 22:52 +0100, Peter Zijlstra wrote:
>
> > >
> > > > Index: linux-2.6-git/fs/inode.c
> > > > ===================================================================
> > > > --- linux-2.6-git.orig/fs/inode.c
> --- linux-2.6-git.orig/fs/inode.c 2007-01-12 08:03:47.000000000 +0100
> +++ linux-2.6-git/fs/inode.c 2007-01-12 08:53:26.000000000 +0100
> @@ -81,6 +81,7 @@ static struct hlist_head *inode_hashtabl
> * the i_state of an inode while it is in use..
> */
> DEFINE_SPINLOCK(inode_lock);
>
On Wed, 2007-01-17 at 22:52 +0100, Peter Zijlstra wrote:
> >
> > > Index: linux-2.6-git/fs/inode.c
> > > ===================================================================
> > > --- linux-2.6-git.orig/fs/inode.c 2007-01-12 08:03:47.0 +0100
> > > +++ linux-2.6-git/fs/inode.c
On Wed, 2007-01-17 at 12:05 -0800, Christoph Lameter wrote:
> On Tue, 16 Jan 2007, Peter Zijlstra wrote:
>
> > The current NFS client congestion logic is severely broken, it marks the
> > backing device congested during each nfs_writepages() call and implements
> > its own waitqueue.
>
> This is
On Tue, 16 Jan 2007, Peter Zijlstra wrote:
> The current NFS client congestion logic is severely broken, it marks the
> backing device congested during each nfs_writepages() call and implements
> its own waitqueue.
This is the magic bullet that Andrew is looking for to fix the NFS issues?
>
On Wed, 2007-01-17 at 15:29 +0100, Peter Zijlstra wrote:
> I was thinking that since the server needs to actually sync the page a
> commit might be quite expensive (timewise), hence I didn't want to flush
> too much, and interleave them with writing out some real pages to
> utilise bandwidth.
On Wed, 2007-01-17 at 08:50 -0500, Trond Myklebust wrote:
> On Wed, 2007-01-17 at 09:49 +0100, Peter Zijlstra wrote:
> > > They are certainly _not_ dirty pages. They are pages that have been
> > > written to the server but are not yet guaranteed to have hit the disk
> > > (they were only written
On Wed, 2007-01-17 at 09:49 +0100, Peter Zijlstra wrote:
> > They are certainly _not_ dirty pages. They are pages that have been
> > written to the server but are not yet guaranteed to have hit the disk
> > (they were only written to the server's page cache). We don't care if
> > they are paged
On Wed, 2007-01-17 at 01:15 -0500, Trond Myklebust wrote:
> On Wed, 2007-01-17 at 03:41 +0100, Peter Zijlstra wrote:
> > On Tue, 2007-01-16 at 17:27 -0500, Trond Myklebust wrote:
> > > On Tue, 2007-01-16 at 23:08 +0100, Peter Zijlstra wrote:
> > > > Subject: nfs: fix congestion control
> > > >
>
On Wed, 2007-01-17 at 03:41 +0100, Peter Zijlstra wrote:
> On Tue, 2007-01-16 at 17:27 -0500, Trond Myklebust wrote:
> > On Tue, 2007-01-16 at 23:08 +0100, Peter Zijlstra wrote:
> > > Subject: nfs: fix congestion control
> > >
> > > The current NFS client congestion logic is severely broken, it
On Tue, 2007-01-16 at 17:27 -0500, Trond Myklebust wrote:
> On Tue, 2007-01-16 at 23:08 +0100, Peter Zijlstra wrote:
> > Subject: nfs: fix congestion control
> >
> > The current NFS client congestion logic is severely broken, it marks the
> > backing device congested during each nfs_writepages()
On Tue, 2007-01-16 at 23:08 +0100, Peter Zijlstra wrote:
> Subject: nfs: fix congestion control
>
> The current NFS client congestion logic is severely broken, it marks the
> backing device congested during each nfs_writepages() call and implements
> its own waitqueue.
>
> Replace this by a
Subject: nfs: fix congestion control
The current NFS client congestion logic is severely broken, it marks the
backing device congested during each nfs_writepages() call and implements
its own waitqueue.
Replace this by a more regular congestion implementation that puts a cap
on the number of