On Sun, 8 Jul 2001, Linus Torvalds wrote:
> On Sun, 8 Jul 2001 [EMAIL PROTECTED] wrote:
> >
> > mm/highmem.c/copy_from_high_bh() blocks interrupts while copying "down"
> > to a bounce buffer, for writing.
> > This function is only ever called from create_bounce() (which cannot
> > be called from an
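For readers without the source to hand, this is roughly the construct under discussion - a sketch assuming the 2.4-era kmap_atomic()/KM_BOUNCE_READ API, not a verbatim quote of mm/highmem.c:

	/*
	 * Sketch of the bounce-down copy (reconstruction, not verbatim).
	 * Blocking interrupts is the point at issue: it stops an IRQ
	 * handler on this CPU re-entering and reusing the same atomic
	 * kmap slot mid-copy.
	 */
	static inline void copy_from_high_bh(struct buffer_head *to,
					     struct buffer_head *from)
	{
		char *vfrom;
		unsigned long flags;

		__save_flags(flags);
		__cli();
		vfrom = kmap_atomic(from->b_page, KM_BOUNCE_READ);
		memcpy(to->b_data, vfrom + bh_offset(from), to->b_size);
		kunmap_atomic(vfrom, KM_BOUNCE_READ);
		__restore_flags(flags);
	}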
On Wed, 30 May 2001, Jens Axboe wrote:
> On Wed, May 30 2001, Mark Hemment wrote:
> > This can lead to attempt_merge() releasing the embedded request
> > structure (which, as an exact copy, has the ->q set, so to
> > blkdev_release_request() it looks like a reque
c) but no one posted any feedback.
I've included some of the original message below.
Mark
--
From [EMAIL PROTECTED] Sat Mar 31 16:07:14 2001 +0100
Date: Sat, 31 Mar 2001 16:07:13 +0100 (BST)
From: Mark Hemment <[EMAIL PROTECT
Hi Jens, all,
In drivers/block/ll_rw_blk.c:blk_dev_init(), the high and low queued
sectors are calculated from the total number of free pages in all memory
zones. Shouldn't this calculation be based upon the number of pages upon
which I/O can be done directly (ie. without bounce pages)?
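A minimal sketch of what that would look like - assuming a helper that counts only pages below the highmem boundary (nr_free_buffer_pages() is used here for that purpose); the fractions are illustrative, not a patch:

	/*
	 * Sketch only: derive the I/O throttling watermarks from memory
	 * that can take direct (non-bounce) I/O, rather than all zones.
	 */
	void init_queued_sector_limits(void)
	{
		/* sectors' worth of directly-usable memory
		 * (512-byte sectors: shift by PAGE_SHIFT - 9) */
		unsigned long low_sectors =
			nr_free_buffer_pages() << (PAGE_SHIFT - 9);

		high_queued_sectors = (low_sectors * 2) / 3;	/* throttle */
		low_queued_sectors = high_queued_sectors / 3;	/* resume */
	}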
Hi Jens,
I ran this (well, cut-two) on a 4-way box with 4GB of memory and a
modified qlogic fibre channel driver with 32 disks hanging off it, without
any problems. The test used was SpecFS 2.0.
Performance is definitely up - but I can't give an exact number, as the
run with this patch was
On Wed, 30 May 2001, Jens Axboe wrote:
> On Wed, May 30 2001, Mark Hemment wrote:
> > Hi Jens,
> >
> > I ran this (well, cut-two) on a 4-way box with 4GB of memory and a
> > modified qlogic fibre channel driver with 32 disks hanging off it, without
> > any problems. The test used was SpecFS 2.0
Cool
On Fri, 11 May 2001, null wrote:
> Time to mkfs the same two 5GB LUNs in parallel is 54 seconds. Hmmm.
> Bandwidth on two CPUs is totally consumed (99.9%) and a third CPU is
> usually consumed by the kupdated process. Activity lights on the storage
> device are mostly idle during this time.
On Wed, 9 May 2001, Marcelo Tosatti wrote:
> On Wed, 9 May 2001, Mark Hemment wrote:
> > Could introduce another allocation flag (__GFP_FAIL?) which is or'ed
> > with a __GFP_WAIT to limit the looping?
>
> __GFP_FAIL is in the -ac tree already and it is being used by the bounce
> buffer allocation
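As a sketch of the semantics being proposed (the flag value, the helper name and the surrounding loop are assumptions, not the -ac tree patch):

	/*
	 * Illustrative only: how a __GFP_FAIL flag bounds the retry
	 * loop in a 2.4-style __alloc_pages().
	 */
	#define __GFP_FAIL	0x100	/* hypothetical bit */

	static struct page *alloc_pages_sketch(unsigned int gfp_mask,
					       unsigned int order)
	{
		struct page *page;

	try_again:
		page = rmqueue_sketch(order);	/* stand-in for rmqueue() */
		if (page)
			return page;
		if (!(gfp_mask & __GFP_WAIT))
			return NULL;		/* atomic callers never loop */

		try_to_free_pages(gfp_mask);
		if (gfp_mask & __GFP_FAIL)
			return NULL;		/* bounded effort requested */
		goto try_again;
	}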
On Tue, 8 May 2001, David S. Miller wrote:
> Actually, the change was made because it is illogical to try only
> once on multi-order pages. Especially because we depend upon order
> 1 pages so much (every task struct allocated). We depend upon them
> even more so on sparc64 (certain kinds of page
In 2.4.3pre6, code in page_alloc.c:__alloc_pages() changed from:

	try_to_free_pages(gfp_mask);
	wakeup_bdflush();
	if (!order)
		goto try_again;

to:

	try_to_free_pages(gfp_mask);
	wakeup_bdflush();
	goto try_again;

This introduced
Hi,
d_move() in fs/dcache.c checks that the kernel lock is held
(switch_names() does the same, but is only called from d_move()).
My question is: why?
I can't see what it is using the kernel lock to sync/protect against.
Anyone out there know?
Thanks,
Mark
Marcelo,
In fact, the test can be made even weaker than that.
We only want to avoid the inactive-clean list when allocating from
within an interrupt (or from a bottom-half handler) to avoid
deadlock on taking the pagecache_lock and pagemap_lru_lock.
Note: no allocations are done while
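A sketch of the weaker test - assuming in_interrupt() covers both hard-IRQ and bottom-half context, as it does on 2.4; the surrounding function is illustrative:

	/*
	 * Sketch: gate the inactive-clean reclaim on execution context,
	 * not on gfp flags alone.  reclaim_page() is the 2.4 helper that
	 * takes pagemap_lru_lock; calling it from IRQ/BH context could
	 * deadlock against a process-context holder of the lock.
	 */
	struct page *try_inactive_clean(zone_t *zone, unsigned int gfp_mask)
	{
		if (in_interrupt())
			return NULL;	/* avoid the lock deadlock */
		return reclaim_page(zone);
	}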
I believe David Miller's latest zero-copy patches might help here.
In his patch, the pull-up buffer is now allocated near the top of stack
(in the sunrpc code), so it can be a blocking allocation.
This doesn't fix the core VM problems, but does relieve the pressure
_slightly_ on the VM (I
Hi,
I've never seen these trigger, but they look theoretically possible.
When processing the completion of a SCSI request in a bottom-half,
__scsi_end_request() can find that all the buffers associated with the
request haven't been completed (ie. leftovers).
One question is: can this ever
Hi,
Two performance changes against 2.4.3.
flush_all_zero_pkmaps() is guarding against a race which cannot happen,
and thus hurting performance.
It uses the atomic fetch-and-clear "ptep_get_and_clear()" operation,
which is much stronger than needed. No-one has the page mapped, and
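A sketch of the cheaper loop being proposed - reconstructed from the description, following 2.4's pkmap bookkeeping where a count of 1 means mapped-but-unused; not a verbatim quote of the patch:

	/*
	 * Sketch: no-one can hold a mapping of a count==1 pkmap entry
	 * without first taking kmap_lock (held by the caller), so a
	 * plain read + pte_clear() replaces ptep_get_and_clear().
	 */
	void flush_all_zero_pkmaps_sketch(void)
	{
		int i;

		for (i = 0; i < LAST_PKMAP; i++) {
			struct page *page;

			if (pkmap_count[i] != 1)	/* 0: free, >1: in use */
				continue;
			pkmap_count[i] = 0;

			if (pte_none(pkmap_page_table[i]))
				BUG();	/* count said it was mapped */

			page = pte_page(pkmap_page_table[i]);	/* plain read */
			pte_clear(&pkmap_page_table[i]);	/* non-atomic */
			page->virtual = NULL;
		}
		flush_tlb_all();
	}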
On Fri, 2 Mar 2001, Manfred Spraul wrote:
> Quoting Mark Hemment <[EMAIL PROTECTED]>:
> > Could be a win on archs with small L1 cache line sizes (16 bytes on a
> > 486) - but most modern processors have larger lines.
>
> IIRC cache colouring was introduced for some sun hardware with 2 memory
> busses
On Thu, 1 Mar 2001, Manfred Spraul wrote:
> Yes, I see the difference, but I'm not sure that it will work as
> intended.
> offset must be a multiple of the alignment, everything else won't work.
The code does force the offset to be a multiple of the alignment -
rounding the offset up. The idea
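The rounding itself is the usual align-up idiom; a short sketch with illustrative values (not the kmem_cache_create() code itself):

	size_t align = 32;	/* e.g. L1 cache line size */
	size_t offset = 40;	/* caller-requested colour offset */

	/* Round up to a multiple of the (power-of-2) alignment. */
	offset = (offset + align - 1) & ~(align - 1);	/* 40 -> 64 */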
On Thu, 1 Mar 2001, Manfred Spraul wrote:
> Mark Hemment wrote:
> >
> > The original idea behind offset was for objects with a "hot" area
> > greater than a single L1 cache line. By using offset correctly (and to my
> > knowledge it has never been used anywhere in the Linux kernel), a SL
On Thu, 1 Mar 2001, Manfred Spraul wrote:
> Alan added a CONFIG option for FORCED_DEBUG slab debugging, but there
> is one minor problem with FORCED_DEBUG: FORCED_DEBUG disables
> HW_CACHEALIGN, and several drivers assume that HW_CACHEALIGN implies a
> certain alignment (iirc usb/uhci.c assumes
On Thu, 22 Feb 2001, Neil Brown wrote:
> On Sunday February 18, [EMAIL PROTECTED] wrote:
> > Hi Neil, all,
> >
> > The nfs daemons run holding the global kernel lock. They still hold
> > this lock over calls to file_op's read and write.
> > [snip]
> > Dropping the kernel lock around read and write in
Hi Neil, all,
The nfs daemons run holding the global kernel lock. They still hold
this lock over calls to file_op's read and write.
The file system kernel interface (FSKI) doesn't require the kernel lock
to be held over these read/write calls. The nfs daemons do not require
that the
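A sketch of what dropping the lock around the call looks like, assuming the 2.4 lock_kernel()/unlock_kernel() API; the wrapper function is illustrative, not the actual fs/nfsd change:

	/*
	 * Illustrative only: release the global kernel lock across the
	 * file_op's read, since the VFS read path doesn't need it, then
	 * retake it for the rest of nfsd processing.
	 */
	ssize_t nfsd_read_sketch(struct file *file, char *buf,
				 size_t count, loff_t *pos)
	{
		ssize_t err;

		unlock_kernel();
		err = file->f_op->read(file, buf, count, pos);
		lock_kernel();
		return err;
	}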
Hi,
On a 4GB SMP box, configured with HIGHMEM support, making a 670G
(obviously using a volume manager) ext2 file system takes 12 minutes (over
10 minutes of sys time).
One problem is that buffer allocations do not use HIGHMEM, but
nr_free_buffer_pages() doesn't take this into account, causing
Hi,
If two processes, sharing the same page tables, hit an unloaded vmalloc
address in the kernel at the same time, one of the processes is killed
(with the message "Unable to handle kernel paging request").
This occurs because the test on a vmalloc fault is too tight. On x86,
it contains:
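The test itself is cut off above; what follows is a reconstruction of the shape of the fix from the description (an assumption, not the literal arch/i386/mm/fault.c code): the loser of the race must treat an already-filled entry as success.

	/*
	 * Sketch: two processes sharing page tables fault on the same
	 * vmalloc address; the second must not treat an entry the first
	 * has already copied from init_mm as a fatal fault.
	 */
	int vmalloc_fault_sketch(struct mm_struct *mm, unsigned long address)
	{
		pgd_t *pgd = pgd_offset(mm, address);
		pgd_t *pgd_k = pgd_offset_k(address);

		if (!pgd_present(*pgd_k))
			return -1;		/* genuinely bad address */
		if (!pgd_present(*pgd))
			set_pgd(pgd, *pgd_k);	/* copy entry from init_mm */
		/* else: the racing process beat us to it - the old,
		 * tighter test killed the process in this case. */
		return 0;
	}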
Hi,
Several places in the kernel run holding the global kernel lock when it
isn't needed. This usually occurs when there is a function which can be
called via many different paths; some holding the lock, others not.
Now, if a function can block (and hence drop the kernel lock) the caller
Hi Paul,
> 2) Other block I/O output (eg dd if=/dev/zero of=/dev/sdi bs=4M) also
> run very slowly
What do you notice when running "top" and doing the above?
Does the "buff" value grow high (+700MB), with high CPU usage?
If so, I think this might be down to nr_free_buffer_pages().
This
On Fri, 29 Dec 2000, Tim Wright wrote:
> Yes, this is a very important point if we ever want to make serious use
> of large memory machines on ia32. We ran into this with DYNIX/ptx when the
> P6 added 36-bit physical addressing. Conserving KVA (kernel virtual address
> space), became a very high
Hi,
On Thu, 28 Dec 2000, David S. Miller wrote:
> Date: Thu, 28 Dec 2000 23:17:22 +0100
> From: Andi Kleen <[EMAIL PROTECTED]>
>
> Would you consider patches for any of these points?
>
> To me it seems just as important to make sure struct page is
> a power of 2 in size, with the waitq
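The size point is about mem_map arithmetic: with sizeof(struct page) a power of 2, pointer-difference and indexing compile to shifts instead of divides by an awkward constant. A small self-contained illustration (stand-in structs, not the real struct page layout):

	#include <stdio.h>

	/* Stand-ins, not the real struct page. */
	struct page_pow2 { unsigned long w[8]; };  /* 32 bytes on ILP32 */
	struct page_odd  { unsigned long w[9]; };  /* 36 bytes on ILP32 */

	int main(void)
	{
		struct page_pow2 map2[4];
		struct page_odd  mapo[4];

		/* Both divide by the element size; only the power-of-2
		 * case becomes a subtract + shift in the generated code,
		 * which matters on hot paths like mem_map lookups. */
		printf("%ld %ld\n", (long)(&map2[3] - map2),
				    (long)(&mapo[3] - mapo));
		return 0;
	}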
Hi,
Looking at the second loop in elevator_linus_merge(), it is possible for
requests to have their elevator_sequence go negative. This can cause a
very long latency before the request is finally serviced.
Say, for example, a request (in the queue) is jumped in the first loop
in
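A sketch of the aging loop at issue, reconstructed from the description (not a verbatim quote of the 2.4 block layer): after a successful merge, every request behind the merge point is aged with an unconditional decrement, so a sequence already at 0 underflows.

	/*
	 * Sketch of the second loop in elevator_linus_merge() as
	 * described: nothing stops elevator_sequence going below zero.
	 */
	if (ret != ELEVATOR_NO_MERGE && *req) {
		while ((entry = entry->next) != &q->queue_head) {
			struct request *tmp =
				blkdev_entry_to_request(entry);
			tmp->elevator_sequence--;	/* can underflow past 0 */
		}
	}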
Hi Tigran,
On Wed, 11 Oct 2000, Tigran Aivazian wrote:
> a) one of the eepro100 interfaces (the onboard one on the S2QR6 mb) is
> malfunctioning, interrupts are generated but no traffic gets through (YES,
> I did plug it in correctly, this time, and I repeat 2.2.16 works!)
I saw this the
Hi,
On Mon, 25 Sep 2000, Stephen C. Tweedie wrote:
> So you have run out of physical memory --- what do you do about it?
Why let the system get into the state where it is necessary to kill a
process?
Per-user/task resource counters should prevent unprivileged users from
soaking up too