Hi,
On Fri, Mar 23, 2001 at 10:13:44PM +, Alan Cox wrote:
> >
> > I really need to know any *specific* issues with RAWIO.
>
> All I know is that Stephen said he had a set of patches needed to fix rawio.
> I've not applied them nor afaik has Linus.
Ben LaHaise has been testing Oracle on raw IO
Hi,
We know from debugging tlb IPIs recently that it is possible for a
single IPI to be delivered more than once to a CPU on Intel SMP. (The
most common mechanism seems to be a CPU detecting CRC failure of a
multicast IPI, and the sender resending the entire multicast so that
any CPU which did
Hi,
On Thu, Mar 22, 2001 at 05:21:46PM -0500, Richard Jerrell wrote:
> 2.4.1 has a memory leak (temporary) where anonymous memory pages that have
> been moved into the swap cache will stick around after their vma has been
> unmapped by the owning process. These pages are not free'd in free_pte()
Hi,
On Sat, Feb 17, 2001 at 02:15:40PM -0800, Fireball Freddy wrote:
>
> o Eliminate BUF_CLEAN, BUF_DIRTY, and BUF_LOCKED
> lists in favor of a single BUF_LRU list. This because
> I don't see the point of maintaining three lists...
> the only time I need to find all the dirty blocks is
> on a sync
Hi,
On Tue, Oct 17, 2000 at 09:42:36PM -0700, Linus Torvalds wrote:
> Now, the way I've always envisioned this to work is that the VM scanning
> function basically always does the equivalent of just
>
> - get PTE entry, clear it out.
> - if PTE was dirty, add the page to the swap cache, and mark
Hi,
On Tue, Oct 17, 2000 at 10:02:52AM -0400, Byron Stanoszek wrote:
> I am very unimpressed with the current OOM killer. After 10 days of online
> time, I decided to try compiling gcc again, the very culprit that killed my
> last system using 2.4.0-test8 Friday night (to which I was unable to
Hi,
On Tue, Oct 17, 2000 at 03:57:52PM -0600, Jeff V. Merkey wrote:
>
> Were Linux to go totally modular in 2.5, development cycles will be
> reduced by 1/2 to 1/3. This is because you could always roll back to
> known good modules to post a release.
Most of the big 2.4 module changes involved
Hi,
On Tue, Oct 17, 2000 at 12:31:24AM -0400, John Kacur wrote:
> I'm trying to understand how the proc file system works. In particular
> I'd like to know more about the algorithm by which the information is
> updated and how frequently.
It is "live": the file contents are generated on demand
Hi,
On Wed, Oct 18, 2000 at 03:23:17PM +0200, Andrea Arcangeli wrote:
> This change makes sense and I agree it would cover the problem. However I
> prefer to clarify that doing it for the swap cache as described is not nearly
> enough to cover the mm corruption (everything that gets written via
Hi,
On Tue, Oct 17, 2000 at 09:26:07PM -0700, Linus Torvalds wrote:
> I hated people mis-using it the way it's being done by the sound drivers,
> but because I also realize that it allows for some simplifications I do
> accept it - it's basically an ugly hack that doesn't really matter because
Hi,
On Wed, Oct 18, 2000 at 01:00:48AM +0200, Andrea Arcangeli wrote:
> On Tue, Oct 17, 2000 at 02:04:10PM -0700, Linus Torvalds wrote:
> > It so happens that the vmscan stuff won't ever remove a physical page
> > mapping, but that's simply because such a page CANNOT be swapped out. How
>
> So if I
Hi,
On Tue, Oct 17, 2000 at 10:06:35PM +0200, Andrea Arcangeli wrote:
> > also don't see why any bug with kiobufs can't be fixed without the
> > expensive and complex pinning.
>
> IMHO pinning the page in the pte is less expensive and less complex than making
> rawio and the VM aware of those issues.
Hi,
On Mon, Oct 16, 2000 at 08:11:55PM -0700, Linus Torvalds wrote:
> I'm sure this bug will get fixed too. And the fix probably won't end up
> even being all that painful - it's probably a question of marking the page
> dirty after completing IO into it and making sure the swap-out logic does
Hi,
On Tue, Oct 17, 2000 at 12:13:49AM +0200, Andrea Arcangeli wrote:
>
> Correct. But the problem is that the page won't stay in physical memory after
> we finished the I/O because swap cache with page count 1 will be freed by the
> VM.
Rik has been waiting for an excuse to get deferred swapout
Hi,
On Mon, Oct 16, 2000 at 12:08:54AM +0200, Andrea Arcangeli wrote:
> The basic problem is that map_user_kiobuf tries to map those pages calling an
> handle_mm_fault on their virtual addresses and it's thinking that when
> handle_mm_fault returns 1 the page is mapped. That's wrong.
Good point
Hi,
On Fri, Oct 13, 2000 at 12:30:49PM +0100, Malcolm Beattie wrote:
> free_kiovec(1, iobuf); /* does an implicit unlock_kiovec */
>
> It doesn't do an unmap_kiobuf(iobuf) so I don't understand where
> the per-page map->count that map_user_kiobuf incremented gets
> decremented again. Anyone?
Hi,
On Fri, Oct 13, 2000 at 08:17:42PM -0400, Ben LaHaise wrote:
>
> Below is take two of the patch making pte_clear use atomic xchg in an
> effort to avoid the loss of dirty bits. PAE no longer uses cmpxchg8 for
> updates; set_pte is two ordered long writes with a barrier.
Looks good. The only
Hi,
On Wed, Oct 11, 2000 at 09:28:57PM +0100, Chris Evans wrote:
>
> The problem is best described with a little sequence. After using raw i/o
> facilities, streamed block device reads from the same underlying device
> exhibit much poorer performance than before the raw i/o.
>
> Anyone know what's