Going through all the discussions once again and trying to look at this
from the point of view of just basic requirements for data structures and
mechanisms that they imply.
1. Should have a data structure that represents a memory chain, which may
not be contiguous in physical memory, and
Hi!
> > So you consider inability to select() on regular files _feature_?
>
> select on files is unimplementable. You can't do background file IO the
> same way you do background receiving of packets on socket. Filesystem is
> synchronous. It can block.
You can use helper friends if VFS layer
Linus Torvalds wrote:
> Absolutely. This is exactly what I mean by saying that low-level drivers
> may not actually be able to handle new cases that they've never been asked
> to do before - they just never saw anything like a 64kB request before or
> something that crossed its own alignment.
>
Linus Torvalds wrote:
>
> On Thu, 8 Feb 2001, Rik van Riel wrote:
>
> > On Thu, 8 Feb 2001, Mikulas Patocka wrote:
> >
> > > > > You need aio_open.
> > > > Could you explain this?
> > >
> > > If the server is sending many small files, disk spends huge
> > > amount time walking directory tree
Hi,
On Thu, Feb 08, 2001 at 03:52:35PM +0100, Mikulas Patocka wrote:
>
> > How do you write high-performance ftp server without threads if select
> > on regular file always returns "ready"?
>
> No, it's not really possible on Linux. Use SYS$QIO call on VMS :-)
Ahh, but even VMS SYS$QIO is
On Thu, 8 Feb 2001, Rik van Riel wrote:
> On Thu, 8 Feb 2001, Mikulas Patocka wrote:
>
> > > > You need aio_open.
> > > Could you explain this?
> >
> > If the server is sending many small files, disk spends huge
> > amount time walking directory tree and seeking to inodes. Maybe
> > opening
On Thu, 8 Feb 2001, Marcelo Tosatti wrote:
>
> On Thu, 8 Feb 2001, Stephen C. Tweedie wrote:
>
>
>
> > > How do you write high-performance ftp server without threads if select
> > > on regular file always returns "ready"?
> >
> > Select can work if the access is sequential, but async IO is
On Thu, 8 Feb 2001, Martin Dalecki wrote:
> >
> > But you'll have a bitch of a time trying to merge multiple
> > threads/processes reading from the same area on disk at roughly the same
> > time. Your higher levels won't even _know_ that there is merging to be
> > done until the IO requests
On Thu, 8 Feb 2001, Pavel Machek wrote:
> >
> > There are currently no other alternatives in user space. You'd have to
> > create whole new interfaces for aio_read/write, and ways for the kernel to
> > inform user space that "now you can re-try submitting your IO".
>
> Why is current select()
On Thu, 8 Feb 2001, Mikulas Patocka wrote:
> > > You need aio_open.
> > Could you explain this?
>
> If the server is sending many small files, disk spends huge
> amount time walking directory tree and seeking to inodes. Maybe
> opening the file is even slower than reading it
Not if you have a
On Thu, 8 Feb 2001, Mikulas Patocka wrote:
> > > The problem is that aio_read and aio_write are pretty useless for ftp or
> > > http server. You need aio_open.
> >
> > Could you explain this?
>
> If the server is sending many small files, disk spends huge amount time
> walking directory
On Thu, Feb 08 2001, Mikulas Patocka wrote:
> > Even async IO (ie aio_read/aio_write) should block on the request queue if
> > its full in Linus mind.
>
> This is not problem (you can create queue big enough to handle the load).
Well in theory, but in practice this isn't a very good idea. At
On Thu, 8 Feb 2001, Mikulas Patocka wrote:
> > > > How do you write high-performance ftp server without threads if select
> > > > on regular file always returns "ready"?
> > >
> > > Select can work if the access is sequential, but async IO is a more
> > > general solution.
> >
> > Even async
On Thu, 8 Feb 2001, Marcelo Tosatti wrote:
>
> On Thu, 8 Feb 2001, Ben LaHaise wrote:
>
>
>
> > > (besides, latency would suck. I bet you're better off waiting for the
> > > requests if they are all used up. It takes too long to get deep into the
> > > kernel from user space, and you cannot
On Thu, 8 Feb 2001, Ben LaHaise wrote:
> > (besides, latency would suck. I bet you're better off waiting for the
> > requests if they are all used up. It takes too long to get deep into the
> > kernel from user space, and you cannot use the exclusive waiters with its
> > anti-herd behaviour
On Tue, 6 Feb 2001, Linus Torvalds wrote:
> There are currently no other alternatives in user space. You'd have to
> create whole new interfaces for aio_read/write, and ways for the kernel to
> inform user space that "now you can re-try submitting your IO".
>
> Could be done. But that's a big
Hi!
> So you consider inability to select() on regular files _feature_?
select on files is unimplementable. You can't do background file IO the
same way you do background receiving of packets on socket. Filesystem is
synchronous. It can block.
> It can be a pretty serious problem with slow
On Thu, 8 Feb 2001, Pavel Machek wrote:
> Hi!
>
> > > Its arguing against making a smart application block on the disk while its
> > > able to use the CPU for other work.
> >
> > There are currently no other alternatives in user space. You'd have to
> > create whole new interfaces for
On Thu, 8 Feb 2001, Stephen C. Tweedie wrote:
> > How do you write high-performance ftp server without threads if select
> > on regular file always returns "ready"?
>
> Select can work if the access is sequential, but async IO is a more
> general solution.
Even async IO (ie
Hi,
On Thu, Feb 08, 2001 at 12:15:13AM +0100, Pavel Machek wrote:
>
> > EAGAIN is _not_ a valid return value for block devices or for regular
> > files. And in fact it _cannot_ be, because select() is defined to always
> > return 1 on them - so if a write() were to return EAGAIN, user space
Linus Torvalds wrote:
>
> On Tue, 6 Feb 2001, Ben LaHaise wrote:
> >
> > On Tue, 6 Feb 2001, Stephen C. Tweedie wrote:
> >
> > > The whole point of the post was that it is merging, not splitting,
> > > which is troublesome. How are you going to merge requests without
> > > having chains of
Hi!
> > Its arguing against making a smart application block on the disk while its
> > able to use the CPU for other work.
>
> There are currently no other alternatives in user space. You'd have to
> create whole new interfaces for aio_read/write, and ways for the kernel to
> inform user space
Hi!
> > > Reading write(2):
> > >
> > >EAGAIN Non-blocking I/O has been selected using O_NONBLOCK and there was
> > > no room in the pipe or socket connected to fd to write the data
> > > immediately.
> > >
> > > I see no reason why "aio function have to
On Tue, Feb 06, 2001 at 10:14:21AM -0800, Linus Torvalds wrote:
> I will claim that you CANNOT merge at higher levels and get good
> performance.
>
> Sure, you can do read-ahead, and try to get big merges that way at a high
> level. Good for you.
>
> But you'll have a bitch of a time trying to
On Tue, Feb 06, 2001 at 10:14:21AM -0800, Linus Torvalds wrote:
I will claim that you CANNOT merge at higher levels and get good
performance.
Sure, you can do read-ahead, and try to get big merges that way at a high
level. Good for you.
But you'll have a bitch of a time trying to merge
On Wednesday February 7, [EMAIL PROTECTED] wrote:
>
>
> On Wed, 7 Feb 2001, Christoph Hellwig wrote:
>
> > On Tue, Feb 06, 2001 at 12:59:02PM -0800, Linus Torvalds wrote:
> > >
> > > Actually, they really aren't.
> > >
> > > They kind of _used_ to be, but more and more they've moved away
Hi,
On Wed, Feb 07, 2001 at 12:12:44PM -0700, Richard Gooch wrote:
> Stephen C. Tweedie writes:
> >
> > Sorry? I'm not sure where communication is breaking down here, but
> > we really don't seem to be talking about the same things. SGI's
> > kiobuf request patches already let us pass a large
Stephen C. Tweedie writes:
> Hi,
>
> On Tue, Feb 06, 2001 at 06:37:41PM -0800, Linus Torvalds wrote:
> > Absolutely. And this is independent of what kind of interface we end up
> > using, whether it be kiobuf of just plain "struct buffer_head". In that
> > respect they are equivalent.
>
>
On Wed, Feb 07, 2001 at 10:36:47AM -0800, Linus Torvalds wrote:
>
>
> On Wed, 7 Feb 2001, Christoph Hellwig wrote:
>
> > On Tue, Feb 06, 2001 at 12:59:02PM -0800, Linus Torvalds wrote:
> > >
> > > Actually, they really aren't.
> > >
> > > They kind of _used_ to be, but more and more they've
On Wed, 7 Feb 2001, Christoph Hellwig wrote:
> On Tue, Feb 06, 2001 at 12:59:02PM -0800, Linus Torvalds wrote:
> >
> > Actually, they really aren't.
> >
> > They kind of _used_ to be, but more and more they've moved away from that
> > historical use. Check in particular the page cache, and
On Tue, Feb 06, 2001 at 09:35:58PM +0100, Ingo Molnar wrote:
> caching bmap() blocks was a recent addition around 2.3.20, and i suggested
> some time ago to cache pagecache blocks via explicit entries in struct
> page. That would be one solution - but it creates overhead.
>
> but there isnt
On Tue, Feb 06, 2001 at 12:59:02PM -0800, Linus Torvalds wrote:
>
>
> On Tue, 6 Feb 2001, Christoph Hellwig wrote:
> >
> > The second is that bh's are two things:
> >
> > - a cacheing object
> > - an io buffer
>
> Actually, they really aren't.
>
> They kind of _used_ to be, but more and
Hi,
On Tue, Feb 06, 2001 at 06:37:41PM -0800, Linus Torvalds wrote:
> >
> However, I really _do_ want to have the page cache have a bigger
> granularity than the smallest memory mapping size, and there are always
> special cases that might be able to generate IO in bigger chunks (ie
> in-kernel
Hi,
On Wed, Feb 07, 2001 at 09:10:32AM +, David Howells wrote:
>
> I presume that correct_size will always be a power of 2...
Yes.
--Stephen
Linus Torvalds <[EMAIL PROTECTED]> wrote:
> Actually, I'd rather leave it in, but speed it up with the saner and
> faster
>
> if (bh->b_size & (correct_size-1)) {
I presume that correct_size will always be a power of 2...
David
On Wed, 7 Feb 2001, Stephen C. Tweedie wrote:
>
> > "struct buffer_head" can deal with pretty much any size: the only thing it
> > cares about is bh->b_size.
>
> Right now, anything larger than a page is physically non-contiguous,
> and sorry if I didn't make that explicit, but I thought that
On Tue, Feb 06 2001, Linus Torvalds wrote:
> > > [...] so I would be _really_ nervous about just turning it on
> > > silently. This is all very much a 2.5.x-kind of thing ;)
> >
> > Then you might want to apply this :-)
> >
> > --- drivers/block/ll_rw_blk.c~ Wed Feb 7 02:38:31 2001
> >
Hi,
On Tue, Feb 06, 2001 at 04:50:19PM -0800, Linus Torvalds wrote:
>
>
> On Wed, 7 Feb 2001, Stephen C. Tweedie wrote:
> >
> > That gets us from 512-byte blocks to 4k, but no more (ll_rw_block
> > enforces a single blocksize on all requests but that relaxing that
> > requirement is no big
On Wed, 7 Feb 2001, Jens Axboe wrote:
>
> > [...] so I would be _really_ nervous about just turning it on
> > silently. This is all very much a 2.5.x-kind of thing ;)
>
> Then you might want to apply this :-)
>
> --- drivers/block/ll_rw_blk.c~ Wed Feb 7 02:38:31 2001
> +++
On Wed, 7 Feb 2001, Stephen C. Tweedie wrote:
> >
> > The fact is, if you have problems like the above, then you don't
> > understand the interfaces. And it sounds like you designed kiobuf support
> > around the wrong set of interfaces.
>
> They used the only interfaces available at the
On Tue, Feb 06 2001, Linus Torvalds wrote:
> > I don't see anything that would break doing this, in fact you can
> > do this as long as the buffers are all at least a multiple of the
> > block size. All the drivers I've inspected handle this fine, noone
> > assumes that rq->bh->b_size is the same
Hi,
On Tue, Feb 06, 2001 at 04:41:21PM -0800, Linus Torvalds wrote:
>
> On Wed, 7 Feb 2001, Stephen C. Tweedie wrote:
> > No, it is a problem of the ll_rw_block interface: buffer_heads need to
> > be aligned on disk at a multiple of their buffer size.
>
> Ehh.. True of ll_rw_block() and
On Wed, 7 Feb 2001, Ingo Molnar wrote:
>
> most likely some coding error on your side. buffer-size mismatches should
> show up as filesystem corruption or random DMA scribble, not in-driver
> oopses.
I'm not sure. If I was a driver writer (and I'm happy those days are
mostly behind me ;), I
On Wed, 7 Feb 2001, Jens Axboe wrote:
>
> I don't see anything that would break doing this, in fact you can
> do this as long as the buffers are all at least a multiple of the
> block size. All the drivers I've inspected handle this fine, noone
> assumes that rq->bh->b_size is the same in all
On Wed, Feb 07, 2001 at 02:06:27AM +0100, Ingo Molnar wrote:
>
> On Tue, 6 Feb 2001, Jeff V. Merkey wrote:
>
> > > I don't see anything that would break doing this, in fact you can
> > > do this as long as the buffers are all at least a multiple of the
> > > block size. All the drivers I've
On Wed, 7 Feb 2001, Jens Axboe wrote:
> > > Adaptec drivers had an oops. Also, AIC7XXX also had some oops with it.
> >
> > most likely some coding error on your side. buffer-size mismatches should
> > show up as filesystem corruption or random DMA scribble, not in-driver
> > oopses.
>
> I
On Wed, Feb 07, 2001 at 02:08:53AM +0100, Jens Axboe wrote:
> On Tue, Feb 06 2001, Jeff V. Merkey wrote:
> > Adaptec drivers had an oops. Also, AIC7XXX also had some oops with it.
>
> Do you still have this oops?
>
I can recreate. Will work on it tomorrow. SCI testing today.
Jeff
> --
>
On Tue, Feb 06 2001, Jeff V. Merkey wrote:
> Adaptec drivers had an oops. Also, AIC7XXX also had some oops with it.
Do you still have this oops?
--
Jens Axboe
On Wed, Feb 07 2001, Ingo Molnar wrote:
> > > So I would appreciate pointers to these devices that break so we
> > > can inspect them.
> > >
> > > --
> > > Jens Axboe
> >
> > Adaptec drivers had an oops. Also, AIC7XXX also had some oops with it.
>
> most likely some coding error on your side.
On Tue, 6 Feb 2001, Jeff V. Merkey wrote:
> > I don't see anything that would break doing this, in fact you can
> > do this as long as the buffers are all at least a multiple of the
> > block size. All the drivers I've inspected handle this fine, noone
> > assumes that rq->bh->b_size is the
On Wed, Feb 07, 2001 at 02:02:21AM +0100, Jens Axboe wrote:
> On Tue, Feb 06 2001, Jeff V. Merkey wrote:
> > I remember Linus asking to try this variable buffer head chaining
> > thing 512-1024-512 kind of stuff several months back, and mixing them to
> > see what would happen -- result. About
On Wed, Feb 07, 2001 at 02:01:54AM +0100, Ingo Molnar wrote:
>
> On Tue, 6 Feb 2001, Jeff V. Merkey wrote:
>
> > I remember Linus asking to try this variable buffer head chaining
> > thing 512-1024-512 kind of stuff several months back, and mixing them
> > to see what would happen -- result.
On Tue, 6 Feb 2001, Jeff V. Merkey wrote:
> I remember Linus asking to try this variable buffer head chaining
> thing 512-1024-512 kind of stuff several months back, and mixing them
> to see what would happen -- result. About half the drivers break with
> it. [...]
time to fix them then -
On Tue, Feb 06 2001, Jeff V. Merkey wrote:
> I remember Linus asking to try this variable buffer head chaining
> thing 512-1024-512 kind of stuff several months back, and mixing them to
> see what would happen -- result. About half the drivers break with it.
> The interface allows you to do
On Wed, 7 Feb 2001, Stephen C. Tweedie wrote:
>
> That gets us from 512-byte blocks to 4k, but no more (ll_rw_block
> enforces a single blocksize on all requests but that relaxing that
> requirement is no big deal). Buffer_heads can't deal with data which
> spans more than a page right now.
On Wed, Feb 07, 2001 at 12:36:29AM +, Stephen C. Tweedie wrote:
> Hi,
>
> On Tue, Feb 06, 2001 at 07:25:19PM -0500, Ingo Molnar wrote:
> >
> > On Wed, 7 Feb 2001, Stephen C. Tweedie wrote:
> >
> > > No, it is a problem of the ll_rw_block interface: buffer_heads need to
> > > be aligned on
On Tue, 6 Feb 2001, Ingo Molnar wrote:
>
> On Wed, 7 Feb 2001, Stephen C. Tweedie wrote:
>
> > No, it is a problem of the ll_rw_block interface: buffer_heads need to
> > be aligned on disk at a multiple of their buffer size. Under the Unix
> > raw IO interface it is perfectly legal to begin
On Wed, 7 Feb 2001, Stephen C. Tweedie wrote:
>
> On Tue, Feb 06, 2001 at 08:57:13PM +0100, Ingo Molnar wrote:
> >
> > [overhead of 512-byte bhs in the raw IO code is an artificial problem of
> > the raw IO code.]
>
> No, it is a problem of the ll_rw_block interface: buffer_heads need to
>
Hi,
On Tue, Feb 06, 2001 at 07:25:19PM -0500, Ingo Molnar wrote:
>
> On Wed, 7 Feb 2001, Stephen C. Tweedie wrote:
>
> > No, it is a problem of the ll_rw_block interface: buffer_heads need to
> > be aligned on disk at a multiple of their buffer size. Under the Unix
> > raw IO interface it is
On Wed, Feb 07 2001, Stephen C. Tweedie wrote:
> > [overhead of 512-byte bhs in the raw IO code is an artificial problem of
> > the raw IO code.]
>
> No, it is a problem of the ll_rw_block interface: buffer_heads need to
> be aligned on disk at a multiple of their buffer size. Under the Unix
>
Hi,
On Tue, Feb 06, 2001 at 08:57:13PM +0100, Ingo Molnar wrote:
>
> [overhead of 512-byte bhs in the raw IO code is an artificial problem of
> the raw IO code.]
No, it is a problem of the ll_rw_block interface: buffer_heads need to
be aligned on disk at a multiple of their buffer size. Under
On Wed, 7 Feb 2001, Stephen C. Tweedie wrote:
> No, it is a problem of the ll_rw_block interface: buffer_heads need to
> be aligned on disk at a multiple of their buffer size. Under the Unix
> raw IO interface it is perfectly legal to begin a 128kB IO at offset
> 512 bytes into a device.
then
On Tue, 6 Feb 2001, Marcelo Tosatti wrote:
>
> Its arguing against making a smart application block on the disk while its
> able to use the CPU for other work.
There are currently no other alternatives in user space. You'd have to
create whole new interfaces for aio_read/write, and ways for