On 2000-09-11 09:22:23 -0400, Chris Mason wrote:
> Thanks Andrea, Andi, new patch is attached, with the warning messages
> removed. The first patch got munged somewhere between test machine and
> mailer, please don't use it.
I've been hammering this all day installing the relevant tools and
On Tue, 12 Sep 2000, Matthew Hawkins wrote:
Very stable so far, and having Andrea's VM patches in (I usually
didn't put them in) has made a noticeable difference - xmms has
rarely skipped and things start faster and run smoother.
Hopefully I'll see the same (or better) results from Rik's
On Tue, 12 Sep 2000, Rik van Riel wrote:
The large IO delays I'm seeing in certain tests have
been traced back to the /elevator/ code. I think I'll
Actually the elevator works as in 2.2.15 (before any fix). The latency
settings are too high. They should be around 250 for reads and 500 for
On Tue, 12 Sep 2000, Andrea Arcangeli wrote:
On Tue, 12 Sep 2000, Rik van Riel wrote:
The large IO delays I'm seeing in certain tests have
been traced back to the /elevator/ code. I think I'll
Actually the elevator works as in 2.2.15 (before any fix). The
latency settings are too high.
On Tue, 12 Sep 2000, Rik van Riel wrote:
We simply keep track of how old the oldest request
in the queue is, and when that request is getting
too old (say 1/2 second), we /stop/ all the others
Going as a function of time is obviously wrong. A blockdevice can write 1
request every two seconds or 1
On Tue, 12 Sep 2000, Andrea Arcangeli wrote:
On Tue, 12 Sep 2000, Rik van Riel wrote:
We simply keep track of how old the oldest request
in the queue is, and when that request is getting
too old (say 1/2 second), we /stop/ all the others
Going as a function of time is obviously wrong. A
On Tue, 12 Sep 2000, Rik van Riel wrote:
Uhmmm, isn't the elevator about request /latency/ ?
Yes, but definitely not absolute "time" latency.
How do you get a 1msec latency for a read request out of a blockdevice
that writes 1 request in 2 seconds? See?
That was one of the first issues I was
Going as a function of time is obviously wrong. A blockdevice can write 1
request every two seconds or 1 request every msec. You can't assume
anything as a function of time _unless_ you have per-harddisk timing
information in the kernel.
Andrea - latency is time measured and perceived.
On Tue, 12 Sep 2000, Alan Cox wrote:
Andrea - latency is time measured and perceived. Doing it time based seems to
make reasonable sense. I grant you might want to play with the weighting per
When you have a device that writes a request every two seconds you still
want it not to seek all the
On Tue, 12 Sep 2000, Andrea Arcangeli wrote:
On Tue, 12 Sep 2000, Rik van Riel wrote:
Uhmmm, isn't the elevator about request /latency/ ?
Yes, but definitely not absolute "time" latency.
How do you get a 1msec latency for a read request out of a
blockdevice that writes 1 request in 2
On Tue, 12 Sep 2000, Rik van Riel wrote:
We can already set different figures for different drives.
Right.
Would it really be more than 30 minutes of work to put in
a different request # limit for each drive that automatically
satisfies the latency specified by the user?
Note that if you know
Andrea Arcangeli wrote:
Andrea - latency is time measured and perceived. Doing it time based
seems to make reasonable sense. I grant you might want to play with
the weighting per [device]
Right. Perception.
When you have a device that writes a request every two seconds you still
want it
On Tue, 12 Sep 2000, Andrea Arcangeli wrote:
On Tue, 12 Sep 2000, Rik van Riel wrote:
But you don't. Transfer rate is very much dependent on the
kind of load you're putting on the disk...
Transfer rate means `hdparm -t` in single user mode. Try it and
you'll see you'll always get the
On Tue, 12 Sep 2000, Rik van Riel wrote:
On Tue, 12 Sep 2000, Andrea Arcangeli wrote:
On Tue, 12 Sep 2000, Rik van Riel wrote:
Uhmmm, isn't the elevator about request /latency/ ?
Yes, but definitely not absolute "time" latency.
How do you get a 1msec latency for a read request
I really think Rik has it right here. In particular, an MP3 player needs
to be able to say, I have X milliseconds of buffer so make my worst case
latency X milliseconds. The number of requests is the wrong metric,
because the time required per request depends on disk geometry, disk caching,
On Tue, 12 Sep 2000, Hans Reiser wrote:
I really think Rik has it right here. In particular, an MP3 player
needs to be able to say, I have X milliseconds of buffer so make my
worst case latency X milliseconds. The number of requests is the
wrong metric, because the time required per
Chris Evans wrote:
On Tue, 12 Sep 2000, Hans Reiser wrote:
I really think Rik has it right here. In particular, an MP3 player
needs to be able to say, I have X milliseconds of buffer so make my
worst case latency X milliseconds. The number of requests is the
wrong metric, because
On Tue, 12 Sep 2000, Alan Cox wrote:
Now, I see people trying to introduce the concept of elapsed time into
that fix, which smells strongly of hack. How will this hack be cobbled
Actually my brain says that elapsed time based scheduling is the right
thing to do. It certainly works for
That problem: the original elevator code did not schedule I/O particularly
fairly under certain I/O usage patterns. So it got fixed.
No, it got hacked up a bit.
Now, I see people trying to introduce the concept of elapsed time into
that fix, which smells strongly of hack. How will this hack
On Tue, 12 Sep 2000, Jamie Lokier wrote:
Sure the global system is slower. But the "interactive feel" is faster.
If I type "find /" I want it to go quickly. But I still want Emacs to
You always want it to go quickly. But once you're in the blockdevice
layer you've lost all the semantics of such
On Tue, 12 Sep 2000, Chris Evans wrote:
the elevator code. Keep it to a queue management system, and suddenly it
scales to slow or fast devices without any gross device-type specific
tuning.
Yep, that was the object.
Andrea
On Tue, 12 Sep 2000, Martin Dalecki wrote:
Second: The concept of time can give you very very nasty
behaviour in even cases. [integer arithmetic]
Point taken.
Third: All you try to improve is the boundary case between an
entierly overloaded system and a system which has a huge reserve
On Tue, 12 Sep 2000, Andrea Arcangeli wrote:
On Tue, 12 Sep 2000, Rik van Riel wrote:
Also, this possibility is /extremely/ remote, if not
impossible. Well, it could happen at one point in time,
It's not impossible. Think when you run a backup of your home
directory while you're listening
Why do you say it's not been fixed? Can you still reproduce hangs as long
as a write(2) can write? I certainly can't.
I can't reproduce long hangs. I'm not seeing as good I/O throughput as before,
but right now I'm quite happy with the tradeoff. If someone can make it better
then I'm happier still.
On Tue, 12 Sep 2000, Martin Dalecki wrote:
First of all: In the case of the mp3 player and such there is already a
fine, proper way to give it a better chance of getting its job done
smoothly - RT kernel scheduler priorities and proper IO buffering. I did
something similar to a GDI printer
Hi,
Geez, a simple comment on IRC can _really_ generate lots of feedback.
(There were over 50 messages about this in my queue - it did not help
that some were duplicated three times <grin>).
I made the comment because I remember back when the discussion was current
on linux kernel. I thought Jeff
One important point on remirroring I did not mention in my post. In
NetWare, remirroring scans the disk BACKWARDS to prevent artificial
starvation while remirroring is going on. This was another optimization
we learned the hard way by trying numerous approaches to the problem.
Jeff
Ed
Andrea Arcangeli wrote:
On Tue, 12 Sep 2000, Martin Dalecki wrote:
First of all: In the case of the mp3 player and such there is already a
fine, proper way to give it a better chance of getting its job done
smoothly - RT kernel scheduler priorities and proper IO buffering. I did something
Alan Cox wrote:
Now, I see people trying to introduce the concept of elapsed time into
that fix, which smells strongly of hack. How will this hack be cobbled
Actually my brain says that elapsed time based scheduling is the right thing
to do.
No, Andrea is right here. The argument that
time, but remember that there are two things measured in time here:
A. The time for the whole queue of requests to run (this is what Rik is
proposing using to throttle)
B. The time an average request takes to process.
Your perceived latency is based entirely on A.
If we limit on
Hans Reiser wrote:
I really think Rik has it right here. In particular, an MP3 player needs
to be able to say, I have X milliseconds of buffer so make my worst case
latency X milliseconds. The number of requests is the wrong metric,
because the time required per request depends on disk
Alan Cox wrote:
time, but remember that there are two things measured in time here:
A. The time for the whole queue of requests to run (this is what Rik is
proposing using to throttle)
B. The time an average request takes to process.
Your perceived latency is based entirely on
Date: Tue, 12 Sep 2000 04:23:05 -0300 (BRST)
From: Rik van Riel [EMAIL PROTECTED]
I've just uploaded a new snapshot of my new VM for
2.4 to my home page, this version contains a
wakeup_kswapd() function (copied from wakeup_bdflush)
and should balance memory a bit better.
Considering there are a lot of people still using 2.0.x because they find it
more stable than the 2.2.x series, doesn't it make sense to give this
scalability to people who are already running SMP boxes on 2.2.x and who may
decide to use ReiserFS?
- Original Message -
From: "Andrea
On Mon, 11 Sep 2000, Andi Kleen wrote:
>BTW, there is another optimization that could help reiserfs a lot
>on SMP settings: do a unlock_kernel()/lock_kernel() around the user
>copies. It is quite legal to do that (you have to handle sleeping
>anyways in case of a page fault), and it allows
--On 09/11/00 15:02:34 +0200 Andrea Arcangeli <[EMAIL PROTECTED]> wrote:
>
> In 2.2.18pre2aa2.bz2 there's a latency bugfix, now a:
>
> read(fd, , 0x7fff)
> write(fd, , 0x7fff)
> sendfile(src, dst, NULL, 0x7fff)
>
> doesn't hang the machine anymore for
On Mon, Sep 11, 2000 at 08:15:15AM -0400, Chris Mason wrote:
> LFS changes for filldir, reiserfs_readpage, and adds limit checking in
> file_write to make sure we don't go above 2GB (Andi Kleen). Also fixes
> include/linux/fs.h, which does not patch cleanly for 3.5.25 because of usb.
>
>
On Mon, 11 Sep 2000, Chris Mason wrote:
>reiserfs-3.5.25, this patch. I tested against pre3-aa2.
BTW, pre3-aa2 means 2.2.18pre2aa2.bz2 applied on top of 2.2.18pre3.
>Note, you might see debugging messages about items moving during
>copy_from_user. These are safe, but I'm leaving them in for
--On 09/11/00 07:45:16 -0400 Ed Tomlinson <[EMAIL PROTECTED]> wrote:
> Hi Chris,
>
>>> Something between bigmem and his big VM changes makes reiserfs
>>> uncompilable. [..]
>
>> It's due to LFS. Chris should have a reiserfs patch that compiles on top of
>> 2.2.18pre2aa2, right? (if not Chris, I
On Mon, 11 Sep 2000, Andi Kleen wrote:
BTW, there is another optimization that could help reiserfs a lot
on SMP settings: do a unlock_kernel()/lock_kernel() around the user
copies. It is quite legal to do that (you have to handle sleeping
anyways in case of a page fault), and it allows CPUs