Ray Lee wrote:
> On Jan 12, 2008 10:03 AM, Renato S. Yamane wrote:
> > I can't use updatedb in Debian Etch (stable) using customized Kernel
> > 2.6.22.9-cfs-v22.
> > When I run updatedb, after ~1 minute my system hangs and the "caps lock" LED
> > is blinking. No log is registered.
> Please switch
On Sat, Jan 12, 2008 at 04:03:43PM -0200, Renato S. Yamane wrote:
> Hi,
> I can't use updatedb in Debian Etch (stable) using customized Kernel
> 2.6.22.9-cfs-v22.
>
Hi,
Can you see if it happens with the latest CFS backport? It's been updated
quite a bit since then. You ca
Hi,
I can't use updatedb in Debian Etch (stable) using customized Kernel
2.6.22.9-cfs-v22.
When I run updatedb, after ~1 minute my system hangs and the "caps lock" LED
is blinking. No log is registered.
.config is attached.
Regards,
Renato S. Yamane
On 8/6/07, Nick Piggin <[EMAIL PROTECTED]> wrote:
[...]
> > this completely ignores the use case where the swapping was exactly the
> > right thing to do, but memory has been freed up from a program exiting,
> > so that you could now fill that empty ram with data that was swapped out.
> > specific issues,
> > but instead a nebulous 'something better may come along later'
> Something better, ie. the problems with page reclaim being fixed.
> Why is that nebulous?
because that doesn't begin to address all the benefits.
the approach of fixing page reclaim and updatedb is preten
> then we could skip unchanged subtrees completely...
Could we help it a little from the kernel and set 'dirty since last look'
on directory renames?
I mean, this is not only updatedb. KDE startup is limited by this, too.
It would be nice to have an effective 'what changed in tree'
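For what it's worth, here is a minimal userspace sketch (not from the thread) of the "skip unchanged subtrees" idea: compare a directory's current mtime against the value recorded by the previous indexing run and only descend when it changed. The recorded value is hypothetical, and as the posters above point out, a directory's mtime only reflects changes to its direct entries, which is exactly why kernel help on renames is being asked for.

/*
 * Sketch only: skip a subtree when its directory mtime is unchanged
 * since the last indexing run.  A directory's mtime covers only its
 * direct entries (create/delete/rename inside it), not deeper changes.
 */
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

static int dir_changed_since(const char *dir, time_t recorded_mtime)
{
    struct stat st;

    if (stat(dir, &st) != 0)
        return 1;                   /* can't tell, so rescan to be safe */
    return st.st_mtime > recorded_mtime;
}

int main(void)
{
    /* hypothetical recorded value; a real indexer would load this from
     * its own database */
    time_t recorded = 0;

    printf("/usr needs rescan: %d\n", dir_changed_since("/usr", recorded));
    return 0;
}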
Matthew Hawkins wrote:
> updatedb by itself doesn't really bug me, it's just that on occasion
> it's still running at 7am
You should start it earlier then - assuming it doesn't
already start at the earliest opportunity?
Helge Hafting
On Sun, 29 Jul 2007, Rene Herman wrote:
> On 07/29/2007 01:41 PM, [EMAIL PROTECTED] wrote:
> > I agree that tinkering with the core VM code should not be done lightly,
> > but this has been put through the proper process and is stalled with no
> > hints on how to move forward.
> It has not. Concerns that
Ray wrote:
> Ah, so in a normal scenario where a working-set is getting faulted
> back in, we have the swap storage as well as the file-backed stuff
> that needs to be read as well. So even if swap is organized perfectly,
> we're still seeking. Damn.
Perhaps this applies in some cases ... perhaps.
On 7/29/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
> If the problem is reading stuff back in from swap at the *same time*
> that the application is reading stuff from some user file system, and if
> that user file system is on the same drive as the swap partition
> (typical on laptops), then inter
Ray wrote:
> a log structured scheme, where the writeout happens to sequential spaces
> on the drive instead of scattered about.
If the problem is reading stuff back in from swap quickly when
needed, then this likely helps, by reducing the seeks needed.
If the problem is reading stuff back in fro
On 07/29/2007 07:52 PM, Ray Lee wrote:
> Well, that doesn't match my systems. My laptop has 400MB in swap:
Which in your case is slightly more than 1/3 of available swap space. Quite
a lot for a desktop indeed. And if it's more than a few percent fragmented,
please fix current swapout instead
On 7/29/07, Rene Herman <[EMAIL PROTECTED]> wrote:
> On 07/29/2007 07:19 PM, Ray Lee wrote:
> > Is that generally the case on your systems? Every linux system I've
> > run, regardless of RAM, has always pushed things out to swap.
> For me, it is generally the case yes. We are still discussing this in the
> context of desktop machines and their problems with being slow as things
> have been swapped out and generally I expect a desktop
On 07/29/2007 07:19 PM, Ray Lee wrote:
> > The program is not a real-world issue and if you do not consider it a useful
> > boundary condition either (okay I guess), how would log structured swap help
> > if I just assume I have plenty of free swap to begin with?
> Is that generally the case on your systems
On 07/29/2007 06:04 PM, Ray Lee wrote:
> > I am very aware of the costs of seeks (on current magnetic media).
> Then perhaps you can just take it on faith -- log structured layouts
> are designed to help minimize seeks, read and write.
I am particularly bad at faith. Let's take that stupid program t
On 7/29/07, Rene Herman <[EMAIL PROTECTED]> wrote:
> On 07/29/2007 05:20 PM, Ray Lee wrote:
> This seems to be now fixing the different problem of swap-space filling up.
> I'm quite willing to for now assume I've got plenty free.
I was trying to point out that currently, as an example, memory that
On 07/29/2007 05:20 PM, Ray Lee wrote:
> > I understand what log structure is generally, but how does it help swapin?
> Look at the swap out case first.
> Right now, when swapping out, the kernel places whatever it can
> wherever it can inside the swap space. The closer you are to filling
> your swap spac
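To make the placement difference concrete, here is a small userspace toy (not kernel code, and not a literal rendering of anyone's proposal): a scattered "first free slot" allocator versus an append-at-the-head allocator of the kind a log-structured scheme would use, run on a deliberately fragmented swap map. The slot numbers and map size are made up.

/*
 * Userspace toy: contrast "first free slot" placement with
 * log-structured placement on a fragmented swap map.  With the former,
 * a burst of pages swapped out together lands in whatever holes exist;
 * with the latter it lands contiguously at the head (a cleaner is
 * assumed to reclaim the holes behind the head later).
 */
#include <stdbool.h>
#include <stdio.h>

#define NSLOTS 16

/* roughly today's behaviour: take the first free slot anywhere */
static int alloc_first_free(bool *used)
{
    for (int i = 0; i < NSLOTS; i++)
        if (!used[i]) { used[i] = true; return i; }
    return -1;
}

/* log-structured: always append at the head the cleaner keeps free */
static int alloc_log(bool *used, int *head)
{
    if (*head >= NSLOTS || used[*head])
        return -1;          /* out of clean space; a real cleaner would run */
    used[*head] = true;
    return (*head)++;
}

int main(void)
{
    bool scattered[NSLOTS] = { false }, logmap[NSLOTS] = { false };
    int head = 10;          /* log head; slots 10..15 are clean space */

    /* both maps start fragmented: slots 1, 4 and 7 were freed earlier */
    for (int i = 0; i < 10; i++)
        scattered[i] = logmap[i] = (i != 1 && i != 4 && i != 7);

    printf("scattered placement of a 3-page burst:");
    for (int n = 0; n < 3; n++)
        printf(" %d", alloc_first_free(scattered));

    printf("\nlog-structured placement of the same burst:");
    for (int n = 0; n < 3; n++)
        printf(" %d", alloc_log(logmap, &head));
    printf("\n");
    return 0;
}

The point of the toy is only that the burst ends up in consecutive slots (10 11 12) instead of scattered holes (1 4 7), so reading it back in later needs far fewer seeks.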
On 7/29/07, Rene Herman <[EMAIL PROTECTED]> wrote:
> On 07/29/2007 04:58 PM, Ray Lee wrote:
> > On 7/29/07, Rene Herman <[EMAIL PROTECTED]> wrote:
> >> Right over my head. Why does log-structure help anything?
> >
> > Log structured disk layouts allow for better placement of writeout, so
> > that y
On 7/29/07, Rene Herman <[EMAIL PROTECTED]> wrote:
> On 07/29/2007 03:12 PM, Alan Cox wrote:
> > More radically if anyone wants to do real researchy type work - how about
> > log structured swap with a cleaner ?
>
> Right over my head. Why does log-structure help anything?
Log structured disk lay
On 07/29/2007 03:12 PM, Alan Cox wrote:
> > What are the tradeoffs here? What wants small chunks? Also, as far as
> > I'm aware Linux does not do things like up the granularity when it
> > notices it's swapping in heavily? That sounds sort of promising...
> Small chunks means you get better efficiency of me
> your conclusion then that if people just stopped using that
> version of updatedb the problem would be solved and there would be no
> need for the swap prefetch patch? that seemed to be what you were
> strongly implying (if not saying outright)
No. What I said outright, every single time, is that
> Contrived thing and all, but what it does do is show exactly how bad seeking
> all over swap-space is. If you push it out before hitting enter, the time it
> takes easily grows past 10 minutes (with my 768M) versus sub-second (!) when
> it's all in to start with.
Think in "operations/second"
MemFree: 110600 kB
and then uses one-half of whichever is -greater- of MemTotal/8 or
MemFree.
... However ... for the typical GNU locate updatedb run, it is sorting
the list of pathnames for almost all files on the system, which is
usually larger than fits in one of these sized buffers. So it
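As a quick arithmetic check of the heuristic described above (half of whichever is greater, MemTotal/8 or MemFree), using the MemFree figure quoted and a made-up 768 MB MemTotal:

/* Quick check of the sort-buffer heuristic described above: half of
 * whichever is greater, MemTotal/8 or MemFree.  MemFree is the figure
 * quoted in the message; MemTotal here is a made-up 768 MB example.
 */
#include <stdio.h>

int main(void)
{
    long mem_total_kb = 768 * 1024;     /* hypothetical machine */
    long mem_free_kb  = 110600;         /* value quoted above */

    long eighth = mem_total_kb / 8;
    long larger = eighth > mem_free_kb ? eighth : mem_free_kb;
    long buf_kb = larger / 2;

    printf("sort buffer: %ld kB (about %ld MB)\n", buf_kb, buf_kb / 1024);
    return 0;
}

On that example machine the buffer works out to roughly 54 MB, which is indeed smaller than the sorted pathname list for a full system, so sort spills to temporary files as described.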
> see the conclusion as being exactly the opposite.
And now you do it again :-) There is no conclusion -- just the inescapable
observation that swap-prefetch was (or may have been) masking the problem of
GNU locate being a program that no one in their right mind should be using.
for part of that. I expect though that
the same holds for the people that actually matter in this, such as Andrew
Morton and Nick Piggin.
-- 1: people being unconvinced it helps all that much
At least partly caused by the updatedb i/dcache red herring that infected
this issue. Also, at the
On 07/28/2007 01:21 PM, Alan Cox wrote:
> > It is. Prefetched pages can be dropped on the floor without additional
> > I/O.
> Which is essentially free for most cases. In addition your disk access
> may well have been in idle time (and should be for this sort of stuff)
Yes. The swap-prefetch patch ensu
> > policy for the metadata pagecache. We
> > read the filesystem objects into the dcache and icache and then we won't
> > read from that page again for a long time (I expect). But the page will
> > still hang around for a long time.
> >
> > It could be that we should leave t
> But the page will
> still hang around for a long time.
> It could be that we should leave those pages inactive.
Good idea for updatedb.
However, it may be a bad idea for files that are often
written to. Turning an inode write into a read plus a
write does not sound like such a hot idea, we real
> prefetch can't help when memory is filled.
I stand corrected, thanks for speaking up and correcting your position.
what people are arguing is that there are situations where it helps for
the first case. on some machines and versions of updatedb the nightly run of
updatedb can cause both sets of pro
prefetch will help
significantly, what is the downside that prevents it from being included?
(reading this thread it sometimes seems like the downside is that updatedb
shouldn't cause this problem and so if you fixed updatedb there would be no
legitimate benefit, or alternately this patch d
On 7/28/07, Alan Cox <[EMAIL PROTECTED]> wrote:
> Actual physical disk ops are a precious resource and anything that mostly
> reduces the number will be a win - not to say swap prefetch is the right
> answer but accidentally or otherwise there are good reasons it may happen
> to help.
>
> Bigger mor
On Sat, 28 Jul 2007 02:03:19 +0200 (CEST), "Indan Zupancic" <[EMAIL PROTECTED]> wrote:
> Perhaps one of the reasons is that this is core kernel code. And that it
> isn't a new feature, but a performance improvement with doubtful trade-offs.
> The problem statement isn't clear either. It see
> due to memory pressure generated by over-night
> system maintenance operations.
>
> The author himself however, says his implementation can't help with
> updatedb (though people seem to be saying that it does), or anything
> else that leaves memory full. That IMHO, makes it
> It is. Prefetched pages can be dropped on the floor without additional I/O.
Which is essentially free for most cases. In addition your disk access
may well have been in idle time (and should be for this sort of stuff)
and if it was in the same chunk as something nearby was effectively free
anywa
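The argument above boils down to two properties; the toy C below only restates them as code, with made-up numbers, and is in no way the swap-prefetch patch itself: prefetch only when the machine and the disk are otherwise idle and memory is genuinely free, and dropping a clean prefetched page later costs no writeback because its copy in swap is still valid.

/*
 * Toy restatement of the argument above, not the swap-prefetch patch:
 * prefetching is claimed to be close to free because it only runs when
 * everything is idle and only fills memory that is already free, and a
 * clean prefetched page can later be dropped with no writeback since
 * its copy in swap is still valid.
 */
#include <stdbool.h>
#include <stdio.h>

struct vm_snapshot {
    bool cpu_idle;      /* nothing else wants to run */
    bool disk_idle;     /* no other I/O queued */
    long free_pages;    /* genuinely free memory */
    long watermark;     /* stop prefetching below this */
};

static bool should_prefetch(const struct vm_snapshot *s)
{
    return s->cpu_idle && s->disk_idle && s->free_pages > s->watermark;
}

/* dropping a prefetched page only costs I/O if it was dirtied */
static bool drop_needs_writeback(bool page_dirty)
{
    return page_dirty;
}

int main(void)
{
    struct vm_snapshot busy = { false, false, 1000, 4000 };
    struct vm_snapshot idle = { true,  true, 20000, 4000 };

    printf("prefetch while busy/full: %d\n", should_prefetch(&busy));
    printf("prefetch while idle/free: %d\n", should_prefetch(&idle));
    printf("writeback needed to drop a clean prefetched page: %d\n",
           drop_needs_writeback(false));
    return 0;
}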
On 07/28/2007 10:55 AM, [EMAIL PROTECTED] wrote:
> in some situations swap prefetch can help because something that used
> memory freed it so there is free memory that could be filled with data
> (which is something that Linux does aggressively in most other situations)
> in some other situations sw
> > [From all reports I've seen (*)]
> > Yes, it does.
>
> No it does not. If updatedb filled memory to the point of causing
> swapping (which no one is reproducing anyway) it HAS FILLED MEMORY and
> swap-prefetch hasn't any memory to prefetch into -- updatedb itself
>
broken and nobody should be using it. The updatedb from
(my distribution standard) "slocate" uses around 2M allocated total during
an entire run while GNU locate allocates some 30M to the sort process alone.
GNU locate is also close to 4 times as slow (although that of course only
matte
The author himself however, says his implementation can't help with
updatedb (though people seem to be saying that it does), or anything
else that leaves memory full. That, IMHO, makes it of questionable value
toward solving what people are saying they want swap-prefetch for in the
first place.
I personally don't
On 07/28/2007 01:15 AM, Björn Steinbrink wrote:
> On 2007.07.27 20:16:32 +0200, Rene Herman wrote:
> Here's swap-prefetch's author saying the same:
> http://lkml.org/lkml/2007/2/9/112
> | It can't help the updatedb scenario. Updatedb leaves the ram full and
> | swap prefetch wants to
On 07/27/2007 09:43 PM, [EMAIL PROTECTED] wrote:
> On Fri, 27 Jul 2007, Rene Herman wrote:
> > On 07/27/2007 07:45 PM, Daniel Hazelton wrote:
> > > Questions about it:
> > > Q) Does swap-prefetch help with this?
> > > A) [From all reports I've seen (*)]
> > > Yes, it does.
> > No it does not. If updatedb f
On 07/27/2007 10:28 PM, Daniel Hazelton wrote:
> Check the attitude at the door then re-read what I actually said:
Attitude? You wanted attitude dear boy?
> Updatedb or another process that uses the FS heavily runs on a user's
> 256MB P3-800 (when it is idle) and the VFS caches grow, causing
Chris Snook wrote:
> Al Boldi wrote:
> > IMHO, what everybody agrees on, is that swap-prefetch has a positive
> > effect in some cases, and nobody can prove an adverse effect (excluding
> > power consumption). The reason for this positive effect is also crystal
> > clear: It prefetches from swap o
Al Boldi wrote:
> People wrote:
> > > I believe the users who say their apps really do get paged back in
> > > though, so suspect that's not the case.
> > Stopping the bush-circumference beating, I do not. -ck (and gentoo) have
> > this massive Calimero thing going among their users where people are
> > much less intere
Mail Client, XChat, FireFox
and a console window) and I've seen this lag in FireFox when switching to it
after starting OOo. I've also seen the same sort of lag when exiting OOo.
I'll see about getting some numbers about this)
> It would be better to measure than to guess. At lea
eed I
guess?).
> It would be better to measure than to guess. At least Andrew's measurements
> on 128MB actually didn't show updatedb being really that big a problem.
Here's a before/after memory usage for an updatedb run:
[EMAIL PROTECTED]:~# free -m
total used
> feature, but a performance improvement with doubtful trade-offs. The problem
> statement isn't clear either. It seems like a natural enhancement, but is that
> enough reason to merge it? Maybe, maybe not. But if slow swap-in is the problem,
> shouldn't that be fixed instead of bypassed?
Yes, t
On Fri, 27 Jul 2007 15:06:14 -0700, Arjan van de Ven <[EMAIL PROTECTED]> wrote:
> how do you know there will be other activity? You start the IO and that
> basically blacks out the disk for 5 to 10 ms. If the "real" IO gets
> submitted in that time you add latency. You cannot predict that IO
>
e the same as anonymous memory.
But it's not that different given.
It would be better to measure than to guess. At least Andrew's measurements
on 128MB actually didn't show updatedb being really that big a problem.
Perhaps some people have many more files or simply a less efficient
On Fri, 2007-07-27 at 13:45 -0400, Daniel Hazelton wrote:
> On Friday 27 July 2007 06:25:18 Mike Galbraith wrote:
> > On Fri, 2007-07-27 at 03:00 -0700, Andrew Morton wrote:
> > > So hrm. Are we sure that updatedb is the problem? There are quite a few
> > > heavywe
On Fri, 2007-07-27 at 23:51 +0200, Indan Zupancic wrote:
> > also, they take up seek time (5 to 10 msec), so if you were to read
> > something else at the time you get additional latency.
>
> If there's other disk activity swap prefetch shouldn't do much, so this isn't
> really true.
how do you know ther
> IMHO, what everybody agrees on, is that swap-prefetch has a positive effect
> in some cases, and nobody can prove an adverse effect (excluding power
> consumption). The reason for this positive effect is also crystal clear:
> It prefetches from swap on idle into free memory, ie: it doesn't
People wrote:
> >> I believe the users who say their apps really do get paged back in
> >> though, so suspect that's not the case.
> >
> > Stopping the bush-circumference beating, I do not. -ck (and gentoo) have
> > this massive Calimero thing going among their users where people are
> > much less
Al Viro wrote:
> BTW, I really wonder how much pain could be avoided if updatedb recorded
> mtime of directories and checked it.
Someone mentioned a variant of slocate above that they called mlocate,
and that Red Hat ships, that seems to do this (if I understand you and
what mlocat
On 07/27/2007 07:45 PM, Daniel Hazelton wrote:
> Updatedb or another process that uses the FS heavily runs on a user's
> 256MB P3-800 (when it is idle) and the VFS caches grow, causing memory
> pressure that causes other applications to be swapped to disk. In the
> morning the user has to wait for the
> > about schedulers, elevators, and performance, the issue of running updatedb
> > and its effects on the kernel's fs cache seems to recur. I've also yet to see
> > anyone present a solution that others think is worth pursuing. I'm curious why
Ray Lee wrote:
> But yes, if we had a full filesystem events notifier, then we could
> just toss updatedb aside and have the benefit of a live index into the
> system. It's been suggested before, at least by me. Other projects
> want this as well, such as an on-demand virus s
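inotify was already in mainline at the time and delivers per-directory events of exactly this kind; below is a minimal sketch (mine, not anything proposed in the thread) of watching a single, arbitrarily chosen directory for the events a live index would care about. It is not the full recursive notifier being asked for, since inotify needs one watch per directory.

/*
 * Minimal inotify sketch: watch one directory for the events a live
 * index would care about.  Not the recursive whole-filesystem notifier
 * the thread asks for -- inotify needs a watch per directory.
 */
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void)
{
    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
    int fd = inotify_init();

    if (fd < 0) {
        perror("inotify_init");
        return 1;
    }
    if (inotify_add_watch(fd, "/tmp",
            IN_CREATE | IN_DELETE | IN_MOVED_FROM | IN_MOVED_TO) < 0) {
        perror("inotify_add_watch");
        return 1;
    }

    ssize_t len = read(fd, buf, sizeof(buf));  /* blocks until something changes */
    for (char *p = buf; len > 0 && p < buf + len; ) {
        struct inotify_event *ev = (struct inotify_event *)p;

        printf("event 0x%x on %s\n", (unsigned)ev->mask,
               ev->len ? ev->name : "(watched dir)");
        p += sizeof(*ev) + ev->len;
    }
    close(fd);
    return 0;
}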
Rene Herman wrote:
> On 07/27/2007 01:48 PM, Mike Galbraith wrote:
>
>> I believe the users who say their apps really do get paged back in
>> though, so suspect that's not the case.
>
> Stopping the bush-circumference beating, I do not. -ck (and gentoo) have
> this massive Calimero thing going
I've been following lkml for a little while (not understanding it all, but
following nonetheless ) and I've noticed that in a lot of the talks about
schedulers, elevators, and performance, the issue of running updatedb and its
effects on the kernel's fs cache seems to recur. I
Nick Piggin has been unable to get anyone to substantiate anything it seems
and even this thread alone (and I privately) received a few "oh, heh, sorry,
I don't actually have a friggin' clue what I'm talking about" responses. As
such, I believe it's fairly safe t
On Fri, 2007-07-27 at 13:09 +0200, Rene Herman wrote:
> On 07/27/2007 11:26 AM, Mike Galbraith wrote:
>
> > Updatedb finishes, freeing some ram (doesn't matter how much)
>
> Will be very little and swap-prefetch at least in its current form needs
> more than very litt
On 07/27/2007 11:26 AM, Mike Galbraith wrote:
> On Fri, 2007-07-27 at 10:28 +0200, Rene Herman wrote:
> > I still wonder what "the swap thing" is though. People just kept
> > saying that swap-prefetch helped which would seem to indicate their
> > problem didn't have anything to do with upd
>
> So much for that theory. afaict mmapped, active pagecache is immune to
> updatedb activity. It just sits there while updatedb continues munching
> away at the slab and blockdev pagecache which it instantiated. I assume
> we're never getting the VM into enough trouble t
On Fri, 27 Jul 2007 01:47:49 -0700 Andrew Morton <[EMAIL PROTECTED]> wrote:
> More sophisticated testing is needed - there's something in
> ext3-tools which will mmap, page in and hold a file for you.
So much for that theory. afaict mmapped, active pagecache is immune to
updat
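I don't have the ext3-tools helper at hand, but the technique it is described as using above is simple enough to sketch: mmap the file, fault every page in, then just sit on the mapping so the pages stay resident while updatedb (or anything else) churns away. The file argument is whatever you want held; this is an illustration, not the actual tool.

/*
 * Not the ext3-tools helper itself, just a sketch of the technique it is
 * described as using above: mmap a file, fault every page in, then hold
 * the mapping so the pages stay on the active list while updatedb runs.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0) {
        perror(argv[1]);
        return 1;
    }

    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    long page = sysconf(_SC_PAGESIZE), sum = 0;
    for (off_t off = 0; off < st.st_size; off += page)
        sum += p[off];                  /* touch each page so it is read in */

    printf("holding %lld bytes mapped (checksum %ld); Ctrl-C to release\n",
           (long long)st.st_size, sum);
    pause();                            /* keep the mapping (and pages) alive */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}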
On Fri, 2007-07-27 at 01:47 -0700, Andrew Morton wrote:
> Anyway, blockdev pagecache is a problem, I expect. It's worth playing with
> that patch.
(may tinker a bit, but i'm way rusty. ain't had the urge to mutilate
anything down there in quite a while... works just fine for me these
days)
> A
On Fri, Jul 27, 2007 at 01:47:49AM -0700, Andrew Morton wrote:
> What I think is killing us here is the blockdev pagecache: the pagecache
> which backs those directory entries and inodes. These pages get read
> multiple times because they hold multiple directory entries and multiple
> inodes. The
On Fri, 2007-07-27 at 08:00 +0200, Rene Herman wrote:
> The remaining issue of updatedb unnecessarily blowing away VFS caches is
> being discussed (*) in a few thread-branches still running.
If you solve that, the swap thing dies too, they're one and the same
problem.
On Fri, 2007-07-27 at 07:13 +0200, Mike Galbraith wrote:
> On Thu, 2007-07-26 at 11:05 -0700, Andrew Morton wrote:
> > > drops caches prior to both updatedb runs.
> >
> > I think that was the wrong thing to do. That will leave gobs of free
> > memory for updatedb
On 07/27/2007 02:46 AM, Jesper Juhl wrote:
> On 26/07/07, Andika Triwidada <[EMAIL PROTECTED]> wrote:
> > Might be insignificant, but updatedb calls find (~2M) and sort (~26M).
> > Definitely not RAM intensive though (RAM is 1GB).
> That doesn't match my box at all :
> [ ... ]
This is
> > > the vfs caches. You
> > > wanted 1 there.
> >
> > drops caches prior to both updatedb runs.
>
> I think that was the wrong thing to do. That will leave gobs of free
> memory for updatedb to populate with dentries and inodes.
> >> into blabber-land. Why does swap-prefetch
> >> help updatedb? Or doesn't it? And if it doesn't, why should anyone
> >> trust anything else someone who said it does says?
>
> > I don't think anyone has ever argued that swap-prefetch directly helps
> >
> design doesn't thrash the cache as much. People using
> slocate (distros other than Redhat ;) are going to be hit worse. See
> http://carolina.mff.cuni.cz/~trmac/blog/mlocate/
updatedb by itself doesn't really bug me, it's just that on occasion
it's still running at 7am which then doesn't a
On Thursday 26 July 2007 10:01:11 Rene Herman wrote:
> On 07/26/2007 09:08 AM, Bongani Hlope wrote:
> > On Thursday 26 July 2007 08:56:59 Rene Herman wrote:
> >> Great. Now concentrate on the "swpd" column, as it's the only thing
> >> relevant here. Th