Matt Dillon wrote:
>
> You may be able to achieve an effect very similar to mlock(), but
> runnable by the 'news' user without hacking the kernel, by
> writing a quick little C program to mmap() the two smaller history
> files and then madvise() the map using MADV_WILLNEED in a loop
Matt Dillon wrote:
>
> One possible fix would be to have the kernel track cache hits and misses
> on a file and implement a heuristic from those statistics which is used
> to reduce the 'initial page weighting' for pages read-in from the
> 'generally uncacheable file'. This would
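A toy userland model of that heuristic (entirely hypothetical; in a real kernel the counters would hang off the vnode or VM object) could look like:

```c
/* Hypothetical sketch of the heuristic described above: per-file
 * hit/miss counters drive the initial weighting given to newly
 * read-in pages, so a file that historically misses the cache (e.g. a
 * huge, sequentially-hammered history file) starts its pages closer
 * to eviction. */
struct file_cache_stats {
    unsigned long hits;
    unsigned long misses;
};

/* Full initial weight is 4 (think of it as a starting act_count); a
 * file whose accesses mostly miss gets new pages weighted toward 0. */
int initial_page_weight(const struct file_cache_stats *st)
{
    unsigned long total = st->hits + st->misses;

    if (total < 100)        /* too few samples: default weighting */
        return 4;
    /* scale 0..4 by the observed hit ratio */
    return (int)((st->hits * 4) / total);
}
```

The cutoff of 100 samples and the 0..4 range are made-up parameters, chosen only to make the idea concrete.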
Excellent.
What I believe is going on is that without the madvise()/mlock() the
general accesses to the 1 GB main history file are causing the pages to be
flushed from the .hash and .index files too quickly. The performance
problems in general appear to be due to the system
> :The mlock man page refers to some system limit on wired pages; I get no
> :error when mlock()'ing the hash file, and I'm reasonably sure I tweaked
> :the INN source to treat both files identically (and on the other machines
> :I have running, the timestamps of both files remain pretty much unchanged)
:To recap, the difference here is that by cheating, I was able to mlock
:one of the two files (the behaviour I was hoping to be able to achieve
:through first MAP_NOSYNC alone, then in combination with MADV_WILLNEED
:to keep all the pages in memory as much as possible) and achieve a much
:improved
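The "system limit on wired pages" that the mlock man page refers to surfaces as RLIMIT_MEMLOCK; a quick way to inspect it (a sketch, not INN code) is:

```c
#include <sys/resource.h>

/* Return the soft limit, in bytes, on memory this process may wire
 * with mlock(); -1 on error, -2 if the limit is infinite.  Note that
 * historically mlock() succeeded for root regardless of the limit,
 * which is one reason an over-limit mlock() may not return the error
 * you expect. */
long long memlock_limit(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_MEMLOCK, &rl) < 0)
        return -1;
    if (rl.rlim_cur == RLIM_INFINITY)
        return -2;
    return (long long)rl.rlim_cur;
}
```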
I wouldn't worry about madvise() too much. 4.2 has a really good
heuristic that figures it out for the most part.
(still reading the rest of your postings)
-Matt
To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message
Howdy,
I'm going to breach all sorts of ethics in the worst way by following
up to my own message, just to throw out some new info... 'kay?
Matt wrote, and I quote --
: > However, I noticed something interesting!
Of course I clipped away the interesting Thing, but note the following
that I saw
> ok, since I got about 6 requests in four hours to be Cc'd, I'm
> throwing this back onto the list. Sorry for the double-response that
> some people are going to get!
Ah, good, since I've been deliberately avoiding reading mail in an
attempt to get something useful done in my last
ok, since I got about 6 requests in four hours to be Cc'd, I'm
throwing this back onto the list. Sorry for the double-response that
some people are going to get!
I am going to include some additional thoughts in the front, then break
to my originally private email response.
> I'm going to take this off of hackers and to private email. My reply
> will be via private email.
Actually, I was enjoying the discussion, since I was learning something
in the process of hearing you debug this remotely.
It sure beats the K&R vs. ANSI discussion. :)
Nate
:errr then keep me in the CC
:
:it's interesting
:
:--
: __--_|\ Julian Elischer
: / \ [EMAIL PROTECTED]
Sure thing. Anyone else who wants to be in the Cc, email me.
-Matt
Matt Dillon wrote:
I'm going to take this off of hackers and to private email. My reply
will be via private email.
-Matt
> :but at last look, history lookups and writes are accounting for more
> :than half (!) of the INN news process time, with available idle time
> :being essentially zero. So...
>
> No idle time? That doesn't sound like blocked I/O to me, it sounds
> like the machine has run out of cpu.
:closely the pattern of what happens to the available memory following
:a fresh boot... At the moment, this (reader) machine has been up for
:half a day, with performance barely able to keep up with a full feed
:(but starting to slip as the overnight burst of binaries is starting),
:but at last l
> :> Personally speaking, I would much rather use MAP_NOSYNC anyway, even with
> :...
> :Everything starts out well, where the history disk is beaten at startup
> :but as time passes, the time taken to do lookups and writes drops down
> :to near-zero levels, and the disk gets quiet. And act
:> Personally speaking, I would much rather use MAP_NOSYNC anyway, even with
:> a fixed filesystem syncer. MAP_NOSYNC pages are not restricted by
:...
:
:Yeah, no kidding -- here's what I see it screwing up. First, some
:background:
:
:I've built three news machines, two transit boxen a
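A minimal MAP_NOSYNC sketch, for reference (MAP_NOSYNC is a FreeBSD-specific flag; the guard below makes the code a harmless no-op elsewhere, and the path and length are made up):

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MAP_NOSYNC      /* FreeBSD-specific; no-op on other systems */
#define MAP_NOSYNC 0
#endif

/* Map `path` read/write with MAP_NOSYNC so dirty pages are written
 * back only at msync()/munmap() time, not by the periodic filesystem
 * syncer.  Dirties one byte through the mapping as a demonstration.
 * Returns 0 on success, -1 on failure. */
int nosync_touch(const char *path, size_t len)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);

    if (fd < 0)
        return -1;
    if (ftruncate(fd, (off_t)len) < 0) {
        close(fd);
        return -1;
    }
    char *base = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_NOSYNC, fd, 0);
    if (base == MAP_FAILED) {
        close(fd);
        return -1;
    }
    base[0] = 'x';      /* dirty the first page through the mapping */
    int ok = (base[0] == 'x') ? 0 : -1;
    munmap(base, len);
    close(fd);
    return ok;
}
```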
Long ago, it was written here on 25 Oct 2000 by Matt Dillon:
> :Consider that a file with a huge number of pages outstanding
> :should probably be stealing pages from its own LRU list, and
> :not the system, to satisfy new requests. This is particularly
> :true of files that are demanding resour
On Wed, 25 Oct 2000 21:54:42 +0000 (GMT), Terry Lambert <[EMAIL PROTECTED]> wrote:
>I think the idea of a fixed limit on the FS buffer cache is
>probably wrong in the first place; certainly, there must be
>high and low reserves, but:
>
>|--| all of memor
:On Tue, Oct 24, 2000 at 01:10:19PM -0700, Matt Dillon wrote:
:> Ouch. The original VM code assumed that pages would not often be
:> ripped out from under the pageadaemon, so it felt free to restart
:> whenever. I think you are absolutely correct in regards to the
:> clustering c
Here's a test patch, inclusive of some debugging sysctls:
vm.always_launder set to 1 to give up on trying to avoid
pageouts.
vm.vm_pageout_stats_rescans
Number of times the main inactive scan
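With the patch applied, those knobs would presumably be exercised something like this (the sysctl names exist only on a patched kernel, so treat this as a fragment):

```shell
# enable unconditional laundering (patched kernels only)
sysctl vm.always_launder=1

# watch the debugging counter the patch adds
sysctl vm.vm_pageout_stats_rescans
```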
> :Consider that a file with a huge number of pages outstanding
> :should probably be stealing pages from its own LRU list, and
> :not the system, to satisfy new requests. This is particularly
> :true of files that are demanding resources on a resource-bound
> :system.
>
> This isn't exactly
> > I'll take a crack at implementing the openbsd (or was it netbsd?) partial
> > fsync() code as well, to prevent the update daemon from locking up large
> > files that have lots of dirty pages for long periods of time.
>
> Making the partial fsync would help some people but probably not
> these
:Ok, now I feel pretty lost, how is there a relationship between
:max_page_launder and async writes? If increasing max_page_launder
:increases the amount of async writes, isn't that a good thing?
The async writes are competing against the rest of the system
for disk resources. While it
* Matt Dillon <[EMAIL PROTECTED]> [001024 15:32] wrote:
:The people getting hit by this are Yahoo! boxes, they have giant areas
:of NOSYNC mmap'd data, the issue is that for them the first scan through
:the loop always sees dirty data that needs to be written out. I think
:they also need a _lot_ more than 32 pages cleaned per pass because all
:of thi
Ouch. The original VM code assumed that pages would not often be
ripped out from under the pageadaemon, so it felt free to restart
whenever. I think you are absolutely correct in regards to the
clustering code causing nearby-page ripouts.
I don't have much time available, bu
Matt, I'm not sure if Paul mailed you yet so I thought I'd take the
initiative of bugging you about some reported problems (lockups)
when dealing with machines that have substantial MAP_NOSYNC'd
data along with a page shortage.
What seems to happen is that vm_pageout_scan (src/sys/vm/vm_pageout.c