On Monday, 20 March 2000 at 14:04:48 -0800, Matthew Dillon wrote:
>
>     If a particular subsystem needs b_data, then that subsystem is obviously
>     willing to take the virtual mapping / unmapping hit.  If you look at
>     Greg's current code this is, in fact, what is occurring.... the critical
>     path through the buffer cache in a heavily loaded system tends to require
>     a KVA mapping *AND* a KVA unmapping on every buffer access (just that the
>     unmappings tend to be for unrelated buffers).  The reason this occurs
>     is because even with the larger amount of KVA we made available to the
>     buffer cache in 4.x, there still isn't enough to leave mappings intact
>     for long periods of time.  A 'systat -vm 1' will show you precisely
>     what I mean (also sysctl -a | fgrep bufspace).
>
>     So we will at least not be any worse off than we are now, and probably
>     better off since many of the buffers in the new system will not have
>     to be mapped.  For example, when vinum's RAID5 breaks up a request
>     and issues a driveio() it passes a buffer which is assigned to b_data
>     which must be translated (through page table lookups) to physical
>     addresses anyway, so the fact that vinum does not populate
>     b_pages[] does *NOT* help it in the least.  It actually makes the job
>     harder.
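
A rough sketch of the relationship Matt describes, in case it helps --
the function names are mine, and the pmap_qenter()/vtophys() usage is
from memory of the 4.x interfaces rather than checked against the
tree, so read it as illustration only:

#include <sys/param.h>
#include <sys/buf.h>
#include <vm/vm.h>
#include <vm/pmap.h>

/*
 * Giving a buffer a b_data mapping means entering its pages into a
 * reserved KVA window -- the mapping/unmapping cost described above.
 */
static void
map_buffer_kva(struct buf *bp, vm_offset_t kva)
{
        pmap_qenter(kva, bp->b_pages, bp->b_npages);
        bp->b_data = (caddr_t)kva;
}

/*
 * A driver that is handed only b_data and needs physical addresses
 * for DMA has to walk the page tables for every page anyway, so an
 * unpopulated b_pages[] saves it nothing.
 */
static void
scatter_from_kva(struct buf *bp)
{
        caddr_t va = bp->b_data;
        long resid;

        for (resid = bp->b_bcount; resid > 0; resid -= PAGE_SIZE) {
                vm_offset_t pa = vtophys((vm_offset_t)va);

                (void)pa;       /* would go onto the controller's S/G list */
                va += PAGE_SIZE;
        }
}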

I think you may be confusing two things, though it doesn't seem to
make much difference.  driveio() is used only for accesses to the
configuration information; normal Vinum I/O goes via launch_requests()
(in vinumrequest.c).  And it's not just RAID-5 that breaks up a
request, it's any access that goes over more than one subdisk (even
concatenated plexes in exceptional cases).
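
To make the splitting step concrete -- this is only a toy version, not
launch_requests() itself, and struct subdisk / issue_chunk() are
made-up names -- a transfer is walked across whichever subdisks it
touches:

#include <sys/types.h>

struct subdisk {
        daddr_t start;          /* first plex block covered */
        daddr_t len;            /* length in blocks */
};

static void issue_chunk(struct subdisk *, daddr_t, daddr_t);

/*
 * Break a plex-relative request [offset, offset + len) into one piece
 * per subdisk it touches; assumes the subdisks cover the plex
 * contiguously and in order.
 */
static void
split_request(struct subdisk *sd, int nsd, daddr_t offset, daddr_t len)
{
        daddr_t sdend, chunk;
        int i;

        for (i = 0; i < nsd && len > 0; i++) {
                sdend = sd[i].start + sd[i].len;
                if (offset >= sdend)
                        continue;       /* starts beyond this subdisk */
                chunk = sdend - offset;
                if (chunk > len)
                        chunk = len;
                issue_chunk(&sd[i], offset - sd[i].start, chunk);
                offset += chunk;
                len -= chunk;
        }
}

Whether the plex is concatenated, striped or RAID-5, anything crossing
a subdisk boundary goes through a step like this before the low-level
requests are issued.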

Greg
--
Finger [EMAIL PROTECTED] for PGP public key
See complete headers for address and phone numbers

