On Thu, 8 Jul 1999, Patryk Zadarnowski wrote:
>
> > Why not put the kernel in a different address space? IIRC there's no
> > absolute requirement for the kernel and userland to be in the same
> > address space, and that way we would have 4 GB for each.
>
> Wouldn't that make system calls that need to share data between kernel
> and user spaces hopeless?

On Wed, 7 Jul 1999 18:21:03 -0700 (PDT)
Matthew Dillon <[EMAIL PROTECTED]> wrote:
> Now, I also believe that when UVM maps those pages, it makes them
> copy-on-write so I/O can be initiated on the data without having to
> stall anyone attempting to make further modifications to ...

... as I've suggested a few days ago, and was told to shut up with a (rather
irrelevant) reference ...
we already use the gs register for SMP now..
what about the fs register?
I vaguely remember that the different segments could be used to achieve
this (%fs points to user space or something)
julian
On Wed, 7 Jul 1999, Matthew Dillon wrote:
:Why not put the kernel in a different address space? IIRC there's no
:absolute requirement for the kernel and userland to be in the same
:address space, and that way we would have 4 GB for each.
:
:Greg
No, the syscall overhead is way too high if we have to mess with MMU
context. This ...
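The cost Matthew is pointing at is exactly what the shared layout avoids: with kernel and user in one address space, copyin()/copyout() reduce to a bounds check plus a straight copy, with no page-table switch or TLB flush per syscall. A rough illustration of that shape, using a simulated flat address space (USER_MAX here is a made-up stand-in for VM_MAXUSER_ADDRESS; this is not the real kernel code):

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Simulated flat address space: "user" addresses live below USER_MAX. */
#define USER_MAX 0x8000
unsigned char addr_space[USER_MAX * 2];

/*
 * Sketch of copyin() when kernel and user share one address space:
 * validate the user range, then copy directly -- no MMU context
 * switch, no TLB flush.  A real kernel also catches page faults
 * during the copy; this sketch only does the range check.
 */
int
copyin_sketch(size_t uaddr, void *kaddr, size_t len)
{
    if (uaddr + len < uaddr ||      /* wraparound */
        uaddr + len > USER_MAX)     /* outside the user range */
        return EFAULT;
    memcpy(kaddr, &addr_space[uaddr], len);
    return 0;
}
```

Putting the kernel in its own 4 GB space would turn every such copy into either an address-space switch or a double mapping, which is the overhead being objected to.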
On Thursday, 8 July 1999 at 9:26:09 +1000, Peter Jeremy wrote:
> David Greenman wrote:
>> Yes, I do - at least with the 512MB figure. That would be half of the 1GB
>> KVA space and large systems really need that space for things like network
>> buffers and other map regions.
>
> Matthew Dillon ...
:On Thu, 08 Jul 1999 08:36:19 +0800
: Peter Wemm <[EMAIL PROTECTED]> wrote:
:
: > Out of curiosity, how does it handle the problem of small 512 byte
: > directories? Does it consume a whole page or does it do something smarter?
: > Or does the ubc work apply to read/write only and the filesystem
:The way this is done in the still-in-development branch of NetBSD's
:unified buffer cache is to basically eliminate the old buffer cache
:interface for vnode read/write completely. When you want to do that
:sort of I/O to a vnode, you simply map a window of the object into
:KVA space (via ubc_alloc()) ...
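For what the map-a-window pattern looks like in practice, here is a userland analogue using plain POSIX mmap() in place of the UBC window allocator (this shows only the shape of the idea, not NetBSD's actual code path):

```c
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

/*
 * Read 'len' bytes at offset 'off' from fd by mapping a window over
 * the file and copying out of it -- the same map/copy/unmap pattern
 * the unified buffer cache uses with KVA windows.  Assumes 'off' is
 * page-aligned; a real implementation would round it down and adjust.
 */
ssize_t
window_read(int fd, void *buf, size_t len, off_t off)
{
    void *win = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, off);

    if (win == MAP_FAILED)
        return -1;
    memcpy(buf, win, len);      /* the "read" is just a copy */
    munmap(win, len);
    return (ssize_t)len;
}
```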
On Thu, 08 Jul 1999 08:36:19 +0800
Peter Wemm <[EMAIL PROTECTED]> wrote:
> Out of curiosity, how does it handle the problem of small 512 byte
> directories? Does it consume a whole page or does it do something smarter?
> Or does the ubc work apply to read/write only and the filesystem itself ...
Jason Thorpe wrote:
> On Wed, 7 Jul 1999 17:03:16 -0700 (PDT)
> Matthew Dillon <[EMAIL PROTECTED]> wrote:
>
> > If this could result in a smaller overall structure, it may be worth it.
> > To really make the combined structure smaller we would also have to
> > pare down the ...
David Greenman wrote:
> Yes, I do - at least with the 512MB figure. That would be half of the 1GB
> KVA space and large systems really need that space for things like network
> buffers and other map regions.
Matthew Dillon <[EMAIL PROTECTED]> wrote:
>What would be an acceptable upper limit?
:>limit ought to work for a 4G machine
:>
:>Since most of those news files were small, I think Kirk's news test code
:>is pretty much the worse case scenario as far as vnode allocation goes.
:
: Well, I could possibly live with 256MB, but the vnode/fsnode consumption
:seems to be getting ...
>: Yes, I do - at least with the 512MB figure. That would be half of the 1GB
>:KVA space and large systems really need that space for things like network
>:buffers and other map regions.
>:
>:-DG
>:
>:David Greenman
>:Co-founder/Principal Architect, The FreeBSD Project - http://www.freebsd.org
> Since we have increased the hard page table allocation for the kernel to
> 1G (?) we should be able to safely increase VM_KMEM_SIZE_MAX. I was
> thinking of increasing it to 512MB. This increase only affects
> large-memory systems. It keeps them from locking up :-)
>
> Anyone ...
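The knob being discussed is the ceiling in a scale-then-clamp computation: size the kmem map as a fraction of physical memory, bounded below and above at compile time. A sketch of the shape of it (the floor value and the divide-by-3 scale below are illustrative stand-ins, not the exact values in the kernel source):

```c
#include <stdint.h>

/* Illustrative stand-ins for the kernel options being discussed. */
#define VM_KMEM_SIZE        (12ULL * 1024 * 1024)   /* floor */
#define VM_KMEM_SIZE_SCALE  3ULL                    /* physmem / scale */
#define VM_KMEM_SIZE_MAX    (512ULL * 1024 * 1024)  /* proposed new cap */

/*
 * Compute the kmem map size from physical memory: scale physmem,
 * then clamp between the compile-time floor and ceiling.  Raising
 * VM_KMEM_SIZE_MAX only moves the ceiling, so small-memory boxes
 * (which land between floor and ceiling) are unaffected.
 */
uint64_t
kmem_size(uint64_t physmem_bytes)
{
    uint64_t sz = physmem_bytes / VM_KMEM_SIZE_SCALE;

    if (sz < VM_KMEM_SIZE)
        sz = VM_KMEM_SIZE;
    if (sz > VM_KMEM_SIZE_MAX)
        sz = VM_KMEM_SIZE_MAX;
    return sz;
}
```

On a 4 GB box the scaled value blows straight past the old cap, which is why only large-memory systems would notice the change.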