With regard to the mark phase, the CLR GC (and most decent generational
collectors) reduces the time spent marking significantly by ignoring the
inner references of older-generation objects. However, an older object may
have been written to and may now hold a reference to a newer object, so the
compiler inserts a cheap write barrier that sets a bit in a card table
whenever a reference is written (in this card table each bit covers 128
bytes of memory, so one DWORD covers a 4 KB page). Since the collector
skips older objects that haven't changed, it is likely to take far fewer
page faults, because those are exactly the objects whose pages are most
likely to have been evicted. Obviously a full mark and sweep of the oldest
generation will still be an issue, but the memory volatility of those
objects is normally very low and they shouldn't require many collections.
While you could go further by exposing the page state to the collector
(which in this case could potentially avoid the write barrier), that
introduces tighter coupling and more complexity, and most GCs are
cross-platform. However, for a language-based OS like Singularity, where
there is only one GC on the whole machine, it has some merit.
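The barrier itself is tiny; here is a rough C sketch of the idea (the
sizes and layout are illustrative, not the CLR's actual implementation):

```c
#include <stdint.h>
#include <stddef.h>

#define HEAP_SIZE   (64 * 1024 * 1024)      /* illustrative heap size */
#define CARD_SHIFT  7                        /* one bit per 128 bytes  */
#define NUM_CARDS   (HEAP_SIZE >> CARD_SHIFT)

static uint8_t heap[HEAP_SIZE];
static uint8_t card_table[NUM_CARDS / 8];    /* one bit per card */

/* The compiler inserts this (or an inlined equivalent) at every
 * reference store into the heap: do the store, then mark the card
 * covering the written address as dirty. */
static void write_barrier(void *slot, void *new_ref)
{
    *(void **)slot = new_ref;                /* the actual store */
    size_t card = (size_t)((uint8_t *)slot - heap) >> CARD_SHIFT;
    card_table[card / 8] |= (uint8_t)(1u << (card % 8));
}

/* A minor collection rescans only dirty cards for old-to-young
 * references; clean older objects are skipped entirely. */
static int card_is_dirty(size_t card)
{
    return (card_table[card / 8] >> (card % 8)) & 1;
}
```

The point is that the per-store cost is a shift, an index, and an OR,
which is why it can be inserted unconditionally by the compiler.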

With regard to pages being written out: nurseries will require continual
page writes, and when compaction occurs most of the address space will be
modified as well. (Though some GCs could play games with reordering VM
pages to compact efficiently, provided the pages haven't changed; again,
the write barrier above is very useful here.)
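On Linux-like systems that page-reordering trick could, in principle, look
something like this hypothetical sketch using mremap(2) — move a clean
page of the heap to a new virtual address by repointing page-table
entries instead of copying the bytes (a real collector would need far
more care with dirty and shared pages):

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <string.h>
#include <unistd.h>

/* Relocate one region of "heap" to a new virtual address without
 * copying its contents: the kernel just re-points the page-table
 * entries.  Only reasonable for clean, private pages; a dirty page
 * would have to be copied the normal way. */
static void *remap_region(void *src, void *dst, size_t len)
{
    /* MREMAP_FIXED atomically replaces whatever mapping is at dst. */
    return mremap(src, len, len, MREMAP_MAYMOVE | MREMAP_FIXED, dst);
}
```

Whether the bookkeeping to know which pages are clean is worth it is
exactly the kind of GC/pager coupling Bill's note is asking about.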

I thought the two main OSes (Linux and Windows) had given up on the
concept of LRU-style paging, since:
1. GCs will have an artificially high ranking and steal pages, and the OS
has no control over the variety of GCs in use; at least JavaScript, .NET
and Java are present on most Windows systems.
2. The caches-plus-paging problem, where the page just evicted is the one
most likely to be used next (not just disk buffer caches, but all caches).
They now de-emphasize LRU, use paging mainly as a VM allocation mechanism,
and base eviction more on the working set.

Also, why should there be many page faults on modern systems? By the time
an OS designed now is released, you are looking at either mobile-phone
systems (or smaller) with no swap space, or 8 GB, 8-core machines; in
either case a new OS with few bloated apps should have plenty of memory.

Regards, 

Ben

 >-----Original Message-----
 >From: Bill Frantz [mailto:[email protected]]
 >Sent: Wednesday, February 24, 2010 4:22 AM
 >To: CapROS development; Design
 >Subject: [CapROS-devel] Paging and Garbage Collection
 >
 >These are some thoughts on the interaction between paging and garbage
 >collection that I thought I should get down into electrons and find out
 >how
 >others react to them.
 >
 >Assumptions:
 >
 >(1) The mark phase of a garbage collector is not a particularly good
 >paging
 >citizen. It has a fairly large working set, and is referencing many
 >pages.
 >Many of the objects it needs to mark, while still live, are likely to be
 >not recently used. If it marks objects in the object representation
 >itself,
 >it will also change the pages, requiring them to be written out.
 >
 >(2) The sweep phase, depending on the details of the algorithm used, may
 >also visit a large number of pages, and may also modify those pages.
 >
 >
 >Observations:
 >
 >(1) If the pager allows the garbage collector to reference objects on
 >pages
 >that are about to be cleaned, or about to be removed from memory, it may
 >be
 >possible to reduce the number of page faults caused by garbage
 >collection.
 >How one might arrange this kind of interaction in CapROS or similar
 >systems
 >is left as an exercise for the student. :-)
 >
 >Cheers - Bill
 >
 >------------------------------------------------------------------------
 >---
 >Bill Frantz        |"After all, if the conventional wisdom was working,
 >the
 >408-356-8506       | rate of systems being compromised would be going
 >down,
 >www.periwinkle.com | wouldn't it?" -- Marcus Ranum
 >
 >------------------------------------------------------------------------
 >------
 >Download Intel® Parallel Studio Eval
 >Try the new software tools for yourself. Speed compiling, find bugs
 >proactively, and fine-tune applications for parallel performance.
 >See why Intel Parallel Studio got high marks during beta.
 >http://p.sf.net/sfu/intel-sw-dev
 >_______________________________________________
 >CapROS-devel mailing list
 >[email protected]
 >https://lists.sourceforge.net/lists/listinfo/capros-devel

