Neal H. Walfield wrote:
> [...] We are also very interested in discussing reactions to
> this proposal.
The paper tries to articulate problems and solutions. While I think it does so pretty well, the information density in the text is very high (certainly due to the five-page limit imposed), and therefore it is relatively hard to "fill in the gaps" of the proposed low-level system structure (i.e. resource pools). I think it would be great if you could elaborate on that a bit and make the paper more concrete. I will outline my present understanding (or rather, interpolation) of it below, in the hope of making the task easier for you (so you "just" have to correct the mistakes, which, however, will certainly be many).

A resource pool is an abstract entity for the accounting of resources (e.g. CPU time). Associated with it is a scheduling policy (which might, for example, include a quota). A resource pool supports these operations: resources can be allocated from it and deallocated; subpools can be created (with an equal policy); the scheduling policy can be changed, but only to become "worse" than before (e.g. a smaller quota, say to shrink the resources available from a subpool that is to be given to a child process); and the pool can be destroyed, which destroys all subpools created from it as well. The mechanism of creating subpools effectively organizes resource pools into a tree. There exists a "master pool" from which all other pools are (directly or indirectly) derived, whose scheduling policy basically says "all of the resource is available from this pool".

Now there are three types of resource pools: for CPU time, for main memory, and for backing-store memory. The pools for CPU time are the clearest, I think: they have policies like "at least 5% of the available time, with priority 7 to get more time" or "run at least once every 10ms for 1ms", allowing for both real-time and time-shared processing. What other examples of scheduling policies exist? Also, it is not clear to me how the policies creatable in this way can be ordered (e.g., can a pool of the first exemplified type be created as a subpool of one with the second example's policy?).
I don't think CPU time pools are meant to be passed to servers. Although this would improve accounting, it would also greatly increase the complexity of the server and require special kernel support, as has been discussed on the list (or on coyotos-dev?).

The pools for main memory and backing-store memory are more complicated, as they interact. Main memory policies are probably in the spirit of "at least 25 pages accessible at any time" (or maybe even "exactly 25 pages accessible at any time"), with backing-store policies being similar. What other policies exist? How are they ordered? And what happens when pages have to be freed (e.g. because the parent shrinks the main memory pool size)?

As memory pools are to be passed to servers, it must be possible to specify from which backing-store pool the backing-store memory is to be taken when a page is written to disk. So, as it appears to me, an allocation of a main memory page has to take a backing-store allocation as an argument (potentially void, indicating discardability), and, as explained in the paper, a priority that gives the order in which pages are to be freed. "Backing-store pages" can be allocated at will.

What happens when the page with the lowest priority is paged out, then referenced, which in turn causes the second-lowest-priority page to be freed, which is then touched next and paged in, causing the lowest-priority page to be freed, and so on? That is, how are malicious applications stopped from slowing down the system by dictating a very bad page-out policy?

--
-ness-

_______________________________________________
l4-hurd mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/l4-hurd
