On 22 December 2014 at 06:43, David Gwynne <da...@gwynne.id.au> wrote:
> this introduces a global gc task that loops over all the pools
> looking for pages that haven't been used for a very long time so
> they can be freed.
>
> this is the simplest way of doing this without introducing per pool
> timeouts/tasks which in turn could introduce races with pool_destroy,
> or more shared global state that gets touched on pool_get/put ops.
>
> im adding the timeout in src/sys/kern/init_main.c cos having pool_init
> do it causes faults when a pool is created before the timeout
> subsystem is set up.
>
> i have tested this on sparc64, amd64, and hppa, and it seems to do
> the advertised job.
>
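for anyone following along, here's a rough userland sketch of the idea
the diff describes: one periodic pass that walks every pool and frees
empty pages that have sat idle past a threshold, using a trylock so the
gc never stalls behind a busy pool.  the pool layout, the names, and the
POOL_GC_IDLE threshold are all illustrative assumptions of mine, not the
actual kernel code.

```c
#include <pthread.h>
#include <stddef.h>

#define POOL_GC_IDLE 8 /* ticks a page may sit unused; assumed value */

struct pool_page {
	struct pool_page *next;
	unsigned long last_used;	/* tick of the last op touching it */
};

struct pool {
	pthread_mutex_t lock;
	struct pool_page *empty_pages;	/* idle pages, gc candidates */
	unsigned long nfreed;
};

/* one gc pass: unlink every empty page idle longer than POOL_GC_IDLE */
static void
pool_gc_pass(struct pool *pools, int npools, unsigned long now)
{
	int i;

	for (i = 0; i < npools; i++) {
		struct pool *pp = &pools[i];
		struct pool_page **prev;

		/* trylock: skip contended pools rather than blocking */
		if (pthread_mutex_trylock(&pp->lock) != 0)
			continue;

		prev = &pp->empty_pages;
		while (*prev != NULL) {
			struct pool_page *ph = *prev;

			if (now - ph->last_used > POOL_GC_IDLE) {
				*prev = ph->next; /* real code frees it */
				pp->nfreed++;
			} else
				prev = &ph->next;
		}
		pthread_mutex_unlock(&pp->lock);
	}
}
```

the trylock is the point: a single low-priority pass can afford to miss
a pool this round and catch it next time, so it never adds latency to
the hot pool_get/put paths.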

i have a couple of questions to understand the reasoning behind this.

1) how is it different from what we have now?

2) why can't you do it when you pool_put?

3) why can't you call it from uvm when there's memory pressure since
   from what i understand pool_reclaim was supposed to work like that?

4) i assume you don't want to call pool_reclaim from pool_gc_pages
   because of the mutex_enter_try benefits, but why does logic in
   these functions differ, e.g. why did you omit the pr_itemsperpage
   bit?

5) why is 8 ticks a "very long time" before a page is freed?
   why not 10 seconds?

6) why is there an XXX next to the splx?

7) it looks like pool_reclaim_all should also raise an IPL since it
   does the same thing.  wasn't it noticed before?
