On Wed, Sep 19, 2001 at 01:52:12PM -0400, Rodent of Unusual Size wrote:
> Greg Stein wrote:
> > It isn't a bug. Cleanups are for just wrapping things up,
> > not doing work.
>
> If that's the authoritative answer, then we need to provide
> a supported way for 'doing work' at cleanup time.
>
> > You might not even be able to open and use that file if
> > your pool is in the process of being destroyed.
>
> That sounds like utter tripe. If you can't depend on the
> pool lasting at least until your cleanup routine ends,
> then the whole cleanup mechanism is seriously borked. AFAIK
> it isn't, so I think the above assertion *is*.
The problem is cross-dependencies between the cleanup actions: one cleanup
can destroy something that another cleanup needs. If you restrict cleanups
to "simple" teardown, then the cross-dependencies are (hopefully!) removed.
Consider a platform that creates a subpool to manage a set of file
descriptors for the apr_file_t objects. That subpool is linked from the
userdata of the specified pool. At cleanup time, that subpool has already
been tossed, so you cannot create files on the given pool.
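To make the failure mode concrete, here is a toy model in Python (purely
illustrative; the class and function names are hypothetical, not APR's real
API). The subpool of file descriptors is torn down before the parent pool's
cleanups run, so a cleanup that tries to create a file through it is out of
luck:

```python
class Pool:
    """Toy pool: subpools are destroyed before registered cleanups run."""

    def __init__(self):
        self.userdata = {}   # e.g. userdata["fd_subpool"] -> Pool
        self.cleanups = []   # run at destroy time, last-registered first
        self.subpools = []
        self.alive = True

    def make_subpool(self):
        sub = Pool()
        self.subpools.append(sub)
        return sub

    def destroy(self):
        # subpools are torn down first...
        for sub in self.subpools:
            sub.destroy()
        # ...and only then do the registered cleanups run
        for cleanup in reversed(self.cleanups):
            cleanup(self)
        self.alive = False


def open_file(pool):
    """Creating a file needs the fd subpool to still be alive."""
    sub = pool.userdata.get("fd_subpool")
    if sub is None or not sub.alive:
        return "open failed: fd subpool already destroyed"
    return "open ok"
```

Before destruction, `open_file(pool)` succeeds; a cleanup that calls it
during `destroy()` sees the subpool already gone.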
Or maybe it is as simple as this: APR registers its own cleanup on the pool,
and that cleanup clears out some basic structure needed for creating FOO
objects -- and APR's cleanup happens to run before yours.
The basic issue is that cleanup ordering is generally non-deterministic. One
cleanup could run before another and blast some basic service, preventing
the second cleanup from doing "work". Think about database connection pools,
shared memory segments, temporary files, etc. The list goes on and on.
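A tiny sketch of that ordering hazard (again hypothetical, not APR code):
two cleanups share a database service, and whichever registration order puts
the shutdown first leaves the other cleanup unable to do its "work":

```python
class DBService:
    """Stand-in for a shared service (db pool, shm segment, ...)."""

    def __init__(self):
        self.up = True

    def query(self):
        return "rows" if self.up else "error: service torn down"

    def shutdown(self):
        self.up = False


def flush_stats(db, log):
    # tries to do real "work" at cleanup time
    log.append("flush: " + db.query())


def close_db(db, log):
    db.shutdown()
    log.append("db closed")


def run_cleanups(cleanups, db):
    """Run cleanups in whatever order they happen to be in."""
    log = []
    for cleanup in cleanups:
        cleanup(db, log)
    return log
```

Run `[flush_stats, close_db]` and the flush sees live rows; run
`[close_db, flush_stats]` and the flush finds the service already gone.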
That said: if you want to create a second list of "do some work when the
pool is cleared" hooks, then fine. As long as it's understood that *that*
list is for non-destructive work, and the cleanups are for destructive
work. The work-list can run before subpools are tossed, which means the
pools are "unchanged".
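One way that two-phase scheme could look (a sketch of the idea, not an
actual APR design; names are made up): the work-list runs while the pool and
its subpools are still intact, and only then do subpool teardown and the
destructive cleanups happen:

```python
class Pool:
    """Toy pool with a non-destructive work-list ahead of the cleanups."""

    def __init__(self):
        self.work = []       # non-destructive "do work on clear" hooks
        self.cleanups = []   # destructive teardown, last-registered first
        self.subpools = []
        self.alive = True

    def destroy(self, log):
        # phase 1: work hooks see the pool "unchanged"
        for hook in self.work:
            hook(self, log)
        # phase 2: subpools go away
        for sub in self.subpools:
            sub.destroy(log)
        self.subpools = []
        # phase 3: destructive cleanups run last
        for cleanup in reversed(self.cleanups):
            cleanup(self, log)
        self.alive = False
```

A work hook registered on the pool still sees a live subpool; a cleanup
registered on the same pool runs after the subpools are gone.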
Cheers,
-g
p.s. "utter tripe" indeed... that was rather inflammatory...
--
Greg Stein, http://www.lyra.org/