On Jan-02, Leopold Toetsch wrote:
> Steve Fink wrote:
>
> > Another (maybe silly) possibility suggested itself to me based on a
> > private mail re: infant mortality from Thomas Whateley: could we try
> > optimistic allocations?
> >
> >     if (alloc_object() == NULL) {
> >         undo everything
> >         do_DOD_run
> >         interp->on_alloc_fail = CRASH_AND_BURN
> >         start over
> >     }
>
> What if e.g. clone needs more headers than one dod run can provide? With
> CRASH_AND_BURN it would not succeed. With RETURN_NULL it would start
> over and over again, until objects_per_alloc increases the needed header
> count.
I think it's reasonable not to guarantee 100% perfect memory usage, if it makes the problem significantly easier. _How much_ to relax that constraint is another question.

I was thinking that it might be okay to say that any single operation (even one that could conceivably trigger many other operations) must fit all of its allocations within the total amount of memory available when it starts. Or, in short, I'm saying that maybe it's all right for it to crash even though it could actually succeed if you ran a DOD immediately before every single allocation.

A single opcode is probably too constraining a granularity, though. We'd probably have to do better for operations that can trigger arbitrary numbers of other operations. But I'm not convinced enough of the utility of the general approach, so I'm not going to try to figure out how to make the granularity smaller.