Don't worry....ooRexx has a C++ version :-) Small requests end up in one pool to reduce fragmentation and also speed allocation time. Larger blocks of storage are segregated into a different memory pool that uses different allocation heuristics. But it's a tough balancing act trying to predict future memory requirements based on past usage.
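The two-pool scheme described above can be sketched roughly as follows. This is a minimal illustration, not ooRexx's actual allocator: all names (`TwoPoolAllocator`, `SMALL_LIMIT`, `CHUNK`) and the size-class boundary are assumptions, and `malloc` stands in for the real segment management.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <vector>

// Hypothetical sketch of a two-pool allocator (names and sizes are
// illustrative, not ooRexx's): requests up to SMALL_LIMIT are served
// from a single pool of equal-sized recycled chunks, which reduces
// fragmentation and makes allocation a fast free-list pop. Larger
// requests fall through to malloc, standing in for a separate
// large-block segment with its own heuristics.
class TwoPoolAllocator {
    static const size_t SMALL_LIMIT = 256;  // size-class boundary (assumed)
    static const size_t CHUNK = 256;        // every small chunk is one size
    std::vector<void*> freeList;            // recycled small chunks

public:
    void* allocate(size_t bytes) {
        if (bytes <= SMALL_LIMIT) {
            if (!freeList.empty()) {        // fast path: pop a recycled chunk
                void* p = freeList.back();
                freeList.pop_back();
                return p;
            }
            return std::malloc(CHUNK);      // pool refill (one chunk at a time here)
        }
        return std::malloc(bytes);          // large path: separate pool in a real heap
    }

    void release(size_t bytes, void* p) {
        if (bytes <= SMALL_LIMIT)
            freeList.push_back(p);          // small chunks return to the pool
        else
            std::free(p);                   // large blocks are freed directly
    }
};
```

Because every small chunk is the same size, any freed chunk can satisfy any later small request, which is exactly why this layout resists fragmentation; the balancing act Rick mentions is in choosing the boundary and pre-sizing each pool.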
Rick

On Mon, Mar 30, 2009 at 8:37 AM, Mike Cowlishaw <[email protected]> wrote:
> Yes, you're probably out of luck if using malloc.
>
> And now a bit of history is coming back to me -- it was for exactly this
> problem (I think) that I wrote the FASTFREE package for CMS .. which
> hooked in instead of SYSGETM/FREEM and significantly improved Rexx
> performance. Hmm, I think I have a C version somewhere....
>
> Mike
>
> - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
> Mike Cowlishaw, IBM Fellow
> http://bit.ly/mfc
> IBM UK (MP8), PO Box 31, Birmingham Road, Warwick, CV34 5JL
>
> Rick McGuire <[email protected]> wrote on 30/03/2009 13:10:29:
>
>> Subject: Re: [Oorexx-devel] System resources exhausted
>>
>> It actually attempts to do that. I spent a fair amount of time this
>> weekend debugging that code, since I suspected there was something
>> preventing the mergers from happening. However, it turns out to be a
>> fairly rare event in practice for requests to end up adjacent to each
>> other. On Windows, I suspect there's a minimum granular size to the
>> request needed to make that happen. On Linux, I think malloc has a
>> header on each allocated block, so it never seems to occur. One
>> thought would be to try to release all segments that might be empty,
>> in the hope that the OS might be able to merge them into a larger
>> block. This, however, would require significantly more rework to the
>> garbage collector than I'm willing to do right before a release.
>>
>> Rick
>>
>> On Mon, Mar 30, 2009 at 8:04 AM, Mike Cowlishaw <[email protected]> wrote:
>> >> The results were somewhat surprising. First of all, the performance
>> >> was noticeably worse.
>> >> The interpreter was essentially in a continuous state of garbage
>> >> collection. AND, on top of that, it ran out of memory at 4.75Mb
>> >> rather than 7.5Mb. I tried this using multipliers of 2x, 3x, 4x.
>> >> The bigger the multiplier, the worse the performance when things
>> >> crossed the tipping point. All of these end up giving out of memory
>> >> in the same area.
>> >
>> > Well, thanks for trying!
>> >
>> > Wonders .. could the heap conglomerate adjacent pieces of storage as
>> > they're released (I have no idea of the internals of ooRexx)?
>> >
>> > Mike
>> >
>> > Unless stated otherwise above:
>> > IBM United Kingdom Limited - Registered in England and Wales with
>> > number 741598.
>> > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire
>> > PO6 3AU

------------------------------------------------------------------------------
_______________________________________________
Oorexx-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/oorexx-devel
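The coalescing behaviour debated in the thread above can be sketched as follows. This is an illustrative model only (the `FreeMap` type and `releaseAndCoalesce` function are hypothetical, not ooRexx code): a released block merges with a neighbour only when one block ends exactly where the other begins, which a per-block malloc header between allocations defeats — consistent with Rick's observation that merges are rare in practice.

```cpp
#include <cstddef>
#include <cstdint>
#include <map>

// Free blocks tracked by start address. On release, a block is merged
// with the following block if it ends exactly where that block begins,
// and with the preceding block if that one ends exactly where the
// released block begins. Any gap between them (for example, a malloc
// header on each allocation) prevents the merge.
typedef std::map<uintptr_t, size_t> FreeMap;  // start address -> length

void releaseAndCoalesce(FreeMap& freeBlocks, uintptr_t start, size_t len) {
    // Merge forward: does the released block end where the next one begins?
    FreeMap::iterator next = freeBlocks.lower_bound(start);
    if (next != freeBlocks.end() && start + len == next->first) {
        len += next->second;
        freeBlocks.erase(next);
    }
    // Merge backward: does the previous block end where this one begins?
    FreeMap::iterator pos = freeBlocks.lower_bound(start);
    if (pos != freeBlocks.begin()) {
        FreeMap::iterator prev = pos;
        --prev;
        if (prev->first + prev->second == start) {
            prev->second += len;          // absorbed into the predecessor
            return;
        }
    }
    freeBlocks[start] = len;              // no adjacent neighbour; record as-is
}
```

For example, releasing a 100-byte block at address 1000 and then a 50-byte block at 1100 yields one 150-byte free block, but a block at 1150 + any header gap stays separate — the adjacency test fails, just as described for Linux malloc in the thread.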
