On May 4, 2007, at 9:48 PM, [EMAIL PROTECTED] wrote:
> I'd like to set my program up as a cron job, but honestly I can't put
> it into production until I nail this issue down. We can't run more
> than about 250-300 jobs through the system without being forced to
> reboot, since we're leaking memory like a sieve.
>
> Has anyone else run into these types of issues?

Honestly, not really, as far as memory not being freed after program exit goes. I'd take a hard look at what top has to say. The only thing I've ever seen that doesn't free memory after program exit is playing around with shared memory, which some databases do, but I'm pretty sure sqlite3 doesn't. Assuming you're using a modern Unix system, you should be able to track down every allocated byte of RAM even if that's the case.

As far as memory size within the process itself, I'd first just throw away the session altogether when it gets too full, instead of trying to prune individual elements via expunge(), and after that take a look at what gc.get_objects() has to say. I'm also assuming you're not creating new mappers on the fly, since those are cached.

Also, as you've noticed, the ORM is not optimized for massive size: it instruments loaded objects and otherwise takes up a lot of time and space supporting its generally automated nature. If you use straight SQL construction and result sets, this problem won't exist.
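A minimal sketch of the "throw away the session" idea: replace the whole Session (and its identity map) every N jobs rather than calling expunge() per object. The `Job` class, `process_jobs` function, and `batch_size` are illustrative names, not from the original post; this assumes SQLAlchemy 1.4+ (`Session.get`, `declarative_base` from `sqlalchemy.orm`).

```python
# Sketch, not the poster's actual code: discard the Session wholesale
# each batch instead of expunge()-ing individual objects.
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Job(Base):  # hypothetical mapped class for illustration
    __tablename__ = "jobs"
    id = Column(Integer, primary_key=True)
    payload = Column(String)

engine = create_engine("sqlite://")  # in-memory SQLite for the sketch
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

def process_jobs(job_ids, batch_size=100):
    """Process jobs, replacing the Session every batch_size items."""
    session = Session()
    processed = 0
    try:
        for i, job_id in enumerate(job_ids, start=1):
            job = session.get(Job, job_id)
            if job is not None:
                processed += 1  # ... real per-job work would go here ...
            if i % batch_size == 0:
                # Throw the whole session (and everything it tracks)
                # away rather than pruning element by element.
                session.close()
                session = Session()
    finally:
        session.close()
    return processed
```

Closing and recreating the session is cheap; it's the accumulated identity map that grows, so dropping it wholesale keeps the process size bounded.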
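As for seeing what gc.get_objects() has to say, a quick way to read its output is to tally live objects by type between batches; a growing count for one type points at the leak. The helper name `top_object_types` is my own, using only the standard library.

```python
# Sketch: summarize gc.get_objects() by type to spot what's accumulating.
import gc
from collections import Counter

def top_object_types(n=10):
    """Return the n most common live object types and their counts."""
    counts = Counter(type(obj).__name__ for obj in gc.get_objects())
    return counts.most_common(n)
```

Call it once before and once after a batch of jobs and diff the counts; the types whose counts keep climbing are the ones the session (or your own code) is still holding references to.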
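The straight SQL construction/result set route looks like this: define a Table and run insert/select expressions directly on a connection, so rows come back as plain tuples with no ORM instrumentation attached. Table and column names here are invented for the sketch; assumes SQLAlchemy 1.4+ (2.0-style `insert()`/`select()`).

```python
# Sketch: SQL-expression-level access, no ORM objects to instrument.
from sqlalchemy import (create_engine, MetaData, Table, Column,
                        Integer, String, insert, select)

engine = create_engine("sqlite://")  # in-memory SQLite for the sketch
metadata = MetaData()
jobs = Table("jobs", metadata,
             Column("id", Integer, primary_key=True),
             Column("payload", String))
metadata.create_all(engine)

def add_jobs(payloads):
    """Bulk-insert rows via the SQL expression language."""
    with engine.begin() as conn:  # transaction commits on exit
        conn.execute(insert(jobs), [{"payload": p} for p in payloads])

def fetch_all_jobs():
    """Return plain result rows, not mapped/instrumented objects."""
    with engine.connect() as conn:
        return conn.execute(select(jobs).order_by(jobs.c.id)).fetchall()
```

Result rows are lightweight and carry none of the unit-of-work bookkeeping, which is why this path sidesteps the growth the ORM session exhibits.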