The app was written with this in mind, and the effectiveness of the deleteLater() connections was verified by printing debug messages in constructors and destructors. Something may still have been missed, though.
Fragmentation is a dreaded issue, but I don't think it is the culprit here: the computation jobs build heavy trees upon initialization, which are prone to fragmenting, but the computing part does not allocate much memory and only reads the trees, and the whole tree is destroyed at the end of each job. So I would not expect 8 GB of fragmented heap after only 1000-2000 jobs. I should be using a custom allocator to build the trees in, though, to avoid this issue altogether.

2015-10-30 16:10 GMT+01:00 Jason H <jh...@gmx.com>:

> I have seen this before in a similar type of app. Don't forget to use
> QObject::deleteLater() for objects that are created dynamically and go
> unneeded after an event (like on socket disconnect).
>
> Also, while rare but very real for long-running C/C++ processes of this
> nature, is a memory fragmentation issue (Java and .NET don't have this, as
> they compact their heaps) where you may end up with objects occupying
> partial pages, which become sparsely populated over time and expand to
> consume all available memory pages. The only workaround in C/C++ is to
> use dynamic memory only at start-up, and allocate a fixed number of
> objects which are re-used. This is what NASA does. You could also take a
> hybrid approach and allocate blocks of objects at a time; this won't
> prevent the problem, only delay the inevitable. However, if your process
> can eventually free all objects on a page, then you won't ever run into
> it. It's only if the lifetimes of the objects vary significantly that
> this happens.
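For what it's worth, the per-job custom-allocator idea can be sketched as a simple bump-pointer arena: every tree node for one job is carved out of a few large contiguous blocks, and the whole arena is released in one shot when the job finishes, so the job's short-lived allocations never interleave with long-lived objects on the general heap. This is only an illustrative sketch, not code from the app; the `Arena` and `Node` types are hypothetical.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <memory>
#include <new>
#include <utility>
#include <vector>

// Minimal bump-pointer arena: allocation is O(1), and all memory is
// released at once when the arena is destroyed at the end of a job.
// Note: destructors of arena-allocated objects are NOT run, so this
// suits trivially-destructible node types.
class Arena {
public:
    explicit Arena(std::size_t blockSize = 1 << 20) : blockSize_(blockSize) {}

    void* allocate(std::size_t size,
                   std::size_t align = alignof(std::max_align_t)) {
        // Round the bump pointer up to the requested alignment
        // (align is a power of two for any alignof() value).
        std::size_t offset = (used_ + align - 1) & ~(align - 1);
        if (blocks_.empty() || offset + size > blockSize_) {
            // Current block exhausted: grab a fresh one.
            blocks_.emplace_back(new char[std::max(blockSize_, size)]);
            offset = 0;
        }
        used_ = offset + size;
        return blocks_.back().get() + offset;
    }

    // Construct a T inside the arena; no per-object delete is needed.
    template <typename T, typename... Args>
    T* create(Args&&... args) {
        return new (allocate(sizeof(T), alignof(T)))
            T(std::forward<Args>(args)...);
    }

private:
    std::size_t blockSize_;
    std::size_t used_ = 0;
    std::vector<std::unique_ptr<char[]>> blocks_;
};

// Hypothetical tree node of the kind the jobs might build.
struct Node {
    int value;
    Node* left = nullptr;
    Node* right = nullptr;
    explicit Node(int v) : value(v) {}
};
```

A job would then create one `Arena`, build its tree with `arena.create<Node>(...)`, and simply let the arena go out of scope at the end: thousands of node frees collapse into a handful of block frees, and the general heap stays compact. C++17's `std::pmr::monotonic_buffer_resource` provides essentially this off the shelf.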
_______________________________________________
Interest mailing list
Interest@qt-project.org
http://lists.qt-project.org/mailman/listinfo/interest