Altman, Drora wrote:
Hi,
I tried to do what was suggested to me: I adopted the document and released the
parser, but it didn't help much in reducing the memory consumption.
So I made the following attempts, which didn't succeed either. Could you please
help me figure out what I'm doing wrong and how to do it correctly?
1. I tried to import the root node and release the document.
After the call to DOMDocumentImpl::release() some memory is released,
but:
a. not all of it (the memory in use is still higher than it was before we
started using Xerces)
This could be memory being held by the C++ run-time heap, which will be
reused for later allocations. You might check your platform's heap diagnostics.
b. after the release, I couldn't use the root node anymore, since it is
corrupted once the document has been released.
Yes, because releasing the document destroys everything. What I
suggested you do to compact a document is to clone the entire document:
newdoc = static_cast<DOMDocument*>(doc->cloneNode(true));
doc->release();
doc = newdoc;
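In context, and untested, something like this (the helper name is made up; it
assumes XMLPlatformUtils::Initialize() has already been called):

#include <xercesc/util/PlatformUtils.hpp>
#include <xercesc/parsers/XercesDOMParser.hpp>
#include <xercesc/dom/DOM.hpp>

using namespace xercesc;

// Parse a file, take ownership of the document away from the parser,
// then compact it by cloning into a fresh document.
DOMDocument* parseAndCompact(const char* path)
{
    DOMDocument* doc = 0;
    {
        XercesDOMParser parser;
        parser.parse(path);
        doc = parser.adoptDocument();  // we now own the document
    }                                  // parser and its buffers are destroyed here

    // Cloning copies the tree into a new, tightly packed document;
    // releasing the original frees its (possibly fragmented) heap.
    DOMDocument* compacted = static_cast<DOMDocument*>(doc->cloneNode(true));
    doc->release();
    return compacted;  // the caller must call release() on this when done
}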
2. I saw that there is an adoptNode() function, so I tried to use it: adopt the
root node and then release the document.
However, when trying this I got an exception, so I looked at the source
code and found that in xerces 2.8.0 adoptNode() has the following
implementation:
DOMNode* DOMDocumentImpl::adoptNode(DOMNode*) {
    throw DOMException(DOMException::NOT_SUPPORTED_ERR, 0, getMemoryManager());
    return 0;
}
I looked at the source code of Xerces 3.0.1, which has a different
implementation of that function; however, it didn't seem to do what I expected
it to do (the bottom line is that using it didn't help either).
adoptNode() is used to bring a node in from another document, which
Xerces-C doesn't support.
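For completeness, the copy-based counterpart that is supported is importNode(),
which copies a node from another document into the document it is called on. A
rough, untested sketch (the function and variable names are illustrative):

#include <xercesc/dom/DOM.hpp>

using namespace xercesc;

// Copies 'source' (a node from some other document) deeply into
// 'targetDoc'. Note this allocates new memory in the target document
// and moves nothing, so it does not shrink the original document.
void copyIntoDocument(DOMDocument* targetDoc, DOMNode* source)
{
    DOMNode* copy = targetDoc->importNode(source, true /* deep */);
    targetDoc->getDocumentElement()->appendChild(copy);
}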
- In the end, I don't know how I can release the memory allocated by Xerces...
You usually cannot control when the C++ run-time heap relinquishes memory
to the operating system. You might want to investigate what sort of heap
control and debugging functions exist on your platform to see if there
are diagnostics that will help you figure out memory usage.
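As one concrete, platform-specific example (assuming a glibc-based Linux
system; other platforms have different facilities, and this is only a hint to
the allocator):

#include <malloc.h>  // glibc-specific, not portable

// After releasing the DOM objects, ask the allocator to return unused
// pages to the operating system and print rough heap statistics.
void trimHeap()
{
    malloc_trim(0);   // hint only; may or may not shrink the process size
    malloc_stats();   // writes allocator statistics to stderr
}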
Another issue I noticed is that we use the following function quite a lot:
DOMNodeList *DOMElementImpl::getElementsByTagName(const XMLCh *tagname) const
I wondered when (and by whom) the returned DOMNodeList* should be deleted. I
saw an answer regarding this issue at the following link:
http://www.mail-archive.com/[email protected]/msg02511.html
but I didn't understand what "Memory for any returned object ... are owned by
implementation" means - whose implementation?
I think that releasing these lists after we have finished using them might help
us as well (I saw that the fNodeListPool is cleaned up when the document is
deleted, but it is not explicitly deleted anywhere else that I could find).
These are kept in a pool within the document, so you cannot release
them. If you are calling this often with different tag names, that could
explain some increased memory usage. You might want to file an
enhancement request that allows for clearing this pool. If you use the
approach I outlined of cloning the entire document, that will take care
of cleaning up this pool.
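To make the ownership rule concrete, here is a small, untested sketch (the
function and parameter names are made up): the list returned by
getElementsByTagName() is owned by the document, so you never release or delete
it yourself; and if you only need the direct children with a given name, you
can walk the child list yourself and avoid creating pooled node lists at all:

#include <xercesc/dom/DOM.hpp>
#include <xercesc/util/XMLString.hpp>

using namespace xercesc;

void processMatchingChildren(DOMElement* parent, const XMLCh* tagName)
{
    // The pooled list from getElementsByTagName() is owned by the
    // document; callers never release() or delete it.
    //
    // This loop bypasses the pool by walking the direct children
    // itself (unlike getElementsByTagName(), it does not visit
    // deeper descendants).
    for (DOMNode* child = parent->getFirstChild();
         child != 0;
         child = child->getNextSibling())
    {
        if (child->getNodeType() == DOMNode::ELEMENT_NODE &&
            XMLString::equals(child->getNodeName(), tagName))
        {
            // process static_cast<DOMElement*>(child)
        }
    }
}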
Dave