I attack this problem with incremental parsing, along the lines of what the SAX examples do.
You are going to get a very large tree if you parse first, then process. It seems to me that that may be what is taking so long. YMMV :)

Bert Williams

-----Original Message-----
From: David Hoffer [mailto:[EMAIL PROTECTED]]
Sent: Sunday, September 14, 2003 6:06 PM
To: [EMAIL PROTECTED]
Subject: DOM performance/coding issue

I have a question about how to efficiently process a large DOM document. For example, I have a lot of 'PatchData' elements, and I am looking for the set of these whose child element 'CbName' matches a certain string. The following code takes 30 seconds just to loop through the DOMNodeList before it finds the first matching child. How can I make this faster?

DOMNodeList* pDOMNodeList =
    m_pDocument->getDocumentElement()->getElementsByTagNameNS(NULL, L"PatchData");

for (int i = 0; i < pDOMNodeList->getLength(); i++)
{
    DOMNode* pDOMNode = pDOMNodeList->item(i);
    DOMNodeList* pChildList =
        ((const DOMElement*)pDOMNode)->getElementsByTagNameNS(NULL, L"CbName");
    pDOMNode = pChildList->item(0);
    DOMNode* pChildNode = pDOMNode->getFirstChild();
    std::wstring wstrTagValue = pChildNode->getNodeValue();
    if (wcscmp(wstrTagValue.c_str(), m_wstrColorbarName.c_str()) == 0)
    {
        // This is one of the ones I want...I spend very little time here.
    }
    // I spend 30 seconds here, in a loop...
}

My XML structure is like...

<PatchData>
    <CbName>abc</CbName>
    <Location>0</Location>
    <Type>0</Type>
    <Width>5.15</Width>
    <Enabled>true</Enabled>
</PatchData>
<PatchData>
    <CbName>abc</CbName>
    <Location>1</Location>
    <Type>1</Type>
    <Width>5.25</Width>
    <Enabled>false</Enabled>
</PatchData>

-dh
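
A minimal sketch of the SAX2 approach suggested above, applied to the PatchData/CbName case: it streams the document and never builds a tree. It assumes XMLCh is wchar_t (as the posted DOM code already does, i.e. a Windows build of Xerces-C) and uses the Xerces-C 2.x virtual signatures (3.x changed characters() to take XMLSize_t); the class name, target string "abc", and file name "patches.xml" are only placeholders.

#include <xercesc/util/PlatformUtils.hpp>
#include <xercesc/util/XMLString.hpp>
#include <xercesc/sax2/SAX2XMLReader.hpp>
#include <xercesc/sax2/XMLReaderFactory.hpp>
#include <xercesc/sax2/DefaultHandler.hpp>
#include <xercesc/sax2/Attributes.hpp>
#include <string>

XERCES_CPP_NAMESPACE_USE

// Watches for <CbName> text as the document streams past; no DOM tree is built.
class PatchDataHandler : public DefaultHandler
{
public:
    PatchDataHandler(const std::wstring& target)
        : m_target(target), m_inCbName(false) {}

    void startElement(const XMLCh* const, const XMLCh* const localname,
                      const XMLCh* const, const Attributes&)
    {
        // Comparing with a wide literal assumes XMLCh is wchar_t, as in the original code.
        m_inCbName = (XMLString::compareString(localname, L"CbName") == 0);
        if (m_inCbName)
            m_text.clear();
    }

    void characters(const XMLCh* const chars, const unsigned int length)
    {
        // The parser may deliver the text in several chunks, so accumulate.
        if (m_inCbName)
            m_text.append(chars, length);
    }

    void endElement(const XMLCh* const, const XMLCh* const localname,
                    const XMLCh* const)
    {
        if (XMLString::compareString(localname, L"CbName") == 0 && m_text == m_target)
        {
            // This <PatchData> is one of the ones you want; handle it here
            // (or set a flag and pick up Location/Type/Width from later callbacks).
        }
        m_inCbName = false;
    }

private:
    std::wstring m_target;   // the colorbar name being searched for
    std::wstring m_text;     // accumulated <CbName> text
    bool         m_inCbName;
};

int main()
{
    XMLPlatformUtils::Initialize();
    {
        SAX2XMLReader* parser = XMLReaderFactory::createXMLReader();
        PatchDataHandler handler(L"abc");      // placeholder target name
        parser->setContentHandler(&handler);
        parser->setErrorHandler(&handler);
        parser->parse("patches.xml");          // placeholder file name
        delete parser;
    }
    XMLPlatformUtils::Terminate();
    return 0;
}

Since matching elements are handled as they stream by, memory use stays flat no matter how many <PatchData> elements the file contains, and there is no per-element getElementsByTagNameNS call to pay for.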