I am attempting to load a large XML file in order to process and flatten it
in memory.  The document itself is around 80 MB, and the code I have written
uses a FLWOR expression to break it into about 30,000 new documents, which
are then loaded into a collection.  Before my FLWOR finishes I get the
following error:

XDMP-EXPNTREECACHEFULL: Expanded tree cache full on host temp.marklogic
 [1.0-ml]
I have set the database's in-memory list size and in-memory tree size to
256 MB.  I have read in other threads here that this error is caused by
large numbers of documents remaining in scope during the execution of a
single query.  However, I am not sure how the newly created documents would
remain in scope.  Can anyone advise on best practices for performing this
sort of operation?
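In case it helps frame the question: one workaround I have seen suggested
(my assumption, not something confirmed for this exact case) is to keep each
transaction small by spawning one insert per item onto the task server, so
that no single query holds all 30k result documents in expanded-tree memory
at once.  A rough sketch, assuming a hypothetical module at /insert-item.xqy
and the alternating QName/value style of passing external variables:

(: caller: spawn one task per item; each insert runs in its own transaction :)
for $item in fn:doc("items.xml")//item
return
  xdmp:spawn("/insert-item.xqy",
             (xs:QName("item"), $item))

(: /insert-item.xqy -- receives one item and inserts a single document :)
declare variable $item as element(item) external;
xdmp:document-insert(
  fn:concat("/item/", xs:string($item/@owner),
            xs:string(fn:current-dateTime())),
  $item)

Whether that is actually the recommended practice here is part of what I am
asking.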



The code looks like this:

xdmp:load("/tmp/items.xml", "items.xml");

for $item in fn:doc("items.xml")//item
let $newitem :=
  element item {
    attribute owner { $item/@owner },
    element timestamp { fn:current-dateTime() },
    (: copy the contents of each price into a fresh price element,
       rather than nesting a price inside a price :)
    for $price in $item/price
    return <price>{ $price/node() }</price>
  }
return
  xdmp:document-insert(
    fn:concat("/item/",
              xs:string($newitem/@owner),
              xs:string(fn:current-dateTime())),
    $newitem)
_______________________________________________
General mailing list
[email protected]
http://xqzone.com/mailman/listinfo/general
