I agree it's hard to say without seeing the code. When I first switched to ML 9 I didn't realize that ErrorLog.txt is now broken up by app server, like the access logs, and that the task server has its own log as well. So depending on the error and which app server the thread is executing on, it could be in a different log file.
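In ML 9 the errors for an app server on, say, port 8000 end up in 8000_ErrorLog.txt, spawned-task errors go to TaskServer_ErrorLog.txt, and only node-level messages stay in ErrorLog.txt. Here's a quick way to scan all of them from Query Console as admin; a minimal sketch, assuming a default Linux install with the logs under /var/opt/MarkLogic/Logs (adjust the path, port, and file names for your environment):

    xquery version "1.0-ml";

    (: scan each of the ML 9 split error logs for error lines :)
    for $log in ("ErrorLog.txt",            (: node-level messages       :)
                 "TaskServer_ErrorLog.txt", (: spawned / scheduled tasks :)
                 "8000_ErrorLog.txt")       (: per-app-server errors     :)
    let $path := "/var/opt/MarkLogic/Logs/" || $log
    let $text := try { xdmp:filesystem-file($path) } catch ($e) { () }
    for $line in fn:tokenize($text, "\n")[fn:contains(., "Error")]
    return $log || ": " || $line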
Other possible things you can check:

1. Are you hitting the task server max queue size? You should see the error (XDMP-MAXTASKS) in the logs.
2. Is it possible your code doesn't create unique URIs for each document? (See the sketch after the quoted messages below.)
3. If you are spawning in update mode you may need to manually call xdmp:commit(). (Also covered in the sketch below.)
4. Possible permission issue? See if an admin user can see all the expected documents.

-Will

On Wed, Nov 8, 2017 at 5:09 PM, Markus Flatscher <[email protected]> wrote:

> It's hard to say without more details about your data and code, but as a
> first guess, did you consider uncatchable exceptions? They can occur at
> the evaluator layer due to query timeouts (e.g., SVC-CANCELED); during
> the commit phase at the data layer, such as when you try to commit a
> document with index configuration violations (e.g., XDMP-RANGEINDEX);
> and for a handful of other reasons. More details:
> https://help.marklogic.com/knowledgebase/article/View/20/16/uncatchable-exceptions
>
> However, with appropriate log levels set, your ErrorLog should still show
> such errors even if the catch block itself can't see and log them.
>
> --
> Markus Flatscher
> Senior Consultant, MarkLogic | Avalon Consulting, LLC
> [email protected]
>
> On Wed, Nov 8, 2017 at 5:25 PM, Eliot Kimber <[email protected]> wrote:
>
>> Using ML 9:
>>
>> I have a process that quickly creates a large number of small documents,
>> one for each item in a set of input items.
>>
>> My code is basically:
>>
>> 1. Log that I’m about to act on the input item
>> 2. Act on the input item (send the input item to a remote HTTP endpoint)
>> 3. Create a new doc reflecting the input item I just acted on
>>
>> This code is within a try/catch and I log the exception, so I should
>> know if there are any exceptions during this process by examining the
>> log.
>>
>> I’m processing about 500K input items, with the processing spread over
>> the 16 threads of my task server, so there are 16 tasks quickly writing
>> these docs concurrently.
>>
>> I know the exact count of the input items and I get that count in the
>> log, so I know that I’m actually processing all the items I should be.
>>
>> However, if I subsequently count the documents created in step 3, I’m
>> short by about 1,500, meaning that not all the docs got created. That
>> should not be possible unless there was an exception between the log
>> message and the document-insert() call, but I’m not finding any
>> exceptions or other errors reported in the log.
>>
>> My question: is there anything that would cause docs silently not to
>> get created under this kind of heavy load? I would hope not, but I just
>> wanted to make sure.
>>
>> I’m assuming this issue is my bug somewhere, but the code is pretty
>> simple and I’m not seeing any obvious way the documents could fail to
>> get created without a corresponding exception report.
>>
>> Thanks,
>>
>> Eliot
>> --
>> Eliot Kimber
>> http://contrext.com
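Following up on #2 and #3 above, here's a rough sketch of the kind of thing I mean, assuming the work is queued with xdmp:spawn-function ($doc and the "/items/" URI scheme are placeholders for your own):

    xquery version "1.0-ml";

    let $doc := <item>...</item>  (: placeholder for your generated doc :)
    (: A UUID-based URI can't collide across the 16 task-server threads.
       If two tasks derive the same URI from the item data, the second
       insert silently replaces the first -- no exception, and the final
       document count comes up short. :)
    let $uri := "/items/" || sem:uuid-string() || ".xml"
    return
      xdmp:spawn-function(
        function () {
          xdmp:document-insert($uri, $doc),
          (: in an explicit-commit update transaction the insert is
             rolled back -- again with nothing for a catch block to
             see -- unless you commit it yourself :)
          xdmp:commit()
        },
        <options xmlns="xdmp:eval">
          <update>true</update>
          <commit>explicit</commit>
        </options>
      )

For #4, running xdmp:document-get-permissions() as admin against a URI that should exist will tell you quickly whether the documents are there but simply unreadable by the user doing the counting.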
