Bruno Dumon wrote:

What do you refer to with 'pipelines' here? The other map:parts?
Transformers after the map:aggregate?

Yes, the other map:parts. The src attribute points to another pipeline with cocoon://<pipeline>. Please see my first post as it has an example at the bottom; I re-posted the same thread omitting the example for brevity.

The cocoon:// request will continue processing as if no error had occurred, but... you will still see the Cocoon error page as if processing had actually stopped!

I have a pipeline which aggregates several other pipelines and then writes the generated content to disk. If there is an exception in one of the aggregate parts, I need the sitemap processing to stop and handle the exception appropriately. Instead, the generated content, which has errored and is therefore invalid, is still written to disk. If I remove the aggregate pipeline and just use a regular generator, the handle-errors is respected correctly and processing stops before writing the content.
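
Roughly, the shape of the sitemap is as follows (a minimal sketch with
placeholder pipeline and stylesheet names, not the exact example from my
first post):

  <map:match pattern="write-report">
    <map:aggregate element="report">
      <map:part src="cocoon://part-one"/>
      <map:part src="cocoon://part-two"/>
    </map:aggregate>
    <map:transform src="report.xsl"/>
    <!-- the write-source transformer acts on the <source:write>
         elements that the stylesheet emits -->
    <map:transform type="write-source"/>
    <map:serialize type="xml"/>
  </map:match>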

Am I misusing the <map:aggregate> element? Or is this the expected behaviour?


I have no experience with map:aggregate, but from a quick look at the
code, it doesn't catch any exceptions. What it does always do (also when
an exception occurs) is closing the root tag and sending the end
document event.

That would explain what I'm seeing: the error output of a failing part is concatenated into the aggregate as if things went fine, instead of the exception being caught and handled by the map:handle-errors defined in the erroring pipeline.

BTW: can you point out the file and line where you found this? I've been digging, but with no luck yet.

Now I'm just going to guess wildly (since you didn't mention it), but if
after the map:generate you have an XSLT transformer and you write the
content using the source writing transformer, I can imagine the file
indeed still gets written. This is because Xalan can cope with the
invalid input, and the endDocument event will cause it to do the
transform, which in turn causes the source writing transformer to do its
job.
Exactly correct. As mentioned above, the full example is at the bottom of my first post.

While the close-root-tag-and-send-end-document-event behaviour of the
aggregate is debatable, it is the nature of a SAX pipeline that
everything in the pipeline starts executing together. Therefore, things
which have side-effects and for which error recovery is important should
not be done in a streaming pipeline (which is why the source writing
transformer is considered an evil one -- don't use it).

I've also tried some other experiments with aggregate map:parts calling pipelines and setting request attributes. It seems as if a new Request object is created for each map:aggregate part: a request attribute set in a pipeline called from a map:part is not visible to the other map:parts or to the original pipeline that contains the map:aggregate. In my mind there should only ever be one Request object. I haven't been able to verify this in the source code yet, but it seems to be the case. Another undesirable side-effect of map:aggregate.

The alternative approach is to use flowscript and its processPipelineTo
function, where you can use a try-catch block and remove the file (if
needed) when an error occurs.
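For example, something along these lines (an untested sketch; the
pipeline URI, file path, and page name are placeholders):

  function writeReport() {
      var file = new Packages.java.io.File("output/report.xml");
      var out = new Packages.java.io.FileOutputStream(file);
      try {
          // run the aggregating pipeline and stream its output to the file
          cocoon.processPipelineTo("aggregate-report", {}, out);
          out.close();
      } catch (e) {
          out.close();
          // remove the half-written file; 'delete' is a reserved word
          // in JavaScript, hence the bracket syntax
          file["delete"]();
          throw e;
      }
      cocoon.sendPage("report-written", {});
  }
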
I've been trying my hardest to stay away from flowscript :-(

-Justin


