On Thu, 2006-03-09 at 14:37 -0500, Justin Hannus wrote:
> Bruno Dumon wrote:
> 
> >what do you refer to with 'pipelines' here? The other map:part's?
> >Transformers after the map:aggregate?
> >
> >  
> >
> Yes, the other map:parts.

Then that's impossible. It will stop at the first part that throws an
exception.
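For context, a minimal sitemap sketch of the kind of setup under discussion (all pipeline names and URIs here are invented for illustration):

```xml
<map:pipeline>
  <map:match pattern="combined">
    <map:aggregate element="page">
      <!-- If part-a throws, execution stops here: part-b never runs -->
      <map:part src="cocoon://part-a"/>
      <map:part src="cocoon://part-b"/>
    </map:aggregate>
    <map:serialize type="xml"/>
  </map:match>
  <map:handle-errors>
    <map:transform src="error.xsl"/>
    <map:serialize/>
  </map:handle-errors>
</map:pipeline>
```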

>  The src attribute points to another 
> pipeline with cocoon://<pipeline>. Please see my first post as it has an 
> example at the bottom. I re-posted the same thread omitting the example 
> for brevity.
> 
> >>cocoon:// request will continue processing as if no error has occurred 
> >>but..... you will still see the cocoon error page as if processing 
> >>actually stopped!
> >>
> >>I have a pipeline which aggregates several other pipelines and then 
> >>writes the generated content to disk. If there is an exception in one of 
> >>the aggregate parts I need the sitemap processing to stop and handle the 
> >>exception appropriately. Instead the generated content, which has 
> >>errored and is therefore invalid, is still written to disk. If I remove the 
> >>aggregate pipeline and just use a regular generator the handle-errors is 
> >>respected correctly and processing stops before writing the content.
> >>
> >>Am I misusing the <map:aggregate> elements? Or is this the expected 
> >>behavior?
> >>
> >>    
> >>
> >
> >I have no experience with map:aggregate, but from a quick look at the
> >code, it doesn't catch any exceptions. What it does always do (also when
> >an exception occurs) is closing the root tag and sending the end
> >document event.
> >
> >  
> >
> That would explain why exceptions are caught and handled after the 
> aggregate parts are executed and concatenated, as if things went fine, 
> instead of being caught and handled by the map:handle-errors defined in 
> the erroring pipeline.

As I wrote, the exceptions are not caught, so it is impossible that
map:parts that follow the one which gives an error are executed.

> 
> BTW: can you point out the file and line you found this on. I've been 
> digging but with no luck yet.

It really doesn't matter, as there's nothing to fix there. While I do
think that sending additional SAX-events after an exception occurred is
wrong, changing this will not cause a reliable alternative behaviour, as
the resulting behaviour would just be a coincidence of implementation
details of other transformers. Don't forget the SAX-pipeline is
streaming, you can't pull back events which are already forwarded to the
next transformer.

This being said, ContentAggregator.generate() is where it happens.

> 
> >Now I'm just going to guess wildly (since you didn't mention), but if
> >after the map:generate you have an XSLT transformer and you write the
> >content using the source writing transformer, I can image the file
> >indeed still gets written. This is because on the one hand Xalan can
> >cope with the invalid input, and the endDocument event will cause it to
> >do the transform and thus cause the source writing transformer to do its
> >job.
> >  
> >
> Exactly correct. Please see my first post as it has an example at the 
> bottom. I re-posted the same thread omitting the example for brevity.
> 
> >While the close-root-tag-and-send-end-document-event behaviour of the
> >aggregate is debatable, it is the nature of a SAX-pipeline that
> >everything in the pipeline starts executing together. Therefore things
> >which have side-effects and for which error recovery is important should
> >not be done in a streaming pipeline (therefore the source writing
> >transformer is considered an evil one -- don't use it).
> >
> >  
> >
> I've also tried some other experiments with aggregate map:parts calling 
> pipelines and setting request attributes. It seems as if a new Request 
> object is created for each map:aggregate part.

While Cocoon does create a request object wrapper for each internal
request ("cocoon:" request), most things, including setting request
attributes, are delegated to the original request object (at least, I'd
be very surprised if it's different).

>  Therefore setting a 
> request attribute in a pipeline when called from a map:part is not 
> visible to other map:parts or the original pipeline

Careful here: if you set a request attribute in say, a generator of a
pipeline called from a map:part, and read it out in (e.g.) an action in
the original pipeline, then that indeed won't work.

This has to do with how the sitemap works; the following resource might
help you understand it:
http://cocoon.zones.apache.org/daisy/documentation/863/sitemap/853.html
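To illustrate the timing issue (all names here are hypothetical): the whole pipeline is assembled during sitemap setup, and only afterwards does SAX streaming start, so sitemap components like actions have already executed before any generator code runs:

```xml
<map:match pattern="main">
  <!-- Executes during sitemap setup, before any SAX events flow -->
  <map:act type="read-attribute"/>
  <map:aggregate element="root">
    <!-- The generator behind this part sets the request attribute,
         but only later, once streaming has started -->
    <map:part src="cocoon://sub"/>
  </map:aggregate>
  <map:serialize/>
</map:match>
```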

>  that contains the 
> map:aggregate. In my mind there should always be only one Request 
> object. I haven't been able to verify this in the source code yet, but it 
> seems to be the case. Another undesirable side-effect of the map:aggregate.
> 
> >The alternative approach is to use flowscript and its processPipelineTo
> >function, where you can use a try-catch block and remove the file (if
> >needed) when an error occurs.
> >  
> >
> I've been trying my hardest to stay away from flowscript :-(

Any particular reason? You don't need to use any continuations-related
features.
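For reference, a minimal flowscript sketch of that approach (the pipeline name, output path, and success page are made up for illustration):

```javascript
// Hypothetical flowscript: run the aggregating pipeline, write its
// output to a file, and clean up if anything goes wrong.
function saveAggregate() {
    var file = new Packages.java.io.File("output/aggregated.xml"); // assumed path
    var out = new Packages.java.io.FileOutputStream(file);
    try {
        // Streams the result of the "combined" pipeline into the file;
        // an exception in any aggregate part propagates to the catch below.
        cocoon.processPipelineTo("combined", {}, out);
    } catch (e) {
        out.close();
        file["delete"](); // remove the partially written file
        throw e;
    }
    out.close();
    cocoon.sendPage("saved-ok", {});
}
```

(`delete` is a reserved word in JavaScript, hence the bracket notation on the Java File object.)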

-- 
Bruno Dumon                             http://outerthought.org/
Outerthought - Open Source, Java & XML Competence Support Center
[EMAIL PROTECTED]                          [EMAIL PROTECTED]


---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
