Torsten Curdt wrote:

Sorry to jump in late.


Thanks for jumping in at all :)

There's a technical difficulty, however, as internal requests are handled differently from external ones when it comes to handling errors occurring during pipeline execution (not during pipeline building):
- pipelines for external requests are executed as soon as the pipeline is complete, i.e. in the map:serialize statement, hence under the control of the treeprocessor
- pipelines for internal requests are executed when getInputStream() or toSAX() is called on the "cocoon:" source, outside the control of the treeprocessor (a sketch follows below).
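To make the timing difference concrete, here is a minimal, self-contained Java sketch. The names (Pipeline, LazyPipelineSource, LazySourceDemo) are made up for illustration and are not Cocoon's actual classes; the only point is that nothing runs until getInputStream() is called.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

// Hypothetical stand-in for a fully built processing pipeline.
interface Pipeline {
    void process(ByteArrayOutputStream out) throws Exception;
}

// Hypothetical stand-in for a "cocoon:" SitemapSource: the pipeline is
// already built, but nothing runs until the content is actually requested.
class LazyPipelineSource {
    private final Pipeline pipeline;

    LazyPipelineSource(Pipeline pipeline) {
        this.pipeline = pipeline;
    }

    // Execution happens here, long after the treeprocessor has finished
    // evaluating the sitemap statements, and outside its error handling.
    InputStream getInputStream() throws Exception {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        pipeline.process(buffer);
        return new ByteArrayInputStream(buffer.toByteArray());
    }
}

public class LazySourceDemo {
    public static void main(String[] args) throws Exception {
        // "Building": the pipeline object exists, but has not run yet.
        Pipeline built = out -> out.write("<page/>".getBytes("UTF-8"));
        LazyPipelineSource source = new LazyPipelineSource(built);

        // "Processing": only this call triggers pipeline execution.
        InputStream in = source.getInputStream();
        System.out.println(new String(in.readAllBytes(), "UTF-8"));
    }
}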


Ok, but isn't the "cocoon:" source going back to the treeprocessor after
all? Or does it just set up the pipelines? I thought every request is
going through the TP.

...or who is passing the request to the pipeline(s)?


You have to consider the two distinct phases that occur when a request is handled:
- building the pipeline (executing sitemap statements): matchers, actions and flowscript are called; generator, transformers and serializer are added to the pipeline.
- processing the pipeline: the generator's generate() method is called, which starts the processing chain (see the sketch below).
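A small, purely illustrative Java sketch of the two phases (TwoPhaseDemo, SimplePipeline, Generator and Transformer are invented names, not Cocoon's API): building only collects components, processing is what actually runs them.

import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified model of the two phases; these are not
// Cocoon's real classes.
public class TwoPhaseDemo {

    interface Generator { String generate(); }
    interface Transformer { String transform(String events); }

    static class SimplePipeline {
        Generator generator;
        final List<Transformer> transformers = new ArrayList<>();

        // Phase 2, processing: the generator starts the chain and the
        // events flow through the transformers (a real serializer would
        // then write the result out).
        String process() {
            String events = generator.generate();
            for (Transformer t : transformers) {
                events = t.transform(events);
            }
            return events;
        }
    }

    public static void main(String[] args) {
        // Phase 1, building: sitemap statements are evaluated and components
        // are merely added to the pipeline; nothing runs yet.
        SimplePipeline pipeline = new SimplePipeline();
        pipeline.generator = () -> "<doc/>";
        pipeline.transformers.add(events -> events.replace("doc", "page"));

        // Phase 2, processing.
        System.out.println(pipeline.process()); // prints <page/>
    }
}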


Building the pipeline is done by the TP. Execution of statements stops when encountering a "terminal" statement, i.e. a serialize, read, redirect or flowscript call.

Processing the pipeline is different for internal and external requests:
- for external requests, the TP starts the processing within the terminal statement (e.g. <serialize>). Errors can then be handled correctly by the enclosing <handle-errors>.
- for internal requests, the TP does *not* start the processing, but gives back a filled pipeline object to the SitemapSource. The SitemapSource starts the processing when the content of the source is needed. This means that pipeline processing occurs outside the <handle-errors> enclosing the <serialize>, and that errors occurring in that phase cannot be handled (at least with the current architecture); see the sketch below.
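The difference can be sketched like this (illustrative code only, not Cocoon's real API): for an external request the processing runs inside a scope that can dispatch to <handle-errors>, while for an internal request the built pipeline is simply handed back and run later, where no such scope exists.

// Hypothetical sketch of why execution-time errors escape <handle-errors>
// for internal requests; the classes below are illustrative only, not
// Cocoon's real API.
public class HandleErrorsTimingDemo {

    interface Pipeline { void process() throws Exception; }

    // External request: the treeprocessor itself runs the pipeline, inside
    // the scope that knows about the enclosing <handle-errors>.
    static void serveExternal(Pipeline pipeline) {
        try {
            pipeline.process();
        } catch (Exception e) {
            System.out.println("caught by <handle-errors>: " + e.getMessage());
        }
    }

    // Internal request: the treeprocessor only returns the built pipeline.
    // By the time the SitemapSource runs it, the <handle-errors> scope is gone.
    static Pipeline buildInternal(Pipeline pipeline) {
        return pipeline; // no try/catch can wrap the later process() call
    }

    public static void main(String[] args) {
        Pipeline failing = () -> { throw new Exception("boom"); };

        serveExternal(failing); // the error is handled by the sitemap

        try {
            buildInternal(failing).process(); // runs later, in the source
        } catch (Exception e) {
            System.out.println("NOT handled by the sitemap, only by the caller: "
                    + e.getMessage());
        }
    }
}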


So we can add handle-errors="always|external|internal" and "?cocoon:handle-errors=true", but it will only handle errors occurring during the _building_ of the pipeline, and not during its _execution_.
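For what the attribute could mean, here is a tiny sketch of the proposed semantics (HandleErrorsModeDemo and the applies() helper are hypothetical, not existing Cocoon code); as said above, it would only be consulted for build-time errors.

// Hypothetical sketch of the proposed attribute semantics: whether an error
// handler applies depends on its declared mode and on whether the current
// request is internal.
public class HandleErrorsModeDemo {

    static boolean applies(String mode, boolean internalRequest) {
        switch (mode) {
            case "always":   return true;
            case "external": return !internalRequest;
            case "internal": return internalRequest;
            default: throw new IllegalArgumentException("unknown mode: " + mode);
        }
    }

    public static void main(String[] args) {
        System.out.println(applies("external", true)); // false: skipped for internal requests
        System.out.println(applies("always", true));   // true: handled either way
    }
}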


That's not what I am after. It does not help with error handling on aggregation. We should aim for execution time.


It all depends on where the errors occur in the aggregated pipelines: is it when building the pipeline or when executing it?

Handling errors occurring during the execution of internal requests would require some not so innocent changes in the pipeline machinery [3].


Well, IMHO this is a major flaw and should be tackled, no matter whether we need to change something or not. I think the pipeline machinery is so deep in the core that probably not too many people would notice anyway.


Don't know... but maybe it would be possible to move the error handling further down, to the pipeline level? If we were able to add that to the Abstract... classes, even fewer people would be affected. But I have no clue if that's possible at all.

Especially if we don't want to mix concerns.
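Just to picture what "at the pipeline level" could mean, here is a purely hypothetical Java sketch (AbstractHandlingPipeline and ErrorHandler are invented names, not Cocoon's Abstract... pipeline classes): the base class wraps the real processing in a hook, so execution-time errors get routed to a handler no matter who started the run.

// Purely hypothetical sketch of error handling at the pipeline level.
abstract class AbstractHandlingPipeline {

    interface ErrorHandler { void handle(Exception e) throws Exception; }

    // Optional handler attached at build time (e.g. derived from <handle-errors>).
    private ErrorHandler errorHandler;

    void setErrorHandler(ErrorHandler handler) { this.errorHandler = handler; }

    // The real processing work, provided by concrete pipelines.
    protected abstract void doProcess() throws Exception;

    // Template method: every caller, internal or external, goes through here,
    // so execution-time errors can be routed to the handler.
    public final void process() throws Exception {
        try {
            doProcess();
        } catch (Exception e) {
            if (errorHandler != null) {
                errorHandler.handle(e);
            } else {
                throw e;
            }
        }
    }
}

public class PipelineLevelHandlingDemo {
    public static void main(String[] args) throws Exception {
        AbstractHandlingPipeline pipeline = new AbstractHandlingPipeline() {
            protected void doProcess() throws Exception { throw new Exception("boom"); }
        };
        pipeline.setErrorHandler(e -> System.out.println("error page for: " + e.getMessage()));
        pipeline.process(); // prints the error page instead of propagating
    }
}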


A solution that I proposed in [1] is that for internal processing, the TP not only builds the pipeline, but also returns a pointer to the error-handling statements wrapped in a Processor. That way, the SitemapSource can call the appropriate error handling statements.
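A rough sketch of that idea (InternalPipeline, ErrorHandlerProcessor and readSource() are invented names, not what [1] or Cocoon actually define): the treeprocessor returns the pipeline together with an error-handling callback, and the SitemapSource uses it when processing fails.

// Hypothetical sketch of the proposal in [1]: for internal requests the
// treeprocessor hands back not only the built pipeline but also the matching
// error-handling statements, so the SitemapSource can invoke them itself at
// execution time.
public class InternalErrorHandlingDemo {

    interface Pipeline { void process() throws Exception; }
    interface ErrorHandlerProcessor { void handle(Exception e); }

    // What the treeprocessor would return for an internal request.
    static class InternalPipeline {
        final Pipeline pipeline;
        final ErrorHandlerProcessor errorHandler;

        InternalPipeline(Pipeline pipeline, ErrorHandlerProcessor errorHandler) {
            this.pipeline = pipeline;
            this.errorHandler = errorHandler;
        }
    }

    // What the SitemapSource would do when the content of the source is needed.
    static void readSource(InternalPipeline internal) {
        try {
            internal.pipeline.process();
        } catch (Exception e) {
            // The sitemap's error-handling statements are now reachable at
            // execution time, even for an internal request.
            internal.errorHandler.handle(e);
        }
    }

    public static void main(String[] args) {
        InternalPipeline internal = new InternalPipeline(
                () -> { throw new Exception("boom"); },
                e -> System.out.println("error pipeline produced for: " + e.getMessage()));
        readSource(internal);
    }
}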

But we can of course go one step at a time and start by catching pipeline build-time exceptions.


Not sure if that's worth the effort.

I'd propose to change what needs to be changed.


Sure :-)

Sylvain

[1] http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=107876029119774&w=2

--
Sylvain Wallez                                  Anyware Technologies
http://www.apache.org/~sylvain           http://www.anyware-tech.com
{ XML, Java, Cocoon, OpenSource }*{ Training, Consulting, Projects }


