One of the things we talked about at the Struts BOF at ApacheCon was ways to share technology between SAF and Shale. Here's an opportunity to do that, which I'd like some input on.
If you've been following my recent SVN commits, you've seen me working on a remoting API for Shale[1] that includes a Processor abstraction to map an incoming request for a "resource" to some sort of processing logic that produces the corresponding response. The basic interface (Processor) is about as simple as it can get, given that this is already a JSF request, so a FacesContext instance is available with access to the request/response/context objects:

    public interface Processor {
        public void process(FacesContext context, String resourceId)
            throws IOException;
    }

There are currently implementations for static resources (from either the webapp or the webapp class loader), a way to bind to a JSF method binding expression and trigger execution of an arbitrary method, and (just committed) a way to invoke a Commons Chain command or chain that corresponds to the resource identifier. In the latter case, for example, it just creates a simple Context object and passes it in to the command, expecting the command to take responsibility for producing the corresponding response.

The same concept would seem equally applicable to other sorts of scenarios ... in particular, invoking a Struts action, or an XWork/WebWork action. I have a pretty good handle on how to implement the former ... it's the latter that is not obvious at the moment. Feasible implementations would include:

* Write a processor that maps to an XWork action
* Write a processor that maps to a WebWork action
* Write a processor that maps to whatever a SAF 2.0 "action" thing ends up looking like (which means waiting until we know what that is :-)

Thoughts on the best approach?

Craig

[1] http://struts.apache.org/struts-shale/shale-core/apidocs/org/apache/shale/remoting/package-summary.html
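P.S. To make the dispatch pattern concrete, here is a minimal, self-contained sketch of the "map a resource identifier to a command" idea. The class and method names are illustrative only (not the actual Shale API), and the stand-in Command interface and Map-based context replace org.apache.commons.chain.Command and org.apache.commons.chain.Context so the example compiles on its own; the JSF FacesContext parameter is likewise omitted:

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Stand-in for org.apache.commons.chain.Command, so this sketch
// is self-contained; the real processor would use the Chain types.
interface Command {
    boolean execute(Map<String, Object> context) throws Exception;
}

// Hypothetical processor: looks up the Command registered under the
// incoming resource identifier, creates a fresh context, and delegates
// response production entirely to the command (as described above).
public class ChainProcessorSketch {

    private final Map<String, Command> catalog =
        new HashMap<String, Command>();

    public void addCommand(String resourceId, Command command) {
        catalog.put(resourceId, command);
    }

    // Mirrors Processor.process(FacesContext, String), minus the JSF types.
    public void process(String resourceId) throws IOException {
        Command command = catalog.get(resourceId);
        if (command == null) {
            throw new IOException("No command mapped to " + resourceId);
        }
        Map<String, Object> context = new HashMap<String, Object>();
        context.put("resourceId", resourceId);
        try {
            // The command takes responsibility for producing the response.
            command.execute(context);
        } catch (Exception e) {
            throw new IOException("Command failed: " + e.getMessage());
        }
    }
}
```

A Struts, XWork, or WebWork flavor would follow the same shape: only the lookup (catalog vs. action configuration) and the invocation target change.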