Jason and others,

I have been thinking about an approach based on the scrape taglib.
The scrape page tag uses a begin and an end anchor to retrieve the contents between them.
All content in the wiki is wrapped in <div id="snip-content" class="snip-content"> ... </div>, so those two tags can serve as the anchors.
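
For illustration, here is a rough sketch of how the content between those two anchors could be pulled out of a page, in the same spirit as the scrape tag. Plain JDK classes only; the SnipScraper name is made up:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Hypothetical helper, not part of the scrape taglib: fetches a wiki page
// and keeps only the markup between the begin/end anchors.
public class SnipScraper {

    private static final String BEGIN_ANCHOR =
            "<div id=\"snip-content\" class=\"snip-content\">";
    private static final String END_ANCHOR = "</div>";

    public static String scrape(String pageUrl) throws Exception {
        // Read the whole page into a buffer.
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(pageUrl).openStream()));
        StringBuffer page = new StringBuffer();
        String line;
        while ((line = in.readLine()) != null) {
            page.append(line).append('\n');
        }
        in.close();

        // As with the scrape tag, simply take the text between the two
        // anchors (this assumes no nested <div> tags inside the content).
        int begin = page.indexOf(BEGIN_ANCHOR);
        if (begin < 0) {
            throw new IllegalStateException("begin anchor not found: " + pageUrl);
        }
        begin += BEGIN_ANCHOR.length();
        int end = page.indexOf(END_ANCHOR, begin);
        return page.substring(begin, end);
    }
}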


It shouldn't be too hard to create an Ant task which downloads all pages for the xwork/ww2 or other modules.

An approach could look like this (a rough sketch follows the list):
o Create an XML-based config file that lists all pages to download, together with a mapping to local filenames.
o Download the files from the wiki and replace all configured online links with the mapped local filenames.
o Create the local files based on a simple template and store them in the docs directory.
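
To make the idea a bit more concrete, here is a rough, untested sketch of what such a task could look like. All names (WikiDocsTask, the nested <page> element, the destdir attribute) are made up for illustration, and the page-to-file mapping is shown as nested build file elements for brevity; reading it from a separate XML config file would work just as well:

import java.io.FileWriter;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.tools.ant.BuildException;
import org.apache.tools.ant.Task;

// Hypothetical Ant task, not an existing WW2 task.
public class WikiDocsTask extends Task {

    // Nested <page url="..." file="..."/> elements hold the mapping.
    public static class Page {
        private String url;
        private String file;
        public void setUrl(String url) { this.url = url; }
        public void setFile(String file) { this.file = file; }
    }

    private List pages = new ArrayList();
    private String destDir;

    public void setDestdir(String destDir) { this.destDir = destDir; }

    public Page createPage() {
        Page page = new Page();
        pages.add(page);
        return page;
    }

    public void execute() throws BuildException {
        try {
            for (Iterator it = pages.iterator(); it.hasNext();) {
                Page page = (Page) it.next();
                // 1. Download and scrape the wiki page (see the SnipScraper sketch above).
                String content = SnipScraper.scrape(page.url);
                // 2. Replace every configured online link with its local filename.
                for (Iterator links = pages.iterator(); links.hasNext();) {
                    Page link = (Page) links.next();
                    content = replace(content, link.url, link.file);
                }
                // 3. Wrap the content in a simple template and write it to the docs dir.
                FileWriter out = new FileWriter(destDir + "/" + page.file);
                out.write("<html><body>" + content + "</body></html>");
                out.close();
            }
        } catch (Exception e) {
            throw new BuildException(e);
        }
    }

    // JDK 1.3/1.4-friendly string replace helper.
    private static String replace(String text, String from, String to) {
        StringBuffer sb = new StringBuffer();
        int start = 0, idx;
        while ((idx = text.indexOf(from, start)) >= 0) {
            sb.append(text.substring(start, idx)).append(to);
            start = idx + from.length();
        }
        sb.append(text.substring(start));
        return sb.toString();
    }
}

In the build file this would then boil down to a <taskdef> plus one <page url="..." file="..."/> entry per wiki page.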


The only thing that ever has to be updated for the task is this simple XML config file.

Any thoughts/comments?

Rainer


--
Rainer Hermanns                           [EMAIL PROTECTED]
Woperstr. 34                              tel: +49 (0)170  - 3432 912
D-52134 Herzogenrath                      fax: +49 (0)2406 - 7398



