On Mon, Jul 26, 2010 at 16:37, Nick Wellnhofer <[email protected]> wrote:
> On 26/07/2010 15:58, Nikolai Weibull wrote:

>> I’m using XSL-T for parsing escaped HTML inside XML.  Yes,
>> terrible, I know, but it’s the best I can do.
>>
>> Anyway, I’m currently running out of stack space when trying to parse
>> longer strings and was wondering if anyone had actually considered
>> “tail-call optimizing” call-template to avoid running out of stack.
>> How much work would it be?

> I once had a look at it and I'd consider it a lot of work.  The libxslt
> code uses indirect recursion over three or four functions.  It's not
> trivial to convert them all to tail calls.  Alternatively, one could use
> trampolines.  But that looks pretty complicated, too.

Yeah, I noticed that, too. :-(

>> On my Cygwin installation the following script works, but changing the
>> test to 5175 causes a silent exit.

> I'd suggest having a look at the EXSLT string functions.  Maybe you can
> solve your problem with them without using recursive template calls.

No, one can’t; I’ve tried.  This really needs a “real” HTML lexer.

The reason I’m doing this is to be able to mark up the escaped HTML as
non-translatable content.  It works perfectly, except for the fact that
it depends so heavily on recursion.

One solution would be to use for-each on each of the individual
characters of the string (through str:split($text, "")), but there’s no
way to save the current state information of the lexer (that is, whether
we are currently processing an element name, an attribute name, …).

_______________________________________________
xslt mailing list, project page http://xmlsoft.org/XSLT/
[email protected]
http://mail.gnome.org/mailman/listinfo/xslt
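[For readers joining the thread: the recursive pattern under discussion can be reduced to a minimal stylesheet. This is a hypothetical sketch, not the actual script from the earlier message — a named template that consumes one character per call, so the call depth grows linearly with the string length, and long inputs exhaust libxslt's C stack because call-template is not tail-call optimized.]

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>

  <!-- Walk a string one character per recursive call.
       Each call adds a C stack frame in libxslt, so the
       maximum string length is bounded by the stack size. -->
  <xsl:template name="walk">
    <xsl:param name="text"/>
    <xsl:if test="string-length($text) &gt; 0">
      <!-- a real lexer would dispatch on this character here -->
      <xsl:value-of select="substring($text, 1, 1)"/>
      <xsl:call-template name="walk">
        <xsl:with-param name="text" select="substring($text, 2)"/>
      </xsl:call-template>
    </xsl:if>
  </xsl:template>

  <xsl:template match="/">
    <xsl:call-template name="walk">
      <xsl:with-param name="text" select="string(.)"/>
    </xsl:call-template>
  </xsl:template>
</xsl:stylesheet>
```

A silent exit around a few thousand characters (such as the 5175 reported above on Cygwin, where the default thread stack is comparatively small) is consistent with this kind of depth-proportional recursion; the same stylesheet typically succeeds on platforms with a larger stack or when run with a raised stack limit.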
