On Wed, Jul 28, 2010 at 01:33, Michael Ludwig <[email protected]> wrote:
> Nikolai Weibull wrote on 26.07.2010 at 16:49 (+0200):
>
>> This really needs a “real” HTML lexer. The reason I’m doing this is to
>> be able to mark up the escaped HTML as non-translatable content. It
>> works perfectly, except for the fact that it depends so heavily on
>> recursion. One solution would be to use for-each on each of the
>> individual characters of the string (through str:split($text, "")),
>> but there’s no way to save the current state information of the lexer
>> (that is, are we currently processing an element name, an attribute
>> name, …).
> XSLT 1.0 plus EXSLT is of limited expressiveness, which is why some
> people find it convenient to host it in a more powerful general-purpose
> language such as Perl.

I moved the recursion up one level, so that only one template recurses
(rough sketch at the end of this mail). This limits the lexing to 3000
items, but that’s a lot better than 3000 characters.

> I haven't quite understood your problem, but chances are Perl allows you
> to solve it much more easily than XSLT 1.0.

Well, one solution would be to extract the content of elements that
contain escaped HTML, process them separately with xsltproc --html, and
then use the result. It would have been nice to have an XProc processor
for that, but I guess sh should do.
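For the record, here is roughly what I mean by moving the recursion up
one level. This is only a sketch, not my actual stylesheet (the
notranslate wrapper and the "description" element are made up, and the
tag/text split is simplified), but it shows the shape: a single named
template consumes one whole token per call and recurses on the
remainder, so the recursion depth is one frame per token instead of one
frame per character, and libxslt's default depth limit of 3000
(xsltproc --maxdepth) then means roughly 3000 tokens.

<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Hypothetical entry point: lex the escaped HTML held in some
       element (the name "description" is only a placeholder). -->
  <xsl:template match="description">
    <xsl:call-template name="lex">
      <xsl:with-param name="text" select="string(.)"/>
    </xsl:call-template>
  </xsl:template>

  <!-- The single recursing template: consume one token (a whole tag
       or a whole run of text) per call and recurse on the rest. -->
  <xsl:template name="lex">
    <xsl:param name="text"/>
    <xsl:if test="string-length($text) &gt; 0">
      <xsl:choose>
        <!-- A tag: everything up to and including the next '>'. -->
        <xsl:when test="starts-with($text, '&lt;')">
          <span class="notranslate">
            <xsl:value-of
                select="concat(substring-before($text, '&gt;'), '&gt;')"/>
          </span>
          <xsl:call-template name="lex">
            <xsl:with-param name="text"
                select="substring-after($text, '&gt;')"/>
          </xsl:call-template>
        </xsl:when>
        <!-- Text up to the next '<': keep it translatable. -->
        <xsl:when test="contains($text, '&lt;')">
          <xsl:value-of select="substring-before($text, '&lt;')"/>
          <xsl:call-template name="lex">
            <xsl:with-param name="text"
                select="concat('&lt;',
                               substring-after($text, '&lt;'))"/>
          </xsl:call-template>
        </xsl:when>
        <!-- Trailing text with no further tags. -->
        <xsl:otherwise>
          <xsl:value-of select="$text"/>
        </xsl:otherwise>
      </xsl:choose>
    </xsl:if>
  </xsl:template>

</xsl:stylesheet>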

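As for the xsltproc --html route, the extraction step could be little
more than the following (again, the element name is made up). With
method="text" the escaped entities come out as literal markup on
stdout, which can then be fed to xsltproc --html and a second
stylesheet.

<?xml version="1.0"?>
<!-- Dump the escaped HTML held in some element as plain text, so the
     entity-escaped markup becomes literal HTML on stdout.  The
     element name "description" is only a placeholder. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text" encoding="UTF-8"/>
  <xsl:template match="/">
    <xsl:value-of select="//description"/>
  </xsl:template>
</xsl:stylesheet>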