I've scripted with JS for 12 years and would contribute all I could to a
project of this type. My interest would be more from a reverse engineering
viewpoint. If LWP could be used to efficiently parse JS at the user-agent
level, it would be both a blessing and a curse. Right now JS is great for
blindsiding unwanted spiders that are slowly learning what to do. Getting
a jump on this could be advantageous in combating a JS-capable robot
harvesting information for nefarious purposes.

Jim

-----Original Message-----
From: John J Lee [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 22, 2006 2:35 PM
To: Stefan Seifert
Cc: libwww@perl.org
Subject: Re: State of the AJAX Union


On Wed, 22 Nov 2006, Stefan Seifert wrote:
[...]
> I too thought about that. Maybe using the JavaScript or
> JavaScript::Spidermonkey module and XML::DOM. I will certainly
> experiment around with them, as we need it at work. Doesn't seem to be

Sigh, we've had this same little discussion at least five times here.

The browser object model is not the XML DOM.  It is the HTML DOM (which is
ill-defined in practice, and is not really a superset of the XML DOM),
plus other stuff.  There is currently no implementation of it outside of
browsers.  Plus you have to build the damned DOM in the first place :-)
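To make the distinction concrete, here is a small sketch using Python's xml.dom.minidom purely as a stand-in for a W3C XML DOM Core implementation (the thread is about Perl, but the point is language-independent): the core XML DOM gives you generic nodes and elements, while the conveniences scripts actually rely on (document.forms, innerHTML, window, and so on) belong to the browser's HTML DOM and simply aren't there.

```python
from xml.dom.minidom import parseString

# Parse a trivial HTML-shaped document with an XML DOM implementation.
doc = parseString("<html><body><form id='f'></form></body></html>")
body = doc.getElementsByTagName("body")[0]

# Core XML DOM methods work fine:
print(body.tagName)                # 'body'

# ...but the HTML DOM conveniences that real-world JS depends on
# are browser-side extensions, absent from the XML DOM:
print(hasattr(doc, "forms"))       # False: document.forms is HTML DOM only
print(hasattr(body, "innerHTML"))  # False: innerHTML is a browser extension
```

So even with a JS engine wired up, a script that does document.forms[0].submit() has nothing to talk to until someone implements the HTML DOM (and the rest of the browser object model) on top of the parsed tree.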


> too hard to me, but of course, I'm underestimating that :)

Yes.

As I've said many times before here, getting something working is not too
hard, getting something useful is harder (how much depends on the
audience, I guess), getting something good is a lot of work.  Maybe this
is universally true, but especially so of JS support for LWP :-)


John



