At one point, I modified the Scrape taglib to use the standard URLConnection object.  
I botched it the first time, but I think I had it working OK the second time.  I found 
one real problem -- there was a mistake in the use of the regular expression library 
to find the text between the tags.  I don't remember all of the details, but I'll find 
my changes in a day or so and send them here.
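In the meantime, a rough sketch of the idea (not the actual patch -- the class name and pattern handling here are just illustrative): fetch the page with the standard URLConnection and pull out the text between two anchors with java.util.regex.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLConnection;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Illustrative sketch only -- not the actual taglib change.
    public class ScrapeSketch {
        public static String scrape(String pageUrl, String begin, String end)
                throws Exception {
            // Fetch the page with the standard URLConnection.
            URLConnection conn = new URL(pageUrl).openConnection();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            StringBuffer page = new StringBuffer();
            String line;
            while ((line = in.readLine()) != null) {
                page.append(line).append('\n');
            }
            in.close();

            // Quote the anchors so regex metacharacters in them are literal,
            // and use DOTALL so the match can span multiple lines.
            Pattern p = Pattern.compile(
                    Pattern.quote(begin) + "(.*?)" + Pattern.quote(end),
                    Pattern.DOTALL);
            Matcher m = p.matcher(page);
            return m.find() ? m.group(1) : null;
        }
    }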

As for the proxy, it should work OK with the standard URLConnection object, except for 
authentication.  You need to add a custom header (Proxy-Authorization) to the request 
to solve that problem.  I have some code for that as well.
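For illustration, a minimal sketch of that idea (not the code I mentioned; the proxy host, port, and credentials are placeholders, and java.util.Base64 needs a recent JDK):

    import java.net.URL;
    import java.net.URLConnection;
    import java.util.Base64;

    // Sketch only: the header is "Proxy-Authorization", with a value of
    // "Basic " + base64(user:password). Proxy host and port are placeholders.
    public class ProxyAuthSketch {
        public static URLConnection openViaProxy(String pageUrl,
                                                 String user, String password)
                throws Exception {
            // Route HTTP traffic through the proxy (JVM-wide settings).
            System.setProperty("http.proxyHost", "proxy.example.com");
            System.setProperty("http.proxyPort", "8080");

            URLConnection conn = new URL(pageUrl).openConnection();
            String credentials = Base64.getEncoder().encodeToString(
                    (user + ":" + password).getBytes("ISO-8859-1"));
            conn.setRequestProperty("Proxy-Authorization", "Basic " + credentials);
            return conn;
        }
    }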

Ken Meltsner


-----Original Message-----
From: Rich Catlett [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 10, 2002 3:10 PM
To: [EMAIL PROTECTED]
Subject: scrape taglib


Still not having any luck with the scrape taglib.  The only responses I have gotten
are from two others with the same issue.
  Follow-up questions:
     1. Is there a place we can look for errors?

The taglib logs all of its errors using the servletContext.log() method; in Tomcat 
this writes them to localhost_log.<today's date>.txt.
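For example, inside a tag handler that boils down to something like this (hypothetical class name; sketch only):

    import javax.servlet.jsp.tagext.TagSupport;

    // Sketch only (hypothetical tag class): errors are reported through the
    // servlet context, which Tomcat writes to the dated localhost log file.
    public class ExampleScrapeTag extends TagSupport {
        private void logError(Exception e) {
            pageContext.getServletContext().log("scrape taglib: fetch failed", e);
        }
    }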

     2. Is there a timeout that is causing it to come back blank?  If so, can it be set?

The connection is set to time out after 20 seconds; if you want to change that, you 
would have to go into the code.
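For what it's worth, if you do go into the code, one way to express a 20-second timeout on a plain URLConnection is sketched below (not necessarily how the taglib does it, and setConnectTimeout/setReadTimeout need a reasonably recent JDK):

    import java.net.URL;
    import java.net.URLConnection;

    // Sketch only: a 20-second connect and read timeout on a URLConnection.
    public class TimeoutSketch {
        public static URLConnection openWithTimeout(String pageUrl) throws Exception {
            URLConnection conn = new URL(pageUrl).openConnection();
            conn.setConnectTimeout(20 * 1000); // milliseconds
            conn.setReadTimeout(20 * 1000);
            return conn;
        }
    }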

     3. Are there issues using it with a proxy server?

The taglib is not currently set up to work via a proxy; however, that is in the works.  
I can give you a test jar file and .tld file if you want to try it out; use of a proxy 
with no authentication should work.
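For reference, an unauthenticated proxy can also be supplied per connection rather than JVM-wide; a sketch (the proxy host and port are placeholders, this needs a recent JDK, and it is not necessarily what the test jar does):

    import java.net.InetSocketAddress;
    import java.net.Proxy;
    import java.net.URL;
    import java.net.URLConnection;

    // Sketch only: open one connection through an unauthenticated HTTP proxy.
    public class UnauthenticatedProxySketch {
        public static URLConnection open(String pageUrl) throws Exception {
            Proxy proxy = new Proxy(Proxy.Type.HTTP,
                    new InetSocketAddress("proxy.example.com", 8080));
            return new URL(pageUrl).openConnection(proxy);
        }
    }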

Rich Catlett

--
To unsubscribe, e-mail:   <mailto:[EMAIL PROTECTED]>
For additional commands, e-mail: <mailto:[EMAIL PROTECTED]>
