According to Rob Kremer:
> Changing the time-out value fixed the problem.  Thanks!
> 
> Jim Cole wrote:
> > Rob Kremer's bits of Thu, 1 Aug 2002 translated to:
> > 
> > 
> >>First issue: I thought there was a problem getting files with applets.  What I
> >>found out is that the web page it is trying to dig has so many URLs in it that
> >>it can't get through all of them in 30 seconds; if I take out half of the URLs,
> >>it can.  This page is the main page of our web site and really needs to be
> >>indexed.  Is there some way I can extend the time-out value?
> >>
> > 
> > Have you tried http://www.htdig.org/dev/htdig-3.2/attrs.html#timeout

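For anyone following along: based on the attribute name documented at the page above, raising the limit should just be a one-line change in the config file (value shown here is an arbitrary example):

```
# htdig.conf -- raise the network timeout (in seconds); the default is 30
timeout: 120
```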
Interesting.  I thought the timeout applied to opening the connection
initially, or to each individual read request.  I.e., if it gets no
response at all for that amount of time, it should abort.  It should not
abort simply because the whole file takes longer than 30 seconds
to fetch, as long as it's getting _something_ back from the server during
that time.  This may be a bug.
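The distinction being described — a per-read (idle) timeout versus a limit on the total transfer time — can be sketched with a toy model.  This is purely illustrative and not htdig's actual code; the function and parameter names are made up for the example:

```python
def transfer_aborts(chunk_arrival_times, per_read_timeout):
    """Simulate a per-read (idle) timeout.

    chunk_arrival_times: absolute times (seconds) at which successive
    chunks of the response arrive.  The transfer aborts only if the gap
    between two consecutive chunks (or before the first chunk) exceeds
    per_read_timeout -- total duration does not matter.
    """
    last_activity = 0.0
    for t in chunk_arrival_times:
        if t - last_activity > per_read_timeout:
            return True  # server went silent longer than the timeout
        last_activity = t
    return False

# A 90-second transfer that delivers a chunk every 10 seconds never
# trips a 30-second per-read timeout...
assert not transfer_aborts([10, 20, 30, 40, 50, 60, 70, 80, 90], 30)
# ...but a single 40-second silence does, even on a short transfer.
assert transfer_aborts([10, 50], 30)
```

Under per-read semantics, the large page in the original report should index fine as long as the server keeps sending data; aborting it at 30 seconds total would mean the timeout is being applied to the whole fetch, which is the behavior being questioned as a possible bug.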

-- 
Gilles R. Detillieux              E-mail: <[EMAIL PROTECTED]>
Spinal Cord Research Centre       WWW:    http://www.scrc.umanitoba.ca/
Dept. Physiology, U. of Manitoba  Winnipeg, MB  R3E 3J7  (Canada)


_______________________________________________
htdig-general mailing list <[EMAIL PROTECTED]>
To unsubscribe, send a message to <[EMAIL PROTECTED]> with a 
subject of unsubscribe
FAQ: http://htdig.sourceforge.net/FAQ.html
