[email protected] wrote:
On Mon, Jul 13, 2009 at 03:46:01PM -0500, Shawn Walker wrote:
[email protected] wrote:
On Mon, Jul 13, 2009 at 01:27:30PM -0700, Alan Steinberg wrote:
No problem there. See below. I'm going to reboot and flip over to
build 117 to see if I have the same problem. That will help identify
if it's my nv118 system or something on the server end which affects
my system.
-- Alan
wget http://ipkg.sfbay/dev/file/0/d2307dc951d3f7d63fef87e1806976c8eb012e97
--13:19:21--  http://ipkg.sfbay/dev/file/0/d2307dc951d3f7d63fef87e1806976c8eb012e97
           => `d2307dc951d3f7d63fef87e1806976c8eb012e97'
Resolving ipkg.sfbay... 129.xxx.xxx.xxx
Connecting to ipkg.sfbay|129.xxx.xxx.xxx|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 16,900,743 (16M) [application/data]

100%[====================================>] 16,900,743   107.22K/s    ETA 00:00

13:22:23 (90.97 KB/s) - `d2307dc951d3f7d63fef87e1806976c8eb012e97' saved [16900743/16900743]
Hold up, you don't need to reboot. The problem is right here.
You're downloading 16 MB at roughly 90 KB/s, which means the transfer
takes about three minutes to complete. 13:22 - 13:19 is 3 minutes, which
confirms the calculation. It appears that you're hitting the timeout
because your link is too slow: it can't download the entire file in 30
seconds, so it gives up.
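The arithmetic can be checked directly from the figures in the wget
output above (16,900,743 bytes at an average of 90.97 KB/s):

```python
# Transfer-time estimate from the wget output above.
size_bytes = 16_900_743   # file length reported by wget
rate_kbs = 90.97          # average rate reported by wget, in KB/s

seconds = size_bytes / (rate_kbs * 1024)
print(f"estimated transfer time: {seconds:.0f} s (~{seconds / 60:.1f} min)")
# Far longer than the 30-second client timeout, so the download is aborted.
```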
A workaround for this is to set PKG_CLIENT_TIMEOUT to the number of
seconds before you think the transfer should time out. In this case,
300 may be a reasonable value.
Shouldn't the timeout be based on no communication from the server
rather than the length of the entire transaction? That is, each time a
package is received from the server, the 30 second timeout should start
over, right?
That's not how libcurl defines the timeouts. They're per operation:
a timeout can be configured either as a limit on the total time of the
operation, or as the amount of time we wait before giving up on a
connection to the server.
The libcurl docs suggest a timeout of several minutes when using
CURLOPT_TIMEOUT.
However, given that an individual file can be gigabytes in size (think
game files for packages such as Nexuiz), it seems like we need a
different approach to timeouts on transfers.
Specifically, it seems like it would be better to use
CURLOPT_LOW_SPEED_TIME and CURLOPT_LOW_SPEED_LIMIT and not use
CURLOPT_TIMEOUT at all.
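A rough sketch of the semantics those two options provide (abort only
when the measured rate stays below a floor for a sustained period),
written here as plain Python rather than the actual libcurl/pycurl
calls; the parameter values are illustrative:

```python
# Stall detector mimicking CURLOPT_LOW_SPEED_LIMIT / CURLOPT_LOW_SPEED_TIME:
# abort only if the transfer rate stays below low_speed_limit bytes/sec
# for low_speed_time consecutive seconds. A sketch of the semantics,
# not libcurl's implementation.
class StallDetector:
    def __init__(self, low_speed_limit=1024, low_speed_time=30):
        self.limit = low_speed_limit     # bytes/sec floor
        self.window = low_speed_time     # seconds the rate may stay below it
        self.slow_since = None           # timestamp when the slow spell began

    def update(self, bytes_per_sec, now):
        """Return True if the transfer should be aborted."""
        if bytes_per_sec >= self.limit:
            self.slow_since = None       # back above the floor; reset
            return False
        if self.slow_since is None:
            self.slow_since = now
        return now - self.slow_since >= self.window

# A huge file downloading steadily at 90 KB/s never trips the detector,
# no matter how long the transfer takes...
d = StallDetector()
assert not d.update(90 * 1024, now=0)
assert not d.update(90 * 1024, now=600)
# ...but a link that stays below 1 KB/s for 30 seconds does.
assert not d.update(100, now=601)
assert d.update(100, now=631)
print("ok")
```

Unlike CURLOPT_TIMEOUT, this never penalizes a large-but-healthy
transfer; it only fires when the link has effectively stalled.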
Cheers,
--
Shawn Walker
_______________________________________________
pkg-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/pkg-discuss