On Fri, May 9, 2008 at 1:02 PM,  <[EMAIL PROTECTED]> wrote:
> Shawn,
>
>> I was wondering if someone could expound a bit on that approach, and
>> why they chose to do a tar stream.
>>
>> The developer I talked to suggested that instead of doing a tar stream
>> from the server, we could simply allow the client to perform HTTP/1.1
>> pipeline requests for each individual file.
>
> We wrote filelist as a way to work around the lack of support for
> HTTP/1.1 features in the current Python libraries.  At least the last
> time I looked, urllib and httplib didn't have any meaningful support for
> pipelining HTTP requests.  The situation was similar for the libraries
> we employed on the server side.  Instead of trying to write our own HTTP
> library, we just put together existing Python components that did work.
>
>> After looking into this a little bit, it looks like the change would
>> be from this:
>> * establish connection
>> * get url_1
>> * readresponse url_1
>> * close connection
>>
>> to this:
>> * establish connection
>> * get url_1
>> * readresponse url_1
>> * get url_2
>> * readresponse url_2
>> * get url_n
>> * readresponse url_n
>> * close connection
>
> If you want performance, I'd change it to this instead:
>
> * establish connection
> * get url_1
> * get url_2
> * get url_N
> * readresponse url_1
> * readresponse url_2
> * readresponse url_N
> * keep connection alive until client exits download stage

Yes, that's right. Sorry.
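
For anyone following along, the pipelined flow quoted above can be
sketched in plain Python without urllib/httplib, since (as noted) those
don't support pipelining.  This is just an illustration, not pkg code;
the host and paths are hypothetical, and it only handles
Content-Length-delimited responses (no chunked encoding):

```python
def build_pipelined_requests(host, paths):
    """Concatenate one GET per path; the server answers them in order,
    so we can send them all before reading any response."""
    reqs = []
    for path in paths:
        reqs.append(
            "GET %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "Connection: keep-alive\r\n"
            "\r\n" % (path, host)
        )
    return "".join(reqs).encode("ascii")

def read_responses(stream, count):
    """Parse `count` back-to-back responses from a byte buffer,
    using Content-Length to find where each body ends."""
    bodies = []
    for _ in range(count):
        header, _, rest = stream.partition(b"\r\n\r\n")
        length = 0
        for line in header.split(b"\r\n"):
            if line.lower().startswith(b"content-length:"):
                length = int(line.split(b":", 1)[1].decode("ascii"))
        bodies.append(rest[:length])
        stream = rest[length:]
    return bodies
```

In practice you'd sendall() the concatenated requests on one socket,
recv() into a buffer until you have all the responses, and then split
them apart as above, keeping the socket open until the download stage
finishes.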

-- 
Shawn Walker

"To err is human -- and to blame it on a computer is even more so." -
Robert Orben
_______________________________________________
pkg-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/pkg-discuss