[
https://issues.apache.org/jira/browse/THRIFT-669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12795165#action_12795165
]
Aron Sogor commented on THRIFT-669:
-----------------------------------
It is a single request, but with no buffering. HTTP chunked encoding allows the
request and the response to be sent in chunks, so each method call goes out as a
request chunk and each result comes back as a response chunk (see the sketch
after the pseudo below).
In pseudo:
<HTTP REQUEST HEADER>
<chunk1 request>
<HTTP RESPONSE HEADER>
<chunk1 response>
<chunk2 request>
<chunk2 response>
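For reference, a minimal client-side sketch of the idea over a raw socket (this is
not the attached TFullDuplexHttpClient.java). The host, port, path and the use of
POST instead of CONNECT are assumptions, and it relies on a server, like the Jetty
setup in the capture, that starts streaming response chunks before the request
body has finished:

import java.io.*;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ChunkedDuplexSketch {
    private final Socket socket;
    private final OutputStream out;
    private final InputStream in;
    private boolean responseHeaderConsumed = false;

    public ChunkedDuplexSketch(String host, int port, String path) throws IOException {
        socket = new Socket(host, port);
        out = new BufferedOutputStream(socket.getOutputStream());
        in = new BufferedInputStream(socket.getInputStream());
        // One request header for the whole conversation.
        String header = "POST " + path + " HTTP/1.1\r\n"
                + "Host: " + host + ":" + port + "\r\n"
                + "Transfer-Encoding: chunked\r\n"
                + "Content-Type: application/x-thrift\r\n"
                + "\r\n";
        out.write(header.getBytes(StandardCharsets.US_ASCII));
        out.flush();
    }

    // Send one serialized Thrift call as a single request chunk.
    public void sendChunk(byte[] payload) throws IOException {
        out.write((Integer.toHexString(payload.length) + "\r\n").getBytes(StandardCharsets.US_ASCII));
        out.write(payload);
        out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
        out.flush();
    }

    // Read one response chunk; the response status line and headers are
    // consumed once, on the first read.
    public byte[] readChunk() throws IOException {
        if (!responseHeaderConsumed) {
            while (!readLine().isEmpty()) { /* discard status line and headers */ }
            responseHeaderConsumed = true;
        }
        int size = Integer.parseInt(readLine().trim(), 16);
        byte[] data = new byte[size];
        int off = 0;
        while (off < size) {
            int n = in.read(data, off, size - off);
            if (n < 0) throw new EOFException();
            off += n;
        }
        readLine(); // trailing CRLF after the chunk body
        return data;
    }

    private String readLine() throws IOException {
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = in.read()) != -1 && c != '\n') {
            if (c != '\r') sb.append((char) c);
        }
        return sb.toString();
    }

    public void close() throws IOException {
        out.write("0\r\n\r\n".getBytes(StandardCharsets.US_ASCII)); // terminal chunk
        out.flush();
        socket.close();
    }
}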
Here is a capture from Wireshark (no color) of calling gettime in a loop:
CONNECT /ds/ HTTP/1.1
Host: localhost:8080
User-Agent: BattleNet
Transfer-Encoding: chunked
content-type: application/x-thrift
Accept: */*
11
........time.....HTTP/1.1 200 OK
Content-Type: application/x-thrift
Transfer-Encoding: chunked
Server: Jetty(7.0.1.v20091125)
18
........time............
11
........time.....
18
........time............
11
........time.....
<and many more>
This is different from the current HTTP client, where each method call creates
its own HTTP request/response headers.
Check with Wireshark :)
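To make the loop concrete, here is a hypothetical driver for the sketch above.
serializeTimeCall() is a stand-in for a real Thrift protocol writer producing the
"time" call bytes, and the endpoint matches the localhost:8080 /ds/ values from
the capture; only one request header and one response header cross the wire for
all the calls:

public class ChunkedLoopDemo {
    public static void main(String[] args) throws Exception {
        ChunkedDuplexSketch client = new ChunkedDuplexSketch("localhost", 8080, "/ds/");
        for (int i = 0; i < 10; i++) {
            client.sendChunk(serializeTimeCall()); // one request chunk per call
            byte[] reply = client.readChunk();     // one response chunk per reply
            System.out.println("reply bytes: " + reply.length);
        }
        client.close();
    }

    // Placeholder: a real client would serialize the call with a Thrift
    // protocol (e.g. TBinaryProtocol over a memory buffer).
    private static byte[] serializeTimeCall() {
        return new byte[0];
    }
}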
> Use Http chunk encoding to do full duplex transfer in a single post
> -------------------------------------------------------------------
>
> Key: THRIFT-669
> URL: https://issues.apache.org/jira/browse/THRIFT-669
> Project: Thrift
> Issue Type: Bug
> Affects Versions: 0.2
> Reporter: Aron Sogor
> Attachments: TFullDuplexHttpClient.java
>
>
> Instead of each method call being a separate POST, use a chunk-encoded request.
> If you look at the traffic in Wireshark, the payload is often much smaller
> than the HTTP header. With chunked encoding, the per-method overhead of the
> HTTP header is gone. In a simple test that fetches the time as an i32, switching
> from plain HTTP POST to chunked encoding brought latency from 100+ ms down to
> ~40 ms per request, because the servlet container no longer had to process the
> overhead of a "new request".
> Moreover, I think that with Jetty and continuations, long-running connections
> could scale and perform a lot better than the current HttpClient.