Ben, you are correct about the dial thing; not sure how that ended up commented out.

Anyway, I've removed the custom dial (and also tried with the timeout 
enabled). It did increase r/s a little (5%-10% or so), but it also increased 
the number of timeouts on the remote URLs.
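
For reference, here is roughly how the client is configured now, with the 
custom Dial removed and ReadTimeout enabled (a minimal sketch; the newClient 
helper name and the exact timeout values are just illustrative):

package main

import (
    "time"

    "github.com/valyala/fasthttp"
)

// newClient builds the client as it is configured now: no custom Dial (so
// fasthttp falls back to its built-in dialer) and ReadTimeout enabled so
// reads can't block forever.
func newClient() *fasthttp.Client {
    return &fasthttp.Client{
        ReadTimeout:         1 * time.Second,
        MaxIdleConnDuration: 30 * time.Second,
        MaxConnsPerHost:     2024,
    }
}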

On Thursday, February 2, 2017 at 14:11:29 (UTC-3), James Bardin wrote:
>
> The first things I notice are that you're overriding the default dialer 
> with one that doesn't time out, and that you've commented out ReadTimeout 
> in the client. Both of those can hold up client connections indefinitely 
> regardless of the DoTimeout call, which only ensures that the Do function 
> returns before the deadline. 
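>
> As a minimal sketch (not your exact code), with a connect timeout on the 
> dialer and the read timeout turned back on; the 3s/1s values are only 
> examples:
>
> client := fasthttp.Client{
>     Dial: func(addr string) (net.Conn, error) {
>         // Bound the connect, so a dial that never completes can't hold
>         // the goroutine (and its connection slot) forever.
>         d := net.Dialer{Timeout: 3 * time.Second}
>         return d.Dial("tcp", addr)
>     },
>     ReadTimeout: 1 * time.Second, // uncommented, bounds reads as well
> }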
>
> Otherwise, I think you're going to have to instrument the code a little 
> better to see what is holding you up. 
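>
> One way to do that (just a suggestion, not something your code already 
> does) is to expose the pprof handlers on a side port and look at the 
> goroutine dump while the test is running:
>
> import _ "net/http/pprof"
>
> func main() {
>     // net/http/pprof registers on http.DefaultServeMux, so serve it on a
>     // separate port from your handler's mux.
>     go http.ListenAndServe(":6060", nil)
>     // ... rest of main ...
> }
>
> Then http://localhost:6060/debug/pprof/goroutine?debug=2 shows where every 
> goroutine is parked, which should tell you what they're blocked on.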
>
>
> On Thursday, February 2, 2017 at 11:27:38 AM UTC-5, emarti...@gmail.com 
> wrote:
>>
>> Thanks for the answer.
>>
>> Yes, it seems to be blocking. I just fixed it with: 
>> http://blog.sgmansfield.com/2016/01/the-hidden-dangers-of-default-rand/
>>
>> After that change my code is working a little better, but I still see a 
>> ton of timeouts plus high latency on responses. Maybe it is locking 
>> somewhere else in the code?
>>
>> On Thursday, February 2, 2017 at 4:09:07 (UTC-3), land...@gmail.com 
>> wrote:
>>>
>>> func randInt(min int, max int) int {
>>>     rand.Seed(int64(time.Now().Nanosecond()))
>>>     return min + rand.Intn(max-min)
>>> }
>>>
>>> is the culprit. The default rand locks globally for concurrent access. 
>>> For maximum speed you need to create a new rand in each goroutine you 
>>> want to use it in.
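>>>
>>> For example, something along these lines (a sketch, not your code; the 
>>> randIntLocal name is made up):
>>>
>>> // randIntLocal takes a *rand.Rand that each goroutine creates once with
>>> // rand.New(rand.NewSource(...)). A *rand.Rand is not safe for concurrent
>>> // use, so don't share one between goroutines.
>>> func randIntLocal(r *rand.Rand, min, max int) int {
>>>     return min + r.Intn(max-min)
>>> }
>>>
>>> // in each goroutine:
>>> // r := rand.New(rand.NewSource(time.Now().UnixNano()))
>>> // n := randIntLocal(r, 0, 100)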
>>>
>>> On Wednesday, February 1, 2017 at 8:26:07 PM UTC-6, emarti...@gmail.com 
>>> wrote:
>>>>
>>>> Hello,
>>>>
>>>> I'm writing a POC for a future RTB platform. Basically I'm doing stress 
>>>> tests by receiving HTTP requests and performing HTTP GET requests in the 
>>>> background.
>>>>
>>>> The issue I face is that when I try to scale up, URLs start to time 
>>>> out. I am trying to find what our bottleneck is, but so far no luck (we 
>>>> aren't running out of ephemeral ports).
>>>>
>>>> Remote URLs take about 500ms-1000ms to respond. We are stress testing 
>>>> it with wrk at 5000 concurrent requests (5000 incoming requests, each 
>>>> translating into 30 remote URL requests, for 150k requests in total).
>>>>
>>>> Here is our code:
>>>>
>>>> package main
>>>>
>>>> import (
>>>>     "io"
>>>>     "math/rand"
>>>>     "net"
>>>>     "net/http"
>>>>     "runtime"
>>>>     "time"
>>>>
>>>>     "github.com/valyala/fasthttp"
>>>> )
>>>>
>>>> type HttpResponse struct {
>>>>     url      string
>>>>     response *http.Response
>>>>     err      error
>>>> }
>>>>
>>>> var urls = []string{
>>>>     //lots of urls
>>>> }
>>>>
>>>> //func asyncHttpGets(urls []string) []*HttpResponse {
>>>>
>>>> var (
>>>>     clients       []fasthttp.Client
>>>>     total_clients int
>>>>     max_urls      int
>>>> )
>>>>
>>>> func init() {
>>>>     max_urls = 30
>>>>     clients = append(clients, create_client())
>>>> }
>>>>
>>>> func create_client() fasthttp.Client {
>>>>     return fasthttp.Client{
>>>>         Dial: func(addr string) (net.Conn, error) {
>>>>             var dialer = net.Dialer{}
>>>>             return dialer.Dial("tcp", addr)
>>>>         },
>>>>         MaxIdleConnDuration: 30 * time.Second,
>>>>         MaxConnsPerHost:     2024,
>>>>         //ReadTimeout: 1*time.Second,
>>>>     }
>>>> }
>>>>
>>>> func randInt(min int, max int) int {
>>>>     rand.Seed(int64(time.Now().Nanosecond()))
>>>>     return min + rand.Intn(max-min)
>>>> }
>>>>
>>>> func asyncHttpGets(urls []string) []string {
>>>>     ch := make(chan string, max_urls)
>>>>     var responses []string
>>>>
>>>>     cl := 0
>>>>     for i := 0; i <= max_urls; i++ {
>>>>         url := urls[i]
>>>>         go func(url string) {
>>>>             req := fasthttp.AcquireRequest()
>>>>             req.SetRequestURI(url)
>>>>             req.Header.Add("Connection", "keep-alive")
>>>>             resp := fasthttp.AcquireResponse()
>>>>             clients[cl].DoTimeout(req, resp, 1*time.Second)
>>>>
>>>>             bodyBytes := resp.Body()
>>>>             ch <- string(bodyBytes)
>>>>
>>>>             fasthttp.ReleaseRequest(req)
>>>>             fasthttp.ReleaseResponse(resp)
>>>>         }(url)
>>>>     }
>>>>
>>>>     for {
>>>>         r := <-ch
>>>>         responses = append(responses, r)
>>>>         if len(responses) == max_urls {
>>>>             return responses
>>>>         }
>>>>     }
>>>> }
>>>>
>>>> func hello(w http.ResponseWriter, r *http.Request) {
>>>>     results := asyncHttpGets(urls)
>>>>     for _, result := range results {
>>>>         io.WriteString(w, "%s status: %s" + " " + result + "\n")
>>>>     }
>>>> }
>>>>
>>>> func main() {
>>>>     runtime.GOMAXPROCS(0)
>>>>
>>>>     server8000 := http.NewServeMux()
>>>>     server8000.HandleFunc("/", hello)
>>>>     http.ListenAndServe(":8001", server8000)
>>>> }
>>>>
>>>>
>>>> Any help is really appreciated.
>>>>
>>>
