[go-nuts] Re: Memory Leak in bytes.Buffer String() method

2020-08-30 Thread Sarath Prabath Redlapalli Jaya
The memory usage of the containers is ever increasing and has grown from 100 
MB to ~1 GB in a couple of hours. I'm not able to understand where the gap is, 
even though the builder is being reset. Would keeping a sync.Pool of builders 
help here?
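
A minimal sketch of what that could look like (illustrative names, not from the
real code): the usual pattern pools bytes.Buffer rather than strings.Builder,
because strings.Builder.Reset has to drop its underlying array (the string
returned by String() aliases it), so a pooled Builder keeps no capacity, while
bytes.Buffer retains its capacity across Reset.

package placeholderpool // illustrative package name

import (
	"bytes"
	"sync"
)

// bufPool hands out reusable buffers so each query rewrite does not have to
// regrow its own buffer. The final buf.String() still copies the bytes into a
// new string, which is the allocation pprof attributes to bytes.(*Buffer).String.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func getBuffer() *bytes.Buffer { return bufPool.Get().(*bytes.Buffer) }

func putBuffer(buf *bytes.Buffer) {
	buf.Reset() // drop contents, keep capacity for the next caller
	bufPool.Put(buf)
}

Whether that actually helps depends on whether the heap is dominated by buffer
growth or by the returned strings being retained elsewhere.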

Regards

On Sunday, August 30, 2020 at 8:55:15 PM UTC+5:30 Sarath Prabath Redlapalli 
Jaya wrote:

> Hi, we've been observing high memory usage in the following code
> https://github.com/Masterminds/squirrel/blob/master/placeholder.go#L113
>
> We measured this with a pprof heap profile:
>
> (pprof) top20
>
> Showing nodes accounting for 360.07MB, 91.31% of 394.32MB total
>
> Dropped 142 nodes (cum <= 1.97MB)
>
> Showing top 20 nodes out of 139
>
>       flat  flat%   sum%        cum   cum%
>   101.21MB 25.67% 25.67%   101.21MB 25.67%  bytes.(*Buffer).String
>    48.01MB 12.18% 37.84%    48.01MB 12.18%  github.com/newrelic/go-agent/v3/newrelic.(*spanAttributeMap).add
>    42.61MB 10.81% 48.65%    43.11MB 10.93%  github.com/newrelic/go-agent/v3/newrelic.(*txnTrace).witnessNode
>    38.54MB  9.77% 58.42%    38.54MB  9.77%  go.uber.org/zap/buffer.NewPool.func1
>       19MB  4.82% 63.24%       19MB  4.82%  github.com/newrelic/go-agent/v3/newrelic.segmentEnd.spanEvent
>    16.51MB  4.19% 67.43%    91.04MB 23.09%  github.com/newrelic/go-agent/v3/newrelic.endDatastoreSegment
>    14.01MB  3.55% 70.98%    18.01MB  4.57%  github.com/newrelic/go-agent/v3/newrelic.newTxn
>    13.51MB  3.43% 74.41%    93.13MB 23.62%  github.com/newrelic/go-agent/v3/newrelic.endExternalSegment
>    12.01MB  3.05% 77.45%    12.01MB  3.05%  github.com/newrelic/go-agent/v3/newrelic.getStackTrace
>    10.51MB  2.66% 80.12%    10.51MB  2.66%  github.com/newrelic/go-agent/v3/newrelic.(*slowQueries).insertAtIndex
>        8MB  2.03% 82.15%        8MB  2.03%  github.com/newrelic/go-agent/v3/newrelic.newSlowQueries
>     6.51MB  1.65% 85.73%     6.51MB  1.65%  github.com/DataDog/datadog-go/statsd.newStatsdBuffer
>        4MB  1.01% 86.75%    52.01MB 13.19%  github.com/newrelic/go-agent/v3/newrelic.(*spanAttributeMap).addString
>     3.50MB  0.89% 87.64%     3.50MB  0.89%  go.uber.org/zap.(*Logger).clone
>     3.50MB  0.89% 88.52%     3.50MB  0.89%  github.com/newrelic/go-agent/v3/newrelic.externalMetricKey.scopedMetric
>        3MB  0.76% 89.28%        3MB  0.76%  github.com/newrelic/go-agent/v3/newrelic.addUserAttribute
>        3MB  0.76% 90.05%        3MB  0.76%  strings.(*Builder).WriteString
>     2.50MB  0.63% 90.68%     6.06MB  1.54%  encoding/json.Unmarshal
>     2.50MB  0.63% 91.31%     2.50MB  0.63%  context.(*cancelCtx).Done
>
> =
>
> Resetting the buffer with a defer didn't help either, so we moved to a 
> strings.Builder, like this:
>
> func replacePositionalPlaceholders(sql, prefix string) (string, error) {
> 	buf := &strings.Builder{}
> 	defer buf.Reset()
> 
> 	i := 0
> 	for {
> 		p := strings.Index(sql, "?")
> 		if p == -1 {
> 			break
> 		}
> 
> 		if len(sql[p:]) > 1 && sql[p:p+2] == "??" { // escape ?? => ?
> 			buf.WriteString(sql[:p])
> 			buf.WriteString("?")
> 			if len(sql[p:]) == 1 {
> 				break
> 			}
> 			sql = sql[p+2:]
> 		} else {
> 			i++
> 			buf.WriteString(sql[:p])
> 			fmt.Fprintf(buf, "%s%d", prefix, i)
> 			sql = sql[p+1:]
> 		}
> 	}
> 
> 	buf.WriteString(sql)
> 	return buf.String(), nil
> }
>
> ==
>
> This didn't help either, as the results below show:
>
> (pprof) top20
>
> Showing nodes accounting for 220.09MB, 90.37% of 243.55MB total
>
> Dropped 126 nodes (cum <= 1.22MB)
>
> Showing top 20 nodes out of 135
>
>       flat  flat%   sum%        cum   cum%
>    33.97MB 13.95% 13.95%    33.97MB 13.95%  strings.(*Builder).Write
>    29.07MB 11.94% 25.89%    29.57MB 12.14%  github.com/newrelic/go-agent/v3/newrelic.(*txnTrace).witnessNode
>    28.01MB 11.50% 37.39%    28.01MB 11.50%  github.com/newrelic/go-agent/v3/newrelic.(*spanAttributeMap).add
>    24.34MB  9.99% 47.38%    24.34MB  9.99%  strings.(*Builder).WriteString
>    18.52MB  7.60% 54.98%    18.52MB  7.60%  go.uber.org/zap/buffer.NewPool.func1
>    11.01MB  4.52% 59.50%    60.59MB 24.88%  github.com/newrelic/go-agent/v3/newrelic.endExternalSegment
>    10.01MB  4.11% 63.61%    13.51MB  5.55%  github.com/newrelic/go-agent/v3/newrelic.newTxn
>       10MB  4.11% 67.72%       10MB  4.11%  github.com/newrelic/go-agent/v3/newrelic.segmentEnd.spanEvent
>        7MB  2.88% 73.73%        7MB  2.88%  github.com/newrelic/go-agent/v3/newrelic.(*slowQueries).insertAtIndex
>
>  

[go-nuts] Memory Leak in bytes.Buffer String() method

2020-08-30 Thread Sarath Prabath Redlapalli Jaya
Hi, we've been observing high memory usage in the following code
https://github.com/Masterminds/squirrel/blob/master/placeholder.go#L113

We measured this with a pprof heap profile:

(pprof) top20

Showing nodes accounting for 360.07MB, 91.31% of 394.32MB total

Dropped 142 nodes (cum <= 1.97MB)

Showing top 20 nodes out of 139

      flat  flat%   sum%        cum   cum%
  101.21MB 25.67% 25.67%   101.21MB 25.67%  bytes.(*Buffer).String
   48.01MB 12.18% 37.84%    48.01MB 12.18%  github.com/newrelic/go-agent/v3/newrelic.(*spanAttributeMap).add
   42.61MB 10.81% 48.65%    43.11MB 10.93%  github.com/newrelic/go-agent/v3/newrelic.(*txnTrace).witnessNode
   38.54MB  9.77% 58.42%    38.54MB  9.77%  go.uber.org/zap/buffer.NewPool.func1
      19MB  4.82% 63.24%       19MB  4.82%  github.com/newrelic/go-agent/v3/newrelic.segmentEnd.spanEvent
   16.51MB  4.19% 67.43%    91.04MB 23.09%  github.com/newrelic/go-agent/v3/newrelic.endDatastoreSegment
   14.01MB  3.55% 70.98%    18.01MB  4.57%  github.com/newrelic/go-agent/v3/newrelic.newTxn
   13.51MB  3.43% 74.41%    93.13MB 23.62%  github.com/newrelic/go-agent/v3/newrelic.endExternalSegment
   12.01MB  3.05% 77.45%    12.01MB  3.05%  github.com/newrelic/go-agent/v3/newrelic.getStackTrace
   10.51MB  2.66% 80.12%    10.51MB  2.66%  github.com/newrelic/go-agent/v3/newrelic.(*slowQueries).insertAtIndex
       8MB  2.03% 82.15%        8MB  2.03%  github.com/newrelic/go-agent/v3/newrelic.newSlowQueries
    6.51MB  1.65% 85.73%     6.51MB  1.65%  github.com/DataDog/datadog-go/statsd.newStatsdBuffer
       4MB  1.01% 86.75%    52.01MB 13.19%  github.com/newrelic/go-agent/v3/newrelic.(*spanAttributeMap).addString
    3.50MB  0.89% 87.64%     3.50MB  0.89%  go.uber.org/zap.(*Logger).clone
    3.50MB  0.89% 88.52%     3.50MB  0.89%  github.com/newrelic/go-agent/v3/newrelic.externalMetricKey.scopedMetric
       3MB  0.76% 89.28%        3MB  0.76%  github.com/newrelic/go-agent/v3/newrelic.addUserAttribute
       3MB  0.76% 90.05%        3MB  0.76%  strings.(*Builder).WriteString
    2.50MB  0.63% 90.68%     6.06MB  1.54%  encoding/json.Unmarshal
    2.50MB  0.63% 91.31%     2.50MB  0.63%  context.(*cancelCtx).Done

=

Resetting the buffer with a defer didn't help either, so we moved to a 
strings.Builder, like this:

func replacePositionalPlaceholders(sql, prefix string) (string, error) {
	buf := &strings.Builder{}
	defer buf.Reset()

	i := 0
	for {
		p := strings.Index(sql, "?")
		if p == -1 {
			break
		}

		if len(sql[p:]) > 1 && sql[p:p+2] == "??" { // escape ?? => ?
			buf.WriteString(sql[:p])
			buf.WriteString("?")
			if len(sql[p:]) == 1 {
				break
			}
			sql = sql[p+2:]
		} else {
			i++
			buf.WriteString(sql[:p])
			fmt.Fprintf(buf, "%s%d", prefix, i)
			sql = sql[p+1:]
		}
	}

	buf.WriteString(sql)
	return buf.String(), nil
}
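
A quick allocation benchmark (a rough sketch; the sample query is made up, and
it assumes the function above sits in the same package) makes it easy to compare
allocs/op between the bytes.Buffer and strings.Builder versions:

package squirrel // assumption: same package as replacePositionalPlaceholders

import "testing"

// Made-up sample query exercising both the ?? escape and normal placeholders.
const sampleSQL = "SELECT * FROM users WHERE note LIKE '??' AND id = ? AND status = ?"

// Run with: go test -bench=ReplacePositionalPlaceholders -benchmem
func BenchmarkReplacePositionalPlaceholders(b *testing.B) {
	b.ReportAllocs()
	for n := 0; n < b.N; n++ {
		if _, err := replacePositionalPlaceholders(sampleSQL, "$"); err != nil {
			b.Fatal(err)
		}
	}
}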

==

This didn't help either, as the results below show:

(pprof) top20

Showing nodes accounting for 220.09MB, 90.37% of 243.55MB total

Dropped 126 nodes (cum <= 1.22MB)

Showing top 20 nodes out of 135

      flat  flat%   sum%        cum   cum%
   33.97MB 13.95% 13.95%    33.97MB 13.95%  strings.(*Builder).Write
   29.07MB 11.94% 25.89%    29.57MB 12.14%  github.com/newrelic/go-agent/v3/newrelic.(*txnTrace).witnessNode
   28.01MB 11.50% 37.39%    28.01MB 11.50%  github.com/newrelic/go-agent/v3/newrelic.(*spanAttributeMap).add
   24.34MB  9.99% 47.38%    24.34MB  9.99%  strings.(*Builder).WriteString
   18.52MB  7.60% 54.98%    18.52MB  7.60%  go.uber.org/zap/buffer.NewPool.func1
   11.01MB  4.52% 59.50%    60.59MB 24.88%  github.com/newrelic/go-agent/v3/newrelic.endExternalSegment
   10.01MB  4.11% 63.61%    13.51MB  5.55%  github.com/newrelic/go-agent/v3/newrelic.newTxn
      10MB  4.11% 67.72%       10MB  4.11%  github.com/newrelic/go-agent/v3/newrelic.segmentEnd.spanEvent
       7MB  2.88% 73.73%        7MB  2.88%  github.com/newrelic/go-agent/v3/newrelic.(*slowQueries).insertAtIndex
       7MB  2.88% 76.60%    50.02MB 20.54%  github.com/newrelic/go-agent/v3/newrelic.endDatastoreSegment
    6.51MB  2.67% 79.28%     6.51MB  2.67%  github.com/DataDog/datadog-go/statsd.newStatsdBuffer
       6MB  2.46% 81.74%     6.50MB  2.67%  github.com/newrelic/go-agent/v3/newrelic.addUserAttribute
    5.50MB  2.26% 84.00%     5.50MB  2.26%  github.com/newrelic/go-agent/v3/newrelic.getStackTrace
       3MB  1.23% 85.23%        3MB  1.23%  github.com/newrelic/go-agent/v3/newrelic.newSlowQueries
       3MB  1.23% 86.47%        3MB  1.23%  go.uber.org/zap.(*Logger).clone
       3MB  1.23% 87.70%        3MB  1.23%  github.com/newrelic/go-agent/v3/newrelic.externalMetricKey.scopedMetric
    2.50MB  1.03% 88.72%    30.51MB 12.53%  github.com/newrelic/go-agent/v3/newrelic.(*spanAttributeMap).addString
       2MB  0.82% 90.37%        2MB  0.82%  encoding/hex.EncodeToString

(pprof) peek Write

Showing nodes accounting for 243.55MB, 100% of 243.55MB total

----------------------------------------------------------+-------------
      flat  flat%   sum%        cum   cum%

[go-nuts] Clean Architecture in Go - Dynamic Projection

2020-08-13 Thread Sarath Prabath Redlapalli Jaya
Hi All,

We have a complex, nested domain object *User*:

type User struct {
	Profile *Profile
	// ..
	// ..
}

stored in a SQL database as the columns 
id, user_name, email, image (*Profile*), category (*Profile*).

Using Clean Architecture, with the SQL database behind an interface, how can 
we best implement querying a dynamic set of fields?
1. If we use database column names inside the service layer, it violates Clean 
Architecture.
2. To maintain the integrity of these layers, we might need to use reflection 
to figure out the column names, or is there a better way to achieve this with 
code generation? (A sketch follows below.)
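
One possible shape, as a rough sketch (all names below are illustrative, not
from our code): keep the field-to-column mapping entirely inside the repository
implementation, so the service layer only ever refers to domain field names.

package userrepo // illustrative package name

import (
	"fmt"
	"strings"
)

// Domain-level field identifiers the service layer is allowed to request.
const (
	FieldEmail    = "email"
	FieldImage    = "profile.image"
	FieldCategory = "profile.category"
)

// columnFor maps domain field names to SQL columns. It lives only in the
// persistence layer, so column names never leak into the service layer.
var columnFor = map[string]string{
	FieldEmail:    "email",
	FieldImage:    "image",
	FieldCategory: "category",
}

// selectClause builds the projection for a dynamic set of domain fields.
func selectClause(fields []string) (string, error) {
	cols := make([]string, 0, len(fields))
	for _, f := range fields {
		col, ok := columnFor[f]
		if !ok {
			return "", fmt.Errorf("unknown field %q", f)
		}
		cols = append(cols, col)
	}
	return "SELECT " + strings.Join(cols, ", ") + " FROM users", nil
}

The same mapping table could be generated from struct tags at build time
instead of maintained by hand, which avoids reflection at query time.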

Regards

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/0de24593-ab88-40dc-b529-72874920623fn%40googlegroups.com.


Re: [go-nuts] json decode is very slow

2020-04-30 Thread Sarath Prabath Redlapalli Jaya
Thanks Amnon & Hi Jesper,

You're correct. Most of the time is actually spent in *ioutil.ReadAll()*.
We're trying to debug network issues related to GCP, GKE, and the Istio
sidecar reverse proxy, but everything seems okay so far; we couldn't find
anything.

But I've performed an interesting experiment.

We tried the same API in Ruby with Sinatra by mirroring the traffic. When we
instrumented the API response times, all were <= 100 ms. Any clues on how to
debug this? I can't seem to find much on the inner workings of
http.Request.Body -> io.ReadCloser.

Also, if this is about flaky end-user connections, i.e., mobile wireless
devices etc., the connection terminates at the reverse proxy itself, so the
lag in reading the request shouldn't be observed at the application level.
Correct me if I'm wrong.
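
For what it's worth, a minimal way to split the two costs apart (a sketch; the
handler and payload type are made up) is to time the body read and the decode
separately:

package main // sketch only; handler and payload type are illustrative

import (
	"encoding/json"
	"io/ioutil"
	"log"
	"net/http"
	"time"
)

type mystruct struct{} // stand-in for the real request payload

// handle times the network-bound body read separately from the CPU-bound JSON
// decode, so the logs show which of the two dominates the 5-20 s cases.
func handle(w http.ResponseWriter, r *http.Request) {
	readStart := time.Now()
	body, err := ioutil.ReadAll(r.Body) // waits on the client / sidecar proxy
	readDur := time.Since(readStart)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	decodeStart := time.Now()
	req := &mystruct{}
	decodeErr := json.Unmarshal(body, req) // decoding only, no network
	decodeDur := time.Since(decodeStart)

	log.Printf("read=%s decode=%s bytes=%d decodeErr=%v",
		readDur, decodeDur, len(body), decodeErr)
}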

Regards

On Thu, Apr 30, 2020 at 2:34 PM Jesper Louis Andersen <
jesper.louis.ander...@gmail.com> wrote:

> On Tue, Apr 28, 2020 at 6:48 PM Sarath Prabath Redlapalli Jaya <
> harrysarath2...@gmail.com> wrote:
>
>> We've instrumented the above code block, and reading the request body
>> bytes is taking very long, i.e., up to 5 secs, sometimes reaching up to 20
>> seconds, at a throughput of ~4-5K RPM.
>>
>>
>> The Request Body Size Metrics are as follows
>>
>> Average: 73190 Bytes
>> 90th percentile: 170862 Bytes
>> 99th percentile: 467638 Bytes
>>
>>
> Let us assume we have 500 kilobytes of data. At 20 seconds processing
> time, you are looking at something like 25 bytes per millisecond, which
> seems rather low for a decent CPU. Try measuring the time it takes to
> ioutil.ReadAll from the Reader I assume you have access to. My hunch is you
> are spending most of the time there, and relatively little on decoding the
> JSON data you have. Once you've done this, you will have some more
> knowledge of where to look: the network, or the JSON decoder.
>
> If this is a web request, you should also look at how much time it takes the
> other end to feed data to the GCP process. Chances are that the problem is
> at the other end, and you are looking at a slow sender, especially if you
> have a large set of mobile devices out there with a variety of internet
> connections.
>
>

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/CADOH-f9mhKy0e3B9exCe8P3BNmgO5OMCXwGsNN1VjWSg3ofNqQ%40mail.gmail.com.


[go-nuts] json decode is very slow

2020-04-28 Thread Sarath Prabath Redlapalli Jaya


req := &mystruct{}
json.NewDecoder(r.Body).Decode(req)


We've instrumented the above code block, and reading the request body bytes is
taking very long, i.e., up to 5 secs, sometimes reaching up to 20 seconds, at a
throughput of ~4-5K RPM.

The request body size metrics are as follows:

Average: 73190 Bytes
90th percentile: 170862 Bytes
99th percentile: 467638 Bytes

We tried ioutil.ReadAll too, but the read time is the same. We're not sure why
it's so slow in production on a high-bandwidth network line.

Also, about 5% of requests to this API fail with the following errors while
decoding the JSON data:

   1. unexpected EOF
   2. Error in decoding request: read tcp 127.0.0.1:8080->127.0.0.1:48896: i/o timeout

We're using Kubernetes with Istio 1.0.5 on GCP. These errors are probably from
the Envoy sidecar, but since Envoy acts as a reverse proxy, we're not sure why
we get EOF or timeout errors, given that the end-user connection terminates at
the proxy itself.
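
One way to tell a truncated upstream connection apart from a decoder problem
(a rough sketch; the handler and payload names are made up) is to read the body
first and compare the byte count against the declared Content-Length before
decoding:

package main // sketch only; handler and payload type are illustrative

import (
	"bytes"
	"encoding/json"
	"io"
	"log"
	"net/http"
)

type mystruct struct{} // stand-in for the real request payload

func handle(w http.ResponseWriter, r *http.Request) {
	var buf bytes.Buffer
	n, err := io.Copy(&buf, r.Body)
	if err != nil || (r.ContentLength > 0 && n < r.ContentLength) {
		// The body ended early: the client or the sidecar closed or timed out
		// the upstream connection, which the JSON decoder surfaces as
		// "unexpected EOF".
		log.Printf("short body: got %d of %d bytes, err=%v", n, r.ContentLength, err)
		http.Error(w, "incomplete request body", http.StatusBadRequest)
		return
	}

	req := &mystruct{}
	if err := json.Unmarshal(buf.Bytes(), req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// ... handle req ...
}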

-- 
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/871e3230-a4f3-4469-8aaf-a8e3b6f49e73%40googlegroups.com.