* thatipelli santhosh <santhoshthatipelli...@gmail.com> [191015 08:07]:
> Hi Marvin,
> 
> I am curious to know what happens if we add roughly 100,000 entries to
> a slice (we don't know the exact number of entries in advance).  The
> slice will grow based on the size of our data and of the underlying
> array, but doing this creates multiple underlying arrays (more memory
> use and more processing).  In such a case, what is the optimum
> solution?

The built-in append function does not simply allocate a new backing
array with the exact size needed for the new slice; it allocates a
backing array with extra capacity.  It uses some heuristics to give a
reasonable speed/memory-usage trade-off for "typical" cases.  The
current algorithm
can be found in src/runtime/slice.go in the function growslice.  Note
that the specific algorithm is not covered by the Go compatibility
guarantee.
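
You can see the extra capacity in action with a tiny program like this
(just a sketch; the exact capacity you get depends on the Go version you
run it with):

package main

import "fmt"

func main() {
	// Start with length == capacity so the next append is forced to
	// allocate a new backing array.
	s := make([]int, 4, 4)
	fmt.Println("before append: len =", len(s), "cap =", cap(s))

	s = append(s, 42)

	// The new capacity is larger than the 5 elements strictly needed;
	// the exact value is decided by the runtime's growth heuristic.
	fmt.Println("after append:  len =", len(s), "cap =", cap(s))
}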

If you are trying to be practical, you should start by implementing the
obvious and straightforward algorithm first (i.e. use append as it was
intended), and then profile your code (the whole application, not just
the code that grows the slice) to determine where optimization will be
useful.  It is very unlikely that the calls to append will be worth
spending time improving.

If this is an academic question, start the same way, but how you decide
where to spend time analyzing the results may be different.

In an application that is only going to have a slice of about 100,000
integers, the cost of append is unlikely to be of any concern at all on
any real production machine.

If it turns out that the append is a bottleneck (memory or speed), you
can write your own Append function by starting with the AppendByte
function from the go-slices-usage-and-internals blog that I linked to in
my previous message.  Do the obvious fix-ups for byte to int (or
whatever your data type is), and modify the algorithm that determines
the capacity of the new slice (the (n+1)*2 in that function).  Then use
your Append function instead of the built-in append.  (Again, note that
this simple (n+1)*2 is _not_ what the built-in append function uses.)
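
For example, an int version might look like this (a sketch adapted from
that blog's AppendByte; the (n+1)*2 growth rule is kept only for
illustration and, again, is not what the built-in append uses):

// AppendInt is adapted from the blog's AppendByte, with byte changed
// to int.  Replace the (n+1)*2 growth rule with whatever capacity
// policy your profiling suggests.
func AppendInt(slice []int, data ...int) []int {
	m := len(slice)
	n := m + len(data)
	if n > cap(slice) {
		// Not enough room: allocate a larger backing array and
		// copy the existing elements into it.
		newSlice := make([]int, (n+1)*2)
		copy(newSlice, slice)
		slice = newSlice
	}
	slice = slice[0:n]
	copy(slice[m:], data)
	return slice
}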

A very simple first optimization (without resorting to your own Append
function) is to initially create the slice using make([]int, 0,
initialCapacity) where you choose initialCapacity to be larger than the
expected maximum size by some arbitrary percentage (e.g. 25%).
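
For example (a sketch assuming you expect roughly 100,000 elements; the
25% headroom is just the arbitrary percentage mentioned above):

package main

import "fmt"

func main() {
	// Expect about 100,000 elements; reserve ~25% extra capacity up
	// front so append never has to reallocate if that estimate holds.
	const expectedMax = 100000
	data := make([]int, 0, expectedMax+expectedMax/4)

	for i := 0; i < expectedMax; i++ {
		data = append(data, i)
	}

	// cap stays at the 125,000 we asked for; no reallocation happened.
	fmt.Println("len =", len(data), "cap =", cap(data))
}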

This playground link «https://play.golang.org/p/qtfHaoK-6Hy» shows
exactly where the allocations will occur.  It took exactly 28
allocations to reach 100,000 elements.  To show how insignificant that
is, on my laptop it took only 11ms to run that program.  Is that 11ms
really significant in your program?
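
A program along those lines (just a sketch, not the exact playground
code; the counts and timing you see will vary with your Go version and
machine) can count the reallocations itself:

package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()

	var s []int
	allocs := 0
	lastCap := cap(s)

	for i := 0; i < 100000; i++ {
		s = append(s, i)
		// A capacity change means append allocated a new backing
		// array and copied the old elements over.
		if cap(s) != lastCap {
			allocs++
			lastCap = cap(s)
		}
	}

	fmt.Println("reallocations:", allocs)
	fmt.Println("elapsed:", time.Since(start))
}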

...Marvin
