FWIW, I found this thread: https://groups.google.com/g/golang-nuts/c/QFiyvup5UuY/m/xBalv9Y3CAAJ which says experimentally (albeit without showing code or results) that append(S(nil), s...) and append(s[:0:0], s...) are faster than some other methods of cloning a slice when s is "large".
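If anyone wants to reproduce that, a benchmark along these lines should do; to be clear, the element type, the 1<<20 length and the particular idioms compared are my own assumptions, not taken from the linked thread. Save it in a file ending in _test.go and run go test -bench=. -benchmem:

package clonebench

import (
	"slices"
	"testing"
)

// An arbitrary stand-in for a "large" slice (~8 MiB of int64s).
var src = make([]int64, 1<<20)

// Assigning to a package-level sink keeps the compiler from optimising
// the clones away.
var sink []int64

func BenchmarkAppendNil(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = append([]int64(nil), src...)
	}
}

func BenchmarkAppendZeroCap(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = append(src[:0:0], src...)
	}
}

func BenchmarkMakeCopy(b *testing.B) {
	for i := 0; i < b.N; i++ {
		dst := make([]int64, len(src))
		copy(dst, src)
		sink = dst
	}
}

func BenchmarkSlicesClone(b *testing.B) {
	for i := 0; i < b.N; i++ {
		sink = slices.Clone(src)
	}
}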
On Thursday 26 September 2024 at 14:55:22 UTC+1 Brian Candler wrote:

> > I want to know the reason behind the decision of using *append(s[:0:0], s...)* over the previous code
>
> It would be helpful to identify specifically the "previous code" you're comparing against. Looking in git history I find this commit from about a year ago:
>
> commit b581e447394b4ba7a08ea64b214781cae0f4ef6c
> Author: Brad Fitzpatrick <brad...@golang.org>
> Date:   Sat Aug 19 09:08:38 2023 -0700
>
>     slices: simplify Clone a bit
>
>     No need for an explicit nil check. Slicing the input slice
>     down to zero capacity also preserves nil.
>
>     Change-Id: I1f53cc485373d0e65971cd87b6243650ac72612c
>     Reviewed-on: https://go-review.googlesource.com/c/go/+/521037
>     Run-TryBot: Brad Fitzpatrick <brad...@golang.org>
>     Reviewed-by: Dmitri Shuralyov <dmit...@google.com>
>     TryBot-Result: Gopher Robot <go...@golang.org>
>     Reviewed-by: Ian Lance Taylor <ia...@google.com>
>
> diff --git a/src/slices/slices.go b/src/slices/slices.go
> index a4d9f7e3f5..252a8eecfc 100644
> --- a/src/slices/slices.go
> +++ b/src/slices/slices.go
> @@ -333,11 +333,8 @@ func Replace[S ~[]E, E any](s S, i, j int, v ...E) S {
>  // Clone returns a copy of the slice.
>  // The elements are copied using assignment, so this is a shallow clone.
>  func Clone[S ~[]E, E any](s S) S {
> -	// Preserve nil in case it matters.
> -	if s == nil {
> -		return nil
> -	}
> -	return append(S([]E{}), s...)
> +	// The s[:0:0] preserves nil in case it matters.
> +	return append(s[:0:0], s...)
>  }
>
> Is that the change you're referring to?
>
> The comment says that "slicing the input slice down to zero capacity also preserves nil", which I confirm: https://go.dev/play/p/W21qUffeSpg
>
> Therefore, it's an explicit goal of the code to preserve nilness ("in case it matters"), which your alternative of *append(S(nil), s...)* would not do. I do agree that for most practical purposes a nil slice and a zero-length, zero-capacity slice are more or less interchangeable, but it *is* possible to distinguish them: https://go.dev/play/p/Irxuq6pbv4X
> ... and therefore some code might depend on this (perhaps a serialization library?). It's user-visible, so it's safest to clone like with like.
>
> Apart from that, your issue seems to be: cloning an empty slice with the new code keeps a reference to the original slice's backing array, albeit with zero len and cap. It can never overwrite the original backing array, but it *can* prevent the original backing array from being freed. Is that a correct summary?
>
> It seems to me that it would be pretty perverse to take a large slice, slice it down to zero len, and then ask for it to be cloned; your example doesn't seem like a real-world use case.
>
> On Thursday 26 September 2024 at 13:29:59 UTC+1 Hikmatulloh Hari Mukti (Hari) wrote:
>
>> Hi gophers, I want to know the reason behind the decision of using *append(s[:0:0], s...)* over the previous code, since the two return different slices when dealing with a slice of zero len. The previous code will return a brand-new slice with size zero, while the current code returns an empty slice that's still pointing to the previous array. And also, why not use *append(S(nil), s...)* instead? This will return nil when dealing with a zero-len slice though, but what's the potential problem that it will cause?
>>
>> I don't know if this can be considered a problem, but here is my concern for the current code, *append(s[:0:0], s...)*:
>>
>> If we try to create slices from an array pool to reduce allocation by using append, and many of our slices turn out to be zero-length, slices.Clone will return a slice that still points to an array in the pool. If we try creating many of them concurrently, (if I understand it correctly) the pool may end up creating many array objects, because the objects retrieved from Get may not have been Put back to the pool yet. Those array objects can only be garbage-collected after those slices are no longer used / reachable, and if it's an array of a big struct, wouldn't that potentially put pressure on memory?
>>
>> Here is just pseudo-code for illustration only. I think the arrays generated by the pool will only be garbage-collected once *ch* is consumed and the slices are no longer used:
>>
>> var pool = sync.Pool{New: func() any { return &[255]bigstruct{} }}
>> var ch = make(chan []bigstruct, 1000)
>>
>> for i := 0; i < 1000; i++ {
>> 	go func() {
>> 		arr := pool.Get().(*[255]bigstruct)
>> 		defer pool.Put(arr)
>> 		s := arr[:0]
>> 		ch <- slices.Clone(s) // clone has len 0, cap 0, but still points to arr
>> 	}()
>> }
>>
>> CMIIW and thank you!
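For anyone who wants to see the behaviour being discussed above, here is a small sketch of my own (not from either message); note that unsafe.SliceData's result for a zero-capacity slice is formally unspecified, so the last line only shows what the current implementation happens to do:

package main

import (
	"fmt"
	"slices"
	"unsafe"
)

func main() {
	// Nil-ness is preserved: cloning nil gives nil, cloning a non-nil
	// empty slice gives a non-nil empty slice.
	var n []int
	e := []int{}
	fmt.Println(slices.Clone(n) == nil, slices.Clone(e) == nil) // true false

	// The clone of a zero-length view of a large slice has len 0 and cap 0...
	big := make([]byte, 1<<20)
	clone := slices.Clone(big[:0])
	fmt.Println(len(clone), cap(clone)) // 0 0

	// ...but its data pointer can still reference big's backing array,
	// which keeps that array reachable for as long as the clone is.
	fmt.Println(unsafe.SliceData(clone) == unsafe.SliceData(big)) // true (today)
}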