zeroshade commented on issue #506:
URL: https://github.com/apache/arrow-go/issues/506#issuecomment-3321356143

   > So, I believe somewhere library is using the default allocator.
   
   The default internal buffer pool uses the default allocator, as there isn't 
currently a good way to pass an allocator down to it. If the memory use is 
increasing linearly, then there are likely buffers that aren't being released 
back to the pool. 
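
   One way to confirm whether the growth is flowing through an allocator you 
control (as opposed to the internal pool) is to wrap it in the library's 
`CheckedAllocator`, which tracks outstanding bytes. A minimal sketch:

```go
package main

import (
	"fmt"

	"github.com/apache/arrow-go/v18/arrow/memory"
)

func main() {
	// Wrap an allocator so every Allocate/Free is tracked; any buffer
	// that is never released shows up as a non-zero CurrentAlloc.
	mem := memory.NewCheckedAllocator(memory.NewGoAllocator())

	buf := memory.NewResizableBuffer(mem)
	buf.Resize(1024)
	fmt.Println("outstanding bytes:", mem.CurrentAlloc()) // 1024

	buf.Release()
	fmt.Println("outstanding bytes:", mem.CurrentAlloc()) // 0
}
```

   If `CurrentAlloc` stays flat while the process RSS keeps growing, the leak 
is likely on the internal-pool side rather than in buffers you own.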
   
   The leak is likely because the buffer pool never calls `Release` on the 
buffers it holds, so that memory is never returned to the allocator. We might 
need a better pool of some kind; I'll look into it.
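
   For illustration, one possible shape for such a pool is a bounded pool that 
releases any buffer it can't hold onto, so overflow memory goes back to the 
allocator instead of accumulating. This is a hypothetical sketch, not the 
library's actual internal pool:

```go
package bufpool

import "github.com/apache/arrow-go/v18/arrow/memory"

// BoundedBufferPool is a hypothetical bounded pool: buffers returned
// when the pool is full are released instead of being held forever.
type BoundedBufferPool struct {
	mem  memory.Allocator
	pool chan *memory.Buffer
}

func New(mem memory.Allocator, size int) *BoundedBufferPool {
	return &BoundedBufferPool{mem: mem, pool: make(chan *memory.Buffer, size)}
}

// Get returns a pooled buffer if one is available, else a fresh one.
func (p *BoundedBufferPool) Get() *memory.Buffer {
	select {
	case buf := <-p.pool:
		return buf
	default:
		return memory.NewResizableBuffer(p.mem)
	}
}

// Put returns a buffer to the pool; on overflow the buffer is released
// immediately so its memory goes back to the allocator.
func (p *BoundedBufferPool) Put(buf *memory.Buffer) {
	buf.ResizeNoShrink(0) // keep capacity, drop logical length
	select {
	case p.pool <- buf:
	default:
		buf.Release()
	}
}
```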
   
   That said, my earlier statements still hold in this particular case: 
according to pprof, the bulk of the memory usage and in-use objects are the 
column metadata objects.
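
   For anyone who wants to reproduce that measurement, a heap snapshot via the 
standard library is enough (plain Go tooling, nothing arrow-go specific); 
inspect the result with `go tool pprof -inuse_objects heap.out`:

```go
package main

import (
	"os"
	"runtime"
	"runtime/pprof"
)

// dumpHeapProfile writes a heap profile showing which allocations
// (e.g. column metadata objects) dominate in-use memory.
func dumpHeapProfile(path string) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	runtime.GC() // refresh heap statistics before snapshotting
	return pprof.WriteHeapProfile(f)
}
```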
   
   > If I close the file frequently,will result in many small files.
   
   How small are you talking about? As I said earlier, based on your estimated 
record sizes, cutting a file off after 1,000,000 rows should come out to a bit 
over 1.5GB before compression and encoding. A bunch of files over a gigabyte 
each is definitely not "many small files". So when you say closing the file 
more frequently "will result in many small files", how small are the files 
you're actually seeing?
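
   For reference, the arithmetic: assuming roughly 1.6KB per record (the 
estimate from earlier in the thread, not restated here), 1,000,000 rows × 
~1.6KB ≈ 1.6GB, i.e. a bit over 1.5GB uncompressed. And for concreteness, a 
hedged sketch of rotating files at a row threshold with the `pqarrow` writer; 
the `rotatingWriter` type, the threshold constant, and the file naming are all 
made up for illustration:

```go
package main

import (
	"fmt"
	"os"

	"github.com/apache/arrow-go/v18/arrow"
	"github.com/apache/arrow-go/v18/parquet"
	"github.com/apache/arrow-go/v18/parquet/pqarrow"
)

const maxRowsPerFile = 1_000_000 // rotation threshold, assumed from the discussion

// rotatingWriter starts a new parquet file once maxRowsPerFile rows
// have been written to the current one.
type rotatingWriter struct {
	schema *arrow.Schema
	fw     *pqarrow.FileWriter
	file   *os.File
	rows   int64
	seq    int
}

func (w *rotatingWriter) Write(rec arrow.Record) error {
	if w.fw == nil || w.rows >= maxRowsPerFile {
		if err := w.rotate(); err != nil {
			return err
		}
	}
	if err := w.fw.Write(rec); err != nil {
		return err
	}
	w.rows += rec.NumRows()
	return nil
}

func (w *rotatingWriter) rotate() error {
	if w.fw != nil {
		if err := w.fw.Close(); err != nil { // flushes row groups and the footer
			return err
		}
		w.file.Close() // best-effort; harmless if the writer closed it already
	}
	f, err := os.Create(fmt.Sprintf("out-%04d.parquet", w.seq))
	if err != nil {
		return err
	}
	w.seq++
	fw, err := pqarrow.NewFileWriter(w.schema, f,
		parquet.NewWriterProperties(), pqarrow.DefaultWriterProps())
	if err != nil {
		return err
	}
	w.file, w.fw, w.rows = f, fw, 0
	return nil
}
```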

