I'm using the official Go driver for MongoDB. I'm reading a CSV file up to a 
certain number of lines, parsing the data, and then inserting it into the DB. 
Say I have 10K contacts and the readLimit is 1000: I read the CSV until I hit 
1000 lines, then do a bulk write with ordered=false, and repeat. I assumed 
this would use constant memory, since the data size for each bulk write is 
the same. However, that is not the case: the memory consumed by the bulk 
write increases significantly with the size of the CSV. Can anyone explain 
this?
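
For reference, here's a minimal sketch of the loop I'm describing (the 
readLimit constant, the contacts.csv path, and the two-column document shape 
are placeholders, not my real code):

package main

import (
	"context"
	"encoding/csv"
	"io"
	"log"
	"os"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

const readLimit = 1000 // CSV lines per bulk write

func main() {
	ctx := context.Background()

	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(ctx)
	coll := client.Database("test").Collection("contacts")

	f, err := os.Open("contacts.csv")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	r := csv.NewReader(f)

	models := make([]mongo.WriteModel, 0, readLimit)
	for {
		rec, err := r.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		// Parse one CSV line into a document (column layout assumed here).
		doc := bson.D{
			{Key: "name", Value: rec[0]},
			{Key: "email", Value: rec[1]},
		}
		models = append(models, mongo.NewInsertOneModel().SetDocument(doc))

		if len(models) == readLimit {
			// Unordered bulk write: the server can keep going past
			// individual failures instead of stopping at the first one.
			opts := options.BulkWrite().SetOrdered(false)
			if _, err := coll.BulkWrite(ctx, models, opts); err != nil {
				log.Fatal(err)
			}
			models = models[:0] // reuse the slice for the next batch
		}
	}
	// Flush the final partial batch, if any.
	if len(models) > 0 {
		if _, err := coll.BulkWrite(ctx, models, options.BulkWrite().SetOrdered(false)); err != nil {
			log.Fatal(err)
		}
	}
}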

Here's some data I collected, with batchSize = 1000:

10K - 14 MB
20K - 30 MB
30K - 59 MB
40K - 137 MB
50K - 241 MB
