We are currently using ArangoDB as a migration database (100,000 JSON 
files, 50 GB of data; about 25% of the JSON files contain base64-encoded 
images, PDF files, etc.).
I wrote a custom import script for the data using pyArango. It takes about 
90 minutes for the import, processing one JSON file at a time...working 
nicely so far.
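For reference, the script is essentially a sequential loop like the sketch 
below (simplified; the connection details, database and collection names 
here are just placeholders, not our real setup):

    import json
    from pathlib import Path

    from pyArango.connection import Connection

    # Placeholder credentials and names -- the real script uses our own.
    conn = Connection(arangoURL="http://127.0.0.1:8529",
                      username="root", password="...")
    collection = conn["migration"]["documents"]

    # One JSON file at a time, one HTTP request per document.
    for path in sorted(Path("/data/export").glob("*.json")):
        with open(path, "r", encoding="utf-8") as f:
            data = json.load(f)
        doc = collection.createDocument(data)
        doc.save()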
Question: would it make sense to parallelize the import in order to speed 
it up? Or is ArangoDB's performance CPU/IO bound for such mass imports 
anyway?
We are running a standard standalone installation of ArangoDB 3.4.5 on a 
local SSD...no fancy setup.
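To make the question concrete: the kind of parallel import I have in mind 
is roughly the untested sketch below. Each worker opens its own connection 
and pushes a batch of files with pyArango's bulkSave; the batch size and 
worker count are guesses that would need tuning.

    import json
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    from pyArango.connection import Connection

    ARANGO_URL = "http://127.0.0.1:8529"   # placeholder
    BATCH_SIZE = 200                       # files per bulk request (guess)
    WORKERS = 4                            # parallel importers (guess)

    def import_batch(paths):
        # One connection per batch, since sharing a pyArango connection
        # across threads is probably not safe.
        conn = Connection(arangoURL=ARANGO_URL,
                          username="root", password="...")
        collection = conn["migration"]["documents"]   # placeholder names
        docs = []
        for p in paths:
            with open(p, "r", encoding="utf-8") as f:
                docs.append(json.load(f))
        # Send the whole batch in a single bulk import request.
        collection.bulkSave(docs)

    def chunks(items, size):
        for i in range(0, len(items), size):
            yield items[i:i + size]

    if __name__ == "__main__":
        files = sorted(Path("/data/export").glob("*.json"))
        with ThreadPoolExecutor(max_workers=WORKERS) as pool:
            list(pool.map(import_batch, chunks(files, BATCH_SIZE)))

Since the workers mostly wait on HTTP/disk, threads should be enough to 
overlap the round trips on the client side; whether the server then becomes 
CPU or IO bound is exactly what I am unsure about.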

Andreas
