Valentin Kuznetsov <vkuz...@gmail.com> added the comment:

Hi,
I just found this bug and would like to add my experience with the
performance of parsing large JSON docs. I have a few JSON docs, each
about 180MB in size, which I read from data-services. I use python2.6
on a 64-bit Linux node with 16GB of RAM and an 8-core Intel Xeon CPU
at 2.33GHz. I used both the json and cjson modules to parse my
documents. My observation is that the amount of RAM used to parse
such docs is about 2GB, which is way too much. The total time spent
is about 30 seconds (using cjson). The content of my docs is very
mixed: lists, strings, other dicts. I can provide them if required,
but they're 200MB :)
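
For the record, here is roughly how I measure it (a minimal sketch;
the file name is a placeholder for one of my docs, and ru_maxrss is
reported in KB on Linux; for cjson, replace json.load(f) with
cjson.decode(f.read())):

    import json, time, resource

    # placeholder path; any ~180MB JSON document will do
    path = "large_doc.json"

    t0 = time.time()
    with open(path) as f:
        doc = json.load(f)
    print("parse time: %.1fs" % (time.time() - t0))

    # peak resident set size of this process (KB on Linux)
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print("peak RSS: %.0f MB" % (peak / 1024.0))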

For comparison, I fetched the same data in XML, and using
cElementTree.iterparse I stay at around 300MB of RAM per doc, which
is really reasonable to me.
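
The trick with iterparse is that each element can be thrown away as
soon as it has been handled, so the full tree never sits in memory.
A minimal sketch (the file name and the "record" tag are placeholders
for my actual data):

    import xml.etree.cElementTree as ET

    count = 0
    for event, elem in ET.iterparse("large_doc.xml"):
        if elem.tag == "record":
            count += 1    # stand-in for real per-record processing
            elem.clear()  # drop the parsed subtree; keeps memory flat
    print("records: %d" % count)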

I can provide benchmarks and run such tests if required.

----------
nosy: +vkuznet

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue6594>
_______________________________________