On 2012-10-13 01:26, Sean Kelly wrote:

Here are my results:


$ dmd -release -inline -O dtest
$ ll input.txt
-rw-r--r--  1 sean  staff  365105313 Oct 12 15:50 input.txt
$ time dtest

real  1m36.462s
user  1m32.468s
sys   0m1.102s


Then I ran my SAX-style parser example on the same input file:


$ make example
cc example.c -o example lib/release/myparser.a
$ time example

real  0m2.191s
user 0m1.944s
sys   0m0.241s


So clearly the problem isn't parsing JSON in general but rather generating an 
object tree for a large input stream.  Note that the D app used gigabytes of 
memory to process this file--I believe the total VM footprint was around 3.5 
GB--while my app used a fixed amount roughly equal to the size of the input 
file.  In short, DOM-style parsers are great for small data and terrible for 
large data.

I tried the JSON parser in Tango, using D2. These are the results I got for a file just below 360 MB:

real    1m2.848s
user    0m58.321s
sys     0m1.423s

Since the XML parser in Tango is so fast, I expected more from the JSON parser as well. But I have no idea what kind of parser it uses.

--
/Jacob Carlborg
