On Sat, 31 Aug 2019, Anthony Walter wrote:

Could you include https://github.com/BeRo1985/pasjson in the comparison?

Sure. I also have a few others that people have requested. I will also list
the license of each in the first table.


[snip]

For example, if you wanted to store object state using RTTI in a JSON file,
create a separate TJsonObjectState class to handle this for you. Or if you
wanted to create a database table from a JSON file, or create a JSON file
from a database table, then again write this into its own class.

Not sure I understand what you mean.

It seems to me that in that case you will repeat your scanner/parser code all
over the place. In case of an error, you then need to fix it in just as many places.

I can of course be wrong.

The current fpjson scanner/parser can be used anywhere. You don't need to use
fpjson data structures to be able to use the scanner or reader: it's perfectly
possible to use the scanner/parser to create a TJSONNode from JSONTools.
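
Roughly like this (just an untested sketch to illustrate; the exact
constructor and option names of the scanner may differ between FPC versions):

{$mode objfpc}
{$h+}
uses jsonscanner;

Var
  S : TJSONScanner;
  T : TJSONToken;

begin
  // Drive the fpjson scanner directly; no fpjson data structures are built.
  S:=TJSONScanner.Create('{ "d": 12345678.3 }',[joUTF8]);
  try
    repeat
      T:=S.FetchToken;
      case T of
        tkString,
        tkNumber : Writeln('Value: ',S.CurTokenString);
        tkTrue,
        tkFalse,
        tkNull   : Writeln('Literal');
      end;
      // At this point you could just as well create jsontools TJsonNode
      // instances instead of fpjson ones.
    until T=tkEOF;
  finally
    S.Free;
  end;
end.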

But general usability does indeed come at the price of some speed loss.

That said, your classes are markedly slower when it comes to accessing the data.

The following program is 100 (!) times slower than the equivalent fpjson code:

{$mode objfpc}
{$h+}
uses dateutils, sysutils, jsontools;

Var
  I,N : Integer;
  D,E : TJSONNode;
  B : double;
  NT : TDateTime;

begin
  N:=10000000;
  D:=TJSONNode.Create;
  D.Parse('{ "d": 12345678.3 }');  // parse once
  E:=D.Child(0);                   // the "d" member
  NT:=Now;
  B:=1;
  for i:=0 to N do
    B:=E.AsNumber * 2;             // convert and read the value on every iteration
  Writeln('Time ',MillisecondsBetween(Now,NT));
  D.Free;
end.

home:~> ./tb
Time 3888

Same program in fpJSON:

home:~> ./tb2
Time 32
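
The fpJSON version is essentially the same program (shown here as a
reconstruction, so possibly not character for character what I ran):

{$mode objfpc}
{$h+}
uses dateutils, sysutils, fpjson, jsonparser;

Var
  I,N : Integer;
  D,E : TJSONData;
  B : double;
  NT : TDateTime;

begin
  N:=10000000;
  D:=GetJSON('{ "d": 12345678.3 }');  // parse once
  E:=D.Items[0];                      // the "d" member
  NT:=Now;
  B:=1;
  for i:=0 to N do
    B:=E.AsFloat * 2;                 // the value is already stored as a float
  Writeln('Time ',MillisecondsBetween(Now,NT));
  D.Free;
end.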

This is because when accessing the value, you must do the conversion to
float. Every time.

This is true for JSON string values as well: you must re-decode the JSON
escapes on every access, over and over again, each time the data is read.

No doubt you can easily fix this by storing the value in the proper type, but
this will slow down your parser.
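
For what it's worth, here is a hypothetical sketch (not jsontools code) of
storing the converted number, caching it on first access:

{$mode objfpc}
{$h+}

type
  // Hypothetical node that keeps the raw JSON text and caches the
  // converted number after the first access.
  TCachedNumberNode = class
  private
    FRaw : string;        // raw JSON text of the value
    FValue : Double;      // cached converted value
    FConverted : Boolean; // has FRaw been converted yet?
  public
    constructor Create(const ARaw : string);
    function AsNumber : Double;
  end;

constructor TCachedNumberNode.Create(const ARaw : string);
begin
  FRaw:=ARaw;
  FConverted:=False;
end;

function TCachedNumberNode.AsNumber : Double;
Var
  Code : Integer;
begin
  if not FConverted then
    begin
    Val(FRaw,FValue,Code);  // Val is locale-independent, unlike StrToFloat
    if Code<>0 then
      FValue:=0;            // a real implementation would raise an error here
    FConverted:=True;
    end;
  Result:=FValue;
end;

Var
  N : TCachedNumberNode;

begin
  N:=TCachedNumberNode.Create('12345678.3');
  Writeln(N.AsNumber:0:1);  // converted on first access
  Writeln(N.AsNumber:0:1);  // served from the cache
  N.Free;
end.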

So: if you use the resulting JSON a lot, your code will run faster with fpJSON.

It thus boils down to a choice: do you need fast processing or fast parsing?

In the end it will probably not matter: most likely all the nodes will be
traversed in a typical use case, and the overall time for your approach and
mine will be similar.

This is the danger of benchmarks: they focus on one aspect. In real life, all
aspects are usually present.

Anyway.

While coding this small test, I noticed that this also does not work in jsontools:

Var
  D : TJSONNode;

begin
  D:=TJSONNode.Create;
  D.Parse('true'); // or D.Parse('12345678.3');
  D.Free;
end.

An unhandled exception occurred at $00000000004730B0:
EJsonException: Root node must be an array or object

If you look at the browser specs for JSON.parse, this is supposed to work as well:

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse

Also frequently encountered is omitting the quotes ("") around property names;
JSON derives from Javascript, where this is allowed:

D.Parse('{ d: 12345678.3 }');

Results in:

An unhandled exception occurred at $0000000000473075:
EJsonException: Error while parsing text

Both are supported in fpJSON. No doubt you can fix this easily.
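
For reference, both work out of the box with GetJSON in fpJSON (shown as a
small sketch; non-strict parsing is the default, although the exact option
handling may differ a bit between FPC versions):

{$mode objfpc}
{$h+}
uses fpjson, jsonparser;

Var
  D : TJSONData;

begin
  // A scalar root value is accepted.
  D:=GetJSON('true');
  Writeln(D.AsBoolean);
  D.Free;

  D:=GetJSON('12345678.3');
  Writeln(D.AsFloat:0:1);
  D.Free;

  // Unquoted property names are accepted in non-strict mode.
  D:=GetJSON('{ d: 12345678.3 }');
  Writeln(D.FormatJSON);
  D.Free;
end.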


So you see: with some extra study, the picture of which is "better", jsontools
or fpjson, is not as clear as it may seem. In the end it all boils down to
some choices.

Michael.

PS. With two relatively simple changes, I took 40% off the parsing time of fpJSON.
No doubt more gain can be achieved; for example, I haven't yet implemented the
suggestion made by Benito.
