On Fri, Sep 12, 2014 at 6:19 PM, Ron W <ronw.m...@gmail.com> wrote:

> Nor does the JSON API need to handle "things". Even the "TH1 API" doesn't
> handle things. It's the code using the API that does the handling
>

Not when it comes to validation like length limits - that has to be done
on the C side.


>
>
>> The only expectations of the JSON API would be to limit the length of
>>> strings delivered to the rest of Fossil and map queries/updates
>>> appropriately (see below).
>>>
>>
>> Any such limits cannot be imposed until the JSON is parsed in its
>> entirety, though. The JSON "DOM" (so to say) gets parsed at a lower level,
>> and then the Fossil side of that has to understand what those limits are
>> and where to apply them. e.g. it must not apply them in the Wiki APIs,
>> where pages can be arbitrarily long.
>>
>
> To avoid buffer overflows. I would hope that the JSON API already does
> this.
>

All buffers are allocated dynamically at a far deeper level (based on a
parser written by Doug Crockford, if it's any consolation ;).



>
>
>> Why are string length limits so important here - i don't get that part.
>> What if someone wants to add a "rough draft of your most recent novel"
>> field?
>>
>
> Same as above, avoiding buffer overflows. If the API (and Fossil) can
> handle a string the size of "Encyclopedia Galactica", that's great.
>

If someone submits a legal JSON string of 2GB in length, it will get read
in just like anything else. The length is not known in advance (other than
the CONTENT_LENGTH CGI var, which (A) covers the whole input blob and (B)
can be set to an arbitrary value by a malicious caller).

The lowest level of the parser reads byte by byte (expanding its buffers as
needed), and then hands the result of each atomic JSON value back up to the
DOM-style API (which sits between fossil and the parser). In terms of
memory use, it will take whatever it can get, but only allocates what it
needs to handle the current element. The lowest-level push-parser has no
mechanism to say, "if a given element is larger than X, abort."[1]

So yes, one can DoS it by simply feeding it absurdly massive JSON, but
that's a potential problem any JSON-consuming application has, as the
parsing of JSON request data is done at one (or more) level(s) removed from
the app-level code. One can also potentially DoS fossil (and most other
POST-reading apps) simply by POSTing arbitrarily large/meaningless
form/urlencoded data.


[1] one exception: the max nesting depth of objects/arrays is configurable,
defaulting to 15 IIRC.


>> i'll need to see an example before that's clear to me, i think. Can you
>> compose some pseudo-json for such a request/response exchange?
>>
>
> I guess something like:
>
> "QUERY",                  // message type
> "JiraId", "1234",        // query fields and values
> "", "",                         // end of query
> "*",                            // fields to include in response (in this
> case, all)
> ""                              // end of message
>
> Similar for "NEW" and "UPDATE" messages, except the second list of just
> the field names is not needed.
>

That's where my "problem" (per se) is: handling new/update with
arbitrary/dynamic field names is a huge pain in the butt in C. i'm completely open to
assisting someone else in doing it, but this particular problem is not one
i want to personally explore so long as there are much more fun things to
be coding :).


-- 
----- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf
_______________________________________________
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users
