My point about JSON, etc. is that there is no reason not to use it as a
query language if that makes things easier. If your system handles JSON
efficiently, why not accept a query formatted as JSON? It's not
semantically different from SQL syntax. Here's an example (in roughly
JSON notation):

{
  operation: "insert",
  table: "blah",
  columns: ["a", "b", "c"],
  values: [1.3, 2.0, 3.1],
  on-conflict: "replace"
}

That is equivalent to an SQL INSERT statement, but why build that SQL
string, spending memory and time, when your system can emit JSON (or
whatever) effortlessly? Why are people who come from the web world
learning SQL syntax? There is no magic in it; the magic is in what it
means, which anyone can understand (tables, columns, joins, search
criteria). The syntax is completely arbitrary, dating from the '70s or
'80s and probably ultimately inspired by COBOL. There is of course a lot
of existing material built around SQL syntax, but most people just want to
insert some data and run fairly straightforward queries on it. To them,
SQL is probably mostly confusing.
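To make the mapping concrete, here is a minimal sketch (Python, stdlib
sqlite3) that translates the hypothetical JSON request above into a
parameterized INSERT. It still goes through SQL text, which is exactly the
overhead I'd like to avoid, but it shows the translation is entirely
mechanical:

```python
import json
import sqlite3

def insert_from_json(con, query_json):
    """Translate a JSON-encoded insert request (the hypothetical
    format from the example above) into a parameterized INSERT."""
    q = json.loads(query_json)
    assert q["operation"] == "insert"
    # NOTE: a real implementation must validate/quote the table and
    # column identifiers; the values go through bind parameters.
    cols = ", ".join(q["columns"])
    marks = ", ".join("?" for _ in q["values"])
    verb = ("INSERT OR REPLACE"
            if q.get("on-conflict") == "replace" else "INSERT")
    con.execute(f"{verb} INTO {q['table']} ({cols}) VALUES ({marks})",
                q["values"])

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE blah (a, b, c)")
insert_from_json(con, '''{"operation": "insert", "table": "blah",
                          "columns": ["a", "b", "c"],
                          "values": [1.3, 2.0, 3.1],
                          "on-conflict": "replace"}''')
```

The point is that nothing in the request format is SQL-specific; the same
dictionary could just as well drive the parser or bytecode generator
directly.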

The feature I'm working on now, as a first step, basically feeds the parser
tokens directly so I don't have to generate a query string. Even that gives
me a big saving (mostly in memory), without changing the syntax or
introducing subtle bugs. The next step is to "rationalize" the syntax
progressively, so that the sequence of tokens I need to pass is closer to
the representation I use internally (in my code). I think this is the
lowest-impact approach.

You could insert yourself at any point in the SQL-to-bytecode process.
If you wanted to fully support XML queries or whatever, that could be part
of the process, whereby appropriate function calls are generated (say, into
your XML query library), virtual table instances are created, or the query
is transformed appropriately.
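SQLite's table-valued functions already hint at what this looks like in
practice: a query over a different data model gets mediated by a virtual
table. A small sketch using Python's stdlib sqlite3 (this assumes a build
with the json1 extension compiled in, which is not a given on every
platform):

```python
import sqlite3

con = sqlite3.connect(":memory:")
doc = '{"items": [{"name": "a"}, {"name": "b"}]}'

# json_each is an eponymous virtual table: each element of the
# "items" array becomes a row, so a path-style query over a JSON
# document turns into plain SQL against a virtual table.
rows = con.execute(
    "SELECT json_extract(value, '$.name') "
    "FROM json_each(?, '$.items')",
    (doc,),
).fetchall()
```

An XML query library could be hooked in the same way: the front end parses
the foreign query language, and the back end sees only virtual tables and
function calls.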



On Thu, Jun 4, 2015 at 12:05 PM, Nico Williams <nico at cryptonector.com>
wrote:

> On Thu, Jun 04, 2015 at 11:45:28AM -0700, Darko Volaric wrote:
> > Which sort of leads me to my next feature, which is bypassing the SQL
> > language. [...]
>
> I like SQL, but sure, if the compiler worked by first parsing into an
> AST, and if the AST were enough of an interface (versioned, though not
> necessarily backward-compatible between versions), then one could:
>
>  - write different front-end languages (though an AST isn't needed for
>    this: you can always generate SQL)
>
>  - write powerful macro languages
>
>  - write alternative/additional optimizers (and linters, syntax
>    highlighters, ...) that work at the AST level
>
> If the VDBE bytecode were also a versioned interface then one could
> write peep-hole optimizers as well.
>
> One might even want to generate IR code for LLVM, or use a JIT-er,
> though for SQL I don't think that would pay off.  I suspect that most of
> the CPU cycles go to data-intensive tasks such as I/O, cache thrashing,
> and encoding/decoding.  I'd be much more interested in SQLite4 being
> finished than an LLVM backend for SQLite3, and I'd be very interested in
> seeing if word-optimized variable-length encoding would have better
> performance than byte-optimized variable-length encoding.  The point
> though is that using an AST would make the system more modular.
>
> >    [...]. Why use that crusty old syntax when it's equally expressible in
> > JSON, XML or something else. Again I see it just as an API, [...]
>
> Now I'm confused.  JSON and XML are not query languages.  There exist
> query languages for them (e.g., XPath/XSLT for XML).
>
> I suppose you might have meant that SQL itself is a data representation
> language like JSON and XML are, and it is (data being expressed as
> INSERT .. VALUES ..; statements).
>
> Nico
