Something else came to mind: are there plans to support prepared queries?
I recall someone saying before that Ignite caches queries internally,
but it's not at all clear if or how it does that. I assume a simple hash
of the query text isn't enough.
We generate SQL queries from user runtime settings, and they can run to
hundreds of lines. I imagine this means most of our queries are not
being cached, but there are recurring patterns, so we could generate and
manage prepared queries ourselves.
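For context, what we'd likely do ourselves is keep the generated SQL text stable per pattern and bind the varying values as query arguments, so that any statement cache keyed on the query text can actually hit. A rough sketch in plain Java (illustrative names only; QueryTemplates is our own helper, not an Ignite API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrative only: QueryTemplates is not an Ignite API. The idea is that
// the (expensive) SQL generation runs once per pattern, and every later
// execution reuses byte-for-byte identical SQL text, so a statement cache
// keyed on the text can hit; only the bound arguments vary per request.
public class QueryTemplates {
    private final Map<String, String> templates = new ConcurrentHashMap<>();

    /** Build the parameterised SQL once per pattern key, then reuse it. */
    public String sqlFor(String patternKey, Supplier<String> generator) {
        return templates.computeIfAbsent(patternKey, k -> generator.get());
    }

    public static void main(String[] args) {
        QueryTemplates templates = new QueryTemplates();

        String first = templates.sqlFor("tweets-by-author",
                () -> "SELECT content, \"to\" FROM Tweet WHERE author = ?");
        // Second call reuses the cached text; this generator never runs.
        String second = templates.sqlFor("tweets-by-author",
                () -> "THIS GENERATOR IS NEVER CALLED");

        System.out.println(first.equals(second)); // true
        // The stable text would then be executed with per-request arguments,
        // e.g. new SqlFieldsQuery(first).setArgs(authorId) in today's API.
    }
}
```

The point is just that the heavy generation happens once per pattern and the cacheable text stays identical across executions.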
Also, will there be a dedicated API for running SQL queries, rather than
having to pass a SqlFieldsQuery to a cache that has nothing to do with the
cache being queried? When I first started with Ignite years ago, this was
beyond confusing for me: I'm trying to run select x from B, but I pass the
query to a cache called DUMMY or whatever arbitrary name...
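To make the difference in shape concrete, here it is mocked in plain Java (neither interface below is a real Ignite API; the names are purely illustrative):

```java
import java.util.List;

// Mock of the two API shapes; nothing here is a real Ignite interface.
public class ApiShapes {
    // Today's shape: SQL is executed *through a cache*, even when the
    // statement targets a table unrelated to that cache.
    interface Cache {
        List<List<?>> query(String sqlFieldsQuery);
    }

    // The shape being asked for: SQL hangs off the node/session itself,
    // with no unrelated cache in the call chain.
    interface SqlSession {
        List<List<?>> execute(String sql, Object... args);
    }

    public static void main(String[] args) {
        // Cache-bound: "DUMMY" has nothing to do with table B.
        Cache dummy = sql -> List.of(List.of("row from B"));
        System.out.println(dummy.query("SELECT x FROM B"));

        // Session-bound: the query stands on its own.
        SqlSession session = (sql, a) -> List.of(List.of("row from B"));
        System.out.println(session.execute("SELECT x FROM B"));
    }
}
```

Both calls return the same rows; the second just doesn't drag an arbitrary cache name into every query.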
On Fri, Jul 23, 2021 at 4:05 PM Courtney Robinson wrote:
> Andrey,
> Thanks for the response - see my comments inline.
>
>
>> I've gone through the questions but don't have the whole picture of your
>> use case.
>
>> Would you please clarify how exactly you use Ignite? What are the
>> integration points?
>>
>
> I'll try to clarify: we have a low/no-code platform. A user designs a
> model for their application, and we map this model to Ignite tables and
> other data sources. The model I'll describe is what we're building now and
> is expected to be in alpha some time in Q4 2021. Our current production
> architecture is different and isn't as generic; it is heavily tied to
> Ignite, and we've redesigned it to gain flexibility where Ignite doesn't
> provide what we want - things like window functions and other SQL-99
> limitations.
>
> In the next-gen version we're working on, you can create a model for a
> Tweet(content, to), and we will create an Ignite table with content and to
> columns using the types the user selects. This is the simplest case.
> We are adding generic support for sources and sinks, using Calcite as a
> data virtualisation layer. Ignite is one of the available sources/sinks.
>
> When a user creates a model for Tweet, we also allow them to specify how
> they want the data indexed. We have a copy of the Calcite Elasticsearch
> adapter, modified for Solr.
>
> When a source is queried (Ignite or any other that we support), we
> generate SQL that Calcite executes. Calcite pushes the generated queries
> down to Solr, Solr produces a list of IDs (in the case of Ignite), and we
> do a multi-get from Ignite to produce the actual results.
>
> Obviously there's a lot more to this but that should give you a general
> idea.
>
>> And maybe share some experience with using Ignite SPIs?
>>
> Our evolution with Ignite started with the key-value + compute APIs. We
> used the SPIs then, but have since moved to using only the Ignite SQL API
> (we gave up transactions for this).
>
> We originally used the indexing SPI to keep our own Lucene index of the
> data in a cache. We did not use Ignite's FTS, as it is very limited
> compared to what we allow customers to do. If I remember correctly, we
> were using an affinity compute job to send queries to the right Ignite
> node and then doing a multi-get to pull the data from the caches.
> I think we used one or two other SPIs, and we found them very useful for
> extending and customising Ignite without having to fork or change upstream
> classes. We only stopped using them because we eventually concluded that
> the SQL-only API was better for numerous reasons.
>
>
>> We'll keep this information in mind while developing Ignite,
>> because it may help us make a better product.
>>
>> By the way, I'll try to answer the questions.
>>
>> > 1. Schema change - does that include the ability to change the types
>> > of fields/columns?
>> Yes, we plan to support transparent on-the-fly conversion to a wider
>> type (e.g. 'int' to 'long').
>> This is a major point of our Live-schema concept.
>> In fact, there is no need to convert data on all the nodes synchronously,
>> as traditional SQL databases do (when they support it at all);
>> we are going to support multiple schema versions, convert data on demand
>> on a per-row basis to the latest version,
>> and then write the row back.
>>
>
> I can understand. The auto conversion to wider type makes sense.
>
>>
>> More complex things like 'String' -> 'int' are out of scope for now,
>> because they require executing user code on the critical path.
>>
>
> I would argue, though, that executing user code on the critical path
> shouldn't be a blocker for custom conversions. I feel that if a user is
> building an advanced enough integration to provide custom conversions,
> they would be aware that it impacts the system as a whole.
>
>> The limitation here is that the column MUST NOT be indexed, because an
>> index over data of different types is impossible.
>>
> Understood - I'd make the case that indexing should be pluggable. I would
> love for us to be able to take indexing away from Ignite in our impl. - I
> t