Yes, as I understand it, the integration has used Spark internals since the first commit.
The reason is that we take the Spark SQL query execution plan and try to
execute it on the Ignite cluster.
We also inherit a lot of Developer API classes that could be unstable.
Spark has no good extension point for this kind of integration, which is
why we have to go deeper into its internals.
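
For context, here is an editor's sketch (my own illustration, not code from the module) of the public extension hook Spark 2.4 does offer, SparkSessionExtensions. It only lets you inject rules and planner strategies into Catalyst; a Strategy implementation still has to be built against Catalyst internal classes, which is roughly why the integration ends up depending on internals. The IgniteExtensions name is hypothetical:

```scala
import org.apache.spark.sql.{SparkSession, SparkSessionExtensions}

// Hypothetical class; the real Ignite integration needs more than
// this public hook can provide.
class IgniteExtensions extends (SparkSessionExtensions => Unit) {
  override def apply(ext: SparkSessionExtensions): Unit = {
    // injectPlannerStrategy is part of Spark 2.4's public API, but the
    // Strategy the builder returns must be written against Catalyst
    // internal classes anyway (SparkPlan, LogicalPlan, etc.).
    ext.injectPlannerStrategy(session => ???) // strategy body omitted
  }
}

val spark = SparkSession.builder()
  .master("local[*]")
  .withExtensions(new IgniteExtensions)
  .getOrCreate()
```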

On Mon, Sep 30, 2019 at 20:17, Ivan Pavlukhin <vololo...@gmail.com> wrote:

> Hi Alexey,
>
> As an outside observer far from the Ignite Spark integration, I would
> like to ask a humble question for my own understanding. Why does this
> integration use Spark internals? Is that a common approach for
> integrating with Spark?
>
> On Mon, Sep 30, 2019 at 16:17, Alexey Zinoviev <zaleslaw....@gmail.com> wrote:
> >
> > Hi, Igniters
> > I've started work on Spark 2.4 support.
> >
> > We started the discussion here, in
> > https://issues.apache.org/jira/browse/IGNITE-12054
> >
> > The Spark internals were heavily refactored between versions 2.3 and
> > 2.4; the main changes touch:
> >
> >    - External catalog and listener refactoring
> >    - Changes to HAVING operator semantics
> >    - Push-down NULL filter generation in JOIN plans
> >    - Minor changes in plan generation that need to be adapted in our
> >    integration module
> >
> > I propose an initial solution via the creation of a new module,
> > spark-2.4, here: https://issues.apache.org/jira/browse/IGNITE-12247,
> > and the addition of a new profile, spark-2.4 (to avoid possible
> > clashes with other Spark versions).
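> >
> > [Editor's note: a hedged sketch of what such a Maven profile could
> > look like; the module path and profile layout here are assumptions,
> > not the final structure from the ticket.]
> >
> > ```xml
> > <profile>
> >   <!-- Activated explicitly, e.g. mvn install -Pspark-2.4 -->
> >   <id>spark-2.4</id>
> >   <modules>
> >     <!-- Hypothetical module path for the Spark 2.4 integration -->
> >     <module>modules/spark-2.4</module>
> >   </modules>
> > </profile>
> > ```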
> >
> > Also, I've converted the ticket into an umbrella ticket and created a
> > few tickets for muted tests (around 7 of the 211 tests are muted now).
> >
> > Please, if anybody is interested, make an initial review of the
> > modular Ignite structure and changes (without deep diving into the Spark code).
> >
> > And yes, the proposed code is mostly a copy of the spark-ignite
> > module with a few fixes.
>
>
>
> --
> Best regards,
> Ivan Pavlukhin
>