[ https://issues.apache.org/jira/browse/IGNITE-7077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16440509#comment-16440509 ]

Nikolay Izhikov commented on IGNITE-7077:
-----------------------------------------

[~vkulichenko]

> I noticed that the new setting name has ignite. prefix. I generally like 
> this, but this is not consistent with others.

All other options configure an *Ignite* Data Frame and are passed straight to the 
{{IgniteRelationProvider}}, so I don't think they need an additional prefix.

{{OPTION_DISABLE_SPARK_SQL_OPTIMIZATION}} is an option to be used when the *Spark 
session* is configured, so I made it consistent with other Spark session options 
such as {{"spark.sql.inMemoryColumnarStorage.compressed"}}, etc. [1]

> Is it possible to change all others in the same way? If yes, let's do this. 
> Otherwise, let's remove the prefix.

I think we shouldn't change the existing constant values.
They are part of the public API and have already been released.

Personally, I like the current naming.
Anyway, I can remove the prefix from {{OPTION_DISABLE_SPARK_SQL_OPTIMIZATION}} if 
you want.
Would {{"disableIgniteSparkSQLOptimization"}} work for you?

> Out of curiosity, what is the purpose of the code below? Shouldn't we just do 
> nothing if optimization is disabled?

Thank you. It was a bug in my PR, and I have fixed it.
The Spark session configuration can be changed at runtime, so we have to check 
whether the Ignite optimization is enabled each time a query occurs.
If it is disabled, we have to remove {{IgniteOptimization}} from the extra 
optimizations.
The correct code is:
{code:scala}
if (optimizationDisabled) {
    // Optimization has been disabled at runtime - drop it from the extra optimizations.
    experimentalMethods.extraOptimizations =
        experimentalMethods.extraOptimizations.filter(_ != IgniteOptimization)
} else {
    // Optimization is enabled - register it, unless it is already registered.
    val optimizationExists = experimentalMethods.extraOptimizations.contains(IgniteOptimization)

    if (!optimizationExists)
        experimentalMethods.extraOptimizations =
            experimentalMethods.extraOptimizations :+ IgniteOptimization
}
{code}
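
For reference, the flag can be re-read from the session configuration on every query, e.g. (a sketch with an assumed option value and helper name, not the PR's exact code):
{code:scala}
import org.apache.spark.sql.SparkSession

// Re-read the flag on every query, since the session configuration may change at runtime.
def optimizationDisabled(spark: SparkSession): Boolean =
    spark.conf.get("ignite.disableSparkSQLOptimization", "false").toBoolean
{code}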

Please see the updated PR.


[1] 
[https://github.com/apache/spark/blob/v2.2.0/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L80]

> Spark Data Frame Support. Strategy to convert complete query to Ignite SQL
> --------------------------------------------------------------------------
>
>                 Key: IGNITE-7077
>                 URL: https://issues.apache.org/jira/browse/IGNITE-7077
>             Project: Ignite
>          Issue Type: New Feature
>          Components: spark
>    Affects Versions: 2.3
>            Reporter: Nikolay Izhikov
>            Assignee: Nikolay Izhikov
>            Priority: Major
>              Labels: bigdata
>             Fix For: 2.5
>
>
> Basic support of Spark Data Frames for Ignite was implemented in IGNITE-3084.
> We need to implement a custom Spark strategy that can convert a whole Spark SQL 
> query to an Ignite SQL query if the query consists only of Ignite tables.
> The strategy does nothing if the Spark query includes tables other than Ignite tables.
> The MemSQL implementation can be taken as an example - 
> https://github.com/memsql/memsql-spark-connector
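
A minimal illustrative skeleton of the strategy described above (not from this ticket's PR; {{isIgniteRelation}} and {{buildIgniteScan}} are hypothetical placeholders for the real Ignite relation check and the physical plan that would push the whole query down to Ignite SQL):
{code:scala}
import org.apache.spark.sql.Strategy
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.execution.SparkPlan
import org.apache.spark.sql.execution.datasources.LogicalRelation
import org.apache.spark.sql.sources.BaseRelation

// Illustrative skeleton only: the two constructor parameters stand in for the
// Ignite-specific pieces this ticket introduces.
class WholeQueryToIgniteStrategy(
    isIgniteRelation: BaseRelation => Boolean,
    buildIgniteScan: LogicalPlan => SparkPlan) extends Strategy {

  override def apply(plan: LogicalPlan): Seq[SparkPlan] = {
    // Every leaf of the logical plan must be a relation backed by Ignite.
    val leaves = plan.collect { case leaf if leaf.children.isEmpty => leaf }

    val onlyIgniteTables = leaves.nonEmpty && leaves.forall {
      case lr: LogicalRelation => isIgniteRelation(lr.relation)
      case _                   => false
    }

    if (onlyIgniteTables)
      Seq(buildIgniteScan(plan)) // convert the complete query into a single Ignite SQL query
    else
      Nil                        // mixed sources: do nothing, let Spark plan the query itself
  }
}
{code}
Such a strategy would be registered through {{spark.experimental.extraStrategies}}, the standard extension point for custom planning strategies.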



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
