[ 
https://issues.apache.org/jira/browse/SPARK-26204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Okolnychyi updated SPARK-26204:
-------------------------------------
    Description: 
The {{InSet}} expression was introduced in SPARK-3711 to avoid the O(n) time 
complexity of the {{In}} expression. However, because {{InSet}} relies on Scala's 
{{immutable.Set}}, every lookup autoboxes primitive values. As a consequence, 
{{InSet}} can be significantly slower than {{In}} even on 100+ values.
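
To make the boxing concrete, here is a minimal sketch (plain Scala, not the 
actual {{InSet}} code) showing that a generic {{immutable.Set}} stores primitives 
as boxed {{java.lang.Long}} and also boxes the argument of every {{contains}} call:

{code:scala}
val longs: scala.collection.immutable.Set[Long] = Set(1L, 2L, 3L)

// The collection is generic, so the elements are stored boxed:
println(longs.asInstanceOf[Set[AnyRef]].head.getClass)  // class java.lang.Long

// ...and every lookup boxes its argument before the hash/equality check:
val hit = longs.contains(2L)  // 2L is boxed to java.lang.Long here
println(hit)                  // true
{code}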

We need to find a way to optimize {{InSet}} expressions and avoid the cost of 
autoboxing.

 There are a few approaches that we can use (see the sketches below):
 * Collections for primitive values (e.g., FastUtil, HPPC)
 * Type specialization in Scala (e.g., OpenHashSet in Spark)
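
As a sketch of the first option, assuming FastUtil is on the classpath (it is 
not something {{InSet}} uses today, this is only illustrative), a 
primitive-specialized set keeps both the storage and the lookups unboxed:

{code:scala}
import it.unimi.dsi.fastutil.longs.LongOpenHashSet

// Raw longs in open-addressed arrays: neither add() nor contains()
// allocates a boxed java.lang.Long.
val fastSet = new LongOpenHashSet()
fastSet.add(1L)
fastSet.add(2L)
fastSet.add(3L)
println(fastSet.contains(2L))  // true, evaluated on primitive longs
println(fastSet.contains(7L))  // false
{code}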

According to my local benchmarks, {{OpenHashSet}}, which is already available 
in Spark and uses type specialization, can significantly reduce the memory 
footprint. However, it slows down lookups even compared to the built-in Scala 
sets. FastUtil and HPPC, on the other hand, gave a substantial performance 
improvement. So, it makes sense to evaluate primitive collections.
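
For anyone who wants to repeat the comparison locally, a rough timing harness 
could look like the sketch below. It is not the benchmark behind the attached 
screenshot, it assumes Spark core and FastUtil are on the classpath, and the 
package name is hypothetical ({{OpenHashSet}} is a Spark-internal, 
package-private class, so the file has to sit under {{org.apache.spark}}):

{code:scala}
// Hypothetical package, chosen only so the private[spark] OpenHashSet is visible.
package org.apache.spark.example

import it.unimi.dsi.fastutil.longs.LongOpenHashSet
import org.apache.spark.util.collection.OpenHashSet

object InSetLookupTiming {

  // Times `probes` lookups through the given membership function.
  def time(label: String, probes: Long)(contains: Long => Boolean): Unit = {
    val start = System.nanoTime()
    var matches = 0L
    var v = 0L
    while (v < probes) {
      if (contains(v)) matches += 1
      v += 1
    }
    val ms = (System.nanoTime() - start) / 1e6
    println(f"$label%-18s $ms%10.1f ms ($matches matches)")
  }

  def main(args: Array[String]): Unit = {
    val values = 0L until 200L
    val probes = 10 * 1000 * 1000L

    val scalaSet = values.toSet            // boxed java.lang.Long elements
    val openSet = new OpenHashSet[Long]()  // specialized, unboxed storage
    val fastSet = new LongOpenHashSet()    // primitive long storage
    values.foreach { v => openSet.add(v); fastSet.add(v) }

    time("immutable.Set", probes)(scalaSet.contains)
    time("OpenHashSet", probes)(openSet.contains)
    time("LongOpenHashSet", probes)(v => fastSet.contains(v))
  }
}
{code}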

See the attached screenshot for what I observed while testing.

  was:
The {{InSet}} expression was introduced in SPARK-3711 to avoid the O(n) time 
complexity of the {{In}} expression. However, because {{InSet}} relies on Scala's 
{{immutable.Set}}, every lookup autoboxes primitive values. As a consequence, 
{{InSet}} can be significantly slower than {{In}} even on 100+ values.

We need to find a way to optimize {{InSet}} expressions and avoid the cost of 
autoboxing.

 There are a few approaches that we can use:
 * Collections for primitive values (e.g., FastUtil, HPPC)
 * Type specialization in Scala (would it even work for code gen in Spark?)

I tried {{OpenHashSet}}, which is already available in Spark and uses type 
specialization, but I did not manage to avoid autoboxing. FastUtil, on the 
other hand, did work and gave a substantial performance improvement.

See the attached screenshot for what I observed while testing.
 


> Optimize InSet expression
> -------------------------
>
>                 Key: SPARK-26204
>                 URL: https://issues.apache.org/jira/browse/SPARK-26204
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.0.0
>            Reporter: Anton Okolnychyi
>            Priority: Major
>         Attachments: heap size.png
>


