[ https://issues.apache.org/jira/browse/SPARK-40499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17806201#comment-17806201 ]

Joey Pereira commented on SPARK-40499:
--------------------------------------

I've found another app of ours with a similarly unbearable regression. The app
previously ran in under 2 hours prior to upgrading, and now takes over 24 hours
on Spark 3. It's a lot simpler, so this is more or less the SQL it's doing (with
a number of other internal group-by fields removed for brevity).

{code:sql}
with t as (
  select
    method,
    response_code,
    if(response_code >= 200 and response_code < 300, duration_ms, null) as duration_ms_2xx
  from logs
)
select
  method,
  count(*) as cnt,
  sum(case when response_code >= 200 and response_code < 300 then 1 else 0 end) as cnt_2xx,
  sum(case when response_code >= 300 and response_code < 400 then 1 else 0 end) as cnt_3xx,
  sum(case when response_code >= 400 and response_code < 500 then 1 else 0 end) as cnt_4xx,
  sum(case when response_code >= 500 then 1 else 0 end) as cnt_5xx,
  approx_percentile(duration_ms_2xx, 0.5) as latency_p50,
  approx_percentile(duration_ms_2xx, 0.9) as latency_p90,
  approx_percentile(duration_ms_2xx, 0.99) as latency_p99,
  approx_percentile(duration_ms_2xx, 0.999) as latency_p999
from t
group by 1
{code}

This is simple enough that I'm a bit more convinced it could be the CaseWhen 
change.
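
One way to narrow this down, as a sketch (assuming the suspect is the physical aggregate strategy chosen for approx_percentile): check which aggregate operator Spark picks for a cut-down version of the query. The table and column names below come from the SQL above; the thing to look for in the output is ObjectHashAggregate versus a fall-back to SortAggregate.

{code:scala}
// Sketch: inspect the physical plan Spark chooses for the percentile
// aggregation. Table/column names (logs, response_code, duration_ms)
// are taken from the query above; nothing else here is app-specific.
val df = spark.sql("""
  SELECT
    method,
    approx_percentile(
      IF(response_code >= 200 AND response_code < 300, duration_ms, NULL),
      0.5) AS latency_p50
  FROM logs
  GROUP BY method
""")

// "formatted" explain mode (Spark 3.0+) prints the operator tree; check
// whether the aggregate nodes are ObjectHashAggregate or SortAggregate.
df.explain("formatted")
{code}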

> Spark 3.2.1 percentile_approx query much slower than Spark 2.4.0
> ----------------------------------------------------------------
>
>                 Key: SPARK-40499
>                 URL: https://issues.apache.org/jira/browse/SPARK-40499
>             Project: Spark
>          Issue Type: Bug
>          Components: Shuffle
>    Affects Versions: 3.2.1
>         Environment: hadoop: 3.0.0
> spark: 2.4.0 / 3.2.1
> shuffle: spark 2.4.0
>            Reporter: xuanzhiang
>            Priority: Major
>         Attachments: Screenshot 2024-01-05 at 3.51.52 PM.png, Screenshot 
> 2024-01-05 at 3.53.10 PM.png, spark2.4-shuffle-data.png, 
> spark3.2-shuffle-data.png
>
>
> spark.sql(
>       s"""
>          |SELECT
>          | Info ,
>          | PERCENTILE_APPROX(cost,0.5) cost_p50,
>          | PERCENTILE_APPROX(cost,0.9) cost_p90,
>          | PERCENTILE_APPROX(cost,0.95) cost_p95,
>          | PERCENTILE_APPROX(cost,0.99) cost_p99,
>          | PERCENTILE_APPROX(cost,0.999) cost_p999
>          |FROM
>          | textData
>          |""".stripMargin)
>  * When we used Spark 2.4.0, aggregation used ObjectHashAggregate, and stage 2
> pulled shuffle data very quickly. But when we use Spark 3.2.1 with the old
> shuffle service, 140 MB of shuffle data takes 3 hours.
>  * If we upgrade the shuffle service, will we still get this performance
> regression?
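
For the ObjectHashAggregate angle above: if the 3.2.x plan shows a fall-back to SortAggregate, the fall-back behaviour is tunable. The following is a sketch under that assumption, not a confirmed fix; the threshold value is an arbitrary example.

{code:scala}
// Sketch, assuming the slowdown is ObjectHashAggregate falling back to
// sort-based aggregation. Both settings exist in Spark; the threshold
// value below is an arbitrary example, not a recommendation.
spark.conf.set("spark.sql.execution.useObjectHashAggregateExec", "true")
// Number of distinct group keys buffered before ObjectHashAggregateExec
// falls back to sort-based aggregation (default: 128).
spark.conf.set("spark.sql.objectHashAggregate.sortBased.fallbackThreshold", "10000")
{code}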


