[ https://issues.apache.org/jira/browse/SPARK-33161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ankush Kankariya updated SPARK-33161:
-------------------------------------
    Description: 
I am noticing a difference in behaviour after upgrading to Spark 3: the number of partitions changes on df.select, which causes my zip operations to fail on a partition-count mismatch. With Spark 2.4.4 it works fine. This does not happen with filter, only when selecting columns.
{code:python}
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pytest-pyspark-local-testing") \
    .master("local[5]") \
    .config("spark.executor.memory", "2g") \
    .config("spark.driver.memory", "2g") \
    .config("spark.ui.showConsoleProgress", "false") \
    .config("spark.sql.shuffle.partitions", 10) \
    .config("spark.sql.optimizer.dynamicPartitionPruning.enabled", "false") \
    .getOrCreate()
{code}
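
For context, the kind of zip failure mentioned above looks roughly like this (an illustrative sketch; tableA and the id column stand in for the real data):

{code:python}
df = spark.table("tableA")   # scan reads back with 10 partitions
ids = df.select("id")        # on 3.0.x this comes back with 1 partition

# RDD.zip requires both sides to have the same number of partitions
# (and the same number of elements per partition), so this fails on
# 3.0.x even though the same code ran fine on 2.4.4.
pairs = df.rdd.zip(ids.rdd)
print(pairs.count())
{code}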
 

With Spark 2.4.4:
{code:python}
df = spark.table("tableA")
print(df.rdd.getNumPartitions())        # 10
new_df = df.filter("id is not null")
print(new_df.rdd.getNumPartitions())    # 10
new_2_df = df.select("id")
print(new_2_df.rdd.getNumPartitions())  # 10
{code}

With Spark 3.0.0:
{code:python}
df = spark.table("tableA")
print(df.rdd.getNumPartitions())        # 10
new_df = df.filter("id is not null")
print(new_df.rdd.getNumPartitions())    # 10
new_1_df = df.select("*")
print(new_1_df.rdd.getNumPartitions())  # 10
new_2_df = df.select("id")
print(new_2_df.rdd.getNumPartitions())  # 1
{code}
See the last line, where the partition count drops from the initial 10 to 1. Any thoughts?
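
As a side note, one way to keep the two sides zippable regardless of how the select is planned is to derive the column RDD from the same parent RDD instead of from a second DataFrame (a workaround sketch only, not a fix for the planner behaviour; names are illustrative):

{code:python}
df = spark.table("tableA")
base = df.rdd

# map() preserves the parent's partitioning and per-partition ordering,
# so both sides of the zip keep the same 10 partitions and stay aligned.
ids_rdd = base.map(lambda row: row["id"])
pairs = base.zip(ids_rdd)
print(pairs.getNumPartitions())   # 10
{code}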

 


> Spark 3: Partition count changing on dataframe select cols
> ----------------------------------------------------------
>
>                 Key: SPARK-33161
>                 URL: https://issues.apache.org/jira/browse/SPARK-33161
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 3.0.0, 3.0.1
>            Reporter: Ankush Kankariya
>            Priority: Blocker
>              Labels: spark-core, spark-sql
>


