[ https://issues.apache.org/jira/browse/SPARK-26672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Wang, Gang resolved SPARK-26672.
--------------------------------
    Resolution: Not A Problem

This is a bug only in our internal Spark version. It is fine in the community version.

> SinglePartition may not satisfy HashClusteredDistribution/OrderedDistribution
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-26672
>                 URL: https://issues.apache.org/jira/browse/SPARK-26672
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.4.0
>            Reporter: Wang, Gang
>            Priority: Major
>
> When loading data into a *bucketed table* TEST_TABLE whose bucket number is
> not 1, from another table SRC_TABLE (bucketed or not), with the SQL:
> insert overwrite table TEST_TABLE select * from SRC_TABLE limit 1000
> the data inserted into TEST_TABLE will not be bucketed. After LimitExec the
> output partitioning is SinglePartition, which under the current logic
> satisfies HashClusteredDistribution, so no shuffle is added.
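For context, the behavior described above comes down to SinglePartition satisfying every non-broadcast required distribution, since with one partition all rows are trivially co-located. Below is a minimal, self-contained Scala sketch of that check. The names loosely mirror Spark 2.4's org.apache.spark.sql.catalyst.plans.physical, but these are simplified stand-ins for illustration, not the actual Spark classes.

{code:scala}
// Simplified stand-ins for Spark's Distribution/Partitioning hierarchy.
sealed trait Distribution
case class HashClusteredDistribution(exprs: Seq[String]) extends Distribution
case object BroadcastDistribution extends Distribution

sealed trait Partitioning {
  def numPartitions: Int
  def satisfies(required: Distribution): Boolean
}

case object SinglePartition extends Partitioning {
  override val numPartitions: Int = 1
  // With a single partition, every row is co-located, so the check
  // returns true for any non-broadcast distribution -- including
  // HashClusteredDistribution. Hence planning inserts no exchange after
  // a global LIMIT, and the write side sees unbucketed data.
  override def satisfies(required: Distribution): Boolean = required match {
    case BroadcastDistribution => false
    case _ => true
  }
}

case class HashPartitioning(exprs: Seq[String], numPartitions: Int)
    extends Partitioning {
  // Satisfied only when hashed on exactly the required expressions.
  override def satisfies(required: Distribution): Boolean = required match {
    case HashClusteredDistribution(reqExprs) => exprs == reqExprs
    case _ => false
  }
}

object Demo extends App {
  val required = HashClusteredDistribution(Seq("bucket_col"))
  // The output partitioning of a global LIMIT is SinglePartition, and it
  // satisfies the required hash distribution, so no shuffle is added.
  println(SinglePartition.satisfies(required))                    // true
  println(HashPartitioning(Seq("other_col"), 8).satisfies(required)) // false
}
{code}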