Pranav Rao created SPARK-23442:
----------------------------------

             Summary: Reading from partitioned and bucketed table uses only 
bucketSpec.numBuckets partitions in all cases
                 Key: SPARK-23442
                 URL: https://issues.apache.org/jira/browse/SPARK-23442
             Project: Spark
          Issue Type: Bug
          Components: Spark Core, SQL
    Affects Versions: 2.2.1
         Environment: {{spark.sql("SET spark.default.parallelism=1000")}}

{{spark.sql("set spark.sql.shuffle.partitions=500") }}

{{spark.sql("set spark.sql.files.maxPartitionBytes=134217728")}}

{{-----}}

{{$ hdfs getconf -confKey mapreduce.input.fileinputformat.split.minsize}}
0

{{$ hdfs getconf -confKey dfs.blocksize}}
134217728

{{$ hdfs getconf -confKey mapreduce.job.maps}}
32
            Reporter: Pranav Rao


Through the {{DataFrameWriter[T]}} interface I have created an external Hive table 
with 5000 (horizontal) partitions and 50 buckets in each partition. Overall the 
dataset is 600 GB and the provider is Parquet.
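
For reference, the table was created roughly like the following sketch (the table name, path, and column names here are illustrative, not the real ones):

{code:scala}
// Illustrative sketch only - real table/column names and path differ.
df.write
  .format("parquet")
  .partitionBy("dt")            // ~5000 horizontal partitions
  .bucketBy(50, "key")          // 50 buckets per partition
  .sortBy("key")
  .option("path", "hdfs:///warehouse/mytable")  // external table location
  .saveAsTable("mytable")
{code}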

Now this works great when joining with a similarly bucketed dataset: Spark is able 
to avoid a shuffle.
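
For example (illustrative names), the exchange is skipped on both sides of the join:

{code:scala}
// Illustrative names: both tables are bucketed into 50 buckets on the join key.
val a = spark.table("mytable")
val b = spark.table("other_bucketed_table")
a.join(b, "key").explain()  // physical plan shows no Exchange before the SortMergeJoin
{code}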

But any action on this DataFrame (obtained from _spark.table("tablename")_) runs 
with only 50 RDD partitions. This is happening because of 
[createBucketedReadRDD|https://github.com/apache/spark/blob/branch-2.3/sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala], 
which always creates exactly bucketSpec.numBuckets partitions. So the 600 GB 
dataset is read through only 50 tasks, which makes this partitioning + bucketing 
scheme not useful at all.
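
A quick way to see this (table name illustrative):

{code:scala}
// The scan of the bucketed table produces exactly bucketSpec.numBuckets
// partitions, regardless of data size or spark.sql.files.maxPartitionBytes.
val df = spark.table("mytable")
println(df.rdd.getNumPartitions)   // 50
{code}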

I cannot expose the base directory of the Parquet folder for reading the 
dataset, because the partition locations don't follow a (basePath + partSpec) 
format.
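
Otherwise, reading the files directly with an explicit basePath would give file-split based parallelism, e.g. (paths illustrative):

{code:scala}
// Only works when partition directories sit under basePath as
// basePath/partCol=value/... - not the case for this table - and it also
// loses the bucketing metadata.
val raw = spark.read
  .option("basePath", "hdfs:///warehouse/mytable")
  .parquet("hdfs:///warehouse/mytable/dt=2018-01-01")
{code}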

Meanwhile, are there workarounds to use higher parallelism while reading such a 
table? Let me know if we



