I have a dataset consisting of 50,000 binary files (each between 500 KB and 2 MB), stored in HDFS on a Hadoop cluster. The datanodes of the cluster also serve as the Spark workers. I open the files as an RDD using sc.binaryFiles("hdfs:///path_to_directory"). When I run the first action that involves this RDD, Spark creates an RDD with more than 30,000 partitions, and processing these partitions takes ages, even for a simple "count". Performing a "repartition" directly after loading does not help, because Spark seems to insist on materializing the RDD created by binaryFiles first.
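For reference, a minimal Scala sketch of the pattern described above (the app name and the target partition count of 200 are arbitrary placeholders):

    import org.apache.spark.{SparkConf, SparkContext}

    object BinaryFilesDemo {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("binary-files-demo"))

        // One (path, PortableDataStream) pair per file; with ~50,000 small
        // files this RDD ends up with tens of thousands of partitions.
        val files = sc.binaryFiles("hdfs:///path_to_directory")

        // Even a plain count schedules one task per partition.
        println(files.count())

        // repartition() shuffles, so Spark still materializes the original
        // binaryFiles partitioning before redistributing the data.
        val fewer = files.repartition(200) // arbitrary target
        println(fewer.count())

        sc.stop()
      }
    }

Note that binaryFiles also accepts a minPartitions argument, e.g. sc.binaryFiles(path, 40); depending on the Spark version, this hint is used to derive a maximum split size for the combined input format and may therefore cap the partition count, but I have not verified this on every release.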
How can I get around this?