RE: Repartitioning by partition size, not by number of partitions.
Hi Ilya,

This seems to me like quite a complicated solution. I'm thinking that an easier (though not optimal) approach might be, for example, to heuristically use something like RDD.coalesce(RDD.getNumPartitions() / N), but it still makes me wonder why Spark does not have something like RDD.coalesce(partition_size).
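A minimal PySpark sketch of that heuristic, with the caveat that the input path and the shrink factor N below are placeholders that have to be tuned by hand:

    # Hypothetical sketch: merge roughly N small input files per partition.
    from pyspark import SparkContext

    sc = SparkContext(appName="coalesce-heuristic")
    rdd = sc.textFile("hdfs:///path/to/json/*")   # placeholder input path

    N = 10  # guessed merge factor; tune to stay within executor memory
    target = max(1, rdd.getNumPartitions() // N)
    merged = rdd.coalesce(target)

The factor N still has to be guessed, so this only moves the guess from "number of partitions" to "files per partition" rather than removing it.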
RE: Repartitioning by partition size, not by number of partitions.
Hi Jan. I've actually written a function recently to do precisely that using the RDD.randomSplit function. You just need to calculate how big each element of your data is, then how many elements can fit in each RDD, to populate the input to randomSplit. Unfortunately, in my case I wind up with GC errors on large data doing this and am still debugging :)
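A rough sketch of that randomSplit idea, assuming the per-element size estimate (the length of each line) and the 128 MB target are placeholders rather than what the original function used:

    # Hypothetical sketch: split an RDD into pieces of roughly target_bytes each.
    from pyspark import SparkContext

    sc = SparkContext(appName="split-by-size")
    rdd = sc.textFile("hdfs:///path/to/json/*")           # placeholder input path

    target_bytes = 128 * 1024 * 1024                      # desired size per split
    total_bytes = rdd.map(lambda line: len(line)).sum()   # crude size estimate
    num_splits = max(1, int(total_bytes / target_bytes) + 1)

    # Equal weights give splits of roughly target_bytes each, on average.
    splits = rdd.randomSplit([1.0] * num_splits, seed=42)

Note that randomSplit assigns elements probabilistically, so the pieces only hit the target size on average, and the extra pass over the data to estimate element sizes is itself costly on large inputs.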
Repartitioning by partition size, not by number of partitions.
Hi,

My input data consists of many very small files, each containing a single .json document. For performance reasons (I use PySpark) I have to repartition; currently I do:

    sc.textFile(files).coalesce(100)

The problem is that I have to guess the number of partitions in such a way that the job runs as fast as possible while I still stay on the safe side with RAM, which is quite difficult. For this reason I would like to ask whether there is some way to replace coalesce(100) with something that creates N partitions of a given size. I went through the documentation, but I was not able to find a way to do that.

Thank you in advance for any help or advice.
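For input the driver can read directly, one crude approximation of a coalesce(partition_size) is to sum the file sizes up front and derive the partition count from a target size. The path pattern and the 128 MB target below are placeholders:

    # Hypothetical sketch: derive the coalesce() argument from total input size.
    import glob
    import os
    from pyspark import SparkContext

    sc = SparkContext(appName="size-based-coalesce")

    files = "/data/json/*.json"                   # placeholder path pattern
    target_bytes = 128 * 1024 * 1024              # desired partition size
    total_bytes = sum(os.path.getsize(f) for f in glob.glob(files))
    num_partitions = max(1, total_bytes // target_bytes)

    rdd = sc.textFile(files).coalesce(int(num_partitions))

This only works when the files are visible to the driver (local or shared filesystem); for HDFS the sizes would have to come from the Hadoop FileSystem API instead.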