Specifying a custom Partitioner on RDD creation in Spark 2

2018-04-10 Thread Colin Williams
objectKey => { logInfo(s"working on object: ${objectKey}") byteArrayBuffer.appendAll(S3Util.getBytes(S3Util.getClient(region, S3Util.getCredentialsProvider("INSTANCE", "")), bucket, objectKey)) } ) Source.fromByte

Spark Custom Partitioner not picked

2016-03-06 Thread Prabhu Joseph
Hi All, When I submit a Spark job on YARN with a custom partitioner, it is not picked up by the executors; they still use the default HashPartitioner. I added logs to both HashPartitioner (org/apache/spark/Partitioner.scala) and the custom partitioner. The completed executor logs show
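
A custom partitioner only takes effect when it is passed explicitly to the shuffle operation; anything left to the defaults falls back to hash partitioning, which matches the symptom above. A minimal pyspark sketch (the partitioner and numbers are illustrative, not from the original post; sc is an assumed live SparkContext, and the partitionFunc argument to reduceByKey is available on recent PySpark versions):

    from operator import add

    # Illustrative partitioner: even keys to one partition, odd to the other.
    def my_partitioner(key):
        return key % 2

    pairs = sc.parallelize([(i, 1) for i in range(10)])

    # Default behaviour: portable_hash decides placement; a custom
    # partitioner defined elsewhere is never consulted.
    counts_default = pairs.reduceByKey(add, numPartitions=2)

    # The custom function is used only when passed in explicitly.
    counts_custom = pairs.reduceByKey(add, numPartitions=2,
                                      partitionFunc=my_partitioner)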

Re: map operation clears custom partitioner

2016-02-22 Thread Silvio Fiorito
You can use mapValues to ensure partitioning is not lost. From: Brian London <brianmlon...@gmail.com> Date: Monday, February 22, 2016 at 1:21 PM To: user <user@spark.apache.org> Subject: map operation c
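
A quick way to see the difference (a minimal sketch, assuming a live SparkContext sc): map returns an RDD with no partitioner, because Spark cannot know the keys were left untouched, while mapValues keeps it.

    rdd = sc.parallelize([(i, i) for i in range(8)]).partitionBy(4)

    print(rdd.partitioner)                             # Partitioner object
    print(rdd.map(lambda kv: kv).partitioner)          # None -- partitioning lost
    print(rdd.mapValues(lambda v: v + 1).partitioner)  # preserved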

Re: map operation clears custom partitioner

2016-02-22 Thread Sean Owen
in mapping) On Mon, Feb 22, 2016 at 6:21 PM, Brian London <brianmlon...@gmail.com> wrote: > It appears that when a custom partitioner is applied in a groupBy operation, > it is not propagated through subsequent non-shuffle operations. Is this > intentional? Is there any way to carry cust

map operation clears custom partitioner

2016-02-22 Thread Brian London
It appears that when a custom partitioner is applied in a groupBy operation, it is not propagated through subsequent non-shuffle operations. Is this intentional? Is there any way to carry custom partitioning through maps? I've uploaded a gist that exhibits the behavior. https://gist.github.com

Re: How to use a custom partitioner in a dataframe in Spark

2016-02-18 Thread Koert Kuipers
base. I also need to load them back and need to be able to do a join >>> on userId. My idea is to partition by userId hashcode first and then on >>> userId. >>> >>> >>> >>> On Wed, Feb 17, 2016 at 11:51 AM, Michael Armbrust < >>> m

Re: How to use a custom partitioner in a dataframe in Spark

2016-02-18 Thread Rishi Mishra
16 at 11:51 AM, Michael Armbrust < >> mich...@databricks.com> wrote: >> >>> Can you describe what you are trying to accomplish? What would the >>> custom partitioner be? >>> >>> On Tue, Feb 16, 2016 at 1:21 PM, SRK <swethakasire...@gmail.com> w

Re: How to use a custom partitioner in a dataframe in Spark

2016-02-17 Thread swetha kasireddy
ed, Feb 17, 2016 at 11:51 AM, Michael Armbrust <mich...@databricks.com > > wrote: > >> Can you describe what you are trying to accomplish? What would the >> custom partitioner be? >> >> On Tue, Feb 16, 2016 at 1:21 PM, SRK <swethakasire...@gmail.com> wrote:

Re: How to use a custom partitioner in a dataframe in Spark

2016-02-17 Thread swetha kasireddy
ich...@databricks.com> wrote: > Can you describe what you are trying to accomplish? What would the custom > partitioner be? > > On Tue, Feb 16, 2016 at 1:21 PM, SRK <swethakasire...@gmail.com> wrote: > >> Hi, >> >> How do I use a custom partitioner when I do

Re: How to use a custom partitioner in a dataframe in Spark

2016-02-17 Thread Rishi Mishra
fore storing in table. Regards, Rishitesh Mishra, SnappyData . (http://www.snappydata.io/) https://in.linkedin.com/in/rishiteshmishra On Tue, Feb 16, 2016 at 11:51 PM, SRK <swethakasire...@gmail.com> wrote: > Hi, > > How do I use a custom partitioner when I do a saveAsTable in a data

How to use a custom partitioner in a dataframe in Spark

2016-02-16 Thread SRK
Hi, How do I use a custom partitioner when I do a saveAsTable on a DataFrame? Thanks, Swetha -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/How-to-use-a-custom-partitioner-in-a-dataframe-in-Spark-tp26240.html
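
For reference, DataFrames do not accept RDD-style custom partitioners. The closest equivalents are repartitioning on the relevant column before writing, or, on newer Spark versions, bucketing the table. A hedged sketch, assuming a DataFrame df with a userId column (table names are illustrative):

    # Co-locate rows with the same userId (hash-based) before writing.
    df.repartition(16, "userId").write.saveAsTable("events")

    # Newer Spark (2.3+ in PySpark): bucketing persists the hash
    # partitioning so later joins on userId can avoid a shuffle.
    df.write.bucketBy(16, "userId").sortBy("userId").saveAsTable("events_bucketed")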

Re: python rdd.partitionBy(): any examples of a custom partitioner?

2015-12-07 Thread Fengdong Yu
Refer here: https://www.safaribooksonline.com/library/view/learning-spark/9781449359034/ch04.html, section "Example 4-27. Python custom partitioner". > On Dec 8, 2015, at 10:07 AM, Keith Freeman <8fo...@gmail.com> wrote: > > I'm not a python expert, so I'm wondering
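
In the spirit of that Learning Spark example (reconstructed from memory, not quoted verbatim): partition URL keys by domain, so pages from the same host land in the same partition.

    from urllib.parse import urlparse  # urlparse.urlparse on Python 2

    def hash_domain(url):
        # Same domain -> same partition; PySpark applies % numPartitions.
        return hash(urlparse(url).netloc)

    links = sc.parallelize([("http://spark.apache.org/docs", 1),
                            ("http://spark.apache.org/sql", 2)])
    partitioned = links.partitionBy(20, hash_domain)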

python rdd.partitionBy(): any examples of a custom partitioner?

2015-12-07 Thread Keith Freeman
I'm not a python expert, so I'm wondering if anybody has a working example of a partitioner for the "partitionFunc" argument (default "portable_hash") to rdd.partitionBy()?

Does using Custom Partitioner before calling reduceByKey improve performance?

2015-10-27 Thread swetha
in context: http://apache-spark-user-list.1001560.n3.nabble.com/Does-using-Custom-Partitioner-before-calling-reduceByKey-improve-performance-tp25214.html

Re: Does using Custom Partitioner before calling reduceByKey improve performance?

2015-10-27 Thread Tathagata Das
If you just want to control the number of reducers, then setting numPartitions is sufficient. If you want to control the exact partitioning scheme (that is, some scheme other than hash-based), then you need to implement a custom partitioner. It can be used to mitigate data skew, etc., which
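
In code, the distinction looks roughly like this (a sketch, not from the original thread; sc is an assumed SparkContext):

    from operator import add

    pairs = sc.parallelize([(i % 100, 1) for i in range(10000)])

    # Controlling only the reducer count: hash partitioning, 8 partitions.
    counts = pairs.reduceByKey(add, numPartitions=8)

    # Controlling the scheme itself, e.g. giving one known hot key a
    # dedicated partition (illustrative only).
    def skew_aware(key):
        return 0 if key == 42 else 1 + (hash(key) % 7)

    repartitioned = pairs.partitionBy(8, skew_aware)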

Re: Does using Custom Partitioner before calling reduceByKey improve performance?

2015-10-27 Thread swetha kasireddy
So, wouldn't using a custom partitioner on the RDD upon which the groupByKey or reduceByKey is performed avoid shuffles and improve performance? My code does groupByAndSort and reduceByKey on different datasets as shown below. Would using a custom partitioner on those datasets before using
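
Pre-partitioning only helps if the downstream operation uses the same partitioner. In pyspark that means passing the same numPartitions and the same partitionFunc; note that Python compares functions by identity, so the very same function object must be reused. A hedged sketch (raw is an assumed pair RDD):

    from operator import add

    def by_user(key):
        return hash(key)

    sessions = raw.partitionBy(32, by_user)  # one shuffle here

    # Partitioner equality holds (same numPartitions, same function
    # object), so this reduceByKey can reuse the existing layout
    # instead of shuffling again.
    totals = sessions.reduceByKey(add, numPartitions=32, partitionFunc=by_user)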

Re: Does using Custom Partitioner before calling reduceByKey improve performance?

2015-10-27 Thread Tathagata Das
ey on different > datasets as shown below. Would using a custom partitioner on those datasets > before using a groupByKey or reduceByKey improve performance? My idea is > to avoid shuffles and improve performance. Also, right now I see a lot of > spills when there is a very l

Re: Does using Custom Partitioner before calling reduceByKey improve performance?

2015-10-27 Thread Tathagata Das
2015 at 6:35 PM, swetha kasireddy < >> swethakasire...@gmail.com> wrote: >> >>> So, Wouldn't using a customPartitioner on the rdd upon which the >>> groupByKey or reduceByKey is performed avoid shuffles and improve >>> performance? My code does groupBy

Re: Does using Custom Partitioner before calling reduceByKey improve performance?

2015-10-27 Thread swetha kasireddy
using a customPartitioner on the rdd upon which the >> groupByKey or reduceByKey is performed avoid shuffles and improve >> performance? My code does groupByAndSort and reduceByKey on different >> datasets as shown below. Would using a custom partitioner on those datasets >> befo

Re: Custom Partitioner

2015-09-02 Thread Jem Tucker
le in pyspark, so if we want >> >> create one. how should we create that. my question is that. >> >> >> >> On Tue, Sep 1, 2015 at 3:57 PM, Jem Tucker <jem.tuc...@gmail.com> >> wrote: >> >>> >> >>> Ah sorry I miss read yo

Re: Custom Partitioner

2015-09-02 Thread shahid ashraf
> >> On Tue, Sep 1, 2015 at 3:57 PM, Jem Tucker <jem.tuc...@gmail.com> > wrote: > >>> > >>> Ah sorry I miss read your question. In pyspark it looks like you just > >>> need to instantiate the Partitioner class with numPartitions and >

Re: Custom Partitioner

2015-09-01 Thread Davies Liu
ur question. In pyspark it looks like you just >>> need to instantiate the Partitioner class with numPartitions and >>> partitionFunc. >>> >>> On Tue, Sep 1, 2015 at 11:13 AM shahid ashraf <sha...@trialx.com> wrote: >>>> >>>> Hi >>>> >>

Re: Custom Partitioner

2015-09-01 Thread Jem Tucker
Hi, You just need to extend Partitioner and override the numPartitions and getPartition methods, see below:

    class MyPartitioner extends Partitioner {
      def numPartitions: Int = ???          // return the number of partitions
      def getPartition(key: Any): Int = ??? // return the partition for a given key
    }

On Tue,

Re: Custom Partitioner

2015-09-01 Thread Jem Tucker
Ah sorry, I misread your question. In pyspark it looks like you just need to instantiate the Partitioner class with numPartitions and partitionFunc. On Tue, Sep 1, 2015 at 11:13 AM shahid ashraf <sha...@trialx.com> wrote: > Hi > > I did not get this, e.g if i need to create a cus
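
In practice you do not instantiate pyspark's Partitioner yourself; you hand numPartitions and a plain key function to partitionBy, and the resulting RDD records them as its partitioner. A small sketch (assuming a live SparkContext sc):

    def first_letter(key):
        return ord(key[0])

    rdd = sc.parallelize([("apple", 1), ("banana", 2)]).partitionBy(4, first_letter)
    print(rdd.partitioner.numPartitions)  # 4
    print(rdd.partitioner.partitionFunc)  # <function first_letter ...>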

Custom Partitioner

2015-09-01 Thread shahid qadri
Hi Sparkians, How can we create a custom partitioner in pyspark?

Re: Custom Partitioner

2015-09-01 Thread shahid ashraf
Hi, I did not get this, e.g. if I need to create a custom partitioner like a range partitioner. On Tue, Sep 1, 2015 at 3:22 PM, Jem Tucker <jem.tuc...@gmail.com> wrote: > Hi, > > You just need to extend Partitioner and override the numPartitions and > getPartition methods, s
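
For a range-style partitioner in pyspark, one hedged sketch is a partitionFunc that bisects a precomputed list of boundary keys (the boundaries here are made up; a real range partitioner, like Scala's RangePartitioner, would sample the data to pick them):

    import bisect

    boundaries = [100, 200, 300]  # illustrative split points

    def range_partitioner(key):
        # Partition 0: key < 100; 1: 100 <= key < 200; ...; 3: key >= 300.
        return bisect.bisect_right(boundaries, key)

    rdd = sc.parallelize([(k, None) for k in range(400)])
    ranged = rdd.partitionBy(len(boundaries) + 1, range_partitioner)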

Re: Custom Partitioner

2015-09-01 Thread shahid ashraf
just need > to instantiate the Partitioner class with numPartitions and partitionFunc. > > On Tue, Sep 1, 2015 at 11:13 AM shahid ashraf <sha...@trialx.com> wrote: > >> Hi >> >> I did not get this, e.g if i need to create a custom partitioner like >> range partitione

Re: Custom Partitioner

2015-09-01 Thread Jem Tucker
tion. In pyspark it looks like you just >> need to instantiate the Partitioner class with numPartitions and >> partitionFunc. >> >> On Tue, Sep 1, 2015 at 11:13 AM shahid ashraf <sha...@trialx.com> wrote: >> >>> Hi >>> >>> I did not ge

Custom partitioner

2015-07-26 Thread Hafiz Mujadid
. Thanks -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Custom-partitioner-tp24001.html

Re: Custom partitioner

2015-07-26 Thread Ted Yu
in this regard. Thanks -- View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Custom-partitioner-tp24001.html

Python Custom Partitioner

2015-05-04 Thread ayan guha
Hi, Can someone share some working code for a custom partitioner in python? I am trying to understand it better. Here is the documentation: partitionBy(numPartitions, partitionFunc=portable_hash) https://spark.apache.org/docs/1.3.1/api/python/pyspark.html
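
A minimal working example of that signature (assuming a live SparkContext sc); glom() collects each partition into a list so you can see where the keys landed:

    pairs = sc.parallelize([("a", 1), ("b", 2), ("c", 3), ("a", 4)])

    # Custom partitionFunc: key "a" gets partition 0, everything else
    # partition 1 (PySpark applies % numPartitions to the result).
    def mine(key):
        return 0 if key == "a" else 1

    print(pairs.partitionBy(2, mine).glom().collect())
    # [[('a', 1), ('a', 4)], [('b', 2), ('c', 3)]]  (order within a partition may vary)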

Re: Python Custom Partitioner

2015-05-04 Thread ๏̯͡๏
I have implemented a map-side join with broadcast variables and the code is on the mailing list (Scala). On Mon, May 4, 2015 at 8:38 PM, ayan guha guha.a...@gmail.com wrote: Hi Can someone share some working code for custom partitioner in python? I am trying to understand it better. Here
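
A pyspark sketch of that map-side-join idea (my reconstruction, not the Scala code referred to above; big is an assumed pair RDD): broadcast the small side as a dict and look it up per record, so no shuffle is needed at all.

    small = dict(sc.parallelize([("a", "x"), ("b", "y")]).collect())
    bsmall = sc.broadcast(small)

    # Each record joins against the broadcast dict locally.
    joined = big.map(lambda kv: (kv[0], (kv[1], bsmall.value.get(kv[0]))))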

Re: Python Custom Partitioner

2015-05-04 Thread ayan guha
someone share some working code for a custom partitioner in python? I am trying to understand it better. Here is the documentation: partitionBy(numPartitions, partitionFunc=portable_hash) https://spark.apache.org/docs/1.3.1/api/python/pyspark.html#pyspark.RDD.partitionBy