Re: Partitioning in spark streaming

2015-08-12 Thread Tathagata Das
Yes.
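
For the archive, a minimal sketch of what that looks like (the socket source, host, port, and HDFS path below are made up): each batch is written with DStream.saveAsTextFiles, which creates one directory per batch and one part file per partition of that batch's RDD.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object SaveToHdfs {
      def main(args: Array[String]): Unit = {
        val ssc = new StreamingContext(new SparkConf().setAppName("SaveToHdfs"), Seconds(10))

        // Hypothetical source: text lines arriving on a socket.
        val lines = ssc.socketTextStream("localhost", 9999)

        // Each batch is saved under a directory named <prefix>-<batchTimeMs>.<suffix>,
        // containing one part file per partition of that batch's RDD.
        lines.saveAsTextFiles("hdfs:///data/streams/events", "txt")

        ssc.start()
        ssc.awaitTermination()
      }
    }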

On Wed, Aug 12, 2015 at 12:12 PM, Mohit Anchlia mohitanch...@gmail.com
wrote:

 Thanks! To write to HDFS, do I need to use the saveAs method?

 On Wed, Aug 12, 2015 at 12:01 PM, Tathagata Das t...@databricks.com
 wrote:

 This is how Spark does it: it writes the task output to a uniquely named
 temporary file and then, after the task completes successfully, atomically
 renames the temp file to the expected file name (file/partition-XXX).
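
A rough sketch of that commit pattern, using the Hadoop FileSystem API directly rather than Spark's actual output committer (the names and paths are invented for illustration):

    import java.util.UUID
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    // Illustration of the write-to-temp-then-rename pattern, not Spark's own code.
    def commitPartition(outputDir: String, partitionId: Int, bytes: Array[Byte]): Unit = {
      val fs  = FileSystem.get(new Configuration())
      val tmp = new Path(s"$outputDir/_temporary/${UUID.randomUUID()}")
      val dst = new Path(f"$outputDir/partition-$partitionId%05d")

      // Write the task output under a uniquely named temporary path first.
      val out = fs.create(tmp)
      try out.write(bytes) finally out.close()

      // Only after the write succeeds is the file moved into place; the rename
      // is atomic on HDFS, so readers never observe a half-written file.
      if (!fs.rename(tmp, dst)) sys.error(s"could not commit $dst")
    }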


 On Tue, Aug 11, 2015 at 9:53 PM, Mohit Anchlia mohitanch...@gmail.com
 wrote:

 Thanks for the info. When data is written to HDFS, how does Spark keep the
 filenames written by multiple executors unique?

 On Tue, Aug 11, 2015 at 9:35 PM, Hemant Bhanawat hemant9...@gmail.com
 wrote:

 Posting a comment from my previous mail:

 When data is received from a stream source, the receiver creates blocks of
 data. A new block of data is generated every blockInterval milliseconds, so N
 blocks of data are created during the batchInterval, where N =
 batchInterval/blockInterval. An RDD is created on the driver for the blocks
 created during the batchInterval, and those blocks become the partitions of
 the RDD.

 Now if you want to repartition based on a key, a shuffle is needed.

 On Wed, Aug 12, 2015 at 4:36 AM, Mohit Anchlia mohitanch...@gmail.com
 wrote:

 How does partitioning in Spark work when it comes to streaming? What's the
 best way to partition time series data grouped by a certain tag, like product
 categories (video, music, etc.)?








Re: Partitioning in spark streaming

2015-08-11 Thread Mohit Anchlia
I am also trying to understand how files are named when writing to Hadoop.
For example, how does the saveAs method ensure that each executor generates
unique files?

On Tue, Aug 11, 2015 at 4:21 PM, ayan guha guha.a...@gmail.com wrote:

 Partitioning, by itself, is a property of an RDD, so essentially it is no
 different in the case of streaming, where each batch is one RDD. You can use
 partitionBy on the RDD and pass your custom partitioner to it.

 One thing you should consider is how balanced your partitions are, i.e. your
 partitioning scheme should not skew too much data into one partition.

 Best
 Ayan

 On Wed, Aug 12, 2015 at 9:06 AM, Mohit Anchlia mohitanch...@gmail.com
 wrote:

 How does partitioning in Spark work when it comes to streaming? What's the
 best way to partition time series data grouped by a certain tag, like product
 categories (video, music, etc.)?




 --
 Best Regards,
 Ayan Guha



Partitioning in spark streaming

2015-08-11 Thread Mohit Anchlia
How does partitioning in Spark work when it comes to streaming? What's the
best way to partition time series data grouped by a certain tag, like product
categories (video, music, etc.)?


Re: Partitioning in spark streaming

2015-08-11 Thread ayan guha
Partitioning, by itself, is a property of an RDD, so essentially it is no
different in the case of streaming, where each batch is one RDD. You can use
partitionBy on the RDD and pass your custom partitioner to it.

One thing you should consider is how balanced your partitions are, i.e. your
partitioning scheme should not skew too much data into one partition.
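
As a sketch of what that can look like for the category use case in the original question (the socket source, port, category list, and input format are made up):

    import org.apache.spark.{Partitioner, SparkConf}
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // Hypothetical partitioner: one partition per known product category,
    // plus a catch-all partition for anything else.
    class CategoryPartitioner(categories: Seq[String]) extends Partitioner {
      private val index = categories.zipWithIndex.toMap
      override def numPartitions: Int = categories.size + 1
      override def getPartition(key: Any): Int =
        index.getOrElse(key.toString, categories.size)
    }

    object PartitionByCategory {
      def main(args: Array[String]): Unit = {
        val ssc = new StreamingContext(new SparkConf().setAppName("PartitionByCategory"), Seconds(2))

        // Hypothetical input: "category,payload" lines; key each record by its category tag.
        val events = ssc.socketTextStream("localhost", 9999)
          .map(line => (line.split(",")(0), line))

        // partitionBy shuffles each batch's RDD so that all records for a
        // category land in the same partition.
        val byCategory = events.transform(rdd =>
          rdd.partitionBy(new CategoryPartitioner(Seq("video", "music", "books"))))

        byCategory.foreachRDD(rdd => println(s"partitions: ${rdd.partitions.length}"))

        ssc.start()
        ssc.awaitTermination()
      }
    }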

Best
Ayan

On Wed, Aug 12, 2015 at 9:06 AM, Mohit Anchlia mohitanch...@gmail.com
wrote:

 How does partitioning in Spark work when it comes to streaming? What's the
 best way to partition time series data grouped by a certain tag, like product
 categories (video, music, etc.)?




-- 
Best Regards,
Ayan Guha


Re: Partitioning in spark streaming

2015-08-11 Thread Hemant Bhanawat
Posting a comment from my previous mail:

When data is received from a stream source, the receiver creates blocks of
data. A new block of data is generated every blockInterval milliseconds, so N
blocks of data are created during the batchInterval, where N =
batchInterval/blockInterval. An RDD is created on the driver for the blocks
created during the batchInterval, and those blocks become the partitions of
the RDD.

Now if you want to repartition based on a key, a shuffle is needed.
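
To make the arithmetic concrete, a minimal sketch (source, host, and port are invented; the "200ms" value format assumes a Spark version that accepts time suffixes, otherwise pass plain milliseconds): with a 2 second batch interval and a 200 ms block interval, each batch RDD from a receiver should end up with roughly 2000/200 = 10 partitions.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object BlockIntervalDemo {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("BlockIntervalDemo")
          .set("spark.streaming.blockInterval", "200ms") // 200ms is also the default

        // batchInterval = 2 s, blockInterval = 200 ms => about 10 blocks,
        // i.e. about 10 partitions, per batch RDD of the receiver stream.
        val ssc   = new StreamingContext(conf, Seconds(2))
        val lines = ssc.socketTextStream("localhost", 9999)

        lines.foreachRDD(rdd => println(s"partitions in this batch: ${rdd.partitions.length}"))

        ssc.start()
        ssc.awaitTermination()
      }
    }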

On Wed, Aug 12, 2015 at 4:36 AM, Mohit Anchlia mohitanch...@gmail.com
wrote:

 How does partitioning in Spark work when it comes to streaming? What's the
 best way to partition time series data grouped by a certain tag, like product
 categories (video, music, etc.)?



Re: Partitioning in spark streaming

2015-08-11 Thread Mohit Anchlia
Thanks for the info. When data is written to HDFS, how does Spark keep the
filenames written by multiple executors unique?

On Tue, Aug 11, 2015 at 9:35 PM, Hemant Bhanawat hemant9...@gmail.com
wrote:

 Posting a comment from my previous mail:

 When data is received from a stream source, the receiver creates blocks of
 data. A new block of data is generated every blockInterval milliseconds, so N
 blocks of data are created during the batchInterval, where N =
 batchInterval/blockInterval. An RDD is created on the driver for the blocks
 created during the batchInterval, and those blocks become the partitions of
 the RDD.

 Now if you want to repartition based on a key, a shuffle is needed.

 On Wed, Aug 12, 2015 at 4:36 AM, Mohit Anchlia mohitanch...@gmail.com
 wrote:

 How does partitioning in Spark work when it comes to streaming? What's the
 best way to partition time series data grouped by a certain tag, like product
 categories (video, music, etc.)?