How many partitions are you seeing for that job? You can try doing a
dstream.repartition to see if it increases from 11 to a higher number.
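For example, something like this (a minimal sketch; the app name, stream
source, and partition counts are illustrative assumptions, not from this
thread):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val sparkConf = new SparkConf().setAppName("RepartitionSketch") // hypothetical name
    val ssc = new StreamingContext(sparkConf, Seconds(10))
    // Repartition each batch so more tasks run in parallel per interval.
    val lines = ssc.socketTextStream("localhost", 9999) // hypothetical source
    val wider = lines.repartition(22)                   // e.g. double the 11 tasks seen now
    wider.foreachRDD(rdd => println(rdd.partitions.length))
    ssc.start()
    ssc.awaitTermination()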
Thanks
Best Regards
On Thu, Aug 20, 2015 at 2:28 AM, swetha wrote:
Hi,
How to set the number of executors and tasks in a Spark Streaming job in
Mesos? I have the following settings, but my job still shows 11 active
tasks and 11 executors. Any idea why this is happening?
sparkConf.set("spark.mesos.coarse", "true")
sparkConf.
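For context, here is a minimal sketch of the kind of settings that control
parallelism in coarse-grained Mesos mode (these keys and values are
assumptions for illustration, not swetha's actual config):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .setAppName("StreamingOnMesos")          // hypothetical name
      .set("spark.mesos.coarse", "true")       // long-lived executors instead of per-task ones
      .set("spark.cores.max", "22")            // caps total cores across all executors
      .set("spark.executor.memory", "4g")      // memory per executor
      .set("spark.default.parallelism", "22")  // default task count for shuffle stages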
This one would give you a better understanding:
http://stackoverflow.com/questions/24622108/apache-spark-the-number-of-cores-vs-the-number-of-executors
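For a quick illustration, these are the knobs that discussion is about (the
values below are assumptions, not taken from the post; spark.executor.instances
applies on YARN):

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.executor.instances", "17") // number of executor processes (YARN)
      .set("spark.executor.cores", "5")      // cores per executor
      .set("spark.executor.memory", "19g")   // heap per executor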
Thanks
Best Regards
On Wed, Nov 26, 2014 at 10:32 PM, Akhil Das wrote:
1. On HDFS, files are split into ~64MB blocks. When you put the same file
on a local file system (ext3/ext4), the split size is different (in your
case it looks like ~32MB), and that's why you are seeing 9 output files
(see the sketch below).
2. You could set *num-executors* to increase the number of executor
processes.
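A quick way to see point 1 in action, assuming sc is a SparkContext as in
spark-shell (the paths are hypothetical):

    // The partition count of a text file defaults to its number of input splits.
    val hdfsRdd  = sc.textFile("hdfs:///data/input.txt")
    val localRdd = sc.textFile("file:///data/input.txt")
    println(hdfsRdd.partitions.length)  // roughly fileSize / 64MB
    println(localRdd.partitions.length) // roughly fileSize / 32MB in this case
    // minPartitions can raise (but not lower) the number of splits:
    val moreSplits = sc.textFile("hdfs:///data/input.txt", 20)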
Hi,
I am running Spark in standalone mode.
1) I have a file of 286MB in HDFS (block size is 64MB), so it is split into
5 blocks. When I have the file in HDFS, 5 tasks are generated and so 5
files in the output. My understanding is that there will be a separate
partition for each block and thus a separate task for each partition.
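For what it's worth, the one-output-file-per-partition behaviour can be seen
directly (paths here are hypothetical):

    val rdd = sc.textFile("hdfs:///data/input-286mb.txt") // 5 x 64MB blocks -> 5 partitions
    rdd.saveAsTextFile("hdfs:///data/out") // writes part-00000 .. part-00004, one per task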