Hi all,

We have a Spark Streaming job which reads from two Kafka topics with 10
partitions each, and we run the streaming job with 3 concurrent
microbatches (so 20 partitions in total and a concurrency of 3).

We have the following question:

In our processing DAG, we call rdd.persist() at one stage, after which we
fork the DAG into two branches, each of which ends with an action (a
forEach). In this setup we observe that the number of executors does not
exceed the number of input Kafka partitions: the job never spawns more
than 60 executors (2*10*3). We also see that the tasks from the two
actions and the 3 concurrent microbatches compete with each other for
resources, so even though the maximum processing time of a single task is
'x', the overall processing time of the stage is much greater than 'x'.
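
For reference, here is a minimal sketch of the DAG shape described above
(stream, parse, writeToSinkA and writeToSinkB are placeholder names for
illustration, not our actual code):

    import org.apache.spark.storage.StorageLevel

    stream.foreachRDD { rdd =>
      // shared upstream stage, cached before the fork
      val processed = rdd.map(parse)
      processed.persist(StorageLevel.MEMORY_ONLY)

      // the two forks, each ending in its own action
      processed.foreach(writeToSinkA)
      processed.foreach(writeToSinkB)

      processed.unpersist()
    }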

Is there a way to ensure that the two forks of the DAG are processed in
parallel by spawning more executors?
(We have not set any cap on maxExecutors.)

Following are the job configurations:
spark.dynamicAllocation.enabled: true
spark.dynamicAllocation.minExecutors: NOT_SET
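
(For reference, these translate to roughly the following spark-submit
flags; minExecutors and maxExecutors are simply left unset, and the
concurrentJobs line is an assumption about how the 3 concurrent
microbatches are enabled. The jar name is a placeholder:)

    spark-submit \
      --conf spark.dynamicAllocation.enabled=true \
      --conf spark.streaming.concurrentJobs=3 \
      streaming-job.jar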

Please let us know if you have any ideas that can be useful here.

Thanks,
-Vibhor
