GitHub user elyast commented on the pull request:

    https://github.com/apache/spark/pull/5063#issuecomment-84059146
  
    Hi @sryza @jongyoul,
    To give an illustration: say I have 10 nodes with 64 cores each, and
10 streaming jobs running with a 1-minute window (so every minute they
launch tasks), with each of those 10 streaming jobs running 10 executors.
    
    Say processing each batch takes 10 seconds. At the end of the day you
end up with 100 cores reserved on Mesos (10 jobs x 10 executors x 1 core
each) for as long as the streaming applications run, even though each
executor is busy only about 10 of every 60 seconds and does no useful
work most of the time. Currently there is no way to tweak how many cores
an executor is given; you might even want to assign just 0.1 CPU per
executor for better utilization of resources, as sketched below.
    
    Let me know what you think.

