Github user mengxr commented on the pull request:

    https://github.com/apache/spark/pull/1124#issuecomment-46530221
  
    For the first scenario, it won't make performance worse, because the 
system doesn't really work now for serialized task results of sizes between 10M 
and spark.akka.frameSize. But yes, the ideal solution is to get the conf and 
set the right frame size. Maybe we can first request the conf from the driver, 
and then create a new ActorSystem on the backend with the correct frame size. 
This would save us from thinking about different deploy modes.
    
    Any ActorSystem created via `AkkaUtils.createActorSystem` has a max frame 
size of at least 10M, so it won't hang jobs.
    
    I'm now testing whether the `- 1024` margin in SchedulerBackend is risky.
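    For concreteness, here's a minimal sketch of the kind of margin check 
under discussion: a serialized task result fits in a direct Akka message only 
if it is no larger than the max frame size minus a small reserved overhead, 
which is what the `- 1024` provides. All names and values below are 
illustrative assumptions, not Spark's actual identifiers or constants.

```scala
// Hypothetical sketch: `FrameSizeCheck`, `AkkaFrameSizeBytes`, and
// `ReservedBytes` are made-up names for illustration only.
object FrameSizeCheck {
  val AkkaFrameSizeBytes: Long = 10L * 1024 * 1024 // assumed 10M minimum frame size
  val ReservedBytes: Long = 1024                   // the "- 1024" overhead margin

  // A result can be sent directly only if it fits under the margin;
  // anything larger would need an indirect path instead.
  def fitsInFrame(serializedResultSize: Long): Boolean =
    serializedResultSize <= AkkaFrameSizeBytes - ReservedBytes
}
```

The point of the margin is that the frame also carries message framing 
overhead, so comparing the raw payload size against the full frame size 
would be off by that overhead.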

