Github user GavinGavinNo1 commented on the pull request:

    https://github.com/apache/spark/pull/8713#issuecomment-139702061
  
    Thank you very much for your comment. I think I haven't quite understood 
what you mean by the ability to connect to multiple metastores. One HiveContext 
can only connect to one metastore, right? Or do you mean creating multiple 
HiveContexts, each connecting to a different metastore, with one SparkContext 
in one JVM? If so, it will lead to the same JVM OOM problem in theory.
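    To make sure we're talking about the same thing, here is a rough sketch of 
the pattern I mean, against the Spark 1.x API. The metastore URIs are made up, 
and whether setting `hive.metastore.uris` per HiveContext actually isolates the 
metastores in one JVM is exactly what I'm unsure about:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setAppName("multi-metastore"))

// Each HiveContext carries its own metastore client state; keeping
// several of them alive in one JVM is what risks the OOM in theory.
val hiveCtxA = new HiveContext(sc)
hiveCtxA.setConf("hive.metastore.uris", "thrift://metastore-a:9083")  // illustrative URI

val hiveCtxB = new HiveContext(sc)
hiveCtxB.setConf("hive.metastore.uris", "thrift://metastore-b:9083")  // illustrative URI
```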
    We used Spark 1.3.1 formerly. As you know, dynamic allocation isn't 
supported in standalone mode. We have several apps, and each one launches 
scheduled tasks using HiveContext. Due to limited hardware resources, we must 
stop the SparkContext to release CPU and memory when a task is done. Spark 
1.4.1 brings many new features, and we want to switch to this version. 
However, the problems mentioned in my issue cause us a lot of trouble.
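    For context, our workflow looks roughly like the sketch below (app name 
and query are placeholders): each scheduled task brings up its own 
SparkContext and stops it afterwards so the cluster gets its CPU and memory 
back, since standalone mode has no dynamic allocation here:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

def runScheduledTask(): Unit = {
  val sc = new SparkContext(new SparkConf().setAppName("scheduled-task"))
  try {
    val hiveCtx = new HiveContext(sc)
    hiveCtx.sql("SELECT ...").collect()  // illustrative query only
  } finally {
    sc.stop()  // release executor CPU and memory back to the cluster
  }
}
```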

