[ https://issues.apache.org/jira/browse/HIVE-7593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14104308#comment-14104308 ]
Hive QA commented on HIVE-7593:
-------------------------------
{color:red}Overall{color}: -1 at least one test failed
Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12662806/HIVE-7593.1-spark.patch
{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 5958 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_opt_vectorization
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_fs_default_name2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union_null
{noformat}
Test results:
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/67/testReport
Console output:
http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/67/console
Test logs:
http://ec2-54-176-176-199.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-67/
Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}
This message is automatically generated.
ATTACHMENT ID: 12662806
> Instantiate SparkClient per user session [Spark Branch]
> -------------------------------------------------------
>
> Key: HIVE-7593
> URL: https://issues.apache.org/jira/browse/HIVE-7593
> Project: Hive
> Issue Type: Sub-task
> Components: Spark
> Reporter: Xuefu Zhang
> Assignee: Chinna Rao Lalam
> Attachments: HIVE-7593-spark.patch, HIVE-7593.1-spark.patch
>
>
> SparkContext is the main class via which Hive talks to the Spark cluster.
> SparkClient encapsulates a SparkContext instance. Currently, all user sessions
> share a single SparkClient instance in HiveServer2. While this is good enough
> for a POC, and even for our first two milestones, it is not desirable in a
> multi-tenancy environment and gives Hive users the least flexibility. Here is
> what we propose:
> 1. Have a SparkClient instance per user session (sketched in the example at
> the end of this description). The SparkClient instance is created when the
> user executes the first query in the session, and it is destroyed when the
> user session ends.
> 2. The SparkClient is instantiated based on the Spark configurations available
> to the user, including those defined at the global level and those overridden
> by the user (through the set command, for instance).
> 3. Ideally, when the user changes any Spark configuration during the session,
> the old SparkClient instance should be destroyed and a new one created from
> the new configuration. This may turn out to be a little hard, and thus it's a
> "nice-to-have". If it's not implemented, we need to document that subsequent
> configuration changes will not take effect in the current session.
> Please note that there is a thread-safety issue on the Spark side where
> multiple SparkContext instances cannot coexist in the same JVM (SPARK-2243).
> We need to work with the Spark community to get this addressed.
> Besides the above functional requirements, avoiding potential issues is also a
> consideration. For instance, sharing a SparkContext among users is bad, as
> resources (such as jars for UDFs) would also be shared, which is problematic.
> On the other hand, one SparkContext per job seems too expensive, as the
> resources would need to be re-registered even when there isn't any change.
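> As a rough illustration of the per-session lifecycle proposed above, here is a
> minimal sketch. The SparkClientManager class and the SparkClient constructor
> and close() method shown here are assumptions for illustration only, not
> existing Hive or Spark APIs:
> {noformat}
> // Hypothetical sketch: SparkClientManager and the SparkClient
> // constructor/close() are assumptions, not existing Hive APIs.
> import java.util.Map;
> import java.util.concurrent.ConcurrentHashMap;
>
> public class SparkClientManager {
>   // One SparkClient per user session, keyed by session id (point 1).
>   private final Map<String, SparkClient> clients = new ConcurrentHashMap<>();
>
>   // Lazily create the client on the session's first query, from the
>   // configurations visible to that user (points 1 and 2).
>   public SparkClient getClient(String sessionId, Map<String, String> sparkConf) {
>     return clients.computeIfAbsent(sessionId, id -> new SparkClient(sparkConf));
>   }
>
>   // "Nice-to-have" (point 3): rebuild the client when Spark configs change.
>   public void onConfigChange(String sessionId, Map<String, String> newConf) {
>     closeSession(sessionId);
>     clients.put(sessionId, new SparkClient(newConf));
>   }
>
>   // Destroy the client when the user session ends (point 1).
>   public void closeSession(String sessionId) {
>     SparkClient client = clients.remove(sessionId);
>     if (client != null) {
>       client.close(); // releases the underlying SparkContext
>     }
>   }
> }
> {noformat}
> Note that until SPARK-2243 is addressed, multiple SparkContext instances
> cannot coexist in the same JVM, so a real implementation cannot simply create
> one SparkContext per session within a single HiveServer2 process.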
--
This message was sent by Atlassian JIRA
(v6.2#6252)