Patrick Liu created SPARK-3544:
----------------------------------

             Summary: SparkSQL thriftServer cannot release locks correctly in 
Zookeeper
                 Key: SPARK-3544
                 URL: https://issues.apache.org/jira/browse/SPARK-3544
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 1.1.0
            Reporter: Patrick Liu
            Priority: Critical


Bug description:
The thriftServer cannot release locks correctly in Zookeeper.
Once a thriftServer is started, the first SQL statement submitted via Beeline or 
JDBC that requires locks completes successfully.
However, the second SQL statement requiring locks is blocked, and the thriftServer 
log shows: INFO Driver: <PERFLOG method=acquireReadWriteLocks>.

Two tests to illustrate the problem:
Test 1:
(0) Start thriftServer & use beeline to connect to it.
(1) Switch database (requires locks); (OK)
(2) Drop table (requires locks): "drop table src"; (Blocked)

Test 2:
(0) Start thriftServer & use beeline to connect to it.
(1) Drop table (requires locks): "drop table src"; (OK)
(2) Drop another table (requires locks): "drop table src2"; (Blocked)
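
The sequence in Test 2 can be driven from a single Beeline session; a minimal 
sketch (the JDBC URL and port 10000 are assumed defaults, adjust for your 
deployment):

```shell
# Reproduce Test 2 against a running thriftServer.
# The connection URL below is an assumption (HiveServer2 default port).
beeline -u jdbc:hive2://localhost:10000 <<'SQL'
drop table src;   -- first lock-requiring statement: completes (OK)
drop table src2;  -- second one: hangs at <PERFLOG method=acquireReadWriteLocks>
SQL
```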

Basic Information:
Spark 1.1.0
Hadoop 2.0.0-cdh4.6.0

Compile Command:
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
mvn -Dhadoop.version=2.0.0-cdh4.6.0 -Phive -Pspark-ganglia-lgpl -DskipTests package




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
