I build a distribution from either the 1.4 branch or master with -Phive-thriftserver, and then attempt a JDBC connection to a MySQL database using the latest connector (5.1.36) jar.
When I start the pyspark shell with bin/pyspark --jars mysql-connector...jar --driver-class-path mysql-connector..jar, create a DataFrame with sqlContext.read.jdbc("jdbc://...), and call df.show(), things seem to work. But if I build without -Phive-thriftserver, the same Python code, with the same settings (spark-env, spark-defaults), just hangs and never returns.

I am curious how the hive-thriftserver module plays into this type of interaction.

Thanks in advance.

Cheers,
Aaron

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org
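For reference, the sequence I'm running looks roughly like the sketch below. The jar filename, JDBC URL, and table name are placeholders, not my exact values, and the build command assumes the 1.4-era make-distribution.sh at the repo root:

```python
# Build step (run in a shell, not in pyspark):
#   ./make-distribution.sh -Phive -Phive-thriftserver
# Omitting -Phive-thriftserver is what triggers the hang for me.
#
# Launch pyspark with the MySQL connector on both the executor
# and driver classpaths (placeholder jar name):
#   bin/pyspark --jars mysql-connector-java-5.1.36.jar \
#               --driver-class-path mysql-connector-java-5.1.36.jar

# Inside the pyspark shell (sqlContext is created for you there;
# URL and table are placeholders):
df = sqlContext.read.jdbc("jdbc:mysql://host:3306/mydb", "some_table")
df.show()  # works with -Phive-thriftserver; hangs without it
```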