Hi, 
This is probably a silly question on my part… 

I’m looking at the latest release (Spark 1.6.1) and would like to do a build with 
Hive and JDBC support. 

From the documentation, I see two things that make me scratch my head.

1) Scala 2.11 
"Spark does not yet support its JDBC component for Scala 2.11.”

So if we want to use JDBC, we shouldn’t use Scala 2.11.x (in this case, 2.11.8).
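
If I’m reading that right, the safe path is to stay on the default Scala 2.10 build 
and skip the -Dscala-2.11 property entirely, i.e. something along these lines (the 
profiles are just examples from our setup):

    # default Scala 2.10 build, so the JDBC component stays in
    build/mvn -Pyarn -Phadoop-2.4 -DskipTests clean package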

2) Hive Support
"To enable Hive integration for Spark SQL along with its JDBC server and CLI, 
add the -Phive and Phive-thriftserver profiles to your existing build options. 
By default Spark will build with Hive 0.13.1 bindings.”

So if we’re looking at a later release of Hive… let’s say 1.1.x… do we still use 
the -Phive and -Phive-thriftserver profiles? Is there anything else we should consider?
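
For reference, the full command I was planning to run looks roughly like this (the 
hadoop profile and version are just placeholders for our cluster):

    # -Phive and -Phive-thriftserver add Hive support plus the JDBC server and CLI
    build/mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 \
      -Phive -Phive-thriftserver -DskipTests clean package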

Just asking because I’ve noticed that this part of the documentation hasn’t 
changed much across the past few releases. 

Thanks in Advance, 

-Mike
