Mich

Are you building your own releases from the source? 
Which version of Scala? 

Again, the builds seem to be OK and working, but I don’t want to hit some 
‘gotcha’ if I can avoid it. 
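For reference, a build with Hive and the JDBC/thrift server enabled would look roughly like this (a sketch based on the Spark 1.6 build docs; the -Pyarn and -Phadoop-2.6 profiles are assumptions here, so adjust for your environment):

```shell
# Build Spark 1.6.x with Hive integration and the JDBC thrift server.
# -Pyarn and -Phadoop-2.6 are example profiles; pick the ones that
# match your cluster. -DskipTests speeds up the package step.
./build/mvn -Pyarn -Phadoop-2.6 -Phive -Phive-thriftserver -DskipTests clean package
```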


> On Apr 13, 2016, at 7:15 AM, Mich Talebzadeh <mich.talebza...@gmail.com> 
> wrote:
> 
> Hi,
> 
> I am not sure this helps.
> 
> We use Spark 1.6 and Hive 2. I also use JDBC (Beeline for Hive) plus Oracle 
> and Sybase. They all work fine.
> 
> 
> HTH
> 
> Dr Mich Talebzadeh
>  
> LinkedIn  
> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>  
> http://talebzadehmich.wordpress.com
>  
> 
> On 12 April 2016 at 23:42, Michael Segel <msegel_had...@hotmail.com 
> <mailto:msegel_had...@hotmail.com>> wrote:
> Hi, 
> This is probably a silly question on my part… 
> 
> I’m looking at the latest (spark 1.6.1 release) and would like to do a build 
> w Hive and JDBC support. 
> 
> From the documentation, I see two things that make me scratch my head.
> 
> 1) Scala 2.11 
> "Spark does not yet support its JDBC component for Scala 2.11."
> 
> So if we want to use JDBC, don’t use Scala 2.11.x (in this case it’s 2.11.8).
> 
> 2) Hive Support
> "To enable Hive integration for Spark SQL along with its JDBC server and CLI, 
> add the -Phive and -Phive-thriftserver profiles to your existing build 
> options. By default Spark will build with Hive 0.13.1 bindings."
> 
> So if we’re looking at a later release of Hive… let’s say 1.1.x… do we still 
> use the -Phive and -Phive-thriftserver profiles? Is there anything else we 
> should consider? 
> 
> Just asking because I’ve noticed that this part of the documentation hasn’t 
> changed much over the past releases. 
> 
> Thanks in Advance, 
> 
> -Mike
> 
> 
