Re: Drop support for old Hive in Spark 3.0?

2018-10-26 Thread Michael Shtelma
Which alternatives to ThriftServer do we really have? If ThriftServer is gone, there is no other way to connect to Spark SQL via JDBC, and JDBC is the primary way of connecting BI tools to Spark SQL. Am I missing something? The question is whether Spark would like to be the tool used
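For context, a minimal sketch of the JDBC path in question. The Spark Thrift Server speaks the HiveServer2 wire protocol, so clients use the standard Hive JDBC driver; the host, port, credentials, and table name below are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ThriftServerJdbcExample {
        public static void main(String[] args) throws Exception {
            // Register the Hive JDBC driver (older hive-jdbc versions do not
            // auto-register via the service loader).
            Class.forName("org.apache.hive.jdbc.HiveDriver");

            // Placeholder host/port; 10000 is the conventional default.
            String url = "jdbc:hive2://localhost:10000/default";
            try (Connection conn = DriverManager.getConnection(url, "user", "");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT count(*) FROM myTable")) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1));
                }
            }
        }
    }

This is exactly the path BI tools take: a generic JDBC/ODBC client pointed at the Thrift Server, with no Spark libraries on the client side.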

Inner join with the table itself

2018-01-15 Thread Michael Shtelma
Hi all, If I try joining a table with itself on its join columns, I get the following error: "Join condition is missing or trivial. Use the CROSS JOIN syntax to allow cartesian products between these relations.;" This is not the case: my join condition is not trivial, and the join is not a real cross join.
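A sketch of how this error typically arises and a common workaround (table and column names are placeholders). In a naive self-join, the same column reference resolves to the same attribute on both sides, so the analyzer sees a trivially-true condition; aliasing the two sides makes the columns resolve to distinct attributes:

    import static org.apache.spark.sql.functions.col;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class SelfJoinExample {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder().appName("self-join").getOrCreate();
            Dataset<Row> df = spark.table("myTable");

            // df.col("id") is the same attribute on both sides, so this
            // triggers the cartesian-product error quoted above:
            // df.join(df, df.col("id").equalTo(df.col("id")));

            // Workaround: alias the two sides so the join columns resolve
            // to distinct attributes.
            Dataset<Row> joined = df.alias("l")
                    .join(df.alias("r"), col("l.id").equalTo(col("r.id")));
            joined.show();
        }
    }

A blunter escape hatch is setting spark.sql.crossJoin.enabled=true, but that silences the check globally rather than fixing the ambiguous column resolution.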

Re: Compiling Spark UDF at runtime

2018-01-13 Thread Michael Shtelma
Thanks! Yes, this would be an option of course, HDFS or Alluxio. Sincerely, Michael Shtelma On Fri, Jan 12, 2018 at 3:26 PM, Georg Heiler <georg.kf.hei...@gmail.com> wrote: > You could store the jar in HDFS. Then even in yarn cluster mode your given > workaround should work. >
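A minimal sketch of that workaround, assuming the compiled UDF jar has already been written out to HDFS (the path is a placeholder and 'spark' is an existing SparkSession):

    // addJar distributes the jar to the executors, which matters in yarn
    // cluster mode where a driver-local path would not be visible to them.
    spark.sparkContext().addJar("hdfs:///apps/udfs/my-compiled-udfs.jar");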

Compiling Spark UDF at runtime

2018-01-12 Thread Michael Shtelma
Hi all, I would like to be able to compile a Spark UDF at runtime. Right now I am using Janino for that. My problem is that in order to make my compiled functions visible to Spark, I have to set the Janino classloader (Janino gives me a classloader containing the compiled UDF classes) as the context classloader
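A minimal sketch of the compile-and-install step being described, using Janino's SimpleCompiler; the UDF source and class name are placeholders:

    import org.codehaus.janino.SimpleCompiler;

    public class RuntimeUdfCompiler {
        public static void main(String[] args) throws Exception {
            String source =
                "public class MyUdf implements org.apache.spark.sql.api.java.UDF1<Long, Long> {\n" +
                "  public Long call(Long x) { return x * 2; }\n" +
                "}";

            // Compile the source in memory; Janino hands back a classloader
            // that can see the freshly compiled class.
            SimpleCompiler compiler = new SimpleCompiler();
            compiler.setParentClassLoader(Thread.currentThread().getContextClassLoader());
            compiler.cook(source);

            // Install that classloader as the context classloader so that
            // code resolving the UDF class by name can find it.
            Thread.currentThread().setContextClassLoader(compiler.getClassLoader());

            Class<?> udfClass = Class.forName("MyUdf", true, compiler.getClassLoader());
            System.out.println("Compiled: " + udfClass.getName());
        }
    }

This only makes the class visible on the driver; as the thread goes on to discuss, the executors also need the class, hence the suggestion to ship a jar via HDFS.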

Re: Accessing the SQL parser

2018-01-12 Thread Michael Shtelma
Hi AbdealiJK, In order to get the AST, you can parse your query with the Spark parser: LogicalPlan logicalPlan = sparkSession.sessionState().sqlParser().parsePlan("select * from myTable"); Afterwards you can implement your custom logic and execute it like this: Dataset ds =
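The message is truncated; a hedged sketch of the approach it describes, assuming a live SparkSession and a registered table named myTable. Note that sessionState and Dataset.ofRows are internal (private[sql]) Spark APIs, reachable from Java only because Scala package-private members compile to public bytecode, and they can change between Spark versions:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Dataset$;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan;

    public class ParsePlanExample {
        public static void main(String[] args) throws Exception {
            SparkSession spark = SparkSession.builder().appName("parse-plan").getOrCreate();

            // Parse without analyzing or executing: this returns the
            // unresolved logical plan, i.e. the AST of the query.
            LogicalPlan plan = spark.sessionState().sqlParser()
                    .parsePlan("select * from myTable");
            System.out.println(plan.treeString());

            // ... apply custom rewrites to 'plan' here ...

            // Turn the (possibly rewritten) plan back into a DataFrame via
            // the Dataset companion object; this is internal API.
            Dataset<Row> ds = Dataset$.MODULE$.ofRows(spark, plan);
            ds.show();
        }
    }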