Re: Querying Drill with Spark DataFrame

2017-07-22 Thread Luqman Ghani
BTW, do we have to register JdbcDialect for every Spark/SQL context, or
once for a Spark server?

On Sun, Jul 23, 2017 at 2:26 AM, Luqman Ghani  wrote:

> I have found the solution for this error. I have to register a JdbcDialect
> for Drill as mentioned in the following post on SO:
>
> https://stackoverflow.com/questions/35476076/integrating-spark-sql-and-
> apache-drill-through-jdbc
>
> Thanks
>
> On Sun, Jul 23, 2017 at 2:10 AM, Luqman Ghani  wrote:
>
>> I have done that, but Spark still wraps my query in the same clause:
>> SELECT "CustomerID", etc. FROM (my query from the table), so I get the same error.
>>
>> On Sun, Jul 23, 2017 at 2:02 AM, ayan guha  wrote:
>>
>>> You can formulate a query in dbtable clause in jdbc reader.
>>>
>>> On Sun, 23 Jul 2017 at 6:43 am, Luqman Ghani  wrote:
>>>
 Hi,

 I'm working on integrating Apache Drill with Apache Spark using Drill's
 JDBC driver. I'm trying a simple SELECT * FROM a table in Drill through
 spark.sqlContext.load via the JDBC driver. I'm running the following code in
 the Spark shell:

 > ./bin/spark-shell --driver-class-path 
 > /home/ubuntu/dir/spark/jars/jackson-databind-2.6.5.jar
 --packages org.apache.drill.exec:drill-jdbc-all:1.10.0

 scala>  val options = Map[String,String](

 "driver" -> "org.apache.drill.jdbc.Driver",

 "url" -> "jdbc:drill:drillbit=localhost:31010",

 "dbtable" -> "(SELECT * FROM dfs.root.`output.parquet`) AS Customers")

 scala> val df = spark.sqlContext.load("jdbc",options)

 scala> df.schema

 res0: org.apache.spark.sql.types.StructType =
 StructType(StructField(CustomerID,IntegerType,true),

 StructField(First_name,StringType,true),

 StructField(Last_name,StringType,true),

 StructField(Email,StringType,true), StructField(Gender,StringType,
 true),

 StructField(Country,StringType,true))

 It gives correct schema of DataFrame, but when I do:

 scala> df.show

 *I am facing the following error:*

 java.sql.SQLException: Failed to create prepared statement: PARSE
 ERROR: *Encountered "\"" at line 1, column 23.*

 Was expecting one of:

 "STREAM" ...

 "DISTINCT" ...

 "ALL" ...

 "*" ...

 "+" ...

 "-" ...

  ...

 __MORE_DRILL_GRAMMAR__ ...


 SQL Query SELECT * FROM (SELECT "CustomerID","First_name","Las
 t_name","Email","Gender","Country" FROM (SELECT * FROM
 dfs.root.`output.parquet`) AS Customers ) LIMIT 0

 Note that the quote the parser encountered is the one at "CustomerID" in the query.

 I tried to run the following query in Drill shell:

 SELECT "CustomerID" from dfs.root.`output.parquet`;

 It gives the same error of 'Encountered "\"" '.

 I want to ask whether there is any way to remove the "SELECT
 "CustomerID","First_name","Last_name","Email","Gender","Country" FROM" wrapper
 from the query that Spark formulates and pushes down to Apache Drill via the
 JDBC driver.

 Or is there any other way around this, such as removing the quotes?


 Thanks,

 Luqman

>>> --
>>> Best Regards,
>>> Ayan Guha
>>>
>>
>>
>


Re: Querying Drill with Spark DataFrame

2017-07-22 Thread Luqman Ghani
I have found the solution for this error. I have to register a JdbcDialect
for Drill as mentioned in the following post on SO:

https://stackoverflow.com/questions/35476076/integrating-spark-sql-and-apache-drill-through-jdbc
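
For reference, here is roughly what such a dialect looks like (a minimal sketch
based on my reading of that answer, untested here; the object name is just a
placeholder):

import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}

// Minimal Drill dialect: quote identifiers with backticks, which Drill
// accepts, instead of the ANSI double quotes that cause the parse error.
case object DrillDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean = url.startsWith("jdbc:drill")
  override def quoteIdentifier(colName: String): String = s"`$colName`"
}

// Registration goes through the JdbcDialects object on the driver, so it
// only needs to run once before the JDBC reads are issued.
JdbcDialects.registerDialect(DrillDialect)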

Thanks

On Sun, Jul 23, 2017 at 2:10 AM, Luqman Ghani  wrote:

> I have done that, but Spark still wraps my query in the same clause:
> SELECT "CustomerID", etc. FROM (my query from the table), so I get the same error.
>
> On Sun, Jul 23, 2017 at 2:02 AM, ayan guha  wrote:
>
>> You can formulate a query in dbtable clause in jdbc reader.
>>
>> On Sun, 23 Jul 2017 at 6:43 am, Luqman Ghani  wrote:
>>
>>> Hi,
>>>
>>> I'm working on integrating Apache Drill with Apache Spark using Drill's
>>> JDBC driver. I'm trying a simple SELECT * FROM a table in Drill through
>>> spark.sqlContext.load via the JDBC driver. I'm running the following code in
>>> the Spark shell:
>>>
>>> > ./bin/spark-shell --driver-class-path 
>>> > /home/ubuntu/dir/spark/jars/jackson-databind-2.6.5.jar
>>> --packages org.apache.drill.exec:drill-jdbc-all:1.10.0
>>>
>>> scala>  val options = Map[String,String](
>>>
>>> "driver" -> "org.apache.drill.jdbc.Driver",
>>>
>>> "url" -> "jdbc:drill:drillbit=localhost:31010",
>>>
>>> "dbtable" -> "(SELECT * FROM dfs.root.`output.parquet`) AS Customers")
>>>
>>> scala> val df = spark.sqlContext.load("jdbc",options)
>>>
>>> scala> df.schema
>>>
>>> res0: org.apache.spark.sql.types.StructType =
>>> StructType(StructField(CustomerID,IntegerType,true),
>>>
>>> StructField(First_name,StringType,true),
>>>
>>> StructField(Last_name,StringType,true),
>>>
>>> StructField(Email,StringType,true), StructField(Gender,StringType,
>>> true),
>>>
>>> StructField(Country,StringType,true))
>>>
>>> It gives correct schema of DataFrame, but when I do:
>>>
>>> scala> df.show
>>>
>>> *I am facing the following error:*
>>>
>>> java.sql.SQLException: Failed to create prepared statement: PARSE ERROR: 
>>> *Encountered
>>> "\"" at line 1, column 23.*
>>>
>>> Was expecting one of:
>>>
>>> "STREAM" ...
>>>
>>> "DISTINCT" ...
>>>
>>> "ALL" ...
>>>
>>> "*" ...
>>>
>>> "+" ...
>>>
>>> "-" ...
>>>
>>>  ...
>>>
>>> __MORE_DRILL_GRAMMAR__ ...
>>>
>>>
>>> SQL Query SELECT * FROM (SELECT "CustomerID","First_name","Las
>>> t_name","Email","Gender","Country" FROM (SELECT * FROM
>>> dfs.root.`output.parquet`) AS Customers ) LIMIT 0
>>>
>>> Note that the quote the parser encountered is the one at "CustomerID" in the query.
>>>
>>> I tried to run the following query in Drill shell:
>>>
>>> SELECT "CustomerID" from dfs.root.`output.parquet`;
>>>
>>> It gives the same error of 'Encountered "\"" '.
>>>
>>> I want to ask whether there is any way to remove the "SELECT
>>> "CustomerID","First_name","Last_name","Email","Gender","Country" FROM" wrapper
>>> from the query that Spark formulates and pushes down to Apache Drill via the
>>> JDBC driver.
>>>
>>> Or is there any other way around this, such as removing the quotes?
>>>
>>>
>>> Thanks,
>>>
>>> Luqman
>>>
>> --
>> Best Regards,
>> Ayan Guha
>>
>
>


Re: Querying Drill with Spark DataFrame

2017-07-22 Thread Luqman Ghani
I have done that, but Spark still wraps my query in the same clause:
SELECT "CustomerID", etc. FROM (my query from the table), so I get the same error.

On Sun, Jul 23, 2017 at 2:02 AM, ayan guha  wrote:

> You can formulate a query in dbtable clause in jdbc reader.
>
> On Sun, 23 Jul 2017 at 6:43 am, Luqman Ghani  wrote:
>
>> Hi,
>>
>> I'm working on integrating Apache Drill with Apache Spark using Drill's
>> JDBC driver. I'm trying a simple SELECT * FROM a table in Drill through
>> spark.sqlContext.load via the JDBC driver. I'm running the following code in
>> the Spark shell:
>>
>> > ./bin/spark-shell --driver-class-path 
>> > /home/ubuntu/dir/spark/jars/jackson-databind-2.6.5.jar
>> --packages org.apache.drill.exec:drill-jdbc-all:1.10.0
>>
>> scala>  val options = Map[String,String](
>>
>> "driver" -> "org.apache.drill.jdbc.Driver",
>>
>> "url" -> "jdbc:drill:drillbit=localhost:31010",
>>
>> "dbtable" -> "(SELECT * FROM dfs.root.`output.parquet`) AS Customers")
>>
>> scala> val df = spark.sqlContext.load("jdbc",options)
>>
>> scala> df.schema
>>
>> res0: org.apache.spark.sql.types.StructType = StructType(StructField(
>> CustomerID,IntegerType,true),
>>
>> StructField(First_name,StringType,true),
>>
>> StructField(Last_name,StringType,true),
>>
>> StructField(Email,StringType,true), StructField(Gender,StringType,true),
>>
>> StructField(Country,StringType,true))
>>
>> It gives correct schema of DataFrame, but when I do:
>>
>> scala> df.show
>>
>> *I am facing the following error:*
>>
>> java.sql.SQLException: Failed to create prepared statement: PARSE ERROR: 
>> *Encountered
>> "\"" at line 1, column 23.*
>>
>> Was expecting one of:
>>
>> "STREAM" ...
>>
>> "DISTINCT" ...
>>
>> "ALL" ...
>>
>> "*" ...
>>
>> "+" ...
>>
>> "-" ...
>>
>>  ...
>>
>> __MORE_DRILL_GRAMMAR__ ...
>>
>>
>> SQL Query SELECT * FROM (SELECT "CustomerID","First_name","
>> Last_name","Email","Gender","Country" FROM (SELECT * FROM
>> dfs.root.`output.parquet`) AS Customers ) LIMIT 0
>>
>> Note that the quote the parser encountered is the one at "CustomerID" in the query.
>>
>> I tried to run the following query in Drill shell:
>>
>> SELECT "CustomerID" from dfs.root.`output.parquet`;
>>
>> It gives the same error of 'Encountered "\"" '.
>>
>> I want to ask whether there is any way to remove the "SELECT
>> "CustomerID","First_name","Last_name","Email","Gender","Country" FROM" wrapper
>> from the query that Spark formulates and pushes down to Apache Drill via the
>> JDBC driver.
>>
>> Or is there any other way around this, such as removing the quotes?
>>
>>
>> Thanks,
>>
>> Luqman
>>
> --
> Best Regards,
> Ayan Guha
>


Re: [Spark] Working with JavaPairRDD from Scala

2017-07-22 Thread Lukasz Tracewski
Hi - and my thanks to you and Gerard. Only the late hour of the night can explain
how I could possibly have missed this.

Cheers!
Lukasz

On 22/07/2017 10:48, yohann jardin wrote:

Hello Lukasz,


You can just:

val pairRdd = javapairrdd.rdd


Then pairRdd will be of type RDD[(K, V)], with K being
com.vividsolutions.jts.geom.Polygon and V being
java.util.HashSet[com.vividsolutions.jts.geom.Polygon].



If you really want to continue with Java objects:

val calculateIntersection = new Function2[Polygon, HashSet[Polygon],
scala.collection.mutable.Set[Double]]() {}

and in the curly braces, overriding the call function.


Another solution would be to use lambda (I do not code much in scala and I'm 
definitely not sure this works, but I expect it to, so you'd have to test it):

javapairrdd.map((polygon: Polygon, hash: HashSet[Polygon]) => (polygon,
hash.asScala.map(polygon.intersection(_).getArea)))


From: Lukasz Tracewski
Sent: Saturday, 22 July 2017 00:18
To: user@spark.apache.org
Subject: [Spark] Working with JavaPairRDD from Scala


Hi,

I would like to call a method on JavaPairRDD from Scala and I am not sure how
to write a function for the "map". I am using a third-party library that uses
Spark for geospatial computations, and it happens that it returns some results
through the Java API. I'd welcome a hint on how to write a function for 'map'
such that JavaPairRDD is happy.

Here's a signature:
org.apache.spark.api.java.JavaPairRDD[com.vividsolutions.jts.geom.Polygon,java.util.HashSet[com.vividsolutions.jts.geom.Polygon]]
 = org.apache.spark.api.java.JavaPairRDD

Normally I would write something like this:

def calculate_intersection(polygon: Polygon, hashSet: HashSet[Polygon]) = {
  (polygon, hashSet.asScala.map(polygon.intersection(_).getArea))
}

javapairrdd.map(calculate_intersection)


... but it will complain that it's not a Java Function.

My first thought was to implement the interface, i.e.:


class PairRDDWrapper extends
  org.apache.spark.api.java.function.Function2[Polygon, HashSet[Polygon],
    (Polygon, scala.collection.mutable.Set[Double])] {
  override def call(polygon: Polygon, hashSet: HashSet[Polygon]): (Polygon,
      scala.collection.mutable.Set[Double]) = {
    (polygon, hashSet.asScala.map(polygon.intersection(_).getArea))
  }
}




I am not sure, though, how to use it, or whether it makes any sense in the first
place. It should be simple; it's just that my Java / Scala is a "little rusty".


Cheers,
Lucas



Re: Querying Drill with Spark DataFrame

2017-07-22 Thread ayan guha
You can formulate a query in dbtable clause in jdbc reader.

On Sun, 23 Jul 2017 at 6:43 am, Luqman Ghani  wrote:

> Hi,
>
> I'm working on integrating Apache Drill with Apache Spark using Drill's
> JDBC driver. I'm trying a simple SELECT * FROM a table in Drill through
> spark.sqlContext.load via the JDBC driver. I'm running the following code in
> the Spark shell:
>
> > ./bin/spark-shell --driver-class-path
> /home/ubuntu/dir/spark/jars/jackson-databind-2.6.5.jar --packages
> org.apache.drill.exec:drill-jdbc-all:1.10.0
>
> scala>  val options = Map[String,String](
>
> "driver" -> "org.apache.drill.jdbc.Driver",
>
> "url" -> "jdbc:drill:drillbit=localhost:31010",
>
> "dbtable" -> "(SELECT * FROM dfs.root.`output.parquet`) AS Customers")
>
> scala> val df = spark.sqlContext.load("jdbc",options)
>
> scala> df.schema
>
> res0: org.apache.spark.sql.types.StructType =
> StructType(StructField(CustomerID,IntegerType,true),
>
> StructField(First_name,StringType,true),
>
> StructField(Last_name,StringType,true),
>
> StructField(Email,StringType,true), StructField(Gender,StringType,true),
>
> StructField(Country,StringType,true))
>
> It gives correct schema of DataFrame, but when I do:
>
> scala> df.show
>
> *I am facing the following error:*
>
> java.sql.SQLException: Failed to create prepared statement: PARSE ERROR: 
> *Encountered
> "\"" at line 1, column 23.*
>
> Was expecting one of:
>
> "STREAM" ...
>
> "DISTINCT" ...
>
> "ALL" ...
>
> "*" ...
>
> "+" ...
>
> "-" ...
>
>  ...
>
> __MORE_DRILL_GRAMMAR__ ...
>
>
> SQL Query SELECT * FROM (SELECT
> "CustomerID","First_name","Last_name","Email","Gender","Country" FROM
> (SELECT * FROM dfs.root.`output.parquet`) AS Customers ) LIMIT 0
>
> Note that the quote the parser encountered is the one at "CustomerID" in the query.
>
> I tried to run the following query in Drill shell:
>
> SELECT "CustomerID" from dfs.root.`output.parquet`;
>
> It gives the same error of 'Encountered "\"" '.
>
> I want to ask whether there is any way to remove the "SELECT
> "CustomerID","First_name","Last_name","Email","Gender","Country" FROM" wrapper
> from the query that Spark formulates and pushes down to Apache Drill via the
> JDBC driver.
>
> Or is there any other way around this, such as removing the quotes?
>
>
> Thanks,
>
> Luqman
>
-- 
Best Regards,
Ayan Guha


Querying Drill with Spark DataFrame

2017-07-22 Thread Luqman Ghani
Hi,

I'm working on integrating Apache Drill with Apache Spark using Drill's JDBC
driver. I'm trying a simple SELECT * FROM a table in Drill through
spark.sqlContext.load via the JDBC driver. I'm running the following code in
the Spark shell:

> ./bin/spark-shell --driver-class-path
/home/ubuntu/dir/spark/jars/jackson-databind-2.6.5.jar --packages
org.apache.drill.exec:drill-jdbc-all:1.10.0

scala>  val options = Map[String,String](

"driver" -> "org.apache.drill.jdbc.Driver",

"url" -> "jdbc:drill:drillbit=localhost:31010",

"dbtable" -> "(SELECT * FROM dfs.root.`output.parquet`) AS Customers")

scala> val df = spark.sqlContext.load("jdbc",options)

scala> df.schema

res0: org.apache.spark.sql.types.StructType =
StructType(StructField(CustomerID,IntegerType,true),

StructField(First_name,StringType,true),

StructField(Last_name,StringType,true),

StructField(Email,StringType,true), StructField(Gender,StringType,true),

StructField(Country,StringType,true))

It gives correct schema of DataFrame, but when I do:

scala> df.show

*I am facing the following error:*

java.sql.SQLException: Failed to create prepared statement: PARSE
ERROR: *Encountered
"\"" at line 1, column 23.*

Was expecting one of:

"STREAM" ...

"DISTINCT" ...

"ALL" ...

"*" ...

"+" ...

"-" ...

 ...

__MORE_DRILL_GRAMMAR__ ...


SQL Query SELECT * FROM (SELECT
"CustomerID","First_name","Last_name","Email","Gender","Country" FROM
(SELECT * FROM dfs.root.`output.parquet`) AS Customers ) LIMIT 0

Note that the quote the parser encountered is the one at "CustomerID" in the query.

I tried to run the following query in Drill shell:

SELECT "CustomerID" from dfs.root.`output.parquet`;

It gives the same error of 'Encountered "\"" '.

I want to ask whether there is any way to remove the "SELECT
"CustomerID","First_name","Last_name","Email","Gender","Country" FROM" wrapper
from the query that Spark formulates and pushes down to Apache Drill via the
JDBC driver.

Or is there any other way around this, such as removing the quotes?


Thanks,

Luqman


Re: custom joins on dataframe

2017-07-22 Thread Sumedh Wale
The Dataset.join(right: Dataset[_], joinExprs: Column) API can use any
arbitrary expression, so you can use a UDF for the join.
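
For example, something along these lines (a rough sketch, untested; "cola" comes
from your pseudo-code, while the fuzzyMatch UDF and the similarity check inside
it are made-up placeholders for your own fuzzy logic):

import org.apache.spark.sql.functions.udf

// Fuzzy comparison wrapped as a UDF that returns a Boolean; the join keeps
// a row pair whenever the UDF evaluates to true for it.
val fuzzyMatch = udf { (a: String, b: String) =>
  // placeholder similarity check -- substitute your own fuzzy matching here
  a != null && b != null && a.take(3).equalsIgnoreCase(b.take(3))
}

// left and right are your two dataframes
val joined = left.join(right, fuzzyMatch(left("cola"), right("cola")))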


The problem with all non-equality joins is that they use
BroadcastNestedLoopJoin or an equivalent, i.e. an (M x N) nested loop,
which will be unusable for medium/large tables. At least one of the
tables should be small for this to work with acceptable performance.
For example, if one table has 100M rows after filtering and the other 1M rows,
the nested-loop join will scan 100 trillion row pairs, which will take
very long under normal circumstances; but if one of the sides is much
smaller after filtering, say a few thousand rows, then it can be fine.


What you probably need for large tables is to implement your own optimized
join operator and use some join structure that can do the join
efficiently without nested loops (i.e. some specialized structure
for efficient fuzzy joins). It's possible to do that using internal Spark
APIs, but it's not easy and you have to implement an efficient join
structure first. Or perhaps some existing libraries out there could work
for you (like https://github.com/soundcloud/cosine-lsh-join-spark?).


--
Sumedh Wale
SnappyData (http://www.snappydata.io)

On Saturday 22 July 2017 09:09 PM, Stephen Fletcher wrote:
Normally a family of joins (left, right outer, inner) is performed
on two dataframes using columns for the comparison, i.e. left("acol") ===
right("acol"). The comparison operator of the "left" dataframe does
something internally and produces a column that I assume is used by
the join.


What I want is to create my own comparison operation (I have a case
where I want to use some fuzzy matching between rows, and if they fall
within some threshold we allow the join to happen).


so it would look something like

left.join(right, my_fuzzy_udf (left("cola"),right("cola")))

where my_fuzzy_udf is my defined UDF. My main concern is the column
that would have to be output: what would its value be, i.e. what would the
function need to return that the UDF subsystem would then turn into a
column to be evaluated by the join?



Thanks in advance for any advice






Informing Spark about specific Partitioning scheme to avoid shuffles

2017-07-22 Thread saatvikshah1994
Hi everyone,

My environment is PySpark with Spark 2.0.0. 

I'm using Spark to load data from a large number of files into a Spark
dataframe with fields, say, field1 to field10. While loading my data I have
ensured that records are partitioned by field1 and field2 (without using
partitionBy). This was done while loading the data into an RDD of lists,
before the .toDF() call. So I assume Spark would not already know that such
a partitioning exists, and might trigger a shuffle if I call a shuffling
transform using field1 or field2 as keys, and then cache that information.

Is it possible to inform Spark, once I've created the dataframe, about my
custom partitioning scheme? Or would Spark have already discovered this
somehow before the shuffling transform call?









custom joins on dataframe

2017-07-22 Thread Stephen Fletcher
Normally a family of joins (left, right outer, inner) is performed on two
dataframes using columns for the comparison, i.e. left("acol") ===
right("acol"). The comparison operator of the "left" dataframe does
something internally and produces a column that I assume is used by the
join.

What I want is to create my own comparison operation (I have a case where I
want to use some fuzzy matching between rows, and if they fall within some
threshold we allow the join to happen).

so it would look something like

left.join(right, my_fuzzy_udf (left("cola"),right("cola")))

where my_fuzzy_udf is my defined UDF. My main concern is the column that
would have to be output: what would its value be, i.e. what would the function
need to return that the UDF subsystem would then turn into a column to be
evaluated by the join?


Thanks in advance for any advice


Re: Is there a way to run Spark SQL through REST?

2017-07-22 Thread Sumedh Wale

On Saturday 22 July 2017 01:31 PM, kant kodali wrote:

Is there a way to run Spark SQL through REST?


There is spark-jobserver
(https://github.com/spark-jobserver/spark-jobserver). It does more than
just a REST API (e.g. a long-running SparkContext).


regards

--
Sumedh Wale
SnappyData (http://www.snappydata.io)





Re: Is there a way to run Spark SQL through REST?

2017-07-22 Thread Jean Georges Perrin
There's Livy, but it's pretty resource intensive.

I know it's not helpful, but my company has developed its own and I am trying to
open source it.

Looks like there are quite a few companies that had the need and custom-built their own.

jg


> On Jul 22, 2017, at 04:01, kant kodali  wrote:
> 
> Is there a way to run Spark SQL through REST?





RE: [Spark] Working with JavaPairRDD from Scala

2017-07-22 Thread yohann jardin
Hello Lukasz,


You can just:

val pairRdd = javapairrdd.rdd


Then pairRdd will be of type RDD[(K, V)], with K being
com.vividsolutions.jts.geom.Polygon and V being
java.util.HashSet[com.vividsolutions.jts.geom.Polygon].



If you really want to continue with Java objects:

val calculateIntersection = new Function2[Polygon, HashSet[Polygon],
scala.collection.mutable.Set[Double]]() {}

and in the curly braces, overriding the call function.


Another solution would be to use lambda (I do not code much in scala and I'm 
definitely not sure this works, but I expect it to, so you'd have to test it):

javapairrdd.map((polygon: Polygon, hash: HashSet[Polygon]) => (polygon,
hash.asScala.map(polygon.intersection(_).getArea)))
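
Putting the first suggestion together end to end, roughly (again untested; the
method name is mine, and the geometry calls are taken from your snippet):

import java.util.HashSet

import scala.collection.JavaConverters._

import com.vividsolutions.jts.geom.Polygon
import org.apache.spark.api.java.JavaPairRDD
import org.apache.spark.rdd.RDD

// Drop to the Scala RDD underneath the JavaPairRDD and map over the
// (Polygon, HashSet[Polygon]) pairs with a plain Scala function.
def intersectionAreas(
    javapairrdd: JavaPairRDD[Polygon, HashSet[Polygon]])
  : RDD[(Polygon, scala.collection.mutable.Set[Double])] = {
  javapairrdd.rdd.map { case (polygon, hashSet) =>
    // Convert the Java HashSet to a Scala Set, then compute the area of the
    // intersection of the key polygon with each polygon in the set.
    (polygon, hashSet.asScala.map(p => polygon.intersection(p).getArea))
  }
}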


From: Lukasz Tracewski
Sent: Saturday, 22 July 2017 00:18
To: user@spark.apache.org
Subject: [Spark] Working with JavaPairRDD from Scala


Hi,

I would like to call a method on JavaPairRDD from Scala and I am not sure how
to write a function for the "map". I am using a third-party library that uses
Spark for geospatial computations, and it happens that it returns some results
through the Java API. I'd welcome a hint on how to write a function for 'map'
such that JavaPairRDD is happy.

Here's a signature:
org.apache.spark.api.java.JavaPairRDD[com.vividsolutions.jts.geom.Polygon,java.util.HashSet[com.vividsolutions.jts.geom.Polygon]]
 = org.apache.spark.api.java.JavaPairRDD

Normally I would write something like this:

def calculate_intersection(polygon: Polygon, hashSet: HashSet[Polygon]) = {
  (polygon, hashSet.asScala.map(polygon.intersection(_).getArea))
}

javapairrdd.map(calculate_intersection)


... but it will complain that it's not a Java Function.

My first thought was to implement the interface, i.e.:


class PairRDDWrapper extends
  org.apache.spark.api.java.function.Function2[Polygon, HashSet[Polygon],
    (Polygon, scala.collection.mutable.Set[Double])] {
  override def call(polygon: Polygon, hashSet: HashSet[Polygon]): (Polygon,
      scala.collection.mutable.Set[Double]) = {
    (polygon, hashSet.asScala.map(polygon.intersection(_).getArea))
  }
}




I am not sure, though, how to use it, or whether it makes any sense in the first
place. It should be simple; it's just that my Java / Scala is a "little rusty".


Cheers,
Lucas


unsubscribe

2017-07-22 Thread Vasilis Hadjipanos
Please unsubscribe me

Is there a way to run Spark SQL through REST?

2017-07-22 Thread kant kodali
Is there a way to run Spark SQL through REST?