Hi,

I've been playing with the feature that exposes an RDD via the Thrift
server to enable JDBC access (Spark 1.2).


import org.apache.spark.sql.hive.thriftserver.HiveThriftServer2

// sqlContext must be a HiveContext here, since startWithContext expects one
val eventsView = sqlContext.createSchemaRDD(eventSchemaRdd)
eventsView.registerTempTable("Events")

HiveThriftServer2.startWithContext(sqlContext)


This all works fine.
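
A minimal sketch of what a JDBC client then looks like, assuming the
Thrift server listens on the default port 10000 and the Hive JDBC
driver (hive-jdbc) is on the client's classpath; the host, port, and
empty credentials below are placeholders:

import java.sql.DriverManager

// HiveServer2 JDBC driver; requires the hive-jdbc jar on the classpath
Class.forName("org.apache.hive.jdbc.HiveDriver")

val conn = DriverManager.getConnection(
  "jdbc:hive2://localhost:10000/default", "", "")
val stmt = conn.createStatement()
val rs = stmt.executeQuery("SELECT count(*) FROM Events")
while (rs.next()) println(rs.getLong(1))
rs.close(); stmt.close(); conn.close()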

Now, my understanding is that you can't deploy this in yarn-cluster
mode. Is this correct, or what are my options here? My main concern is
scalability (e.g. handling a lot of concurrent SQL requests, some of
which may be non-trivial).
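
For context, the only deployment I can think of is yarn-client mode,
which keeps the driver (and thus the Thrift port) on the submitting
machine; a rough sketch, where the class and jar names are
placeholders:

# yarn-client keeps the Thrift server reachable on the gateway machine
spark-submit --master yarn-client --class com.example.EventsThriftApp \
  events-assembly.jar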

Thanks,
Marco
