Could you clarify what you mean by "build another Spark and work through Spark Submit"?
If you are referring to utilizing Spark SQL and the Thrift server, you could start the Spark service and then have your spark-shell, spark-submit, and/or Thrift service point at the master you have started.

On Thu, Feb 05, 2015 at 2:02:04 AM Ashutosh Trivedi (MT2013030) <ashutosh.triv...@iiitb.org> wrote:

> Hi Denny, Ismail, one last question:
>
> Is it necessary to build another Spark and work through spark-submit?
>
> I work in IntelliJ using SBT as the build script. I have Hive set up with
> Postgres as the metastore, and I can run the Hive services using the commands
>
> *hive --service metastore*
> *hive --service hiveserver2*
>
> After that I can use HiveContext in my code:
>
> val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
>
> do some processing on an RDD, persist it to Hive using registerTempTable,
> and Tableau can extract that RDD persisted in Hive.
>
> Regards,
> Ashutosh
>
> ------------------------------
> *From:* Denny Lee <denny.g....@gmail.com>
> *Sent:* Thursday, February 5, 2015 1:27 PM
> *To:* Ashutosh Trivedi (MT2013030); İsmail Keskin
> *Cc:* user@spark.apache.org
> *Subject:* Re: Tableau beta connector
>
> The context is that you would create your RDDs and then persist them in
> Hive. Once in Hive, the data is accessible from the Tableau extract through
> the Spark Thrift Server.
>
> On Wed, Feb 4, 2015 at 23:36 Ashutosh Trivedi (MT2013030)
> <ashutosh.triv...@iiitb.org> wrote:
>
>> Thanks Denny and Ismail.
>>
>> Denny, I went through your blog; it was a great help. I guess the Tableau
>> beta connector also follows the same procedure you described in the blog. I
>> am building Spark now.
>>
>> Basically, what I don't get is where to put my data so that Tableau can
>> extract it.
>>
>> So Ismail, it's just Spark SQL, no RDDs; I think I am getting it now. We
>> use Spark for our big data processing and we want the *processed data
>> (RDD)* in Tableau.
>> So we should put our data in the Hive metastore and Tableau will extract
>> it from there using this connector? Correct me if I am wrong.
>>
>> I guess I have to look at how the Thrift server works.
>>
>> ------------------------------
>> *From:* Denny Lee <denny.g....@gmail.com>
>> *Sent:* Thursday, February 5, 2015 12:20 PM
>> *To:* İsmail Keskin; Ashutosh Trivedi (MT2013030)
>> *Cc:* user@spark.apache.org
>> *Subject:* Re: Tableau beta connector
>>
>> Some quick context on how Tableau interacts with Spark / Hive can also be
>> found at https://www.concur.com/blog/en-us/connect-tableau-to-sparksql -
>> it describes how to connect from Tableau to the Thrift server before the
>> official Tableau beta connector existed, but it should provide some of the
>> additional context called out. HTH!
>>
>> On Wed, Feb 04, 2015 at 10:47:23 PM İsmail Keskin
>> <ismail.kes...@dilisim.com> wrote:
>>
>>> Tableau connects to the Spark Thrift Server via an ODBC driver, so none
>>> of the RDD stuff applies; you just issue SQL queries from Tableau.
>>>
>>> The table metadata can come from the Hive metastore if you place your
>>> hive-site.xml in the configuration directory of Spark.
>>>
>>> On Thu, Feb 5, 2015 at 8:11 AM, ashu <ashutosh.triv...@iiitb.org> wrote:
>>>
>>>> Hi,
>>>> I am trying out the Tableau beta connector to Spark SQL. I have a few
>>>> basic questions:
>>>> Will this connector be able to fetch the SchemaRDDs into Tableau?
>>>> Will all the SchemaRDDs be exposed to Tableau?
>>>> Basically, I am not getting what Tableau will fetch at the data source.
>>>> Is it existing files in HDFS? RDDs, or something else?
>>>> The question may be naive, but I did not get an answer anywhere else. I
>>>> would really appreciate it if someone who has already tried it could
>>>> help me with this.
>>>> Thanks,
>>>> Ashutosh
>>>>
>>>> --
>>>> View this message in context:
>>>> http://apache-spark-user-list.1001560.n3.nabble.com/Tableau-beta-connector-tp21512.html
>>>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
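[Editor's note] To make the "point everything at the master" suggestion at the top of the thread concrete, here is a rough sketch of the command sequence for a Spark standalone deployment. The commands are printed rather than executed because they assume a local Spark build with Hive/Thrift support; $SPARK_HOME, the host name, and myapp.jar are placeholders, not values from the thread.

```shell
# Sketch only (printed, not executed): wiring spark-shell, spark-submit, and
# the Thrift server to a single standalone master. Host/paths are placeholders.
MASTER_URL="spark://host:7077"   # 7077 is the default standalone master port

echo '$SPARK_HOME/sbin/start-master.sh'                               # 1. start the master
echo "\$SPARK_HOME/bin/spark-shell --master $MASTER_URL"              # 2. interactive shell
echo "\$SPARK_HOME/bin/spark-submit --master $MASTER_URL myapp.jar"   # 3. packaged job
echo "\$SPARK_HOME/sbin/start-thriftserver.sh --master $MASTER_URL"   # 4. SQL endpoint for Tableau
```

Once all three tools share one master URL, the tables the jobs create are served by the same cluster the Thrift server fronts.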
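[Editor's note] A rough sketch of the hive-site.xml wiring İsmail describes, again printed rather than executed since it assumes local Hive and Spark installs. $HIVE_HOME and $SPARK_HOME are placeholders, and 10000 is HiveServer2's usual default port, not a value taken from the thread.

```shell
# Sketch only (printed, not executed): expose the Hive metastore to Spark SQL
# and sanity-check the Thrift endpoint that Tableau's ODBC driver will use.
THRIFT_URL="jdbc:hive2://localhost:10000"   # default HiveServer2/Thrift port

echo 'cp $HIVE_HOME/conf/hive-site.xml $SPARK_HOME/conf/'         # 1. metastore config
echo '$SPARK_HOME/sbin/start-thriftserver.sh'                     # 2. start the SQL endpoint
echo "\$SPARK_HOME/bin/beeline -u $THRIFT_URL -e 'SHOW TABLES;'"  # 3. verify before Tableau
```

If `SHOW TABLES;` succeeds from beeline, Tableau should be able to reach the same host and port through its ODBC connection.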
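[Editor's note] Putting the thread together: a table created with registerTempTable is visible only inside its own SQLContext, so for a separately started Thrift server the processed data generally has to be written through the Hive metastore. The sketch below (table, class, and file names are illustrative, and the Spark 1.x idioms are assumptions, not verified against the beta connector) writes a Scala snippet to a file and shows the spark-shell invocation that would run it.

```shell
# Sketch only: persist an RDD-derived table into the Hive metastore so the
# Thrift server (and hence Tableau) can query it. Names are illustrative.
cat > persist_to_hive.scala <<'EOF'
// HiveContext picks up hive-site.xml from Spark's conf directory.
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
import hiveContext.createSchemaRDD  // Spark 1.x implicit: RDD[case class] -> SchemaRDD

case class Record(id: Int, value: String)
val processed = sc.parallelize(Seq(Record(1, "a"), Record(2, "b")))

// registerTempTable is session-scoped; CREATE TABLE ... AS SELECT writes
// through the metastore, making the result visible to other sessions.
processed.registerTempTable("processed_tmp")
hiveContext.sql("CREATE TABLE processed AS SELECT * FROM processed_tmp")
EOF

# Shown, not executed here: run the snippet against the standalone master.
echo '$SPARK_HOME/bin/spark-shell --master spark://host:7077 -i persist_to_hive.scala'
```

After the job runs, `processed` should appear in `SHOW TABLES;` from beeline or from Tableau's data-source list.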