Great, glad it worked out!
From: Todd Nist
Date: Thursday, February 19, 2015 at 9:19 AM
To: Silvio Fiorito
Cc: user@spark.apache.org
Subject: Re: SparkSQL + Tableau Connector
Hi Silvio,
I got this working today using your suggestion with the Initial SQL and a Custom
Subject: Re: SparkSQL + Tableau Connector
First sorry for the long post. So back to tableau and Spark SQL, I'm
still missing something.
TL;DR
To get the Spark SQL temp table associated with the metastore, are there
additional steps required beyond doing the below?
Initial SQL on connection
That is the Hive metastore MySQL database, if you are using MySQL as the DB
for the metastore.
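For reference, the kind of statement the thread later settles on for the Initial SQL box (registering a temp table backed by a parquet file, then caching it) looks like this; the table name and path follow the Spark examples directory, not necessarily the poster's actual setup:

```sql
-- Hypothetical Initial SQL for the Tableau connection dialog: runs once on
-- connect, registers a temp table over a parquet file, then caches it.
create temporary table users
using org.apache.spark.sql.parquet
options (path 'examples/src/main/resources/users.parquet');
cache table users;
```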
Date: Wed, 11 Feb 2015 19:53:35 -0500
Subject: Re: SparkSQL + Tableau Connector
From: tsind...@gmail.com
To: alee...@hotmail.com
CC: ar...@sigmoidanalytics.com; user@spark.apache.org
Sorry folks, it is executing Spark jobs instead of Hive jobs. I misread the
logs since there were other activities going on on the cluster.
...@hotmail.com
To: ar...@sigmoidanalytics.com; tsind...@gmail.com
CC: user@spark.apache.org
Subject: RE: SparkSQL + Tableau Connector
Date: Wed, 11 Feb 2015 11:56:44 -0800
I'm using MySQL as the metastore DB with Spark 1.2.
I simply copied the hive-site.xml to /etc/spark/ and added the MySQL JDBC
JAR
Hi Arush,
So yes, I want to create the tables through Spark SQL. I have placed the
hive-site.xml file inside the $SPARK_HOME/conf directory; I thought that
was all I should need to do to have the thriftserver use it. Perhaps my
hive-site.xml is wrong; it currently looks like this:
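The file's content did not survive in this archive preview. For orientation only, a typical MySQL-backed hive-site.xml looks roughly like the sketch below; the host, database, user, and password values are placeholders, not the poster's actual configuration:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Placeholder values; substitute your metastore host, DB, and credentials. -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/metastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
  </property>
</configuration>
```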
NULL  Michael
30    Andy
19    Justin
Time taken: 0.576 seconds
From: Todd Nist
Date: Tuesday, February 10, 2015 at 6:49 PM
To: Silvio Fiorito
Cc: user@spark.apache.org
Subject: Re: SparkSQL + Tableau Connector
Hi Silvio,
Ah, I like
Hi,
I'm trying to understand how and what the Tableau connector to SparkSQL is
able to access. My understanding is that it needs to connect to the
thriftserver, but I am not sure whether it exposes parquet, json, and
schemaRDDs, or only schemas defined in the metastore / Hive.
For
I am a little confused here: why do you want to create the tables in Hive?
You want to create the tables in spark-sql, right?
If you are not able to find the same tables through Tableau, then thrift is
connecting to a different metastore than your spark-shell.
One way to specify a metastore to
BTW what tableau connector are you using?
On Wed, Feb 11, 2015 at 12:55 PM, Arush Kharbanda
ar...@sigmoidanalytics.com wrote:
1. Can the connector fetch or query schemaRDDs saved to Parquet or JSON
files? No.
2. Do I need to do something to expose these via hive / metastore other
than creating a table in hive? Create a table in Spark SQL to expose it via
Spark SQL.
3. Does the thriftserver need to be configured to expose
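For completeness, the thriftserver side that question 3 refers to is typically started and smoke-tested with the stock Spark scripts below; the master URL and port are placeholders, and this is a sketch of the usual Spark 1.2-era workflow, not the poster's exact commands:

```
# Start the Thrift JDBC/ODBC server; it reads hive-site.xml from $SPARK_HOME/conf
$SPARK_HOME/sbin/start-thriftserver.sh --master spark://master-host:7077
# Verify what the server exposes with beeline before pointing Tableau at it
$SPARK_HOME/bin/beeline -u jdbc:hive2://localhost:10000 -e "show tables;"
```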
Subject: Re: SparkSQL + Tableau Connector
Hi Silvio,
Ah, I like that. There is a section in Tableau for Initial SQL to be executed
upon connecting; this would fit well there. I guess I will need to issue a
collect(), coalesce(1, true).saveAsTextFile(...), or use repartition(1), as the
file currently
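A minimal Scala sketch of the single-file write being described; the sample data mirrors the src table output earlier in the thread, and the output path is hypothetical:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("single-file-write").setMaster("local[*]"))
val rdd = sc.parallelize(Seq("NULL\tMichael", "30\tAndy", "19\tJustin"))
// Collapse to one partition (with a shuffle) so saveAsTextFile emits a single part file.
rdd.coalesce(1, shuffle = true).saveAsTextFile("/tmp/users-single-file")
// rdd.repartition(1) is equivalent to coalesce(1, shuffle = true).
```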
Arush,
As for #2 do you mean something like this from the docs:
// sc is an existing SparkContext.
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
sqlContext.sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")
sqlContext.sql("LOAD DATA LOCAL INPATH 'examples/src/main/resources/kv1.txt' INTO TABLE src")
Arush,
Thank you, I will take a look at that approach in the morning. I sort of
figured the answer to #1 was no, and that I would need to do 2 and 3; thanks
for clarifying it for me.
-Todd
On Tue, Feb 10, 2015 at 5:24 PM, Arush Kharbanda ar...@sigmoidanalytics.com
wrote:
1. Can the connector
create temporary table users
using org.apache.spark.sql.parquet
options (path 'examples/src/main/resources/users.parquet');
cache table users
From: Todd Nist
Date: Tuesday, February 10, 2015 at 3:03 PM
To: user@spark.apache.org
Subject: SparkSQL + Tableau Connector