Cassandra table.
Spark SQL does not provide any feature for safe parameter binding, so
I thought about using the JDBC Thrift server and the JDBC interface.
Inserting data into an external table from Hive is performed by
running CREATE EXTERNAL TABLE ... STORED BY ...
However, when trying to execute this statement through the Thrift
server, I always get the following error
Hi,
We've been using the JDBC thrift server for a couple of weeks now and running
queries on it like a regular RDBMS.
We're about to deploy it in a shared production cluster.
Any advice or warnings on such a setup? YARN or Mesos?
How about dynamic resource allocation in an already running Thrift server?
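For reference, dynamic allocation is driven by standard Spark configuration properties such as the following (values are placeholders; on YARN the external shuffle service must also be running on the NodeManagers):

```
spark.dynamicAllocation.enabled        true
spark.shuffle.service.enabled          true
spark.dynamicAllocation.minExecutors   1
spark.dynamicAllocation.maxExecutors   20
```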
Which version of Spark are you using? You might encounter SPARK-6882
<https://issues.apache.org/jira/browse/SPARK-6882> if Kerberos is enabled.
-Sathish
On Thu, Oct 8, 2015 at 10:46 AM Younes Naguib <
younes.nag...@tritondigital.com> wrote:
Sent from the Apache Spark User List mailing list archive at Nabble.com.
What’s the command line you used to build Spark? Notice that you need to
add -Phive-thriftserver to build the JDBC Thrift server. This profile
was removed in v1.1.0, but added back in v1.2.0 because of a
dependency issue introduced by Scala 2.11 support.
On 11/27/14 12:53 AM
Thanks for your response.
I'm using the following command.
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -DskipTests clean
package
Regards.
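Per the earlier note about -Phive-thriftserver, the same command with that profile added would be:

```
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 \
    -Phive -Phive-thriftserver -DskipTests clean package
```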
Yes, I'm building it from Spark 1.1.0.
Thanks in advance.
Which version are you using? Also, .saveAsTable() saves the table to
the Hive metastore, so you need to make sure your Spark application points
to the same Hive metastore instance as the JDBC Thrift server. For
example, put hive-site.xml under $SPARK_HOME/conf, and run spark-shell.
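A minimal sketch of that setup, assuming Spark 1.x (the table name and input path are hypothetical, not from this thread):

```scala
// Assumes spark-shell was started with hive-site.xml in $SPARK_HOME/conf,
// so this HiveContext and the Thrift server share one Hive metastore.
import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)            // sc: the shell's SparkContext
val people = hiveContext.jsonFile("people.json") // hypothetical input file
people.saveAsTable("people")                     // persists into the shared metastore

// A Thrift-server client pointed at the same metastore should then see it:
//   SELECT * FROM people;
```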
I am writing a Spark job to persist data using HiveContext so that it can
be accessed via the JDBC Thrift server. Although my code doesn't throw an
error, I am unable to see my persisted data when I query from the Thrift
server.
I tried three different ways to get this to work:
1)
val