I am able to connect to MySQL Hive metastore from the client cluster
machine.
-sh-4.1$ mysql --user=hiveuser --password=pass --host=
hostname.vip.company.com
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 9417286
Server version: 5.5.12-eb-5.5.12-log MySQL-eb
The mysql command line doesn't use JDBC to talk to MySQL server, so
this doesn't verify anything.
I think this Hive metastore installation guide from Cloudera may be
helpful. Although this document is for CDH4, the general steps are the
same, and should help you to figure out the
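Since the mysql CLI doesn't exercise JDBC, a quick way to check what Spark actually needs — the MySQL JDBC driver being visible on the classpath — is a one-line class lookup. A minimal sketch; `com.mysql.jdbc.Driver` is the standard Connector/J driver class name:

```scala
// Minimal classpath check: prints "driver present" only if
// mysql-connector-java (Connector/J) is on the classpath,
// otherwise reports it missing.
object DriverCheck {
  def main(args: Array[String]): Unit = {
    try {
      Class.forName("com.mysql.jdbc.Driver")
      println("driver present")
    } catch {
      case _: ClassNotFoundException => println("driver missing")
    }
  }
}
```

If this prints "driver missing" in the same environment where Spark runs, no amount of metastore configuration will help until the connector jar is on the classpath.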
Hello Lian
Can you share the URL ?
On Mon, Mar 30, 2015 at 6:12 PM, Cheng Lian lian.cs@gmail.com wrote:
Ah, sorry, my bad...
http://www.cloudera.com/content/cloudera/en/documentation/cdh4/v4-2-0/CDH4-Installation-Guide/cdh4ig_topic_18_4.html
On 3/30/15 10:24 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) wrote:
I have raised a JIRA (https://issues.apache.org/jira/browse/SPARK-6622) to track this issue, and in case it requires a fix from Spark.
On Tue, Mar 31, 2015 at 9:31 AM, ÐΞ€ρ@Ҝ (๏̯͡๏) deepuj...@gmail.com wrote:
Hello Lian,
This blog talks about how to install the Hive metastore. I
Yes, I am using yarn-cluster and I did add it via --files. I still get a
"No suitable driver found" error.
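That "No suitable driver" message comes from java.sql.DriverManager when no registered JDBC driver accepts the jdbc:mysql:// URL — i.e. the connector jar is not on the driver's classpath. It can be reproduced outside Spark; a sketch, where the host, database, and credentials are placeholders:

```scala
import java.sql.{DriverManager, SQLException}

object JdbcProbe {
  def main(args: Array[String]): Unit = {
    // Placeholder URL; substitute your metastore host and database.
    val url = "jdbc:mysql://hostname.vip.company.com:3306/hive"
    try {
      val conn = DriverManager.getConnection(url, "hiveuser", "pass")
      println("connected")
      conn.close()
    } catch {
      case e: SQLException =>
        // Without mysql-connector-java on the classpath this is
        // "No suitable driver found for jdbc:mysql://..."
        println(s"failed: ${e.getMessage}")
    }
  }
}
```

In yarn-cluster mode the driver runs inside the YARN application master, so the jar has to reach that JVM's classpath, not just the submitting machine's.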
Please share the spark-submit command that shows the mysql jar containing the
driver class used to connect to the Hive MySQL metastore.
Even after including it through
--driver-class-path
I have a few tables that were created in Hive. I want to transform the data
stored in these Hive tables using Spark SQL. Is this even possible?
So far I have seen that I can create new tables using the Spark SQL dialect.
However, when I run show tables or do desc hive_table, it says the table is
not found.
I am now
It seems Spark SQL accesses some more columns apart from those created by Hive.
You can always recreate the tables; you would need to execute the table
creation scripts, but it would be good to avoid recreation.
On Fri, Mar 27, 2015 at 3:20 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) deepuj...@gmail.com wrote:
I did copy hive-site.xml from the Hive installation into spark-home/conf. It
does have all the metastore connection details: host, username, password,
driver and others.
Snippet
==
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
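For reference, the metastore section of a hive-site.xml typically carries four properties: the JDBC URL, driver class, username, and password. A hedged example — hostnames, database name, and credentials here are placeholders, not values from this thread:

```xml
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hostname.vip.company.com:3306/hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>pass</value>
  </property>
</configuration>
```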
Since Hive and Spark SQL internally use HDFS and the Hive metastore, the only
thing you want to change is the processing engine. You can try to bring your
hive-site.xml to %SPARK_HOME%/conf/hive-site.xml. (Ensure that the
hive-site.xml captures the metastore connection details.)
It's a hack, I haven't
I can recreate tables, but what about the data? It looks like this is an
obvious feature that Spark SQL must have. People will want to transform tons
of data stored in HDFS through Hive from Spark SQL.
The Spark programming guide suggests it's possible.
Spark SQL also supports reading and writing
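It is indeed possible: with a Hive-enabled Spark build and hive-site.xml in place, existing Hive tables are queryable directly. A sketch against the Spark 1.x API used in this thread (not runnable as-is; the table and column names are placeholders):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Sketch: assumes Spark was built with Hive support and hive-site.xml
// is on the classpath, so the existing metastore is picked up.
val sc = new SparkContext(new SparkConf().setAppName("hive-transform"))
val hiveCtx = new HiveContext(sc)

// Existing Hive tables should show up here once the metastore is wired up.
hiveCtx.sql("SHOW TABLES").collect().foreach(println)

// Transform data already stored in a Hive table and write it back.
val df = hiveCtx.sql("SELECT * FROM hive_table WHERE some_col IS NOT NULL")
df.registerTempTable("filtered")
hiveCtx.sql("CREATE TABLE hive_table_clean AS SELECT * FROM filtered")
```

If SHOW TABLES comes back empty, Spark is talking to a fresh local Derby metastore instead of the configured MySQL one, which is exactly the symptom described above.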
Are you running on yarn?
- If you are running in yarn-client mode, set HADOOP_CONF_DIR to
/etc/hive/conf/ (or the directory where your hive-site.xml is located).
- If you are running in yarn-cluster mode, the easiest thing to do is to
add --files=/etc/hive/conf/hive-site.xml (or the path for
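Putting these pieces together, a yarn-cluster submission that ships both hive-site.xml and the MySQL connector jar might look like the following. A sketch only — the paths, jar name, application class, and jar are all placeholders:

```sh
spark-submit \
  --master yarn-cluster \
  --files /etc/hive/conf/hive-site.xml \
  --jars /path/to/mysql-connector-java.jar \
  --driver-class-path mysql-connector-java.jar \
  --class com.example.YourApp \
  your-app.jar
```

In yarn-cluster mode --files and --jars localize the files into each container's working directory, which is why --driver-class-path can refer to the jar by its bare name there.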