I invoke a DROP TABLE command and it just hangs. Here's the log. I am
using MySQL, and I can run DESCRIBE and CREATE TABLE through the MySQL
console, so I assume MySQL works properly. Can anyone help with this?
Thanks
2015-03-11 14:48:09,441 INFO [main]: ql.Driver
(Driver.java:checkConcurren
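Since the log stops at Driver.checkConcurrency, the hang is most likely while acquiring locks rather than in MySQL itself. A minimal diagnostic sketch, assuming lock support (hive.support.concurrency) is enabled; the table name below is only a placeholder:
hive> SHOW LOCKS;                      -- list every lock the lock manager currently holds
hive> SHOW LOCKS my_table EXTENDED;    -- 'my_table' stands in for the table being dropped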
Hi,
I am using a tool to move data from Oracle to HDFS via a HiveServer2 setup.
The tool is supposed to produce a text file that Hive is supposed to load into a
table.
I am getting the following error from HiveServer2:
Loading data to table asehadoop.rs_lastcommit partition (p_orig=0, p_c
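For reference, the statement behind that log line is a LOAD DATA into a partition. A minimal sketch of its general form; the HDFS path is hypothetical, and the second partition key, which is truncated in the log above, is shown with an illustrative name and value:
hive> LOAD DATA INPATH '/tmp/rs_lastcommit.txt'   -- hypothetical path to the text file the tool produced
    INTO TABLE asehadoop.rs_lastcommit
    PARTITION (p_orig=0, p_other='x');            -- second partition key name/value are illustrative only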
Hi,
I have set up HiveServer2 running on port 10010, and I can use Beeline from a
remote host to connect to the server host:
beeline -u jdbc:hive2://rhes564:10010/default org.apache.hive.jdbc.HiveDriver -n hduser -p x
scan complete in 5ms
Connecting to jdbc:hive2://rhes564:10010/default
S
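For comparison, the driver class can also be passed with Beeline's -d option instead of as a bare argument; a sketch of the same connection in that form:
beeline -u jdbc:hive2://rhes564:10010/default -d org.apache.hive.jdbc.HiveDriver -n hduser -p x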
Hi, I'm using HDP 2.2 (VM) with Hive 0.14.0.2.2.0.0-2041.
I did the following:
create table t1(id string); create table t2(id string); select
t1.INPUT__FILE__NAME from t1 join t2 on t1.id = t2.id
It fails with: Execution failed with exit status: 2
Note that this also happens on self joins: select t1_1.INPUT__
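One workaround sometimes suggested for virtual columns in joins is to materialize INPUT__FILE__NAME in a subquery before joining; a sketch against the tables above, with no guarantee that it avoids the failure on this particular Hive build:
hive> select x.fname
    from (select id, INPUT__FILE__NAME as fname from t1) x
    join t2 on x.id = t2.id;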
Hey Viral,
Is there a similar config for Tez?
Thanks
On Mon, Mar 9, 2015 at 6:36 PM, Viral Bajaria
wrote:
> We use the hive.job.name property to set meaningful job names. Look into
> using that before submitting queries.
>
> Thanks,
> Viral
>
>
> On Mon, Mar 9, 2015 at 2:47 PM, Alex Bohr
The AWS instance is down. We are working to restore it.
Thanks,
Xuefu
On Tue, Mar 10, 2015 at 12:17 AM, wangzhenhua (G)
wrote:
> Hi, all,
>
> When I build hive source using Maven, it gets stuck in:
> "Downloading:
> http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com/data/spark_2.10-1.3-r
@Xuefu: Thanks, I know this. Since I haven't gotten an answer there, I thought
maybe I could get some help here.
On Tue, Mar 10, 2015 at 2:42 PM, Xuefu Zhang wrote:
> This question seems more suitable to Spark community. FYI, this is Hive
> user list.
>
> On Tue, Mar 10, 2015 at 5:46 AM, shahab wro
This question seems more suitable for the Spark community. FYI, this is the Hive
user list.
On Tue, Mar 10, 2015 at 5:46 AM, shahab wrote:
> Hi,
>
> Does any one know how to deploy a custom UDAF jar file in SparkSQL? Where
> should i put the jar file so SparkSQL can pick it up and make it accessible
> fo
Hi,
Does anyone know how to deploy a custom UDAF jar file in SparkSQL? Where
should I put the jar file so SparkSQL can pick it up and make it accessible
to SparkSQL applications?
I do not use spark-shell; instead, I want to use it in a Spark application.
I posted the same question to the Spark mailing
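One approach sometimes used with a HiveContext-backed application is to ship the jar with the job (for example via spark-submit --jars) and register the function through SQL issued from the application; a minimal sketch, where the jar path, function name, class name, and table are all hypothetical:
ADD JAR /path/to/my-udaf.jar;
CREATE TEMPORARY FUNCTION my_udaf AS 'com.example.MyUDAF';
SELECT my_udaf(col1) FROM some_table;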
I have executed the Hive query below:
create table table_llv_N_C as select
table_line_n_passed.chromosome_number,table_line_n_passed.position,
table_line_c_passed.id from table_line_n_passed join table_line_c_passed on
(table_line_n_passed.chromosome_number=table_line_c_passed.chromosome_number)
and
Hi Swagatika,
Based on further log file analysis, I think the problem is low disk space.
Below is the full stack trace.
..
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while
processing row (tag=1)
{"key":{"joinkey0":"12"},"value":{"_col2":"."},"alias":1} at
org.apache.had
Hi all,
The MySQL error is resolved by creating the metastore tables using
hive-schema-0.14.0 from Hive 1.2.0-SNAPSHOT.
Thanks & Regards
Amithsha
On Tue, Mar 10, 2015 at 2:31 PM, Amith sha wrote:
> Have tried that too
> Dropped the database & created a new one
> But Same error
>
>
>
> Thanks & Regard
Have tried that too.
Dropped the database and created a new one,
but same error.
Thanks & Regards
Amithsha
On Tue, Mar 10, 2015 at 2:03 PM, Mich Talebzadeh wrote:
> OK not surprising the script does not have a means to drop the object before
> creating it so the script is recreating the object ag
OK, not surprising: the script does not have a means to drop the object before
creating it, so the script is recreating the object again!
So your best bet is to drop metastoreDB, recreate it as empty, and run your
script again.
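A minimal sketch of that drop-and-recreate approach, assuming the metastore database is named metastoreDB and the 0.14.0 MySQL schema script is the one being run, as elsewhere in this thread:
mysql> DROP DATABASE metastoreDB;
mysql> CREATE DATABASE metastoreDB;
mysql> USE metastoreDB;
mysql> SOURCE hive-schema-0.14.0.mysql.sql;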
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Publicati
After executing these commands
mysql> use metastoreDB
mysql> source hive-schema-0.14.0.mysql.sql
I am getting this:
ERROR 1061 (42000): Duplicate key name 'PCS_STATS_IDX'
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 ro
From the SQL code, do you know which table and primary key it is complaining
about? I have used the script for Oracle 11g and modified one to work with SAP
Sybase ASE 15.7, and they both worked fine.
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Publications due shortly:
Creating in-memory
Hi Srinivas,
I have used hive-schema-0.14.0, but the same error is found:
2015-03-10 13:04:03,829 ERROR [main]: DataNucleus.Datastore
(Log4JLogger.java:error(115)) - An exception was thrown while
adding/validating class(es) : Specified key was too long; max key
length is 767 bytes
com.mysql.jdb
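A commonly cited workaround for the DataNucleus "Specified key was too long; max key length is 767 bytes" error on MySQL is to keep the metastore database in a latin1 character set; a sketch, assuming the database is named metastoreDB as elsewhere in this thread:
mysql> ALTER DATABASE metastoreDB CHARACTER SET latin1 COLLATE latin1_bin;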
OK, thank you Srinivas, will try it and reply now.
Thanks & Regards
Amithsha
On Tue, Mar 10, 2015 at 12:59 PM, Srinivas Thunga
wrote:
> i am thinking to run hive-schema-0.14.0.mysql.sql. it will have 54 tables.
>
> Thanks & Regards,
>
> Srinivas T
>
> On Tue, Mar 10, 2015 at 12:55 PM, Amith sha wro
I am thinking of running hive-schema-0.14.0.mysql.sql. It will have 54 tables.
Thanks & Regards,
Srinivas T
On Tue, Mar 10, 2015 at 12:55 PM, Amith sha wrote:
> so can u suggest the solution
> Thanks & Regards
> Amithsha
>
>
> On Tue, Mar 10, 2015 at 12:53 PM, Srinivas Thunga
> wrote:
> > Hi,
Hi, all,
When I build the Hive source using Maven, it gets stuck at:
"Downloading:
http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com/data/spark_2.10-1.3-rc1/org/pentaho/pentaho-aggdesigner-algorithm/5.1.5-jhyde/pentaho-aggdesigner-algorithm-5.1.5-jhyde.pom",
and I can't connect to the server.
So can you suggest the solution?
Thanks & Regards
Amithsha
On Tue, Mar 10, 2015 at 12:53 PM, Srinivas Thunga
wrote:
> Hi,
>
> I guess you have the sql as
>
> hive-schema-1.1.0.mysql.sql
>
> for this you will get only 45 tables only as Nucleus will not be there.
>
> I am also faced the same problem
Hi,
I guess you have the SQL as
hive-schema-1.1.0.mysql.sql
For this you will get only 45 tables, as Nucleus will not be there.
I also faced the same problem.
Thanks & Regards,
Srinivas T
On Tue, Mar 10, 2015 at 12:46 PM, Amith sha wrote:
> Now i am able to create a metastore dat
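A quick way to check how many metastore tables the schema script actually created, assuming the metastore database is named metastoreDB as elsewhere in this thread:
mysql> SELECT COUNT(*) FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'metastoreDB';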
Now I am able to create a metastore database after exporting the Hive path
in .bashrc, but the same MySQL error is found.
Thanks & Regards
Amithsha
On Tue, Mar 10, 2015 at 12:03 PM, Amith sha wrote:
> Hi all,
>
> I have Configured Hive 1.1.0 in Hadoop 2.4.1 successfully.Have started
> the metastore by
The same error is found while inserting into a table.
Thanks & Regards
Amithsha
On Tue, Mar 10, 2015 at 12:29 PM, Mahesh Sankaran
wrote:
> Hi,
> I am working in hive-1.2.0-snapshot configuration.I have configured
> metastore with mysql.I can create database and table.But when i try to
> insert data
Hi,
I am working with a hive-1.2.0-snapshot configuration. I have configured the
metastore with MySQL. I can create databases and tables, but when I try to
insert data into a table, I am facing the following error.
hive> insert into table test values (1);
Query ID = hadoop2_20150310122424_8c7b0c91-384e-46b4-9621-