You can also use
ALTER TABLE SET TBLPROPERTIES('EXTERNAL'='TRUE')
And then drop it
Daniel
> On 22 March 2015, at 04:15, Stephen Boesch wrote:
>
>
> There is a hive table for which the metadata points to a non-existing hdfs
> file. Simply calling
>
> drop table
>
> results in:
>
> Fa
You can (as a workaround) just create its directory and then drop it
Daniel
> On 22 March 2015, at 04:15, Stephen Boesch wrote:
>
>
> There is a hive table for which the metadata points to a non-existing hdfs
> file. Simply calling
>
> drop table
>
> results in:
>
> Failed to load me
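Both workarounds above (marking the table EXTERNAL, or recreating its missing HDFS directory) can also be scripted. Here is a minimal JDBC sketch of the first one; the JDBC URL, user, and table name are placeholders, and it assumes a reachable HiveServer2:

```java
// Minimal sketch, assuming a reachable HiveServer2 and a broken table
// named db.mytable; URL, user, and table name are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Arrays;
import java.util.List;

public class DropBrokenTable {
    // Statements for the workaround: mark the table EXTERNAL so DROP TABLE
    // only removes the metadata (it no longer tries to delete the data),
    // then drop it.
    static List<String> workaroundStatements(String table) {
        return Arrays.asList(
            "ALTER TABLE " + table + " SET TBLPROPERTIES('EXTERNAL'='TRUE')",
            "DROP TABLE " + table);
    }

    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "hduser", "");
             Statement st = con.createStatement()) {
            for (String sql : workaroundStatements("db.mytable")) {
                st.execute(sql);
            }
        }
    }
}
```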
Hi Mich,
:) A coincidence. I am also new to Hive. The test script which I am trying to
execute contains a drop and a create statement.
Script:
use test_db;
DROP TABLE IF EXISTS demoHiveTable;
CREATE EXTERNAL TABLE demoHiveTable (
demoId string,
demoName string
) ROW FORMAT DELIMITED FIELDS TERM
There is a hive table for which the metadata points to a non-existing hdfs
file. Simply calling
drop table
results in:
Failed to load metadata for table: db.mytable
Caused by TableLoadingException: Failed to load metadata for table: db.mytable
File does not exist: hdfs://
Caused by Fi
Hi Amal;
Coming from a relational database (Oracle, Sybase) background, I always
expect that a DDL statement like DROP TABLE has to run in its own
transaction and cannot be combined with a DML statement.
Now I suspect that when you run the command DROP TABLE IF EXISTS
; like below in beeline
Hi Mich,
Thank you for your response. I was not aware of beeline. I have now included
it in my app, and it looks like a much better solution going forward. In the
last couple of hours I have tried to work with beeline but have been facing
some issues.
1. I was able to run on the remote
Thanks a lot, Steve, for your time and response.
I did try that out and was able to execute the basic select statements. But the
concern I had is that the queries contained in the .hql files are complicated
queries with joins, grouping, etc., and they also do some insert operations.
Essentially, I was
Hi Amal,
Do you have hiveserver2 running?
You can use beeline to execute the query outside of Java:
beeline -u jdbc:hive2://rhes564:10010/default -d
org.apache.hive.jdbc.HiveDriver -n hduser -p '<password>' -f
./create_index_on_t.sql
And the output shows there as well.
scan complete in 10m
Hi Hive experts,
I know that Hive writes logs to ATS (Application Timeline Server) when a
hive script is executed from the CLI. However, when I submit Hive jobs
through WebHCat, it seems that no logs are written to ATS. How could I
configure Hive to make the jobs submitted from Web
Hi Everyone,
I am trying to execute a Hive *.hql file from a Java application. I have tried
a couple of ways of doing it, through the JDBC driver for Hive and through the
Spring JDBC template, but the only way that was successful for me was to create
a runtime process and then execute it.
The java
There are more elegant ways I am sure, but you could also use a
java.io.BufferedReader and read the file content into a string and execute it
much as you would a hard coded SQL statement in your class.
Sent from my iPad
> On Mar 21, 2015, at 5:04 AM, Amal Gupta wrote:
>
> Hi Everyone,
>
> I
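The BufferedReader approach suggested above might look like the sketch below: read the .hql file into one string, split it into statements, and execute each one much as you would a hard-coded statement. The file path, JDBC URL, and credentials are assumptions, and the naive split on ';' is only good enough for simple scripts:

```java
// Sketch of the BufferedReader suggestion: read a .hql file into a string,
// split it into statements, and execute each via JDBC. The file path, JDBC
// URL, and credentials below are placeholders.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class HqlRunner {
    // Read the whole file into a single string.
    static String readFile(String path) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader br = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = br.readLine()) != null) {
                sb.append(line).append('\n');
            }
        }
        return sb.toString();
    }

    // Naive split on ';' -- enough for simple scripts, but it will break on
    // semicolons inside string literals or comments.
    static List<String> splitStatements(String script) {
        List<String> stmts = new ArrayList<>();
        for (String part : script.split(";")) {
            String trimmed = part.trim();
            if (!trimmed.isEmpty()) {
                stmts.add(trimmed);
            }
        }
        return stmts;
    }

    public static void main(String[] args) throws Exception {
        // Execute each statement much as you would a hard-coded one
        // (requires a running HiveServer2, so shown here as a comment):
        // try (java.sql.Connection con = java.sql.DriverManager.getConnection(
        //         "jdbc:hive2://localhost:10000/default", "user", "")) {
        //     for (String sql : splitStatements(readFile("script.hql"))) {
        //         con.createStatement().execute(sql);
        //     }
        // }
    }
}
```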
Hi experts
I am submitting a Hive job (run on Tez) using WebHCat, and I am wondering if
there is a programmatic way to get the YARN application ID of the job I
submitted. From WebHCat/Templeton I can only get the job id, which sometimes
does not equal the YARN application ID...
Xiaoyong