There are issues with locks not being released even when the transaction is
aborted; entries remain in the HIVE_LOCKS table.
I ended up deleting the row from the HIVE_LOCKS table manually. Not ideal, but
you know the lock should not be there since the table has already been dropped.
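For reference, a minimal sketch of that manual cleanup, run directly against
the metastore database rather than through Hive (the HL_* column names follow
the standard metastore schema; the table name and lock id below are
placeholders):

-- in the metastore RDBMS: find the stale lock first
SELECT HL_LOCK_EXT_ID, HL_DB, HL_TABLE, HL_LOCK_STATE
FROM HIVE_LOCKS
WHERE HL_TABLE = 'testme';

-- then remove it by its external lock id (placeholder value)
DELETE FROM HIVE_LOCKS WHERE HL_LOCK_EXT_ID = 123;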
HTH
Dr Mich Talebzadeh
Hello, I'm using Apache Hive 1.2.1 and Apache Storm to stream data into a Hive
table.
After running some tests I tried to truncate my table, but the SQL statement
never completes because of a lock on the table:
select * from HIVE_LOCKS;
# TXN_ID, TXN_STATE, TXN_STARTED, TXN_LAST_HEARTBEAT, TXN_USER,
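For completeness, the same lock state can also be inspected from within Hive
itself; SHOW LOCKS is standard HiveQL, and the table name below is a
placeholder:

hive> SHOW LOCKS;
hive> SHOW LOCKS my_table EXTENDED;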
Thank you for the responses
On Sunday, August 21, 2016, Mich Talebzadeh wrote:
> Hi Rahul,
>
> I don't believe you can drop a particular bucket in Hive
>
> HTH
>
> Dr Mich Talebzadeh
Hi Mich!
There is no problem in displaying records or performing any aggregations on
the records after inserting data from Spark into the Hive table. It is the
count query in Hive that returns the wrong result prior to issuing the
COMPUTE STATISTICS command.
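If the wrong count comes from Hive answering count(*) out of stale table
statistics rather than scanning the data, one way to test that theory is to
disable the stats shortcut. hive.compute.query.using.stats is the standard
property here; the table name is a placeholder:

hive> set hive.compute.query.using.stats=false;
hive> select count(*) from my_table;  -- forces a real scan instead of a stats lookup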
On Mon, Aug 22, 2016 at
OK, this is my test:
1) Create a table in Hive and populate it with two rows
hive> create table testme (col1 int, col2 string);
OK
hive> insert into testme values (1,'London');
Query ID = hduser_20160821212812_2a8384af-23f1-4f28-9395-a99a5f4c1a4a
OK
hive> insert into testme values (2,'NY');
Query ID
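The excerpt is cut off at the second Query ID; presumably the next step is
verifying the row count from the Hive side. A sketch of that check, assuming
the testme table above:

hive> select count(1) from testme;  -- expected: 2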
Hi Furcy,
If I execute the command "ANALYZE TABLE TEST_ORC COMPUTE STATISTICS" before
checking the count from Hive, Hive returns the correct count, albeit without
spawning a map-reduce job to compute it.
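Spelled out, that workaround sequence looks like the sketch below (TEST_ORC as
in this thread; the count is then served from the freshly computed statistics,
which is why no map-reduce job is spawned):

hive> ANALYZE TABLE TEST_ORC COMPUTE STATISTICS;
hive> select count(*) from TEST_ORC;  -- correct count, answered from stats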
I'm running an HDP 2.4 cluster with Hive 1.2.1.2.4 and Spark 1.6.1.
If others can
Hi Nitin,
I confirm that there is something odd here.
I did the following test:
create table test_orc (id int, name string, dept string) stored as ORC;
insert into table test_orc values (1, 'abc', 'xyz');
insert into table test_orc values (2, 'def', 'xyz');
insert into table test_orc values
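The excerpt is cut off above; presumably the test ends by comparing counts
before and after a statistics refresh. A sketch of that verification, assuming
the same test_orc table:

hive> select count(*) from test_orc;  -- may report a stale value
hive> ANALYZE TABLE test_orc COMPUTE STATISTICS;
hive> select count(*) from test_orc;  -- correct after the refresh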
Hi!
I've noticed that Hive has problems registering new data records when the
same table is written to from both the Hive terminal and Spark SQL. The
problem is demonstrated by the commands listed below:
hive> use