All locks in Hive are at the database, table, or partition level; there
are no row-level locks. When using DbTxnManager, locking is made as
fine-grained as possible (i.e., only partitions are locked when we can
know a priori which partitions the query will use).
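As an illustrative sketch (the table name `sales` and partition column `dt` are hypothetical), partition-level locking with DbTxnManager looks like this, and SHOW LOCKS lets you confirm what was actually taken:

```sql
-- Enable the transactional lock manager (normally set in hive-site.xml).
SET hive.support.concurrency=true;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;

-- Because the WHERE clause names the partition up front, only that
-- partition is locked, not the whole table.
SELECT count(*) FROM sales WHERE dt = '2015-04-07';

-- From another session, inspect the locks currently held:
SHOW LOCKS sales;
```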
Alan.
Mich
Hi,
Thanks for all the useful info. I managed to set up Hive concurrency using
an Oracle metastore for Hive. I had to modify the hive-txn-schema-0.14.0.oracle.sql
script in order to drop the existing tables, as I had created the metastore
with hive-schema-0.14.0.oracle.sql initially.
All the
I'm happy to look into improving the Regex serde performance; any tips
on where I should start looking?
There are three things off the top of my head.
First up, the matcher needs to be reused within a single scan. You can
also check the groupCount exactly once for a given pattern.
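A minimal sketch of those two tips, reusing one Matcher across rows and reading the group count exactly once per pattern (class and method names here are illustrative, not the actual RegexSerDe code):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: compile the Pattern once, allocate one Matcher, and reset() it
// for each row instead of creating a new Matcher per row.
class ReusableRowParser {
    private final Pattern pattern;
    private final Matcher matcher;    // created once, reset per row
    private final int groupCount;     // checked exactly once per pattern

    ReusableRowParser(String regex) {
        this.pattern = Pattern.compile(regex);
        this.matcher = pattern.matcher("");  // empty input; reset() swaps it
        this.groupCount = matcher.groupCount();
    }

    // Returns the captured groups for a matching row, or null on no match.
    String[] parse(CharSequence row) {
        matcher.reset(row);           // reuse the Matcher, no allocation
        if (!matcher.matches()) {
            return null;
        }
        String[] fields = new String[groupCount];
        for (int i = 0; i < groupCount; i++) {
            fields[i] = matcher.group(i + 1);
        }
        return fields;
    }
}
```

Within a single scan this avoids one Matcher allocation and one groupCount lookup per row, which adds up over millions of rows.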
Hi Mich,
I have faced the same issue while inserting into a table; nothing had helped
me with it.
Surprisingly, restarting the cluster worked for me. I am also interested
in a solution to this.
Also, can you please check whether there is a lock on the table causing the
mapper to wait for it?
-
Hi,
This is my first post to this forum :)
I found that only 'ZLIB, SNAPPY, LZO' are supported.
Is there a way to add custom splittable codec support for the ORC file format
in Hive?
Thanks
Amar Mhatre
Hi Sanjiv,
I can see that this can be a major problem, as fundamentally we don’t have any
real clue about the cause of the issue logged anywhere.
I did the following to “try” to resolve the issue:
1. Rebooted the cluster. No luck.
2. Reformatted the Namenode and cleaned up
Hi,
I'm working with Hive.
I have many queries executing in a specific database in Hive, and I have
tried to create a table in that database.
The CREATE is blocked until the queries end. If those queries last 10
hours, I can't create any table for the next 10 hours; is that normal? It
seems a
There currently isn't a way to do that. What are your requirements that
would be easier with a custom codec? ORC uses the codecs in a very specific
way so that it can support indexing. By default ORC indexes every 10k rows,
and the compression is done in blocks so that the reader can skip over
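To illustrate the supported choices, the codec is selected per table through table properties (the table and column names below are hypothetical; `orc.compress` and `orc.row.index.stride` are the standard ORC properties):

```sql
-- Pick one of the built-in ORC codecs at table-creation time.
CREATE TABLE events_orc (id INT, payload STRING)
STORED AS ORC
TBLPROPERTIES (
  'orc.compress' = 'SNAPPY',         -- e.g. ZLIB, SNAPPY, or NONE
  'orc.row.index.stride' = '10000'   -- one index entry per 10k rows (the default)
);
```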
If you're seeing it list progress (or attempted progress) as here, this
isn't a locking issue. All locks are obtained before the job is
submitted to Hadoop.
Alan.
Mich Talebzadeh m...@peridale.co.uk
April 7, 2015 at 14:09
Hi,
Today I have noticed the following issue.
A simple
Hello,
Using Hadoop 2.6.0 and OOZIE 4.1.0 with apache-hive 1.0.
I am seeing an issue with the Oozie workflow example failing. Its two
instructions are:
CREATE EXTERNAL TABLE test (a INT) STORED AS TEXTFILE LOCATION '${INPUT}';
INSERT OVERWRITE DIRECTORY '${OUTPUT}' SELECT * FROM test;
The first
As a secondary thought, is it possible to remove the table from MySQL if
it's not possible to remove it from Hive? Which entries in the MySQL
tables would I need to remove?
On Tue, Apr 7, 2015 at 10:52 AM, Udit Mehta ume...@groupon.com wrote:
Hi,
I was able to create a highly nested table