Hi
Is it possible to restrict access to a few columns of a table by creating a
view on top of the table and exposing only the allowed columns in the view,
while making the table itself invisible in SELECT queries and SHOW TABLES,
with only the view visible?
Thanks
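A view-based restriction can be sketched like this (table, column, and user names are hypothetical; the GRANT syntax assumes hive's authorization model is enabled):

```sql
-- Hypothetical base table: employees(emp_id, emp_name, salary, ssn).
-- The view exposes only the allowed columns.
CREATE VIEW emp_public AS
SELECT emp_id, emp_name
FROM employees;

-- Grant access on the view only, not on the underlying table.
GRANT SELECT ON TABLE emp_public TO USER analyst;
```

Note that hiding the base table from SHOW TABLES is a separate concern: classic hive authorization restricts what can be queried, but SHOW TABLES generally still lists all tables in the database.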
Hi
I have a question about hive's locking mechanism.
I have 0.13 deployed on my cluster.
When I create an explicit lock using
lock table tablename partition(partitionname) exclusive;
it acquires the lock as expected.
I have a requirement to release the lock if the hive connection with the
process that created the lock
Hi
I have a requirement to dump parquet files into a hive table using custom MR.
Parquet has so many data models - avro-parquet, proto-parquet, hive-parquet.
Which one is recommended over the others for in-memory plain Java objects?
Hive internally uses MapredParquetOutputFormat. Is it better than
I have a hive script where I call a UDF.
The script works fine when called from a local shell script.
But when called from within an oozie workflow, it throws an exception saying
the jar is not found.
add jar hdfs://hdfspath of jar;
create temporary function funcname as 'pkg.className';
then on calling
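A common cause (assuming a standard oozie layout) is that the hive action cannot see the jar at runtime even though the interactive shell can. Two usual fixes are placing the UDF jar in the workflow application's `lib/` directory on HDFS, or pointing `oozie.libpath` at a directory containing it, e.g. in job.properties (paths are illustrative):

```
oozie.wf.application.path=hdfs:///user/me/hive-app
oozie.libpath=hdfs:///user/me/hive-app/lib
oozie.use.system.libpath=true
```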
In a normal MR job, can I configure (cluster-wide) a default number of
reducers to use if I don't specify any in my job?
In a MapReduce job, how is the number of reduce tasks decided?
I haven't overridden the mapred.reduce.tasks property and it's creating ~700
reduce tasks.
Thanks
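For a plain MapReduce job, the number of reduce tasks comes from mapred.reduce.tasks (default 1) unless the job overrides it with job.setNumReduceTasks(); a cluster-wide default can be set in mapred-site.xml (the value here is illustrative):

```xml
<property>
  <name>mapred.reduce.tasks</name>
  <value>10</value>
</property>
```

If the ~700 reducers are coming from a hive query rather than a hand-written job, hive estimates the reducer count from input size via hive.exec.reducers.bytes.per.reducer, which would explain a large number you never configured.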
Hi
Is there any way in hive 0.10 to rename a database?
Thanks
Hi
I want to understand hive's across-volume rename issue.
I am getting an error when loading an hdfs file into a hive table.
If the directory already exists in the table, loading fails, but the hdfs
file still gets renamed. On the second try, loading the renamed file
succeeds since the file is no longer present in the table.
Why does this issue come up, and what's
What could be the reason for this failure:
create table test_table(a int);
FAILED: Error in metadata: MetaException(message:javax.jdo.JDOException:
Couldnt obtain a new sequence (unique id) : Binary logging not possible.
Message: Transaction level 'READ-COMMITTED' in InnoDB is not safe for
binlog mode
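This is a MySQL-side error from the metastore database rather than hive itself: InnoDB with READ-COMMITTED isolation cannot use statement-based binary logging. A commonly reported fix is switching the binlog format on the metastore's MySQL server (in my.cnf, restart required):

```
[mysqld]
binlog_format = MIXED
```

ROW format also works; the point is to move away from pure STATEMENT logging.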
Hi
While writing to a partitioned hive table using oozie, at the end of the job
I am getting a copy-file exception.
Oozie is creating the job as the mapred user, not as the user who submitted
the job, and the table was created by another user.
Does providing 777 access on the table directory solve the problem? Or is
the table explicitly via UNLOCK TABLE.
If you're using ZK for your locking, I think the client dying (as opposed
to ending the session) should cause the lock to expire. If not, you may
have to ensure the unlock happens in your application. Hope that helps.
Alan.
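For reference, the explicit unlock mentioned above looks like this (the table name is illustrative):

```sql
SHOW LOCKS tablename;    -- inspect which locks are currently held
UNLOCK TABLE tablename;  -- release the explicit lock
```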
Gotta luv it. Good luck.
On Sat, Sep 20, 2014 at 8:00 AM, Shushant Arora shushantaror...@gmail.com
wrote:
Hi Alan
I have hive 0.10 deployed on my org's cluster, and I cannot upgrade it
because of org policy.
How can I achieve exclusive lock functionality while
Hive version 0.9 and later has a bug:
while inserting into a hive table, Hive takes an exclusive lock. But if the
table is partitioned and the insert goes into dynamic partitions, it takes a
shared lock on the table; if all partitions are static, hive takes an
exclusive lock only on the partitions into which data is written.
Shushant Arora shushantaror...@gmail.com
September 20, 2014 at 5:39
While doing a group by, if I do collect_set on some other column, the
documentation says it will return an array of that column's values after
removing duplicates, but it's not doing the dedup. Is this expected?
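A minimal illustration of the documented behavior (table and column names are made up):

```sql
-- Illustrative table: orders(customer STRING, product STRING)
SELECT customer, collect_set(product) AS distinct_products
FROM orders
GROUP BY customer;
```

Per the documentation, collect_set should return the distinct products per customer; if duplicates matter, collect_list (hive 0.13+) is the variant that keeps them.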
I have a hive table in which data is populated from an RDBMS on a daily
basis. After map reduce, each mapper writes its data into the hive table,
partitioned at month level.
The issue is that when the daily job runs, it fetches the last day's data
and each mapper writes its output to a separate file. Shall I merge those
files in
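If the goal behind the truncated question is simply to avoid many small files per partition, one option when the insert runs through hive is its file-merge settings (property names per the hive docs; the thresholds here are illustrative):

```sql
SET hive.merge.mapfiles=true;
SET hive.merge.mapredfiles=true;
SET hive.merge.size.per.task=256000000;
SET hive.merge.smallfiles.avgsize=16000000;
```

These only apply to hive-executed inserts; files written directly by an external MR job would need a separate compaction step, e.g. an INSERT OVERWRITE of the partition onto itself.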
in detail. Say, for example:
- How are you planning to consume the data stored in this partitioned table?
- Are you looking for storage and performance optimizations? Etc.
Thanks
Saurabh
Sent from my iPhone, please avoid typos.
On 05-May-2014, at 3:33 pm, Shushant Arora shushantaror...@gmail.com
wrote
for sqoop - it imports data from an RDBMS.
On Thu, May 1, 2014 at 7:13 PM, Shushant Arora
shushantaror...@gmail.comwrote:
Hi
I have a requirement to transfer data from an RDBMS (mysql) to a
partitioned hive table,
partitioned on year and month.
Each record in the mysql data contains the timestamp of the user
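One common pattern (a sketch; the table and column names are assumptions) is to land the sqoop import in an unpartitioned staging table and then repartition with a dynamic-partition insert, deriving year and month from the timestamp:

```sql
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

INSERT OVERWRITE TABLE user_events PARTITION (yr, mon)
SELECT user_id,
       event_ts,
       year(event_ts)  AS yr,   -- partition columns come last
       month(event_ts) AS mon
FROM user_events_staging;
```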
What are hive storage handlers?
What are the best practices for hive-hbase integration?
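A storage handler lets hive delegate storage to an external system by bundling an InputFormat, OutputFormat, and SerDe. The classic hive-hbase integration example from the hive wiki looks like this (the table, column family, and hbase table names are illustrative):

```sql
CREATE TABLE hbase_table_1 (key INT, value STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz");
```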