Hi Nitin,
How do I create a table with AcidOutputFormat? Can you send me some
examples?
Thanks
Mahesh
On Tue, Nov 4, 2014 at 12:21 PM, Nitin Pawar wrote:
> As the error says, your table's file format has to be AcidOutputFormat, or
> the table needs to be bucketed, to perform update operations.
Hi,
While Hive doesn't support multi-value LIKE queries, which are supported in
SQL, e.g.:
SELECT * FROM user_table WHERE first_name LIKE ANY ( 'root~%' , 'user~%' );
we can convert them into equivalent Hive queries, such as:
SELECT * FROM user_table WHERE first_name LIKE 'root~%' OR first_name
LIKE '
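A hedged sketch of the complete rewritten query, reusing the user_table and
first_name names from the example above: each pattern listed in LIKE ANY
becomes its own LIKE predicate, joined with OR.
-- Sketch only: equivalent of LIKE ANY ('root~%', 'user~%')
SELECT *
FROM user_table
WHERE first_name LIKE 'root~%'
   OR first_name LIKE 'user~%';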
As the error says, your table's file format has to be AcidOutputFormat, or
the table needs to be bucketed, to perform update operations.
You may want to create a new table with AcidOutputFormat, insert the data
from your current table into it, and then try the update on the new table,
along the lines of the sketch below.
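A rough sketch of that workflow, assuming Hive 0.14-style ACID requirements
(ORC storage, bucketing, and transactional=true); new_acid is a hypothetical
table name and the bucket count is an assumption to adjust for your setup:
-- Hypothetical ACID-capable target table (name and bucket count assumed).
CREATE TABLE new_acid (id int, name string)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
-- Copy data across from 'new', the existing non-ACID table from the
-- original question, then update in place.
INSERT INTO TABLE new_acid SELECT id, name FROM new;
UPDATE new_acid SET name = 'updated' WHERE id = 1;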
On
Hi,
Has anyone tried the Hive 0.14 configuration? I built it using Maven from
GitHub.
Insert is working fine, but when I use update/delete I get an error. First
I created a table and inserted rows:
CREATE TABLE new(id int, name string) ROW FORMAT DELIMITED FIELDS
TERMINATED BY ',';
insert into ta
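For reference, a hedged sketch of the session settings commonly cited as
prerequisites for UPDATE/DELETE on Hive 0.14, alongside the kind of
statements being attempted; treat the exact values as assumptions to verify
against your build:
-- Transaction-related settings usually needed in the client session
-- (assumed defaults; server-side compactor settings are separate).
SET hive.support.concurrency=true;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.enforce.bucketing=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
-- On a plain delimited-text table like the one above these still fail;
-- the table itself must be ORC, bucketed, and transactional.
UPDATE new SET name = 'abc' WHERE id = 1;
DELETE FROM new WHERE id = 2;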
As Nitin mentions, the documented behavior is to convert "to a string
representing the timestamp of that moment in the current system time zone".
What are the time zone settings on your machine?
$ TZ="GMT" date -r 0
Thu Jan 1 00:00:00 GMT 1970
$ TZ="UTC" date -r 0
Thu Jan 1 00:00:00 UTC 1970
$ TZ="Europe/London
I'd consider this behaviour a bug and would like to raise it as such.
Can anyone confirm it's the same on Hive 0.14?
On Fri, Oct 31, 2014 at 3:41 PM, Maciek wrote:
> Actually confirmed! It's down to the time zone settings.
> I've temporarily moved the server/client settings to 'Atlantic/Rey
Late reply, but I saw a similar issue to the one described in HIVE-6928,
which is fixed. Not sure if it's the same issue. You can also try
--outputFormat=vertical as a workaround.
Thanks
Szehon
On Sat, Nov 1, 2014 at 12:03 AM, Vikas Parashar wrote:
> No Cli, will give you that access. This is all depe
What is your connection URL? In the case of Kerberos, it should be of the form
jdbc:hive2://<host>:<port>/<db>;principal=<server_principal>
thanks
Prasad
On Mon, Nov 3, 2014 at 1:16 AM, konark modi wrote:
> Hi Vaibhav,
>
> Yes, I have done that. I am on Hive version 0.12.
>
> But the same problem.
>
> Regards
> Konark
> On Nov 3, 2014 2:1
Hi Team,
I am trying to create a parent-child hierarchy using a Hive UDAF.
My mapper output is coming out as expected, but in the reducer's merge
function my AggregationBuffer object is overwritten while calling the
((LazyBinaryMap) ).getMap() function. This is happening intermittently.
Please le
Hi Vaibhav,
Yes, I have done that. I am on Hive version 0.12.
But the same problem.
Regards
Konark
On Nov 3, 2014 2:14 PM, "Vaibhav Gumashta" wrote:
> Are you doing a kinit before launching beeline shell? Also what Hive
> version are you on?
>
> You need to do a kinit before you start using th
Are you doing a kinit before launching the beeline shell? Also, what Hive
version are you on?
You need to do a kinit before you start using the beeline shell in a
kerberized cluster setup. Kinit essentially logs the end user in to the
Kerberos KDC and sets up the Kerberos TGT in the local system cache.
T
Hi All,
I have been running into a strange issue since last week and would
appreciate your help.
I have a remote client from which I want to connect to Hive via beeline.
I have two Hadoop clusters, one with Kerberos authentication and the other
without authentication. I am able to connect to the Hive which does not