I normally set the default value to 0 if it's empty.
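For the scalar case, that default can be expressed inline with COALESCE; a minimal sketch (the table and column names here are hypothetical):

```sql
-- Hypothetical table/column; COALESCE substitutes 0 when the value is NULL.
SELECT COALESCE(some_col, 0) AS some_col_or_zero
FROM some_table;
```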
On Thu, Dec 19, 2013 at 12:04 PM, Andre Araujo wrote:
> Thanks, Nitin.
>
> More specifically, what about empty arrays? How can I convince Hive that
> an empty array -- array() -- has a type of array?
>
> Thanks again!
> Andre'
>
>
> On 19 December
Thanks, Nitin.
More specifically, what about empty arrays? How can I convince Hive that an
empty array -- array() -- has a type of array?
Thanks again!
Andre'
On 19 December 2013 17:22, Nitin Pawar wrote:
> From what I know, there is no direct way to typecast an array directly as of
> now. If i
From what I know, there is no direct way to typecast an array directly as of
now. If it's added now, then well and good.
I normally typecast the individual elements of the array and then join them
back (mostly via a UDF).
I will see if I can find that code
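One way to sketch that element-wise approach without a custom UDF is to explode the array, cast each element, and rebuild it with collect_list (available in newer Hive releases; the table, key, and column names here are hypothetical):

```sql
-- Hypothetical table t(id, arr) where arr is array<string> holding numbers.
-- Each element is cast individually, then the array is reassembled per row.
SELECT t.id,
       collect_list(CAST(e.elem AS BIGINT)) AS arr_as_bigint
FROM t
LATERAL VIEW explode(t.arr) e AS elem
GROUP BY t.id;
```

Note this drops rows whose array is empty; LATERAL VIEW OUTER can be used to keep them, at the cost of a NULL element to filter out.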
On Thu, Dec 19, 2013 at 11:47 AM, Andre Araujo wr
Hi, all,
Is there a way to typecast arrays in Hive? What I want is that, for a
specific select where I specify an empty array as the value for one of
the columns, Hive treats the empty array as array, instead of
the default, which is array.
I searched the documentation but couldn't find an
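The mismatch described above typically surfaces in a UNION-style query, since array() gets a default element type that may not match the other branch. A sketch of the failing shape, with a workaround sometimes suggested (all table names are hypothetical, and the workaround produces a one-element array containing NULL rather than a truly empty array):

```sql
-- array() takes Hive's default element type, which may not match the
-- array<bigint> produced by the first branch, so this can fail to type-check.
SELECT array(1, 2) AS a FROM src
UNION ALL
SELECT array() AS a FROM src;

-- Workaround: seed the element type explicitly. This yields array<bigint>
-- (with a single NULL element, not an empty array), fixing the declared type.
SELECT array(CAST(NULL AS BIGINT)) AS a FROM src;
```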
Well, it's back again. So while max_connections definitely should be
increased for me anyway, it isn't the root cause. It's currently set to 500,
which is plenty. Turning the debugging log level up to '2' on the
MySQL/Percona server, it's now reporting dropped packets. It doesn't say why,
but we're still digg
Should hive-on-tez-conf.txt be added to the wiki, or is it not soup yet?
-- Lefty
On Mon, Dec 16, 2013 at 10:25 AM, Cheolsoo Park wrote:
> Closing the loop. We identified the issue with help from the Tez team. It
> was a misconfigured mapreduce.reduce.cpu.vcores setting that caused problems.
>
> If anyo
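For anyone checking the setting mentioned above, it can be inspected or pinned from a Hive session; the value below is only an illustration, not a recommendation:

```sql
-- Illustrative value: request one vcore per reduce task for this session.
set mapreduce.reduce.cpu.vcores=1;
```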
It may be worth looking in webhcat.log and using the job tracker UI.
On Mon, Dec 2, 2013 at 6:21 AM, Jonathan Hodges wrote:
> Hi,
>
> I have set up the WebHCat that is bundled with Hive 0.11.0. I am able to kick
> off MapReduce jobs with the REST API successfully. However, I am having
> some issues with
Thanks for the response and suggestion, Rick.
That certainly seems like a plausible solution, given that we are using the
MySQL server as a metastore for more than a few dev and prod Hive
instances. I think also that with Hive v0.12 the connection-pooling middleware
is different (BoneCP), so who knows w
Never mind, replacing 'schema.literal' with 'avro.schema.literal' worked for
us!
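For anyone hitting the same issue, the property goes in TBLPROPERTIES under its full name; a minimal sketch (the table name and Avro schema here are hypothetical):

```sql
-- Hypothetical Avro-backed table; the key point is the full property name
-- 'avro.schema.literal' in TBLPROPERTIES.
CREATE EXTERNAL TABLE events
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS
  INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
TBLPROPERTIES ('avro.schema.literal'='{
  "type": "record", "name": "Event",
  "fields": [{"name": "id", "type": "long"}]
}');
```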
On Mon, Dec 16, 2013 at 4:01 PM, Sagar Mehta wrote:
> Hi Guys,
>
> We have an Avro-backed production table bucketed by a column, and it works
> fine with the old Hive 0.7 on cdh3u2.
>
> Now we have moved to Hive 0.
Thanks, Dima, for pointing out the important info.
I'm doing a similar migration from *Hive 0.9 to Hive 0.12* and am looking for the
steps that need to be performed as part of the migration/upgrade.
Can you please share the steps that you followed or referred to?
Also, I'm looking for what else needs
Dear hive users,
I recently upgraded my single-node cluster from CDH 4.5 to CDH 5 Beta 1, which
updated Hive from 0.10 to 0.11 and HBase from 0.94.6 to 0.95.2.
Back in CDH 4.5 I created an external table in Hive that is stored in HBase.
Querying that table from Hive worked fine; when I added the
Hi Nirmal,
I recently performed a similar upgrade, from Hive 0.7 to Hive 0.10.
It seems that as some bugs in the 0.7 version are solved in newer releases,
some new bugs appear.
I'd recommend setting up a test environment running the newer version of Hive
and testing the SQL you are running