IONS.PART_NAME, which is an indexed column. Try again, undoing that
>> line of the patch first.
>>
>> JVS
>>
>> From: Ray Duong [ray.du...@gmail.com]
>> Sent: Wednesday, June 16, 2010 5:24 PM
>> To: hive-user@hadoop.apache.org
>> Subject: Re: Hive-Hbase with large number of columns
Hi,
I applied the patch HIVE-1364 and rebuilt the metastore. I was able to
create an external table in Hive with a large number of columns (up to
4000 bytes).
Now when I try to drop the external table I get the following error
message. Is there another file that I need to modify in order to dro
Column-level properties are attractive for other reasons, but I don't think we
should do it as a workaround for underlying limits. I've noted in JIRA that I
think a LOB would be more appropriate here.
Note that while you're waiting for a permanent resolution, you can use ALTER
TABLE on your me
Yes, I think I might have to do that. I was trying to avoid multiple Hbase
scans with one big table.
BTW, would it be better to store the column SERDE properties for each
column versus at the table level, to avoid the 767- or 4000-byte limitation?
Thanks again,
-ray
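To get a rough sense of what per-column storage would change, here is a hypothetical sketch (the column names are invented for illustration, and this is not how Hive actually stores properties): table-level storage concatenates every mapping entry into one metastore value, while per-column storage would keep each stored value small.

```python
# Hypothetical illustration of the per-column vs. table-level question:
# a table-level property holds one long concatenated value, while per-column
# properties would hold many short ones, each far below a 767-byte VARCHAR.
cols = [f"cf:long_column_qualifier_{i:03d}" for i in range(100)]

table_level_value = ",".join(cols)              # one big string in one row
per_column_longest = max(len(c) for c in cols)  # largest single value instead

print(len(table_level_value))  # 2899 -- far over 767
print(per_column_longest)      # 28 -- trivially small per row
```

The difference is only in where the bytes live, which is why the reply below treats per-column properties as a separate feature rather than a fix for the underlying limit.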
On Tue, Jun 15, 2010 at 5:04 PM, Ray Duong wrote:
> Thanks for all the help.
>
> -ray
On Tue, Jun 15, 2010 at 1:26 PM, Carl Steinbach wrote:
Hi Ray,
4000 bytes is the maximum VARCHAR size allowed on Oracle 9i/10g/11g. As far
as I know this is the smallest maximum VARCHAR size out of the databases we
currently try to support (MySQL, Oracle, Derby, etc).
Carl
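Carl's point can be sketched numerically: the size Hive can portably rely on is the smallest maximum VARCHAR across the metastore databases it supports. Only the Oracle figure (4000) comes from this thread; the MySQL and Derby numbers below are commonly cited maximums, included here as assumptions for illustration.

```python
# Illustrative sketch: the portable metastore column size is bounded by the
# smallest maximum VARCHAR among the supported databases. The Oracle figure
# is from this thread; the others are assumed commonly cited maximums.
VARCHAR_LIMITS = {
    "Oracle 9i/10g/11g": 4000,
    "MySQL 5.0.3+": 65535,   # subject to the 64 KB row-size limit in practice
    "Derby 10.x": 32672,
}

portable_limit = min(VARCHAR_LIMITS.values())
print(portable_limit)  # 4000 -- the smallest maximum wins
```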
On Tue, Jun 15, 2010 at 1:15 PM, Ray Duong wrote:
Thanks John/Carl,
Yep, there seems to be a limit at 767 bytes, and I see the patch
HIVE-1364 sets it to 4000 bytes. I'm using Derby; do you know if there
is a limit beyond 4000 bytes?
-ray
Error:
Caused by: ERROR 22001: A truncation error was encountered trying to shrink
VARCHAR 'se
Hi Ray,
There is currently a 767 byte size limit on SERDEPROPERTIES values (see
http://issues.apache.org/jira/browse/HIVE-1364). It's possible that you're
bumping into this limitation (assuming you abbreviated the column names in
your example).
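To see why a wide HBase table trips this limit, note that the whole column mapping is stored as a single comma-separated SERDEPROPERTIES value, so what matters is its total byte length rather than a fixed column count. A rough sketch (the column names are invented, and the helper below is not part of Hive):

```python
def mapping_length(columns, family="cf"):
    """Byte length of a hypothetical hbase.columns.mapping value:
    ':key' plus one 'family:qualifier' entry per column, comma-separated."""
    entries = [":key"] + [f"{family}:{c}" for c in columns]
    return len(",".join(entries).encode("utf-8"))

# Fifteen columns with ~60-character qualifiers already blow past 767 bytes,
# so the failure point depends on name length, not on the number of columns.
wide = [f"some_fairly_long_hbase_column_qualifier_number_{i:02d}_padding_x"
        for i in range(15)]
print(mapping_length(wide))        # 949
print(mapping_length(wide) > 767)  # True
```

This is consistent with hitting a wall at around 10 columns: with long enough qualifiers even fewer columns can exceed 767 bytes, while short names fit many more.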
On Tue, Jun 15, 2010 at 12:03 PM, John Sichi wrote:
That exception is coming from the metastore (trying to write the table
definition). Could you dig down into the Hive logs to see if you can get the
underlying cause?
You can get the logs to spew on console by adding "-hiveconf
hive.root.logger=DEBUG,console" to your Hive CLI invocation.
JVS
Hi,
I'm trying to map an Hbase table in Hive that contains a large number of
columns. Since Hbase is designed for wide tables, does the Hive/Hbase
integration have any set limitation on the number of columns it can map in
one table? I seem to hit a limit at 10 columns.
Thanks,
-ray
create external