Here are the create table and alter table statements:
CREATE external TABLE if not exists mytable (
bc string
,src_spaceid string
,srcpvid string
,dstpvid string
,dst_spaceid string
,page_params map<string, string>
,clickinfo map<string, string>
,viewinfo array<map<string, string>>
)
PARTITIONED BY ( datestamp string
Great, glad to hear you figured it out. I think you might be able to achieve
the same effect by specifying --auxpath when you start the Thrift hiveserver
(similar to the instructions for running CLI). That way JDBC client programs
won't need to do anything special.
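The suggestion above might look like the following when starting the server; this is a sketch, and the jar path is a placeholder for your own deployment (flag handling can vary between Hive versions):

```shell
# Start the Thrift Hive server with the handler jar on the aux path,
# so JDBC clients do not have to issue "add jar" themselves.
# /path/to/hive_hbase-handler.jar is a placeholder.
bin/hive --auxpath /path/to/hive_hbase-handler.jar --service hiveserver
```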
JVS
On Jun 17, 2010, at 5:
Thanks Guys,
I found a workaround for the size limitation. It seems MySQL supports up
to 65,535 bytes for VARCHAR columns. I manually modified the column
property without the patch code, and it seems to ignore the size limit.
-ray
On Wed, Jun 16, 2010 at 8:02 PM, Carl Steinbach wrote:
Hmm... can you send the exact command, and also the create table statement
for this table?
Ashish
From: Pradeep Kamath [mailto:prade...@yahoo-inc.com]
Sent: Thursday, June 17, 2010 9:09 AM
To: hive-user@hadoop.apache.org
Subject: RE: alter table add partition error
Sorry - that was a cut-paste error - I don't have the action part - so I
am specifying key-value pairs. Since what I am trying to do seems like a
basic operation, I am wondering if it's something to do with my SerDe -
unfortunately the error I see gives me no clue of what could be wrong -
any help would be appreciated.
I am working with complex Hive queries and moderate amounts of data. I am
running into a problem where my JDBC connection is timing out before the
Hive answer is returned. The timeout seems to occur at about 3 minutes, but
the query takes at least 5. I'm running Hadoop 0.20.1, Hive 0.4.0, and I'
This problem is solved.
The JDBC client needs to execute an "add jar" operation to add
"hive_hbase-handler.jar", which the hive-server needs in order to run the
map/reduce job.
The demo hive-hbase integration client code is below:
import java.sql.SQLException;
import java.sql.Connection;
import java.sql.ResultSet;
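A fuller version of that demo client might look like the sketch below. The JDBC URL, jar path, and table name are assumptions for illustration; the driver class is the old HiveServer1 driver that matches the Hive versions discussed in this thread:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveHBaseClient {
    // Placeholder path; point this at your hive_hbase-handler.jar.
    static final String JAR_PATH = "/path/to/hive_hbase-handler.jar";

    // Build the "add jar" command the server needs before running
    // a map/reduce job against the HBase-backed table.
    static String addJarCommand(String jarPath) {
        return "add jar " + jarPath;
    }

    public static void main(String[] args) throws Exception {
        // HiveServer1 driver (Hive 0.x era).
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:hive://localhost:10000/default", "", "");
        Statement stmt = con.createStatement();
        // Ship the handler jar so the server-side job can load it.
        stmt.execute(addJarCommand(JAR_PATH));
        // "hbase_table" is a hypothetical HBase-backed Hive table.
        ResultSet rs = stmt.executeQuery(
                "select * from hbase_table limit 10");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        con.close();
    }
}
```

If the server is started with --auxpath instead, the addJarCommand step can be dropped and the rest of the client stays the same.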