Hi Guys,
That's a big head start. It looks like I need to:
1) Configure Hive to use Derby as the metastore database
2) Launch the Hive Thrift service with bin/hive --service hiveserver
3) Use the Thrift API to send queries from remote hosts
Am I missing anything?
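For step 1, a minimal hive-site.xml along these lines should point the metastore at an embedded Derby database (these are the standard Hive metastore properties; the databaseName path is just an example):

```xml
<configuration>
  <!-- Embedded Derby metastore; creates ./metastore_db on first use -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.apache.derby.jdbc.EmbeddedDriver</value>
  </property>
</configuration>
```

With that in place, bin/hive --service hiveserver (step 2) starts the Thrift service, listening on port 10000 by default.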
Thanks!
> I attached a HiveLet (Made up term)
That's a cool name!
Hive supports both a Thrift service and a partial JDBC interface.
Check out the sample usage in service/src/test and jdbc/src/test. I can help
you set up the Thrift service if you have problems.
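For the Thrift route, a remote client might look like the sketch below. The thrift transport/protocol classes and the generated hive_service.ThriftHive bindings are assumptions based on what the Hive build produces (the real samples live in service/src/test); hiveserver_endpoint is just an illustrative helper.

```python
DEFAULT_HIVESERVER_PORT = 10000  # hiveserver's default Thrift port

def hiveserver_endpoint(spec):
    """Parse 'host[:port]' into (host, port) for the Thrift socket."""
    host, _, port = spec.partition(":")
    return host, int(port) if port else DEFAULT_HIVESERVER_PORT

def run_query(spec, query):
    # Imports kept local so the parsing helper works without Thrift installed.
    from thrift.transport import TSocket, TTransport
    from thrift.protocol import TBinaryProtocol
    from hive_service import ThriftHive  # generated Hive bindings (assumed name)

    host, port = hiveserver_endpoint(spec)
    transport = TTransport.TBufferedTransport(TSocket.TSocket(host, port))
    client = ThriftHive.Client(TBinaryProtocol.TBinaryProtocol(transport))
    transport.open()
    try:
        client.execute(query)     # one-shot: run the statement...
        return client.fetchAll()  # ...then pull all result rows back
    finally:
        transport.close()
```

This mirrors the "one shot" pattern described elsewhere in the thread: open a connection, execute a single query, fetch everything, and close.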
On 2/19/09 2:16 PM, "Edward Capriolo" wrote:
> The best way to answer this is that all hadoop com
The best way to answer this is that all Hadoop components work
remotely, assuming you have the proper configuration and library files
(the same ones used on the remote cluster).
I attached a HiveLet (made-up term). It was my first API-testing
program. It is more or less a 'one shot': run the query and
Hi,
How do you execute Hive queries programmatically and/or remotely? I'm still
new to Hadoop and Hive, so I may be missing something obvious.
I recognize that a PDO/DBI/JDBC-style connection doesn't make sense with
Hive. Nor does running queries from a web request.
I'd like to do something like: