Hive doesn't work with ExecuteSQL, as its JDBC driver does not support
all of the JDBC API calls made by ExecuteSQL / PutSQL. However, I am
working on a Hive NAR that includes ExecuteHiveQL and PutHiveQL
processors (https://issues.apache.org/jira/browse/NIFI-981); there is
a prototype pull request on GitHub
(https://github.com/apache/nifi/pull/372) if you'd like to try them
out. I am currently adding Kerberos support and finishing up, and then
I will issue a new PR for the processors.

To use ExecuteScript in the meantime, you've got a couple of options
after downloading the driver and all its dependencies (or better yet,
the single "fat JAR"):

1) Add the location of the JAR(s) to the Module Directory property of
ExecuteScript. You will have to create your own connection; if you're
using Groovy, its Sql facility is quite nice
(http://www.schibsted.pl/2015/06/groovy-sql-an-easy-way-to-database-scripting/).
There's a sketch of this approach after option 2 below.

2) Create a Database Connection Pool controller service configured to
point at the JAR(s) and use the Hive driver class
(org.apache.hive.jdbc.HiveDriver). Then you can get a connection from
that service and continue on with Groovy SQL (for example); see the
second sketch below. I have a blog post about this:
http://funnifi.blogspot.com/2016/04/sql-in-nifi-with-executescript.html
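
For option 1, here is a minimal Groovy sketch of an ExecuteScript body.
The JDBC URL, the code_lookup table, and the lookup.code attribute are
assumptions for illustration, so adjust them for your environment:

import groovy.sql.Sql

// Open a connection with the Hive driver (the driver JAR must be on the
// Module Directory path). The URL/user/password here are assumptions.
def sql = Sql.newInstance('jdbc:hive2://localhost:10000/default', '', '',
        'org.apache.hive.jdbc.HiveDriver')
try {
    def flowFile = session.get()
    if (flowFile != null) {
        // Translate the code number from a flow file attribute into a
        // readable name. 'lookup.code' and code_lookup are hypothetical.
        def code = flowFile.getAttribute('lookup.code')
        def row = sql.firstRow('SELECT name FROM code_lookup WHERE code = ?', [code])
        flowFile = session.putAttribute(flowFile, 'lookup.name', row?.name ?: 'UNKNOWN')
        session.transfer(flowFile, REL_SUCCESS)
    }
} finally {
    sql.close()
}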
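
For option 2, the script can find the pool via the controller service
lookup and borrow a connection from it (this is the pattern from the
blog post above). The service name 'HiveConnectionPool' and the query
are assumptions:

import groovy.sql.Sql
import org.apache.nifi.dbcp.DBCPService

// Locate the DBCP controller service by its configured name.
def lookup = context.controllerServiceLookup
def dbcpId = lookup.getControllerServiceIdentifiers(DBCPService).find {
    lookup.getControllerServiceName(it) == 'HiveConnectionPool'
}
def conn = lookup.getControllerService(dbcpId).getConnection()
def sql = new Sql(conn)
try {
    // 'code_lookup' is a hypothetical table, just to show the pattern
    sql.eachRow('SELECT code, name FROM code_lookup') { row ->
        log.info("${row.code} -> ${row.name}")
    }
} finally {
    conn.close() // returns the connection to the pool
}

Note that closing a pooled connection just returns it to the pool
rather than tearing it down.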

Regards,
Matt

On Tue, Apr 26, 2016 at 8:07 AM, Pierre Villard
<pierre.villard...@gmail.com> wrote:
> Hi Mike,
>
> I have never tried it, but using the JDBC client you should be able to
> query your Hive table with the ExecuteSQL processor.
>
> Hope that helps,
> Pierre
>
>
> 2016-04-26 13:53 GMT+02:00 Mike Harding <mikeyhard...@gmail.com>:
>>
>> Hi All,
>>
>> I have a requirement to access a lookup Hive table to translate a code
>> number in a FlowFile to a readable name. I'm just unsure how trivial it is
>> to connect to the DB from an ExecuteScript processor.
>>
>> NiFi and HiveServer2 sit on the same node, so I'm wondering if it's
>> possible to use HiveServer2's JDBC client
>> (https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-JDBC)
>> without any issues?
>>
>> Thanks in advance,
>> Mike
>
>