[ https://issues.apache.org/jira/browse/HIVE-2928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13251965#comment-13251965 ]

Mithun Radhakrishnan commented on HIVE-2928:
--------------------------------------------

@Carl: Thanks. I'd seen that thread before, but I hadn't actually tested it 
out myself. I just did, and here's what I found:

1. If the complete schema is created a priori (using the 
hive-schema-0.9.0.oracle.sql script in the attached patch), then Hive works 
with Oracle. (This is with "datanucleus.validateColumns = false".)
2. What's neat is that "datanucleus.autoCreateSchema = true" doesn't mess this 
up, because the schema is already completely constructed. The exception in the 
original description resulted from schema differences that JDO attempted to 
resolve.

(For the record, DataNucleus does recommend turning both flags off for the 
sake of performance; they're meant to be used only for the first start-up.)
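
For reference, here's a minimal sketch of the relevant hive-site.xml entries 
for an Oracle-backed metastore, with both DataNucleus flags off per the 
recommendation above. The host, port, and service name in the connection URL 
are placeholders, not values from the patch:

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <!-- placeholder host/port/service name; substitute your own -->
    <value>jdbc:oracle:thin:@//dbhost:1521/hivedb</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>oracle.jdbc.OracleDriver</value>
  </property>
  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>false</value>
  </property>
  <property>
    <name>datanucleus.validateColumns</name>
    <value>false</value>
  </property>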

The upshot of this is that any changes to the schema would have to be applied 
using a migration script for Oracle, and won't be handled automatically by the 
JDO library; a sketch of what that means in practice follows.
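
That is, each future schema change would ship as hand-written Oracle DDL. A 
rough illustration, with a made-up column name (SOME_NEW_COLUMN is 
hypothetical, purely for the example):

  -- Hypothetical migration step for some future schema change.
  -- The column name is illustrative only.
  ALTER TABLE TBLS ADD (SOME_NEW_COLUMN NUMBER(10));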

I've modified the attached patch to remove the proposed package.jdo change and 
keep just the Oracle schema SQL script.

Thanks a bunch.

> Support for Oracle-backed Hive-Metastore ("longvarchar" to "clob" in 
> package.jdo)
> ---------------------------------------------------------------------------------
>
>                 Key: HIVE-2928
>                 URL: https://issues.apache.org/jira/browse/HIVE-2928
>             Project: Hive
>          Issue Type: New Feature
>          Components: Metastore
>    Affects Versions: 0.9.0
>            Reporter: Mithun Radhakrishnan
>         Attachments: HIVE-2928.patch
>
>
> I'm trying to get the Hive metastore to work against an Oracle backend. 
> There's a change to Hive's package.jdo that I'd like advice/comments on.
> One sticking point in working with Oracle has been the TBLS table (MTable) 
> and its two LONGVARCHAR columns (VIEW_ORIGINAL_TEXT and VIEW_EXPANDED_TEXT). 
> For legacy reasons, Oracle doesn't support more than one LONGVARCHAR column 
> per table, and prefers that one use CLOBs instead. If one switches to CLOB 
> columns with no modification to Hive's package.jdo, one sees the following 
> exception (a rough DDL sketch follows the quoted error):
> <quote>
> Incompatible data type for column TBLS.VIEW_EXPANDED_TEXT : was CLOB
> (datastore), but type expected was LONGVARCHAR (metadata). Please check that
> the type in the datastore and the type specified in the MetaData are
> consistent.
> org.datanucleus.store.rdbms.exceptions.IncompatibleDataTypeException:
> Incompatible data type for column TBLS.VIEW_EXPANDED_TEXT : was CLOB
> (datastore), but type expected was LONGVARCHAR (metadata). Please check that
> the type in the datastore and the type specified in the MetaData are
> consistent.
>         at
> org.datanucleus.store.rdbms.table.ColumnImpl.validate(ColumnImpl.java:521)
>         at
> org.datanucleus.store.rdbms.table.TableImpl.validateColumns(TableImpl.java:2
> </quote>
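> (For illustration, under the CLOB approach the two view-text columns in the 
> Oracle schema script would look roughly like this; all other TBLS columns 
> are omitted, so this is a sketch rather than the exact script:
> CREATE TABLE TBLS (
>   TBL_ID NUMBER NOT NULL,
>   VIEW_EXPANDED_TEXT CLOB NULL,
>   VIEW_ORIGINAL_TEXT CLOB NULL
>   -- remaining TBLS columns omitted from this sketch
> );
> )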
> But if one rebuilds Hive with the package.jdo changed to use CLOBs instead of 
> LONGVARCHARs, things look promising:
> 1. The exception no longer occurs. Things seem to work with Oracle. (I've yet 
> to scale-test.)
> 2. The modified Hive libraries work as-is with pre-existing MySQL metastores. 
> Migrating data isn't a worry.
> 3. The unit tests seem to run through.
> Would there be opposition to changing the package.jdo's LONGVARCHAR 
> references to CLOB, if this works with both MySQL and Oracle? A sketch of 
> the proposed mapping change follows.
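> (For concreteness, the change amounts to swapping the jdbc-type on the two 
> view-text fields in package.jdo; roughly, leaving any surrounding attributes 
> as they are:
> <field name="viewOriginalText">
>   <column name="VIEW_ORIGINAL_TEXT" jdbc-type="CLOB"/>
> </field>
> and likewise CLOB in place of LONGVARCHAR for viewExpandedText.)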
> Mithun
> P.S. I also have a working hive-schema-0.9.0-oracle.sql script that I'm 
> testing, for the related issue of creating the required tables in Oracle.
