[ https://issues.apache.org/jira/browse/FLINK-10556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16669576#comment-16669576 ]

ASF GitHub Bot commented on FLINK-10556:
----------------------------------------

bowenli86 opened a new pull request #6969: [FLINK-10556][Table API & SQL]Add 
APIs to ExternalCatalog, CrudExternalCatalog and InMemoryCrudExternalCatalog 
for views and UDFs
URL: https://github.com/apache/flink/pull/6969
 
 
   ## What is the purpose of the change
   
   Currently, Flink's external catalog has APIs for tables only. However, views 
and UDFs are also common objects in a catalog.
   
   This change adds initial APIs and in-memory implementations for views and 
UDFs to the external catalog. These APIs are required when Flink views and UDFs 
are stored in external persistent storage, and they will evolve as we make 
progress on the Flink-Hive integration.
   
   ## Brief change log
   
   - added initial APIs for views and UDFs in `ExternalCatalog` and 
`CrudExternalCatalog`
   - added in-memory implementations in `InMemoryCrudExternalCatalog`
   - added relevant tests
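   
   As a rough illustration of the kind of CRUD methods described above, here is 
a hedged sketch of an in-memory catalog holding views and UDFs. All class and 
method names here (`InMemoryCatalogSketch`, `createView`, `createFunction`, 
etc.) are illustrative assumptions, not the actual signatures added in this PR:
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   import java.util.Set;
   
   // Illustrative sketch only: an in-memory catalog with CRUD-style methods
   // for views and UDFs, in the spirit of the APIs this PR describes.
   public class InMemoryCatalogSketch {
       // view name -> the view's defining SQL text
       private final Map<String, String> views = new HashMap<>();
       // function name -> fully qualified implementing class name
       private final Map<String, String> functions = new HashMap<>();
   
       public void createView(String name, String query, boolean ignoreIfExists) {
           if (views.containsKey(name) && !ignoreIfExists) {
               throw new IllegalArgumentException("View already exists: " + name);
           }
           views.putIfAbsent(name, query);
       }
   
       public String getView(String name) {
           String query = views.get(name);
           if (query == null) {
               throw new IllegalArgumentException("View not found: " + name);
           }
           return query;
       }
   
       public void dropView(String name, boolean ignoreIfNotExists) {
           if (views.remove(name) == null && !ignoreIfNotExists) {
               throw new IllegalArgumentException("View not found: " + name);
           }
       }
   
       public void createFunction(String name, String className, boolean ignoreIfExists) {
           if (functions.containsKey(name) && !ignoreIfExists) {
               throw new IllegalArgumentException("Function already exists: " + name);
           }
           functions.putIfAbsent(name, className);
       }
   
       public Set<String> listViews() {
           return views.keySet();
       }
   
       public Set<String> listFunctions() {
           return functions.keySet();
       }
   }
   ```
   
   The `ignoreIfExists`/`ignoreIfNotExists` flags mirror the convention already 
used by the table APIs, so view and UDF operations would compose the same way.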
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
     - Added unit tests in `InMemoryExternalCatalogTest`
   
   ## Does this pull request potentially affect one of the following parts:
   
   none
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (yes)
     - If yes, how is the feature documented? (JavaDocs)
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Integration with Apache Hive
> ----------------------------
>
>                 Key: FLINK-10556
>                 URL: https://issues.apache.org/jira/browse/FLINK-10556
>             Project: Flink
>          Issue Type: New Feature
>          Components: Batch Connectors and Input/Output Formats, SQL Client, 
> Table API & SQL
>    Affects Versions: 1.6.0
>            Reporter: Xuefu Zhang
>            Assignee: Xuefu Zhang
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: Flink-Hive Metastore Connectivity Design.pdf, Proposal_ 
> Integrate Flink with Hive Ecosystem.pdf
>
>
> This is an umbrella JIRA tracking all enhancement and issues related to 
> integrating Flink with Hive ecosystem. This is an outcome of a discussion in 
> the community, and thanks go to everyone that provided feedback and interest.
> Specifically, we'd like to see the following features and capabilities 
> immediately in Flink:
> # Metadata interoperability
> # Data interoperability
> # Data type compatibility
> # Hive UDF support
> # DDL/DML/Query language compatibility
> For a longer term, we'd also like to add or improve:
> # Compatible SQL service, client tools, JDBC/ODBC drivers
> # Better task failure tolerance and task scheduling
> # Support other user customizations in Hive (storage handlers, serdes, etc).
> I will provide more details regarding the proposal in a doc shortly. Design 
> doc, if deemed necessary, will be provided in each related sub tasks under 
> this JIRA.
> Feedback and contributions are greatly welcome!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
