Re: [ANNOUNCE] Flink Table Store Joins Apache Incubator as Apache Paimon(incubating)

2023-03-27, yu zelin
Congratulations!

Best,
Yu Zelin

> On Mar 27, 2023, at 17:23, Yu Li wrote:
> 
> Dear Flinkers,
> 
> As you may have noticed, we are pleased to announce that Flink Table Store 
> has joined the Apache Incubator as a separate project called Apache 
> Paimon(incubating) [1] [2] [3]. The new project still aims at building a 
> streaming data lake platform for high-speed data ingestion, change data 
> tracking and efficient real-time analytics, with the vision of supporting a 
> larger ecosystem and establishing a vibrant and neutral open source community.
> 
> We would like to thank everyone for their great support and efforts for the 
> Flink Table Store project, and warmly welcome everyone to join the 
> development and activities of the new project. Apache Flink will continue to 
> be one of the first-class citizens supported by Paimon, and we believe that 
> the Flink and Paimon communities will maintain close cooperation.
> 
> Best Regards,
> Yu (on behalf of the Apache Flink PMC and Apache Paimon PPMC)
> 
> 
> [1] https://paimon.apache.org/
> [2] https://github.com/apache/incubator-paimon
> [3] https://cwiki.apache.org/confluence/display/INCUBATOR/PaimonProposal



Re: 【SQL Gateway - HiveServer2】UnsupportedOperationException: Unrecognized TGetInfoType value: CLI_ODBC_KEYWORDS.

2022-11-02, yu zelin
Hi,
  Sorry for the slow reply. I have debugged this and found why the error 
occurs. As a quick workaround, please try using Hive2's Beeline to connect to 
the SQL Gateway.

  The 'CLI_ODBC_KEYWORDS' info type is new in Hive3, and during the 
initialization phase of a connection Beeline sends a GetInfo request with this 
type. Because the gateway doesn't handle it, the returned value is null, but 
Hive requires this field to be set. So an error occurs, and Hive closes the 
I/O after finding the value is null (which is why the log shows 
java.net.SocketException: Socket closed); after that the client can no longer 
communicate with the gateway. Hive2's Beeline doesn't send this request, so 
using it to submit SQL against a Hive3 environment should work.

  We will fix this later.
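For reference, the fix pattern can be sketched as follows. This is a minimal, 
self-contained illustration, not the actual HiveServer2Endpoint code; the 
class, enum, and value names are placeholders standing in for the 
Thrift-generated types. The point is that GetInfo must always set the 
required infoValue field, so unrecognized TGetInfoType values should map to a 
harmless default rather than throw:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: these are NOT the real Thrift-generated classes.
public class GetInfoSketch {

    // Stand-in for TGetInfoType; the real enum has many more members.
    public enum InfoType { CLI_SERVER_NAME, CLI_DBMS_NAME, CLI_ODBC_KEYWORDS }

    private static final Map<InfoType, String> KNOWN = new HashMap<>();
    static {
        KNOWN.put(InfoType.CLI_SERVER_NAME, "Flink SQL Gateway");
        KNOWN.put(InfoType.CLI_DBMS_NAME, "Apache Flink");
    }

    /** Never returns null: unknown info types fall back to an empty string,
     *  so the Thrift response's required 'infoValue' field is always set
     *  and the connection stays open. */
    public static String getInfo(InfoType type) {
        return KNOWN.getOrDefault(type, "");
    }

    public static void main(String[] args) {
        // Hive3 Beeline probes CLI_ODBC_KEYWORDS during connection setup;
        // with the fallback this probe no longer kills the connection.
        String keywords = getInfo(InfoType.CLI_ODBC_KEYWORDS);
        System.out.println("infoValue set: " + (keywords != null));
    }
}
```

The key design choice is returning a default rather than throwing: the Thrift 
struct validation (the "Required field 'infoValue' is unset" error in the log 
below) fires on the server side before the response is written, which is what 
tears down the socket.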
Best,
yuzelin
  

> On Nov 1, 2022, at 14:52, QiZhu Chan wrote:
> 
> Hi team,
>   I started the SQL Gateway with the HiveServer2 endpoint and then 
> submitted SQL with Apache Hive Beeline, but I got the following exception:
> 
> java.lang.UnsupportedOperationException: Unrecognized TGetInfoType value: 
> CLI_ODBC_KEYWORDS.
>   at 
> org.apache.flink.table.endpoint.hive.HiveServer2Endpoint.GetInfo(HiveServer2Endpoint.java:371)
>  [flink-sql-connector-hive-3.1.2_2.12-1.16.0.jar:1.16.0]
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$GetInfo.getResult(TCLIService.java:1537)
>  [flink-sql-connector-hive-3.1.2_2.12-1.16.0.jar:1.16.0]
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$GetInfo.getResult(TCLIService.java:1522)
>  [flink-sql-connector-hive-3.1.2_2.12-1.16.0.jar:1.16.0]
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
> [flink-sql-connector-hive-3.1.2_2.12-1.16.0.jar:1.16.0]
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> [flink-sql-connector-hive-3.1.2_2.12-1.16.0.jar:1.16.0]
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>  [flink-sql-connector-hive-3.1.2_2.12-1.16.0.jar:1.16.0]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]
>   at java.lang.Thread.run(Thread.java:834) [?:?]
> 2022-11-01 13:55:33,885 ERROR org.apache.thrift.server.TThreadPoolServer  
>  [] - Thrift error occurred during processing of message.
> org.apache.thrift.protocol.TProtocolException: Required field 'infoValue' is 
> unset! Struct:TGetInfoResp(status:TStatus(statusCode:ERROR_STATUS, 
> infoMessages:[*java.lang.UnsupportedOperationException:Unrecognized 
> TGetInfoType value: CLI_ODBC_KEYWORDS.:9:8, 
> org.apache.flink.table.endpoint.hive.HiveServer2Endpoint:GetInfo:HiveServer2Endpoint.java:371,
>  
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$GetInfo:getResult:TCLIService.java:1537,
>  
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$GetInfo:getResult:TCLIService.java:1522,
>  org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39, 
> org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39, 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:286,
>  
> java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1128,
>  
> java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:628,
>  java.lang.Thread:run:Thread.java:834], errorMessage:Unrecognized 
> TGetInfoType value: CLI_ODBC_KEYWORDS.), infoValue:null)
>   at 
> org.apache.hive.service.rpc.thrift.TGetInfoResp.validate(TGetInfoResp.java:379)
>  ~[flink-sql-connector-hive-3.1.2_2.12-1.16.0.jar:1.16.0]
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetInfo_result.validate(TCLIService.java:5228)
>  ~[flink-sql-connector-hive-3.1.2_2.12-1.16.0.jar:1.16.0]
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetInfo_result$GetInfo_resultStandardScheme.write(TCLIService.java:5285)
>  ~[flink-sql-connector-hive-3.1.2_2.12-1.16.0.jar:1.16.0]
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetInfo_result$GetInfo_resultStandardScheme.write(TCLIService.java:5254)
>  ~[flink-sql-connector-hive-3.1.2_2.12-1.16.0.jar:1.16.0]
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$GetInfo_result.write(TCLIService.java:5205)
>  ~[flink-sql-connector-hive-3.1.2_2.12-1.16.0.jar:1.16.0]
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:53) 
> ~[flink-sql-connector-hive-3.1.2_2.12-1.16.0.jar:1.16.0]
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
> ~[flink-sql-connector-hive-3.1.2_2.12-1.16.0.jar:1.16.0]
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>  [flink-sql-connector-hive-3.1.2_2.12-1.16.0.jar:1.16.0]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
>

Re: With sql-client-defaults.yaml removed from the Flink SQL Client, where should the previously pre-configured catalogs be defined?

2022-10-31, yu zelin
Hi,
The -i option Leonard mentioned should meet your needs. In the initialization 
SQL file you can SET/RESET properties, run CREATE/DROP statements, and so on.
For more information, see:
https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/dev/table/sqlclient/#sql-client-startup-options
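As an example, a minimal initialization file might look like the following. 
The catalog name and options below are hypothetical placeholders; adapt them 
to your environment:

```sql
-- init.sql: replaces the catalogs section of the old sql-client-defaults.yaml
-- (catalog name and options are placeholders)
CREATE CATALOG my_hive WITH (
  'type' = 'hive',
  'hive-conf-dir' = '/opt/hive/conf'
);
USE CATALOG my_hive;
SET 'sql-client.execution.result-mode' = 'tableau';
```

Start the client with ./bin/sql-client.sh -i init.sql and these statements 
run before the interactive session begins.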
 


Best,
yuzelin

> On Oct 31, 2022, at 16:57, Leonard Xu wrote:
> 
> Hi,
> 
> I recall there is a -i option that specifies an initialization SQL file; put your initialization SQL statements into that file and it should work.
> 
> Best,
> Leonard
> 
> 
> 
> 
>> On Oct 31, 2022, at 4:52 PM, casel.chen wrote:
>> 
>> The sql-client-defaults.yaml file no longer exists in recent Flink versions, so where should the catalogs that used to be pre-configured there be defined? Via an initialization SQL file?
>