Re: -yD Kerberos authentication issue

2019-12-30 Thread Terry Wang
Hi ~ This issue has already been fixed in the latest code, and it should not exist on Flink 1.9 either; feel free to give it a try. Best, Terry Wang > On Dec 31, 2019, at 14:18, > wrote: > > Hi everyone, > > We need to submit Kerberos authentication parameters dynamically via -yD, > > and would like to know why this JIRA was marked as Won't Fix. Thanks. > > https://issues.apache.org/jira/browse/FLINK-12130 >
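For context, `-yD` passes dynamic properties on the YARN submission command line; a hedged sketch of what the requester is asking for (keytab path, principal, and job jar below are placeholders):

```shell
# Sketch: passing Kerberos settings as YARN dynamic properties via -yD.
# The keytab path, principal, and job jar are placeholders.
flink run -m yarn-cluster \
  -yD security.kerberos.login.keytab=/path/to/user.keytab \
  -yD security.kerberos.login.principal=user@EXAMPLE.COM \
  ./my-job.jar
```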

-yD Kerberos authentication issue

2019-12-30 Thread sllence
Hi everyone, we need to submit Kerberos authentication parameters dynamically via -yD, and would like to know why this JIRA was marked as Won't Fix. Thanks. https://issues.apache.org/jira/browse/FLINK-12130

Re: StreamTableEnvironment.registerDatastream() opening up user-defined schemaDescription and DeserializationSchema

2019-12-30 Thread aven . wu
Hi! "Defining the JSONObject type as an Object type" works when the fields and types are known in advance and can be hard-coded into the program. If this capability were opened up, a DataStream could be registered as a table without writing code (i.e. without adding a POJO). Best wish. Sent from Mail for Windows 10. From: Terry Wang Sent: December 30, 2019 12:37 To: user-zh@flink.apache.org Subject: Re: StreamTableEnvironment.registerDatastream()
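The current code-based workaround under discussion looks roughly like this with the Flink 1.9 Table API; the table and field names below are illustrative:

```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class RegisterStreamSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
        DataStream<Tuple2<String, Integer>> stream = env.fromElements(Tuple2.of("a", 1));
        // Explicit field names in the third argument give the table its schema,
        // avoiding a dedicated POJO for each stream type
        tEnv.registerDataStream("events", stream, "word, cnt");
        Table result = tEnv.sqlQuery("SELECT word, SUM(cnt) FROM events GROUP BY word");
    }
}
```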

Re: Connect RocksDB which created by Flink checkpoint

2019-12-30 Thread Congxian Qiu
If you have specified the LOCAL_DIRECTORIES[1] , then the LOG will go into the LOCAL_DIRECTORIES. [1] https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/config.html#state-backend-rocksdb-localdir Best, Congxian Yun Tang 于2019年12月30日周一 下午7:03写道: > Hi Alex > > First of all,
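The option referenced in [1] goes into flink-conf.yaml; the directory below is a placeholder:

```yaml
state.backend: rocksdb
# RocksDB working directory on each task manager's local disk (placeholder path);
# the RocksDB LOG file ends up here when this option is set
state.backend.rocksdb.localdir: /data/flink/rocksdb
```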

Re: Duplicate tasks for the same query

2019-12-30 Thread Kurt Young
BTW, you could also have a more efficient version of deduplicating the user table by using the top-n feature [1]. Best, Kurt [1] https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/sql.html#top-n On Tue, Dec 31, 2019 at 9:24 AM Jingsong Li wrote: > Hi RKandoji, > > In theory,
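The top-n based deduplication from [1] looks roughly like this; the table and column names are illustrative:

```sql
-- Keep only the first row seen per user_id; "users" and "proctime" are
-- illustrative names, and proctime could equally be an event-time attribute.
SELECT user_id, user_name
FROM (
  SELECT *,
    ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY proctime ASC) AS row_num
  FROM users
)
WHERE row_num = 1
```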

Re: Flink SQL + savepoint

2019-12-30 Thread Kurt Young
I created an issue to track this feature: https://issues.apache.org/jira/browse/FLINK-15440 Best, Kurt On Tue, Dec 31, 2019 at 8:00 AM Fanbin Bu wrote: > Kurt, > > Is there any update on this or roadmap that supports savepoints with Flink > SQL? > > On Sun, Nov 3, 2019 at 11:25 PM Kurt Young

Re: Duplicate tasks for the same query

2019-12-30 Thread Jingsong Li
Hi RKandoji, In theory, you don't need to do anything. First, the optimizer deduplicates redundant nodes during optimization. Second, after SQL optimization, if the optimized plan still has duplicate nodes, the planner will automatically reuse them. There are config options to control whether we should
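The config options mentioned are presumably the blink planner's subplan-reuse flags in Flink 1.9; a hedged sketch, assuming `tEnv` is a blink-planner table environment:

```java
import org.apache.flink.table.api.TableConfig;

// Illustrative fragment: tEnv is assumed to be a blink-planner
// StreamTableEnvironment; the option keys are the Flink 1.9 names.
TableConfig conf = tEnv.getConfig();
conf.getConfiguration().setBoolean("table.optimizer.reuse-sub-plan-enabled", true);
conf.getConfiguration().setBoolean("table.optimizer.reuse-source-enabled", true);
```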

Flink Weekly | Weekly Community Digest - 2019/12/31

2019-12-30 Thread zhisheng
Hi everyone, I'm glad to share this week's community digest, which covers the discussion on supporting JSON functions in Flink SQL, new Flink community events and blog posts in China, and a summary of the questions raised on the Chinese mailing list. Flink Development = * [SQL] Forward Xu started a discussion on supporting JSON functions in Flink SQL; the conclusions and ideas were recorded in FLIP-90 [1] *

Re: Flink SQL + savepoint

2019-12-30 Thread Fanbin Bu
Kurt, Is there any update on this or roadmap that supports savepoints with Flink SQL? On Sun, Nov 3, 2019 at 11:25 PM Kurt Young wrote: > It's not possible for SQL and Table API jobs playing with savepoints yet, > but I > think this is a popular requirement and we should definitely discuss the

Re: Duplicate tasks for the same query

2019-12-30 Thread RKandoji
Thanks Terry and Jingsong, Currently I'm on the 1.8 version using the Flink planner for stream processing; I'll switch to the 1.9 version to try out the blink planner. Could you please point me to any examples (Java preferred) using SubplanReuser? Thanks, RK On Sun, Dec 29, 2019 at 11:32 PM Jingsong Li wrote:
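Switching to the blink planner in Flink 1.9 looks roughly like this (a sketch of the environment setup, not of the SubplanReuser internals themselves):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class BlinkPlannerSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Select the blink planner explicitly; in Flink 1.9 the old Flink
        // planner is still the default
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);
    }
}
```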

Re: Stateful function metrics

2019-12-30 Thread Dan Pettersson
Hi Igal, and thanks for your quick response; yes, you got my second question right. I'm building a small PoC around fraudulent trades, and in short, I've fine-grained the functions to the level participantId + "::" + instrumentId (i.e. "BankA::AMAZON"). In this flow of stock exchange messages,

Re: Submit high version compiled code jar to low version flink cluster?

2019-12-30 Thread Yun Tang
Hi Lei It's better to use the SAME version to submit the job from the client side. Even if the major version of Flink is the same, compatibility is not officially declared or supported. There is a known issue caused by some classes missing 'serialVersionUID'. [1] [1]

Re: Connect RocksDB which created by Flink checkpoint

2019-12-30 Thread Yun Tang
Hi Alex First of all, RocksDB is not created by the Flink checkpoint mechanism. RocksDB is launched once you have configured it and use keyed state, no matter whether you have ever enabled checkpointing. If you want to check the configuration and data in RocksDB, please log in to the task manager node. The

Connect RocksDB which created by Flink checkpoint

2019-12-30 Thread qq
Hi all. How can I connect to the RocksDB instance created by Flink checkpointing? I want to check the RocksDB configuration and the data in RocksDB. Thanks very much. AlexFu

clean package maven dependency

2019-12-30 Thread 陈赋赟
HI ALL, I want to package the project, but when mvn clean package is executed it throws an exception. Which repository should I use? >> [ERROR] Failed to execute goal on project flink-dist_2.11: Could not resolve dependencies for project org.apache.flink:flink-dist_2.11:jar:1.8-SNAPSHOT: The
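Resolving a `-SNAPSHOT` dependency generally requires a snapshot repository on the resolution path; a hedged pom.xml sketch, assuming the Apache snapshots repository is the one intended:

```xml
<!-- Illustrative fragment: enable the Apache snapshot repository so that
     *-SNAPSHOT artifacts such as flink-dist_2.11:1.8-SNAPSHOT can resolve -->
<repositories>
  <repository>
    <id>apache.snapshots</id>
    <url>https://repository.apache.org/snapshots/</url>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>
```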

Re: Exactly-once ambiguities

2019-12-30 Thread Alessandro Solimando
> Regarding event-time processing and watermarking, I have gathered that if > an event is received late, after the allowed lateness time, it will be > dropped, even though I think this is an antithesis of exactly-once semantics. > > Yes, allowed lateness is a compromise between exactly-once
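The allowed-lateness mechanism under discussion can be sketched as follows; `events`, the key selector, and `lateTag` are illustrative names, and timestamps/watermarks are assumed to be assigned already:

```java
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.OutputTag;

// Illustrative fragment: "events" is a DataStream<Event> with timestamps
// and watermarks already assigned.
OutputTag<Event> lateTag = new OutputTag<Event>("late-events") {};
events
    .keyBy(e -> e.userId)
    .window(TumblingEventTimeWindows.of(Time.minutes(1)))
    .allowedLateness(Time.seconds(30))   // late-but-allowed events update emitted results
    .sideOutputLateData(lateTag)         // events beyond the lateness bound go to a side output
    .sum("count");
```

Routing too-late events to a side output is the usual way to avoid silently dropping them.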

Re: An issue with low-throughput on Flink 1.8.3 running Yahoo streaming benchmarks

2019-12-30 Thread vino yang
Hi Shinhyung, Can you compare the performance of the different Flink versions based on the same environment (or at least the same configuration of nodes and frameworks)? I see there are some different configurations between both clusters and frameworks. It would be better to compare in the same