Re: java.lang.AbstractMethodError when implementing KafkaSerializationSchema

2020-03-25 Thread Arvid Heise
Hi Steve, I just noticed some inconsistency: your class correctly contains the bridge method (last method in javap). Your stack trace, however, mentions org/apache/flink/kafka/shaded/org/apache/kafka/clients/producer/ProducerRecord instead of org.apache.kafka.clients.producer.ProducerRecord. Did
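
For context, a minimal sketch of a KafkaSerializationSchema implementation follows (the class and topic names are placeholders, not Steve's code). The point Arvid raises is that the ProducerRecord returned by the schema must be the very class the connector on the classpath expects; a shaded path like org/apache/flink/kafka/shaded/... in the stack trace usually hints that a differently packaged connector (e.g. a fat sql-connector jar) is on the classpath at runtime, which can surface as an AbstractMethodError even though the code compiles.

    import java.nio.charset.StandardCharsets;

    import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
    import org.apache.kafka.clients.producer.ProducerRecord;  // must be the unshaded class

    public class StringSerializationSchema implements KafkaSerializationSchema<String> {

        @Override
        public ProducerRecord<byte[], byte[]> serialize(String element, Long timestamp) {
            // Key is left null; the value is the UTF-8 encoded element.
            return new ProducerRecord<>("my-topic", element.getBytes(StandardCharsets.UTF_8));
        }
    }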

Re: Issue with single job yarn flink cluster HA

2020-03-25 Thread Dinesh J
Hi Andrey, Yes. The job sometimes does not restart after the current leader fails. Below is the message displayed when trying to reach the application master URL via the YARN UI, and the message remains the same even if the YARN job has been running for 2 days. During this time, even the current yarn application

subscribe messages

2020-03-25 Thread Jianhui

Re: subscribe messages

2020-03-25 Thread Marta Paes Moreira
Hi, Jianhui! To subscribe, please send an e-mail to user-subscr...@flink.apache.org instead. For more information on mailing list subscriptions, check [1]. [1] https://flink.apache.org/community.html#mailing-lists On Wed, Mar 25, 2020 at 10:07 AM Jianhui <980513...@qq.com> wrote: > >

Flink1.10 Cluster's Received is zero in the web when consume from Kafka0.11

2020-03-25 Thread Jim Chen
Hi all, when I use flink-connector-kafka-0.11 to consume from Kafka 0.11, the Records Received count on the cluster web UI is always 0. However, the log is not empty. Can anyone help me? [image: image.png]

Re: Flink1.10 Cluster's Received is zero in the web when consume from Kafka0.11

2020-03-25 Thread Chesnay Schepler
This is a known limitation, see https://issues.apache.org/jira/browse/FLINK-7286 . As a crude workaround you may either break the chain after the source / before the sink, or query the numRecordsOut metric for the source / numRecordsIn metric for the sink via the WebUI metrics tab or REST API.
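
A minimal sketch of the chain-breaking workaround (not code from the thread; the topic, schema, and connection properties are placeholder assumptions):

    import java.util.Properties;

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

    public class ChainBreakExample {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // env.disableOperatorChaining();  // alternative: disable chaining for the whole job

            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "kafka:9092");  // placeholder broker
            props.setProperty("group.id", "my-group");             // placeholder group id

            env.addSource(new FlinkKafkaConsumer011<>("my-topic", new SimpleStringSchema(), props))
               .map(new MapFunction<String, String>() {
                   @Override
                   public String map(String value) {
                       return value;  // pass-through, only here to receive records from the source
                   }
               })
               .disableChaining()  // break the chain after the source so this operator
                                   // reports "Records Received" in the Web UI
               .print();

            env.execute("chain-break-example");
        }
    }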

Re: Flink1.10 Cluster's Received is zero in the web when consume from Kafka0.11

2020-03-25 Thread Jim Chen
Thanks for the tip! Maybe calling env.disableOperatorChaining() can show the received records on the dashboard. Chesnay Schepler wrote on Wednesday, March 25, 2020 at 5:56 PM: > This is a known limitation, see > https://issues.apache.org/jira/browse/FLINK-7286 . > > As a crude workaround you may either break the chain after the

Question of SQL join on nested field of right table

2020-03-25 Thread izual
Hi, Community: I defined a dim table (tblDim) with the schema: root |-- dim_nested_fields: ROW<`id` INT, `product_name` STRING>, and the relevant part of the SQL is: JOIN ... ON leftTable.`nested_field`.id = tblDim.`dim_nested_fields`.id, which throws an exception like: Exception in thread "main" o

How to consume kafka from the last offset?

2020-03-25 Thread Jim Chen
Hi all, I use flink-connector-kafka-0.11 to consume from Kafka 0.11. In the KafkaConsumer properties, I set group.id and auto.offset.reset. In Flink 1.10, I call kafkaConsumer.setStartFromGroupOffsets(); then, when I restart the application, I find the offset does not resume from the last position. Anyone know wh

Re: How to consume kafka from the last offset?

2020-03-25 Thread Dominik Wosiński
Hi Jim, Well, auto.offset.reset is only used when there is no offset saved for this group.id in Kafka. So, if you want to read the data from the latest record (and by latest I mean the newest here) you should assign a group.id that was not previously used
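
A minimal sketch of how these settings interact in the 0.11 connector (the broker address, topic, and group id are placeholder assumptions, not Jim's actual configuration):

    import java.util.Properties;

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;

    public class ResumeFromGroupOffsets {

        static FlinkKafkaConsumer011<String> buildConsumer() {
            Properties props = new Properties();
            props.setProperty("bootstrap.servers", "kafka:9092");  // placeholder broker
            props.setProperty("group.id", "my-consumer-group");    // offsets are committed under this id
            props.setProperty("auto.offset.reset", "latest");      // only applies when no committed offset exists

            FlinkKafkaConsumer011<String> consumer =
                    new FlinkKafkaConsumer011<>("my-topic", new SimpleStringSchema(), props);

            // Resume from the offsets committed for group.id; if the group has no
            // committed offsets yet, the consumer falls back to auto.offset.reset.
            // Note that with checkpointing enabled, offsets are committed back to
            // Kafka only when a checkpoint completes.
            consumer.setStartFromGroupOffsets();
            return consumer;
        }
    }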

Re: Question of SQL join on nested field of right table

2020-03-25 Thread Jark Wu
Hi, This is because a temporal table join doesn't support joining on a nested field. In the blink planner, a temporal table join is translated into a lookup join, which uses the equality-condition fields as the lookup keys. However, nested fields can't be lookup keys for now. Is that possible to have
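
A rough sketch of the kind of rewrite this implies, assuming the Flink 1.10 blink planner, a processing-time attribute named proctime on the left table, and a dim table that exposes its key id as a top-level column instead of inside a ROW (all of these are assumptions, not the setup from the thread):

    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.java.StreamTableEnvironment;

    public class FlattenedLookupJoin {

        static Table join(StreamTableEnvironment tEnv) {
            // Expose the nested id of the probe side as a top-level column.
            tEnv.createTemporaryView("leftFlat", tEnv.sqlQuery(
                    "SELECT t.*, t.`nested_field`.id AS join_id FROM leftTable AS t"));

            // The lookup key on the dim table side must be a top-level column (here: d.id),
            // since nested fields can't be lookup keys.
            return tEnv.sqlQuery(
                    "SELECT * FROM leftFlat AS l " +
                    "JOIN tblDim FOR SYSTEM_TIME AS OF l.proctime AS d " +
                    "ON l.join_id = d.id");
        }
    }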

Re:Re: Question of SQL join on nested field of right table

2020-03-25 Thread izual
Yes, I am trying to do this too, following your advice. It should work. Thank you. At 2020-03-25 19:16:21, "Jark Wu" wrote: Hi, This is because a temporal table join doesn't support joining on a nested field. In the blink planner, a temporal table join will be translated into a lookup join which will

How to make two insert-into sqls orderly

2020-03-25 Thread izual
We have two output sinks, and the ordering guarantee is achieved by code like this: record.apply(insert_into_sink1).thenApply( record_2 = foo(record) record_2.insert_into_sink2 ) So when sink2 receives record_2, the record must already exist in sink1, and we can then look up the corresponding value of rec

Re: How to make two insert-into sqls orderly

2020-03-25 Thread Benchao Li
Hi izual, AFAIK, there is no way to do this in pure SQL. izual wrote on Wednesday, March 25, 2020 at 10:33 PM: > We have two output sinks, and the ordering guarantee is achieved by code like > this: > > record.apply(insert_into_sink1).thenApply( > > record_2 = foo(record) > > record_2.insert_into_sink2 > > ) > >

Re: How to debug checkpoints failing to complete

2020-03-25 Thread David Anderson
seeksst has already covered many of the relevant points, but a few more thoughts: I would start by checking to see if the checkpoints are failing because they time out, or for some other reason. Assuming they are timing out, then a good place to start is to look and see if this can be explained by

Re: Dynamic Flink SQL

2020-03-25 Thread Krzysztof Zarzycki
Hello Arvid, Thanks for joining the thread! First, did you take into consideration that I would like to dynamically add queries on the same source? That means first defining one query, later in the day adding another one, then another one, and so on. A week later, kill one of those and start yet another on

Re: How to calculate one alarm strategy for each device or one alarm strategy for each type of IOT device

2020-03-25 Thread Dawid Wysakowicz
Hi, I think what you are doing makes sense in principle. You probably don't want to store all the data until you have enough, but rather compute only what's necessary on the fly. So e.g. for your example I would store only how many consecutive events with temperature higher than 10 you have seen so far.
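
A minimal sketch of that idea (illustrative only, not code from the thread): a KeyedProcessFunction keyed by device id that keeps just a counter of consecutive high readings in keyed state. The SensorReading type, the threshold of 10, and the alarm length of 3 consecutive events are assumptions.

    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
    import org.apache.flink.util.Collector;

    public class ConsecutiveHighTempAlarm
            extends KeyedProcessFunction<String, ConsecutiveHighTempAlarm.SensorReading, String> {

        // Hypothetical event type standing in for the actual IoT event.
        public static class SensorReading {
            public String deviceId;
            public double temperature;
        }

        private transient ValueState<Integer> consecutiveHigh;

        @Override
        public void open(Configuration parameters) {
            consecutiveHigh = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("consecutive-high", Integer.class));
        }

        @Override
        public void processElement(SensorReading reading, Context ctx, Collector<String> out) throws Exception {
            int count = consecutiveHigh.value() == null ? 0 : consecutiveHigh.value();
            count = reading.temperature > 10 ? count + 1 : 0;  // reset the run on a low reading
            if (count >= 3) {
                out.collect("ALARM for device " + ctx.getCurrentKey());
                count = 0;  // start counting a new run after firing
            }
            consecutiveHigh.update(count);
        }
    }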

savepoint - checkpoint - directory

2020-03-25 Thread Fanbin Bu
Hi, For a savepoint, the dir looks like s3://bucket/savepoint-jobid/*. To resume, I do: flink run -s s3://bucket/savepoint-jobid/ . Perfect! For a checkpoint, the dir looks like s3://bucket/jobid/chk-100 and s3://bucket/jobid/shared <-- what is this for? To resume, which one should I use: flink run -s

Re: Dynamic Flink SQL

2020-03-25 Thread Arvid Heise
I saw that requirement but I'm not sure if you really need to modify the query at runtime. Unless you need reprocessing for newly added rules, I'd probably just cancel with savepoint and restart the application with the new rules. Of course, it depends on the rules themselves and how much state th

Re: Help with flink hdfs sink

2020-03-25 Thread Nick Bendtner
Thank you so much guys, I used "hdfs://nameservice/path/of/your/file", works fine for me now. Best, Nick On Fri, Mar 20, 2020 at 3:48 AM Yang Wang wrote: > I think Jingsong is right. You miss a slash in your HDFS path. > > Usually a HDFS path is like this "hdfs://nameservice/path/of/your/file".
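
For reference, a minimal sketch of a streaming HDFS sink using such a fully qualified path (assuming the StreamingFileSink; the nameservice, path, and input stream are placeholders rather than Nick's actual setup):

    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

    public class HdfsSinkExample {

        static void attachSink(DataStream<String> stream) {
            // Note the scheme and the slash after the nameservice: hdfs://nameservice/path/...
            StreamingFileSink<String> sink = StreamingFileSink
                    .forRowFormat(new Path("hdfs://nameservice/path/of/your/file"),
                            new SimpleStringEncoder<String>("UTF-8"))
                    .build();

            stream.addSink(sink);
        }
    }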

Re: time-windowed joins and tumbling windows

2020-03-25 Thread Vinod Mehra
Thanks Timo for the suggestion! Also apologies for missing your response last week. I will try to come up with a reproducible test case. On Wed, Mar 18, 2020 at 9:27 AM Timo Walther wrote: > Hi Vinod, > > thanks for answering my questions. The == Optimized Logical Plan == > looks as expected. Ho

Re: How to calculate one alarm strategy for each device or one alarm strategy for each type of IOT device

2020-03-25 Thread yang xu
Thank you very much. -- Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/

Re: How to make two insert-into sqls orderly

2020-03-25 Thread Zhenghua Gao
Hi izual, There is a workaround: you could implement your own sink which writes the record to sink1 and sink2 in turn. Best Regards, Zhenghua Gao. On Wed, Mar 25, 2020 at 10:41 PM Benchao Li wrote: > Hi izual, > > AFAIK, there is no way to do this in pure SQL. > > > > > izual wrote on Wednesday, March 25, 2020 at
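
A hedged sketch of this workaround: a single RichSinkFunction that writes each record to the first store and only then derives and writes the second record, so sink2 never sees a record before sink1 has it. The StoreClient interface and derive(...) are hypothetical stand-ins for the real sink logic and the foo(record) transformation from the thread.

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

    public class OrderedDualSink<T> extends RichSinkFunction<T> {

        /** Hypothetical client for an external store; stands in for the real sinks. */
        public interface StoreClient<R> {
            void write(R record);
        }

        // Created in open() so the clients don't need to be serializable.
        private transient StoreClient<T> sink1;
        private transient StoreClient<T> sink2;

        @Override
        public void open(Configuration parameters) {
            sink1 = record -> { /* placeholder: write to the first store */ };
            sink2 = record -> { /* placeholder: write to the second store */ };
        }

        @Override
        public void invoke(T record, Context context) {
            sink1.write(record);          // the first store sees the record first
            T record2 = derive(record);   // foo(record) in the original pseudocode
            sink2.write(record2);         // the second store is written only afterwards
        }

        private T derive(T record) {
            // Placeholder for the transformation the thread calls foo(record).
            return record;
        }
    }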

Re: Emit message at start and end of event time session window

2020-03-25 Thread Manas Kale
Hi Till, When I run the example code that you posted, the order of the three messages (window started, contents of window and window ended) is non-deterministic. This is surprising to me, as setParallelism(1) has been used in the pipeline; I assumed this would eliminate any form of race condition