Hi jiangjie,
Yes, I am using the second case (Flink 1.7.1, Kafka
0.10.2, FlinkKafkaConsumer010).
But now there is a problem: the data is consumed normally, but offsets are no
longer being committed. The following exception appears:
[image: image.png]
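For context, FlinkKafkaConsumer010 only commits offsets back to Kafka as a side effect of completed checkpoints (when checkpointing is enabled), so consumption can proceed normally even while commits are failing. The toy model below is not Flink code; it is only a sketch of that commit-on-checkpoint behavior, with all class and method names invented for illustration:

```java
// Toy model (not Flink's actual implementation) of commit-on-checkpoint
// semantics: records keep flowing regardless of whether offsets get committed
// back to Kafka; the committed offset only advances when a checkpoint completes.
import java.util.ArrayList;
import java.util.List;

public class OffsetCommitModel {
    private long consumedOffset = -1;   // highest offset handed to the pipeline
    private long committedOffset = -1;  // last offset committed back to Kafka
    private final List<Long> pendingCheckpoints = new ArrayList<>();

    /** A record is consumed; consumption never waits on commits. */
    public void consume(long offset) { consumedOffset = offset; }

    /** A checkpoint starts and snapshots the current consumed offset. */
    public void snapshotState() { pendingCheckpoints.add(consumedOffset); }

    /** The checkpoint completes; only now is the offset committed. */
    public void notifyCheckpointComplete() {
        if (!pendingCheckpoints.isEmpty()) {
            committedOffset = pendingCheckpoints.remove(0);
        }
    }

    public long consumedOffset()  { return consumedOffset; }
    public long committedOffset() { return committedOffset; }

    public static void main(String[] args) {
        OffsetCommitModel m = new OffsetCommitModel();
        for (long o = 0; o < 10; o++) m.consume(o);
        System.out.println(m.committedOffset()); // -1: nothing committed yet
        m.snapshotState();
        m.notifyCheckpointComplete();
        System.out.println(m.committedOffset()); // 9
    }
}
```

This is why a broken commit path shows up as a stalled committed offset rather than stalled consumption.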
Becket Qin wrote on Thu, Sep 5, 2019 at 11:32 AM:
> Hi
Hi, Shuyi:
What is the current status of this discussion? We are also looking forward to
this feature.
Thanks.
Shuyi Chen wrote on Fri, Jun 8, 2018 at 3:04 PM:
> Thanks a lot for the comments, Till and Fabian.
>
> The RemoteEnvironment does provide a way to specify jar files at
> construction, but we want the jar files
>
> Thanks for the help!
>
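The contrast being discussed above can be sketched in plain Java. Flink's real entry point is ExecutionEnvironment.createRemoteEnvironment(host, port, jarFiles...), which takes the jar paths up front at construction; the class and method names below are invented purely to illustrate registering jars after construction instead:

```java
// Hypothetical sketch (names are invented, this is not Flink's API) contrasting
// "jars fixed at construction" with "jars registered afterwards".
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RemoteJobSketch {
    private final String host;
    private final int port;
    private final List<String> jarFiles = new ArrayList<>();

    public RemoteJobSketch(String host, int port, String... jars) {
        this.host = host;
        this.port = port;
        Collections.addAll(jarFiles, jars);
    }

    /** Registers an additional jar after construction. */
    public void addJar(String path) { jarFiles.add(path); }

    public List<String> jarFiles() {
        return Collections.unmodifiableList(jarFiles);
    }

    public static void main(String[] args) {
        // "udf.jar" is known at construction; "deps.jar" only becomes known later.
        RemoteJobSketch job = new RemoteJobSketch("localhost", 8081, "udf.jar");
        job.addJar("deps.jar");
        System.out.println(job.jarFiles()); // [udf.jar, deps.jar]
    }
}
```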
:29 wrote:
> you only have to compile the module that you changed along with
> flink-dist to test things locally.
>
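The module-by-module rebuild described in the quote above can be sketched as Maven commands. This assumes you are at the root of a Flink source checkout; the module name flink-runtime is only a placeholder for whichever module you changed:

```shell
# Rebuild only the module you actually changed (flink-runtime is a placeholder),
# skipping tests, then rebuild flink-dist so the change lands in the distribution.
mvn install -pl flink-runtime -DskipTests
mvn install -pl flink-dist -DskipTests
```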
> On 06.06.2018 10:27, Marvin777 wrote:
> > Hi, all.
> > It takes a long time to modify some of the code and recompile it. The
> > process is painful.
>
Hi, all.
It takes a long time to modify some of the code and recompile it. The
process is painful.
Is there any way I can save time?
Thanks!
Hi, all:
I have a question about changing the LatencyGauge to a histogram metric. Is
such a scheme feasible?
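A minimal sketch of the idea: back a latency metric with a histogram of samples and expose quantiles instead of a single gauge value. This standalone class is only an illustration and is not Flink's Histogram metric interface, though that interface is similar in spirit:

```java
// Standalone sketch of a histogram-backed latency metric: record samples,
// expose quantiles. Not Flink code; for illustration only.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LatencyHistogram {
    private final List<Long> samples = new ArrayList<>();

    /** Records one latency sample. */
    public void update(long latencyMillis) { samples.add(latencyMillis); }

    /** Returns the q-quantile (0 < q <= 1) of the recorded samples. */
    public long quantile(double q) {
        List<Long> sorted = new ArrayList<>(samples);
        Collections.sort(sorted);
        int index = (int) Math.ceil(q * sorted.size()) - 1;
        return sorted.get(Math.max(index, 0));
    }

    public static void main(String[] args) {
        LatencyHistogram h = new LatencyHistogram();
        for (long v = 1; v <= 100; v++) h.update(v);
        System.out.println(h.quantile(0.5));  // 50
        System.out.println(h.quantile(0.99)); // 99
    }
}
```

Compared with a gauge holding only the latest value, this exposes the latency distribution (p50, p99, etc.) at the cost of storing samples.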
I want to know the latest progress on FLINK-7608.
@zentol, you suggested that we should delay merging this PR by a week or two;
what should I do in my version now?
That has been resolved; it was caused by the Hadoop version issue.
Thanks.
2017-11-08 17:54 GMT+08:00 Chesnay Schepler :
> For me they showed in user mailing list, but not in dev. (or maybe the
> reverse, not quite sure...)
>
> On 08.11.2017 10:47, Aljoscha Krettek wrote:
>
>> Hi,
>>