Hi,
Tzu-Li (Gordon) Tai wrote
> This seems odd. Are your events intended to be a list? If not, this
> should be a `DataStream` of single events.
>
> From the code snippet you’ve attached in the first post, it seems like
> you’ve initialized your source incorrectly.
>
> `env.fromElements(List<...>)` will take the whole list as a single element.
Hi,
Here’s an example:
DataStream inputStream = …;
inputStream.addSink(new FlinkKafkaProducer09<>(
    "defaultTopic", new CustomKeyedSerializationSchema(), props));
Code for CustomKeyedSerializationSchema:
public class CustomKeyedSerializationSchema implements
    KeyedSerializationSchema {
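The class above is cut off before its body. For context, Flink's KeyedSerializationSchema asks for three methods: serializeKey, serializeValue, and getTargetTopic. The following is a self-contained sketch using a simplified stand-in for that interface and a hypothetical Event type (both are assumptions, not the poster's actual code), so the shape of an implementation is visible without Flink on the classpath:

```java
import java.nio.charset.StandardCharsets;

public class KeyedSchemaSketch {

    // Simplified stand-in for Flink's KeyedSerializationSchema<T>
    // (the real interface lives in org.apache.flink.streaming.util.serialization).
    interface KeyedSerializationSchema<T> {
        byte[] serializeKey(T element);
        byte[] serializeValue(T element);
        String getTargetTopic(T element); // null means "use the producer's default topic"
    }

    // Hypothetical event type, only for illustration.
    static class Event {
        final String id;
        final String payload;
        Event(String id, String payload) { this.id = id; this.payload = payload; }
    }

    static class CustomKeyedSerializationSchema implements KeyedSerializationSchema<Event> {
        @Override
        public byte[] serializeKey(Event e) {
            return e.id.getBytes(StandardCharsets.UTF_8);
        }
        @Override
        public byte[] serializeValue(Event e) {
            return e.payload.getBytes(StandardCharsets.UTF_8);
        }
        @Override
        public String getTargetTopic(Event e) {
            return null; // fall back to the topic passed to the producer
        }
    }

    public static void main(String[] args) {
        CustomKeyedSerializationSchema schema = new CustomKeyedSerializationSchema();
        Event e = new Event("k1", "hello");
        System.out.println(new String(schema.serializeKey(e), StandardCharsets.UTF_8));   // k1
        System.out.println(new String(schema.serializeValue(e), StandardCharsets.UTF_8)); // hello
    }
}
```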
Hi,
void output(DataStream inputStream) {
This seems odd. Are your events intended to be a list? If not, this should be
a `DataStream` of single events.
From the code snippet you’ve attached in the first post, it seems like you’ve
initialized your source incorrectly.
`env.fromElements(List<...>)` will take the whole list as a single element.
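The pitfall is Java varargs inference: with `fromElements(T...)`, passing a List makes T resolve to the list type, so the stream has exactly one element (the whole list), whereas `env.fromCollection(list)` yields one record per entry. A stdlib-only sketch of the same inference behaviour (the `fromElements` helper here is a stand-in, not Flink's method):

```java
import java.util.Arrays;
import java.util.List;

public class FromElementsPitfall {

    // Stand-in for the varargs signature of StreamExecutionEnvironment.fromElements(T...).
    // Not Flink code; it only demonstrates the type inference.
    static <T> List<T> fromElements(T... data) {
        return Arrays.asList(data);
    }

    public static void main(String[] args) {
        List<String> events = Arrays.asList("a", "b", "c");

        // Passing the list itself: T is inferred as List<String>,
        // so the "stream" contains exactly one element (the whole list).
        List<List<String>> wrong = fromElements(events);

        // Passing the elements individually (in Flink, prefer
        // env.fromCollection(events)) yields one record per element.
        List<String> right = fromElements("a", "b", "c");

        System.out.println(wrong.size() + " " + right.size()); // 1 3
    }
}
```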
Hi,
We have been testing with the FsStateBackend for the last few days and have not
encountered this issue anymore.
However, we will evaluate the RocksDB backend again soon because we want
incremental checkpoints. I will report back if I have more updates.
Best regards,
Kien
On Jul 15, 2017,
Hi Prashantnayak,
Thanks a lot for reporting this problem. Could you provide more details so we
can address it?
I am guessing the master has to delete too many files when a checkpoint is
subsumed, which is very common in our cases. The number of files in the
recovery directory will increase if the master cannot
Hi,
the Table API internally operates on Row. If you ingest other types, they
are first converted into Rows.
I would recommend converting your DataStream into a DataStream of Row
using a MapFunction, and then converting that stream into a Table using
TableEnvironment.fromDataStream().
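The MapFunction step Fabian describes can be sketched with stdlib stand-ins for Flink's MapFunction and Row (and a hypothetical Sensor input type), so the conversion logic is visible without Flink dependencies:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ToRowSketch {

    // Stand-in for Flink's MapFunction<IN, OUT>.
    interface MapFunction<I, O> { O map(I value); }

    // Simplified stand-in for org.apache.flink.types.Row: a positional tuple.
    static class Row {
        final Object[] fields;
        Row(Object... fields) { this.fields = fields; }
        @Override public String toString() { return Arrays.toString(fields); }
    }

    // Hypothetical input type, only for illustration.
    static class Sensor {
        final String id;
        final double temp;
        Sensor(String id, double temp) { this.id = id; this.temp = temp; }
    }

    public static void main(String[] args) {
        // In Flink this would be inputStream.map(toRow), followed by
        // tableEnv.fromDataStream(rowStream).
        MapFunction<Sensor, Row> toRow = s -> new Row(s.id, s.temp);

        List<Row> rows = new ArrayList<>();
        for (Sensor s : Arrays.asList(new Sensor("a", 21.5), new Sensor("b", 19.0))) {
            rows.add(toRow.map(s));
        }
        System.out.println(rows); // [[a, 21.5], [b, 19.0]]
    }
}
```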
Best, Fabian
2017-07-12
Tzu-Li (Gordon) Tai wrote
> It seems like you’ve misunderstood how to use the FlinkKafkaProducer, or
> is there any specific reason why you want to emit elements to Kafka in a
> map function?
>
> The correct way to use it is to add it as a sink function to your
> pipeline, i.e.
>
> DataStream
>
Hi,
There was also a problem in releasing the ES 5 connector with Flink 1.3.0. You
only said you’re using Flink 1.3; would that be 1.3.0 or 1.3.1?
Best,
Aljoscha
> On 16. Jul 2017, at 13:42, Fabian Wollert wrote:
Hi Aljoscha,
we are running Flink in standalone mode, inside Docker in AWS. I will
check the dependencies tomorrow, although I'm wondering: I'm running Flink
1.3 everywhere and the appropriate ES connector, which was only released with
1.3, so it's weird where this dependency mix-up comes from ...
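If the mix-up is in the build, it can help to check that the connector artifact matches the Flink version exactly. Assuming a Maven build and the Scala 2.11 artifact (both assumptions; adjust the suffix and version to your setup), the ES 5 connector dependency for Flink 1.3 looks roughly like:

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-elasticsearch5_2.11</artifactId>
    <version>1.3.1</version>
</dependency>
```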
Hi,
OK, then I misunderstood. Yes, a PurgingTrigger is similar (in fact identical) to always
returning FIRE_AND_PURGE instead of FIRE in a custom Trigger. I thought your
problem was that data is never cleared away when using GlobalWindows. Is that
not the case?
Best,
Aljoscha
> On 14. Jul 2017, at 16:2
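To illustrate the equivalence Aljoscha describes: Flink's PurgingTrigger wraps another trigger and rewrites its FIRE results to FIRE_AND_PURGE. A stdlib-only sketch with stand-ins for Trigger and TriggerResult (not Flink's actual classes):

```java
public class PurgeSketch {

    // Stand-ins for Flink's TriggerResult and Trigger; not the real API.
    enum TriggerResult { CONTINUE, FIRE, FIRE_AND_PURGE, PURGE }

    interface Trigger { TriggerResult onElement(); }

    // What PurgingTrigger.of(inner) effectively does: wrap another trigger
    // and rewrite every FIRE into FIRE_AND_PURGE, leaving other results alone.
    static Trigger purging(Trigger inner) {
        return () -> {
            TriggerResult r = inner.onElement();
            return r == TriggerResult.FIRE ? TriggerResult.FIRE_AND_PURGE : r;
        };
    }

    public static void main(String[] args) {
        Trigger alwaysFire = () -> TriggerResult.FIRE;
        System.out.println(purging(alwaysFire).onElement()); // FIRE_AND_PURGE
    }
}
```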