Re: Heartbeat of TaskManager timed out.

2020-07-07 Thread Xintong Song
Thanks for the updates, Ori.

I'm not familiar with Scala. Just curious, if what you suspect is true, is
it a bug of Scala?

Thank you~

Xintong Song



On Tue, Jul 7, 2020 at 1:41 PM Ori Popowski  wrote:

> Hi,
>
> I just wanted to update that the problem is now solved!
>
> I suspect that Scala's flatten() method has a memory problem on very
> large lists (> 2 billion elements). When using Scala Lists, the memory
> seems to leak but the app keeps running, and when using Scala Vectors, a
> weird IllegalArgumentException is thrown [1].
>
> I implemented my own flatten() method using Arrays and quickly ran into
> NegativeArraySizeException since the integer representing the array size
> wrapped around at Integer.MaxValue and became negative. After I started
> catching this exception all my cluster problems just resolved. Checkpoints,
> the heartbeat timeout, and also the memory and CPU utilization.
>
> I still need to confirm my suspicion towards Scala's flatten() though,
> since I haven't "lab-tested" it.
>
> [1] https://github.com/NetLogo/NetLogo/issues/1830
>
> On Sun, Jul 5, 2020 at 2:21 PM Ori Popowski  wrote:
>
>> Hi,
>>
>> I initially thought this too, which is why my heap is almost 30GiB.
>> However, I started to analyze the Java Flight Recorder files, and I
>> suspect there's a memory leak in Scala's flatten() method.
>> I changed the line that uses flatten(), and instead of flatten() I'm
>> just creating a ByteArray the size flatten() would have returned, and I
>> no longer have the heartbeat problem.
>>
>> So now my code is
>> val recordingData =
>> Array.fill[Byte](recordingBytes.map(_.length).sum)(0)
>>
>> instead of
>> val recordingData = recordingBytes.flatten
>>
>> I attach a screenshot of Java Mission Control
>>
>>
>>
>> On Fri, Jul 3, 2020 at 7:24 AM Xintong Song 
>> wrote:
>>
>>> I agree with Roman's suggestion for increasing heap size.
>>>
>>> It seems that the heap grows faster than it is freed. Thus eventually the Full
>>> GC is triggered, taking more than 50s and causing the timeout. However,
>>> even the full GC frees only 2GB of space out of the 28GB max size. That
>>> probably suggests that the max heap size is not sufficient.
>>>
 2020-07-01T10:15:12.869+: [Full GC (Allocation Failure)
  28944M->26018M(28960M), 51.5256128 secs]
 [Eden: 0.0B(1448.0M)->0.0B(1448.0M) Survivors: 0.0B->0.0B Heap:
 28944.6M(28960.0M)->26018.9M(28960.0M)], [Metaspace:
 113556K->112729K(1150976K)]
   [Times: user=91.08 sys=0.06, real=51.53 secs]
>>>
>>>
>>> I would not be so sure about the memory leak. I think it could be a
>>> normal pattern that memory keeps growing as more data is processed. E.g.,
>>> from the provided log, I see window operation tasks executed in the task
>>> manager. Such operation might accumulate data until the window is emitted.
>>>
>>> Maybe Ori you can also take a look at the task manager log when the job
>>> runs with Flink 1.9 without this problem, see how the heap size changed. As
>>> I mentioned before, it is possible that, with the same configurations Flink
>>> 1.10 has less heap size compared to Flink 1.9, due to the memory model
>>> changes.
>>>
>>> Thank you~
>>>
>>> Xintong Song
>>>
>>>
>>>
>>> On Thu, Jul 2, 2020 at 8:58 PM Ori Popowski  wrote:
>>>
 Thank you very much for your analysis.

 When I said there was no memory leak - I meant that from the specific
 TaskManager I monitored in real-time using JProfiler.
 Unfortunately, this problem occurs only in 1 of the TaskManager and you
 cannot anticipate which. So when you pick a TM to profile at random -
 everything looks fine.

 I'm running the job again with Java FlightRecorder now, and I hope I'll
 find the reason for the memory leak.

 Thanks!

 On Thu, Jul 2, 2020 at 3:42 PM Khachatryan Roman <
 khachatryan.ro...@gmail.com> wrote:

> Thanks, Ori
>
> From the log, it looks like there IS a memory leak.
>
> At 10:12:53 there was the last "successful" GC when 13GB was freed in
> 0.4653809 secs:
> [Eden: 17336.0M(17336.0M)->0.0B(2544.0M) Survivors: 40960.0K->2176.0M
> Heap: 23280.3M(28960.0M)->10047.0M(28960.0M)]
>
> Then the heap grew from 10G to 28G with GC not being able to free up
> enough space:
> [Eden: 2544.0M(2544.0M)->0.0B(856.0M) Survivors: 2176.0M->592.0M Heap:
> 12591.0M(28960.0M)->11247.0M(28960.0M)]
> [Eden: 856.0M(856.0M)->0.0B(1264.0M) Survivors: 592.0M->184.0M Heap:
> 12103.0M(28960.0M)->11655.0M(28960.0M)]
> [Eden: 1264.0M(1264.0M)->0.0B(1264.0M) Survivors: 184.0M->184.0M Heap:
> 12929.0M(28960.0M)->12467.0M(28960.0M)]
> ... ...
> [Eden: 1264.0M(1264.0M)->0.0B(1264.0M) Survivors: 184.0M->184.0M Heap:
> 28042.6M(28960.0M)->27220.6M(28960.0M)]
> [Eden: 1264.0M(1264.0M)->0.0B(1264.0M) Survivors: 184.0M->184.0M Heap:
> 28494.5M(28960.0M)->28720.6M(28960.0M)]
> [Eden: 224.0M(1264.0M)->0.0B(1448.0M) Survivors: 184.0M->0.0B Heap:
>

Re: Heartbeat of TaskManager timed out.

2020-07-07 Thread Ori Popowski
I wouldn't want to jump to conclusions, but from what I see, very large
lists and vectors do not work well with flatten in Scala 2.11, each for its own
reasons.

In any case, it's 100% not a Flink issue.
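
For illustration, here is a minimal Scala sketch of the overflow described in the quoted thread below (the method and names are illustrative only, not the actual job code): summing the lengths of the nested arrays as an Int silently wraps past Integer.MaxValue, so a hand-rolled flatten should compute the total as a Long and check it before allocating.

def safeFlatten(chunks: Seq[Array[Byte]]): Array[Byte] = {
  // Sum as Long so the total cannot wrap around to a negative Int,
  // which is what produces NegativeArraySizeException on allocation.
  val total: Long = chunks.iterator.map(_.length.toLong).sum
  require(total <= Int.MaxValue, s"flattened size $total exceeds Int.MaxValue")
  val out = new Array[Byte](total.toInt)
  var pos = 0
  for (chunk <- chunks) {
    System.arraycopy(chunk, 0, out, pos, chunk.length)
    pos += chunk.length
  }
  out
}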

On Tue, Jul 7, 2020 at 10:10 AM Xintong Song  wrote:

> Thanks for the updates, Ori.
>
> I'm not familiar with Scala. Just curious, if what you suspect is true, is
> it a bug of Scala?
>
> Thank you~
>
> Xintong Song
>
>
>
> On Tue, Jul 7, 2020 at 1:41 PM Ori Popowski  wrote:
>
>> Hi,
>>
>> I just wanted to update that the problem is now solved!
>>
>> I suspect that Scala's flatten() method has a memory problem on very
>> large lists (> 2 billion elements). When using Scala Lists, the memory
>> seems to leak but the app keeps running, and when using Scala Vectors, a
>> weird IllegalArgumentException is thrown [1].
>>
>> I implemented my own flatten() method using Arrays and quickly ran into
>> NegativeArraySizeException since the integer representing the array size
>> wrapped around at Integer.MaxValue and became negative. After I started
>> catching this exception all my cluster problems just resolved. Checkpoints,
>> the heartbeat timeout, and also the memory and CPU utilization.
>>
>> I still need to confirm my suspicion towards Scala's flatten() though,
>> since I haven't "lab-tested" it.
>>
>> [1] https://github.com/NetLogo/NetLogo/issues/1830
>>
>> On Sun, Jul 5, 2020 at 2:21 PM Ori Popowski  wrote:
>>
>>> Hi,
>>>
>>> I initially thought this too, which is why my heap is almost 30GiB.
>>> However, I started to analyze the Java Flight Recorder files, and I
>>> suspect there's a memory leak in Scala's flatten() method.
>>> I changed the line that uses flatten(), and instead of flatten() I'm
>>> just creating a ByteArray the size flatten() would have returned, and I
>>> no longer have the heartbeat problem.
>>>
>>> So now my code is
>>> val recordingData =
>>> Array.fill[Byte](recordingBytes.map(_.length).sum)(0)
>>>
>>> instead of
>>> val recordingData = recordingBytes.flatten
>>>
>>> I attach a screenshot of Java Mission Control
>>>
>>>
>>>
>>> On Fri, Jul 3, 2020 at 7:24 AM Xintong Song 
>>> wrote:
>>>
 I agree with Roman's suggestion for increasing heap size.

 It seems that the heap grows faster than freed. Thus eventually the
 Full GC is triggered, taking more than 50s and causing the timeout.
 However, even the full GC frees only 2GB space out of the 28GB max size.
 That probably suggests that the max heap size is not sufficient.

> 2020-07-01T10:15:12.869+: [Full GC (Allocation Failure)
>  28944M->26018M(28960M), 51.5256128 secs]
> [Eden: 0.0B(1448.0M)->0.0B(1448.0M) Survivors: 0.0B->0.0B Heap:
> 28944.6M(28960.0M)->26018.9M(28960.0M)], [Metaspace:
> 113556K->112729K(1150976K)]
>   [Times: user=91.08 sys=0.06, real=51.53 secs]


 I would not be so sure about the memory leak. I think it could be a
 normal pattern that memory keeps growing as more data is processed. E.g.,
 from the provided log, I see window operation tasks executed in the task
 manager. Such operation might accumulate data until the window is emitted.

 Maybe Ori you can also take a look at the task manager log when the job
 runs with Flink 1.9 without this problem, see how the heap size changed. As
 I mentioned before, it is possible that, with the same configurations Flink
 1.10 has less heap size compared to Flink 1.9, due to the memory model
 changes.

 Thank you~

 Xintong Song



 On Thu, Jul 2, 2020 at 8:58 PM Ori Popowski  wrote:

> Thank you very much for your analysis.
>
> When I said there was no memory leak - I meant that from the specific
> TaskManager I monitored in real-time using JProfiler.
> Unfortunately, this problem occurs only in 1 of the TaskManager and
> you cannot anticipate which. So when you pick a TM to profile at random -
> everything looks fine.
>
> I'm running the job again with Java FlightRecorder now, and I hope
> I'll find the reason for the memory leak.
>
> Thanks!
>
> On Thu, Jul 2, 2020 at 3:42 PM Khachatryan Roman <
> khachatryan.ro...@gmail.com> wrote:
>
>> Thanks, Ori
>>
>> From the log, it looks like there IS a memory leak.
>>
>> At 10:12:53 there was the last "successful" GC when 13GB was freed in
>> 0.4653809 secs:
>> [Eden: 17336.0M(17336.0M)->0.0B(2544.0M) Survivors: 40960.0K->2176.0M
>> Heap: 23280.3M(28960.0M)->10047.0M(28960.0M)]
>>
>> Then the heap grew from 10G to 28G with GC not being able to free up
>> enough space:
>> [Eden: 2544.0M(2544.0M)->0.0B(856.0M) Survivors: 2176.0M->592.0M
>> Heap: 12591.0M(28960.0M)->11247.0M(28960.0M)]
>> [Eden: 856.0M(856.0M)->0.0B(1264.0M) Survivors: 592.0M->184.0M Heap:
>> 12103.0M(28960.0M)->11655.0M(28960.0M)]
>> [Eden: 1264.0M(1264.0M)->0.0B(1264.0M) Survivors: 184.0M->184.0M
>>

Any idea for data skew in hash join

2020-07-07 Thread faaron zheng
Hi all, I use Flink 1.10 to run a SQL query and I find that almost 60% of the data
is concentrated in one parallel subtask. Is there any good idea for this scenario?

Re: How to ensure that job is restored from savepoint when using Flink SQL

2020-07-07 Thread Fabian Hueske
Hi Jie Feng,

As you said, Flink translates SQL queries into streaming programs with
auto-generated operator IDs.
In order to start a SQL query from a savepoint, the operator IDs in the
savepoint must match the IDs in the newly translated program.
Right now this can only be guaranteed if you translate the same query with
the same Flink version (optimizer changes might change the structure of the
resulting plan even if the query is the same).
This is of course a significant limitation that the community is aware of
and is planning to improve in the future.

I'd also like to add that it can be very difficult to assess whether it is
meaningful to start a query from a savepoint that was generated with a
different query.
A savepoint holds intermediate data that is needed to compute the result of
a query.
If you update a query it is very well possible that the result computed by
Flink won't be equal to the actual result of the new query.

Best, Fabian
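
As a hedged illustration of the contrast (this snippet is not from the thread, and the names are made up): with the DataStream API you can pin operator IDs yourself via uid(), which is exactly what the automatic SQL translation does not let you control today.

import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.fromElements(1, 2, 3)
  .map(_ * 2).uid("double-values") // explicit, stable operator ID used for savepoint mapping
  .print()
env.execute("uid-example")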

On Mon, Jul 6, 2020 at 10:50 AM shadowell  wrote:

>
> Hello, everyone,
> I have some unclear points when using Flink SQL. I hope to get an
> answer or to be pointed to where I can find one.
> When using the DataStream API, in order to ensure that the job can
> recover the state from savepoint after adjustment, it is necessary to
> specify the uid for the operator. However, when using Flink SQL, the uid of
> the operator is automatically generated. If the SQL logic changes (operator
> order changes), when the task is restored from savepoint, will it cause
> some of the operator states to be unable to be mapped back, resulting in
> state loss?
>
> Thanks~
> Jie Feng
> shadowell
> shadow...@126.com
>
> 
>


Manual allocation of slot usage

2020-07-07 Thread Mu Kong
Hi community,

I'm running an application that consumes data from Kafka, processes it, and then
puts the data into Druid.
I wonder if there is a way I can allocate the data-source consuming
subtasks evenly across the task managers to maximize the usage of the network
of the task managers.

So, for example, I have 15 task managers and I set the parallelism for the
Kafka source to 60, since I have 60 partitions in the Kafka topic.
What I want is for the Flink cluster to put 4 Kafka source subtasks on each task
manager.

Is that possible? I have gone through the documentation; the only thing we found
is
cluster.evenly-spread-out-slots
which does exactly the opposite of what I want. It will put the subtasks of
the same operator onto one task manager as much as possible.

So, is some kind of manual resource allocation available?
Thanks in advance!


Best regards,
Mu


Re: Manual allocation of slot usage

2020-07-07 Thread Yangze Guo
Hi, Mu,

IIUC, cluster.evenly-spread-out-slots should fulfill your demand. Why
do you think it does the opposite of what you want? Do you run your
job in active mode? If so, cluster.evenly-spread-out-slots might not
work very well because there could be insufficient task managers when
requesting slots from the ResourceManager. This has been discussed in
https://issues.apache.org/jira/browse/FLINK-12122 .


Best,
Yangze Guo

On Tue, Jul 7, 2020 at 5:44 PM Mu Kong  wrote:
>
> Hi community,
>
> I'm running an application to consume data from kafka, and process it then 
> put data to the druid.
> I wonder if there is a way where I can allocate the data source consuming 
> process evenly across the task manager to maximize the usage of the network 
> of task managers.
>
> So, for example, I have 15 task managers and I set parallelism for the kafka 
> source as 60, since I have 60 partitions in kafka topic.
> What I want is flink cluster will put 4 kafka source subtasks on each task 
> manager.
>
> Is that possible? I have gone through the document, the only thing we found is
>
> cluster.evenly-spread-out-slots
>
> which does exact the opposite of what I want. It will put the subtasks of the 
> same operator onto one task manager as much as possible.
>
> So, is some kind of manual resource allocation available?
> Thanks in advance!
>
>
> Best regards,
> Mu


Re: SSL for QueryableStateClient

2020-07-07 Thread Chesnay Schepler

Queryable state does not support SSL.

On 06/07/2020 22:42, mail2so...@yahoo.co.in wrote:

Hello,

I am running Flink on Kubernetes, and from outside, the Ingress to a
proxy on Kubernetes is reachable via SSL on port 443 only.


Can you please provide guidance on how to set up SSL for
QueryableStateClient, the client used to query the state.



Please let me know if any other details is needed.

Thanks & Regards
Souma Suvra Ghosh





Re: Stateful Functions: Deploying to existing Cluster

2020-07-07 Thread Jan Brusch

Hi Igal,

just as a feedback for you and anyone else reading this: Worked like a 
charm. Thanks again for your quick help!



Best regards

Jan


On 06.07.20 14:02, Igal Shilman wrote:

Hi Jan,

Stateful Functions will look at the Java classpath for the module.yaml,
so one way would be to include the module.yaml in your
src/main/resources/ directory.


Good luck,
Igal.


On Mon, Jul 6, 2020 at 12:39 PM Jan Brusch > wrote:


Hi,

quick question about Deploying a Flink Stateful Functions
Application to
an existing cluster: The Documentation says to integrate
"statefun-flink-distribution" as additional maven Dependency in
the fat
jar.

(https://ci.apache.org/projects/flink/flink-statefun-docs-release-2.1/deployment-and-operations/packaging.html#flink-jar)

But how and where do I upload my module.yml for external function
definitions in that scenario...?


Best regards

Jan


--
neuland  – Büro für Informatik GmbH
Konsul-Smidt-Str. 8g, 28217 Bremen

Telefon (0421) 380107 57
Fax (0421) 380107 99
https://www.neuland-bfi.de

https://twitter.com/neuland
https://facebook.com/neulandbfi
https://xing.com/company/neulandbfi


Geschäftsführer: Thomas Gebauer, Jan Zander
Registergericht: Amtsgericht Bremen, HRB 23395 HB
USt-ID. DE 246585501



Re: Timeout when using RockDB to handle large state in a stream app

2020-07-07 Thread Yun Tang
Hi Felipe

flink_taskmanager_Status_JVM_Memory_Direct_MemoryUsed cannot tell you how much 
memory is used by RocksDB, as RocksDB allocates memory directly from the OS instead 
of from the JVM.

Moreover, I cannot fully understand why you ask how to increase the memory of 
the JM and TM when using PredefinedOptions.SPINNING_DISK_OPTIMIZED for 
RocksDB.
Did you mean how to increase the total process memory? If so, as Flink uses 
managed memory to control RocksDB [1] by default, you could increase the total 
memory by increasing the managed memory [2][3].

[1] 
https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#state-backend-rocksdb-memory-managed
[2] 
https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#taskmanager-memory-managed-fraction
[3] 
https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#taskmanager-memory-managed-size

Best,
Yun Tang




From: Felipe Gutierrez 
Sent: Monday, July 6, 2020 19:17
To: Yun Tang 
Cc: Ori Popowski ; user 
Subject: Re: Timeout when using RockDB to handle large state in a stream app

Hi all,

I tested the two TPC-H queries, 03 [1] and 10 [2], using the DataStream API on
the cluster with the RocksDB state backend. One thing that I did that
improved things a lot was to replace the List of POJOs with a
List of Tuples. Then I could load a table of 200MB in memory as my
state. However, the original table is 725MB, and it turned out that I
need another configuration. I am not sure what more I can do to reduce
the size of my state. If one of you has an idea I am thankful to
hear it.

Now, speaking about the flink-conf.yaml file and the RocksDB
configuration. When I use these configurations on the flink-conf.yaml
the stream job still runs out of memory.
jobmanager.heap.size: 4g # default: 2048m
heartbeat.timeout: 10
taskmanager.memory.process.size: 2g # default: 1728m

Then I changed to this configuration, which I can set
programmatically. The stream job seems to behave better. It starts to
process something, then the metrics disappear for some time and appear
again. The available and used memory on the TM
(flink_taskmanager_Status_JVM_Memory_Direct_MemoryUsed) is 167MB. And
the available and used memory on the JM
(flink_jobmanager_Status_JVM_Memory_Direct_MemoryUsed) is 610KB. I
guess the PredefinedOptions.SPINNING_DISK_OPTIMIZED configuration is
overwriting the configuration on the flink-conf.yaml file.

RocksDBStateBackend stateBackend = new RocksDBStateBackend(stateDir, true);
stateBackend.setPredefinedOptions(PredefinedOptions.SPINNING_DISK_OPTIMIZED);
env.setStateBackend(stateBackend);

How can I increase the memory of the JM and TM when I am still using
the PredefinedOptions.SPINNING_DISK_OPTIMIZED for RocksDB?

[1] 
https://github.com/felipegutierrez/explore-flink/blob/master/src/main/java/org/sense/flink/examples/stream/tpch/TPCHQuery03.java
[2] 
https://github.com/felipegutierrez/explore-flink/blob/master/src/main/java/org/sense/flink/examples/stream/tpch/TPCHQuery10.java

--
-- Felipe Gutierrez
-- skype: felipe.o.gutierrez
-- https://felipeogutierrez.blogspot.com

On Fri, Jul 3, 2020 at 9:01 AM Felipe Gutierrez
 wrote:
>
> yes. I agree. because RocsDB will spill data to disk if there is not
> enough space in memory.
> Thanks
> --
> -- Felipe Gutierrez
> -- skype: felipe.o.gutierrez
> -- https://felipeogutierrez.blogspot.com
>
> On Fri, Jul 3, 2020 at 8:27 AM Yun Tang  wrote:
> >
> > Hi Felipe,
> >
> > I noticed my previous mail has a typo: RocksDB is executed in the task's main 
> > thread, which does not take the role of responding to heartbeats. Sorry for 
> > the previous typo; the key point I want to clarify is that RocksDB should 
> > have no bearing on the heartbeat problem.
> >
> > Best
> > Yun Tang
> > 
> > From: Felipe Gutierrez 
> > Sent: Tuesday, June 30, 2020 17:46
> > To: Yun Tang 
> > Cc: Ori Popowski ; user 
> > Subject: Re: Timeout when using RockDB to handle large state in a stream app
> >
> > Hi,
> >
> > I reduced the size of the tables that I am loading on a ListState and
> > the query worked. One of them was about 700MB [1] [2].
> >
> > Now I am gonna deploy it on the cluster and check if it works. I will
> > probably need to increase the heartbeat timeout.
> >
> > Thanks,
> > Felipe
> > [1] 
> > https://github.com/apache/flink/blob/master/flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/relational/TPCHQuery3.java
> > [2] 
> > https://github.com/apache/flink/blob/master/flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/relational/TPCHQuery10.java
> > --
> > -- Felipe Gutierrez
> > -- skype: felipe.o.gutierrez
> > -- https://felipeogutierrez.blogspot.com
> >
> > On Tue, Jun 30, 2020 at 10:51 AM Yun Tang  wrote:
> > >
> > > Hi Felipe
> > >
> > > RocksDB is executed in task main thread which does take the role to 
> > > respond to heart beat and RocksDB mainly use native memory which is 
> > > decoupled from JVM heap to not bring any GC pre

Re: Manual allocation of slot usage

2020-07-07 Thread Mu Kong
Hi, Guo,

Thanks for helping out.

My application has a kafka source with 60 subtasks(parallelism), and we
have 15 task managers with 15 slots on each.

*Before I applied cluster.evenly-spread-out-slots,* meaning it was set
to the default false, the operator "kafka source" had 11 subtasks allocated in
one single task manager,
while the remaining 49 subtasks of "kafka source" were distributed across the
remaining 14 task managers.

*After I set cluster.evenly-spread-out-slots to true*, the 60 subtasks of
"kafka source" were allocated to only 4 task managers, and they took 15
slots on each of these 4 TMs.

What I thought is that this config will make the subtasks of one operator
more evenly spread among the task managers, but it seems it made them
allocated in the same task manager as much as possible.

The version I'm deploying is 1.9.0.

Best regards,
Mu

On Tue, Jul 7, 2020 at 7:10 PM Yangze Guo  wrote:

> Hi, Mu,
>
> IIUC, cluster.evenly-spread-out-slots would fulfill your demand. Why
> do you think it does the opposite of what you want. Do you run your
> job in active mode? If so, cluster.evenly-spread-out-slots might not
> work very well because there could be insufficient task managers when
> request slot from ResourceManager. This has been discussed in
> https://issues.apache.org/jira/browse/FLINK-12122 .
>
>
> Best,
> Yangze Guo
>
> On Tue, Jul 7, 2020 at 5:44 PM Mu Kong  wrote:
> >
> > Hi community,
> >
> > I'm running an application to consume data from kafka, and process it
> then put data to the druid.
> > I wonder if there is a way where I can allocate the data source
> consuming process evenly across the task manager to maximize the usage of
> the network of task managers.
> >
> > So, for example, I have 15 task managers and I set parallelism for the
> kafka source as 60, since I have 60 partitions in kafka topic.
> > What I want is flink cluster will put 4 kafka source subtasks on each
> task manager.
> >
> > Is that possible? I have gone through the document, the only thing we
> found is
> >
> > cluster.evenly-spread-out-slots
> >
> > which does exact the opposite of what I want. It will put the subtasks
> of the same operator onto one task manager as much as possible.
> >
> > So, is some kind of manual resource allocation available?
> > Thanks in advance!
> >
> >
> > Best regards,
> > Mu
>


Check pointing for simple pipeline

2020-07-07 Thread Prasanna kumar
Hi,

I have a pipeline: Source -> Map (JSON transform) -> Sink.

Both source and sink are Kafka.

What is the best checkpointing mechanism?

Is setting checkpoints to incremental a good option? What should I be careful
of?

I am running it on AWS EMR.

Will checkpointing slow down processing?

Thanks,
Prasanna.
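
For reference, a hedged Scala sketch of the basics being asked about (interval values are examples only; incremental checkpoints are specifically a RocksDB state backend feature, not a general checkpointing switch):

import org.apache.flink.streaming.api.CheckpointingMode
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
// Checkpoint every 60s with exactly-once semantics for the Kafka source/sink.
env.enableCheckpointing(60000, CheckpointingMode.EXACTLY_ONCE)
// Leave breathing room between checkpoints so they don't dominate processing.
env.getCheckpointConfig.setMinPauseBetweenCheckpoints(30000)
// Incremental checkpoints only apply when RocksDB is the state backend, e.g.:
// env.setStateBackend(new RocksDBStateBackend(checkpointDir, true))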


Heterogeneous or Dynamic Stream Processing

2020-07-07 Thread Rob Shepherd
Hi All,

It'd be great to consider stream processing as a platform for our upcoming
projects. Flink seems to be the closest match.

However we have numerous stream processing workloads and would want to be
able to scale up to 1000's of different streams; each quite similar in
structure/sequence but with the functional logic being very different in
each.

For example, there is always a "validate" stage - but what that means is
dependent on the client/data/context etc. and would typically map to a few
lines of script to perform.

In essence, our sequences can often be deconstructed down to 8-12 python
snippets and the serverless/functional paradigm seems to fit well.

Whilst we can deploy our functions readily to a faas/k8s or something
(which seems to fit the bill with remote functions) I don't yet see how to
quickly draw these together in a dynamic stream.

My initial thoughts would be to create a very general purpose stream job
that then works through the context of mapping functions to flink tasks
based on the client dataset.

E.g. some pseudo code:

ingress()
extract()
getDynamicStreamFunctionDefs()
getFunction1()
runFunction1()
abortOnError()
getFunction2()
runFunction2()
abortOnError()
...
getFunction10()
runFunction10()
sinkData()

Most functions are, however, not simple lexical operations or
extractors/mappers - on the whole they require a few database/API calls to
retrieve things like previous data, configurations etc.

They are not necessarily long-running, but async is certainly a
consideration.

I think every stage will be a UDF (and then a Meta-UDF at that)

As a result I'm not sure if we can get this to fit without a brittle set of
workarounds, ruining any benefit of running through Flink etc...
but it would be great to hear opinions of others who might have tackled this
kind of dynamic tasking.


I'm happy to explain this better if it isn't clear.

With best regards

Rob




Rob Shepherd BEng PhD


Re: Timeout when using RockDB to handle large state in a stream app

2020-07-07 Thread Felipe Gutierrez
I figured out that for my stream job the best option was just to use the
default MemoryStateBackend. I load a table from a 725MB file in a
UDF. I am also not using Flink ListState since I don't have to change
the values of this table; I only do a lookup.
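
A hedged sketch of this pattern (the file path, separator and types are hypothetical): a rich function loads the read-only table once in open() and keeps it on the JVM heap, with no Flink state involved.

import org.apache.flink.api.common.functions.RichMapFunction
import org.apache.flink.configuration.Configuration
import scala.io.Source

class LookupEnricher(tablePath: String) extends RichMapFunction[String, (String, String)] {
  // Loaded once per task in open(); read-only, so no ListState is needed.
  @transient private var lookup: Map[String, String] = _

  override def open(parameters: Configuration): Unit = {
    val source = Source.fromFile(tablePath)
    try {
      lookup = source.getLines()
        .map { line => val cols = line.split('|'); cols(0) -> cols(1) }
        .toMap
    } finally source.close()
  }

  override def map(key: String): (String, String) =
    (key, lookup.getOrElse(key, "unknown"))
}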

The only thing that I needed was more memory for the TM and a slightly larger
timeout. Currently, my configuration is as follows. I am not sure if
there is a better configuration:
heartbeat.timeout: 10
taskmanager.memory.flink.size: 12g
taskmanager.memory.jvm-overhead.max: 4g
taskmanager.memory.jvm-metaspace.size: 2048m # default: 1024m

Another thing that is not working is this parameter; when I set it,
I get a JVM argument error and the TM does not start.

taskmanager.memory.task.heap.size: 2048m # default: 1024m # Flink error

Best,
Felipe
--
-- Felipe Gutierrez
-- skype: felipe.o.gutierrez
-- https://felipeogutierrez.blogspot.com

On Tue, Jul 7, 2020 at 2:17 PM Yun Tang  wrote:
>
> Hi Felipe
>
> flink_taskmanager_Status_JVM_Memory_Direct_MemoryUsed cannot tell you how 
> much memory is used by RocksDB as it mallocate memory from os directly 
> instead from JVM.
>
> Moreover, I cannot totally understand why you ask how to increase the memory 
> of the JM and TM when using the PredefinedOptions.SPINNING_DISK_OPTIMIZED for 
> RocksDB.
> Did you mean how to increase the total process memory? If so, as Flink uses 
> managed memory to control RocksDB [1] by default, you could increase total 
> memory by increasing managed memory [2][3]
>
> [1] 
> https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#state-backend-rocksdb-memory-managed
> [2] 
> https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#taskmanager-memory-managed-fraction
> [3] 
> https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#taskmanager-memory-managed-size
>
> Best,
> Yun Tang
>
>
>
> 
> From: Felipe Gutierrez 
> Sent: Monday, July 6, 2020 19:17
> To: Yun Tang 
> Cc: Ori Popowski ; user 
> Subject: Re: Timeout when using RockDB to handle large state in a stream app
>
> Hi all,
>
> I tested the two TPC-H query 03 [1] and 10 [2] using Datastream API on
> the cluster with RocksDB state backend. One thing that I did that
> improved a lot was to replace the List POJO to a
> List>. Then I could load a table of 200MB in memory as my
> state. However, the original table is 725MB, and turned out that I
> need another configuration. I am not sure what I can do more to reduce
> the size of my state. If one of you have an idea I am thankful to
> hear.
>
> Now, speaking about the flink-conf.yaml file and the RocksDB
> configuration. When I use these configurations on the flink-conf.yaml
> the stream job still runs out of memory.
> jobmanager.heap.size: 4g # default: 2048m
> heartbeat.timeout: 10
> taskmanager.memory.process.size: 2g # default: 1728m
>
> Then I changed for this configuration which I can set
> programmatically. The stream job seems to behave better. It starts to
> process something, then the metrics disappear for some time and appear
> again. The available and used memory on the TM
> (flink_taskmanager_Status_JVM_Memory_Direct_MemoryUsed) is 167MB. And
> the available and used memory on the JM
> (flink_jobmanager_Status_JVM_Memory_Direct_MemoryUsed) is 610KB. I
> guess the PredefinedOptions.SPINNING_DISK_OPTIMIZED configuration is
> overwriting the configuration on the flink-conf.yaml file.
>
> RocksDBStateBackend stateBackend = new RocksDBStateBackend(stateDir, true);
> stateBackend.setPredefinedOptions(PredefinedOptions.SPINNING_DISK_OPTIMIZED);
> env.setStateBackend(stateBackend);
>
> How can I increase the memory of the JM and TM when I am still using
> the PredefinedOptions.SPINNING_DISK_OPTIMIZED for RocksDB?
>
> [1] 
> https://github.com/felipegutierrez/explore-flink/blob/master/src/main/java/org/sense/flink/examples/stream/tpch/TPCHQuery03.java
> [2] 
> https://github.com/felipegutierrez/explore-flink/blob/master/src/main/java/org/sense/flink/examples/stream/tpch/TPCHQuery10.java
>
> --
> -- Felipe Gutierrez
> -- skype: felipe.o.gutierrez
> -- https://felipeogutierrez.blogspot.com
>
> On Fri, Jul 3, 2020 at 9:01 AM Felipe Gutierrez
>  wrote:
> >
> > yes. I agree. because RocsDB will spill data to disk if there is not
> > enough space in memory.
> > Thanks
> > --
> > -- Felipe Gutierrez
> > -- skype: felipe.o.gutierrez
> > -- https://felipeogutierrez.blogspot.com
> >
> > On Fri, Jul 3, 2020 at 8:27 AM Yun Tang  wrote:
> > >
> > > Hi Felipe,
> > >
> > > I noticed my previous mail has a typo: RocksDB is executed in task main 
> > > thread which does not take the role to respond to heart beat. Sorry for 
> > > previous typo, and the key point I want to clarify is that RocksDB should 
> > > not have business for heartbeat problem.
> > >
> > > Best
> > > Yun Tang
> > > 
> > > From: Felipe Gutierrez 
> > > Sent: Tuesday, June 30, 2020 17:46
> > > To: Yun Tang 
> > > Cc: Ori

[ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Zhijiang
The Apache Flink community is very happy to announce the release of Apache 
Flink 1.11.0, which is the latest major release.

Apache Flink® is an open-source stream processing framework for distributed, 
high-performing, always-available, and accurate data streaming applications.

The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of the improvements for 
this new major release:
https://flink.apache.org/news/2020/07/06/release-1.11.0.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364

We would like to thank all contributors of the Apache Flink community who made 
this release possible!

Cheers,
Piotr & Zhijiang

Re: Decompressing Tar Files for Batch Processing

2020-07-07 Thread Chesnay Schepler

I would probably go with a separate process.

Downloading the file could work with Flink if it is already present in 
some supported filesystem. Decompressing the file is supported for 
selected formats (deflate, gzip, bz2, xz), but this seems to be an 
undocumented feature, so I'm not sure how usable it is in reality.
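
For the separate-step route, a hedged sketch (Apache Commons Compress is an assumed dependency, not something prescribed here) of unpacking a .tar.gz of CSVs into a directory before the Flink batch job reads them:

import java.io.{BufferedInputStream, FileInputStream}
import java.nio.file.{Files, Paths}
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream
import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream

// Unpack the archive into targetDir; the Flink batch job then reads the CSVs from there.
def untarGz(archive: String, targetDir: String): Unit = {
  val tar = new TarArchiveInputStream(
    new GzipCompressorInputStream(new BufferedInputStream(new FileInputStream(archive))))
  try {
    var entry = tar.getNextTarEntry
    while (entry != null) {
      if (!entry.isDirectory) {
        val out = Paths.get(targetDir, entry.getName)
        Files.createDirectories(out.getParent)
        Files.copy(tar, out) // copies only the bytes of the current tar entry
      }
      entry = tar.getNextTarEntry
    }
  } finally tar.close()
}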


On 07/07/2020 01:30, Austin Cawley-Edwards wrote:

Hey all,

I need to ingest a tar file containing ~1GB of data in around 10 CSVs. 
The data is fairly connected and needs some cleaning, which I'd like 
to do with the Batch Table API + SQL (but have never used before). 
I've got a small prototype loading the uncompressed CSVs and applying 
the necessary SQL, which works well.


I'm wondering about the task of downloading the tar file and unzipping 
it into the CSVs. Does this sound like something I can/ should do in 
Flink, or should I set up another process to download, unzip, and 
store in a filesystem to then read with the Flink Batch job? My 
research is leading me towards doing it separately but I'd like to do 
it all in the same job if there's a creative way.


Thanks!
Austin





Re: Heterogeneous or Dynamic Stream Processing

2020-07-07 Thread Arvid Heise
Hi Rob,

In the past I used a mixture of configuration and template queries to
achieve a similar goal (I had only up to 150 of these jobs per
application). My approach was not completely dynamic as you have described
but rather to compose a big query from a configuration during the start of
the application and restart to reflect changes.

For the simple extractor/mapper, I'd use Table API and plug in SQL
statements [1] that could be easily given by experienced
end-users/analysts. Abort logic should be added programmatically to each of
the extractor/mapper through Table API (for example, extractor can output
an error column that also gives an explanation and this column is then
checked for non-null). The big advantage of using Table API over a simple
SQL query is that you can add structural variance: your application may use
1 extractor or 100; it's just a matter of a loop.

Note that async IO is currently not available in Table API, but you can
easily switch back and forth between Table API and Datastream. I'd
definitely suggest to use async IO for your described use cases.

So please consider to also use that less dynamic approach; you'd get much
for free: SQL support with proper query validation and meaningful error
messages. And it's also much easier to test/debug.


[1] https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/common.html#sql

On Tue, Jul 7, 2020 at 4:01 PM Rob Shepherd  wrote:

> Hi All,
>
> It'd be great to consider stream processing as a platform for our upcoming
> projects. Flink seems to be the closeted match.
>
> However we have numerous stream processing workloads and would want to be
> able to scale up to 1000's different streams;  each quite similar in
> structure/sequence but with the functional logic being very different in
> each.
>
> For example, there is always a "validate" stage - but what that means is
> dependant on the client/data/context etc and would typically map to a few
> line of script to perform.
>
> In essence, our sequences can often be deconstructed down to 8-12 python
> snippets and the serverless/functional paradigm seems to fit well.
>
> Whilst we can deploy our functions readily to a faas/k8s or something
> (which seems to fit the bill with remote functions) I don't yet see how to
> quickly draw these together in a dynamic stream.
>
> My initial thoughts would be to create a very general purpose stream job
> that then works through the context of mapping functions to flink tasks
> based on the client dataset.
>
> E.g. some pseudo code:
>
> ingress()
> extract()
> getDynamicStreamFunctionDefs()
> getFunction1()
> runFunction1()
> abortOnError()
> getFunction2()
> runFunction2()
> abortOnError()
> ...
> getFunction10()
> runFunction10()
> sinkData()
>
> Most functions are not however simple lexical operations, or
> extractors/mappers - but on the whole require a few database/API calls to
> retrieve things like previous data, configurations etc.
>
> They are not necessarily long running but certainly Async is a
> consideration.
>
> I think every stage will be UDFs (and then Meta-UDFs at that)
>
> As a result I'm not sure if we can get this to fit without a brittle set
> of workarounds, and ruin any benefit of running through flink etc...
> but it would great to hear opinions of others who might have tackled this
> kind of dynamic tasking.
>
>
> I'm happy to explain this better if it isn't clear.
>
> With best regards
>
> Rob
>
>
>
>
> Rob Shepherd BEng PhD
>
>

-- 

Arvid Heise | Senior Java Developer



Follow us @VervericaData

--

Join Flink Forward  - The Apache Flink
Conference

Stream Processing | Event Driven | Real Time

--

Ververica GmbH | Invalidenstrasse 115, 10115 Berlin, Germany

--
Ververica GmbH
Registered at Amtsgericht Charlottenburg: HRB 158244 B
Managing Directors: Timothy Alexander Steinert, Yip Park Tung Jason, Ji
(Toni) Cheng


Re: Decompressing Tar Files for Batch Processing

2020-07-07 Thread Austin Cawley-Edwards
Hey Chesnay,

Thanks for the advice, and easy enough to do it in a separate process.

Best,
Austin

On Tue, Jul 7, 2020 at 10:29 AM Chesnay Schepler  wrote:

> I would probably go with a separate process.
>
> Downloading the file could work with Flink if it is already present in
> some supported filesystem. Decompressing the file is supported for
> selected formats (deflate, gzip, bz2, xz), but this seems to be an
> undocumented feature, so I'm not sure how usable it is in reality.
>
> On 07/07/2020 01:30, Austin Cawley-Edwards wrote:
> > Hey all,
> >
> > I need to ingest a tar file containing ~1GB of data in around 10 CSVs.
> > The data is fairly connected and needs some cleaning, which I'd like
> > to do with the Batch Table API + SQL (but have never used before).
> > I've got a small prototype loading the uncompressed CSVs and applying
> > the necessary SQL, which works well.
> >
> > I'm wondering about the task of downloading the tar file and unzipping
> > it into the CSVs. Does this sound like something I can/ should do in
> > Flink, or should I set up another process to download, unzip, and
> > store in a filesystem to then read with the Flink Batch job? My
> > research is leading me towards doing it separately but I'd like to do
> > it all in the same job if there's a creative way.
> >
> > Thanks!
> > Austin
>
>
>


Re: Decompressing Tar Files for Batch Processing

2020-07-07 Thread Austin Cawley-Edwards
On Tue, Jul 7, 2020 at 10:53 AM Austin Cawley-Edwards <
austin.caw...@gmail.com> wrote:

> Hey Xiaolong,
>
> Thanks for the suggestions. Just to make sure I understand, are you saying
> to run the download and decompression in the Job Manager before executing
> the job?
>
> I think another way to ensure the tar file is not downloaded more than
> once is a source w/ parallelism 1. The issue I can't get past is after
> decompressing the tarball, how would I pass those OutputStreams for each
> entry through Flink?
>
> Best,
> Austin
>
>
>
> On Tue, Jul 7, 2020 at 5:56 AM Xiaolong Wang 
> wrote:
>
>> It seems to me that it cannot be done by Flink, because the code will be
>> run across all task managers. That way, there will be multiple downloads of
>> your tar file, which is unnecessary.
>>
>> However, you can do it in your code before initializing the Flink runtime,
>> and that code will then run only on the client side.
>>
>> On Tue, Jul 7, 2020 at 7:31 AM Austin Cawley-Edwards <
>> austin.caw...@gmail.com> wrote:
>>
>>> Hey all,
>>>
>>> I need to ingest a tar file containing ~1GB of data in around 10 CSVs.
>>> The data is fairly connected and needs some cleaning, which I'd like to do
>>> with the Batch Table API + SQL (but have never used before). I've got a
>>> small prototype loading the uncompressed CSVs and applying the necessary
>>> SQL, which works well.
>>>
>>> I'm wondering about the task of downloading the tar file and unzipping
>>> it into the CSVs. Does this sound like something I can/ should do in Flink,
>>> or should I set up another process to download, unzip, and store in a
>>> filesystem to then read with the Flink Batch job? My research is leading me
>>> towards doing it separately but I'd like to do it all in the same job if
>>> there's a creative way.
>>>
>>> Thanks!
>>> Austin
>>>
>>


Re: Heterogeneous or Dynamic Stream Processing

2020-07-07 Thread Rob Shepherd
Very helpful thank you Arvid.

I've been reading up but I'm not sure I grasp all of that just yet.  Please
may I ask for clarification?

1. Could I summarise correctly that I may build a list of functions from an
SQL call which can then be looped over?
This looping sounds appealing and you are right that "1 or 100" is a big
bonus.

2. "during the start of the application and restart to reflect changes"
"during the start" do you mean when the job first boots, or immediately
upon ingress of the data event from the queue?
"restart" is this an API call to maybe abort an execution of a piece of
data but with more up-to-date context.


Trying to be a fast learner, and very grateful for the pointers.

With thanks and best regards

Rob




Rob Shepherd BEng PhD



On Tue, 7 Jul 2020 at 15:33, Arvid Heise  wrote:

> Hi Rob,
>
> In the past I used a mixture of configuration and template queries to
> achieve a similar goal (I had only up to 150 of these jobs per
> application). My approach was not completely dynamic as you have described
> but rather to compose a big query from a configuration during the start of
> the application and restart to reflect changes.
>
> For the simple extractor/mapper, I'd use Table API and plug in SQL
> statements [1] that could be easily given by experienced
> end-users/analysts. Abort logic should be added programmatically to each of
> the extractor/mapper through Table API (for example, extractor can output
> an error column that also gives an explanation and this column is then
> checked for non-null). The big advantage of using Table API over a simple
> SQL query is that you can add structural variance: your application may use
> 1 extractor or 100; it's just a matter of a loop.
>
> Note that async IO is currently not available in Table API, but you can
> easily switch back and forth between Table API and Datastream. I'd
> definitely suggest to use async IO for your described use cases.
>
> So please consider to also use that less dynamic approach; you'd get much
> for free: SQL support with proper query validation and meaningful error
> messages. And it's also much easier to test/debug.
>
>
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/common.html#sql
>
> On Tue, Jul 7, 2020 at 4:01 PM Rob Shepherd  wrote:
>
>> Hi All,
>>
>> It'd be great to consider stream processing as a platform for our
>> upcoming projects. Flink seems to be the closeted match.
>>
>> However we have numerous stream processing workloads and would want to be
>> able to scale up to 1000's different streams;  each quite similar in
>> structure/sequence but with the functional logic being very different in
>> each.
>>
>> For example, there is always a "validate" stage - but what that means is
>> dependant on the client/data/context etc and would typically map to a few
>> line of script to perform.
>>
>> In essence, our sequences can often be deconstructed down to 8-12 python
>> snippets and the serverless/functional paradigm seems to fit well.
>>
>> Whilst we can deploy our functions readily to a faas/k8s or something
>> (which seems to fit the bill with remote functions) I don't yet see how to
>> quickly draw these together in a dynamic stream.
>>
>> My initial thoughts would be to create a very general purpose stream job
>> that then works through the context of mapping functions to flink tasks
>> based on the client dataset.
>>
>> E.g. some pseudo code:
>>
>> ingress()
>> extract()
>> getDynamicStreamFunctionDefs()
>> getFunction1()
>> runFunction1()
>> abortOnError()
>> getFunction2()
>> runFunction2()
>> abortOnError()
>> ...
>> getFunction10()
>> runFunction10()
>> sinkData()
>>
>> Most functions are not however simple lexical operations, or
>> extractors/mappers - but on the whole require a few database/API calls to
>> retrieve things like previous data, configurations etc.
>>
>> They are not necessarily long running but certainly Async is a
>> consideration.
>>
>> I think every stage will be UDFs (and then Meta-UDFs at that)
>>
>> As a result I'm not sure if we can get this to fit without a brittle set
>> of workarounds, and ruin any benefit of running through flink etc...
>> but it would great to hear opinions of others who might have tackled this
>> kind of dynamic tasking.
>>
>>
>> I'm happy to explain this better if it isn't clear.
>>
>> With best regards
>>
>> Rob
>>
>>
>>
>>
>> Rob Shepherd BEng PhD
>>
>>
>
> --
>
> Arvid Heise | Senior Java Developer
>
> 
>
> Follow us @VervericaData
>
> --
>
> Join Flink Forward  - The Apache Flink
> Conference
>
> Stream Processing | Event Driven | Real Time
>
> --
>
> Ververica GmbH | Invalidenstrasse 115, 10115 Berlin, Germany
>
> --
> Ververica GmbH
> Registered at Amtsgericht Charlottenburg: HRB 158244 B
> Managing Directors: Timothy Alexander Steinert, Yip Park Tung Jason, Ji
> (Toni) Cheng
>


FlinkKinesisProducer blocking ?

2020-07-07 Thread Vijay Balakrishnan
Hi,
current setup.

Kinesis stream 1 -> Kinesis Analytics Flink -> Kinesis stream 2
|
> Firehose Delivery stream

Curl error:
org.apache.flink.kinesis.shaded.com.amazonaws.services.kinesis.producer.LogInputStreamReader
 - [2020-07-02 15:22:32.203053] [0x07f4][0x7ffbced15700] [error]
[AWS Log: ERROR](CurlHttpClient)Curl returned error code 28

But I am still seeing tons of the curl 28 errors. I use a parallelism of 80
for the sink to the Kinesis Data Stream (KDS), which seems to point to KDS being
pounded with too many requests - 80 (parallelism) * 10 (thread pool size)
= 800 requests. Is my understanding correct? So, maybe reduce the
parallelism of 80??
*I still don't understand why the logs are stuck with just
FlinkKinesisProducer for around 4s (blocking calls???)* with the rest of the
Flink Analytics application not producing any logs while this happens.
*I noticed that the FlinkKinesisProducer took about 3.785s, 3.984s,
4.223s in between other application logs in Kibana when the Kinesis
GetIterator Age peaked*. It seemed like FlinkKinesisProducer was blocking
for that long while the Flink app was not able to generate any other logs.

Looked at this:
https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kinesis.html#backpressure

Could use this:
producerConfig.put("RequestTimeout", "1");//from 6000

But that doesn't really solve the problem when trying to maintain a real-time
processing system.

TIA
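
One possible direction, as a hedged sketch based on the backpressure section of the Kinesis connector docs linked above (stream name, region and the limit value are placeholders): capping the producer's internal queue makes the sink block and backpressure the job instead of letting unsent records pile up and time out.

import java.util.Properties
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer

val producerConfig = new Properties()
producerConfig.put("aws.region", "us-east-1") // placeholder region

val kinesis = new FlinkKinesisProducer[String](new SimpleStringSchema(), producerConfig)
kinesis.setDefaultStream("output-stream") // placeholder stream name
kinesis.setDefaultPartition("0")
// Bound the per-subtask number of in-flight records so the sink backpressures
// instead of accumulating a large KPL queue that later blocks for seconds.
kinesis.setQueueLimit(1000)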


Re: Heterogeneous or Dynamic Stream Processing

2020-07-07 Thread Arvid Heise
Hi Rob,

1. When you start a flink application, you actually just execute a Java
main called the driver. This driver submits a job graph to the job manager,
which executes the job. Since the driver is an ordinary Java program that
uses the Flink API, you can compose the job graph in any way you want. Have
a look at one example to see what I mean [1]. It's not hard to imagine that
you can compose a query such as

List<String> extractorQueries = new ArrayList<>();
Table table = tableEnvironment.from("testCatalog.`default`.testTable");
Table errors = tableEnvironment.fromValues();
for (int index = 0; index < extractorQueries.size(); index++) {
   String extractorQuery = extractorQueries.get(index);
   table = table.addColumns($(extractorQuery).as("extractedValue" +
index, "error"));
   errors = errors.unionAll(table.filter($("error").isNotNull()));
   table = table.filter($("error").isNull()).dropColumns($("error"));
}
// write table and errors

This query first loads the data from a testTable and then successively
applies sql expressions that calculate one value + one error column each.
The value is stored in extractedValue0...99 (assuming 100 extractor
queries). All values that cause errors, will have a value in the error
column set. These are collected in the table "errors" for side output (very
useful for debugging and improving the extractor queries). All good records
(error IS NULL) are retained for further processing and the error column
gets dropped.

Btw there is also a Python entry point available, which offers you more or
less the same. I just haven't tried it yet. [2]

Lastly, currently all extractors are executed in succession. Of course, it
is also possible to run them independently if you have different source
streams. You can then later join / union them.

2. The downside of this driver approach is that changes in the
configuration are not directly reflected. However, upon restart Flink will
adapt the changes and recover from the last checkpoint [3] (= snapshot of
the current processing state, which can be done every second in your case
as the state is rather small). So now you just need to find a way to force
a restart.

One approach is to kill it manually and start again, but that's not scaling
well. However, Flink's fault tolerance feature can be somewhat exploited:
You can have one part of your program fail on config change, which will
restart the whole application automatically if configured correctly and
thus using the latest configuration.

[1]
https://github.com/apache/flink/blob/master/flink-examples/flink-examples-table/src/main/java/org/apache/flink/table/examples/java/basics/StreamSQLExample.java#L77-L100
[2]
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/python/python_udfs.html
[3]
https://ci.apache.org/projects/flink/flink-docs-release-1.11/concepts/stateful-stream-processing.html#checkpointing

On Tue, Jul 7, 2020 at 6:12 PM Rob Shepherd  wrote:

> Very helpful thank you Arvid.
>
> I've been reading up but I'm not sure I grasp all of that just yet.
> Please may I ask for clarification?
>
> 1. Could I summarise correctly that I may build a list of functions from
> an SQL call which can then be looped over?
> This looping sounds appealing and you are right that "1 or 100" is a big
> bonus.
>
> 2. "during the start of the application and restart to reflect changes"
> "during the start" do you mean when the job first boots, or immediately
> upon ingress of the data event from the queue?
> "restart" is this an API call to maybe abort an execution of a piece of
> data but with more up-to-date context.
>
>
> Trying to be a fast learner, and very grateful for the pointers.
>
> With thanks and best regards
>
> Rob
>
>
>
>
> Rob Shepherd BEng PhD
>
>
>
> On Tue, 7 Jul 2020 at 15:33, Arvid Heise  wrote:
>
>> Hi Rob,
>>
>> In the past I used a mixture of configuration and template queries to
>> achieve a similar goal (I had only up to 150 of these jobs per
>> application). My approach was not completely dynamic as you have described
>> but rather to compose a big query from a configuration during the start of
>> the application and restart to reflect changes.
>>
>> For the simple extractor/mapper, I'd use Table API and plug in SQL
>> statements [1] that could be easily given by experienced
>> end-users/analysts. Abort logic should be added programmatically to each of
>> the extractor/mapper through Table API (for example, extractor can output
>> an error column that also gives an explanation and this column is then
>> checked for non-null). The big advantage of using Table API over a simple
>> SQL query is that you can add structural variance: your application may use
>> 1 extractor or 100; it's just a matter of a loop.
>>
>> Note that async IO is currently not available in Table API, but you can
>> easily switch back and forth between Table API and Datastream. I'd
>> definitely suggest to use async IO for your described use cases.
>>
>> So please consider to also use that less dy

TaskManager docker image for Beam WordCount failing with ClassNotFound Exception

2020-07-07 Thread Avijit Saha
Hi,
I am trying to run the Beam WordCount example on the Flink runner using Docker
containers for 'Jobcluster' and 'TaskManager'.

When I put the Beam WordCount custom jar in the /opt/flink/usrlib/ dir,
the 'taskmanager' docker image fails at runtime with a ClassNotFound
Exception for the following:
Caused by: java.lang.ClassNotFoundException:
org.apache.beam.runners.core.metrics.MetricUpdates$MetricUpdate:
taskmanager_1  | Caused by: java.lang.ClassNotFoundException:
org.apache.beam.runners.core.metrics.MetricUpdates$MetricUpdate
taskmanager_1  |at
java.net.URLClassLoader.findClass(URLClassLoader.java:382)
taskmanager_1  |at
java.lang.ClassLoader.loadClass(ClassLoader.java:424)
taskmanager_1  |at
org.apache.flink.util.ChildFirstClassLoader.loadClass(ChildFirstClassLoader.java:69)
taskmanager_1  |at
java.lang.ClassLoader.loadClass(ClassLoader.java:357)
taskmanager_1  |... 68 more

If I  instead put the Beam WordCount jar in the "/opt/flink-1.10.1/lib" dir
as follows,
$ ls
flink-dist_2.12-1.10.1.jar
flink-table-blink_2.12-1.10.1.jar
flink-table_2.12-1.10.1.jar
log4j-1.2.17.jar
 slf4j-log4j12-1.7.15.jar
 word-count-beam-bundled-0.1.jar

It runs without any Exception!

Is this the expected behavior? Do we need to always bundle the job-jar in
the same lib location as other flink jars?


Re: Heterogeneous or Dynamic Stream Processing

2020-07-07 Thread Rob Shepherd
Thank you for the excellent clarifications.
I couldn't quite figure out how to map the above to my domain.

Nevertheless I have a working demo that performs the following pseudo code:

Let's say that each "channel" has slightly different stream requirements
and we can look up the list of operations needed to be performed using a
channel key.
(an operation is our term for some element of processing, a FaaS call or
local routine maybe)

1. extract channel key from incoming message
2. lookup channel info and enrich the stream object with channel info and a
list of operations
3..n. Using the iterative stream API, loop over each operation in the
list from (2).
4. sink

https://gist.github.com/robshep/bf38b7753062e9d49d365e505e86385e#file-dynamicstramjob-java-L52

I've some work to do to understand storing and retrieving state, as my demo
just stores the loop state in my main stream object - I don't know whether
this is wrong or simply bad practice.

I'd be really grateful if anyone can cast their eye on this little demo and
see if there are any gotchas or pitfalls I'm likely to succumb to with this
approach.

With thanks for your help getting started



Rob Shepherd BEng PhD



On Tue, 7 Jul 2020 at 19:26, Arvid Heise  wrote:

> Hi Rob,
>
> 1. When you start a flink application, you actually just execute a Java
> main called the driver. This driver submits a job graph to the job manager,
> which executes the job. Since the driver is an ordinary Java program that
> uses the Flink API, you can compose the job graph in any way you want. Have
> a look at one example to see what I mean [1]. It's not hard to imagine that
> you can compose a query such as
>
> List<String> extractorQueries = new ArrayList<>();
> Table table = tableEnvironment.from("testCatalog.`default`.testTable");
> Table errors = tableEnvironment.fromValues();
> for (int index = 0; index < extractorQueries.size(); index++) {
>String extractorQuery = extractorQueries.get(index);
>table = table.addColumns($(extractorQuery).as("extractedValue" + index, 
> "error"));
>errors = errors.unionAll(table.filter($("error").isNotNull()));
>table = table.filter($("error").isNull()).dropColumns($("error"));
> }
> // write table and errors
>
> This query first loads the data from a testTable and then successively
> applies sql expressions that calculate one value + one error column each.
> The value is stored in extractedValue0...99 (assuming 100 extractor
> queries). All values that cause errors, will have a value in the error
> column set. These are collected in the table "errors" for side output (very
> useful for debugging and improving the extractor queries). All good records
> (error IS NULL) are retained for further processing and the error column
> gets dropped.
>
> Btw there is also a Python entry point available, which offers you more or
> less the same. I just haven't tried it yet. [2]
>
> Lastly, currently all extractors are executed in succession. Of course, it
> is also possible to run them independently if you have different source
> streams. You can then later join / union them.
>
> 2. The downside of this driver approach is that changes in the
> configuration are not directly reflected. However, upon restart Flink will
> adapt the changes and recover from the last checkpoint [3] (= snapshot of
> the current processing state, which can be done every second in your case
> as the state is rather small). So now you just need to find a way to force
> a restart.
>
> One approach is to kill it manually and start again, but that's not
> scaling well. However, Flink's fault tolerance feature can be somewhat
> exploited: You can have one part of your program fail on config change,
> which will restart the whole application automatically if configured
> correctly and thus using the latest configuration.
>
> [1]
> https://github.com/apache/flink/blob/master/flink-examples/flink-examples-table/src/main/java/org/apache/flink/table/examples/java/basics/StreamSQLExample.java#L77-L100
> [2]
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/python/python_udfs.html
> [3]
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/concepts/stateful-stream-processing.html#checkpointing
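
A minimal sketch of the "fail on config change" idea from point 2 above, assuming the
configuration is re-read in open() on every (re)start; ConfigService and its
currentVersion() method are hypothetical placeholders, and stream stands for any
DataStream<Event> in the job:

DataStream<Event> guarded = stream.map(new RichMapFunction<Event, Event>() {
    private long versionAtStart;

    @Override
    public void open(Configuration parameters) {
        versionAtStart = ConfigService.currentVersion();   // read again on every (re)start
    }

    @Override
    public Event map(Event value) {
        if (ConfigService.currentVersion() != versionAtStart) {
            // with a restart strategy configured, this failure restarts the job from
            // the last checkpoint, and open() then picks up the new configuration
            throw new IllegalStateException("configuration changed, forcing a restart");
        }
        return value;
    }
});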
>
> On Tue, Jul 7, 2020 at 6:12 PM Rob Shepherd  wrote:
>
>> Very helpful thank you Arvid.
>>
>> I've been reading up but I'm not sure I grasp all of that just yet.
>> Please may I ask for clarification?
>>
>> 1. Could I summarise correctly that I may build a list of functions from
>> an SQL call which can then be looped over?
>> This looping sounds appealing and you are right that "1 or 100" is a big
>> bonus.
>>
>> 2. "during the start of the application and restart to reflect changes"
>> "during the start" do you mean when the job first boots, or immediately
>> upon ingress of the data event from the queue?
>> "restart" is this an API call to maybe abort an execution of a piece of
>> data but with more up-to-date context.
>>
>>
>> Trying to be a fast learner, 

Re: Manual allocation of slot usage

2020-07-07 Thread Yangze Guo
Hi, Mu,

AFAIK, this feature was added in 1.9.2. Since you are on 1.9.0, would you
like to upgrade your Flink distribution?

Best,
Yangze Guo

On Tue, Jul 7, 2020 at 8:33 PM Mu Kong  wrote:
>
> Hi, Guo,
>
> Thanks for helping out.
>
> My application has a kafka source with 60 subtasks(parallelism), and we have 
> 15 task managers with 15 slots on each.
>
> Before I applied the cluster.evenly-spread-out-slots, meaning it is set to 
> default false, the operator 'kafka source" has 11 subtasks allocated in one 
> single task manager,
> while the remaining 49 subtasks of "kafka source" distributed to the 
> remaining 14 task managers.
>
> After I set cluster.evenly-spread-out-slots to true, the 60 subtasks of 
> "kafka source" were allocated to only 4 task managers, and they took 15 slots 
> on each of these 4 TMs.
>
> What I thought is that this config will make the subtasks of one operator 
> more evenly spread among the task managers, but it seems it made them 
> allocated in the same task manager as much as possible.
>
> The version I'm deploying is 1.9.0.
>
> Best regards,
> Mu
>
> On Tue, Jul 7, 2020 at 7:10 PM Yangze Guo  wrote:
>>
>> Hi, Mu,
>>
>> IIUC, cluster.evenly-spread-out-slots would fulfill your demand. Why
>> do you think it does the opposite of what you want. Do you run your
>> job in active mode? If so, cluster.evenly-spread-out-slots might not
>> work very well because there could be insufficient task managers when
>> request slot from ResourceManager. This has been discussed in
>> https://issues.apache.org/jira/browse/FLINK-12122 .
>>
>>
>> Best,
>> Yangze Guo
>>
>> On Tue, Jul 7, 2020 at 5:44 PM Mu Kong  wrote:
>> >
>> > Hi community,
>> >
>> > I'm running an application to consume data from kafka, and process it then 
>> > put data to the druid.
>> > I wonder if there is a way where I can allocate the data source consuming 
>> > process evenly across the task manager to maximize the usage of the 
>> > network of task managers.
>> >
>> > So, for example, I have 15 task managers and I set parallelism for the 
>> > kafka source as 60, since I have 60 partitions in kafka topic.
>> > What I want is flink cluster will put 4 kafka source subtasks on each task 
>> > manager.
>> >
>> > Is that possible? I have gone through the document, the only thing we 
>> > found is
>> >
>> > cluster.evenly-spread-out-slots
>> >
>> > which does exact the opposite of what I want. It will put the subtasks of 
>> > the same operator onto one task manager as much as possible.
>> >
>> > So, is some kind of manual resource allocation available?
>> > Thanks in advance!
>> >
>> >
>> > Best regards,
>> > Mu


Re: Manual allocation of slot usage

2020-07-07 Thread Xintong Song
Hi Mu,
Regarding your questions.

   - The feature `spread out tasks evenly across task managers` is
   introduced in Flink 1.10.0, and backported to Flink 1.9.2, per the JIRA
   ticket [1]. That means if you configure this option in Flink 1.9.0, it
   should not take any effect.
   - Please be aware that this feature ATM only works for standalone
   deployment (including standalone Kubernetes deployment). For the native
   Kubernetes, Yarn and Mesos deployment, it is a known issue that this
   feature does not work as expected.
   - Regarding the scheduling behavior changes, we would need more
   information to explain this. To provide the information needed, the easiest
   way is probably to provide the jobmanager log files, if you're okay with
   sharing them. If you cannot share the logs, then it would be better to
   answer the following questions
  - What Flink deployment are you using? (Standalone/K8s/Yarn/Mesos)
  - How many times have you tried with and without
  `cluster.evenly-spread-out-slots`? In other words, can the described
  behaviors before and after setting `cluster.evenly-spread-out-slots` be
  stably reproduced?
  - How many TMs do you have? And how many slots does each TM have?


Thank you~

Xintong Song


[1] https://issues.apache.org/jira/browse/FLINK-12122
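
For reference, in a standalone setup this option is set in flink-conf.yaml, e.g.
`cluster.evenly-spread-out-slots: true` (and, as noted above, it requires at least
Flink 1.9.2 to take effect).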

On Tue, Jul 7, 2020 at 8:33 PM Mu Kong  wrote:

> Hi, Guo,
>
> Thanks for helping out.
>
> My application has a kafka source with 60 subtasks(parallelism), and we
> have 15 task managers with 15 slots on each.
>
> *Before I applied the cluster.evenly-spread-out-slots,* meaning it is set
> to default false, the operator 'kafka source" has 11 subtasks allocated in
> one single task manager,
> while the remaining 49 subtasks of "kafka source" distributed to the
> remaining 14 task managers.
>
> *After I set cluster.evenly-spread-out-slots to true*, the 60 subtasks of
> "kafka source" were allocated to only 4 task managers, and they took 15
> slots on each of these 4 TMs.
>
> What I thought is that this config will make the subtasks of one operator
> more evenly spread among the task managers, but it seems it made them
> allocated in the same task manager as much as possible.
>
> The version I'm deploying is 1.9.0.
>
> Best regards,
> Mu
>
> On Tue, Jul 7, 2020 at 7:10 PM Yangze Guo  wrote:
>
>> Hi, Mu,
>>
>> IIUC, cluster.evenly-spread-out-slots would fulfill your demand. Why
>> do you think it does the opposite of what you want. Do you run your
>> job in active mode? If so, cluster.evenly-spread-out-slots might not
>> work very well because there could be insufficient task managers when
>> request slot from ResourceManager. This has been discussed in
>> https://issues.apache.org/jira/browse/FLINK-12122 .
>>
>>
>> Best,
>> Yangze Guo
>>
>> On Tue, Jul 7, 2020 at 5:44 PM Mu Kong  wrote:
>> >
>> > Hi community,
>> >
>> > I'm running an application to consume data from kafka, and process it
>> then put data to the druid.
>> > I wonder if there is a way where I can allocate the data source
>> consuming process evenly across the task manager to maximize the usage of
>> the network of task managers.
>> >
>> > So, for example, I have 15 task managers and I set parallelism for the
>> kafka source as 60, since I have 60 partitions in kafka topic.
>> > What I want is flink cluster will put 4 kafka source subtasks on each
>> task manager.
>> >
>> > Is that possible? I have gone through the document, the only thing we
>> found is
>> >
>> > cluster.evenly-spread-out-slots
>> >
>> > which does exact the opposite of what I want. It will put the subtasks
>> of the same operator onto one task manager as much as possible.
>> >
>> > So, is some kind of manual resource allocation available?
>> > Thanks in advance!
>> >
>> >
>> > Best regards,
>> > Mu
>>
>


Re: [ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Paul Lam
Finally! Thanks for Piotr and Zhijiang being the release managers, and everyone 
that contributed to the release!

Best,
Paul Lam

> On Jul 7, 2020, at 22:06, Zhijiang  wrote:
> 
> The Apache Flink community is very happy to announce the release of Apache 
> Flink 1.11.0, which is the latest major release.
> 
> Apache Flink® is an open-source stream processing framework for distributed, 
> high-performing, always-available, and accurate data streaming applications.
> 
> The release is available for download at:
> https://flink.apache.org/downloads.html 
> 
> 
> Please check out the release blog post for an overview of the improvements 
> for this new major release:
> https://flink.apache.org/news/2020/07/06/release-1.11.0.html 
> 
> 
> The full release notes are available in Jira:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364
>  
> 
> 
> We would like to thank all contributors of the Apache Flink community who 
> made this release possible!
> 
> Cheers,
> Piotr & Zhijiang



Re: [ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Dian Fu
Thanks Piotr and Zhijiang for the great work and everyone who contributed to 
this release!

Regards,
Dian

> On Jul 8, 2020, at 10:12 AM, Paul Lam  wrote:
> 
> Finally! Thanks for Piotr and Zhijiang being the release managers, and 
> everyone that contributed to the release!
> 
> Best,
> Paul Lam
> 
>> On Jul 7, 2020, at 22:06, Zhijiang  wrote:
>> 
>> The Apache Flink community is very happy to announce the release of Apache 
>> Flink 1.11.0, which is the latest major release.
>> 
>> Apache Flink® is an open-source stream processing framework for distributed, 
>> high-performing, always-available, and accurate data streaming applications.
>> 
>> The release is available for download at:
>> https://flink.apache.org/downloads.html 
>> 
>> 
>> Please check out the release blog post for an overview of the improvements 
>> for this new major release:
>> https://flink.apache.org/news/2020/07/06/release-1.11.0.html 
>> 
>> 
>> The full release notes are available in Jira:
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364
>>  
>> 
>> 
>> We would like to thank all contributors of the Apache Flink community who 
>> made this release possible!
>> 
>> Cheers,
>> Piotr & Zhijiang
> 



Re: [ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Jark Wu
Congratulations!
Thanks Zhijiang and Piotr for the great work as release manager, and thanks
everyone who makes the release possible!

Best,
Jark

On Wed, 8 Jul 2020 at 10:12, Paul Lam  wrote:

> Finally! Thanks for Piotr and Zhijiang being the release managers, and
> everyone that contributed to the release!
>
> Best,
> Paul Lam
>
> On Jul 7, 2020, at 22:06, Zhijiang  wrote:
>
> The Apache Flink community is very happy to announce the release of
> Apache Flink 1.11.0, which is the latest major release.
>
> Apache Flink® is an open-source stream processing framework for distributed,
> high-performing, always-available, and accurate data streaming
> applications.
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Please check out the release blog post for an overview of the improvements for
> this new major release:
> https://flink.apache.org/news/2020/07/06/release-1.11.0.html
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364
>
> We would like to thank all contributors of the Apache Flink community who made
> this release possible!
>
> Cheers,
> Piotr & Zhijiang
>
>
>


Re: [Third-party Tool] Flink memory calculator

2020-07-07 Thread Yangze Guo
Hi, there,

Now that Flink 1.11.0 has been released, we provide a new calculator [1] for this
version. Feel free to try it; any feedback or suggestions are
welcome!

[1] 
https://github.com/KarmaGYZ/flink-memory-calculator/blob/master/calculator-1.11.sh

Best,
Yangze Guo
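
A hypothetical invocation, following the usage pattern described further down this
thread (dynamic options are appended after the script), might look like
`bash calculator-1.11.sh -Dtaskmanager.memory.process.size=4g`; the exact syntax may
differ, see [1].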

On Wed, Apr 1, 2020 at 9:45 PM Yangze Guo  wrote:
>
> @Marta
> Thanks for the tip! I'll do that.
>
> Best,
> Yangze Guo
>
> On Wed, Apr 1, 2020 at 8:05 PM Marta Paes Moreira  wrote:
> >
> > Hey, Yangze.
> >
> > I'd like to suggest that you submit this tool to Flink Community Pages [1]. 
> > That way it can get more exposure and it'll be easier for users to find it.
> >
> > Thanks for your contribution!
> >
> > [1] https://flink-packages.org/
> >
> > On Tue, Mar 31, 2020 at 9:09 AM Yangze Guo  wrote:
> >>
> >> Hi, there.
> >>
> >> In the latest version, the calculator supports dynamic options. You
> >> could append all your dynamic options to the end of "bin/calculator.sh
> >> [-h]".
> >> Since "-tm" will be deprecated eventually, please replace it with
> >> "-Dtaskmanager.memory.process.size=".
> >>
> >> Best,
> >> Yangze Guo
> >>
> >> On Mon, Mar 30, 2020 at 12:57 PM Xintong Song  
> >> wrote:
> >> >
> >> > Hi Jeff,
> >> >
> >> > I think the purpose of this tool it to allow users play with the memory 
> >> > configurations without needing to actually deploy the Flink cluster or 
> >> > even have a job. For sanity checks, we currently have them in the 
> >> > start-up scripts (for standalone clusters) and resource managers (on 
> >> > K8s/Yarn/Mesos).
> >> >
> >> > I think it makes sense do the checks earlier, i.e. on the client side. 
> >> > But I'm not sure if JobListener is the right place. IIUC, JobListener is 
> >> > invoked before submitting a specific job, while the mentioned checks 
> >> > validate Flink's cluster level configurations. It might be okay for a 
> >> > job cluster, but does not cover the scenarios of session clusters.
> >> >
> >> > Thank you~
> >> >
> >> > Xintong Song
> >> >
> >> >
> >> >
> >> > On Mon, Mar 30, 2020 at 12:03 PM Yangze Guo  wrote:
> >> >>
> >> >> Thanks for your feedbacks, @Xintong and @Jeff.
> >> >>
> >> >> @Jeff
> >> >> I think it would always be good to leverage exist logic in Flink, such
> >> >> as JobListener. However, this calculator does not only target to check
> >> >> the conflict, it also targets to provide the calculating result to
> >> >> user before the job is actually deployed in case there is any
> >> >> unexpected configuration. It's a good point that we need to parse the
> >> >> dynamic configs. I prefer to parse the dynamic configs and cli
> >> >> commands in bash instead of adding hook in JobListener.
> >> >>
> >> >> Best,
> >> >> Yangze Guo
> >> >>
> >> >> On Mon, Mar 30, 2020 at 10:32 AM Jeff Zhang  wrote:
> >> >> >
> >> >> > Hi Yangze,
> >> >> >
> >> >> > Does this tool just parse the configuration in flink-conf.yaml ?  
> >> >> > Maybe it could be done in JobListener [1] (we should enhance it via 
> >> >> > adding hook before job submission), so that it could all the cases 
> >> >> > (e.g. parameters coming from command line)
> >> >> >
> >> >> > [1] 
> >> >> > https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/core/execution/JobListener.java#L35
> >> >> >
> >> >> >
> >> >> > On Mon, Mar 30, 2020 at 9:40 AM, Yangze Guo  wrote:
> >> >> >>
> >> >> >> Hi, Yun,
> >> >> >>
> >> >> >> I'm sorry that it currently could not handle it. But I think it is a
> >> >> >> really good idea and that feature would be added to the next version.
> >> >> >>
> >> >> >> Best,
> >> >> >> Yangze Guo
> >> >> >>
> >> >> >> On Mon, Mar 30, 2020 at 12:21 AM Yun Tang  wrote:
> >> >> >> >
> >> >> >> > Very interesting and convenient tool, just a quick question: could 
> >> >> >> > this tool also handle deployment cluster commands like "-tm" mixed 
> >> >> >> > with configuration in `flink-conf.yaml` ?
> >> >> >> >
> >> >> >> > Best
> >> >> >> > Yun Tang
> >> >> >> > 
> >> >> >> > From: Yangze Guo 
> >> >> >> > Sent: Friday, March 27, 2020 18:00
> >> >> >> > To: user ; user...@flink.apache.org 
> >> >> >> > 
> >> >> >> > Subject: [Third-party Tool] Flink memory calculator
> >> >> >> >
> >> >> >> > Hi, there.
> >> >> >> >
> >> >> >> > In release-1.10, the memory setup of task managers has changed a 
> >> >> >> > lot.
> >> >> >> > I would like to provide here a third-party tool to simulate and get
> >> >> >> > the calculation result of Flink's memory configuration.
> >> >> >> >
> >> >> >> >  Although there is already a detailed setup guide[1] and migration
> >> >> >> > guide[2] officially, the calculator could further allow users to:
> >> >> >> > - Verify if there is any conflict in their configuration. The
> >> >> >> > calculator is more lightweight than starting a Flink cluster,
> >> >> >> > especially when running Flink on Yarn/Kubernetes. User could make 
> >> >> >> > sure
> >> >> >> > their configuration is correct locally before deploying it to 
> >> >> >> > external
> 

Re: [ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Yangze Guo
Thanks, Zhijiang and Piotr. Congrats to everyone involved!

Best,
Yangze Guo

On Wed, Jul 8, 2020 at 10:19 AM Jark Wu  wrote:
>
> Congratulations!
> Thanks Zhijiang and Piotr for the great work as release manager, and thanks
> everyone who makes the release possible!
>
> Best,
> Jark
>
> On Wed, 8 Jul 2020 at 10:12, Paul Lam  wrote:
>
> > Finally! Thanks for Piotr and Zhijiang being the release managers, and
> > everyone that contributed to the release!
> >
> > Best,
> > Paul Lam
> >
> > On Jul 7, 2020, at 22:06, Zhijiang  wrote:
> >
> > The Apache Flink community is very happy to announce the release of
> > Apache Flink 1.11.0, which is the latest major release.
> >
> > Apache Flink® is an open-source stream processing framework for distributed,
> > high-performing, always-available, and accurate data streaming
> > applications.
> >
> > The release is available for download at:
> > https://flink.apache.org/downloads.html
> >
> > Please check out the release blog post for an overview of the improvements 
> > for
> > this new major release:
> > https://flink.apache.org/news/2020/07/06/release-1.11.0.html
> >
> > The full release notes are available in Jira:
> >
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364
> >
> > We would like to thank all contributors of the Apache Flink community who 
> > made
> > this release possible!
> >
> > Cheers,
> > Piotr & Zhijiang
> >
> >
> >


Re: How to ensure that job is restored from savepoint when using Flink SQL

2020-07-07 Thread shadowell


Hi Fabian,


Thanks for your information!
Actually, I am not clear about the mechanism of auto-generated IDs in Flink SQL,
nor about how the operator state is mapped back from a savepoint.
I hope to get some more detail with the example below.


I have two SQL statements as samples:
old SQL:  select id, name, sum(salary) from user_info where id = '001' group
by TUMBLE(rowtime, INTERVAL '1' DAY), id, name;
new SQL:  select id, name, sum(salary) from user_info where id = '001' and
age >= '28' group by TUMBLE(rowtime, INTERVAL '1' DAY), id, name;
I just added an age restriction in the new SQL. Now I want to switch the job from
the old one to the new one via a savepoint. Flink will generate operator IDs
for the operators in the new SQL.
In this case, just from a technical point of view, can the operator IDs in the
savepoint of the old SQL job match the operator IDs in the new SQL job?
My understanding is that Flink will reorder the operators and generate new IDs
for them, and the new IDs may not match the old IDs.
This would cause some of the state to fail to be mapped back from the old job's
savepoint, which naturally leads to inaccurate calculation results.
I wonder if my understanding is correct.
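
For contrast, a minimal sketch of how operator IDs are pinned explicitly in the
DataStream API (something Flink SQL does not expose); the names below are
illustrative only:

DataStream<UserInfo> users = env
        .addSource(new FlinkKafkaConsumer<>("user_info", schema, props))
        .uid("user-info-source");            // stable ID, independent of plan changes

DataStream<UserInfo> filtered = users
        .filter(u -> "001".equals(u.getId()))
        .uid("filter-by-id");                // state of this operator maps back by this ID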


Thanks~ 
Jie


shadowell
shadow...@126.com
On 7/7/2020 17:23,Fabian Hueske wrote:
Hi Jie Feng,


As you said, Flink translates SQL queries into streaming programs with 
auto-generated operator IDs.
In order to start a SQL query from a savepoint, the operator IDs in the 
savepoint must match the IDs in the newly translated program.
Right now this can only be guaranteed if you translate the same query with the 
same Flink version (optimizer changes might change the structure of the 
resulting plan even if the query is the same).
This is of course a significant limitation, one that the community is aware of and is
planning to improve in the future.


I'd also like to add that it can be very difficult to assess whether it is 
meaningful to start a query from a savepoint that was generated with a 
different query.
A savepoint holds intermediate data that is needed to compute the result of a 
query.
If you update a query it is very well possible that the result computed by 
Flink won't be equal to the actual result of the new query.



Best, Fabian



On Mon, Jul 6, 2020 at 10:50 AM shadowell  wrote:



Hello, everyone,
I have some unclear points when using Flink SQL. I hope to get an
answer, or to be pointed to where I can find the answer.
When using the DataStream API, in order to ensure that the job can
recover its state from a savepoint after an adjustment, it is necessary to specify
a uid for each operator. However, when using Flink SQL, the uid of each
operator is automatically generated. If the SQL logic changes (operator order
changes), when the task is restored from the savepoint, will some of the
operator states be unable to be mapped back, resulting in state loss?


Thanks~
Jie Feng 
shadowell
shadow...@126.com

Re:[ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread chaojianok
Congratulations! 

Very happy to make some contributions to Flink!

At 2020-07-07 22:06:05, "Zhijiang"  wrote:

The Apache Flink community is very happy to announce the release of Apache 
Flink 1.11.0, which is the latest major release.


Apache Flink® is an open-source stream processing framework for distributed, 
high-performing, always-available, and accurate data streaming applications.


The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of the improvements for 
this new major release:
https://flink.apache.org/news/2020/07/06/release-1.11.0.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364

We would like to thank all contributors of the Apache Flink community who made 
this release possible!

Cheers,
Piotr & Zhijiang

Re: [ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Jingsong Li
Congratulations!

Thanks Zhijiang and Piotr as release managers, and thanks everyone.

Best,
Jingsong

On Wed, Jul 8, 2020 at 10:51 AM chaojianok  wrote:

> Congratulations!
>
> Very happy to make some contributions to Flink!
>
>
>
>
>
> At 2020-07-07 22:06:05, "Zhijiang"  wrote:
>
> The Apache Flink community is very happy to announce the release of
> Apache Flink 1.11.0, which is the latest major release.
>
> Apache Flink® is an open-source stream processing framework for distributed,
> high-performing, always-available, and accurate data streaming
> applications.
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Please check out the release blog post for an overview of the improvements for
> this new major release:
> https://flink.apache.org/news/2020/07/06/release-1.11.0.html
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364
>
> We would like to thank all contributors of the Apache Flink community who made
> this release possible!
>
> Cheers,
> Piotr & Zhijiang
>
>
>
>
>


-- 
Best, Jingsong Lee


Re: [ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Leonard Xu
Congratulations!

Thanks Zhijiang and Piotr for the great work, and thanks everyone involved!

Best,
Leonard Xu



Re: Re:[ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Yun Tang
Congratulations to everyone involved, and thanks to Zhijiang and Piotr for their work
as release managers.

From: chaojianok 
Sent: Wednesday, July 8, 2020 10:51
To: Zhijiang 
Cc: dev ; user@flink.apache.org ; 
announce 
Subject: Re:[ANNOUNCE] Apache Flink 1.11.0 released


Congratulations!

Very happy to make some contributions to Flink!

At 2020-07-07 22:06:05, "Zhijiang"  wrote:

The Apache Flink community is very happy to announce the release of Apache 
Flink 1.11.0, which is the latest major release.

Apache Flink® is an open-source stream processing framework for distributed, 
high-performing, always-available, and accurate data streaming applications.

The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of the improvements for 
this new major release:
https://flink.apache.org/news/2020/07/06/release-1.11.0.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364

We would like to thank all contributors of the Apache Flink community who made 
this release possible!

Cheers,
Piotr & Zhijiang






Re: Check pointing for simple pipeline

2020-07-07 Thread Yun Tang
Hi Prasanna

Using incremental checkpoints is always better than not, as they are faster and
consume less memory.
However, incremental checkpointing is only supported by the RocksDB state backend.
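
A minimal sketch, assuming the RocksDB state backend dependency is on the classpath
and using a placeholder checkpoint path:

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(60_000);   // checkpoint every 60 seconds
env.setStateBackend(new RocksDBStateBackend("s3://my-bucket/checkpoints", true));   // true = incremental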


Best
Yun Tang

From: Prasanna kumar 
Sent: Tuesday, July 7, 2020 20:43
To: d...@flink.apache.org ; user 
Subject: Check pointing for simple pipeline

Hi ,

I have pipeline. Source-> Map(JSON transform)-> Sink..

Both source and sink are Kafka.

What is the best checkpointing mechanism?

Is setting incremental checkpoints a good option? What should I be careful of?

I am running it on AWS EMR.

Will checkpointing slow down processing?

Thanks,
Prasanna.


Re: [ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Wesley

Nice news. Congrats!


Leonard Xu wrote:

Congratulations!

Thanks Zhijiang and Piotr for the great work, and thanks everyone involved!

Best,
Leonard Xu



Re: [ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Rui Li
Congratulations! Thanks Zhijiang & Piotr for the hard work.

On Tue, Jul 7, 2020 at 10:06 PM Zhijiang  wrote:

> The Apache Flink community is very happy to announce the release of
> Apache Flink 1.11.0, which is the latest major release.
>
> Apache Flink® is an open-source stream processing framework for distributed,
> high-performing, always-available, and accurate data streaming
> applications.
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Please check out the release blog post for an overview of the improvements for
> this new major release:
> https://flink.apache.org/news/2020/07/06/release-1.11.0.html
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364
>
> We would like to thank all contributors of the Apache Flink community who made
> this release possible!
>
> Cheers,
> Piotr & Zhijiang
>


-- 
Best regards!
Rui Li


Re: [ANNOUNCE] Apache Flink 1.11.0 released

2020-07-07 Thread Benchao Li
Congratulations!  Thanks Zhijiang & Piotr for the great work as release
managers.

On Wed, Jul 8, 2020 at 11:38 AM, Rui Li  wrote:

> Congratulations! Thanks Zhijiang & Piotr for the hard work.
>
> On Tue, Jul 7, 2020 at 10:06 PM Zhijiang 
> wrote:
>
>> The Apache Flink community is very happy to announce the release of
>> Apache Flink 1.11.0, which is the latest major release.
>>
>> Apache Flink® is an open-source stream processing framework for distributed,
>> high-performing, always-available, and accurate data streaming
>> applications.
>>
>> The release is available for download at:
>> https://flink.apache.org/downloads.html
>>
>> Please check out the release blog post for an overview of the
>> improvements for this new major release:
>> https://flink.apache.org/news/2020/07/06/release-1.11.0.html
>>
>> The full release notes are available in Jira:
>>
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12346364
>>
>> We would like to thank all contributors of the Apache Flink community who 
>> made
>> this release possible!
>>
>> Cheers,
>> Piotr & Zhijiang
>>
>
>
> --
> Best regards!
> Rui Li
>


-- 

Best,
Benchao Li


Chaining the creation of a WatermarkStrategy doesn't work?

2020-07-07 Thread Niels Basjes
Hi,

I'm migrating some of my code to Flink 1.11 and I ran into something I find
strange.

This works

WatermarkStrategy watermarkStrategy = WatermarkStrategy
.forBoundedOutOfOrderness(Duration.of(1, ChronoUnit.MINUTES));

watermarkStrategy
.withTimestampAssigner((SerializableTimestampAssigner)
(element, recordTimestamp) -> 42L);

However this does NOT work

WatermarkStrategy watermarkStrategy = WatermarkStrategy
.forBoundedOutOfOrderness(Duration.of(1, ChronoUnit.MINUTES))
.withTimestampAssigner((SerializableTimestampAssigner)
(element, recordTimestamp) -> 42L);


When I try to compile this last one I get

Error:(109, 13) java: no suitable method found for
withTimestampAssigner(org.apache.flink.api.common.eventtime.SerializableTimestampAssigner)
method
org.apache.flink.api.common.eventtime.WatermarkStrategy.withTimestampAssigner(org.apache.flink.api.common.eventtime.TimestampAssignerSupplier)
is not applicable
  (argument mismatch;
org.apache.flink.api.common.eventtime.SerializableTimestampAssigner
cannot be converted to
org.apache.flink.api.common.eventtime.TimestampAssignerSupplier)
method
org.apache.flink.api.common.eventtime.WatermarkStrategy.withTimestampAssigner(org.apache.flink.api.common.eventtime.SerializableTimestampAssigner)
is not applicable
  (argument mismatch;
org.apache.flink.api.common.eventtime.SerializableTimestampAssigner
cannot be converted to
org.apache.flink.api.common.eventtime.SerializableTimestampAssigner)

Why is that?

-- 
Best regards / Met vriendelijke groeten,

Niels Basjes