I didn't find this as part of the
https://flink.apache.org/news/2019/04/09/release-1.8.0.html notes.
I think an update to the Important Changes section would be valuable for
users upgrading to 1.8 from earlier releases. Also, logging that the
library is on the classpath but the feature flag is set
Thank you, it is fixed in the new version.
Hi, David
Before I opened an issue about this, @Andrey Zagrebin and @Aljoscha Krettek
suggested that I extend AbstractStreamOperator to customize how the operator
works with state, or extend the state backend to add a cache layer on top of
it.
Fyi:
https://issues.apache.org/jira/browse/FLINK-10343
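For what it's worth, here is a rough sketch of what that suggestion could
look like: a custom operator extending AbstractStreamOperator that keeps a
small on-heap cache in front of the keyed state and writes it back when a
checkpoint is taken. All class and state names are made up for illustration;
this is not a pattern taken from the docs.

import java.util.HashMap;
import java.util.Map;

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.runtime.state.StateSnapshotContext;
import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;

public class CachingCountOperator extends AbstractStreamOperator<Long>
        implements OneInputStreamOperator<String, Long> {

    private transient ValueState<Long> counts;  // backend state (e.g. RocksDB)
    private transient Map<String, Long> cache;  // on-heap write-back cache

    @Override
    public void open() throws Exception {
        super.open();
        counts = getPartitionedState(new ValueStateDescriptor<>("count", Long.class));
        cache = new HashMap<>();
    }

    @Override
    public void processElement(StreamRecord<String> record) throws Exception {
        String key = record.getValue();
        Long c = cache.get(key);
        if (c == null) {
            // cache miss: read through to the state backend for this key
            setCurrentKey(key);
            c = counts.value();
            if (c == null) {
                c = 0L;
            }
        }
        cache.put(key, c + 1);
        output.collect(new StreamRecord<>(c + 1));
    }

    @Override
    public void snapshotState(StateSnapshotContext context) throws Exception {
        super.snapshotState(context);
        // flush the cache into the backend so the cached values become
        // part of the checkpoint and survive failures
        for (Map.Entry<String, Long> e : cache.entrySet()) {
            setCurrentKey(e.getKey());
            counts.update(e.getValue());
        }
    }
}

Flushing inside snapshotState keeps the cache consistent with checkpoints,
since the keyed backend snapshot is taken after this hook returns.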
Hi,
Can anyone give me a clue about how to read MongoDB data as a batch/streaming
data source in Flink? I can't find a MongoDB connector in the recent release.
Many thanks
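As far as I know there is no official MongoDB connector shipped with Flink,
so people usually roll their own source on top of the MongoDB Java driver.
A rough sketch (connection string, database and collection names are
placeholders):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
import org.bson.Document;

public class MongoSource extends RichSourceFunction<Document> {

    private volatile boolean running = true;
    private transient MongoClient client;

    @Override
    public void open(Configuration parameters) {
        client = MongoClients.create("mongodb://localhost:27017");  // placeholder URI
    }

    @Override
    public void run(SourceContext<Document> ctx) {
        MongoCollection<Document> collection =
            client.getDatabase("mydb").getCollection("events");     // placeholders
        try (MongoCursor<Document> cursor = collection.find().iterator()) {
            // emits the collection once (batch-style); a streaming variant
            // would tail an oplog / change stream instead
            while (running && cursor.hasNext()) {
                ctx.collect(cursor.next());
            }
        }
    }

    @Override
    public void cancel() {
        running = false;
    }

    @Override
    public void close() {
        if (client != null) {
            client.close();
        }
    }
}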
Thanks Aitozi.
Your answer makes good sense and I'm trying to implement this now. My code
is written as a KeyedProcessFunction, but I can't see where this exposes
the KeyContext interface. Is there anything you can point me to in the
docs?
Best,
David
On Sun, Apr 28, 2019 at 8:09 PM aitozi wrote:
Hi, David
The RocksDB keyed state backend operates under a KeyContext: every state
operation must first call setCurrentKey to make RocksDB aware of the current
key and to compute the current key group. These two parts are used to
interact with the underlying RocksDB.
I think you can achieve this goal by set
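To David's question: KeyedProcessFunction does not expose KeyContext; it is
AbstractStreamOperator that implements it, so you need a custom operator to
switch keys yourself. A minimal sketch (names are illustrative), attached to
a keyed stream via keyedStream.transform(...):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;

public class ExplicitKeyOperator extends AbstractStreamOperator<String>
        implements OneInputStreamOperator<String, String> {

    private transient ValueState<String> lastSeen;

    @Override
    public void open() throws Exception {
        super.open();
        lastSeen = getPartitionedState(new ValueStateDescriptor<>("lastSeen", String.class));
    }

    @Override
    public void processElement(StreamRecord<String> record) throws Exception {
        // setCurrentKey(...) (from the KeyContext interface) tells the RocksDB
        // backend which key, and hence which key group, the following state
        // accesses belong to. For keyed streams the framework already calls it
        // before processElement; calling it explicitly switches the key context.
        setCurrentKey(record.getValue());
        lastSeen.update(record.getValue());
        output.collect(record);
    }
}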
We have the following use case: we are reading a stream of events which we
want to write to different Parquet files based on data within the element.
The end goal is to register these Parquet files in Hive for querying. I was
exploring the option of using StreamingFileSink for this use case but
found
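For what it's worth, the per-element routing part can be done with a custom
BucketAssigner on the StreamingFileSink; registering the resulting
directories in Hive is still a separate step. A sketch, with a minimal
stand-in Event type and a placeholder path:

import org.apache.flink.core.fs.Path;
import org.apache.flink.core.io.SimpleVersionedSerializer;
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters;
import org.apache.flink.streaming.api.functions.sink.filesystem.BucketAssigner;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.SimpleVersionedStringSerializer;

public class RoutingSinkSketch {

    // minimal stand-in for the real event type
    public static class Event {
        public String category;
        public String payload;
        public String getCategory() { return category; }
    }

    public static StreamingFileSink<Event> buildSink() {
        return StreamingFileSink
            .forBulkFormat(
                new Path("hdfs:///warehouse/events"),  // placeholder base path
                ParquetAvroWriters.forReflectRecord(Event.class))
            .withBucketAssigner(new BucketAssigner<Event, String>() {
                @Override
                public String getBucketId(Event element, Context context) {
                    // each distinct bucket id becomes a directory under the
                    // base path; Hive can be pointed at these as partitions
                    return "category=" + element.getCategory();
                }

                @Override
                public SimpleVersionedSerializer<String> getSerializer() {
                    return SimpleVersionedStringSerializer.INSTANCE;
                }
            })
            .build();
    }
}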
I have a stateful operator in a task which processes thousands of elements
per second (per task) when using the Filesystem backend. As documented and
reported by other users, when I switch to the RocksDB backend, throughput
is considerably lower. I need something that combines the high performanc
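For context, switching between the two backends is a one-line change, which
makes the throughput comparison easy to reproduce; checkpoint paths below
are placeholders:

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BackendSwitch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // FsStateBackend keeps live state as objects on the JVM heap (fast, but
        // bounded by memory) and snapshots it to the filesystem on checkpoints.
        env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints"));

        // RocksDBStateBackend keeps live state serialized on local disk: it
        // scales past memory but pays (de)serialization on every access, which
        // explains the lower throughput. The boolean enables incremental
        // checkpoints. Pick one of the two:
        // env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints", true));

        // ... job definition and env.execute(...) would follow
    }
}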
Hi
I want to know exactly how Flink reads data, both in the case of a file on
the local filesystem and a file on a distributed file system.
When reading from the local file system, I guess every line of the file will
be read by a slot (according to the job parallelism) before the map logic is
applied.
When reading from H
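My understanding, sketched in code (the path is a placeholder):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ReadFileExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(4);

        // The source divides the file into input splits; each of the 4 parallel
        // source subtasks reads its assigned splits and hands records line by
        // line to the chained map. With a local (file://) path every TaskManager
        // must see the same file; on HDFS the splits are derived from the blocks.
        env.readTextFile("file:///tmp/input.txt")
           .map(String::toUpperCase)
           .print();

        env.execute("read-file-example");
    }
}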
Thanks Sameer/Rong:
As Fabian and you have mentioned, the window still sticks around forever for
a global window, so I am trying to avoid that scenario.
Fabian & Flink team - do you have any insights into what would happen if I
create a window and then later change its end time during the stream proc
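On the "global window sticks around forever" part: one known workaround is to
wrap the trigger in a PurgingTrigger, so the window contents are cleared on
every firing. A toy example (the count of 2 is arbitrary):

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.GlobalWindows;
import org.apache.flink.streaming.api.windowing.triggers.CountTrigger;
import org.apache.flink.streaming.api.windowing.triggers.PurgingTrigger;

public class PurgedGlobalWindow {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(Tuple2.of("a", 1), Tuple2.of("a", 2), Tuple2.of("b", 3))
           .keyBy(0)
           .window(GlobalWindows.create())
           // PurgingTrigger clears the window contents after every firing, so
           // the never-ending global window does not hold elements forever
           .trigger(PurgingTrigger.of(CountTrigger.of(2)))
           .sum(1)
           .print();

        env.execute("purged-global-window");
    }
}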
Thanks, Guowei, I see your point.
But I'm afraid there is no direct connection between delivery semantics and
the TimerService.
Yes, obviously, a Java Timer is the first thing that comes to mind, but it
requires an extra thread to perform the background work; this approach has
some drawbacks such as
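For reference, the built-in TimerService in a KeyedProcessFunction avoids the
extra thread: timers fire on the task's own thread and are checkpointed. A
small sketch (the one-minute delay is arbitrary):

import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class TimeoutFunction extends KeyedProcessFunction<String, String, String> {

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) {
        // registers a processing-time callback; no user-managed thread needed,
        // and registered timers survive failures as part of checkpoints
        ctx.timerService().registerProcessingTimeTimer(
            ctx.timerService().currentProcessingTime() + 60_000L);
        out.collect(value);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) {
        out.collect("timer fired for key " + ctx.getCurrentKey() + " at " + timestamp);
    }
}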
I have three physical nodes with Docker installed on them. I have one Docker
container with Mesos, Marathon, Hadoop and Flink. I configured the master
node and the slave nodes for Mesos, Zookeeper and Marathon. I did this work
step by step. First, on the master node, I entered the Docker container with
this comma
Hi,
*Problem description*
I have two file sources with the same format; each produces at most one new
file per tumbling window, and I need to merge the data from those two
sources. My operator chain is as follows:
FileReader1 --> Parser --\
Union
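For what it's worth, the merge itself is just DataStream#union, provided both
parsed streams have the same type. A sketch with placeholder paths, where
String::trim stands in for the real Parser:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TwoSourceUnion {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // two file readers feeding the same parsing step
        DataStream<String> parsed1 = env.readTextFile("file:///data/in1").map(String::trim);
        DataStream<String> parsed2 = env.readTextFile("file:///data/in2").map(String::trim);

        // union merges the two same-typed streams into one, which downstream
        // windowing then sees as a single input
        parsed1.union(parsed2).print();

        env.execute("two-source-union");
    }
}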