Hi,
there are currently no planned releases. I would actually like to start
preparing for the 1.0 release soon, but the community needs to discuss that
first.
How urgently do you need a 0.10.2 release? If this is the last blocker for
using Flink in production at your company, I can push for the
Hi Stephan,
Thanks for your quick response.
So, consider an operator task with two processed records and no barrier
incoming. If the task fails and must be restarted, the last consistent
snapshot will be used, which does not include information about the processed
but not yet checkpointed records. What
Hi
While working on a RichFilterFunction implementation, I was wondering if
there is a better way to access configuration
options read from a file during startup. Actually, I am
using getRuntimeContext().getExecutionConfig().getGlobalJobParameters()
to get access to my settings.
Reason for
Hi Christian,
the open method is called by the Flink workers when the parallel tasks are
initialized.
The configuration parameter is the configuration object of the operator.
You can set parameters in the operator config as follows:
DataSet text = ...
DataSet wc =
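To make the lifecycle concrete, here is a plain-Java sketch of the pattern described above (illustrative only, not Flink API; the class names `Configuration`, `MyFilter` and the key `filter.pattern` are hypothetical): the framework calls open() with the operator's configuration object on each worker before any records are processed.

```java
import java.util.HashMap;
import java.util.Map;

/** Toy sketch: workers call open(config) before processing, mirroring rich functions. */
public class OpenConfigDemo {

    /** Minimal stand-in for an operator configuration object. */
    static class Configuration {
        private final Map<String, String> entries = new HashMap<>();
        void setString(String key, String value) { entries.put(key, value); }
        String getString(String key, String fallback) { return entries.getOrDefault(key, fallback); }
    }

    /** Stand-in for a rich function whose open() reads operator parameters. */
    static class MyFilter {
        String pattern;
        void open(Configuration parameters) {
            // runs once per parallel task instance, before any records are processed
            pattern = parameters.getString("filter.pattern", "");
        }
        boolean filter(String value) { return value.contains(pattern); }
    }

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.setString("filter.pattern", "flink");
        MyFilter f = new MyFilter();
        f.open(conf); // the framework would do this on each worker
        System.out.println(f.filter("apache flink")); // true
    }
}
```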
Hello,
I'm trying to understand the process of checkpoint processing for
exactly-once in Flink, and I have some doubts.
The documentation says that when there is a failure and the state of an
operator is restored, the already processed records are deleted based on
their identifiers.
My doubt is,
Hi!
I think there is a misunderstanding. There are no identifiers maintained
and no individual records deleted.
On recovery, all operators reset their state to a consistent snapshot:
https://ci.apache.org/projects/flink/flink-docs-release-0.10/internals/stream_checkpointing.html
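The recovery behavior described above can be sketched in plain Java (illustrative only, not Flink API; all names here are made up): the whole operator state is reset to the snapshot and the input is replayed from the snapshot's position, with no per-record identifiers or deletions involved.

```java
import java.util.HashMap;
import java.util.Map;

/** Toy sketch (plain Java, not Flink): recovery resets whole-operator state. */
public class SnapshotRecoveryDemo {

    /** Counts words; fails after the 3rd record, then recovers from a snapshot
     *  taken after the 2nd record and replays from that position. */
    static Map<String, Integer> runWithFailure(String[] input) {
        Map<String, Integer> state = new HashMap<>();

        state.merge(input[0], 1, Integer::sum);
        state.merge(input[1], 1, Integer::sum);
        Map<String, Integer> snapshot = new HashMap<>(state); // checkpoint: state...
        int snapshotPos = 2;                                  // ...plus source position

        state.merge(input[2], 1, Integer::sum); // processed but not checkpointed
        // --- failure here: no record identifiers, no individual deletes ---

        state = new HashMap<>(snapshot);        // reset to the consistent snapshot
        for (int i = snapshotPos; i < input.length; i++) {
            state.merge(input[i], 1, Integer::sum); // replay from the position
        }
        return state;
    }

    public static void main(String[] args) {
        System.out.println(runWithFailure(new String[]{"a", "b", "a", "c"}));
    }
}
```

Note that the record processed after the snapshot is counted exactly once in the final result, because the reset discards it and the replay reprocesses it.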
Greetings,
Hi Robert,
We are on a deadline for the demo stage right now, before production for
management, so it would be great to have a stable 0.10.2 release within
this week, if possible.
Cheers
On Wed, Jan 13, 2016 at 4:13 PM, Robert Metzger wrote:
> Hi,
>
> there are currently no
Thanks, Gordon, for the nice answer!
One thing is important to add: exactly-once refers to state maintained by
Flink. All side effects (changes made to the "outside" world, which
includes sinks) in fact need to be idempotent, or they will only have
"at-least-once" semantics.
In practice, this works
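A plain-Java sketch of the idempotence point above (illustrative only; the upsert store and method names are hypothetical): with keyed upserts, replaying updates after a failure leaves the external system in the same state as a single clean run, whereas append-style writes would duplicate.

```java
import java.util.HashMap;
import java.util.Map;

/** Toy sketch: an idempotent (upsert-style) sink under at-least-once replay. */
public class IdempotentSinkDemo {

    /** Writes all updates once, then replays a suffix of them, as a recovering
     *  at-least-once sink would. Upserts make the replay harmless. */
    static Map<String, Integer> upsertWithReplay(String[][] updates, int replayFrom) {
        Map<String, Integer> store = new HashMap<>();
        for (String[] u : updates) {
            store.put(u[0], Integer.parseInt(u[1])); // first attempt
        }
        for (int i = replayFrom; i < updates.length; i++) {
            store.put(updates[i][0], Integer.parseInt(updates[i][1])); // replayed
        }
        return store; // identical to a single clean run
    }

    public static void main(String[] args) {
        String[][] updates = {{"a", "1"}, {"a", "2"}, {"b", "7"}};
        System.out.println(upsertWithReplay(updates, 1)); // {a=2, b=7}
    }
}
```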
Hi Francis,
A part of every complete snapshot is the record positions associated with
the barrier that triggered the checkpointing of this snapshot. The snapshot
is completed only when all the records within the checkpoint reach the
sink. When a topology fails, all the operators' state will
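The barrier mechanism described above can be sketched in plain Java (illustrative only, not Flink internals; names are made up): the barrier travels with the records and marks the stream position, and the checkpoint counts as complete only once the sink has seen the barrier.

```java
import java.util.Arrays;
import java.util.List;

/** Toy sketch: a snapshot is complete only once its barrier reaches the sink. */
public class BarrierDemo {
    static final String BARRIER = "<barrier-1>";

    /** Returns how many records the sink saw before the checkpoint completed,
     *  or -1 if the barrier never arrived (checkpoint still incomplete). */
    static int recordsBeforeCheckpoint(List<String> channel) {
        int seen = 0;
        for (String item : channel) {
            if (item.equals(BARRIER)) {
                return seen; // sink acknowledges the barrier: checkpoint complete
            }
            seen++;
        }
        return -1;
    }

    public static void main(String[] args) {
        // the barrier records the position; "r3" belongs to the next checkpoint
        List<String> channel = Arrays.asList("r1", "r2", BARRIER, "r3");
        System.out.println(recordsBeforeCheckpoint(channel)); // 2
    }
}
```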
I’m experimenting combining Spring with Flink. I’ve successfully instrumented
for Gradle, but Maven is emitting ClassNotFoundExceptions for items ostensibly
on the class path.
Project is currently configured for:
1. Scala 2.10.4
2. Flink 0.9.1
I execute the following
```
# In one terminal
$
Hi,
use JDBCOutputFormatBuilder to set all required parameters:
> JDBCOutputFormatBuilder builder = JDBCOutputFormat.buildJDBCOutputFormat();
> builder.setDBUrl(...)
> // and more
>
> var.write(builder.finish(), 0L);
-Matthias
On 01/13/2016 06:21 PM, Traku traku wrote:
> Hi everyone.
>
> I'm
thank you!!
2016-01-13 20:51 GMT+01:00 Matthias J. Sax :
> Hi,
>
> use JDBCOutputFormatBuilder to set all required parameters:
>
> > JDBCOutputFormatBuilder builder =
> JDBCOutputFormat.buildJDBCOutputFormat();
> > builder.setDBUrl(...)
> > // and more
> >
> >
Simply passing FlinkUserCodeClassLoader.class.getClassLoader() to the parent
constructor cleared the impasse.
2016-01-13 20:06:43.637 INFO 35403 --- [ main]
o.o.e.j.s.SocketTextStreamWordCount$ : Started SocketTextStreamWordCount.
in 5.176 seconds (JVM running for 12.58)
[INFO]
Thanks Robert! I'll be keeping tabs on the PR.
Cheers,
David
On Mon, Jan 11, 2016 at 4:04 PM, Robert Metzger wrote:
> Hi David,
>
> In theory isEndOfStream() is absolutely the right way to go for stopping
> data sources in Flink.
> That it's not working as expected is a
Hi,
the window contents are stored in state managed by the window operator at all
times until they are purged by a Trigger returning PURGE from one of its on*()
methods.
Out of the box, Flink does not have something akin to the lateness and cleanup
of Google Dataflow. You can, however
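The trigger-driven cleanup described above can be sketched in plain Java (illustrative only; the names and the three-element purge policy are hypothetical, not Flink's Trigger API): window contents stay buffered as operator state until some on*() call returns PURGE.

```java
import java.util.ArrayList;
import java.util.List;

/** Toy sketch: window contents are kept until a trigger result says PURGE. */
public class WindowPurgeDemo {
    enum TriggerResult { CONTINUE, FIRE, PURGE }

    static class Window {
        final List<Integer> contents = new ArrayList<>();

        TriggerResult onElement(int value) {
            contents.add(value);
            // hypothetical policy: purge once 3 elements have arrived
            return contents.size() >= 3 ? TriggerResult.PURGE : TriggerResult.CONTINUE;
        }

        void apply(TriggerResult r) {
            if (r == TriggerResult.PURGE) {
                contents.clear(); // window state is released only here
            }
        }
    }

    public static void main(String[] args) {
        Window w = new Window();
        w.apply(w.onElement(1));
        w.apply(w.onElement(2));
        System.out.println("buffered: " + w.contents.size()); // 2, still held
        w.apply(w.onElement(3));
        System.out.println("after purge: " + w.contents.size()); // 0
    }
}
```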
Hi everyone.
I'm trying to migrate some code to Flink 0.10 and I'm having a problem.
I'm trying to create a custom sink to insert the data into a PostgreSQL
database.
My code was this.
var.output(
// build and configure OutputFormat
JDBCOutputFormat
Hi Saiph,
In Flink, the key for keyBy() can be provided in different ways:
https://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#specifying-keys
(the doc is for DataSet API, but specifying keys is basically the same for
DataStream and DataSet).
As described in the
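The "different ways to provide a key" point can be sketched in plain Java (illustrative only, not Flink API; `keyBy` here is a made-up helper): keying by tuple position is just a special case of keying by a selector function, which works for any element type.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

/** Toy sketch: keying by position (tuples) vs. by a key selector function. */
public class KeyByDemo {

    /** Groups elements by the key the selector extracts from each of them. */
    static <T, K> Map<K, List<T>> keyBy(List<T> stream, Function<T, K> selector) {
        Map<K, List<T>> groups = new LinkedHashMap<>();
        for (T element : stream) {
            groups.computeIfAbsent(selector.apply(element), k -> new ArrayList<>())
                  .add(element);
        }
        return groups;
    }

    public static void main(String[] args) {
        // keyBy(0) on tuples ~ selecting the first field; for non-tuple types,
        // any selector function can serve as the key
        List<String[]> stream = Arrays.asList(
            new String[]{"a", "1"}, new String[]{"b", "2"}, new String[]{"a", "3"});
        Map<String, List<String[]>> byFirstField = keyBy(stream, t -> t[0]);
        System.out.println(byFirstField.keySet()); // [a, b]
    }
}
```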
Hi,
This line «stream.keyBy(0)» only works if stream is of type
DataStream[Tuple] - and this Tuple is not a Scala tuple but a Flink tuple
(why not use a Scala tuple?). Currently keyBy can be applied to anything
(at least in Scala), like DataStream[String] and
DataStream[Array[String]].
Can