Wu wrote:
Great summary! Thanks for adding the translation specification in
it.
I learned a lot from the guide.
Best,
Jark
On Fri, 14 Feb 2020 at 23:39, Aljoscha Krettek <
aljos...@apache.org>
wrote:
Hi Everyone,
we just merged a new style guide for documentation writing:
https://flin
Wouldn't removing the ES 2.x connector be enough because we can then
update the ES 5.x connector? It seems there are some users that still
want to use that one.
Best,
Aljoscha
On 18.02.20 10:42, Robert Metzger wrote:
The ES5 connector is causing some problems on the CI system. It would be
nic
Hi,
thanks for starting this discussion!
However, I have a somewhat opposing opinion to this: we should disallow
using Google Docs for FLIPs and FLIP discussions and follow the already
established process more strictly.
My reasons for this are:
- discussions on the Google Doc are not reflec
ld
expect.
Greg
Note: this summary is rather naive and includes non-squashed hotfix
commits that were included in a PR
$ git shortlog --grep 'hotfix' -s -n release-0.9.0..
94 Stephan Ewen
42 Aljoscha Krettek
20 Till Rohrmann
16 Robert Metzger
13 Ufuk Celebi
+1
Although I would hope that it can be more than just "anticipated".
Best,
Aljoscha
On 19.02.20 15:40, Till Rohrmann wrote:
Thanks for volunteering as one of our release managers Zhijiang.
+1 for the *anticipated feature freeze date* end of April. As we go along
and collect more data points
Hi,
the background is this series of Jira Issues and PRs around extending
the .bat scripts for windows:
https://issues.apache.org/jira/browse/FLINK-5333.
I would like to resolve this, by either closing the Jira Issues as
"Won't Do" or finally merging these PRs. The questions I have are:
-
cidentally, ProgramInvocationException, we just throw in place as it
is accessible.
- transitively, flink-optimizer, for one utility.
- transitively, flink-java, for several utilities.
flink-runtime:
- mainly for JobGraph generating.
With a previous discussion with @Aljoscha Krettek our
goal is brie
If there is a noreply email address, that could be on purpose. This
happens when you configure github to not show your real e-mail address.
This also happens when contributors open a PR and don't want to show
their real e-mail address. I talked to at least 1 person that had it set
up like this o
> For the -R flag, this was in the PoC that I published just as a quick
> implementation, so that I can move fast to the entrypoint part.
> Personally, I would not even be against having a separate command in
> the CLI for this, sth like run-on-cluster or something along those
> lines.
> What do y
On 09.03.20 03:15, tison wrote:
So far, there is a PR[1] that implements the proposal in this thread.
I look forward to your reviews or start a vote if required.
Nice, I'll try and get to review that this week.
Best,
Aljoscha
Since there was no-one that said we should keep the windows scripts and
no-one responded on the user ML thread I'll close the Jira issues/PRs
about extending the scripts.
Aljoscha
On 10.03.20 03:31, Yang Wang wrote:
For the "run-job", do you mean to submit a Flink job to an existing session
or
just like the current per-job to start a dedicated Flink cluster? Then will
"flink run" be deprecated?
I was talking about the per-job mode that starts a dedicated Flink
cluster.
On 10.03.20 14:35, Robert Metzger wrote:
I'm wondering whether we should file a ticket to remove the *.bat files in
bin/ ?
We can leave them there because they're not doing much harm, and
removing them might actively break some existing setup.
Best,
Aljoscha
Thanks! I'm reading the document now and will get back to you.
Best,
Aljoscha
Hi,
I don't understand this discussion. Hints, as I understand them, should
work like this:
- hints are *optional* advice for the optimizer to try and help it to
find a good execution strategy
- hints should not change query semantics, i.e. they should not change
connector properties executi
On 09.03.20 06:10, Rong Rong wrote:
- Is this feature (disabling checkpoint and restarting job from Kafka
committed GROUP_OFFSET) not supported?
I believe the Flink community never put much (any?) effort into this
because the Flink Kafka Consumer does its own offset handling. Starting
from th
+1 (binding)
Aljoscha
On 12.03.20 14:05, Flavio Pompermaier wrote:
There's also a related issue that I opened a long time ago
https://issues.apache.org/jira/browse/FLINK-10879 that could be closed once
this FLIP is implemented (or closed immediately and referenced as duplicated
by the new JIRA ticket that would be creat
+1
Aljoscha
Thanks for the update!
On 13.03.20 13:47, Rong Rong wrote:
1. I think we have finally pinpointed what the root cause to this issue is:
When partitions are assigned manually (e.g. with assign() API instead
subscribe() API) the client will not try to rediscover the coordinator if
it dies [1]. This
On 18.03.20 14:45, Flavio Pompermaier wrote:
what do you think if we exploit this job-submission sprint to also address
the problem discussed in https://issues.apache.org/jira/browse/FLINK-10862?
That's a good idea! What should we do? It seems that most committers on
the issue were in favour o
Agreed to what Dawid and Timo said.
To answer your question about multi line SQL: no, we don't think we need
this in Flink 1.11, we only wanted to make sure that the interfaces that
we now put in place will potentially allow this in the future.
Best,
Aljoscha
On 01.04.20 09:31, godfrey he wr
+1 to making Blink the default planner, we definitely don't want to
maintain two planners for much longer.
Best,
Aljoscha
I think we're designing ourselves into ever more complicated corners
here. Maybe we need to take a step back and reconsider. What would you
think about this (somewhat) simpler proposal:
- introduce a hint called CONNECTOR_OPTIONS(k=v,...). or
CONNECTOR_PROPERTIES, depending on what naming we w
Hi All,
we're currently struggling a bit with test stability, it seems
especially on Azure. If you encounter a test failure in a PR or anywhere
else, please take the time to check if there is already a Jira issue or
create a new one. If there is already an Issue, please report the
additional
Hi,
the documentation has a guide about the Streaming API:
https://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html
This also contains a section about the rolling (HDFS) FileSystem sink:
https://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html#hadoop
We still have this issue as blocker
https://issues.apache.org/jira/browse/FLINK-2747
I don’t see a fix for this yet, however.
> On 22 Oct 2015, at 11:53, Maximilian Michels wrote:
>
> Ah. Good catch! :) The job counter should not be limited by the size
> of the job history. https://issues.apac
There are still references to 0.10-SNAPSHOT in the release. Especially for the
quickstarts this is problematic:
~/D/flink (release-0.10.0-rc1|✔) $ ag "0.10-SNAPSHOT"
docs/_config.yml
30:version: "0.10-SNAPSHOT"
docs/apis/best_practices.md
331:0.10-SNAPSHOT
346:0.10-SNAPSH
start-cluster-streaming.sh and start-local-streaming.sh don’t work if the flink
path has spaces. I’m fixing it on master and on release-0.10.
> On 26 Oct 2015, at 23:06, Maximilian Michels wrote:
>
> Please vote on releasing the following candidate as Apache Flink version
> 0.10.0:
>
> The comm
The plan visualizer does not show anything for the output generated by
“bin/flink info”
> On 27 Oct 2015, at 13:48, Aljoscha Krettek wrote:
>
> start-cluster-streaming.sh and start-local-streaming.sh don’t work if the
> flink path has spaces. I’m fixing it on master and on release
>
> On 27 October 2015 at 15:18, Maximilian Michels wrote:
>
>> Good catch, Aljoscha. As far as I know the plan visualizer is only broken
>> for Safari. It works for me with Firefox.
>>
>> On Tue, Oct 27, 2015 at 3:14 PM, Aljoscha Krettek
>> wrote:
worked, so if we can quickly create a new release candidate it could
be successful.
@Ufuk, do you still want to fix the missing default configuration parameters
before the next RC?
What do you think?
> On 27 Oct 2015, at 15:46, Aljoscha Krettek wrote:
>
> Yes, I can confirm that it w
I think the quickstarts should be very easy to discover, so we should keep them
on the main page. If you just browse to flink.apache.org you would not be aware
that they exist.
> On 28 Oct 2015, at 13:37, Matthias J. Sax wrote:
>
> What about "Quickstart" menu point... I really would like to mo
Could it be a problem that there are two TaskManagers running per machine?
> On 29 Oct 2015, at 19:04, Greg Hogan wrote:
>
> I have memory logging enabled. Tail of TaskManager log on 10.0.88.140:
>
> 17:35:26,415 INFO
> org.apache.flink.runtime.taskmanager.TaskManager - Garbage
> c
-1
We include openjdk JMH, which is GPL v2 licensed. This could be a problem.
> On 02 Nov 2015, at 14:27, Fabian Hueske wrote:
>
> -1
> A user reported a bug in the Scala DataSet API: FLINK-2953
>
> Should be easy to solve. I will provide a fix soon.
>
> 2015-10-30 15:51 GMT+01:00 Maximilian M
I also have an open PR that adds some new dependencies, most notably Nifi,
Storm, netty-router and Elasticsearch, and removes AWS.
> On 02 Nov 2015, at 14:39, Aljoscha Krettek wrote:
>
> -1
> We include openjdk JMH, which is GPL v2 licensed. This could be a problem.
>> On 02
cancel the release candidate and will create
>> a new one once the fixes are in.
>>
>> On Mon, Nov 2, 2015 at 3:02 PM, Aljoscha Krettek wrote:
>>> I also have an open PR that adds some new dependencies, most notably Nifi,
>>> Storm, netty-router and Elasticsearc
This is the PR: https://github.com/apache/flink/pull/1316 if anyone is
interested or knows something about how we have to declare licenses.
> On 02 Nov 2015, at 16:10, Aljoscha Krettek wrote:
>
> flink-storm has a dependency on storm-core. Therefore I thought it should be
> added to
02 Nov 2015, at 16:26, Aljoscha Krettek wrote:
>>
>> This is the PR: https://github.com/apache/flink/pull/1316 if anyone is
>> interested or knows something about how we have to declare licenses.
>>> On 02 Nov 2015, at 16:10, Aljoscha Krettek wrote:
>>>
>>>
1:compile
> [INFO] | | \- com.jamesmurty.utils:java-xmlbuilder:jar:0.4:compile
> [INFO] | | \- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile
> +[INFO] | | \-
> org.apache.directory.server:apacheds-kerberos-codec:jar:2.0.0-M15:compile
> [INFO] | | \- org.codehau
leases are correct there should be no problem
>>
>> On Mon, Nov 2, 2015 at 7:11 PM, Aljoscha Krettek
>> wrote:
>>
>>> Sorry for the back-and-forth guys. I updated my PR to completely remove
>>> the LICENSE/NOTICE files that where specific to the binary relea
Ahh, there are no fold tests for the aligned time window operators.
If you use Ingestion time or event time as stream characteristic it works
correctly.
On Wed, Nov 4, 2015, 15:23 Maximilian Michels wrote:
> That's basically what I found out so far too. If you want to fix
> it please go ah
The reason why we didn’t want to have it in the JobManager interface by default
is that it could be a security concern. I also see, however, that security in
general is not very strong right now.
> On 05 Nov 2015, at 09:58, Matthias J. Sax wrote:
>
> I like the idea in general! Not sure if int
I see Gyula’s point. In the case of the TupleTypeInfo subclass it only works
because the equals method of TupleTypeInfo is used, IMHO.
Stupid implementation mistakes should be caught by the Java type checker. I
don’t think it would allow passing a Map to the map method if
the type of the DataS
+1
I think this one could be it. I did:
- verify the checksums of some of the release artifacts, I assume that the
rest is also OK
- test build for custom Hadoop versions 2.4, 2.5, 2.6
- verify that LICENSE/NOTICE are correct
- verify that licenses of dependencies are compatible
- read the R
+1 for some way of declaring public interfaces as experimental.
> On 10 Nov 2015, at 22:24, Stephan Ewen wrote:
>
> I think we need anyways an annotation "@PublicExperimental".
>
> We can make this annotation such that it can be added to methods and can
> use that to declare
> Methods in an oth
Let’s try this again… :D
+1
I think this one could be it. I did:
- verify the checksums of some of the release artifacts, I assume that
the rest is also OK
- test build for custom Hadoop versions 2.4, 2.5, 2.6
- verify that LICENSE/NOTICE are correct
- verify that licenses of dependencies are com
Big +1
Of course, we had the initial talk about it… :D
> On 11 Nov 2015, at 19:33, Kirschnick, Johannes
> wrote:
>
> Hi Stephan,
>
> looking at the TypeHint, I got reminded on how Gson (a Json Parser) handles
> this situation of parsing generics.
>
> See here for an overview
> https://sites.
Hi,
you can do it using the register* methods on StreamExecutionEnvironment. So,
for example:
// set up the execution environment
final StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
env.registerType(InputType.class);
env.registerType(MicroModel.class);
I
IMHO it’s not possible to have streaming/batch specific ExecutionConfig since
the user functions share a common interface, i.e.
getRuntimeContext().getExecutionConfig() simply returns the same type for both.
What could be done is to migrate batch/streaming specific stuff to the
ExecutionEnviron
I would be against adding anything Storm-specific in the core (streaming is
core as well) Flink APIs. If we add stuff there we have to stick to it and I
don’t see a lot of use for reusing single Bolts/Spouts.
I’m very excited about the work on Storm compatibility in general, though. :D
> On 14
Hi,
@Konstantin: are you using event-time or processing-time windows. If you are
using processing time, then you can only do it the way Fabian suggested. The
problem here is, however, that the .keyBy().reduce() combination would emit a
new maximum for every element that arrives there and you nev
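The behavior described above can be illustrated without Flink. The following is a hedged, non-Flink sketch (plain Java, names invented for illustration) of why a `keyBy().reduce()` running maximum emits one updated result per arriving element rather than a single final value:

```java
import java.util.ArrayList;
import java.util.List;

public class RollingMaxSketch {
    // Simulates a rolling reduce with max: every arriving element
    // updates the accumulator and triggers one emission downstream.
    static List<Integer> rollingMax(List<Integer> input) {
        List<Integer> emitted = new ArrayList<>();
        Integer current = null;
        for (int value : input) {
            current = (current == null) ? value : Math.max(current, value);
            emitted.add(current); // one output per input element
        }
        return emitted;
    }

    public static void main(String[] args) {
        // Four inputs produce four outputs, not one final maximum.
        System.out.println(rollingMax(List.of(3, 1, 5, 2))); // [3, 3, 5, 5]
    }
}
```

This is why Fabian's suggestion only gives a stream of intermediate maxima in processing time; there is no point at which the reduce "closes" and emits a single result.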
Hi,
I’m not sure this is a problem. If a user specifies sliding windows then one
element can (and will) end up in several windows. If these are joined then
there will be multiple results. If the user does not want multiple windows then
tumbling windows should be used.
IMHO, this is quite straig
+1
I ran an example with a custom operator that processes high-volume kafka
input/output and has a large state size. I ran this on 10 GCE nodes.
> On 25 Nov 2015, at 14:58, Till Rohrmann wrote:
>
> Alright, then I withdraw my remark concerning testdata.avro.
>
> On Wed, Nov 25, 2015 at 2:56 P
pand=1
>>> Since this is really hard to read, I (half-automated) generated the
>>> following list of annotated classes:
>>>
>>>
>> https://github.com/rmetzger/flink/blob/interface_stability_revapi/annotations.md
>>>
>>> Please let me know
Hi,
just some information. The Table API code generator already has preliminary
support for generating code that is NULL-aware. So for example if you have
expressions like 1 + NULL the result would also be null.
I think one of the missing pieces is a way to get data that contains null
values in
Oh, this is probably the Jira for what I mentioned:
https://issues.apache.org/jira/browse/FLINK-2988
> On 27 Nov 2015, at 11:02, Aljoscha Krettek wrote:
>
> Hi,
> just some information. The Table API code generator already has preliminary
> support for generating code that is
It seems there is an Either.Left stored somewhere in the Object. Could that be?
> On 28 Nov 2015, at 20:18, Vasiliki Kalavri wrote:
>
> org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:307)
Yes, I agree, these are the steps to take. Thanks for creating the issues.
> On 08 Dec 2015, at 07:18, Li, Chengxiang wrote:
>
> Chengxiang
We would need to have a stable interface between the connectors and flink and
have very good checks that ensure that we don’t inadvertently break things.
> On 10 Dec 2015, at 15:45, Fabian Hueske wrote:
>
> Sounds like a good idea to me.
>
> +1
>
> Fabian
>
> 2015-12-10 15:31 GMT+01:00 Maxim
Hi All,
I want to discuss some ideas about improving the primitives/operations that
Flink offers for user-state, timers and windows and how these concepts can be
unified.
It has come up a lot lately that people have very specific requirements
regarding the state that they keep and it seems nece
Clojure is not considering the user-jar when trying to load the class.
> On 10 Dec 2015, at 17:05, Matthias J. Sax wrote:
>
> Hi Squirrels,
>
> I was playing with a Flink Clojure WordCount example today.
> https://github.com/mjsax/flink-external/tree/master/flink-clojure
>
> After building the
n top of these
>> generic facilities we need to have a way to scope state not only by key but
>> also by windows (or better, some generic state scope).
>>
>> This is currently handled by the WindowOperator itself and would then
>> be delegated to the enhanced state inter
tes just be some API sugar on top of what we
>> have know (ValueState) or does it have some added functionality (like
>> incremental checkpoints for list states)?
>>
>> Gyula
>>
>> Aljoscha Krettek wrote (Mon, 14 Dec 2015, 11:03):
&g
Good write up. You could extend the Table of 1) a/b 2) a/b at the top with
“chaining” (but you already know this, I guess). Chaining changes all of these
and I think it can be tricky to know whether stuff is chained or not (for
users, and even for us developers…).
> On 13 Dec 2015, at 19:24, G
called
>> KvState?
>
>
> The OperatorState interface would be called KvState.
>
>
> On Mon, Dec 14, 2015 at 11:18 AM, Aljoscha Krettek
> wrote:
>
>> Yes, as Kostas said, it would initially not provide more functionality but
>> it would enable us to add i
about
>> when do I accidentally introduce chaining. Or what do you think?
>>
>> Best,
>> Gábor
>>
>>
>>
>>
>> 2015-12-14 11:33 GMT+01:00 Aljoscha Krettek :
>>> Good write up. You could extend the Table of 1) a/b 2) a/b at the top
&
Hi,
I thought a bit about how to improve the handling of time in Flink, mostly as
it relates to windows. The problem is that mixing processing-time and
event-time windows in one topology is very hard (impossible) right now. Let me
explain it with this example:
val env: StreamExecutionEnvironmen
{
> > >if (env.getStreamTimeCharacteristic() ==
> > > TimeCharacteristic.ProcessingTime) {
> > > return ProcessingTimeTrigger.create();
> > >} else {
> > > return EventTimeTrigger.create();
> > >}
> > > }
> > >
> > > That jus
erride
> > > > > public Trigger
> > > > > getDefaultTrigger(StreamExecutionEnvironment env) {
> > > > >if (env.getStreamTimeCharacteristic() ==
> > > > > TimeCharacteristic.ProcessingTime) {
> > > > > return Proc
Hi,
the iteration operators (head and tail) don't have a StreamOperator, they
are pure tasks.
On Sat, Jan 2, 2016, 21:08 Matthias J. Sax wrote:
> Hi,
>
> I am working on FLINK-1870 and my changes break some unit tests. The
> problem is in streaming.api.IterateTest.
>
> I tracked the problem down
Nice to hear. :D
I think you can go ahead and add the Jira. About the renaming: I also think
that it would make sense to do it.
> On 04 Jan 2016, at 19:48, Vasiliki Kalavri wrote:
>
> Hello squirrels and happy new year!
>
> I'm reviving this thread to share some results and discuss next steps.
Hi,
I’m currently examining ways to 1) change the window operators to use the
partitioned state abstraction for window state and 2) implement state backends
for managed memory/out-of-core state.
I think it would be helpful to pull the state backend apart. Right now, for
example, the DbStateBack
I’m also for Approach #1. I like simplifying things.
> On 13 Jan 2016, at 14:25, Vasiliki Kalavri wrote:
>
> Hi,
>
> +1 for removing the Combinable annotation. Approach 1 sounds like the
> best option to me.
>
>
> On 13 January 2016 at 14:11, Till Rohrmann wrote:
>
>> Hi Fabian,
>>
>> tha
+1 on protecting the master
> On 13 Jan 2016, at 14:46, Márton Balassi wrote:
>
> +1
>
> On Wed, Jan 13, 2016 at 12:37 PM, Matthias J. Sax wrote:
>
>> +1
>>
>> On 01/13/2016 11:51 AM, Fabian Hueske wrote:
>>> @Stephan: You mean all tags should be protected, not only those under
>> rel?
>>>
>
Hi,
I think it’s more complicated than a missing import statement (the code has the
correct import statement). I’ll look into it.
Cheers,
Aljoscha
> On 17 Jan 2016, at 13:32, Stephan Ewen wrote:
>
> Hi!
>
> I think this is no Scala version issue; you are probably missing an import statement:
>
> "impor
Hi,
I see what you mean and I would also feel that they could have parentheses? On
the other hand, the methods are really side-effect free, they don’t modify the
original stream in any way, they just create a new “shuffle operator” that
will affect operations performed on this shuffled stre
Hi,
one thing that is often overlooked are examples. So if you have experience in
some area and think you could benefit from implementing something in Flink then
we would also be happy to have that example as part of Flink or in some other
repository. This way users can learn by studying from mor
Hi,
the changes to the KMeans example look good so far. About moving everything to
external classes, IMHO we should do it, but I can also see why it is nice to
have the whole example contained in one file. So let’s see what the others
think.
Cheers,
Aljoscha
> On 21 Jan 2016, at 18:04, Stefano
Hi,
I think the refactoring of Partitioned State and the WindowOperator on state
work is almost ready. I also have the RocksDB state backend working. I’m
running some tests now on the cluster and should be able to open a PR tomorrow.
> On 25 Jan 2016, at 15:36, Stephan Ewen wrote:
>
> I agree
Hi,
sorry about not answering but I wanted to wait since I already voiced my
opinion on the PR.
I think it is better to assume an already running redis because it is easier to
work around clashes in running redis instances (ports, data directory, and
such). Then, however, care needs to be taken
Hi,
I think the problem is that the source finished before the extractor has the
chance to emit even a single watermark. This means that the topology will shut
down and the window operator does not emit in-flight windows upon shutdown.
Cheers,
Aljoscha
> On 26 Feb 2016, at 11:40, Nam-Luc Tran w
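The mechanism behind this can be sketched as follows. This is a simplified illustration (not Flink's actual internals): an event-time window fires only when a watermark reaches or passes its end timestamp, so a source that finishes before emitting any watermark leaves the window unfired:

```java
import java.util.ArrayList;
import java.util.List;

public class WatermarkSketch {
    // A window covering [start, windowEnd) fires only when some
    // watermark wm with wm >= windowEnd is observed.
    static List<String> run(long windowEnd, List<Long> watermarks) {
        List<String> fired = new ArrayList<>();
        for (long wm : watermarks) {
            if (wm >= windowEnd) {
                fired.add("fire@" + wm);
            }
        }
        return fired;
    }

    public static void main(String[] args) {
        // Source finished before emitting a watermark: the window never fires.
        System.out.println(run(1000L, List.of())); // []
        // With a watermark past the window end, the window fires once.
        System.out.println(run(1000L, List.of(500L, 1200L))); // [fire@1200]
    }
}
```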
Very good, I’ll test the savepoints again and also try to hammer some of the
recent API fixes.
Could we also have the usual google doc for keeping track of the basic checks?
> On 01 Mar 2016, at 09:08, Robert Metzger wrote:
>
> Dear Flink community,
>
> Please vote on releasing the following c
By the way, these are the commits that were added since rc3, so most of the
testing that we already did should also be valid for this RC:
$ git log origin/release-1.0.0-rc3..origin/release-1.0.0-rc4
commit 1b0a8c4e9d5df35a7dea9cdd6d2c6e35489bfefa
Author: Robert Metzger
Date: Tue Mar 1 16:40:31
I think you have a point. Another user also just ran into problems with the
TypeExtractor. (The “Java Maps and TypeInformation” email).
So let’s figure out what needs to be changed to make it work for all people.
Cheers,
Aljoscha
> On 02 Mar 2016, at 11:15, Gyula Fóra wrote:
>
> Hey,
>
> I ha
e:
>
> Do we continue using the google shared doc for RC3 for the release testing
> coordination?
>
> On Wed, Mar 2, 2016 at 11:31 AM, Aljoscha Krettek
> wrote:
>
>> By the way, these are the commits that were added since rc3, so most of the
>> testing that w
+1
I think we have a winner. :D
The “boring” tests from the checklist should still hold for this RC and I now
ran a custom windowing job with state on RocksDB on Hadoop 2.7 with Scala 2.11.
I used the Yarn HA mode and shut down both JobManagers and TaskManagers and the
job restarted successful
Hi,
that call needs to be in the lock scope because we need to ensure that no
element processing/element emission happens while we forward the checkpoint
barrier and perform the checkpoint.
Let me illustrate with an example. Say we have a source. What can happen in the
source if the call is not
Other tasks like the StreamTask will handle broadcast and checkpoint then
> process element
>
> so lock is “no use” in those tasks.
>
> am i right?
>
> thanks your help very much!
>
>
> Sent from my iPhone
>
>> On 10 Mar 2016, at 19:45, Aljoscha Krettek wrote:
>>
&g
Hi,
could you please open a Jira issue for that.
Cheers,
Aljoscha
> On 11 Mar 2016, at 06:41, janardhan shetty wrote:
>
> We need to update the Job.java file of 1.0 to hold the correct links of
> flink examples:
>
> Currently it is pointing to
> http://flink.apache.org/docs/latest/examples.html
By the way, I don’t think it’s a bug that addSink() returns the Java
DataStreamSink. Having a Scala specific version of a DataStreamSink would not
add functionality in this place, just code bloat.
> On 14 Mar 2016, at 10:05, Fabian Hueske wrote:
>
> Yes, we will have more of these issues in the
Hi folks,
I opened a Jira Issue: https://issues.apache.org/jira/browse/FLINK-3614
This is the text of the issue:
I propose to remove the special Non-Keyed Window Operator and implement
non-parallel windows by using the standard WindowOperator with a dummy
KeySelector.
Maintaining everything for
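The proposal boils down to routing every element to the same key. The following is a minimal sketch with an invented stand-in for Flink's KeySelector interface (the interface shape here is assumed for illustration, not Flink's actual class):

```java
public class DummyKeySelectorSketch {
    // Minimal stand-in for a KeySelector<IN, KEY> interface.
    interface KeySelector<IN, KEY> {
        KEY getKey(IN value);
    }

    // Dummy selector: every element maps to the same constant key, so all
    // elements land in one key group and the keyed WindowOperator
    // effectively runs non-parallel.
    static <T> KeySelector<T, Byte> dummyKeySelector() {
        return value -> (byte) 0;
    }

    public static void main(String[] args) {
        KeySelector<String, Byte> selector = dummyKeySelector();
        // Different elements, same key:
        System.out.println(selector.getKey("a").equals(selector.getKey("b"))); // true
    }
}
```

With this trick, the separate non-keyed window operator becomes redundant: non-parallel windowing is just keyed windowing with a single key.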
Hi,
you are right. Only the first operator in a chain can be a keyed operator.
Therefore, the “setKeyContextElement1” call is currently unnecessary. If you
want you can open an issue for that and work on removing it.
Cheers,
Aljoscha
> On 16 Mar 2016, at 09:24, Ma GuoWei wrote:
>
> hi,all
> T
Hi,
if you yourself (Gyula) don’t want to maintain it anymore in the Flink codebase
I would vote to move it to an external repository. If you are not using it
anymore I’m afraid no one will really work on it.
One more thing. When using the DB state backend savepoints don’t work.
Cleanup/compact
Hi,
I’m also sending this to @user because the Trigger API concerns users directly.
There are some things in the Trigger API that I think require some
improvements. The issues are trigger testability, fire semantics and composite
triggers and lateness. I started a document to keep track of thing
=sharing
I think all of this is very important for people working on event-time based
pipelines.
Feedback is very welcome and I hope that we can expand the document together
and come up with good solutions.
Cheers,
Aljoscha
> On 21 Mar 2016, at 17:46, Aljoscha Krettek wrote:
>
> Hi,
&
ations of it really hard to understand.
>
> +1 for the suggested changes.
> Are there plans to touch the Evictor interface as well? IMO, this needs a
> redesign as well.
>
> Fabian
>
> 2016-03-21 19:21 GMT+01:00 Aljoscha Krettek :
> Hi,
> my previous message might be a bi
Hi,
how are you printing the debug statements?
But yeah all the logic of renaming in progress files and cleaning up after a
failed job happens in restoreState(BucketState state). The steps are roughly
these:
1. Move current in-progress file to final location
2. truncate the file if necessary (i
Hi,
I’m aware of one critical fix and one somewhat critical fix since 1.0.0. One
concerns data loss in the RollingSink, the other is a bug in a window trigger.
I would like to release a bugfix release since some people are restricted to
using released versions and are also depending on the Rolli
Ok, then you should be able to change the log level to DEBUG in
conf/log4j.properties.
> On 23 Mar 2016, at 12:41, Vijay wrote:
>
> I think only the ERROR category gets displayed in the log file
>
> Regards,
> Vijay
>
> Sent from my iPhone
>
>> On Mar 23, 2