Aljoscha Krettek created FLINK-5647:
---
Summary: Fix RocksDB Backend Cleanup
Key: FLINK-5647
URL: https://issues.apache.org/jira/browse/FLINK-5647
Project: Flink
Issue Type: Bug
Just a bit of clarification: the OperatorState stuff is independent of
keyed state backends, i.e. even if you use RocksDB, the operator state will
not be stored in RocksDB; only keyed state is stored there.
Right now, when an operator state (ListState) is empty we will still write
some meta data ab
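A toy model may help illustrate the separation described above; all class and field names here are illustrative and not Flink APIs — keyed state is scoped to the current stream key (and is what a keyed backend such as RocksDB stores), while operator list state is a plain list per parallel operator instance:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative model only, not Flink code: keyed state lives per stream
// key; operator (list) state is one list per parallel instance, no key.
class ToyStateModel {
    // keyed state: one value per key of the KeyedStream
    private final Map<String, Long> keyedValueState = new HashMap<>();
    // operator state: a ListState-like list, not tied to any key
    private final List<Long> operatorListState = new ArrayList<>();

    void updateKeyed(String currentKey, long value) {
        keyedValueState.put(currentKey, value);
    }

    Long getKeyed(String currentKey) {
        return keyedValueState.get(currentKey);
    }

    void addToOperatorState(long value) {
        operatorListState.add(value);
    }

    List<Long> operatorState() {
        return operatorListState;
    }
}
```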
Hi,
depending on which version of Flink you're using, the answer changes. If
you use Flink 1.1, AggregatingProcessingTimeWindowOperator should be
responsible for executing that. In Flink 1.2 it should be WindowOperator.
For a quick overview of how scheduling works in Flink you could look at
this:
htt
It seems I'm in a bit of a minority here, but I like the @R tags. There are
simply too many pull requests for anyone to keep track of all of them, and
if someone thinks that a certain person would be a good reviewer for a
change, then tagging them helps them notice the PR.
I think the tag should not me
Aljoscha Krettek created FLINK-5616:
---
Summary: YarnPreConfiguredMasterHaServicesTest fails sometimes
Key: FLINK-5616
URL: https://issues.apache.org/jira/browse/FLINK-5616
Project: Flink
again.
> */
>
> but that is incorrect, because the code of both tests is fully equal;
> the one differing line looks like a bug introduced while refactoring to
> insert recordReuse
>
> testSpillingHashJoinWithMassiveCollisions
> 353 while ((record = buildSide.next(record)) != null) {
>
Very nice. Good work, team!
On Sat, Dec 24, 2016, 00:07 Fabian Hueske wrote:
> Thank you Ufuk for your work as release manager and everybody who
> contributed!
>
> Cheers, Fabian
>
> 2016-12-23 16:40 GMT+01:00 Ufuk Celebi :
>
> > The Flink PMC is pleased to announce the availability of Flink 1
xOutOfOrderness;
> if(potentialWM >= lastEmittedWatermark) {
> lastEmittedWatermark = potentialWM;
> }
> return new Watermark(lastEmittedWatermark);
> }
>
> I think those two implementations should use the same principle.
>
>
> Aljoscha Krettek-2 wrote
> > I'm afraid
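For context, the complete monotonic watermark-emission pattern behind the quoted snippet can be sketched in plain Java (independent of Flink; the class and field names are illustrative):

```java
// Sketch of bounded-out-of-orderness watermark emission. The max() guard
// ensures the watermark never moves backwards, even for late elements.
class BoundedLatenessWatermarkTracker {
    private final long maxOutOfOrderness;
    private long currentMaxTimestamp;
    private long lastEmittedWatermark = Long.MIN_VALUE;

    BoundedLatenessWatermarkTracker(long maxOutOfOrderness) {
        this.maxOutOfOrderness = maxOutOfOrderness;
        // start so that the initial watermark is Long.MIN_VALUE
        this.currentMaxTimestamp = Long.MIN_VALUE + maxOutOfOrderness;
    }

    void onElement(long timestamp) {
        currentMaxTimestamp = Math.max(currentMaxTimestamp, timestamp);
    }

    long currentWatermark() {
        long potentialWM = currentMaxTimestamp - maxOutOfOrderness;
        if (potentialWM >= lastEmittedWatermark) {
            lastEmittedWatermark = potentialWM;
        }
        return lastEmittedWatermark;
    }
}
```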
Thanks a lot for going through the issues and preparing this list!
From a first glance, some can definitely be closed. I didn't yet find the
time to look through all of them, but we should definitely work on cleaning
up our Jira.
Cheers,
Aljoscha
On Wed, 21 Dec 2016 at 18:19 Anton Solovev wrote
Aljoscha Krettek created FLINK-5374:
---
Summary: Extend Unit Tests for RegisteredBackendStateMetaInfo
Key: FLINK-5374
URL: https://issues.apache.org/jira/browse/FLINK-5374
Project: Flink
Aljoscha Krettek created FLINK-5373:
---
Summary: Extend Unit Tests for StateAssignmentOperation
Key: FLINK-5373
URL: https://issues.apache.org/jira/browse/FLINK-5373
Project: Flink
Issue
I'm afraid the doc is wrong here. The JavaDoc on Watermark says this about
watermarks:
"A Watermark tells operators that receive it that no elements with a
timestamp older or equal to the watermark timestamp should arrive at the
operator."
The system also relies on this fact, as visible in how ti
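The contract in the quoted JavaDoc can be stated as a one-line predicate (a sketch, not Flink code):

```java
// Per the quoted JavaDoc: once a watermark with timestamp W has been
// received, an element with timestamp older than or equal to W is late.
class WatermarkContract {
    static boolean isLate(long elementTimestamp, long watermark) {
        return elementTimestamp <= watermark;
    }
}
```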
Aljoscha Krettek created FLINK-5372:
---
Summary: Fix
RocksDBAsyncSnapshotTest.testCancelFullyAsyncCheckpoints()
Key: FLINK-5372
URL: https://issues.apache.org/jira/browse/FLINK-5372
Project: Flink
:55 PM, Fabian Hueske
> wrote:
> >
> >> Hi,
> >>
> >> I merged the Table API refactoring changes:
> >>
> >> - RESOLVED Clean up the packages of the Table API (FLINK-4704)
> >> - RESOLVED Move Row to flink-core (FLINK-5186)
>
Aljoscha Krettek created FLINK-5366:
---
Summary: Add end-to-end tests for Savepoint Backwards Compatibility
Key: FLINK-5366
URL: https://issues.apache.org/jira/browse/FLINK-5366
Project: Flink
Aljoscha Krettek created FLINK-5363:
---
Summary: Fire timers when window state is currently empty
Key: FLINK-5363
URL: https://issues.apache.org/jira/browse/FLINK-5363
Project: Flink
Issue
Yes, I'm confident that we can finish the tests by then and merge the
code.
On Fri, Dec 16, 2016, 17:41 Robert Metzger wrote:
> Thank you for the update. Do you think you can get it done by Monday
> evening?
>
> On Fri, Dec 16, 2016 at 5:23 PM, Aljoscha Krettek
> wrote:
Hi,
we're still working on making the backwards compatibility from 1.1
savepoints a reality. We have most of the code and some tests now but it
still needs some work. This is the issue that tracks the progress on the
operators that we would like to make backwards compatible:
https://issues.apache.o
Hi Steve,
I think this part of create_release_files.sh (our release script) is
helpful:
mvn clean deploy -Prelease,docs-and-source --settings deploysettings.xml
-DskipTests -Dgpg.executable=$GPG -Dgpg.keyname=$GPG_KEY
-Dgpg.passphrase=$GPG_PASSPHRASE -DretryFailedDeploymentCount=10
I think -Prele
Aljoscha Krettek created FLINK-5250:
---
Summary: Make AbstractUdfStreamOperator aware of WrappingFunction
Key: FLINK-5250
URL: https://issues.apache.org/jira/browse/FLINK-5250
Project: Flink
Aljoscha Krettek created FLINK-5240:
---
Summary: Properly Close StateBackend in StreamTask when
closing/canceling
Key: FLINK-5240
URL: https://issues.apache.org/jira/browse/FLINK-5240
Project: Flink
Aljoscha Krettek created FLINK-5237:
---
Summary: Consolidate and harmonize Window Translation Tests
Key: FLINK-5237
URL: https://issues.apache.org/jira/browse/FLINK-5237
Project: Flink
Issue
Aljoscha Krettek created FLINK-5181:
---
Summary: Add Tests in StateBackendTestBase that verify
Default-Value Behaviour
Key: FLINK-5181
URL: https://issues.apache.org/jira/browse/FLINK-5181
Project
If we move it to core, we have to untangle it from Scala, as Timo said. The
reason is that we would like to remove Scala from any user-facing API Maven
packages, and if we had it in core, everyone would have to suffix Maven
packages with the Scala version.
On Fri, 25 Nov 2016 at 16:47 Anton Solovev
Hi,
this is indeed a bug (though I would see it more as a feature since I think
using the Checkpointed interface there can indeed be problematic, as Till
pointed out). The problem is that the Scala Wrapper functions have to
implement all kinds of interfaces so that they can forward to the wrapped
f
Aljoscha Krettek created FLINK-5155:
---
Summary: Deprecate ValueStateDescriptor constructors with default
value
Key: FLINK-5155
URL: https://issues.apache.org/jira/browse/FLINK-5155
Project: Flink
Aljoscha Krettek created FLINK-5154:
---
Summary: Duplicate TypeSerializer when writing RocksDB Snapshot
Key: FLINK-5154
URL: https://issues.apache.org/jira/browse/FLINK-5154
Project: Flink
I would be for also annotating library methods/classes. Maybe Robert has a
stronger opinion on this because he introduced these annotations.
On Tue, 22 Nov 2016 at 18:56 Greg Hogan wrote:
> Hi all,
>
> Should stable APIs in Flink's CEP, ML, and Gelly libraries be annotated
> @Public or restricte
> > >> So much code changes. Can you show us the key changes code for the
> > >> object copy?
> > >> Object reference maybe hold more deep reference, it can be a bomb.
> > >> Can we renew a object with its data or direct use kryo for object
> > &
+1 That sounds excellent.
On Wed, 23 Nov 2016 at 11:04 Till Rohrmann wrote:
> +1 for your proposal.
>
> Cheers,
> Till
>
> On Wed, Nov 23, 2016 at 9:33 AM, Fabian Hueske wrote:
>
> > I agree on this one.
> > Whenever we deprecate a method or a feature we should add a comment that
> > explains t
data or direct use kryo for object
> > serialization?
> > I'm not prefer object copy.
> >
> >
> > > On Nov 22, 2016, at 20:33, Fabian Hueske wrote:
> > >
> > > Does anybody have objections against copying the first record that goes
> > >
Aljoscha Krettek created FLINK-5130:
---
Summary: Remove Deprecated Methods from WindowedStream
Key: FLINK-5130
URL: https://issues.apache.org/jira/browse/FLINK-5130
Project: Flink
Issue Type
Aljoscha Krettek created FLINK-5126:
---
Summary: Remove Checked Exceptions from State Interfaces
Key: FLINK-5126
URL: https://issues.apache.org/jira/browse/FLINK-5126
Project: Flink
Issue
Aljoscha Krettek created FLINK-5125:
---
Summary: ContinuousFileProcessingCheckpointITCase is Flaky
Key: FLINK-5125
URL: https://issues.apache.org/jira/browse/FLINK-5125
Project: Flink
Issue
That's right, yes.
On Mon, 21 Nov 2016 at 19:14 Fabian Hueske wrote:
> Right, but that would be a much bigger change than "just" copying the
> *first* record that goes into the ReduceState, or am I missing something?
>
>
> 2016-11-21 18:41 GMT+01:00 Aljoscha Kr
en updating the state, but I
> > > think it'll be possible to perform asynchronous snapshots using
> > > HeapStateBackend (probably some changes to underlying data structures
> > would
> > > be needed) - which would bring more predictable performance.
> &
Hi,
I would be in favour of this since it brings things in line with the
RocksDB backend. This will, however, come with quite the performance
overhead, depending on how fast the TypeSerializer can copy.
Cheers,
Aljoscha
On Mon, 21 Nov 2016 at 11:30 Fabian Hueske wrote:
> Hi everybody,
>
> when
Aljoscha Krettek created FLINK-5061:
---
Summary: Remove ContinuousEventTimeTrigger
Key: FLINK-5061
URL: https://issues.apache.org/jira/browse/FLINK-5061
Project: Flink
Issue Type
Hi Zhenhao,
does this happen reproducibly? What happens after the failure? Will it
retry restoring and then succeed?
I have a suspicion that Yarn could be cleaning up some files that RocksDB
expects to be there while restoring.
Cheers,
Aljoscha
On Thu, 10 Nov 2016 at 14:24 wrote:
> Hi team,
>
Hi,
I recently created https://issues.apache.org/jira/browse/FLINK-4994 to
address what I think is a flaw in the window cleanup semantics. This has
the possibility of affecting people so I'd like to get some opinions and
also give people a heads-up.
Before going into what I'm proposing in the issu
Aljoscha Krettek created FLINK-5037:
---
Summary: Instability in AbstractUdfStreamOperatorLifecycleTest
Key: FLINK-5037
URL: https://issues.apache.org/jira/browse/FLINK-5037
Project: Flink
Aljoscha Krettek created FLINK-5035:
---
Summary: Don't Write TypeSerializer to Heap State Snapshot
Key: FLINK-5035
URL: https://issues.apache.org/jira/browse/FLINK-5035
Project: Flink
Aljoscha Krettek created FLINK-5034:
---
Summary: Don't Write StateDescriptor to RocksDB Snapshot
Key: FLINK-5034
URL: https://issues.apache.org/jira/browse/FLINK-5034
Project: Flink
Hi Mike,
I like your proposal. It correctly tackles two things that beginners will
appreciate: clearer structure and information about companies that are
using Flink. Especially that last part is important because it's easier to
adopt a new piece of software if you know that other big players are
a
Aljoscha Krettek created FLINK-5026:
---
Summary: Rename TimelyFlatMap to Process
Key: FLINK-5026
URL: https://issues.apache.org/jira/browse/FLINK-5026
Project: Flink
Issue Type: Improvement
Aljoscha Krettek created FLINK-5015:
---
Summary: Add Tests/ITCase for Kafka Per-Partition Watermarks
Key: FLINK-5015
URL: https://issues.apache.org/jira/browse/FLINK-5015
Project: Flink
Aljoscha Krettek created FLINK-5012:
---
Summary: Provide Timestamp in TimelyFlatMapFunction
Key: FLINK-5012
URL: https://issues.apache.org/jira/browse/FLINK-5012
Project: Flink
Issue Type
Aljoscha Krettek created FLINK-5003:
---
Summary: Provide Access to State Stores in Operator Snapshot
Context
Key: FLINK-5003
URL: https://issues.apache.org/jira/browse/FLINK-5003
Project: Flink
Aljoscha Krettek created FLINK-5000:
---
Summary: Rename Methods in ManagedInitializationContext
Key: FLINK-5000
URL: https://issues.apache.org/jira/browse/FLINK-5000
Project: Flink
Issue
Aljoscha Krettek created FLINK-4994:
---
Summary: Don't Clear Trigger State and Merging Window Set When
Purging
Key: FLINK-4994
URL: https://issues.apache.org/jira/browse/FLINK-4994
Project:
nu Zhang wrote:
> Hi Aljoscha,
>
> Have you started working on ProcessWindowFunction ? If not, may I take this
> task ?
>
> Thanks,
> Manu
>
>
> On Wed, Nov 2, 2016 at 5:16 PM Aljoscha Krettek
> wrote:
>
> > I think we reached consensus here so I would li
Aljoscha Krettek created FLINK-4993:
---
Summary: Don't Allow Trigger.onMerge() to return TriggerResult
Key: FLINK-4993
URL: https://issues.apache.org/jira/browse/FLINK-4993
Project:
I think we reached consensus here, so I would like to mark this FLIP as
accepted. We will now proceed with implementing the first step, i.e. adding
the new ProcessWindowFunction.
On Mon, 1 Aug 2016 at 18:08 Aljoscha Krettek wrote:
> Alright, that seems reasonable. I updated the doc to add
Thanks, I'm having a look at the PR right now.
On Tue, 1 Nov 2016 at 04:57 Vishnu Viswanath
wrote:
> Hi,
>
> I have created a pull request for this:
> https://github.com/apache/flink/pull/2736
>
> Regards,
> Vishnu
>
> On Tue, Oct 18, 2016 at 3:34 AM, Alj
amespaces, even if it's simpler
> > and just a string, is the way to go.
> >
> > I'm really excited by this guys! I think the TimelyFlatMap and
> > TimelyCoFlatMap are going to get a LOT of use. This is gonna make a lot
> of
> > people happy.
> >
>
Hi,
thanks for trying to revive the discussion! I added some comments in the
doc.
Cheers,
Aljoscha
On Fri, 28 Oct 2016 at 12:05 venturadelmonte
wrote:
> Hello,
>
> I find this feature really cool because it would allow people to tackle
> scenarios requiring a more advanced "join" on multiple st
Hi Gyula,
if you look at the internal API you'll notice that it is pretty much like
your second proposal. Just for reference, the interface is roughly this:
public interface InternalTimerService {
long currentProcessingTime();
long currentWatermark();
void registerProcessingTimeTimer(N names
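A self-contained sketch of such a timer service, with manually advanced time so the firing behavior is easy to see (this is not Flink's implementation, and the names are made up):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Minimal timer service in the spirit of the interface quoted above.
// Time is advanced explicitly; firing returns the expired timestamps
// in ascending order.
class SimpleTimerService {
    private final PriorityQueue<Long> timers = new PriorityQueue<>();
    private long currentTime = Long.MIN_VALUE;

    long currentTime() { return currentTime; }

    void registerTimer(long timestamp) { timers.add(timestamp); }

    // Advancing time fires all timers with timestamp <= newTime.
    List<Long> advanceTo(long newTime) {
        currentTime = newTime;
        List<Long> fired = new ArrayList<>();
        while (!timers.isEmpty() && timers.peek() <= currentTime) {
            fired.add(timers.poll());
        }
        return fired;
    }
}
```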
>
> > +1 for the features to include.
> >
> > What is the state of the Trigger DSL? How much is left to be done before
> > merging?
> >
> > Cheers,
> > Till
> >
> > On Tue, Oct 25, 2016 at 4:32 PM, Aljoscha Krettek
> > wrote:
> >
Aljoscha Krettek created FLINK-4959:
---
Summary: Write Documentation for Timely FlatMap Functions
Key: FLINK-4959
URL: https://issues.apache.org/jira/browse/FLINK-4959
Project: Flink
Issue
Aljoscha Krettek created FLINK-4957:
---
Summary: Provide API for TimelyCoFlatMapFunction
Key: FLINK-4957
URL: https://issues.apache.org/jira/browse/FLINK-4957
Project: Flink
Issue Type: Test
Aljoscha Krettek created FLINK-4955:
---
Summary: Add Translations Tests for
KeyedStream.flatMap(TimelyFlatMapFunction)
Key: FLINK-4955
URL: https://issues.apache.org/jira/browse/FLINK-4955
Project
Aljoscha Krettek created FLINK-4940:
---
Summary: Add Support for Broadcast/Global State
Key: FLINK-4940
URL: https://issues.apache.org/jira/browse/FLINK-4940
Project: Flink
Issue Type: Sub
Aljoscha Krettek created FLINK-4924:
---
Summary: Simplify Operator Test Harness Constructors
Key: FLINK-4924
URL: https://issues.apache.org/jira/browse/FLINK-4924
Project: Flink
Issue Type
+1 to the schedule proposed so far.
Do we also want to get in the "Trigger DSL" that we've had brewing for a
while now?
On Mon, 17 Oct 2016 at 16:17 Stephan Ewen wrote:
> I think this sounds very reasonable, +1 to the schedule.
>
> I would definitely add
> - FLIP-10 (unify checkpoints and savepo
Aljoscha Krettek created FLINK-4907:
---
Summary: Add Test for Timers/State Provided by
AbstractStreamOperator
Key: FLINK-4907
URL: https://issues.apache.org/jira/browse/FLINK-4907
Project: Flink
or's state in hdfs I assume
> that on failure you are
> restarting the operator's tasks on other nodes, is that possible with
> RocksDB?
>
> Best,
> Ovidiu
>
> -----Original Message-----
> From: Aljoscha Krettek [mailto:aljos...@apache.org]
> Sent: Monday, Octobe
Hi Anton,
executeOnCollection() is only meant for executing Flink Jobs in the local
machine without bringing up a local (or actual) Flink cluster. So solving
the problem there does not really solve the problem.
The underlying problem is this: in a Map-Reduce world the way to count
elements of type
Aljoscha Krettek created FLINK-4892:
---
Summary: Snapshot TimerService using Key-Grouped State
Key: FLINK-4892
URL: https://issues.apache.org/jira/browse/FLINK-4892
Project: Flink
Issue Type
Aljoscha Krettek created FLINK-4884:
---
Summary: Eagerly Store MergingWindowSet in State in WindowOperator
Key: FLINK-4884
URL: https://issues.apache.org/jira/browse/FLINK-4884
Project: Flink
Hi,
the problem is that EvictingWindowOperator uses StreamRecordSerializer to
serialise the contents of the windows. This does not serialise timestamps
so when the objects are deserialised from RocksDB they all have
Long.MIN_VALUE as timestamp. The evictor in the program therefore always
evicts all
Aljoscha Krettek created FLINK-4877:
---
Summary: Refactorings around FLINK-3674 (User Function Timers)
Key: FLINK-4877
URL: https://issues.apache.org/jira/browse/FLINK-4877
Project: Flink
Aljoscha Krettek created FLINK-4866:
---
Summary: Make Trigger.clear() Abstract to Enforce Implementation
Key: FLINK-4866
URL: https://issues.apache.org/jira/browse/FLINK-4866
Project: Flink
Aljoscha Krettek created FLINK-4859:
---
Summary: Clearly Separate Responsibilities of StreamOperator and
StreamTask
Key: FLINK-4859
URL: https://issues.apache.org/jira/browse/FLINK-4859
Project
Aljoscha Krettek created FLINK-4858:
---
Summary: Remove Legacy Checkpointing Interfaces
Key: FLINK-4858
URL: https://issues.apache.org/jira/browse/FLINK-4858
Project: Flink
Issue Type
Perfect! Then it's pretty much what we discussed here:
https://issues.apache.org/jira/browse/FLINK-3947 and I'm very much in
favour of that. Just the implementation of RocksDB could be a bit tricky
but it should be doable.
Cheers,
Aljoscha
On Wed, 19 Oct 2016 at 11:43 Jark Wu wrote:
> Hi Xiaoga
Hi,
just making sure I understand this correctly. Would the MapState keys be
the same keys as the ones provided when creating the KeyedStream, or a
different key?
As an example, would it be like this:
DataStream<...> input = ...;
KeyedStream<...> keyed = input.keyBy(0);
keyed.map( Tuple2 input -> mapState.pu
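The question above (stream key vs. MapState user key) can be modeled with two nested levels of keys; everything here is an illustrative sketch, not the Flink API — the outer key plays the role of the KeyedStream key that selects the state instance, and the inner key is the MapState's own user key:

```java
import java.util.HashMap;
import java.util.Map;

// Outer key = stream key (selects the state instance);
// inner key = the MapState's own user key. Illustrative only.
class ToyMapState<K, UK, UV> {
    private final Map<K, Map<UK, UV>> state = new HashMap<>();

    void put(K streamKey, UK userKey, UV value) {
        state.computeIfAbsent(streamKey, k -> new HashMap<>())
             .put(userKey, value);
    }

    UV get(K streamKey, UK userKey) {
        Map<UK, UV> perKey = state.get(streamKey);
        return perKey == null ? null : perKey.get(userKey);
    }
}
```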
another class for holding (timestamp,value)
> tuple?
>
> Regards,
> Vishnu
>
> On Mon, Oct 17, 2016 at 4:19 AM, Aljoscha Krettek
> wrote:
>
> > Hi Vishnu,
> > what you suggested is spot on! Please go forward with it like this.
> >
> > One small sugge
https://ci.apache.org/projects/flink/flink-docs-release-1.2/monitoring/metrics.html
Or this:
https://ci.apache.org/projects/flink/flink-docs-release-1.1/apis/metrics.html
if you prefer Flink 1.1.
On Mon, 17 Oct 2016 at 19:16 amir bahmanyari wrote:
> Hi colleagues,
> Is there a link that described
s that we have to somehow get at the
> state in redis for checkpointing. And if we use only one Redis instance for
> all states then this will be problematic." - Aljoscha Krettek
>
> Any other update on this issue will help, not clear the status.
>
> Best,
> Ovidiu
>
>
;
> > vishnu.viswanat...@gmail.com> wrote:
> >
> > > Thank you Aljoscha,
> > >
> > > Yes, I agree we don't need ProcessingTimeEvcitor.
> > > I will change the current TimeEvictors to use EventTimeEvictor as
> > > suggested.
> > >
>
+Konstantin Knauf, looping you in directly
because you used the "delete timer" feature in the past and even made some
changes to the timer system. Are you still relying on the fact that deleted
timers are actually deleted?
The main reason for wanting to get rid of delete timer is IMHO that
deleting
Hi Folks,
I'm in the process of implementing
https://issues.apache.org/jira/browse/FLINK-3674 and now
I'm having a bit of a problem with deciding how watermarks should be
treated for operators that have more than one input.
The problem is deciding when to fire event-time timers. For one-input
oper
.startObject("raw")
> .field("type","string")
> .field("index", "not_analyzed")
> .endObject()
> .endObject()
> .endObject()
>
>
switched to FAILED
>
>
> We try to add mapping request to elastic search. We cannot access to
> client attribute (it is private) in elasticsearch class.
>
>
> Is there any way to overcome this problem.
>
>
> Thanks,
>
>
> Ozan
>
>
>
> __
Aljoscha Krettek created FLINK-4675:
---
Summary: Remove Parameter from WindowAssigner.getDefaultTrigger()
Key: FLINK-4675
URL: https://issues.apache.org/jira/browse/FLINK-4675
Project: Flink
I don't think it's that easy. The streaming connectors have flink-streaming
as a dependency while the batch connectors have the batch dependencies.
Combining them would mean that users always have all dependencies, right?
On Thu, 22 Sep 2016 at 15:41 Stephan Ewen wrote:
> +1 for Fabian's suggesti
Hi,
there is ClusterClient.getAccumulators(JobID jobID) which should be able to
get the accumulators for a running job. If you can construct a
ClusterClient that should be a good solution.
Cheers,
Aljoscha
On Wed, 21 Sep 2016 at 21:15 Chawla,Sumit wrote:
> Hi Sean
>
> My goal here is to get Use
eparate it from this FLIP
> and create a JIRA for it, what do you say?
>
> Please let me know your thoughts.
>
> Regards,
> Vishnu
>
> On Sun, Jul 31, 2016 at 12:07 PM, Aljoscha Krettek
> wrote:
>
> > Hi,
> > regarding a), b) and c): The WindowOperato
lly), and I would very much like to
> work on this now if I can justify it. If not, I would still very much like
> to work on this, but the timing will have to be different.
>
> Again, thank you Aljoscha, and I apologize for the rushed nature of my
> situation.
>
> Best,
>
?
>
> Dan
>
> On Mon, Sep 12, 2016 at 11:33 PM Aljoscha Krettek
> wrote:
>
> > Hi,
> > yes you guessed correctly: CheckpointedAsynchronously only works with
> > functions and not with the lower-level StreamOperator. You would have to
> > implement snapshotOp
Hi AJ,
the idea for evictors initially came from IBM Infosphere Streams, if I'm
not mistaken:
http://www.ibm.com/support/knowledgecenter/SSCRJU_4.0.0/com.ibm.streams.dev.doc/doc/windowhandling.html
The first version of the windowing system used a combination of
triggers/evictors to do the windowing
Hi,
yes you guessed correctly: CheckpointedAsynchronously only works with
functions and not with the lower-level StreamOperator. You would have to
implement snapshotOperatorState() and restoreState(). These interfaces are
quite low-level, though, and not stable. For example, in Flink 1.2 we're
refa
Aljoscha Krettek created FLINK-4602:
---
Summary: Move RocksDB backed to proper package
Key: FLINK-4602
URL: https://issues.apache.org/jira/browse/FLINK-4602
Project: Flink
Issue Type: Sub
Aljoscha Krettek created FLINK-4589:
---
Summary: Fix Merging of Covering Window in MergingWindowSet
Key: FLINK-4589
URL: https://issues.apache.org/jira/browse/FLINK-4589
Project: Flink
Issue
Aljoscha Krettek created FLINK-4588:
---
Summary: Fix Merging of Covering Window in MergingWindowSet
Key: FLINK-4588
URL: https://issues.apache.org/jira/browse/FLINK-4588
Project: Flink
Issue
Aljoscha Krettek created FLINK-4579:
---
Summary: Add StateBackendFactory for RocksDB Backend
Key: FLINK-4579
URL: https://issues.apache.org/jira/browse/FLINK-4579
Project: Flink
Issue Type
+1
I went over all the changes that we introduced since 1.1.1 and they look
good.
On Wed, 31 Aug 2016 at 14:53 Maximilian Michels wrote:
> Found a minor bug for detached job submissions but I wouldn't cancel
> the release for it: https://issues.apache.org/jira/browse/FLINK-4540
>
> On Wed, Aug
Our recent changes to make keyed state rescalable/key-group aware are
breaking queryable state because it is not yet made key-group aware. I
opened this Jira issue to track the fix for that:
https://issues.apache.org/jira/browse/FLINK-4556.
Sorry for the inconvenience.
Cheers,
Aljoscha
Aljoscha Krettek created FLINK-4556:
---
Summary: Make Queryable State Key-Group Aware
Key: FLINK-4556
URL: https://issues.apache.org/jira/browse/FLINK-4556
Project: Flink
Issue Type
+1 If you think it worthwhile you can add it to the template(s).
On Thu, 1 Sep 2016 at 10:38 Fabian Hueske wrote:
> Hi,
>
> I'm currently preparing a FLIP for Table API streaming aggregates and
> noticed that there is no section about how the task can be divided into
> subtasks.
>
> I think it w