Gary Yao created FLINK-8754:
---
Summary: TaskManagerInfo is not serializable
Key: FLINK-8754
URL: https://issues.apache.org/jira/browse/FLINK-8754
Project: Flink
Issue Type: Bug
Sihua Zhou created FLINK-8753:
---
Summary: Introduce Incremental savepoint
Key: FLINK-8753
URL: https://issues.apache.org/jira/browse/FLINK-8753
Project: Flink
Issue Type: New Feature
Is the `flink-connector-filesystem` connector supposed to work with the
latest hadoop-free Flink releases, say along with the `flink-s3-fs-presto`
filesystem implementation?
-Jamie
Elias Levy created FLINK-8752:
---
Summary: ClassNotFoundException when using the user code class
loader
Key: FLINK-8752
URL: https://issues.apache.org/jira/browse/FLINK-8752
Project: Flink
Issue
Elias Levy created FLINK-8751:
---
Summary: Canceling a job results in an InterruptedException in the
JM
Key: FLINK-8751
URL: https://issues.apache.org/jira/browse/FLINK-8751
Project: Flink
Issue
Till Rohrmann created FLINK-8749:
Summary: Release slots when scheduling operation is canceled in
ExecutionGraph
Key: FLINK-8749
URL: https://issues.apache.org/jira/browse/FLINK-8749
Project: Flink
zhijiang created FLINK-8747:
---
Summary: The tag of waiting for floating buffers in
RemoteInputChannel should be updated properly
Key: FLINK-8747
URL: https://issues.apache.org/jira/browse/FLINK-8747
Till Rohrmann created FLINK-8746:
Summary: Support rescaling of jobs which are not fully running
Key: FLINK-8746
URL: https://issues.apache.org/jira/browse/FLINK-8746
Project: Flink
Issue
Right now, I don’t think there is a way of doing that. I don’t think there is
anything fundamental against having a method that drops a state completely, data
and registered meta data. But so far that never existed and it seems nobody
ever needed it (or asked for it at least). The closest thing
I had a quick look at it, and we could do that, even for RocksDB: the method
does a meta data lookup similar to what state registration does, removes the
meta data, and drops the column family. But until then, there is currently no
way of completely dropping a keyed state.
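The drop procedure sketched above (meta data lookup, remove the meta data, drop the backing column family) could look roughly like the following. This is a minimal toy model in plain Java, not Flink's actual API; all class and method names here are illustrative assumptions, and the RocksDB column family is modeled as a plain map.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a hypothetical "drop keyed state" operation for a
// RocksDB-style backend. Illustrative only -- not Flink code.
class KeyedStateRegistry {
    // registered state meta data, keyed by state name
    private final Map<String, Object> metaData = new HashMap<>();
    // one "column family" of data per state, keyed by state name
    private final Map<String, Map<Object, Object>> columnFamilies = new HashMap<>();

    void registerState(String name) {
        // mirrors the meta data lookup done on state registration
        metaData.putIfAbsent(name, new Object());
        columnFamilies.putIfAbsent(name, new HashMap<>());
    }

    boolean dropState(String name) {
        // same lookup as registration, then remove the meta data ...
        if (metaData.remove(name) == null) {
            return false; // state was never registered
        }
        // ... and drop the column family holding the state's data
        columnFamilies.remove(name);
        return true;
    }

    boolean exists(String name) {
        return metaData.containsKey(name);
    }
}
```

The point of routing the drop through the same meta data lookup as registration is that both data and registered meta data go away together, which is exactly what the thread says does not exist today.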
> Am 22.02.2018 um 12:19 schrieb
Do you have any suggestion how to completely delete an operator and keyed
state?
For operator state this seems to be easy enough, but what about completely
dropping a keyed state?
Gyula
Stefan Richter wrote (date: Thu, 22 Feb 2018, 11:46):
>
> Hi,
>
> I
Hi,
I don’t think that this is a bug, but rather a necessity that comes with the
(imo questionable) design of allowing lazy state registration. In this design,
just because a state is *currently* not registered does not mean that you can
simply drop it. Imagine that your code did *not yet*
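The consequence of lazy registration described above can be shown with a small model: because a state handle is only created on first access, the set of *registered* states at any moment is just a subset of the states whose data actually exists in the backend. This is plain Java with made-up names, not Flink's state backend API.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of lazy state registration. Illustrative only.
class LazyStateBackend {
    // data restored from a checkpoint -- may exist with no registration
    private final Map<String, Map<Object, Object>> storedData = new HashMap<>();
    // states the running job has actually registered so far
    private final Set<String> registered = new HashSet<>();

    void restore(String name, Map<Object, Object> data) {
        storedData.put(name, data);
    }

    // lazy registration: the handle is created on the first access
    Map<Object, Object> getState(String name) {
        registered.add(name);
        return storedData.computeIfAbsent(name, k -> new HashMap<>());
    }

    boolean isRegistered(String name) {
        return registered.contains(name);
    }

    boolean hasData(String name) {
        return storedData.containsKey(name);
    }
}
```

Right after restore, a state can have data while being unregistered, which is why "currently not registered" is not a safe signal for dropping it: the code may simply not have accessed it *yet*.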
The reason they didn't catch this is that the bug only occurs if users use a
custom timestamp/watermark assigner. But yes, we should be able to extend the
end-to-end tests to catch this.
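For context, a custom timestamp/watermark assigner of the kind that triggers the classloading bug is a small user-supplied class along these lines. This is a self-contained model of the common "max timestamp minus allowed lateness" pattern, with illustrative names rather than Flink's actual assigner interfaces.

```java
// Toy model of a periodic watermark assigner a user might supply to the
// Kafka consumer. Such a class lives in the user jar and must be loaded
// via the user code classloader. Illustrative names only.
class BoundedLatenessAssigner {
    private final long maxLatenessMs;
    private long maxSeenTimestamp = Long.MIN_VALUE;

    BoundedLatenessAssigner(long maxLatenessMs) {
        this.maxLatenessMs = maxLatenessMs;
    }

    // called per element; tracks the highest timestamp seen so far
    long extractTimestamp(long elementTimestamp) {
        maxSeenTimestamp = Math.max(maxSeenTimestamp, elementTimestamp);
        return elementTimestamp;
    }

    // the watermark trails the max timestamp by the allowed lateness
    long currentWatermark() {
        return maxSeenTimestamp == Long.MIN_VALUE
                ? Long.MIN_VALUE
                : maxSeenTimestamp - maxLatenessMs;
    }
}
```

An end-to-end test that wires a class like this into the consumer would exercise the user-code-classloader path that the default (no custom assigner) setup never touches.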
> On 22. Feb 2018, at 11:05, Till Rohrmann wrote:
>
> If the Kafka connectors are
While we're on this:
https://beam.apache.org/blog/2017/08/16/splittable-do-fn.html
This is a concrete way of separating partition/shard/split discovery from their
reading. The nice thing about this is that you can mix-and-match "discovery
components" and "reader components". For example, for
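The mix-and-match idea above can be sketched as two independent interfaces: one that discovers partitions/shards/splits, and one that reads a single split. These interfaces are an illustrative assumption for this sketch, not Beam's Splittable DoFn API or Flink's source interfaces.

```java
import java.util.ArrayList;
import java.util.List;

// "Discovery component": lists the splits (partitions, shards, files...)
interface SplitDiscoverer<S> {
    List<S> discoverSplits();
}

// "Reader component": consumes one split
interface SplitReader<S, T> {
    List<T> read(S split);
}

// Either side can be swapped independently: e.g. a Kafka partition
// discoverer paired with different deserializing readers.
class ComposedSource<S, T> {
    private final SplitDiscoverer<S> discoverer;
    private final SplitReader<S, T> reader;

    ComposedSource(SplitDiscoverer<S> discoverer, SplitReader<S, T> reader) {
        this.discoverer = discoverer;
        this.reader = reader;
    }

    List<T> run() {
        List<T> out = new ArrayList<>();
        for (S split : discoverer.discoverSplits()) {
            out.addAll(reader.read(split));
        }
        return out;
    }
}
```

Keeping discovery and reading behind separate interfaces is what makes the combinations composable: the same discoverer works with any reader for that split type.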
+1
> On 22. Feb 2018, at 11:13, Till Rohrmann wrote:
>
> +1
>
> Cheers,
> Till
>
> On Wed, Feb 21, 2018 at 11:40 PM, Bowen Li wrote:
>
>> +1.
>>
>> flink-contrib is an experimental area; docker-flink should graduate from it.
>> This also helps to
+1
Cheers,
Till
On Wed, Feb 21, 2018 at 11:40 PM, Bowen Li wrote:
> +1.
>
> flink-contrib is an experimental area; docker-flink should graduate from it.
> This also helps to clean up flink-contrib/
>
>
>
> On Wed, Feb 21, 2018 at 1:41 PM, Stephan Ewen
Hi all,
We have discovered a fairly serious memory leak
in DefaultOperatorStateBackend, with broadcast (union) list states.
The problem seems to occur when a broadcast state name is changed, in order
to drop some state (intentionally).
Flink does not drop the "garbage" broadcast state, and
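The leak pattern reported above can be reproduced with a small model: if restore carries over every state found in the snapshot, a broadcast state that the new job version never registers again under its old name is still kept (and re-snapshotted) forever. This is a toy model in plain Java with invented names, not DefaultOperatorStateBackend itself.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the reported leak. Illustrative only.
class OperatorStateBackendModel {
    private final Map<String, Object> states = new HashMap<>();

    // registering a broadcast (union) list state under a name
    void getUnionListState(String name) {
        states.putIfAbsent(name, new Object());
    }

    // restore copies every state from the snapshot, including "garbage"
    // states the restarted job never registers again
    void restoreFrom(OperatorStateBackendModel snapshot) {
        states.putAll(snapshot.states);
    }

    int stateCount() {
        return states.size();
    }
}
```

After a rename meant to drop state, the backend ends up holding both the old and the new state, and every subsequent snapshot/restore cycle carries the orphaned entry along.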
If the Kafka connectors are unusable with 1.4.1, then I would be in favor
of releasing 1.4.2 asap.
I'm wondering why the end-to-end Kafka tests did not catch this problem.
Maybe we could adapt them such that they guard against it in the future.
Cheers,
Till
On Thu, Feb 22, 2018 at 9:46 AM,
Chesnay Schepler created FLINK-8744:
---
Summary: Add annotations for documentation common/advanced options
Key: FLINK-8744
URL: https://issues.apache.org/jira/browse/FLINK-8744
Project: Flink
Chesnay Schepler created FLINK-8743:
---
Summary: Add annotation to override documented default
Key: FLINK-8743
URL: https://issues.apache.org/jira/browse/FLINK-8743
Project: Flink
Issue
Chesnay Schepler created FLINK-8742:
---
Summary: Move ConfigDocsGenerator annotation to flink-annotations
Key: FLINK-8742
URL: https://issues.apache.org/jira/browse/FLINK-8742
Project: Flink
Hi all,
Unfortunately, we've discovered a bug in 1.4.1, which suggests that we
should almost immediately release another bugfix release:
https://issues.apache.org/jira/browse/FLINK-8741.
Since this issue was introduced only in 1.4.1, it might make sense to
release 1.4.2 with only the fix for
Tzu-Li (Gordon) Tai created FLINK-8741:
---
Summary: KafkaFetcher09/010/011 uses wrong user code classloader
Key: FLINK-8741
URL: https://issues.apache.org/jira/browse/FLINK-8741
Project: Flink