Linyu Yao created FLINK-20904:
Summary: Maven enforce goal dependency-convergence failed on
flink-avro-glue-schema-registry
Key: FLINK-20904
URL: https://issues.apache.org/jira/browse/FLINK-20904
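For readers hitting the same enforcer failure: dependency-convergence errors of this kind are usually resolved by pinning the diverging transitive dependency to a single version in `dependencyManagement`. A minimal sketch (the artifact and version below are illustrative assumptions, not the actual offender in this issue):

```xml
<!-- Hypothetical example: pin one version of a conflicting transitive
     dependency so the enforcer's dependencyConvergence rule sees a
     single version across the dependency tree. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.12.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```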
Hi Arvid,
I saw that as soon as I sent the mail :)
Sorry all !!
Carsten
On 08.01.21 at 13:10, Arvid Heise wrote:
Hi Carsten,
you probably picked the wrong dev list. You probably want to go to the
Ignite user list instead.
Best,
Arvid
On Fri, Jan 8, 2021 at 3:53 PM Carsten wrote:
> Hello all,
>
> after an install marathon last night, I was ready to test Ignite
> 2.9.1 + GridGain Control Center
Hi Till,
IIUC, for application mode we already allow running user code in the JobManager.
On Fri, Jan 8, 2021 at 9:53 PM, Till Rohrmann wrote:
> At the moment, this requirement has not come up very often. In general, I
> am always a bit cautious when adding functionality which executes user code
> in the
Till Rohrmann created FLINK-20903:
Summary: Remove SchedulerNG.initialize method
Key: FLINK-20903
URL: https://issues.apache.org/jira/browse/FLINK-20903
Project: Flink
Issue Type:
Yes, I meant true general purpose exactly-once :)
> There are some ideas about using a WAL (write ahead log) and then
> periodically "shipping" that to Kafka but nothing concrete.
But that would still need to use Kafka transactions for "shipping"
records. That's what I meant: one way or another, transactions are needed.
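The WAL-then-ship idea quoted above can be sketched as follows (a toy sketch, not Flink or Kafka code): records accumulate cheaply in a local write-ahead log, and a periodic "ship" step moves the whole batch to the sink atomically, which is exactly where a transactional write would still be required.

```python
# Toy sketch of the WAL-then-ship idea (not Flink code): records are
# appended to a local write-ahead log, then periodically shipped to the
# sink as one all-or-nothing batch.
class WalProducer:
    def __init__(self, sink):
        self.wal = []      # local write-ahead log
        self.sink = sink   # stands in for the Kafka topic

    def write(self, record):
        self.wal.append(record)   # cheap local append, no sink interaction

    def ship(self):
        # Must be atomic: a partially shipped batch would break exactly-once,
        # which is why a transactional write is still needed here.
        batch, self.wal = self.wal, []
        self.sink.extend(batch)

sink = []
p = WalProducer(sink)
p.write("a")
p.write("b")
p.ship()
print(sink)  # -> ['a', 'b']
```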
Hello all,
after an install marathon last night, I was ready to test Ignite
2.9.1 + GridGain Control Center 2020.12.
But when starting everything (4 nodes + control center) I got the
message "Core limit has been exceeded for the current license".
I was just wondering what the core
Suryanarayana Murthy Maganti created FLINK-20902:
Summary: Please remove my email id
Key: FLINK-20902
URL: https://issues.apache.org/jira/browse/FLINK-20902
Project: Flink
Great! Thanks for pushing this work.
Looking forward to the pull requests.
Best,
Jark
On Fri, 8 Jan 2021 at 17:57, Sebastian Liu wrote:
> Hi Jark,
>
> Cool, following your suggestions I have created three related subtasks
> under Flink-20791.
> Hope to assign these subtasks to me too, when you
Till Rohrmann created FLINK-20901:
Summary: Introduce DeclarativeSlotPool methods to set resource
requirements to absolute values
Key: FLINK-20901
URL: https://issues.apache.org/jira/browse/FLINK-20901
At the moment, this requirement has not come up very often. In general, I
am always a bit cautious when adding functionality which executes user code
in the JobManager because it can easily become a stability problem. On the
other hand, I can't think of a different solution other than polling the
Matthias created FLINK-20900:
Summary: Extend documentation guidelines to cover formatting of
commands
Key: FLINK-20900
URL: https://issues.apache.org/jira/browse/FLINK-20900
Project: Flink
Till or Chesnay (cc'ed), have you thought about adding a hook on the
JobMaster/JobManager to allow external systems to get push notifications
about submitted jobs?
If they are OK with such a feature, would you maybe be interested in
implementing it yourself, Wenhao?
Best,
Aljoscha
On
godfrey he created FLINK-20899:
Summary: encounter ClassCastException when calculating cost in
HepPlanner
Key: FLINK-20899
URL: https://issues.apache.org/jira/browse/FLINK-20899
Project: Flink
Hi Nicholas,
Thanks for starting the discussion!
I think we might be able to simplify this a bit and re-use existing
functionality.
There are already `Source.restoreEnumerator()` and
`SplitEnumerator.snapshotState()`. This seems to be roughly what the
Hybrid Source needs. When the initial
On 2021/01/08 10:00, Piotr Nowojski wrote:
Moreover I don't think there is a way to implement an exactly-once producer
without some use of transactions one way or another.
There are some ways I can think of. If messages have consistent IDs, we
could check whether a message is already in Kafka
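The consistent-ID idea mentioned above can be illustrated with a minimal sketch (a simulation, not real Kafka client code): if every message carries a stable, producer-assigned ID, a resend after a failure can be deduplicated by checking whether that ID was already written.

```python
# Minimal simulation of dedup-by-ID (not real Kafka code): a resend after
# a crash is detected and dropped because its ID was already written.
class IdempotentSink:
    def __init__(self):
        self.log = []          # stands in for the Kafka topic
        self.seen_ids = set()  # stands in for "is this message already in Kafka?"

    def produce(self, msg_id, payload):
        if msg_id in self.seen_ids:
            return False       # duplicate resend after a failure: skip it
        self.log.append((msg_id, payload))
        self.seen_ids.add(msg_id)
        return True

sink = IdempotentSink()
sink.produce(1, "a")
sink.produce(2, "b")
sink.produce(1, "a")           # retry of message 1 is deduplicated
print(len(sink.log))           # -> 2
```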
On 2021/01/07 14:17, Pramod Immaneni wrote:
Is there a Kafka producer that can do exactly-once semantics without the use
of transactions?
I'm afraid not right now. There are some ideas about using a WAL (write
ahead log) and then periodically "shipping" that to Kafka but nothing
concrete.
Sebastian Liu created FLINK-20898:
Summary: Code of BatchExpand & LocalNoGroupingAggregateWithoutKeys
grows beyond 64 KB
Key: FLINK-20898
URL: https://issues.apache.org/jira/browse/FLINK-20898
Timo Walther created FLINK-20897:
Summary: Support DataStream batch mode in StreamTableEnvironment
Key: FLINK-20897
URL: https://issues.apache.org/jira/browse/FLINK-20897
Project: Flink
Hi Jark,
Cool, following your suggestions I have created three related subtasks
under Flink-20791.
Hope to assign these subtasks to me too, when you have time. And I
will push forward the relevant implementation.
On Fri, Jan 8, 2021 at 12:30 PM, Jark Wu wrote:
> Hi Sebastian,
>
> I assigned the issue to you.
Sebastian Liu created FLINK-20896:
Summary: [Local Agg Pushdown] Support SupportsAggregatePushDown
for JDBC TableSource
Key: FLINK-20896
URL: https://issues.apache.org/jira/browse/FLINK-20896
Sebastian Liu created FLINK-20895:
Summary: [Local Agg Pushdown] Support LocalAggregatePushDown in
Blink planner
Key: FLINK-20895
URL: https://issues.apache.org/jira/browse/FLINK-20895
Project:
Sebastian Liu created FLINK-20894:
Summary: [Local Agg Pushdown] Introduce SupportsAggregatePushDown
interface
Key: FLINK-20894
URL: https://issues.apache.org/jira/browse/FLINK-20894
Project: Flink
Hi Pramod,
Moreover I don't think there is a way to implement an exactly-once producer
without some use of transactions one way or another.
Best,
Piotrek
On Fri, Jan 8, 2021 at 09:34, Till Rohrmann wrote:
> Hi Pramod,
>
> Flink's Kafka connector uses transactions in order to support exactly once
Hi Pramod,
Flink's Kafka connector uses transactions in order to support exactly-once
semantics.
Cheers,
Till
On Thu, Jan 7, 2021 at 11:17 PM Pramod Immaneni wrote:
> Is there a Kafka producer that can do exactly-once semantics without the use
> of transactions?
>
> Thanks
>