FaxianZhao created FLINK-12309:
--
Summary: JDBCOutputFormat and JDBCAppendTableSink float behavior
is not aligned
Key: FLINK-12309
URL: https://issues.apache.org/jira/browse/FLINK-12309
Project: Flink
Sounds reasonable to me. If it is a broken feature, then there is not much
value in it.
On Tue, Apr 23, 2019 at 7:50 PM Gary Yao wrote:
> Hi all,
>
> As the subject states, I am proposing to temporarily remove support for
> changing the parallelism of a job via the following syntax [1]:
>
>
Hi Till,
IMHO, allowing hooks to be added involves 2 steps.
1. Provide the hook interface, and call these hooks in Flink (ClusterClient) at
the right place. This should be done by the framework (Flink).
2. Implement new hook implementations and add/register them into the
framework (Flink).
What I am doing is step 1, which
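The two steps described above could be sketched as follows. This is a minimal illustration only; none of these names are actual Flink APIs.

```java
// Hypothetical sketch of the two-step hook mechanism discussed above.
// Step 1: the framework defines the interface and calls hooks at the
// right place. Step 2: users implement hooks and register them.
// All names here are illustrative, not actual Flink APIs.
import java.util.ArrayList;
import java.util.List;

interface SubmissionHook {
    void beforeSubmit(String jobName);
    void afterSubmit(String jobName);
}

class HookRegistry {
    private final List<SubmissionHook> hooks = new ArrayList<>();

    // Step 2: implementations are registered into the framework.
    void register(SubmissionHook hook) {
        hooks.add(hook);
    }

    // Step 1: the framework (e.g. a client) invokes all hooks at
    // the appropriate point in the submission lifecycle.
    void fireBeforeSubmit(String jobName) {
        for (SubmissionHook h : hooks) {
            h.beforeSubmit(jobName);
        }
    }
}
```

The key design point is that only step 1 needs framework changes; step 2 is user code.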
Hi Ken,
It’s a bad story for us: even for a small window we have dozens of thousands of
events per job, with 10x in peaks or even more. And the number of jobs was known
to be high. So instead of N operations (our producer/consumer mechanism) with
shuffle/resorting (the current Flink implementation) it w
Thank you so much for driving this, really great job!
About Flink internals: Agreed, that is not a good match. It needs deep
technical understanding. Also, the benefit of having a technical writer
(making this easily understandable to less involved developers) is not as
helpful there.
Also regis
sunjincheng created FLINK-12308:
---
Summary: Support python language in Flink Table API
Key: FLINK-12308
URL: https://issues.apache.org/jira/browse/FLINK-12308
Project: Flink
Issue Type: New Feat
Jing Zhang created FLINK-12307:
--
Summary: Support translation from StreamExecWindowJoin to
StreamTransformation.
Key: FLINK-12307
URL: https://issues.apache.org/jira/browse/FLINK-12307
Project: Flink
Hi Till,
Thanks for your insightful and valuable comments!
The introduction of the driver dispatcher is a functionality extension of
the currently existing dispatchers, and involves some changes to the runtime. It
is mainly to manage the ephemeral session cluster lifecycle when a user
detachedly submits an a
Kejian Li created FLINK-12306:
-
Summary: Change the name of variable "log" to Upper case "LOG"
Key: FLINK-12306
URL: https://issues.apache.org/jira/browse/FLINK-12306
Project: Flink
Issue Type: I
Alex Barnes created FLINK-12305:
---
Summary: Table API Clarification
Key: FLINK-12305
URL: https://issues.apache.org/jira/browse/FLINK-12305
Project: Flink
Issue Type: Improvement
Compo
Hi all!
Below are my notes on the discussion last week on how to collaborate
between Beam and Flink.
The discussion was between Tyler, Kenn, Luke, Ahmed, Xiaowei, Shaoxuan,
Jincheng, and me.
This represents my understanding of the discussion, please augment this
where I missed something or where
Hi all,
As the subject states, I am proposing to temporarily remove support for
changing the parallelism of a job via the following syntax [1]:
./bin/flink modify [job-id] -p [new-parallelism]
This is an experimental feature that we introduced with the first rollout of
FLIP-6 (Flink 1.5). Ho
John created FLINK-12304:
Summary: AvroInputFormat should support schema evolution
Key: FLINK-12304
URL: https://issues.apache.org/jira/browse/FLINK-12304
Project: Flink
Issue Type: Bug
Com
It would be awesome to get the DEBUG logs for JobMaster,
ZooKeeper, ZooKeeperCompletedCheckpointStore,
ZooKeeperStateHandleStore, CheckpointCoordinator.
Cheers,
Till
On Tue, Apr 23, 2019 at 2:37 PM Dyana Rose wrote:
> may take me a bit to get the logs as we're not always in a situation where
>
Hi everyone!
Ververica is running a brief survey to understand Apache Flink usage
and the needs of the community. We are hoping that this survey will help
identify common usage patterns, as well as pinpoint what are the most
needed features for Flink.
We'll share a report with a summary of findin
may take me a bit to get the logs as we're not always in a situation where
we've got enough hands free to run through the scenarios for a day.
Is that DEBUG JobManager, DEBUG ZooKeeper, or both you'd be interested in?
Thanks,
Dyana
On Tue, 23 Apr 2019 at 13:23, Till Rohrmann wrote:
> Hi Dyana,
Hi Dyana,
your analysis is almost correct. The only missing part is that the
lock nodes are created as ephemeral nodes. This should ensure that if a JM
process dies, the lock nodes will get removed by ZooKeeper. It depends
a bit on ZooKeeper's configuration how long it takes until Zk
Thanks for proposing this design document Shuiqiang. It is a very
interesting idea how to solve the problem of running multiple Flink jobs as
part of a single application. I like the idea since it does not require
many runtime changes apart from a session concept on the Dispatcher and it
would work
Matěj Novotný created FLINK-12303:
-
Summary: Scala 2.12 lambdas do not work in event classes inside
streams.
Key: FLINK-12303
URL: https://issues.apache.org/jira/browse/FLINK-12303
Project: Flink
lamber-ken created FLINK-12302:
--
Summary: fix the finalStatus of the application when the job has not finished
Key: FLINK-12302
URL: https://issues.apache.org/jira/browse/FLINK-12302
Project: Flink
Issue T
Michael created FLINK-12301:
---
Summary: Scala value classes cannot be serialized anymore in case
classes in Flink 1.8.0
Key: FLINK-12301
URL: https://issues.apache.org/jira/browse/FLINK-12301
Project: Flink
Michael created FLINK-12300:
---
Summary: Unused Import warning with DataStream[Seq[...]]
Key: FLINK-12300
URL: https://issues.apache.org/jira/browse/FLINK-12300
Project: Flink
Issue Type: Bug
shiwuliang created FLINK-12299:
--
Summary: ExecutionConfig#setAutoWatermarkInterval should validate its
parameter (the interval should not be less than zero)
Key: FLINK-12299
URL: https://issues.apache.org/jira/browse/FLINK-12299
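The validation this ticket asks for could look like the following sketch; the class below is illustrative only, not Flink's actual ExecutionConfig code.

```java
// Illustrative sketch of the check FLINK-12299 requests: reject a
// negative auto-watermark interval at configuration time, so a bad
// value fails fast instead of causing confusing runtime behavior.
// Not Flink's actual implementation.
class WatermarkConfig {
    private long autoWatermarkInterval;

    WatermarkConfig setAutoWatermarkInterval(long intervalMillis) {
        if (intervalMillis < 0) {
            throw new IllegalArgumentException(
                "Auto watermark interval must not be less than zero, was: "
                    + intervalMillis);
        }
        this.autoWatermarkInterval = intervalMillis;
        return this;
    }

    long getAutoWatermarkInterval() {
        return autoWatermarkInterval;
    }
}
```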
I think we should not expose the ClusterClient configuration via the
ExecutionEnvironment (env.getClusterClient().addJobListener) because this
is effectively the same as exposing the JobListener interface directly on
the ExecutionEnvironment. Instead I think it could be possible to provide a
Cluste
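The alternative being argued for, registering listeners on the environment itself so the ClusterClient stays internal, could be sketched roughly like this. All names below are hypothetical, not actual Flink APIs.

```java
// Hypothetical sketch of the design point above: the environment exposes
// listener registration directly, so the ClusterClient is never leaked
// to user code. All names are illustrative, not actual Flink APIs.
import java.util.ArrayList;
import java.util.List;

interface JobListener {
    void onJobSubmitted(String jobId);
}

class Environment {
    private final List<JobListener> listeners = new ArrayList<>();

    // Users register listeners here; no ClusterClient accessor needed.
    void registerJobListener(JobListener listener) {
        listeners.add(listener);
    }

    // Invoked internally after the (hidden) client submits the job.
    String execute(String jobName) {
        String jobId = "job-" + Math.abs(jobName.hashCode());
        for (JobListener l : listeners) {
            l.onJobSubmitted(jobId);
        }
        return jobId;
    }
}
```

The design benefit is that the client remains an implementation detail that can change without breaking user code.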
Dawid Wysakowicz created FLINK-12298:
Summary: Make column functions accept custom
Key: FLINK-12298
URL: https://issues.apache.org/jira/browse/FLINK-12298
Project: Flink
Issue Type: Impr
Hi All,
We would like to start a discussion thread about a new feature called Flink
Driver. A brief summary is following.
As mentioned in the discussion of Interactive Programming, user
applications might consist of multiple jobs and take a long time to finish.
Currently, when Flink runs applications wi
Thanks Kailash for bringing this up. I think this is a good idea. By
passing the ParquetWriter we gain much more flexibility.
I did a small PR on adding the ability to add compression to the Parquet
writer: https://github.com/apache/flink/pull/7547 But I believe this is the
wrong approach. For exa