Thanks Piotr for your reply!
It is a nice solution! By restricting the buffer using these properties, I
think the maxConcurrentRequests attribute is indeed no longer necessary.
On Tue, May 4, 2021 at 11:52 PM Piotr Nowojski
wrote:
> Hi Wenhao,
>
> As far as I know this is different compared to FLI
Matthias created FLINK-22564:
Summary: ITCASE_KUBECONFIG is not exported
Key: FLINK-22564
URL: https://issues.apache.org/jira/browse/FLINK-22564
Project: Flink
Issue Type: Bug
Component
Hi All,
I have a CDC use case where I want to capture and process Debezium
logs that are streamed to Kafka via Debezium. As per all the Flink examples,
we have to pre-create the schema of the tables where I want to perform a
write.
However, my question is: what if there is an alter table mod
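For illustration, pre-creating such a table in Flink SQL (1.13-era syntax) might look like the sketch below; the table name, columns, topic, and broker address are all hypothetical placeholders:

```sql
-- Hypothetical example: the column list must be declared up front,
-- which is exactly what makes upstream ALTER TABLE changes awkward.
CREATE TABLE orders (
  id BIGINT,
  customer_name STRING,
  order_total DECIMAL(10, 2)
) WITH (
  'connector' = 'kafka',
  'topic' = 'dbserver1.inventory.orders',        -- placeholder topic
  'properties.bootstrap.servers' = 'kafka:9092', -- placeholder broker
  'format' = 'debezium-json'
);
```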
Yes, thanks for managing the release, Dawid & Guowei! +1
On Tue, May 4, 2021 at 4:20 PM Etienne Chauchot
wrote:
> Congrats to everyone involved !
>
> Best
>
> Etienne
> On 03/05/2021 15:38, Dawid Wysakowicz wrote:
>
> The Apache Flink community is very happy to announce the release of Apache
> F
Seth Wiesman created FLINK-22563:
Summary: Add migration guide for new StateBackend interfaces
Key: FLINK-22563
URL: https://issues.apache.org/jira/browse/FLINK-22563
Project: Flink
Issue Typ
Thanks for the response, Dawid. In this case I don't see a reason for
timeouts in JUnit. I agree with the previously mentioned points, and I think
it doesn't make sense to use them to limit test duration.
Piotrek
On Tue, 4 May 2021 at 17:44 Dawid Wysakowicz
wrote:
> Hey all,
>
> Sorry I'
Hi Wenhao,
As far as I know this is different compared to FLINK-9083, as KafkaProducer
itself can back pressure writes if internal buffers are exhausted [1].
> The buffer.memory controls the total amount of memory available to the
> producer for buffering. If records are sent faster than they can b
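To make the buffering knobs concrete, here is a minimal sketch of the producer-side settings being discussed. The values shown are the Kafka defaults (32 MiB buffer, 60 s max block); the bootstrap address is a placeholder, and this only builds the configuration, it does not connect to a broker:

```java
import java.util.Properties;

public class ProducerBackpressureConfig {

    // Sketch of the producer-side buffering knobs discussed above.
    static Properties producerProps() {
        Properties props = new Properties();
        // Total memory the producer may use to buffer unsent records.
        props.setProperty("buffer.memory", "33554432"); // 32 MiB (Kafka default)
        // When the buffer is full, send() blocks for up to this long before
        // failing -- this blocking is what back-pressures the writer.
        props.setProperty("max.block.ms", "60000"); // Kafka default
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps());
    }
}
```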
Hey all,
Sorry I've not been active in the thread. I think the conclusion is
rather positive and we do want to depend more on the Azure watchdog and
discourage timeouts in JUnit tests from now on. I'll update our coding
guidelines accordingly.
@Piotr Yes, this was a problem in the past, but that
Yes, thanks a lot for driving this release Arvid :)
Piotrek
On Thu, 29 Apr 2021 at 19:04 Till Rohrmann wrote:
> Great to hear. Thanks a lot for being our release manager Arvid and to
> everyone who has contributed to this release!
>
> Cheers,
> Till
>
> On Thu, Apr 29, 2021 at 4:11 PM Arvid H
Hi,
I'm ok with removing most or even all timeouts. Just one thing.
The reason behind using JUnit timeouts that I've heard (and I was adhering to
it) was that the Maven watchdog was doing the thread dump and killing the test
process using a timeout based on log inactivity. Some tests were by nature
prone
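The log-inactivity mechanism described above can be illustrated with a toy sketch (this is not Flink's actual CI tooling, just the idea): a watchdog tracks the timestamp of the last observed activity and considers the process stalled once nothing has happened for longer than the budget. Timestamps are passed in explicitly to keep the sketch deterministic:

```java
public class InactivityWatchdog {

    // Timestamp (ms) of the most recently observed activity, e.g. a log line.
    private volatile long lastActivityMillis;

    public InactivityWatchdog(long nowMillis) {
        this.lastActivityMillis = nowMillis;
    }

    // Called whenever the watched process produces output.
    public void reportActivity(long nowMillis) {
        lastActivityMillis = nowMillis;
    }

    // True once the quiet period exceeds the budget; at that point the real
    // watchdog would take a thread dump and kill the test process.
    public boolean isStalled(long nowMillis, long timeoutMillis) {
        return nowMillis - lastActivityMillis > timeoutMillis;
    }

    public static void main(String[] args) {
        InactivityWatchdog w = new InactivityWatchdog(0);
        // 70 s of silence against a 60 s budget -> stalled.
        System.out.println(w.isStalled(70_000, 60_000)); // true
        // A log line at 65 s resets the clock.
        w.reportActivity(65_000);
        System.out.println(w.isStalled(70_000, 60_000)); // false
    }
}
```

The key property (and the reason JUnit-level timeouts are redundant) is that any test that keeps logging never trips the watchdog, while a genuinely hung test is dumped and killed without per-test annotations.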
Congrats to everyone involved !
Best
Etienne
On 03/05/2021 15:38, Dawid Wysakowicz wrote:
The Apache Flink community is very happy to announce the release of
Apache Flink 1.13.0.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available,
I just worked around it by using a k8s Secret and mapping it via
kubernetes.env.secretKeyRef into the Flink containers.
On Tue, May 4, 2021 at 3:08 PM Lukáš Drbal wrote:
> Hello all!
>
> We are using k8s native support and now we need to pass an env variable
> which contains # character in value.
>
> example:
Hello all!
We are using k8s native support and now we need to pass an env variable
which contains # character in value.
example:
containerized.master.env.FOO: foo#bar
The job is submitted by a k8s CronJob which puts the config option mentioned
above into flink-conf.yaml and submits the job via flink run-a
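The Secret-based workaround mentioned in the reply above can be sketched roughly as the following flink-conf.yaml fragment; the secret name, key, and env variable names here are made-up placeholders:

```yaml
# Create the secret out of band first, e.g.:
#   kubectl create secret generic my-flink-secret --from-literal=foo='foo#bar'
#
# Then map it into the containers instead of passing the literal value
# (which would be cut at '#') through containerized.master.env.FOO:
kubernetes.env.secretKeyRef: env:FOO,secret:my-flink-secret,key:foo
```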
Hi Team,
I deployed a Flink cluster on a Kubernetes cluster (session mode). I tried to
configure JobManager HA with ZooKeeper,
but leader election is not happening. The JobManagers keep restarting and going
into CrashLoopBackOff.
[zk: localhost:2181(CONNECTED) 8] get /leader/resource_manager_lock
org.apache.zookeeper
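For reference, a ZooKeeper-based HA setup needs roughly the following flink-conf.yaml entries (the quorum address, storage path, and cluster id below are placeholders); a missing or unwritable high-availability.storageDir can be one cause of JobManagers crash-looping during leader election:

```yaml
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-host:2181    # placeholder quorum
high-availability.storageDir: s3://flink/ha/        # must be durable shared storage
high-availability.cluster-id: /my-session-cluster   # placeholder cluster id
```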
Hi Flinksters,
Our repo, which is a Maven-based Java project (Flink), went through an SCA
scan using the WhiteSource tool, and the following are the HIGH-severity issues
reported. The target vulnerable jar is not found when we build the
dependency tree of the project.
Could anyone let us know if Flink uses the
Piotr Nowojski created FLINK-22562:
Summary: Translate recent updates to backpressure and
checkpointing docs
Key: FLINK-22562
URL: https://issues.apache.org/jira/browse/FLINK-22562
Project: Flink
zhang haoyan created FLINK-22561:
Summary: Restore checkpoint from another cluster
Key: FLINK-22561
URL: https://issues.apache.org/jira/browse/FLINK-22561
Project: Flink
Issue Type: Improveme
Chesnay Schepler created FLINK-22560:
Summary: Filter maven metadata from all jars
Key: FLINK-22560
URL: https://issues.apache.org/jira/browse/FLINK-22560
Project: Flink
Issue Type: Impro
Dawid Wysakowicz created FLINK-22559:
Summary:
UpsertKafkaTableITCase.testKafkaSourceSinkWithKeyAndFullValue fails with output
mismatch
Key: FLINK-22559
URL: https://issues.apache.org/jira/browse/FLINK-22559
Dawid Wysakowicz created FLINK-22558:
Summary: testcontainers fail to start ResourceReaper
Key: FLINK-22558
URL: https://issues.apache.org/jira/browse/FLINK-22558
Project: Flink
Issue Typ
Dawid Wysakowicz created FLINK-22557:
Summary: Japicmp fails on 1.12 branch
Key: FLINK-22557
URL: https://issues.apache.org/jira/browse/FLINK-22557
Project: Flink
Issue Type: Bug