qiunan created FLINK-25413:
--
Summary: Use append yarn and hadoop config to replace overwrite
Key: FLINK-25413
URL: https://issues.apache.org/jira/browse/FLINK-25413
Project: Flink
Issue Type: Improvement
Hi Martijn,
Thanks for the effort on Flink 1.14.3. FLINK-25132 has been merged on master
and is waiting for CI on release-1.14. I think it can be closed today.
Cheers,
Qingsheng Ren
> On Dec 21, 2021, at 6:26 PM, Martijn Visser wrote:
>
> Hi everyone,
>
> I'm restarting this thread [1] w
srujankumar created FLINK-25412:
---
Summary: Upgrade of flink to 1.14.2 is showing internal server
errors in the UI
Key: FLINK-25412
URL: https://issues.apache.org/jira/browse/FLINK-25412
Project: Flink
Surendra Lalwani created FLINK-25411:
Summary: JsonRowSerializationSchema unable to parse TIMESTAMP_LTZ
fields
Key: FLINK-25411
URL: https://issues.apache.org/jira/browse/FLINK-25411
Project: Flink
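The FLINK-25411 stub above concerns serializing instant-based (TIMESTAMP_LTZ-style) values to JSON. As a language-neutral sketch of the idea, not Flink's actual JsonRowSerializationSchema code, an absolute instant can be rendered as an ISO-8601 string with an explicit UTC offset so readers can reconstruct the exact point in time:

```python
from datetime import datetime, timezone
import json

def serialize_timestamp_ltz(epoch_millis):
    """Render an absolute instant (epoch millis) as ISO-8601 in UTC."""
    instant = datetime.fromtimestamp(epoch_millis / 1000.0, tz=timezone.utc)
    return instant.isoformat(timespec="milliseconds")

row = {"id": 1, "event_time": serialize_timestamp_ltz(1640083560000)}
print(json.dumps(row))
```

The key design point is keeping the offset in the serialized form; emitting a local wall-clock string without an offset loses the instant semantics.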
Junfan Zhang created FLINK-25410:
Summary: Flink CLI should exit when app is accepted with detach
mode on Yarn
Key: FLINK-25410
URL: https://issues.apache.org/jira/browse/FLINK-25410
Project: Flink
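For FLINK-25410, the intended behavior is that in detached mode the client returns as soon as the YARN application reaches ACCEPTED rather than blocking further. A minimal sketch of such a polling loop, with hypothetical state names and callback shape (not the actual Flink client code):

```python
import time

def wait_until_accepted(get_state, poll_interval_s=0.5, timeout_s=60):
    """Poll an application-state callback; return once the app is at least ACCEPTED."""
    acceptable = {"ACCEPTED", "RUNNING"}
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = get_state()
        if state in acceptable:
            return state          # detached mode: safe to exit the CLI here
        if state in {"FAILED", "KILLED"}:
            raise RuntimeError(f"application ended in state {state}")
        time.sleep(poll_interval_s)
    raise TimeoutError("application was not accepted in time")

# Simulated state transitions: NEW -> SUBMITTED -> ACCEPTED
states = iter(["NEW", "SUBMITTED", "ACCEPTED"])
print(wait_until_accepted(lambda: next(states), poll_interval_s=0.01))
```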
Hi Konstantin,
Thanks for sharing your thoughts. Please see the reply inline below.
Thanks,
Jiangjie (Becket) Qin
On Tue, Dec 21, 2021 at 7:14 PM Konstantin Knauf wrote:
> Hi Becket, Hi Nicholas,
>
> Thanks for joining the discussion.
>
> 1 ) Personally, I would argue that we should only run
Sorry to join the discussion late.
+1 for dropping support for hadoop versions < 2.8 from my side.
TBH, wrapping the reflection-based logic with safeguards sounds a bit
neither fish nor fowl to me. It weakens the major benefits that we look for
by dropping support for early versions.
- The codebas
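For context on "wrapping the reflection-based logic with safeguards": the pattern is to probe for an API at runtime and degrade gracefully when it is absent. A Python analogue of what the Java reflection code does (the class and method names here are hypothetical, purely for illustration):

```python
def call_if_available(obj, method_name, *args, fallback=None):
    """Invoke a method only if the runtime object actually provides it."""
    method = getattr(obj, method_name, None)
    if callable(method):
        return method(*args)
    return fallback  # older version in use: degrade gracefully

class OldHadoopClient:
    pass  # stands in for a Hadoop version lacking the newer API

class NewHadoopClient:
    def renew_delegation_token(self):
        return "renewed"

print(call_if_available(OldHadoopClient(), "renew_delegation_token", fallback="skipped"))
print(call_if_available(NewHadoopClient(), "renew_delegation_token"))
```

Dropping support for versions older than 2.8 would remove the need for this probing, which is the benefit the safeguards would water down.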
Yuan Zhu created FLINK-25409:
Summary: Add cache metric to LookupFunction
Key: FLINK-25409
URL: https://issues.apache.org/jira/browse/FLINK-25409
Project: Flink
Issue Type: Improvement
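FLINK-25409 above asks for cache metrics on lookup functions. Conceptually this means counting hits and misses around the cache access; a small sketch of the idea in Python (hypothetical shape, not Flink's LookupFunction API):

```python
class MeteredLookupCache:
    """In-memory lookup cache that tracks hit/miss counts for metrics reporting."""

    def __init__(self, loader):
        self._loader = loader  # invoked on a cache miss, e.g. a dimension-table read
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self._cache:
            self.hits += 1
        else:
            self.misses += 1
            self._cache[key] = self._loader(key)
        return self._cache[key]

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

cache = MeteredLookupCache(loader=lambda k: k.upper())
cache.get("a"); cache.get("a"); cache.get("b")
print(cache.hits, cache.misses)  # hits=1, misses=2
```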
I'm happy to announce that we have unanimously approved this release.
There are 4 approving votes, 3 of which are binding:
* Seth Wiesman (non-binding)
* Tzu-Li (Gordon) Tai (binding)
* Till Rohrmann (binding)
* Igal Shilman (binding)
There are no disapproving votes.
Thanks everyone!
Your high-level layout makes sense. However, I think there are a few
problems doing it with Flink:
(1) How to encode the watermark? It could break downstream consumers
that don't know what to do with them (eg, crash on deserialization)?
There is no guarantee that only a downstream Flink job con
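On the encoding problem raised above: if watermarks are written in-band into the same topic as data records, a consumer that does not understand them may crash on deserialization. One common mitigation is a tagged envelope, so consumers can at least detect and skip control records. A sketch with a hypothetical wire format (not a Flink or Kafka API):

```python
import json

def encode(kind, payload):
    """Wrap every message in an envelope that names its kind."""
    return json.dumps({"kind": kind, "payload": payload})

def decode_data_only(raw):
    """A defensive consumer: skip anything that is not a plain data record."""
    msg = json.loads(raw)
    if msg.get("kind") != "record":
        return None  # e.g. an in-band watermark; ignore instead of crashing
    return msg["payload"]

stream = [
    encode("record", {"id": 1}),
    encode("watermark", {"ts": 1640083560000}),
    encode("record", {"id": 2}),
]
records = [r for r in (decode_data_only(m) for m in stream) if r is not None]
print(records)  # [{'id': 1}, {'id': 2}]
```

This only helps consumers that adopt the envelope; as noted above, there is no guarantee that every downstream consumer does.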
CC user@f.a.o
Is anyone aware of something that blocks us from doing the upgrade?
D.
On Tue, Dec 21, 2021 at 5:50 PM David Morávek
wrote:
> Hi Martijn,
>
> from personal experience, most Hadoop users are lagging behind the release
> lines by a lot, because upgrading a Hadoop cluster is not reall
Hi Martijn,
from personal experience, most Hadoop users are lagging behind the release
lines by a lot, because upgrading a Hadoop cluster is not really a simple
task to achieve. I think for now, we can stay a bit conservative; nothing
blocks us from using 2.8.5 as we don't use any "newer" APIs in the
Hi Martijn,
Thank you for reviving this thread. I have opened a PR to backport the
Log4j Upgrade to 2.17.0 [1] to Flink 1.14 just now and am waiting for CI to
pass.
Cheers,
Konstantin
[1] https://issues.apache.org/jira/browse/FLINK-25375
On Tue, Dec 21, 2021 at 5:26 PM David Morávek wrote:
>
Zichen Liu created FLINK-25408:
--
Summary: Chinese Translation - Add documentation for KDS Async Sink
Key: FLINK-25408
URL: https://issues.apache.org/jira/browse/FLINK-25408
Project: Flink
Issue
Hi Martijn,
I'm also working on fixing FLINK-25271 [1], but there's probably no
need to block a release on this, as it only addresses test flakiness.
[1] https://issues.apache.org/jira/browse/FLINK-25271
Best,
D.
On Tue, Dec 21, 2021 at 11:27 AM Martijn Visser
wrote:
> Hi everyone,
>
> I'm
Piotr Nowojski created FLINK-25407:
--
Summary: Network stack deadlock when cancellation happens during
initialisation
Key: FLINK-25407
URL: https://issues.apache.org/jira/browse/FLINK-25407
Project: Flink
Till Rohrmann created FLINK-25406:
-
Summary: Make sure that FileSystemBlobStore flushes all writes to
disk to persist stored blobs
Key: FLINK-25406
URL: https://issues.apache.org/jira/browse/FLINK-25406
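FLINK-25406 above is about durability: a file write is only persistent once the data has been flushed and fsync'ed, and the rename made visible by syncing the directory. A sketch of the usual write-fsync-rename pattern in plain Python (illustrative only, not the FileSystemBlobStore code; the directory fsync is POSIX-specific):

```python
import os, tempfile

def persist_blob(directory, name, data: bytes):
    """Write atomically and durably: temp file -> flush -> fsync -> rename -> dir fsync."""
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())          # force file contents to disk
        final_path = os.path.join(directory, name)
        os.replace(tmp_path, final_path)  # atomic rename on POSIX
        dir_fd = os.open(directory, os.O_RDONLY)
        try:
            os.fsync(dir_fd)              # persist the rename itself
        finally:
            os.close(dir_fd)
        return final_path
    except BaseException:
        if os.path.exists(tmp_path):
            os.remove(tmp_path)
        raise

path = persist_blob(tempfile.mkdtemp(), "blob_demo.bin", b"payload")
print(os.path.basename(path))
```

Skipping the fsync steps leaves a window where a crash can lose or truncate the blob even though the write call returned.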
Till Rohrmann created FLINK-25405:
-
Summary: Let BlobServer/BlobCache detect & delete corrupted blobs
Key: FLINK-25405
URL: https://issues.apache.org/jira/browse/FLINK-25405
Project: Flink
Is
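For FLINK-25405 above, corruption detection usually means storing a checksum next to (or inside) each blob and verifying it on read, deleting the entry on mismatch so it can be re-fetched. A minimal sketch with a hypothetical file layout (digest prefix + payload), not the BlobServer's actual format:

```python
import hashlib, os, tempfile

def write_blob(path, data: bytes):
    """Store the payload prefixed with its SHA-256 digest."""
    with open(path, "wb") as f:
        f.write(hashlib.sha256(data).digest() + data)

def read_blob(path):
    """Return the payload, or delete the file and raise if the checksum mismatches."""
    with open(path, "rb") as f:
        raw = f.read()
    digest, payload = raw[:32], raw[32:]
    if hashlib.sha256(payload).digest() != digest:
        os.remove(path)  # drop the corrupted blob so it can be re-fetched
        raise IOError(f"corrupted blob deleted: {path}")
    return payload

blob = os.path.join(tempfile.mkdtemp(), "demo.blob")
write_blob(blob, b"hello")
print(read_blob(blob))  # b'hello'
```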
Till Rohrmann created FLINK-25404:
-
Summary: Store blobs in /blobs
Key: FLINK-25404
URL: https://issues.apache.org/jira/browse/FLINK-25404
Project: Flink
Issue Type: Sub-task
Compon
Till Rohrmann created FLINK-25403:
-
Summary: Make BlobServer/BlobCache compatible with local working
directories
Key: FLINK-25403
URL: https://issues.apache.org/jira/browse/FLINK-25403
Project: Flink
Till Rohrmann created FLINK-25402:
-
Summary: FLIP-198: Working directory for Flink processes
Key: FLINK-25402
URL: https://issues.apache.org/jira/browse/FLINK-25402
Project: Flink
Issue Type:
Hi everyone,
I am happy to announce that FLIP-198 [1] has been accepted by this vote [2].
- Chesnay Schepler (binding)
- David Moravek
- Yang Wang (binding)
- Yun Tang (binding)
- Till Rohrmann (binding)
[1] https://cwiki.apache.org/confluence/x/ZZiqCw
[2] https://lists.apache.org/thread/5wylbxw
Thanks everyone for voting. The voting period has passed and we have enough
binding votes. I will close this vote and post the result as a separate
email.
Cheers,
Till
On Tue, Dec 21, 2021 at 3:58 PM Till Rohrmann wrote:
> +1 (binding)
>
> Cheers,
> Till
>
> On Fri, Dec 17, 2021 at 7:48 AM Yun
+1 (binding)
Cheers,
Till
On Fri, Dec 17, 2021 at 7:48 AM Yun Tang wrote:
> +1 (binding)
>
> Best
> Yun Tang
> --
> *From:* Yang Wang
> *Sent:* Friday, December 17, 2021 12:42
> *To:* dev
> *Cc:* Till Rohrmann
> *Subject:* Re: [VOTE] FLIP-198: Working directory fo
>> AFAIK state schema evolution should work both for native and canonical
>> savepoints.
Schema evolution does technically work for both formats, since it happens
after the code paths have been unified, but the community has up until this
point considered it an unsupported feature. From my perspective
Hi Konstantin,
> In this context: will the native format support state schema evolution? If
> not, I am not sure, we can let the format default to native.
AFAIK state schema evolution should work both for native and canonical
savepoints.
Regarding what is/will be supported we will document as pa
Hi,
Like I said, I've only just started thinking about how this can be
implemented (I'm currently still lacking a lot of knowledge).
So at this point I do not yet see why solving this in the transport (like
Kafka) is easier than solving it in the processing engine (like Flink).
In the normal scenar
+1 (binding)
- Checked that log4j was upgraded to 2.16.0
- Ran e2e tests
Thanks,
Igal.
On Tue, Dec 21, 2021 at 9:43 AM Till Rohrmann wrote:
> +1 (binding)
>
> - Checked signatures and checksums
> - Checked that Flink version has been bumped to 1.13.5
> - Built and ran tests based on source re
Hi Becket, Hi Nicholas,
Thanks for joining the discussion.
1 ) Personally, I would argue that we should only run user code in the
Jobmanager/Jobmaster if we can not avoid it. It seems wrong to me to
encourage users to e.g. run a webserver on the Jobmanager, or continuously
read patterns from a Ka
Hi folks,
I just finished reading the previous emails. It was a good discussion. Here
are my two cents:
*Is OperatorCoordinator a public API?*
Regarding the OperatorCoordinator, although it is marked as an internal
interface at this point, I intended to make it a public interface when adding
it in FL
Hi,
As Arvid pointed out, with some more checks, if a job vertex has multiple
downstream vertices and they are all connected via blocking edges, they
should not affect each other, as long as we can ensure the intermediate
shuffle data does not get lost; the finished precedent job vertex no
Hi everyone,
I'm restarting this thread [1] with a new subject, given that Flink 1.14.1
was a (cancelled) emergency release for the Log4j update and we've released
Flink 1.14.2 as an emergency release for Log4j updates [2].
To give an update, this is the current blocker for Flink 1.14.3:
* https
+1 (binding)
- Checked signatures and checksums
- Checked that Flink version has been bumped to 1.13.5
- Built and ran tests based on source release
Cheers,
Till
On Tue, Dec 21, 2021 at 7:05 AM Tzu-Li (Gordon) Tai
wrote:
> +1 (binding)
>
> - Checked hash and signatures
> - Checked diff contain
Hi Nico,
I am hopeful this will improve the developer experience quite a bit, in
particular for first time contributors. +1
Cheers,
Konstantin
On Thu, Dec 16, 2021 at 5:04 PM Till Rohrmann wrote:
> Thanks for drafting this proposal Nico.
>
> I hope that we can improve our development processe
Joyce.Li created FLINK-25401:
Summary: DefaultCompletedCheckpointStore may not return the latest
CompletedCheckpoint after JM failover.
Key: FLINK-25401
URL: https://issues.apache.org/jira/browse/FLINK-25401
wuzhiyu created FLINK-25400:
---
Summary: RocksDBStateBackend configurations does not work with
SavepointEnvironment
Key: FLINK-25400
URL: https://issues.apache.org/jira/browse/FLINK-25400
Project: Flink
Hi Yunfeng,
thanks for drafting this FLIP, this will be a great addition into the CEP
toolbox!
Apart from running user code in the JM, which we want to avoid in general,
I'd have one more concern about using the OperatorCoordinator, and that
is re-processing of the historical data. Any thoughts a
Till Rohrmann created FLINK-25399:
-
Summary: AZP fails with exit code 137 when running checkpointing
test cases
Key: FLINK-25399
URL: https://issues.apache.org/jira/browse/FLINK-25399
Project: Flink