Chesnay Schepler created FLINK-27459:
Summary: JsonJobGraphGenerationTest doesn't reset context
environment
Key: FLINK-27459
URL: https://issues.apache.org/jira/browse/FLINK-27459
Project: Flink
Hi all,
+1 for the release (non-binding). The checks follow the Jira release
note [1] and are listed as follows:
- Verify that the source distributions of [2] do not contain any
binaries;
- Build the source distribution to ensure all source files have Apache
headers, and test functionality
Hi Alexander and Arvid,
Thanks for the discussion, and sorry for my late response! We had an
internal discussion together with Jark and Leonard, and I'd like to
summarize our ideas. Instead of implementing the cache logic in the table
runtime layer or wrapping it around the user-provided table function
Gyula Fora created FLINK-27458:
Summary: Expose allowNonRestoredState flag in JobSpec
Key: FLINK-27458
URL: https://issues.apache.org/jira/browse/FLINK-27458
Project: Flink
Issue Type: Improvement
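For illustration, a minimal sketch of how such a flag might be exposed (the
field name and the mapping to SavepointRestoreSettings are assumptions, not
the final design):

    import org.apache.flink.runtime.jobgraph.SavepointRestoreSettings;

    public class JobSpecSketch {
        // Assumed field; when set it should translate to
        // execution.savepoint.ignore-unclaimed-state at submission time.
        private Boolean allowNonRestoredState;

        public SavepointRestoreSettings restoreSettings(String savepointPath) {
            boolean allow = allowNonRestoredState != null && allowNonRestoredState;
            return SavepointRestoreSettings.forPath(savepointPath, allow);
        }
    }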
Etienne Chauchot created FLINK-27457:
Summary: CassandraOutputFormats should support flush
Key: FLINK-27457
URL: https://issues.apache.org/jira/browse/FLINK-27457
Project: Flink
Issue Type
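As a rough sketch of the idea (not the actual connector code; the class and
counters are assumed): the format would track in-flight writes and block
until they complete:

    import java.io.IOException;

    class CassandraOutputFormatSketch {
        private final Object pendingLock = new Object();
        private int pendingWrites; // ++ in writeRecord(), -- in the write callback

        // Called from close() so no write is still in flight when the
        // format is torn down.
        void flush() throws IOException {
            synchronized (pendingLock) {
                while (pendingWrites > 0) {
                    try {
                        pendingLock.wait();
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        throw new IOException("Interrupted while flushing writes", e);
                    }
                }
            }
        }
    }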
Let me add some information about the LegacySource.
If we want to disable the overdraft buffer for the LegacySource, could we
add an enableOverdraft flag in LocalBufferPool? The default value would be
false. If getAvailableFuture is called, we change enableOverdraft=true.
It indicates whether there are checks
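A minimal sketch of that idea (class layout and names are assumptions, not
the actual LocalBufferPool internals):

    import java.util.concurrent.CompletableFuture;

    class LocalBufferPoolSketch {
        private final CompletableFuture<?> availabilityFuture = new CompletableFuture<>();
        // Defaults to false, so a LegacySource (which never polls
        // availability) never opts into the overdraft.
        private volatile boolean enableOverdraft = false;

        CompletableFuture<?> getAvailableFuture() {
            enableOverdraft = true; // only availability-aware tasks get here
            return availabilityFuture;
        }

        boolean canUseOverdraftBuffer() {
            return enableOverdraft;
        }
    }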
Hi,
Thanks for your quick response.
Questions 1/2/3 are clear to us; we just need to discuss the default
value in the PR.
For the legacy source, you are right: it's difficult to handle in a
general implementation.
Currently, we implement ensureRecordWriterIsAvailable() in
SourceFunction.SourceContext
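For context, a hedged sketch of what such a check could look like
(RecordWriter really does implement Flink's AvailabilityProvider; the
surrounding class and field are assumed):

    import org.apache.flink.runtime.io.network.api.writer.RecordWriter;

    class LegacySourceContextSketch {
        private RecordWriter<?> recordWriter; // the writer backing collect()

        // Block the emit path until the writer has buffers again, so a
        // legacy source never needs the overdraft at all.
        void ensureRecordWriterIsAvailable() {
            if (recordWriter == null || recordWriter.isAvailable()) {
                return;
            }
            try {
                recordWriter.getAvailableFuture().get();
            } catch (Exception e) {
                throw new RuntimeException("Interrupted while waiting for buffers", e);
            }
        }
    }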
Hi.
-- 1. Do you mean split this into two JIRAs or two PRs or two commits in a
PR?
Perhaps a separate ticket would be better, since this task has fewer open
questions, but we should find a solution for the LegacySource first.
-- 2. For the first task, if the flink user disables the Unaligned
Checkpoint
Thanks for kicking off the topic, Konstantin and Chesnay.
Also thanks Martijn, Godfrey and Xingbo for volunteering to be the release
manager. Given that release 1.16 would likely be a beefy release with a
bunch of major features already on their way, it might make sense to have
more release managers
+1 (binding)
* mvn clean package - PASSED
* checked signatures & checksums of source artifacts - OK
* went through quick start - WORKS
* skimmed over NOTICE file - LOOKS GOOD
I also read over the announcement blog post. In my opinion, we could try to
motivate the project a bit better. What is the
Hi all,
Yun, David M, David A, and I had an offline discussion and talked through a
couple of details that emerged from the discussion here. We believe we have
found a consensus on these points and would like to share our points for
further feedback:
Let me try to get through the points that w
Hi,
Thanks for your feedback. I have several questions.
1. Do you mean split this into two JIRAs or two PRs or two commits in a
PR?
2. For the first task, if the flink user disables the Unaligned
Checkpoint, do we ignore max buffers per channel? Because the overdraft
isn't useful
Hi,
We discussed it a little with Dawid Wysakowicz. Here are some conclusions:
First of all, let's split this into two tasks.
The first task is about ignoring max buffers per channel. This means if
we request a memory segment from LocalBufferPool and the
maxBuffersPerChannel is reached for
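A sketch of the first task as I understand it (field names and the class
are assumptions, not the real code):

    import org.apache.flink.core.memory.MemorySegment;

    class PerChannelLimitSketch {
        private final int maxBuffersPerChannel = 10;
        private final int[] buffersPerChannel = new int[8]; // per-channel counts
        private boolean available = true;

        MemorySegment requestMemorySegment(int targetChannel) {
            MemorySegment segment = pollSegmentFromPool(); // null if the pool is drained
            if (segment != null
                    && ++buffersPerChannel[targetChannel] >= maxBuffersPerChannel) {
                // Previously the request itself was refused here; now it
                // succeeds, but availability flips so the task finishes the
                // current record and then stops pulling new ones.
                available = false;
            }
            return segment;
        }

        private MemorySegment pollSegmentFromPool() {
            return null; // stands in for the real pool lookup
        }
    }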
David Anderson created FLINK-27456:
Summary: mistake and confusion with CEP example in docs
Key: FLINK-27456
URL: https://issues.apache.org/jira/browse/FLINK-27456
Project: Flink
Issue Type
Chesnay Schepler created FLINK-27455:
Summary: [JUnit5 Migration] SnapshotMigrationTestBase
Key: FLINK-27455
URL: https://issues.apache.org/jira/browse/FLINK-27455
Project: Flink
Issue Type
Hi devs,
I want to start a discussion about Schema Evolution on the Flink Table
Store. [1]
In FLINK-21634, we plan to support many schema changes in Flink SQL.
But for the current Table Store, these may result in wrong data and
unclear evolutions.
In general, the user has these operations for schema:
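(The list itself is cut off above.) For a concrete flavor, these are the
kinds of DDL FLINK-21634 proposes; whether the Table Store can evolve
existing files safely under them is exactly the open question, and the
syntax shown is the proposed one, not necessarily final:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class SchemaChangeExamples {
        public static void main(String[] args) {
            TableEnvironment tEnv =
                    TableEnvironment.create(EnvironmentSettings.inStreamingMode());
            tEnv.executeSql("ALTER TABLE t ADD (new_col INT)");
            tEnv.executeSql("ALTER TABLE t RENAME old_col TO renamed_col");
            tEnv.executeSql("ALTER TABLE t DROP old_col");
        }
    }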
Hi all!
+1 for the release (non-binding). I've tested the jar with a standalone
cluster and SQL client.
- Compiled the sources;
- Ran through the quick start guide;
- Tested all supported data types;
- Checked that the table store jar has no conflicts with the orc / avro
formats and the kafka connector;
Chesnay Schepler created FLINK-27454:
Summary: Remove inheritance from TestBaseUtils
Key: FLINK-27454
URL: https://issues.apache.org/jira/browse/FLINK-27454
Project: Flink
Issue Type: Technical Debt
Chesnay Schepler created FLINK-27453:
Summary: Cleanup TestBaseUtils
Key: FLINK-27453
URL: https://issues.apache.org/jira/browse/FLINK-27453
Project: Flink
Issue Type: Technical Debt
Chesnay Schepler created FLINK-27452:
Summary: Move Te
Key: FLINK-27452
URL: https://issues.apache.org/jira/browse/FLINK-27452
Project: Flink
Issue Type: Technical Debt
Reporter
Thanks for driving this work; it's going to be a useful feature.
About FLIP-218, I have some questions.
1: Does our CTAS syntax support specifying the target table's schema,
including column names and data types? I think it may be a useful feature
in case we want to change the data types in the target table
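To make question 1 concrete, this is the shape of statement being asked
about; note that an explicit column list in CTAS is the open question, not
something FLIP-218 is confirmed to support:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class CtasQuestion {
        public static void main(String[] args) {
            TableEnvironment tEnv =
                    TableEnvironment.create(EnvironmentSettings.inStreamingMode());
            // Hypothetical CTAS with an explicit schema: may the target's
            // column names/types differ from (e.g. widen) the query's types?
            tEnv.executeSql(
                    "CREATE TABLE target (id BIGINT, name STRING) "
                            + "WITH ('connector' = 'blackhole') "
                            + "AS SELECT id, name FROM source_table");
        }
    }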
Thanks Konstantin and Chesnay for starting the discussion. I'm also willing
to volunteer as the release manager if this is still open.
Regarding the feature freeze date, +1 to mid-August.
Best,
Xingbo
Zhu Zhu wrote on Fri, Apr 29, 2022, at 11:01:
> +1 for a 5-month release cycle.
> +1 target the
Hi Anton Kalashnikov,
I think you agree that we should limit the maximum number of overdraft
segments each LocalBufferPool can request, right?
I prefer to hard-code maxOverdraftBuffers rather than add a new
configuration option, and I hope to hear more from the community.
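For what it's worth, the hard-coded variant would be as simple as this
(constant name and value are made up):

    class OverdraftCapSketch {
        // No new user-facing option: the cap is a fixed constant per pool.
        private static final int MAX_OVERDRAFT_BUFFERS = 10; // assumed value
        private int overdraftBuffersInUse;

        boolean tryUseOverdraftBuffer() {
            if (overdraftBuffersInUse >= MAX_OVERDRAFT_BUFFERS) {
                return false; // exhausted; the task must back-pressure instead
            }
            overdraftBuffersInUse++;
            return true;
        }
    }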
Best wishes
fanr
Aitozi created FLINK-27451:
Summary: Enable the validator plugin in webhook
Key: FLINK-27451
URL: https://issues.apache.org/jira/browse/FLINK-27451
Project: Flink
Issue Type: Improvement
Thanks for driving this, Zhu and Lijie.
+1 for the overall proposal. Just sharing my two cents here:
- Why do we need to expose
cluster.resource-blacklist.item.timeout-check-interval to the user?
I think the semantics of `cluster.resource-blacklist.item.timeout` are
sufficient for the user. How to gua
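For reference, the two options side by side (the keys are taken from the
proposal; the values are made up):

    import org.apache.flink.configuration.Configuration;

    public class BlacklistOptions {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // How long a blacklisted resource stays on the list:
            conf.setString("cluster.resource-blacklist.item.timeout", "1 min");
            // The option in question; the suggestion above is to derive the
            // check interval internally instead of exposing it:
            conf.setString(
                    "cluster.resource-blacklist.item.timeout-check-interval", "30 s");
        }
    }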
Hi.
Thanks for Paul's update.
> It's better if we can also get the info about the cluster where the job
> is running through the DESCRIBE statement.
I just wonder how users can get the Web UI in application mode.
Therefore, it would be better to list the Web UI using the SHOW statement.
WDYT?