[jira] [Created] (FLINK-28725) flink-kubernetes-operator taskManager: replicas: 2 error
lizu18xz created FLINK-28725: Summary: flink-kubernetes-operator taskManager: replicas: 2 error Key: FLINK-28725 URL: https://issues.apache.org/jira/browse/FLINK-28725 Project: Flink Issue Type: Bug Reporter: lizu18xz Version: v1.1.0. Setting the following in the deployment spec: taskManager: replicas: 2 resource: memory: "1024m" cpu: 1 fails with: error validating data: ValidationError(FlinkDeployment.spec.taskManager): unknown field "replicas" in org.apache.flink.v1beta1.FlinkDeployment.spec.taskManager; if you choose to ignore these errors, turn validation off with --validate=false -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-28724) Test
HuangRufei created FLINK-28724: -- Summary: Test Key: FLINK-28724 URL: https://issues.apache.org/jira/browse/FLINK-28724 Project: Flink Issue Type: Bug Reporter: HuangRufei
Re: I want to contribute to Apache Flink.
Hi, Thanks for reaching out. Please follow https://flink.apache.org/contributing/contribute-code.html to start with your contribution. You don't need to be on any list to start contributing. Best, Xingbo On Thu, Jul 28, 2022 at 11:25, 黄如飞 <1369612...@qq.com.invalid> wrote: > Hi Guys, > I want to contribute to Apache Flink. > Would you please give me the permission as a contributor? > My JIRA username is Rufy666 > > > > > 黄如飞 > 1369612...@qq.com > > > >
I want to contribute to Apache Flink.
Hi Guys, I want to contribute to Apache Flink. Would you please give me the permission as a contributor? My JIRA username is Rufy666 黄如飞 1369612...@qq.com
[jira] [Created] (FLINK-28723) Support json format to serialize the MapData when its key is not STRING
Shengkai Fang created FLINK-28723: - Summary: Support json format to serialize the MapData when its key is not STRING Key: FLINK-28723 URL: https://issues.apache.org/jira/browse/FLINK-28723 Project: Flink Issue Type: Sub-task Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) Reporter: Shengkai Fang Currently, the JSON format only supports serializing a Map when its key is STRING. We may convert the key to a JSON string. For example, we can convert a `MAP<ARRAY<INT>, ARRAY<INT>>` to the following string. {code:java} { "[1, 2, 3]": [ 1, 2, 3 ] } {code}
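A minimal standalone sketch of the proposed behavior (class and helper names are illustrative, not the actual Flink JSON format internals): each non-STRING map key is first rendered as its own JSON text, then used as the object key.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class NonStringKeyJson {

    // Render an integer array as its JSON text, e.g. [1, 2, 3]
    static String toJsonArray(List<Integer> values) {
        return values.stream().map(String::valueOf)
                .collect(Collectors.joining(", ", "[", "]"));
    }

    // Serialize a MAP<ARRAY<INT>, ARRAY<INT>> by rendering each key as a JSON string
    static String toJsonObject(Map<List<Integer>, List<Integer>> map) {
        return map.entrySet().stream()
                .map(e -> "\"" + toJsonArray(e.getKey()) + "\": " + toJsonArray(e.getValue()))
                .collect(Collectors.joining(", ", "{", "}"));
    }

    public static void main(String[] args) {
        Map<List<Integer>, List<Integer>> map = new LinkedHashMap<>();
        map.put(List.of(1, 2, 3), List.of(1, 2, 3));
        System.out.println(toJsonObject(map)); // {"[1, 2, 3]": [1, 2, 3]}
    }
}
```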
[jira] [Created] (FLINK-28722) Hybrid Source should use .equals() for Integer comparison
Mason Chen created FLINK-28722: -- Summary: Hybrid Source should use .equals() for Integer comparison Key: FLINK-28722 URL: https://issues.apache.org/jira/browse/FLINK-28722 Project: Flink Issue Type: Improvement Components: Connectors / Common Affects Versions: 1.15.1 Reporter: Mason Chen Fix For: 1.16.0, 1.15.2 HybridSource should use .equals() for Integer comparison when filtering out the underlying sources. Because reference comparison with == only works for boxed Integers inside the JVM's cache range of -128 to 127, the HybridSource stops working once it reaches the 128th source (it would not work for anything past 127 sources). https://github.com/apache/flink/blob/release-1.14.3-rc1/flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/source/hybrid/HybridSourceSplitEnumerator.java#L358 A user reported this issue here: https://lists.apache.org/thread/7h2rblsdt7rjf85q9mhfht77bghtbswh
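A standalone sketch of the JVM behavior behind the bug (plain Java, not Flink code): boxed Integer values are cached only in [-128, 127] by default, so `==` compares cached references and happens to work for small values, then silently fails.

```java
public class IntegerCompare {
    public static void main(String[] args) {
        Integer small1 = 127, small2 = 127; // both resolve to the same cached object
        Integer big1 = 128, big2 = 128;     // distinct boxed objects outside the cache

        System.out.println(small1 == small2);  // true: same cached reference
        System.out.println(big1 == big2);      // false: reference comparison fails
        System.out.println(big1.equals(big2)); // true: value comparison is always safe
    }
}
```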
[jira] [Created] (FLINK-28721) Support Protobuf in DataStream API
Martijn Visser created FLINK-28721: -- Summary: Support Protobuf in DataStream API Key: FLINK-28721 URL: https://issues.apache.org/jira/browse/FLINK-28721 Project: Flink Issue Type: New Feature Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile) Reporter: Martijn Visser With FLINK-18202 merged and planned to be released in Flink 1.16, Flink will have support for Protobuf for Table API and SQL applications. Flink should also have support for Protobuf for DataStream API users, as it already does for CSV and Avro.
Re: [VOTE] FLIP-250: Support Customized Kubernetes Schedulers Proposal
+1 (binding) Gyula On Wed, 27 Jul 2022 at 03:52, MrL wrote: > +1 (non-binding) > > > On Jul 27, 2022 at 09:18, William Wang wrote: > > > > +1 (non-binding) > > > > On Mon, Jul 25, 2022 at 09:38, bo zhaobo wrote: > > > >> Hi all, > >> > >> Thank you very much for all the feedback after the discussion in [2][3]. > >> Now I'd like to proceed with the vote for FLIP-250 [1], as no more > >> objections > >> or issues were raised in the ML threads [2][3]. > >> > >> The vote will be open until July 28th at the earliest (at least 72 hours) unless > >> there is an objection or > >> insufficient votes. > >> > >> Thank you all. > >> > >> BR > >> > >> Bo Zhao > >> > >> [1] > >> > >> > https://cwiki.apache.org/confluence/display/FLINK/FLIP-250%3A+Support+Customized+Kubernetes+Schedulers+Proposal > >> [2] https://lists.apache.org/thread/pf8dvbvqf845wh0x63z68jmhh4pvsbow > >> [3] https://lists.apache.org/thread/zbylkkc6jojrqwds7tt02k2t8nw62h26 > >
Re: [VOTE] FLIP-243: Dedicated Opensearch connectors
Hello! I'd like to add my non-binding +1 for this FLIP. Full disclosure: as a colleague of Andriy, I sometimes hear the gory details of divergence between Elasticsearch and OpenSearch. Objectively, this is a good reason to create independent OpenSearch connectors. As a side comment, while Elasticsearch as a trademark and service mark never has an internal capital S, OpenSearch always does. All my best, Ryan On 2022/07/13 20:22:11 Andriy Redko wrote: > Hey Folks, > > Thanks a lot for all the feedback and comments so far. Based on the > discussion [1], > it seems like there is a genuine interest in supporting OpenSearch [2] > natively. With > that being said, I would like to start a vote on FLIP-243 [3]. > > The vote will last for at least 72 hours unless there is an objection or > insufficient votes. > > Thank you! > > [1] https://lists.apache.org/thread/jls0vqc7jb84jp14j4jok1pqfgo2cl30 > [2] https://opensearch.org/ > [3] > https://cwiki.apache.org/confluence/display/FLINK/FLIP-243%3A+Dedicated+Opensearch+connectors > > > Best Regards, > Andriy Redko > >
Re: [VOTE] FLIP-248: Introduce dynamic partition pruning
+1 (binding) Best, Jing Zhang On Wed, Jul 27, 2022 at 16:52, Jingsong Li wrote: > +1 > > On Wed, Jul 27, 2022 at 3:30 PM Jark Wu wrote: > > > > +1 (binding) > > > > Best, > > Jark > > > > On Wed, 27 Jul 2022 at 13:34, Yun Gao > wrote: > > > > > +1 (binding) > > > > > > Thanks for proposing the FLIP! > > > > > > Best, > > > Yun Gao > > > > > > > > > -- > > > From:Jing Ge > > > Send Time:2022 Jul. 27 (Wed.) 03:40 > > > To:undefined > > > Subject:Re: [VOTE] FLIP-248: Introduce dynamic partition pruning > > > > > > +1 > > > Thanks for driving this! > > > > > > On Tue, Jul 26, 2022 at 4:01 PM godfrey he > wrote: > > > > > > > Hi everyone, > > > > > > > > Thanks for all the feedback so far. Based on the discussion[1] we > seem > > > > to have consensus, so I would like to start a vote on FLIP-248 for > > > > which the FLIP has now also been updated[2]. > > > > > > > > The vote will last for at least 72 hours (Jul 29th 14:00 GMT) unless > > > > there is an objection or insufficient votes. > > > > > > > > [1] https://lists.apache.org/thread/v0b8pfh0o7rwtlok2mfs5s6q9w5vw8h6 > > > > [2] > > > > > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-248%3A+Introduce+dynamic+partition+pruning > > > > > > > > Best, > > > > Godfrey > > > > > > > >
Re: I want to contribute to Apache Flink.
Hi, Do you mean you want to have the access rights to the FLIP wiki page[1]? Your confluence ID is required in this case, which is different from the Jira ID. Generally, you don't need any specific permissions to make contributions. Please follow the introduction [2] and then find some Jira issues you might want to work on. If you have any questions, please feel free to ask on the ML. Best regards, Jing [1] https://cwiki.apache.org/confluence/display/FLINK/Flink+Improvement+Proposals [2] https://flink.apache.org/contributing/how-to-contribute.html On Wed, Jul 27, 2022 at 2:08 PM 黄如飞 <1369612...@qq.com.invalid> wrote: > Hi Guys, > > I want to contribute to Apache Flink. > Would you please give me the permission as a contributor? > My JIRA ID is Rufy666. > > > > > > 黄如飞 > 1369612...@qq.com > > > >
I want to contribute to Apache Flink.
Hi Guys, I want to contribute to Apache Flink. Would you please give me the permission as a contributor? My JIRA ID is Rufy666. 黄如飞 1369612...@qq.com
[jira] [Created] (FLINK-28720) Add Hive partition when flink has no data to write
tartarus created FLINK-28720: Summary: Add Hive partition when flink has no data to write Key: FLINK-28720 URL: https://issues.apache.org/jira/browse/FLINK-28720 Project: Flink Issue Type: Sub-task Components: Connectors / Hive Reporter: tartarus When writing data to a specified partition (static partition) of a Hive table with Flink SQL, the partition should be added just as Hive/Spark do, regardless of whether any data is written. We should also preserve INSERT INTO and INSERT OVERWRITE semantics.
[jira] [Created] (FLINK-28719) Mapping a data source before window aggregation causes Flink to stop handling late events correctly.
Mykyta Mykhailenko created FLINK-28719: -- Summary: Mapping a data source before window aggregation causes Flink to stop handling late events correctly Key: FLINK-28719 URL: https://issues.apache.org/jira/browse/FLINK-28719 Project: Flink Issue Type: Bug Components: API / DataStream Affects Versions: 1.15.1 Reporter: Mykyta Mykhailenko I have created a [repository|https://github.com/mykytamykhailenko/flink-map-with-issue] where I describe this issue in detail.
[jira] [Created] (FLINK-28718) SinkSavepointITCase.testRecoverFromSavepoint is unstable
Jingsong Lee created FLINK-28718: Summary: SinkSavepointITCase.testRecoverFromSavepoint is unstable Key: FLINK-28718 URL: https://issues.apache.org/jira/browse/FLINK-28718 Project: Flink Issue Type: Bug Components: Table Store Reporter: Jingsong Lee Fix For: table-store-0.2.0 https://github.com/apache/flink-table-store/runs/7537817210?check_suite_focus=true {code:java} Error: Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 185.274 s <<< FAILURE! - in org.apache.flink.table.store.connector.sink.SinkSavepointITCase Error: testRecoverFromSavepoint Time elapsed: 180.157 s <<< ERROR! org.junit.runners.model.TestTimedOutException: test timed out after 18 milliseconds at java.lang.Thread.sleep(Native Method) at org.apache.flink.table.store.connector.sink.SinkSavepointITCase.testRecoverFromSavepoint(SinkSavepointITCase.java:84) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) {code}
[jira] [Created] (FLINK-28717) Table Store Hive connector will throw exception if primary key fields are not selected
Caizhi Weng created FLINK-28717: --- Summary: Table Store Hive connector will throw exception if primary key fields are not selected Key: FLINK-28717 URL: https://issues.apache.org/jira/browse/FLINK-28717 Project: Flink Issue Type: Bug Components: Table Store Affects Versions: table-store-0.2.0, table-store-0.3.0 Reporter: Caizhi Weng Fix For: table-store-0.2.0, table-store-0.3.0 Table Store Hive connector implements projection pushdown by reading only the desired fields and setting the other, unread fields to null. However, primary key fields are declared NOT NULL, so {{RowData.FieldGetter}} will not check for null values for these types. This may cause an exception when primary key fields are not selected and are therefore set to null.
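A contrived sketch of the failure mode (illustrative only, not the actual Table Store code): a getter for a NOT NULL column uses the value directly with no null check, so a primary-key field that projection pushdown left as null fails on access.

```java
public class FieldGetterSketch {
    interface Row { Object field(int pos); }

    // For a nullable column the getter tolerates null; for a NOT NULL
    // column it uses the value directly, mirroring a generated getter
    // that omits the null check.
    static Object getField(Row row, int pos, boolean nullable) {
        if (nullable) {
            return row.field(pos); // may be null, caller is expected to check
        }
        return ((Integer) row.field(pos)) + 0; // NPE if the field was nulled out
    }

    public static void main(String[] args) {
        Row projected = pos -> null; // primary key not selected -> field set to null
        try {
            getField(projected, 0, false);
        } catch (NullPointerException e) {
            System.out.println("NPE on unselected NOT NULL primary key field");
        }
    }
}
```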
[jira] [Created] (FLINK-28716) Uploading multiple files/form data fails randomly when using the REST API
hehuiyuan created FLINK-28716: - Summary: Uploading multiple files/form data fails randomly when using the REST API Key: FLINK-28716 URL: https://issues.apache.org/jira/browse/FLINK-28716 Project: Flink Issue Type: Bug Reporter: hehuiyuan Errors can occur randomly when using the `jars/upload` REST API. {code:java} java.lang.IndexOutOfBoundsException: index: 1804, length: 1 (expected: range(0, 1804)) at org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBuf.checkRangeBounds(AbstractByteBuf.java:1390) at org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBuf.checkIndex0(AbstractByteBuf.java:1397) at org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1384) at org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBuf.checkIndex(AbstractByteBuf.java:1379) at org.apache.flink.shaded.netty4.io.netty.buffer.AbstractByteBuf.getByte(AbstractByteBuf.java:355) at org.apache.flink.shaded.netty4.io.netty.handler.codec.http.multipart.HttpPostBodyUtil.findDelimiter(HttpPostBodyUtil.java:238) at org.apache.flink.shaded.netty4.io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.loadDataMultipartOptimized(HttpPostMultipartRequestDecoder.java:1172) at org.apache.flink.shaded.netty4.io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.getFileUpload(HttpPostMultipartRequestDecoder.java:926) at org.apache.flink.shaded.netty4.io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.decodeMultipart(HttpPostMultipartRequestDecoder.java:572) at org.apache.flink.shaded.netty4.io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.findMultipartDisposition(HttpPostMultipartRequestDecoder.java:797) at org.apache.flink.shaded.netty4.io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.decodeMultipart(HttpPostMultipartRequestDecoder.java:511) at 
org.apache.flink.shaded.netty4.io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.findMultipartDelimiter(HttpPostMultipartRequestDecoder.java:663) at org.apache.flink.shaded.netty4.io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.decodeMultipart(HttpPostMultipartRequestDecoder.java:498) at org.apache.flink.shaded.netty4.io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.parseBodyMultipart(HttpPostMultipartRequestDecoder.java:463) at org.apache.flink.shaded.netty4.io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.parseBody(HttpPostMultipartRequestDecoder.java:432) at org.apache.flink.shaded.netty4.io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.offer(HttpPostMultipartRequestDecoder.java:347) at org.apache.flink.shaded.netty4.io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.offer(HttpPostMultipartRequestDecoder.java:54) at org.apache.flink.shaded.netty4.io.netty.handler.codec.http.multipart.HttpPostRequestDecoder.offer(HttpPostRequestDecoder.java:223) at org.apache.flink.runtime.rest.FileUploadHandler.channelRead0(FileUploadHandler.java:176) at org.apache.flink.runtime.rest.FileUploadHandler.channelRead0(FileUploadHandler.java:71) {code}
[jira] [Created] (FLINK-28715) Throw better exception when file not found in reading
Jingsong Lee created FLINK-28715: Summary: Throw better exception when file not found in reading Key: FLINK-28715 URL: https://issues.apache.org/jira/browse/FLINK-28715 Project: Flink Issue Type: Improvement Components: Table Store Reporter: Jingsong Lee Fix For: table-store-0.2.0 When reading a file that does not exist, we currently throw a plain file-not-found exception, which is often hard for users to interpret. We can make the exception message clearer, e.g.: "The file cannot be found. This may be because the read is too slow and the previous snapshot expired; you can configure a larger snapshot.time-retained or speed up your read."
[jira] [Created] (FLINK-28714) Resolve CVEs from beam-vendor-grpc-1_26_0-0.3
Bilna created FLINK-28714: - Summary: Resolve CVEs from beam-vendor-grpc-1_26_0-0.3 Key: FLINK-28714 URL: https://issues.apache.org/jira/browse/FLINK-28714 Project: Flink Issue Type: Bug Components: API / Python Affects Versions: 1.13.6 Reporter: Bilna The following CVEs come from the transitive dependency BouncyCastle 1.54, pulled in through the Apache Beam dependency in flink-python: CVE-2018-1000180, CVE-2016-1000352, CVE-2016-1000344, CVE-2016-1000340, CVE-2016-1000342, CVE-2016-1000343, CVE-2016-1000338. The issue comes from beam-vendor-grpc-1_26_0-0.3. The latest Flink uses Apache Beam 2.38.0, whose BouncyCastle version is 1.67. BouncyCastle should be of version 1.7 or greater. grpc-java 1.48.0 has removed the BouncyCastle dependency.
[jira] [Created] (FLINK-28713) Remove unused curator-test dependency from flink-test-utils
Chesnay Schepler created FLINK-28713: Summary: Remove unused curator-test dependency from flink-test-utils Key: FLINK-28713 URL: https://issues.apache.org/jira/browse/FLINK-28713 Project: Flink Issue Type: Technical Debt Components: Build System, Tests Reporter: Chesnay Schepler Assignee: Chesnay Schepler Fix For: 1.16.0 Remove an unused dependency that also pulls in log4j1 into user projects.
[jira] [Created] (FLINK-28712) Default Changelog all when changelog producer is input
Jingsong Lee created FLINK-28712: Summary: Default Changelog all when changelog producer is input Key: FLINK-28712 URL: https://issues.apache.org/jira/browse/FLINK-28712 Project: Flink Issue Type: Improvement Components: Table Store Reporter: Jingsong Lee Assignee: Jingsong Lee Fix For: table-store-0.2.0 When the changelog producer is input, it is implied that the file already contains all the changelogs.
[jira] [Created] (FLINK-28711) Hive connector implements SupportsDynamicFiltering interface
godfrey he created FLINK-28711: -- Summary: Hive connector implements SupportsDynamicFiltering interface Key: FLINK-28711 URL: https://issues.apache.org/jira/browse/FLINK-28711 Project: Flink Issue Type: Sub-task Components: Connectors / Hive Reporter: godfrey he Fix For: 1.16.0
[jira] [Created] (FLINK-28710) Transform dpp ExecNode to StreamGraph
godfrey he created FLINK-28710: -- Summary: Transform dpp ExecNode to StreamGraph Key: FLINK-28710 URL: https://issues.apache.org/jira/browse/FLINK-28710 Project: Flink Issue Type: Sub-task Components: Table SQL / Planner Reporter: godfrey he
[jira] [Created] (FLINK-28709) Implement dynamic filtering operators
godfrey he created FLINK-28709: -- Summary: Implement dynamic filtering operators Key: FLINK-28709 URL: https://issues.apache.org/jira/browse/FLINK-28709 Project: Flink Issue Type: Sub-task Reporter: godfrey he
[jira] [Created] (FLINK-28708) Introduce planner rules to optimize dpp pattern
godfrey he created FLINK-28708: -- Summary: Introduce planner rules to optimize dpp pattern Key: FLINK-28708 URL: https://issues.apache.org/jira/browse/FLINK-28708 Project: Flink Issue Type: Sub-task Components: Table SQL / Planner Reporter: godfrey he Fix For: 1.16.0
Re: [VOTE] FLIP-247 Bulk fetch of table and column statistics for given partitions
+1 non-binding Best, Rui Fan On Tue, Jul 26, 2022 at 4:05 PM Jingsong Li wrote: > +1 > > Best, > Jingsong > > On Tue, Jul 26, 2022 at 10:11 AM godfrey he wrote: > > > > +1 > > > > Best, > > Godfrey > > > > On Mon, Jul 25, 2022 at 17:23, Jark Wu wrote: > > > > > > +1 (binding) > > > > > > Best, > > > Jark > > > > > > On Mon, 25 Jul 2022 at 15:10, Jing Ge wrote: > > > > > > > Hi all, > > > > > > > > Many thanks for all your feedback. Based on the discussion in [1], > I'd like > > > > to start a vote on FLIP-247 [2]. > > > > > > > > The vote will last for at least 72 hours unless there is an > objection or > > > > insufficient votes. > > > > > > > > Best regards, > > > > Jing > > > > > > > > [1] https://lists.apache.org/thread/sgd36d8s8crc822xt57jxvb6m1k6t07o > > > > [2] > > > > > > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-247%3A+Bulk+fetch+of+table+and+column+statistics+for+given+partitions > > > > >
[jira] [Created] (FLINK-28707) Introduce interface SupportsDynamicPartitionPruning
godfrey he created FLINK-28707: -- Summary: Introduce interface SupportsDynamicPartitionPruning Key: FLINK-28707 URL: https://issues.apache.org/jira/browse/FLINK-28707 Project: Flink Issue Type: Sub-task Components: Table SQL / API Affects Versions: 1.16.0 Reporter: godfrey he Fix For: 1.16.0
[jira] [Created] (FLINK-28706) FLIP-248: Introduce dynamic partition pruning
godfrey he created FLINK-28706: -- Summary: FLIP-248: Introduce dynamic partition pruning Key: FLINK-28706 URL: https://issues.apache.org/jira/browse/FLINK-28706 Project: Flink Issue Type: New Feature Components: Connectors / Hive, Runtime / Coordination, Table SQL / Planner, Table SQL / Runtime Affects Versions: 1.16.0 Reporter: godfrey he Please refer to https://cwiki.apache.org/confluence/display/FLINK/FLIP-248%3A+Introduce+dynamic+partition+pruning for more details
[jira] [Created] (FLINK-28705) Update copyright year to 2014-2022 in NOTICE files
Nicholas Jiang created FLINK-28705: -- Summary: Update copyright year to 2014-2022 in NOTICE files Key: FLINK-28705 URL: https://issues.apache.org/jira/browse/FLINK-28705 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.3.0 Reporter: Nicholas Jiang Fix For: table-store-0.3.0 Copyright year of the NOTICE file in Flink Table Store should be '2014-2022'.
Re: [VOTE] FLIP-248: Introduce dynamic partition pruning
+1 On Wed, Jul 27, 2022 at 3:30 PM Jark Wu wrote: > > +1 (binding) > > Best, > Jark > > On Wed, 27 Jul 2022 at 13:34, Yun Gao wrote: > > > +1 (binding) > > > > Thanks for proposing the FLIP! > > > > Best, > > Yun Gao > > > > > > -- > > From:Jing Ge > > Send Time:2022 Jul. 27 (Wed.) 03:40 > > To:undefined > > Subject:Re: [VOTE] FLIP-248: Introduce dynamic partition pruning > > > > +1 > > Thanks for driving this! > > > > On Tue, Jul 26, 2022 at 4:01 PM godfrey he wrote: > > > > > Hi everyone, > > > > > > Thanks for all the feedback so far. Based on the discussion[1] we seem > > > to have consensus, so I would like to start a vote on FLIP-248 for > > > which the FLIP has now also been updated[2]. > > > > > > The vote will last for at least 72 hours (Jul 29th 14:00 GMT) unless > > > there is an objection or insufficient votes. > > > > > > [1] https://lists.apache.org/thread/v0b8pfh0o7rwtlok2mfs5s6q9w5vw8h6 > > > [2] > > > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-248%3A+Introduce+dynamic+partition+pruning > > > > > > Best, > > > Godfrey > > > > >
[jira] [Created] (FLINK-28704) Document should add a description that change-log mode cannot support map and multiset data type
Jane Chan created FLINK-28704: - Summary: Document should add a description that change-log mode cannot support map and multiset data type Key: FLINK-28704 URL: https://issues.apache.org/jira/browse/FLINK-28704 Project: Flink Issue Type: Improvement Components: Table Store Affects Versions: table-store-0.2.0 Reporter: Jane Chan Fix For: table-store-0.2.0
[jira] [Created] (FLINK-28703) Describe partitioned spark table lost partition info
Jane Chan created FLINK-28703: - Summary: Describe partitioned spark table lost partition info Key: FLINK-28703 URL: https://issues.apache.org/jira/browse/FLINK-28703 Project: Flink Issue Type: Bug Components: Table Store Affects Versions: table-store-0.2.0 Reporter: Jane Chan Fix For: table-store-0.2.0
[jira] [Created] (FLINK-28702) Why can't "scan.incremental.snapshot.enabled" be set when using datastream source in MySQL CDC connector
liujian created FLINK-28702: --- Summary: Why can't "scan.incremental.snapshot.enabled" be set when using datastream source in MySQL CDC connector Key: FLINK-28702 URL: https://issues.apache.org/jira/browse/FLINK-28702 Project: Flink Issue Type: Improvement Components: API / DataStream Reporter: liujian
Re: [VOTE] FLIP-248: Introduce dynamic partition pruning
+1 (binding) Best, Jark On Wed, 27 Jul 2022 at 13:34, Yun Gao wrote: > +1 (binding) > > Thanks for proposing the FLIP! > > Best, > Yun Gao > > > -- > From:Jing Ge > Send Time:2022 Jul. 27 (Wed.) 03:40 > To:undefined > Subject:Re: [VOTE] FLIP-248: Introduce dynamic partition pruning > > +1 > Thanks for driving this! > > On Tue, Jul 26, 2022 at 4:01 PM godfrey he wrote: > > > Hi everyone, > > > > Thanks for all the feedback so far. Based on the discussion[1] we seem > > to have consensus, so I would like to start a vote on FLIP-248 for > > which the FLIP has now also been updated[2]. > > > > The vote will last for at least 72 hours (Jul 29th 14:00 GMT) unless > > there is an objection or insufficient votes. > > > > [1] https://lists.apache.org/thread/v0b8pfh0o7rwtlok2mfs5s6q9w5vw8h6 > > [2] > > > https://cwiki.apache.org/confluence/display/FLINK/FLIP-248%3A+Introduce+dynamic+partition+pruning > > > > Best, > > Godfrey > > >
[jira] [Created] (FLINK-28701) Minimize the explosion range during failover for hybrid shuffle
Weijie Guo created FLINK-28701: -- Summary: Minimize the explosion range during failover for hybrid shuffle Key: FLINK-28701 URL: https://issues.apache.org/jira/browse/FLINK-28701 Project: Flink Issue Type: Sub-task Components: Runtime / Coordination Affects Versions: 1.16.0 Reporter: Weijie Guo In hybrid shuffle mode, there are currently two strategies to control spilling. With the full spilling strategy, the data is guaranteed to be persisted to disk after the task finishes. When a failover occurs, if the upstream has finished, the data can be recovered directly from the disk file without re-computation. With the selective spilling strategy, the entire topology must be restarted.