[jira] [Created] (FLINK-34173) Implement CatalogTable.Builder

2024-01-19 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-34173:
--

 Summary: Implement CatalogTable.Builder
 Key: FLINK-34173
 URL: https://issues.apache.org/jira/browse/FLINK-34173
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes
Assignee: Jim Hughes
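The ticket carries no description, but a builder for an immutable table descriptor conventionally looks like the following self-contained sketch. Every name here (TableDescriptor, comment(), option()) is a hypothetical stand-in for illustration, not the actual Flink CatalogTable API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a builder-style API for an immutable table
// descriptor. The real CatalogTable.Builder lives in flink-table-common
// and will differ in fields and naming.
final class TableDescriptor {
    private final String comment;
    private final Map<String, String> options;

    private TableDescriptor(Builder b) {
        this.comment = b.comment;
        // Defensive copy so the built object stays immutable.
        this.options = new LinkedHashMap<>(b.options);
    }

    String comment() { return comment; }
    Map<String, String> options() { return options; }

    static Builder builder() { return new Builder(); }

    static final class Builder {
        private String comment = "";
        private final Map<String, String> options = new LinkedHashMap<>();

        Builder comment(String c) { this.comment = c; return this; }
        Builder option(String k, String v) { this.options.put(k, v); return this; }
        TableDescriptor build() { return new TableDescriptor(this); }
    }
}
```

The builder accumulates mutable state and the private constructor copies it, so callers can chain `builder().comment(...).option(...).build()` without exposing mutability.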






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34172) Add support for altering a distribution via ALTER TABLE

2024-01-19 Thread Jim Hughes (Jira)
Jim Hughes created FLINK-34172:
--

 Summary: Add support for altering a distribution via ALTER TABLE 
 Key: FLINK-34172
 URL: https://issues.apache.org/jira/browse/FLINK-34172
 Project: Flink
  Issue Type: Sub-task
Reporter: Jim Hughes








[jira] [Created] (FLINK-34171) Cannot restore from savepoint when increasing parallelism of operator using reinterpretAsKeyedStream and RichAsyncFunction

2024-01-19 Thread Ken Burford (Jira)
Ken Burford created FLINK-34171:
---

 Summary: Cannot restore from savepoint when increasing parallelism 
of operator using reinterpretAsKeyedStream and RichAsyncFunction
 Key: FLINK-34171
 URL: https://issues.apache.org/jira/browse/FLINK-34171
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream, Runtime / Checkpointing, Runtime / 
State Backends
Affects Versions: 1.17.0
Reporter: Ken Burford


We recently upgraded from Flink 1.14.2 to 1.17.0. Our job has not materially 
changed beyond a few feature changes (enabling snapshot compression, unaligned 
checkpoints), but we're seeing the following exception when attempting to 
adjust the parallelism of our job up or down:
{code:java}
2024-01-16 12:29:50 java.lang.RuntimeException: Exception occurred while setting the current key context.
    at org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.setCurrentKey(StreamOperatorStateHandler.java:373)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator.setCurrentKey(AbstractStreamOperator.java:508)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator.setKeyContextElement(AbstractStreamOperator.java:503)
    at org.apache.flink.streaming.api.operators.AbstractStreamOperator.setKeyContextElement1(AbstractStreamOperator.java:478)
    at org.apache.flink.streaming.api.operators.OneInputStreamOperator.setKeyContextElement(OneInputStreamOperator.java:36)
    at org.apache.flink.streaming.runtime.io.RecordProcessorUtils.lambda$getRecordProcessor$0(RecordProcessorUtils.java:59)
    at org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:94)
    at org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:75)
    at org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39)
    at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:51)
    at org.apache.flink.streaming.api.operators.async.queue.StreamRecordQueueEntry.emitResult(StreamRecordQueueEntry.java:64)
    at org.apache.flink.streaming.api.operators.async.queue.UnorderedStreamElementQueue$Segment.emitCompleted(UnorderedStreamElementQueue.java:272)
    at org.apache.flink.streaming.api.operators.async.queue.UnorderedStreamElementQueue.emitCompletedElement(UnorderedStreamElementQueue.java:159)
    at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.outputCompletedElement(AsyncWaitOperator.java:393)
    at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.access$1800(AsyncWaitOperator.java:92)
    at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator$ResultHandler.processResults(AsyncWaitOperator.java:621)
    at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator$ResultHandler.lambda$processInMailbox$0(AsyncWaitOperator.java:602)
    at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50)
    at org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90)
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMail(MailboxProcessor.java:398)
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsNonBlocking(MailboxProcessor.java:383)
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:345)
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:229)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:712)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:675)
    at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952)
    at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:921)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:745)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
    at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.IllegalArgumentException: Key group 30655 is not in KeyGroupRange{startKeyGroup=19346, endKeyGroup=19360}. Unless you're directly using low level state access APIs, this is most likely caused by non-deterministic shuffle key (hashCode and equals implementation).
    at org.apache.flink.runtime.state.KeyGroupRangeOffsets.newIllegalKeyGroupException(KeyGroupRangeOffsets.java:37)
    at org.apache.flink.runtime.state.heap.InternalKeyContextImpl.setCurrentKeyGroupIndex(InternalKeyContextImpl.java:77)
    at org.apache.flink.runtime.state.AbstractKeyedStateBackend.setCurrentKey(AbstractKeyedStateBackend.java:250)
    at
{code}
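The "Key group ... is not in KeyGroupRange" message in the trace above points at a key type whose hashCode()/equals() is not stable across runs. The sketch below illustrates the mechanism conceptually; Flink's actual assignment (KeyGroupRangeAssignment) additionally applies a murmur hash on top of hashCode(), which is omitted here for brevity, and the class names are illustrative only:

```java
// Simplified illustration of key-group assignment: a key's hashCode()
// decides its key group, so the assignment is only stable across
// restarts/rescaling if hashCode() is value-based. A key class that does
// not override hashCode()/equals() gets an identity hash, and the "same"
// logical key can land in a different key group after a restore --
// producing the IllegalArgumentException seen in the trace.
final class KeyGroupDemo {
    static int keyGroupFor(Object key, int maxParallelism) {
        // floorMod keeps the result in [0, maxParallelism) even for
        // negative hash codes.
        return Math.floorMod(key.hashCode(), maxParallelism);
    }

    // Key without hashCode/equals: identity hash, non-deterministic
    // across JVM runs.
    static final class BadKey {
        final String id;
        BadKey(String id) { this.id = id; }
    }

    // Key with value-based hashCode/equals: deterministic.
    static final class GoodKey {
        final String id;
        GoodKey(String id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof GoodKey && ((GoodKey) o).id.equals(id);
        }
        @Override public int hashCode() { return id.hashCode(); }
    }
}
```

With reinterpretAsKeyedStream, Flink trusts that the stream is already partitioned exactly this way, so any instability in the key's hash surfaces only when parallelism changes and key-group ranges are re-split.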

[jira] [Created] (FLINK-34170) Include the look up join conditions in the optimised plan.

2024-01-19 Thread david radley (Jira)
david radley created FLINK-34170:


 Summary: Include the look up join conditions in the optimised plan.
 Key: FLINK-34170
 URL: https://issues.apache.org/jira/browse/FLINK-34170
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Reporter: david radley


As per 
[https://github.com/apache/flink-connector-jdbc/pull/79#discussion_r1458664773]

[~libenchao] asked that I raise this issue to include the lookup join 
conditions in the optimised plan, in line with the scan conditions. The JDBC 
and other lookup sources could then be updated to pick up the conditions from 
the plan. 

 





[ANNOUNCE] Apache Flink 1.18.1 released

2024-01-19 Thread Jing Ge
The Apache Flink community is very happy to announce the release of Apache
Flink 1.18.1, which is the first bugfix release for the Apache Flink 1.18
series.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of the improvements
for this bugfix release:
https://flink.apache.org/2024/01/19/apache-flink-1.18.1-release-announcement/

Please note: users that have state compression enabled should not migrate to
1.18.1 (nor 1.18.0) due to a critical bug that could lead to data loss. Please
refer to FLINK-34063 for more information.

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12353640

We would like to thank all contributors of the Apache Flink community who
made this release possible! Special thanks to @Qingsheng Ren, @Leonard Xu,
@Xintong Song, @Matthias Pohl, and @Martijn Visser for their support during
this release.

A Jira task series based on the Flink release wiki has been created for the
1.18.1 release. Tasks that need to be done by the PMC have been explicitly
created separately, making it convenient for the release manager to reach
out to the PMC for those tasks. Any future patch release could consider
cloning it and following the standard release process.
https://issues.apache.org/jira/browse/FLINK-33824

Feel free to reach out to the release managers (or respond to this thread)
with feedback on the release process. Our goal is to constantly improve the
release process, so feedback on what could be improved or what didn't
go so well is appreciated.

Regards,
Jing


Re: Unsubscribe (退订)

2024-01-19 Thread 米子日匀

Unsubscribe

The following is the content of the forwarded email
From: "李乐"
To: dev
Date: 2024-01-19 11:40:19
Subject: Unsubscribe

Unsubscribe


DataOutputSerializer serializing long UTF Strings

2024-01-19 Thread Péter Váry
Hi Team,

During the root cause analysis of an Iceberg serialization issue [1], we
have found that *DataOutputSerializer.writeUTF* has a hard limit on the
length of the string (64k). This is inherited from the *DataOutput.writeUTF*
method, where the JDK specifically defines this limit [2].

For our use case we need to be able to serialize longer UTF
strings, so we will need to define a *writeLongUTF* method with a
specification similar to *writeUTF*, but without the length limit.

My questions are:
- Is this something that would be useful for every Flink user? Should we add
this method to *DataOutputSerializer*?
- Or is it specific to Iceberg, and should we keep it in the Iceberg
connector code?

Thanks,
Peter

[1] - https://github.com/apache/iceberg/issues/9410
[2] -
https://docs.oracle.com/javase/8/docs/api/java/io/DataOutput.html#writeUTF-java.lang.String-
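The 64k limit comes from writeUTF's unsigned 16-bit length prefix. One possible shape for a writeLongUTF, sketched here with plain java.io as an assumption rather than the eventual Flink signature, is to length-prefix the bytes with a 4-byte int instead (note this sketch uses standard UTF-8, whereas DataOutput.writeUTF uses modified UTF-8, which differs for supplementary characters and embedded NULs):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Sketch of a length-unlimited writeLongUTF: DataOutput.writeUTF writes
// an unsigned 16-bit byte-length prefix (hence the 64k limit); here we
// write a 4-byte int prefix instead. Names and framing are illustrative,
// not the actual DataOutputSerializer API.
final class LongUtf {
    static void writeLongUTF(DataOutputStream out, String s) throws IOException {
        byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
        out.writeInt(bytes.length);   // int prefix instead of unsigned short
        out.write(bytes);
    }

    static String readLongUTF(DataInputStream in) throws IOException {
        byte[] bytes = new byte[in.readInt()];
        in.readFully(bytes);          // read exactly the prefixed length
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```

A symmetric readLongUTF is needed on the deserializer side, since the int-prefixed framing is not compatible with readUTF.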


Re: Re: [VOTE] FLIP-389: Annotate SingleThreadFetcherManager as PublicEvolving

2024-01-19 Thread Ron liu
+1(binding)

Best,
Ron

Xuyang wrote on Friday, January 19, 2024 at 14:00:

> +1 (non-binding)
>
>
> --
>
> Best!
> Xuyang
>
>
>
>
>
> On 2024-01-19 10:16:23, "Qingsheng Ren" wrote:
> >+1 (binding)
> >
> >Thanks for the work, Hongshun!
> >
> >Best,
> >Qingsheng
> >
> >On Tue, Jan 16, 2024 at 11:21 AM Leonard Xu  wrote:
> >
> >> Thanks Hongshun for driving this !
> >>
> >> +1(binding)
> >>
> >> Best,
> >> Leonard
> >>
> >> > On January 3, 2024 at 8:04 PM, Hongshun Wang wrote:
> >> >
> >> > Dear Flink Developers,
> >> >
> >> > Thank you for providing feedback on FLIP-389: Annotate
> >> > SingleThreadFetcherManager as PublicEvolving[1] on the discussion
> >> > thread[2]. The goal of the FLIP is as follows:
> >> >
> >> >   - To expose the SplitFetcherManager / SingleThreadFetcherManager as
> >> >   Public, allowing connector developers to easily create their own
> >> threading
> >> >   models in the SourceReaderBase by implementing addSplits(),
> >> removeSplits(),
> >> >   maybeShutdownFinishedFetchers() and other functions.
> >> >   - To hide the element queue from the connector developers and
> simplify
> >> >   the SourceReaderBase to consist of only SplitFetcherManager and
> >> >   RecordEmitter as major components.
> >> >
> >> >
> >> > Any additional questions regarding this FLIP? Looking forward to
> hearing
> >> > from you.
> >> >
> >> >
> >> > Thanks,
> >> > Hongshun Wang
> >> >
> >> >
> >> > [1]
> >> >
> >>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=278465498
> >> >
> >> > [2] https://lists.apache.org/thread/b8f509878ptwl3kmmgg95tv8sb1j5987
> >>
> >>
>
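The two goals quoted in the thread above, a pluggable threading model behind a fetcher-manager facade with the element queue hidden from connector developers, can be illustrated with a Flink-free sketch. All names below are simplified stand-ins for the real SplitFetcherManager API, assumed for illustration only:

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Conceptual sketch of the FLIP-389 shape: the fetcher manager owns both
// the fetch thread(s) and the element queue, so a SourceReaderBase-style
// caller only ever calls addSplits()/poll() and never sees the queue type.
final class SingleThreadFetcherSketch<SplitT, E> {
    interface SplitReader<SplitT, E> { E fetch(SplitT split); }

    private final ExecutorService fetcher = Executors.newSingleThreadExecutor();
    private final BlockingQueue<E> elements = new LinkedBlockingQueue<>(); // hidden
    private final SplitReader<SplitT, E> reader;

    SingleThreadFetcherSketch(SplitReader<SplitT, E> reader) { this.reader = reader; }

    // Each split is handed to the single fetch thread; results land in the
    // internal queue. A multi-threaded variant would only change the
    // executor, not the caller-facing API.
    void addSplits(List<SplitT> splits) {
        for (SplitT s : splits) {
            fetcher.submit(() -> elements.add(reader.fetch(s)));
        }
    }

    E poll(long timeoutMillis) throws InterruptedException {
        return elements.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    void shutdown() { fetcher.shutdown(); }
}
```

The point of the design is that swapping the single-thread executor for a pool changes the threading model without touching the caller, which is what making SplitFetcherManager public while hiding the queue buys connector authors.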


[jira] [Created] (FLINK-34169) [benchmark] CI fails during test running

2024-01-19 Thread Zakelly Lan (Jira)
Zakelly Lan created FLINK-34169:
---

 Summary: [benchmark] CI fails during test running
 Key: FLINK-34169
 URL: https://issues.apache.org/jira/browse/FLINK-34169
 Project: Flink
  Issue Type: Bug
  Components: Benchmarks
Reporter: Zakelly Lan
Assignee: Yunfeng Zhou


CI link:
https://github.com/apache/flink-benchmarks/actions/runs/7580834955/job/20647663157?pr=85

which says:
{code:java}
// omit some stack traces
Caused by: java.util.concurrent.ExecutionException: Boxed Error
    at scala.concurrent.impl.Promise$.resolver(Promise.scala:87)
    at scala.concurrent.impl.Promise$.scala$concurrent$impl$Promise$$resolveTry(Promise.scala:79)
    at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:284)
    at org.apache.pekko.pattern.PromiseActorRef.$bang(AskSupport.scala:629)
    at org.apache.pekko.actor.ActorRef.tell(ActorRef.scala:141)
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcInvocation(PekkoRpcActor.java:317)
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcMessage(PekkoRpcActor.java:222)
    ... 22 more
Caused by: java.lang.NoClassDefFoundError: javax/activation/UnsupportedDataTypeException
    at org.apache.flink.runtime.io.network.partition.consumer.SingleInputGateFactory.createKnownInputChannel(SingleInputGateFactory.java:387)
    at org.apache.flink.runtime.io.network.partition.consumer.SingleInputGateFactory.lambda$createInputChannel$2(SingleInputGateFactory.java:353)
    at org.apache.flink.runtime.shuffle.ShuffleUtils.applyWithShuffleTypeCheck(ShuffleUtils.java:51)
    at org.apache.flink.runtime.io.network.partition.consumer.SingleInputGateFactory.createInputChannel(SingleInputGateFactory.java:333)
    at org.apache.flink.runtime.io.network.partition.consumer.SingleInputGateFactory.createInputChannelsAndTieredStorageService(SingleInputGateFactory.java:284)
    at org.apache.flink.runtime.io.network.partition.consumer.SingleInputGateFactory.create(SingleInputGateFactory.java:204)
    at org.apache.flink.runtime.io.network.NettyShuffleEnvironment.createInputGates(NettyShuffleEnvironment.java:265)
    at org.apache.flink.runtime.taskmanager.Task.<init>(Task.java:418)
    at org.apache.flink.runtime.taskexecutor.TaskExecutor.submitTask(TaskExecutor.java:821)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRpcInvocation$1(PekkoRpcActor.java:309)
    at org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcInvocation(PekkoRpcActor.java:307)
    ... 23 more
Caused by: java.lang.ClassNotFoundException: javax.activation.UnsupportedDataTypeException
    at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
    at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:527)
    ... 39 more
{code}




