Congratulations!!!
Best,
Jiadong Lu
On 2023/3/27 17:23, Yu Li wrote:
Dear Flinkers,
As you may have noticed, we are pleased to announce that Flink Table Store has
joined the Apache Incubator as a separate project called Apache
Paimon(incubating) [1] [2] [3]. The new project still aims at
Hi Sofya,
The MiniClusterWithClientResource does not provide a web UI by default,
but you can enable one for debugging by adding the flink-runtime-web
dependency.
Add this dependency to your pom.xml and Flink will load the web UI
automatically:
```xml
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-runtime-web</artifactId>
</dependency>
```
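For reference, a minimal sketch (assuming Flink's test utilities and flink-runtime-web are on the classpath) of pinning the mini cluster's REST port so the web UI is reachable at a known URL; the port value and class name here are illustrative choices, not from the original mail:

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.RestOptions;
import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
import org.apache.flink.test.util.MiniClusterWithClientResource;

public class MiniClusterUiSketch {
    public static void main(String[] args) throws Exception {
        // Pin the REST/UI port instead of letting the mini cluster pick a random one.
        Configuration conf = new Configuration();
        conf.setInteger(RestOptions.PORT, 8081);

        MiniClusterWithClientResource cluster = new MiniClusterWithClientResource(
                new MiniClusterResourceConfiguration.Builder()
                        .setConfiguration(conf)
                        .setNumberTaskManagers(1)
                        .setNumberSlotsPerTaskManager(2)
                        .build());

        cluster.before(); // start the mini cluster
        try {
            // With flink-runtime-web on the classpath, the UI is served here:
            System.out.println("http://localhost:8081");
        } finally {
            cluster.after(); // shut the cluster down
        }
    }
}
```

Without flink-runtime-web the same cluster starts fine, but the REST endpoint only serves the API, not the UI.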
Hello, I am having trouble deploying Apache Flink 1.16.1 on 2 Google
Cloud instances with Docker Swarm. The JobManager is deployed on the
manager node and the TaskManager on the worker node. The
TaskManager seems to have trouble communicating with the ResourceManager on
the JobManager
Hi,
I'm experimenting with the MiniClusterWithClientResource, below, and when I
print out the URL I'm not able to access a UI. Is
the MiniClusterWithClientResource expected to provide a UI?
Thanks
-Sofya
The commands for updating the CRD seem to assume they are being run from the
flink-operator repo. Accordingly, to run them in my environment I should run
them as:
```bash
# upgrade CRD
kubectl replace -f
```
Hey Martijn,
The version is 1.16.0
On Wed, Mar 29, 2023 at 5:43 PM Martijn Visser
wrote:
> Hi Reem,
>
> What's the Flink version where you're encountering this issue?
>
> Best regards,
>
> Martijn
>
> On Wed, Mar 29, 2023 at 5:18 PM Reem Razak via user
> wrote:
>
>> Hey there!
>>
>> We are
Hi Reem,
What's the Flink version where you're encountering this issue?
Best regards,
Martijn
On Wed, Mar 29, 2023 at 5:18 PM Reem Razak via user
wrote:
> Hey there!
>
> We are seeing a second Flink pipeline encountering similar issues when
> configuring both `withWatermarkAlignment` and
Hey there!
We are seeing a second Flink pipeline encountering similar issues when
configuring both `withWatermarkAlignment` and `withIdleness`. The
unexpected behaviour gets triggered after a Kafka cluster failover. Any
thoughts on there being an incompatibility between the two?
Thanks!
On Wed,
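For context, a sketch of how the two options mentioned above are typically combined on one watermark strategy (the alignment group name and durations are illustrative assumptions, not values from the original report):

```java
import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

public class WatermarkSketch {
    // Combines alignment and idleness on a single strategy; whether the two
    // interact correctly after a Kafka cluster failover is exactly the
    // question raised in the thread above.
    static WatermarkStrategy<String> strategy() {
        return WatermarkStrategy
                .<String>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                // group name, max allowed drift, update interval
                .withWatermarkAlignment("alignment-group",
                        Duration.ofSeconds(20), Duration.ofSeconds(1))
                // mark a split idle after one minute without records
                .withIdleness(Duration.ofMinutes(1));
    }
}
```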
Hi
The auto-increment ID is used to generate unique class names for the multiple codegen classes of the same job.
Metaspace can generally be reclaimed by a full GC; you can check your cluster's metaspace size and whether a full GC has been triggered.
Best,
Shammon FY
On Wednesday, March 29, 2023, tanjialiang wrote:
> Hi all,
> I have a job that is periodically submitted to the same session cluster via the flink kubernetes operator (under the hood, the flink
>
Hi,
This error occurs when the data type cannot be parsed. You can read this
section for more details about User-Defined Data Types [1].
Best,
Hang
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/table/types/#user-defined-data-types
柒朵 <1303809...@qq.com>
Exception in thread "main" org.apache.flink.table.api.ValidationException: Could not extract a data type from 'class
UserStatus'. Please pass the required data type manually or allow RAW types.
Hi,
1. In case of S3 FileSystem, Flink uses the multipart upload process [1]
for better performance. It might not be obvious at first by looking at the
docs, but it's noted at the bottom of the FileSystem page [2]
For more information you can also check FLINK-9751 and FLINK-9752
2. In case of
Congratulations!
Dong
On Mon, Mar 27, 2023 at 5:24 PM Yu Li wrote:
> Dear Flinkers,
>
>
>
> As you may have noticed, we are pleased to announce that Flink Table Store
> has joined the Apache Incubator as a separate project called Apache
> Paimon(incubating) [1] [2] [3]. The new project still
Unsubscribe
Hi all,
I have a job that is periodically submitted to the same session cluster via the flink
kubernetes operator (under the hood, the Flink SQL-to-JobGraph conversion is pushed down to the JobManager). After it had been running for a while, the JobManager reported a metaspace OOM.
Investigation showed that the classes generated by Flink SQL codegen carry an auto-increment ID, and these classes are not released after use.
Questions:
1. Is there any special reason for the auto-increment ID in Flink SQL codegen?
2. In Java, how can classes loaded through a class loader be released?
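On question 2, a self-contained sketch (plain JDK, no Flink involved) of the usual answer: a class can only be unloaded when its class loader, its Class object, and all instances become unreachable, at which point a GC cycle may reclaim the loader and its metaspace. Collection is at the JVM's discretion, so the retry loop below is a best-effort demonstration, not a guarantee:

```java
import java.lang.ref.WeakReference;
import java.net.URL;
import java.net.URLClassLoader;

public class LoaderUnloadDemo {
    /** Drops the only strong reference to a throwaway class loader and
     *  polls a WeakReference to see whether GC has reclaimed it. */
    static boolean loaderCollected() throws InterruptedException {
        URLClassLoader loader = new URLClassLoader(new URL[0]);
        WeakReference<URLClassLoader> ref = new WeakReference<>(loader);
        loader = null; // no strong references to the loader remain
        for (int i = 0; i < 20 && ref.get() != null; i++) {
            System.gc();      // request (not force) a collection
            Thread.sleep(50);
        }
        return ref.get() == null;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("loader collected: " + loaderCollected());
    }
}
```

For the Flink case this means the codegen'd classes can only go away once nothing holds on to the generated Class objects or their class loader, so a metaspace that never shrinks usually points at a lingering reference rather than at the auto-increment ID itself.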
Hi,
We are trying to use Flink's File sink to distribute files to AWS S3 storage. We
are using the Flink-provided Hadoop s3a connector as a plugin.
We have some observations we need to clarify:
1. When using the file sink for local filesystem distribution, we can see that the
sink creates 3