Hello:
Now we can't add a shuffle operation in a SQL job.
For example, I have a Kafka source (three partitions) with parallelism three,
followed by a lookup-join function. I want to process the data distributed by
id so that it can be split evenly across the three parallel subtasks (The
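A minimal sketch of the redistribution being asked for, in plain Python rather than Flink (the parallelism of 3 and the `hash(id) % parallelism` rule are assumptions here, mirroring what a keyBy/hash shuffle would do):

```python
# Sketch: distribute records across 3 parallel subtasks by id,
# the way a hash shuffle (keyBy) would. No Flink needed to illustrate it.
from collections import defaultdict

PARALLELISM = 3  # assumed, matching the three Kafka partitions


def subtask_for(record_id: int) -> int:
    """Pick the target subtask for a record by hashing its id."""
    return hash(record_id) % PARALLELISM


def distribute(records):
    buckets = defaultdict(list)
    for rec in records:
        buckets[subtask_for(rec["id"])].append(rec)
    return buckets


records = [{"id": i} for i in range(9)]
buckets = distribute(records)
print({k: len(v) for k, v in sorted(buckets.items())})  # prints {0: 3, 1: 3, 2: 3}
```

With a reasonably uniform id space, each subtask receives roughly the same share of records, which is the even split the question asks about.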
Not exactly, the Flink distribution just doesn't include the Scala API by
default.
To use Scala, you can pack both flink-table-api-scala and
flink-table-api-scala-bridge into your job jar and keep the Flink distribution
structure, which means flink-table-planner_2.12-1.15.0.jar is under /opt
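For reference, a minimal sketch of the Maven dependencies this would mean adding to the job pom (the 1.15.0 version and the _2.12 Scala suffix are assumptions taken from the planner jar name above):

```xml
<!-- Sketch: Scala Table API jars bundled into the job jar -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-api-scala_2.12</artifactId>
  <version>1.15.0</version>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-api-scala-bridge_2.12</artifactId>
  <version>1.15.0</version>
</dependency>
```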
Hi,
It throws this exception because the memory segment provided by Flink is
depleted.
How do you implement your writing logic? It seems that you are using Apache
Beam. Which version of the Flink engine are you on? Can you provide more logs
so we can see why the memory segment depletion happened?
I
One more question: Are you changing the parallelism when resuming from
savepoint?
On Sun, May 8, 2022 at 4:05 PM Thomas Weise wrote:
>
> Hi Kevin,
>
> Unfortunately I did not find a way to test the savepoint scenario with
> the MiniCluster. Savepoints are not supported in the embedded mode.
>
Hi Kevin,
Unfortunately I did not find a way to test the savepoint scenario with
the MiniCluster. Savepoints are not supported in the embedded mode.
There is a way to hack around that, but then the state of the
enumerator won't be handled.
As for your original issue, is it reproducible
Hi Flink User Group,
I am a data engineer from Thoughtworks and recently hit an issue when running
some Flink code. The same code ran fine with a smaller file, but on increasing
the file size it gave this error:
Caused by: java.io.EOFException
at
Many thanks to Yu Li for the reminder. The fifth document link in the original email (the "10 GiB TPC-DS dataset test results") has been updated to Google Docs [1].
[1]
https://docs.google.com/spreadsheets/d/1nietTOrFg93p7k7L82lGPlUjwCpw97bWfP21xI_MLcE/edit?usp=sharing
Best,
Zhilong Hong
On Fri, May 6, 2022 at 4:51 PM Yu Li wrote:
> Thanks everyone for the sharing and analysis, and looking forward to Flink's continued optimization in these areas!
>
> Let's make Flink
Thanks Yuxia, that works. Does that mean that with one Flink distribution I can
either use Java or use Scala? If so, that seems not user-friendly.
On Sun, May 8, 2022 at 10:40 AM yuxia wrote:
> Hi, you can move the flink-table-planner-loader to /opt. See more in
>
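One way to carry out the jar swap yuxia suggests can be sketched as follows (the 1.15.0 file names and the $FLINK_HOME layout are assumptions; a throwaway directory stands in for a real Flink distribution so the commands are safe to try):

```shell
# Sketch: swap the planner-loader out of lib and the full planner in,
# so the Scala API becomes usable. Simulated layout for illustration only.
FLINK_HOME=$(mktemp -d)
mkdir -p "$FLINK_HOME/lib" "$FLINK_HOME/opt"
touch "$FLINK_HOME/lib/flink-table-planner-loader-1.15.0.jar"  # shipped in lib by default
touch "$FLINK_HOME/opt/flink-table-planner_2.12-1.15.0.jar"    # shipped in opt by default

# The actual swap:
mv "$FLINK_HOME/lib/flink-table-planner-loader-1.15.0.jar" "$FLINK_HOME/opt/"
mv "$FLINK_HOME/opt/flink-table-planner_2.12-1.15.0.jar" "$FLINK_HOME/lib/"

ls "$FLINK_HOME/lib"   # lib now holds flink-table-planner_2.12-1.15.0.jar
```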
Hi!
Eventually, I came up with a solution following the instructions here:
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/python/dependency_management/
import os
from importlib.util import find_spec

module_home = os.path.dirname(find_spec("python_lib_internally_containing_java_lib").origin)
jar_file = 'file:///' +
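A self-contained sketch of the same pattern (the package name "json" and the jar name "connector.jar" below are placeholders; the real jar path was elided in the message above). Per the linked dependency-management docs, the resulting URL would then go into a setting such as `pipeline.jars`:

```python
# Sketch: build a file:// URL for a jar bundled inside an installed
# Python package. "json" and "connector.jar" are placeholder names.
import os
from importlib.util import find_spec


def jar_url(package: str, jar_name: str) -> str:
    """Locate a package on disk and point at a jar shipped alongside it."""
    module_home = os.path.dirname(find_spec(package).origin)
    return 'file:///' + os.path.join(module_home, jar_name).lstrip('/')


url = jar_url("json", "connector.jar")  # placeholder package and jar
print(url)
```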
Later I put phoenix-pherf-4.16.1.jar and mysql-connector-java-5.1.27-bin.jar under HBase's lib directory, and got the following error:
Caused by: java.lang.ExceptionInInitializerError
at
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:3230)
at