Flink 1.18.0
For example, suppose I write the following SQL:
select * from KafkaTable where id is not null;
IS NOT NULL should be a built-in system function, so I located the relevant code:
public static final BuiltInFunctionDefinition IS_NOT_NULL =
        BuiltInFunctionDefinition.newBuilder()
                .name("isNotNull")
                .kind(SCALAR)
                // from BuiltInFunctionDefinitions: the result type is BOOLEAN NOT NULL
                .outputTypeStrategy(explicit(DataTypes.BOOLEAN().notNull()))
                .build();
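To confirm that the planner resolves the predicate through this definition, EXPLAIN can be used (a sketch, assuming the KafkaTable from the query above exists):

```sql
-- Show the plans for the query; the optimized plan should contain
-- a Calc node whose predicate reads IS NOT NULL(id)
EXPLAIN SELECT * FROM KafkaTable WHERE id IS NOT NULL;
```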
Hi,
I need to correct my previous comment in the last reply - it is actually
fine that the POM files for flink-connector-kafka 3.0.1-1.18 point to an
older Flink version.
For example, in the ongoing flink-connector-opensearch release 1.1.0-1.18,
the POM files also still point to Flink 1.17.1 [1].
If
Hi all,
There seems to be an issue with the connector release scripts used in the
release process: they don't correctly overwrite the flink.version property
in the POMs.
I'll kick off a new release for 3.0.2 shortly to address this. Sorry for
overlooking this during the previous release.
Best,
Thanks Feng,
I think my challenge (and why I expected I'd need to use Java) is that
parquet files with different schemas will be landing in the S3 bucket, so I
don't want to hard-code the schema in a SQL table definition.
I'm not sure if this is even possible? Maybe I would have to
Hi Oxlade,
I think Flink SQL can conveniently fulfill your requirements.
For the S3 Parquet files, you can create a temporary table using the
filesystem connector [1].
For Iceberg tables, Flink SQL can easily integrate with the Iceberg
catalog [2].
Therefore, you can use Flink SQL to export S3
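As a rough sketch of what this could look like in Flink SQL (the table names, columns, S3 path, and catalog name below are hypothetical placeholders):

```sql
-- Hypothetical source: Parquet files in S3 read via the filesystem connector
CREATE TEMPORARY TABLE s3_source (
    id BIGINT,
    name STRING
) WITH (
    'connector' = 'filesystem',
    'path' = 's3://my-bucket/landing/',
    'format' = 'parquet'
);

-- Hypothetical sink: an Iceberg table registered in an Iceberg catalog
INSERT INTO iceberg_catalog.db.target_table
SELECT id, name FROM s3_source;
```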
Hi all,
I'm attempting to build a POC in Flink: a pipeline that streams Parquet files
to a data warehouse in Iceberg format.
Ideally, I'd like to watch a directory in S3 (MinIO locally) and stream those
files to Iceberg, doing the appropriate schema mapping/translation.
I guess first; does this
Hi Danny,
Thanks for looking into it and for the hint.
Your assumption is correct - it compiles when the base connector is
excluded.
In sbt:
"org.apache.flink" % "flink-connector-kafka" % "3.0.1-1.18"
exclude("org.apache.flink", "flink-connector-base"),
Günter
On 23.11.23 14:24,
It looks similar, but this error should be one reported after the job has already failed. Check whether there are any other exception logs.
Best,
Feng
On Sat, Nov 4, 2023 at 3:26 PM Ray wrote:
> Experts: I am currently facing the following problem. 1. Scenario: submitting Flink jobs on YARN. 2. Version: Flink 1.15.0. 3. Logs: checking the logs on YARN, I found the following error: 2023-11-04
> 15:04:42,313 ERROR org.apache.flink.util.FatalExitExceptionHandler
> [] - FATAL: Thread
Hey all,
I believe this is because of FLINK-30400. Looking at the POM, I cannot see
any other dependencies that would cause a problem. To work around this, can
you try to remove that dependency from your build?
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version>3.0.1-1.18</version>
</dependency>
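If the goal is to keep flink-connector-kafka but drop the transitively pulled flink-connector-base (the module FLINK-30400 concerns), a Maven exclusion would look roughly like this sketch:

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version>3.0.1-1.18</version>
    <exclusions>
        <!-- Sketch: exclude the base connector, mirroring the sbt
             exclude that reportedly makes the build compile -->
        <exclusion>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-base</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```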
Hi, Günter,
It seems like a bug to me that the 3.0.1-1.18 Flink Kafka connector uses a
Flink 1.17 dependency, which leads to your issue.
I guess we need to propose a new release of the Kafka connector to fix this issue.
CC: Gordon, Danny, Martijn
Best,
Leonard
> On Nov 14, 2023, at 6:53 PM, Alexey Novakov via user
Thanks Hang.
I got it now. I will check on this and get back to you.
Thanks,
Tauseef.
On Thu, 23 Nov 2023 at 17:29, Hang Ruan wrote:
> Hi, Tauseef.
>
> This error is not that you can not access the Kafka cluster. Actually,
> this error means that the JM cannot access its TM.
> Have you ever
Hi, Tauseef.
This error does not mean that you cannot access the Kafka cluster. It
actually means that the JM cannot access its TM.
Have you checked whether the JM is able to access the TM?
Best,
Hang
Tauseef Janvekar wrote on Thu, Nov 23, 2023 at 16:04:
> Dear Team,
>
> We are facing the below
Hi, patricia.
Can you attach the full stack trace of the exception? It seems the thread
reading the source is stuck.
--
Best!
Xuyang
At 2023-11-23 16:18:21, "patricia lee" wrote:
Hi,
Flink 1.18.0
Kafka Connector 3.0.1-1.18
Kafka v 3.2.4
JDK 17
I get an error in class
org.apache.flink.streaming.runtime.tasks.SourceStreamTask, in
LegacySourceFunctionThread.run():
"java.util.concurrent.CompletableFuture@12d0b74 [Not completed, 1
dependents]"
I am using the FlinkKafkaConsumer.