Question: function names in Flink SQL statements do not match the built-in function names

2023-11-23 Thread jinzhuguang
flink 1.18.0


For example, I write the following SQL:
 select * from KafkaTable where id is not null;

IS NOT NULL should be a system built-in function, so I looked up the relevant code:

public static final BuiltInFunctionDefinition IS_NOT_NULL =
        BuiltInFunctionDefinition.newBuilder()
                .name("isNotNull")
                .kind(SCALAR)
                .inputTypeStrategy(wildcardWithCount(ConstantArgumentCount.of(1)))
                .outputTypeStrategy(explicit(DataTypes.BOOLEAN().notNull()))
                .build();

Its name is "isNotNull", which does not match "is not null". Actual testing also confirmed my suspicion:

DEBUG org.apache.flink.table.module.ModuleManager  [] - Cannot find FunctionDefinition 'is not null' from any loaded modules.


So I am confused: where exactly does the SQL find this "is not null" function?

Here is the call stack:
@org.apache.flink.table.planner.catalog.FunctionCatalogOperatorTable.lookupOperatorOverloads()
at org.apache.calcite.sql.util.ChainedSqlOperatorTable.lookupOperatorOverloads(ChainedSqlOperatorTable.java:69)
at org.apache.calcite.sql.SqlUtil.lookupSubjectRoutinesByName(SqlUtil.java:609)
at org.apache.calcite.sql.SqlUtil.lookupSubjectRoutines(SqlUtil.java:535)
at org.apache.calcite.sql.SqlUtil.lookupRoutine(SqlUtil.java:486)
at org.apache.calcite.sql.SqlOperator.deriveType(SqlOperator.java:595)
at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:6302)
at org.apache.calcite.sql.validate.SqlValidatorImpl$DeriveTypeVisitor.visit(SqlValidatorImpl.java:6287)
at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:161)
at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1869)
at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1860)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateWhereOrOn(SqlValidatorImpl.java:4341)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateWhereClause(SqlValidatorImpl.java:4333)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3606)
at org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:64)
at org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:89)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:1050)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:1025)
at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:248)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:1000)
at org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:749)
at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:196)
at org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:117)
at org.apache.flink.table.planner.operations.SqlNodeToOperationConversion.convert(SqlNodeToOperationConversion.java:261)
at org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:106)
at org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:187)
at org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
at org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
at org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
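
My current guess (just a sketch, I haven't verified it): Calcite itself registers an operator literally named "IS NOT NULL", so the lookup above is answered by the Calcite operator tables chained in ChainedSqlOperatorTable rather than by the ModuleManager. The snippet below only assumes the Calcite dependency that the Flink planner already brings in:

```java
// Illustration only: list the operator that Calcite registers under the name
// "IS NOT NULL". If it prints something, the name mismatch with Flink's
// BuiltInFunctionDefinition ("isNotNull") would be expected, since the two
// registries are independent of each other.
import org.apache.calcite.sql.SqlOperator;
import org.apache.calcite.sql.fun.SqlStdOperatorTable;

public class IsNotNullLookupSketch {
    public static void main(String[] args) {
        for (SqlOperator op : SqlStdOperatorTable.instance().getOperatorList()) {
            if ("IS NOT NULL".equals(op.getName())) {
                System.out.println(op.getName() + " -> " + op.getClass().getName());
            }
        }
    }
}
```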


Re: dependency error with latest Kafka connector

2023-11-23 Thread Tzu-Li (Gordon) Tai
Hi,

I need to correct my previous comment in the last reply - it should be totally
fine that the POM files for flink-connector-kafka 3.0.1-1.18 point to an
older Flink version.
For example, in the ongoing flink-connector-opensearch release 1.1.0-1.18,
the POM files also still point to Flink 1.17.1 [1].

If the user intends to compile their job for Flink 1.18.0, then they
overwrite the versions for flink-streaming-java and flink-clients
accordingly in the user POM.
The *-1.18 suffix on the externalized connector artifacts simply indicates
that they are guaranteed to be compilable and compatible with Flink 1.18.x
dependencies.

As a sanity check, I've also re-done the tests that were validated during
the 3.0.1 release process to see whether any issues had slipped through, but it
turns out to be working fine:

   - built a fat uber jar from quickstart with Flink 1.18.0 for
   flink-streaming-java and flink-clients, and flink-connector-kafka version
   3.0.1-1.18
   - then submitted to local Flink cluster 1.18.0. Things worked as
   expected and the job ran fine.

Now, looking at mvn dependency:tree of the uber jar (see [3] below for the
output of dependency:tree), the only Flink dependency being pulled into the
user uber jar is flink-connector-base:1.17.0.

FLINK-30400 [2], as Danny pointed out, does intend to address this so that
flink-connector-base is not being bundled by user uber jars and should be
provided by Flink distributions, but AFAIK there were no breaking changes
for the APIs used by classes in flink-connector-base in 1.18.0, so things
should still remain compatible (as proven by my local testing).

Which leaves me wondering what the actual issue was that @guenterh.lists
bumped into in the first place. Am I missing something obvious?
Would like to clarify this before I kick off a new release.

Thanks,
Gordon

[1]
https://repository.apache.org/content/repositories/orgapacheflink-1666/org/apache/flink[%E2%80%A6].1.0-1.18/flink-connector-opensearch-parent-1.1.0-1.18.pom
[2] https://issues.apache.org/jira/browse/FLINK-30400
[3] mvn dependency:tree output for user job jar:

```
[INFO] com.tzulitai:testing-kafka:jar:1.0-SNAPSHOT
[INFO] +- org.apache.flink:flink-streaming-java:jar:1.18.0:provided
[INFO] | +- org.apache.flink:flink-core:jar:1.18.0:provided
[INFO] | | +- org.apache.flink:flink-annotations:jar:1.18.0:provided
[INFO] | | +- org.apache.flink:flink-metrics-core:jar:1.18.0:provided
[INFO] | | +- org.apache.flink:flink-shaded-asm-9:jar:9.5-17.0:provided
[INFO] | | +- org.apache.flink:flink-shaded-jackson:jar:2.14.2-17.0:provided
[INFO] | | +- org.apache.commons:commons-lang3:jar:3.12.0:provided
[INFO] | | +- org.apache.commons:commons-text:jar:1.10.0:provided
[INFO] | | +- com.esotericsoftware.kryo:kryo:jar:2.24.0:provided
[INFO] | | | +- com.esotericsoftware.minlog:minlog:jar:1.2:provided
[INFO] | | | \- org.objenesis:objenesis:jar:2.1:provided
[INFO] | | +- commons-collections:commons-collections:jar:3.2.2:provided
[INFO] | | \- org.apache.commons:commons-compress:jar:1.21:provided
[INFO] | +- org.apache.flink:flink-file-sink-common:jar:1.18.0:provided
[INFO] | +- org.apache.flink:flink-runtime:jar:1.18.0:provided
[INFO] | | +- org.apache.flink:flink-rpc-core:jar:1.18.0:provided
[INFO] | | +- org.apache.flink:flink-rpc-akka-loader:jar:1.18.0:provided
[INFO] | | +- org.apache.flink:flink-queryable-state-client-java:jar:1.18.0:provided
[INFO] | | +- org.apache.flink:flink-hadoop-fs:jar:1.18.0:provided
[INFO] | | +- commons-io:commons-io:jar:2.11.0:provided
[INFO] | | +- org.apache.flink:flink-shaded-netty:jar:4.1.91.Final-17.0:provided
[INFO] | | +- org.apache.flink:flink-shaded-zookeeper-3:jar:3.7.1-17.0:provided
[INFO] | | +- org.javassist:javassist:jar:3.24.0-GA:provided
[INFO] | | +- org.xerial.snappy:snappy-java:jar:1.1.10.4:runtime
[INFO] | | \- org.lz4:lz4-java:jar:1.8.0:runtime
[INFO] | +- org.apache.flink:flink-java:jar:1.18.0:provided
[INFO] | | \- com.twitter:chill-java:jar:0.7.6:provided
[INFO] | +- org.apache.flink:flink-shaded-guava:jar:31.1-jre-17.0:provided
[INFO] | +- org.apache.commons:commons-math3:jar:3.6.1:provided
[INFO] | +- org.slf4j:slf4j-api:jar:1.7.36:runtime
[INFO] | \- com.google.code.findbugs:jsr305:jar:1.3.9:provided
[INFO] +- org.apache.flink:flink-clients:jar:1.18.0:provided
[INFO] | +- org.apache.flink:flink-optimizer:jar:1.18.0:provided
[INFO] | \- commons-cli:commons-cli:jar:1.5.0:provided
[INFO] +- org.apache.flink:flink-connector-kafka:jar:3.0.1-1.18:compile
[INFO] | +- org.apache.flink:flink-connector-base:jar:1.17.0:compile
[INFO] | +- org.apache.kafka:kafka-clients:jar:3.2.3:compile
[INFO] | | \- com.github.luben:zstd-jni:jar:1.5.2-1:runtime
[INFO] | +- com.fasterxml.jackson.core:jackson-core:jar:2.15.2:compile
[INFO] | +- com.fasterxml.jackson.core:jackson-databind:jar:2.15.2:compile
[INFO] | | \- com.fasterxml.jackson.core:jackson-annotations:jar:2.15.2:compile
[INFO] | +- com.fasterxml.jackson.datatype:jackson-datatype-jsr310:jar:2.15.2:compile
[INFO] | \-
```

Re: dependency error with latest Kafka connector

2023-11-23 Thread Tzu-Li (Gordon) Tai
Hi all,

There seems to be an issue with the connector release scripts used in the
release process: they don't correctly overwrite the flink.version property
in the POMs.

I'll kick off a new release for 3.0.2 shortly to address this. Sorry for
overlooking this during the previous release.

Best,
Gordon

On Thu, Nov 23, 2023 at 7:11 AM guenterh.lists wrote:

> Hi Danny
>
> thanks for taking a look into it and for the hint.
>
> Your assumption is correct - It compiles when the base connector is
> excluded.
>
> In sbt:
> "org.apache.flink" % "flink-connector-kafka" % "3.0.1-1.18"
> exclude("org.apache.flink", "flink-connector-base"),
>
> Günter
>
>
> On 23.11.23 14:24, Danny Cranmer wrote:
> > Hey all,
> >
> > I believe this is because of FLINK-30400. Looking at the pom I cannot see
> > any other dependencies that would cause a problem. To workaround this,
> can
> > you try to remove that dependency from your build?
> >
> > <dependency>
> >     <groupId>org.apache.flink</groupId>
> >     <artifactId>flink-connector-kafka</artifactId>
> >     <version>3.0.1-1.18</version>
> >     <exclusions>
> >         <exclusion>
> >             <groupId>org.apache.flink</groupId>
> >             <artifactId>flink-connector-base</artifactId>
> >         </exclusion>
> >     </exclusions>
> > </dependency>
> >
> >
> > Alternatively you can add it in:
> >
> > <dependency>
> >     <groupId>org.apache.flink</groupId>
> >     <artifactId>flink-connector-base</artifactId>
> >     <version>1.18.0</version>
> > </dependency>
> >
> > Sorry I am not sure how to do this in Scala SBT.
> >
> > Agree we should get this fixed and push a 3.0.2 Kafka connector.
> >
> > Thanks,
> > Danny
> >
> > [1] https://issues.apache.org/jira/browse/FLINK-30400
> >
> > On Thu, Nov 23, 2023 at 12:39 PM Leonard Xu  wrote:
> >
> >> Hi, Gurnterh
> >>
> >> It seems a bug for me that  3.0.1-1.18 flink Kafka connector use  flink
> >>   1.17 dependency which lead to your issue.
> >>
> >> I guess we need propose a new release for Kafka connector for fix this
> >> issue.
> >>
> >> CC: Gordan, Danny, Martijn
> >>
> >> Best,
> >> Leonard
> >>
> >> On Nov 14, 2023, at 6:53 PM, Alexey Novakov via user wrote:
> >>
> >> Hi Günterh,
> >>
> >> It looks like a problem with the Kafka connector release.
> >>
> https://mvnrepository.com/artifact/org.apache.flink/flink-connector-kafka/3.0.1-1.18
> >> Compile dependencies are still pointing to Flink 1.17.
> >>
> >> Release person is already contacted about this or will be contacted
> soon.
> >>
> >> Best regards,
> >> Alexey
> >>
> >> On Mon, Nov 13, 2023 at 10:42 PM guenterh.lists <
> guenterh.li...@bluewin.ch>
> >> wrote:
> >>
> >>> Hello
> >>>
> >>> I'm getting a dependency error when using the latest Kafka connector in
> >>> a Scala project.
> >>>
> >>> Using the 1.17.1 Kafka connector compilation is ok.
> >>>
> >>> With
> >>>
> >>> "org.apache.flink" % "flink-connector-kafka" % "3.0.1-1.18"
> >>>
> >>> I get
> >>> [error] (update) sbt.librarymanagement.ResolveException: Error
> >>> downloading org.apache.flink:flink-connector-base:
> >>> [error]   Not found
> >>> [error]   Not found
> >>> [error]   not found:
> >>>
> >>>
> /home/swissbib/.ivy2/local/org.apache.flink/flink-connector-base/ivys/ivy.xml
> >>> [error]   not found:
> >>>
> >>>
> https://repo1.maven.org/maven2/org/apache/flink/flink-connector-base//flink-connector-base-.pom
> >>>
> >>> Seems Maven packaging is not correct.
> >>>
> >>> My sbt build file:
> >>>
> >>> ThisBuild / scalaVersion := "3.3.0"
> >>> val flinkVersion = "1.18.0"
> >>> val postgresVersion = "42.2.2"
> >>>
> >>> lazy val root = (project in file(".")).settings(
> >>> name := "flink-scala-proj",
> >>> libraryDependencies ++= Seq(
> >>>   "org.flinkextended" %% "flink-scala-api" % "1.17.1_1.1.0",
> >>>   "org.apache.flink" % "flink-clients" % flinkVersion % Provided,
> >>>   "org.apache.flink" % "flink-connector-files" % flinkVersion %
> >>> Provided,
> >>>
> >>> "org.apache.flink" % "flink-connector-kafka" % "1.17.1",
> >>> //"org.apache.flink" % "flink-connector-kafka" % "3.0.1-1.18",
> >>>
> >>> //"org.apache.flink" % "flink-connector-jdbc" % "3.1.1-1.17",
> >>> //"org.postgresql" % "postgresql" % postgresVersion,
> >>> "org.apache.flink" % "flink-connector-files" % flinkVersion %
> Provided,
> >>> //"org.apache.flink" % "flink-connector-base" % flinkVersion %
> Provided
> >>> )
> >>> )
> >>>
> >>>
> >>>
> >>> Thanks!
> >>>
> >>> --
> >>> Günter Hipler
> >>> https://openbiblio.social/@vog61
> >>> https://twitter.com/vog61
> >>>
> >>>
> --
> Günter Hipler
> https://openbiblio.social/@vog61
> https://twitter.com/vog61
>
>


Re: [EXTERNAL] Re: flink s3[parquet] -> s3[iceberg]

2023-11-23 Thread Oxlade, Dan
Thanks Feng,

I think my challenge (and why I expected I’d need to use Java) is that there 
will be parquet files with different schemas landing in the s3 bucket - so I 
don’t want to hard-code the schema in a sql table definition.

I’m not sure if this is even possible? Maybe I would have to write a job that 
accepts the schema, directory and iceberg target table as params and start 
instances of the job through the job api.
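
Roughly what I have in mind for the parameter handling, as a sketch only (all parameter names below are placeholders, not an existing API):

```java
// Sketch of a parameterized job entry point; the parameter names are placeholders.
import org.apache.flink.api.java.utils.ParameterTool;

public class ParquetToIcebergJob {
    public static void main(String[] args) {
        // e.g. --schema "id BIGINT, name STRING" --source-dir s3://bucket/in --target-table db.events
        ParameterTool params = ParameterTool.fromArgs(args);
        String schema = params.getRequired("schema");
        String sourceDir = params.getRequired("source-dir");
        String targetTable = params.getRequired("target-table");

        // ...build the source/sink table definitions from these values,
        // then submit one job instance per schema via the job API.
        System.out.printf("schema=%s, dir=%s, table=%s%n", schema, sourceDir, targetTable);
    }
}
```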

Unless reading the parquet into a temporary table doesn't need the schema
definition? I couldn't really work things out from the links.

Dan

From: Feng Jin 
Sent: Thursday, November 23, 2023 6:49:11 PM
To: Oxlade, Dan 
Cc: user@flink.apache.org 
Subject: [EXTERNAL] Re: flink s3[parquet] -> s3[iceberg]

Hi Oxlade

I think using Flink SQL can conveniently fulfill your requirements.

For S3 Parquet files, you can create a temporary table using a filesystem 
connector[1] .
For Iceberg tables, FlinkSQL can easily integrate with the Iceberg catalog[2].

Therefore, you can use Flink SQL to export S3 files to Iceberg.

If you only need field mapping or transformation, I believe using Flink SQL + 
UDF (User-Defined Functions) would be sufficient to meet your needs.


[1].
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/filesystem/#directory-watching
[2].
https://iceberg.apache.org/docs/latest/flink-connector/#table-managed-in-hadoop-catalog


Best,
Feng


On Thu, Nov 23, 2023 at 11:23 PM Oxlade, Dan <dan.oxl...@troweprice.com> wrote:

Hi all,



I’m attempting to create a POC in flink to create a pipeline to stream parquet 
to a data warehouse in iceberg format.



Ideally – I’d like to watch a directory in s3 (minio locally) and stream those 
to iceberg, doing the appropriate schema mapping/translation.



I guess first; does this sound like a crazy idea?

Assuming not is anyone able to share examples that might get me going. I’ve 
found lots of iceberg and flink sql examples but I think I’ll need something in 
java to do the schema mapping. Also some examples reading parquet for s3 seem a 
little hard to come by.



I’m aware I’ll need a catalog, I can use nessie for the prototype. I’m also 
trying to use minio to get this all working locally but this might just be 
adding complexity at the moment.



TIA

Dan


Re: flink s3[parquet] -> s3[iceberg]

2023-11-23 Thread Feng Jin
Hi Oxlade

I think using Flink SQL can conveniently fulfill your requirements.

For S3 Parquet files, you can create a temporary table using the filesystem
connector [1].
For Iceberg tables, Flink SQL can easily integrate with the Iceberg
catalog [2].

Therefore, you can use Flink SQL to export S3 files to Iceberg.

If you only need field mapping or transformation, I believe using Flink SQL
+ UDFs (User-Defined Functions) would be sufficient to meet your needs.


[1].
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/filesystem/#directory-watching
[2].
https://iceberg.apache.org/docs/latest/flink-connector/#table-managed-in-hadoop-catalog
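
As a rough, untested sketch of how these pieces could fit together (all table names, columns and S3 paths below are placeholders, and it assumes the Parquet format, the S3 filesystem plugin and the iceberg-flink-runtime jar are on the classpath):

```java
// Untested sketch: read Parquet files from S3 with the filesystem connector and
// write them into an Iceberg table. All names and paths are placeholders.
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class S3ParquetToIcebergSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Filesystem source with directory watching (option name per [1])
        tEnv.executeSql(
                "CREATE TEMPORARY TABLE parquet_src ("
                        + "  id BIGINT,"
                        + "  name STRING"
                        + ") WITH ("
                        + "  'connector' = 'filesystem',"
                        + "  'path' = 's3://my-bucket/incoming/',"
                        + "  'format' = 'parquet',"
                        + "  'source.monitor-interval' = '10s'"
                        + ")");

        // Iceberg catalog backed by a Hadoop warehouse (per [2])
        tEnv.executeSql(
                "CREATE CATALOG iceberg WITH ("
                        + "  'type' = 'iceberg',"
                        + "  'catalog-type' = 'hadoop',"
                        + "  'warehouse' = 's3://my-bucket/warehouse/'"
                        + ")");

        // Continuously copy the source into the (pre-existing) Iceberg table
        tEnv.executeSql(
                "INSERT INTO iceberg.db.target_table SELECT id, name FROM parquet_src");
    }
}
```

The executeSql calls simply wrap the DDL/DML shown in [1] and [2]; the same statements could also be run directly in the SQL client.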


Best,
Feng


On Thu, Nov 23, 2023 at 11:23 PM Oxlade, Dan 
wrote:

> Hi all,
>
>
>
> I’m attempting to create a POC in flink to create a pipeline to stream
> parquet to a data warehouse in iceberg format.
>
>
>
> Ideally – I’d like to watch a directory in s3 (minio locally) and stream
> those to iceberg, doing the appropriate schema mapping/translation.
>
>
>
> I guess first; does this sound like a crazy idea?
>
> Assuming not is anyone able to share examples that might get me going.
> I’ve found lots of iceberg and flink sql examples but I think I’ll need
> something in java to do the schema mapping. Also some examples reading
> parquet for s3 seem a little hard to come by.
>
>
>
> I’m aware I’ll need a catalog, I can use nessie for the prototype. I’m
> also trying to use minio to get this all working locally but this might
> just be adding complexity at the moment.
>
>
>
> TIA
>
> Dan
>
>


flink s3[parquet] -> s3[iceberg]

2023-11-23 Thread Oxlade, Dan
Hi all,

I'm attempting to create a POC in flink to create a pipeline to stream parquet 
to a data warehouse in iceberg format.

Ideally - I'd like to watch a directory in s3 (minio locally) and stream those 
to iceberg, doing the appropriate schema mapping/translation.

I guess first; does this sound like a crazy idea?
Assuming not is anyone able to share examples that might get me going. I've 
found lots of iceberg and flink sql examples but I think I'll need something in 
java to do the schema mapping. Also some examples reading parquet for s3 seem a 
little hard to come by.

I'm aware I'll need a catalog, I can use nessie for the prototype. I'm also 
trying to use minio to get this all working locally but this might just be 
adding complexity at the moment.

TIA
Dan


Re: dependency error with latest Kafka connector

2023-11-23 Thread guenterh.lists

Hi Danny

thanks for taking a look into it and for the hint.

Your assumption is correct - it compiles when the base connector is
excluded.


In sbt:
"org.apache.flink" % "flink-connector-kafka" % "3.0.1-1.18" 
exclude("org.apache.flink", "flink-connector-base"),


Günter


On 23.11.23 14:24, Danny Cranmer wrote:

Hey all,

I believe this is because of FLINK-30400. Looking at the pom I cannot see
any other dependencies that would cause a problem. To workaround this, can
you try to remove that dependency from your build?


<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version>3.0.1-1.18</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-base</artifactId>
        </exclusion>
    </exclusions>
</dependency>



Alternatively you can add it in:


<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-base</artifactId>
    <version>1.18.0</version>
</dependency>


Sorry I am not sure how to do this in Scala SBT.

Agree we should get this fixed and push a 3.0.2 Kafka connector.

Thanks,
Danny

[1] https://issues.apache.org/jira/browse/FLINK-30400

On Thu, Nov 23, 2023 at 12:39 PM Leonard Xu  wrote:


Hi, Gurnterh

It seems a bug for me that  3.0.1-1.18 flink Kafka connector use  flink
  1.17 dependency which lead to your issue.

I guess we need propose a new release for Kafka connector for fix this
issue.

CC: Gordan, Danny, Martijn

Best,
Leonard

On Nov 14, 2023, at 6:53 PM, Alexey Novakov via user wrote:

Hi Günterh,

It looks like a problem with the Kafka connector release.
https://mvnrepository.com/artifact/org.apache.flink/flink-connector-kafka/3.0.1-1.18
Compile dependencies are still pointing to Flink 1.17.

Release person is already contacted about this or will be contacted soon.

Best regards,
Alexey

On Mon, Nov 13, 2023 at 10:42 PM guenterh.lists 
wrote:


Hello

I'm getting a dependency error when using the latest Kafka connector in
a Scala project.

Using the 1.17.1 Kafka connector compilation is ok.

With

"org.apache.flink" % "flink-connector-kafka" % "3.0.1-1.18"

I get
[error] (update) sbt.librarymanagement.ResolveException: Error
downloading org.apache.flink:flink-connector-base:
[error]   Not found
[error]   Not found
[error]   not found:

/home/swissbib/.ivy2/local/org.apache.flink/flink-connector-base/ivys/ivy.xml
[error]   not found:

https://repo1.maven.org/maven2/org/apache/flink/flink-connector-base//flink-connector-base-.pom

Seems Maven packaging is not correct.

My sbt build file:

ThisBuild / scalaVersion := "3.3.0"
val flinkVersion = "1.18.0"
val postgresVersion = "42.2.2"

lazy val root = (project in file(".")).settings(
name := "flink-scala-proj",
libraryDependencies ++= Seq(
  "org.flinkextended" %% "flink-scala-api" % "1.17.1_1.1.0",
  "org.apache.flink" % "flink-clients" % flinkVersion % Provided,
  "org.apache.flink" % "flink-connector-files" % flinkVersion %
Provided,

"org.apache.flink" % "flink-connector-kafka" % "1.17.1",
//"org.apache.flink" % "flink-connector-kafka" % "3.0.1-1.18",

//"org.apache.flink" % "flink-connector-jdbc" % "3.1.1-1.17",
//"org.postgresql" % "postgresql" % postgresVersion,
"org.apache.flink" % "flink-connector-files" % flinkVersion % Provided,
//"org.apache.flink" % "flink-connector-base" % flinkVersion % Provided
)
)



Thanks!

--
Günter Hipler
https://openbiblio.social/@vog61
https://twitter.com/vog61



--
Günter Hipler
https://openbiblio.social/@vog61
https://twitter.com/vog61



Re: Flink-1.15 version

2023-11-23 Thread Feng Jin
It looks similar, but this error should be one reported after the job has already failed; check whether there are any other exception logs.


Best,
Feng

On Sat, Nov 4, 2023 at 3:26 PM Ray  wrote:

> Experts: I am currently running into the following problem.
> 1. Scenario: submitting a Flink job on YARN.
> 2. Version: Flink 1.15.0
> 3. Logs: the logs on YARN show what looks like a version-related problem:
> 2023-11-04 15:04:42,313 ERROR org.apache.flink.util.FatalExitExceptionHandler
> [] - FATAL: Thread 'flink-akka.actor.internal-dispatcher-3' produced an
> uncaught exception. Stopping the process...java.lang.NoClassDefFoundError:
> akka/actor/dungeon/FaultHandling$$anonfun$handleNonFatalOrInterruptedException$1
> at
> akka.actor.dungeon.FaultHandling.handleNonFatalOrInterruptedException(FaultHandling.scala:334)
> ~[flink-rpc-akka_abb25197-9eee-499b-8c1f-f52d0afc4374.jar:1.15.0]
> at
> akka.actor.dungeon.FaultHandling.handleNonFatalOrInterruptedException$(FaultHandling.scala:334)
> ~[flink-rpc-akka_abb25197-9eee-499b-8c1f-f52d0afc4374.jar:1.15.0]
> at
> akka.actor.ActorCell.handleNonFatalOrInterruptedException(ActorCell.scala:411)
> ~[flink-rpc-akka_abb25197-9eee-499b-8c1f-f52d0afc4374.jar:1.15.0]
> at akka.actor.ActorCell.invoke(ActorCell.scala:551)
> ~[flink-rpc-akka_abb25197-9eee-499b-8c1f-f52d0afc4374.jar:1.15.0]
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270)
> ~[flink-rpc-akka_abb25197-9eee-499b-8c1f-f52d0afc4374.jar:1.15.0]
> at akka.dispatch.Mailbox.run(Mailbox.scala:231)
> ~[flink-rpc-akka_abb25197-9eee-499b-8c1f-f52d0afc4374.jar:1.15.0]
> at akka.dispatch.Mailbox.exec(Mailbox.scala:243)
> [flink-rpc-akka_abb25197-9eee-499b-8c1f-f52d0afc4374.jar:1.15.0]
> at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> [?:1.8.0_181]
> at
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
> [?:1.8.0_181]
> at
> java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
> [?:1.8.0_181]
> at
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
> [?:1.8.0_181]
> Caused by: java.lang.ClassNotFoundException:
> akka.actor.dungeon.FaultHandling$$anonfun$handleNonFatalOrInterruptedException$1
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> ~[?:1.8.0_181]
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> ~[?:1.8.0_181]
> at
> org.apache.flink.core.classloading.ComponentClassLoader.loadClassFromComponentOnly(ComponentClassLoader.java:149)
> ~[flink-dist-1.15.0.jar:1.15.0]
> at
> org.apache.flink.core.classloading.ComponentClassLoader.loadClass(ComponentClassLoader.java:112)
> ~[flink-dist-1.15.0.jar:1.15.0]
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ~[?:1.8.0_181]
> ... 11 more
> 2023-11-04 15:04:42,324 ERROR
> org.apache.flink.runtime.util.ClusterUncaughtExceptionHandler [] - WARNING:
> Thread 'flink-shutdown-hook-1' produced an uncaught exception. If you want
> to fail on uncaught exceptions, then configure
> cluster.uncaught-exception-handling accordingly
> java.lang.NoClassDefFoundError:
> scala/collection/convert/Wrappers$MutableSetWrapper
> at
> scala.collection.convert.AsScalaConverters.asScalaSet(AsScalaConverters.scala:126)
> ~[flink-rpc-akka_abb25197-9eee-499b-8c1f-f52d0afc4374.jar:1.15.0]
> at
> scala.collection.convert.AsScalaConverters.asScalaSet$(AsScalaConverters.scala:124)
> ~[flink-rpc-akka_abb25197-9eee-499b-8c1f-f52d0afc4374.jar:1.15.0]
> at
> akka.util.ccompat.package$JavaConverters$.asScalaSet(package.scala:86)
> ~[flink-rpc-akka_abb25197-9eee-499b-8c1f-f52d0afc4374.jar:1.15.0]
> at
> scala.collection.convert.DecorateAsScala.$anonfun$asScalaSetConverter$1(DecorateAsScala.scala:59)
> ~[flink-rpc-akka_abb25197-9eee-499b-8c1f-f52d0afc4374.jar:1.15.0]
> at
> scala.collection.convert.Decorators$AsScala.asScala(Decorators.scala:25)
> ~[flink-rpc-akka_abb25197-9eee-499b-8c1f-f52d0afc4374.jar:1.15.0]
> at
> akka.actor.CoordinatedShutdown$tasks$.totalDuration(CoordinatedShutdown.scala:481)
> ~[flink-rpc-akka_abb25197-9eee-499b-8c1f-f52d0afc4374.jar:1.15.0]
> at
> akka.actor.CoordinatedShutdown.totalTimeout(CoordinatedShutdown.scala:784)
> ~[flink-rpc-akka_abb25197-9eee-499b-8c1f-f52d0afc4374.jar:1.15.0]
> at
> akka.actor.CoordinatedShutdown$.$anonfun$initJvmHook$1(CoordinatedShutdown.scala:271)
> ~[flink-rpc-akka_abb25197-9eee-499b-8c1f-f52d0afc4374.jar:1.15.0]
> at
> akka.actor.CoordinatedShutdown$$anon$3.run(CoordinatedShutdown.scala:814)
> ~[flink-rpc-akka_abb25197-9eee-499b-8c1f-f52d0afc4374.jar:1.15.0]
> Caused by: java.lang.ClassNotFoundException:
> scala.collection.convert.Wrappers$MutableSetWrapper
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> ~[?:1.8.0_181]
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> ~[?:1.8.0_181]
> at
> org.apache.flink.core.classloading.ComponentClassLoader.loadClassFromComponentOnly(ComponentClassLoader.java:149)
> ~[flink-dist-1.15.0.jar:1.15.0]
> at
> 

Re: dependency error with latest Kafka connector

2023-11-23 Thread Danny Cranmer
Hey all,

I believe this is because of FLINK-30400. Looking at the pom I cannot see
any other dependencies that would cause a problem. To work around this, can
you try to remove that dependency from your build?


<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka</artifactId>
    <version>3.0.1-1.18</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-base</artifactId>
        </exclusion>
    </exclusions>
</dependency>


Alternatively you can add it in:


<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-base</artifactId>
    <version>1.18.0</version>
</dependency>


Sorry I am not sure how to do this in Scala SBT.

Agree we should get this fixed and push a 3.0.2 Kafka connector.

Thanks,
Danny

[1] https://issues.apache.org/jira/browse/FLINK-30400

On Thu, Nov 23, 2023 at 12:39 PM Leonard Xu  wrote:

> Hi, Gurnterh
>
> It seems a bug for me that  3.0.1-1.18 flink Kafka connector use  flink
>  1.17 dependency which lead to your issue.
>
> I guess we need propose a new release for Kafka connector for fix this
> issue.
>
> CC: Gordan, Danny, Martijn
>
> Best,
> Leonard
>
> On Nov 14, 2023, at 6:53 PM, Alexey Novakov via user wrote:
>
> Hi Günterh,
>
> It looks like a problem with the Kafka connector release.
> https://mvnrepository.com/artifact/org.apache.flink/flink-connector-kafka/3.0.1-1.18
> Compile dependencies are still pointing to Flink 1.17.
>
> Release person is already contacted about this or will be contacted soon.
>
> Best regards,
> Alexey
>
> On Mon, Nov 13, 2023 at 10:42 PM guenterh.lists 
> wrote:
>
>> Hello
>>
>> I'm getting a dependency error when using the latest Kafka connector in
>> a Scala project.
>>
>> Using the 1.17.1 Kafka connector compilation is ok.
>>
>> With
>>
>> "org.apache.flink" % "flink-connector-kafka" % "3.0.1-1.18"
>>
>> I get
>> [error] (update) sbt.librarymanagement.ResolveException: Error
>> downloading org.apache.flink:flink-connector-base:
>> [error]   Not found
>> [error]   Not found
>> [error]   not found:
>>
>> /home/swissbib/.ivy2/local/org.apache.flink/flink-connector-base/ivys/ivy.xml
>> [error]   not found:
>>
>> https://repo1.maven.org/maven2/org/apache/flink/flink-connector-base//flink-connector-base-.pom
>>
>> Seems Maven packaging is not correct.
>>
>> My sbt build file:
>>
>> ThisBuild / scalaVersion := "3.3.0"
>> val flinkVersion = "1.18.0"
>> val postgresVersion = "42.2.2"
>>
>> lazy val root = (project in file(".")).settings(
>>name := "flink-scala-proj",
>>libraryDependencies ++= Seq(
>>  "org.flinkextended" %% "flink-scala-api" % "1.17.1_1.1.0",
>>  "org.apache.flink" % "flink-clients" % flinkVersion % Provided,
>>  "org.apache.flink" % "flink-connector-files" % flinkVersion %
>> Provided,
>>
>>"org.apache.flink" % "flink-connector-kafka" % "1.17.1",
>>//"org.apache.flink" % "flink-connector-kafka" % "3.0.1-1.18",
>>
>>//"org.apache.flink" % "flink-connector-jdbc" % "3.1.1-1.17",
>>//"org.postgresql" % "postgresql" % postgresVersion,
>>"org.apache.flink" % "flink-connector-files" % flinkVersion % Provided,
>>//"org.apache.flink" % "flink-connector-base" % flinkVersion % Provided
>>)
>> )
>>
>>
>>
>> Thanks!
>>
>> --
>> Günter Hipler
>> https://openbiblio.social/@vog61
>> https://twitter.com/vog61
>>
>>
>


Re: dependency error with latest Kafka connector

2023-11-23 Thread Leonard Xu
Hi, Günterh

It seems like a bug to me that the 3.0.1-1.18 Flink Kafka connector uses Flink 1.17
dependencies, which leads to your issue.

I guess we need to propose a new release of the Kafka connector to fix this issue.

CC: Gordon, Danny, Martijn

Best,
Leonard

> On Nov 14, 2023, at 6:53 PM, Alexey Novakov via user wrote:
> 
> Hi Günterh,
> 
> It looks like a problem with the Kafka connector release. 
> https://mvnrepository.com/artifact/org.apache.flink/flink-connector-kafka/3.0.1-1.18
>  
> 
> Compile dependencies are still pointing to Flink 1.17.
> 
> Release person is already contacted about this or will be contacted soon.
> 
> Best regards,
> Alexey
> 
> On Mon, Nov 13, 2023 at 10:42 PM guenterh.lists wrote:
> Hello
> 
> I'm getting a dependency error when using the latest Kafka connector in 
> a Scala project.
> 
> Using the 1.17.1 Kafka connector compilation is ok.
> 
> With
> 
> "org.apache.flink" % "flink-connector-kafka" % "3.0.1-1.18"
> 
> I get
> [error] (update) sbt.librarymanagement.ResolveException: Error 
> downloading org.apache.flink:flink-connector-base:
> [error]   Not found
> [error]   Not found
> [error]   not found: 
> /home/swissbib/.ivy2/local/org.apache.flink/flink-connector-base/ivys/ivy.xml
> [error]   not found: 
> https://repo1.maven.org/maven2/org/apache/flink/flink-connector-base//flink-connector-base-.pom
>  
> 
> 
> Seems Maven packaging is not correct.
> 
> My sbt build file:
> 
> ThisBuild / scalaVersion := "3.3.0"
> val flinkVersion = "1.18.0"
> val postgresVersion = "42.2.2"
> 
> lazy val root = (project in file(".")).settings(
>name := "flink-scala-proj",
>libraryDependencies ++= Seq(
>  "org.flinkextended" %% "flink-scala-api" % "1.17.1_1.1.0",
>  "org.apache.flink" % "flink-clients" % flinkVersion % Provided,
>  "org.apache.flink" % "flink-connector-files" % flinkVersion % Provided,
> 
>"org.apache.flink" % "flink-connector-kafka" % "1.17.1",
>//"org.apache.flink" % "flink-connector-kafka" % "3.0.1-1.18",
> 
>//"org.apache.flink" % "flink-connector-jdbc" % "3.1.1-1.17",
>//"org.postgresql" % "postgresql" % postgresVersion,
>"org.apache.flink" % "flink-connector-files" % flinkVersion % Provided,
>//"org.apache.flink" % "flink-connector-base" % flinkVersion % Provided
>)
> )
> 
> 
> 
> Thanks!
> 
> -- 
> Günter Hipler
> https://openbiblio.social/@vog61 
> https://twitter.com/vog61 
> 



Re: Confluent Kafka conection error

2023-11-23 Thread Tauseef Janvekar
Thanks Hang.

I got it now. I will check on this and get back to you.

Thanks,
Tauseef.

On Thu, 23 Nov 2023 at 17:29, Hang Ruan  wrote:

> Hi, Tauseef.
>
> This error is not that you can not access the Kafka cluster. Actually,
> this error means that the JM cannot access its TM.
> Have you ever checked whether the JM is able to access the TM?
>
> Best,
> Hang
>
> Tauseef Janvekar wrote on Thu, Nov 23, 2023 at 16:04:
>
>> Dear Team,
>>
>> We are facing the below issue while connecting to Confluent Kafka.
>> Can someone please help here?
>>
>> 2023-11-23 06:09:36,989 INFO  org.apache.flink.runtime.executiongraph.
>> ExecutionGraph   [] - Source: src_source -> Sink: Print to Std. Out (
>> 1/1) (496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0)
>> switched from SCHEDULED to DEPLOYING.
>> 2023-11-23 06:09:36,994 INFO  org.apache.flink.runtime.executiongraph.
>> ExecutionGraph   [] - Deploying Source: src_source -> Sink: Print to
>> Std. Out (1/1) (attempt #0) with attempt id 
>> 496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0
>> and vertex id cbc357ccb763df2852fee8c4fc7d55f2_0 to flink-taskmanager:
>> 6122-23f057 @ flink-taskmanager.flink.svc.cluster.local (dataPort=46589)
>> with allocation id 80fe79389102bd305dd87a00247413eb
>> 2023-11-23 06:09:37,011 INFO
>>  org.apache.kafka.common.security.authenticator.AbstractLogin [] -
>> Successfully logged in.
>> 2023-11-23 06:09:37,109 WARN  org.apache.kafka.clients.admin.
>> AdminClientConfig [] - The configuration 'key.deserializer'
>> was supplied but isn't a known config.
>> 2023-11-23 06:09:37,109 WARN  org.apache.kafka.clients.admin.
>> AdminClientConfig [] - The configuration 'value.deserializer'
>> was supplied but isn't a known config.
>> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
>> AdminClientConfig [] - The configuration 'client.id.prefix'
>> was supplied but isn't a known config.
>> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
>> AdminClientConfig [] - The configuration '
>> partition.discovery.interval.ms' was supplied but isn't a known config.
>> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
>> AdminClientConfig [] - The configuration
>> 'commit.offsets.on.checkpoint' was supplied but isn't a known config.
>> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
>> AdminClientConfig [] - The configuration 'enable.auto.commit'
>> was supplied but isn't a known config.
>> 2023-11-23 06:09:37,111 WARN  org.apache.kafka.clients.admin.
>> AdminClientConfig [] - The configuration 'auto.offset.reset'
>> was supplied but isn't a known config.
>> 2023-11-23 06:09:37,113 INFO  org.apache.kafka.common.utils.AppInfoParser
>>  [] - Kafka version: 3.2.2
>> 2023-11-23 06:09:37,114 INFO  org.apache.kafka.common.utils.AppInfoParser
>>  [] - Kafka commitId: 38c22ad893fb6cf5
>> 2023-11-23 06:09:37,114 INFO  org.apache.kafka.common.utils.AppInfoParser
>>  [] - Kafka startTimeMs: 1700719777111
>> 2023-11-23 06:09:37,117 INFO
>>  org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
>> [] - Starting the KafkaSourceEnumerator for consumer group null without
>> periodic partition discovery.
>> 2023-11-23 06:09:37,199 INFO  org.apache.flink.runtime.executiongraph.
>> ExecutionGraph   [] - Source: src_source -> Sink: Print to Std. Out (
>> 1/1) (496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0)
>> switched from DEPLOYING to INITIALIZING.
>> 2023-11-23 06:09:37,302 INFO
>>  org.apache.flink.runtime.source.coordinator.SourceCoordinator [] -
>> Source Source: src_source registering reader for parallel task 0 (#0) @
>> flink-taskmanager
>> 2023-11-23 06:09:37,313 INFO  org.apache.flink.runtime.executiongraph.
>> ExecutionGraph   [] - Source: src_source -> Sink: Print to Std. Out (
>> 1/1) (496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0)
>> switched from INITIALIZING to RUNNING.
>> 2023-11-23 06:09:38,713 INFO
>>  org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
>> [] - Discovered new partitions: [aiops-3, aiops-2, aiops-1, aiops-0,
>> aiops-5, aiops-4]
>> 2023-11-23 06:09:38,719 INFO
>>  org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
>> [] - Assigning splits to readers {0=[[Partition: aiops-1, StartingOffset:
>> -1, StoppingOffset: -9223372036854775808], [Partition: aiops-2,
>> StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition:
>> aiops-0, StartingOffset: -1, StoppingOffset: -9223372036854775808], [
>> Partition: aiops-4, StartingOffset: -1, StoppingOffset: -
>> 9223372036854775808], [Partition: aiops-3, StartingOffset: -1,
>> StoppingOffset: -9223372036854775808], [Partition: aiops-5,
>> StartingOffset: -1, StoppingOffset: -9223372036854775808]]}
>> 2023-11-23 06:09:57,651 INFO  

Re: Confluent Kafka conection error

2023-11-23 Thread Hang Ruan
Hi, Tauseef.

This error does not mean that you cannot access the Kafka cluster. Actually, this
error means that the JM cannot access its TM.
Have you checked whether the JM is able to access the TM?

Best,
Hang

Tauseef Janvekar wrote on Thu, Nov 23, 2023 at 16:04:

> Dear Team,
>
> We are facing the below issue while connecting to Confluent Kafka.
> Can someone please help here?
>
> 2023-11-23 06:09:36,989 INFO  org.apache.flink.runtime.executiongraph.
> ExecutionGraph   [] - Source: src_source -> Sink: Print to Std. Out (1
> /1) (496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0)
> switched from SCHEDULED to DEPLOYING.
> 2023-11-23 06:09:36,994 INFO  org.apache.flink.runtime.executiongraph.
> ExecutionGraph   [] - Deploying Source: src_source -> Sink: Print to
> Std. Out (1/1) (attempt #0) with attempt id 
> 496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0
> and vertex id cbc357ccb763df2852fee8c4fc7d55f2_0 to flink-taskmanager:6122
> -23f057 @ flink-taskmanager.flink.svc.cluster.local (dataPort=46589) with
> allocation id 80fe79389102bd305dd87a00247413eb
> 2023-11-23 06:09:37,011 INFO
>  org.apache.kafka.common.security.authenticator.AbstractLogin [] -
> Successfully logged in.
> 2023-11-23 06:09:37,109 WARN  org.apache.kafka.clients.admin.
> AdminClientConfig [] - The configuration 'key.deserializer'
> was supplied but isn't a known config.
> 2023-11-23 06:09:37,109 WARN  org.apache.kafka.clients.admin.
> AdminClientConfig [] - The configuration 'value.deserializer'
> was supplied but isn't a known config.
> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
> AdminClientConfig [] - The configuration 'client.id.prefix'
> was supplied but isn't a known config.
> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
> AdminClientConfig [] - The configuration '
> partition.discovery.interval.ms' was supplied but isn't a known config.
> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
> AdminClientConfig [] - The configuration
> 'commit.offsets.on.checkpoint' was supplied but isn't a known config.
> 2023-11-23 06:09:37,110 WARN  org.apache.kafka.clients.admin.
> AdminClientConfig [] - The configuration 'enable.auto.commit'
> was supplied but isn't a known config.
> 2023-11-23 06:09:37,111 WARN  org.apache.kafka.clients.admin.
> AdminClientConfig [] - The configuration 'auto.offset.reset'
> was supplied but isn't a known config.
> 2023-11-23 06:09:37,113 INFO  org.apache.kafka.common.utils.AppInfoParser
>  [] - Kafka version: 3.2.2
> 2023-11-23 06:09:37,114 INFO  org.apache.kafka.common.utils.AppInfoParser
>  [] - Kafka commitId: 38c22ad893fb6cf5
> 2023-11-23 06:09:37,114 INFO  org.apache.kafka.common.utils.AppInfoParser
>  [] - Kafka startTimeMs: 1700719777111
> 2023-11-23 06:09:37,117 INFO
>  org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
> [] - Starting the KafkaSourceEnumerator for consumer group null without
> periodic partition discovery.
> 2023-11-23 06:09:37,199 INFO  org.apache.flink.runtime.executiongraph.
> ExecutionGraph   [] - Source: src_source -> Sink: Print to Std. Out (1
> /1) (496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0)
> switched from DEPLOYING to INITIALIZING.
> 2023-11-23 06:09:37,302 INFO  org.apache.flink.runtime.source.coordinator.
> SourceCoordinator [] - Source Source: src_source registering reader for
> parallel task 0 (#0) @ flink-taskmanager
> 2023-11-23 06:09:37,313 INFO  org.apache.flink.runtime.executiongraph.
> ExecutionGraph   [] - Source: src_source -> Sink: Print to Std. Out (1
> /1) (496f859d5379cd751a3fc473625125f3_cbc357ccb763df2852fee8c4fc7d55f2_0_0)
> switched from INITIALIZING to RUNNING.
> 2023-11-23 06:09:38,713 INFO
>  org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
> [] - Discovered new partitions: [aiops-3, aiops-2, aiops-1, aiops-0,
> aiops-5, aiops-4]
> 2023-11-23 06:09:38,719 INFO
>  org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
> [] - Assigning splits to readers {0=[[Partition: aiops-1, StartingOffset:
> -1, StoppingOffset: -9223372036854775808], [Partition: aiops-2,
> StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition:
> aiops-0, StartingOffset: -1, StoppingOffset: -9223372036854775808], [
> Partition: aiops-4, StartingOffset: -1, StoppingOffset: -
> 9223372036854775808], [Partition: aiops-3, StartingOffset: -1,
> StoppingOffset: -9223372036854775808], [Partition: aiops-5, StartingOffset:
> -1, StoppingOffset: -9223372036854775808]]}
> 2023-11-23 06:09:57,651 INFO  akka.remote.transport.ProtocolStateActor
>   [] - No response from remote for outbound association.
> Associate timed out after [2 ms].
> 2023-11-23 06:09:57,651 WARN  akka.remote.ReliableDeliverySupervisor
>   [] - Association with 

Re:Error FlinkConsumer in Flink 1.18.0

2023-11-23 Thread Xuyang
Hi, patricia.
Can you attach the full stack trace of the exception? It seems the thread
reading the source is stuck.

--

Best!
Xuyang




At 2023-11-23 16:18:21, "patricia lee"  wrote:

Hi,

Flink 1.18.0
Kafka Connector 3.0.1-1.18
Kafka v 3.2.4
JDK 17

I get an error on class org.apache.flink.streaming.runtime.tasks.SourceStreamTask
on LegacySourceFunctionThread.run()

 "java.util.concurrent.CompletableFuture@12d0b74 [Not completed, 1 dependents]

I am using the FlinkKafkaConsumer.
I don't get this error message on Flink 1.17.1 and JDK 11.

Because of the error, the consumed data no longer enters my process
function.

Regards,
Pat



Error FlinkConsumer in Flink 1.18.0

2023-11-23 Thread patricia lee
Hi,

Flink 1.18.0
Kafka Connector 3.0.1-1.18
Kafka v 3.2.4
JDK 17

I get an error on class
org.apache.flink.streaming.runtime.tasks.SourceStreamTask on
LegacySourceFunctionThread.run()

 "java.util.concurrent.CompletableFuture@12d0b74 [Not completed, 1
dependents]


I am using the FlinkKafkaConsumer.
I don't get this error message on Flink 1.17.1 and JDK 11.

Because of the error, the consumed data no longer enters my process
function.


Regards,
Pat