[jira] [Created] (FLINK-24315) Flink native on k8s watcher thread goes down when k8s api server does not work or network times out

2021-09-16 Thread ouyangwulin (Jira)
ouyangwulin created FLINK-24315:
---

 Summary: Flink native on k8s watcher thread goes down when k8s api 
server does not work or network times out
 Key: FLINK-24315
 URL: https://issues.apache.org/jira/browse/FLINK-24315
 Project: Flink
  Issue Type: Bug
  Components: Deployment / Kubernetes
Affects Versions: 1.13.2, 1.14.0, 1.14.1
Reporter: ouyangwulin
 Fix For: 1.14.0, 1.14.1, 1.13.2


The JobManager uses the fabric8 client to watch the api-server. When the k8s 
api-server is unavailable or there are network problems, the watcher thread is 
closed. You can run "jstack 1 | grep -i websocket" inside the pod to check 
whether the watcher thread still exists.
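
A resilient watch would re-establish itself when the connection drops rather than letting the thread die silently. The sketch below is a minimal, self-contained illustration of that retry idea; the {{WatchOpener}} interface is hypothetical and merely stands in for a fabric8 {{KubernetesClient}} watch call, it is not Flink's or fabric8's actual API.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ResilientWatcherSketch {

    /** Hypothetical stand-in for a client call that opens a watch; may fail. */
    interface WatchOpener {
        void openWatch() throws Exception;
    }

    /**
     * Tries to (re-)establish the watch up to maxRetries times instead of
     * giving up on the first failure. Returns the number of attempts used.
     */
    public static int watchWithRetry(WatchOpener opener, int maxRetries) {
        AtomicInteger attempts = new AtomicInteger();
        for (int i = 0; i < maxRetries; i++) {
            attempts.incrementAndGet();
            try {
                opener.openWatch();
                return attempts.get(); // watch established
            } catch (Exception e) {
                // API server down or network timeout: fall through and retry
                // (a real implementation would back off between attempts)
            }
        }
        return attempts.get();
    }

    public static void main(String[] args) {
        // Simulate two transient failures before the API server recovers.
        AtomicInteger failures = new AtomicInteger(2);
        int attempts = watchWithRetry(() -> {
            if (failures.getAndDecrement() > 0) {
                throw new Exception("connection reset"); // simulated outage
            }
        }, 5);
        System.out.println(attempts); // prints 3: two failures, then success
    }
}
```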



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24314) Always use memory state backend with RocksDB

2021-09-16 Thread shiwuliang (Jira)
shiwuliang created FLINK-24314:
--

 Summary: Always use memory state backend with RocksDB
 Key: FLINK-24314
 URL: https://issues.apache.org/jira/browse/FLINK-24314
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.13.0
Reporter: shiwuliang
 Attachments: image-2021-09-17-10-59-50-094.png

!image-2021-09-17-10-59-50-094.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24313) JdbcCatalogFactoryTest fails due to "Gave up waiting for server to start after 10000ms"

2021-09-16 Thread Xintong Song (Jira)
Xintong Song created FLINK-24313:


 Summary: JdbcCatalogFactoryTest fails due to "Gave up waiting for 
server to start after 10000ms"
 Key: FLINK-24313
 URL: https://issues.apache.org/jira/browse/FLINK-24313
 Project: Flink
  Issue Type: Bug
  Components: Connectors / JDBC
Affects Versions: 1.14.0
Reporter: Xintong Song
 Fix For: 1.14.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24212&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d&l=14018

{code}
Sep 16 14:07:22 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 14.065 s <<< FAILURE! - in 
org.apache.flink.connector.jdbc.catalog.factory.JdbcCatalogFactoryTest
Sep 16 14:07:22 [ERROR] 
org.apache.flink.connector.jdbc.catalog.factory.JdbcCatalogFactoryTest  Time 
elapsed: 14.065 s  <<< ERROR!
Sep 16 14:07:22 java.io.IOException: Gave up waiting for server to start after 
10000ms
Sep 16 14:07:22 at 
com.opentable.db.postgres.embedded.EmbeddedPostgres.waitForServerStartup(EmbeddedPostgres.java:308)
Sep 16 14:07:22 at 
com.opentable.db.postgres.embedded.EmbeddedPostgres.startPostmaster(EmbeddedPostgres.java:257)
Sep 16 14:07:22 at 
com.opentable.db.postgres.embedded.EmbeddedPostgres.(EmbeddedPostgres.java:146)
Sep 16 14:07:22 at 
com.opentable.db.postgres.embedded.EmbeddedPostgres$Builder.start(EmbeddedPostgres.java:554)
Sep 16 14:07:22 at 
com.opentable.db.postgres.junit.SingleInstancePostgresRule.pg(SingleInstancePostgresRule.java:46)
Sep 16 14:07:22 at 
com.opentable.db.postgres.junit.SingleInstancePostgresRule.before(SingleInstancePostgresRule.java:39)
Sep 16 14:07:22 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:50)
Sep 16 14:07:22 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Sep 16 14:07:22 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Sep 16 14:07:22 at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
Sep 16 14:07:22 at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
Sep 16 14:07:22 at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
Sep 16 14:07:22 at 
org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:43)
Sep 16 14:07:22 at 
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
Sep 16 14:07:22 at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
Sep 16 14:07:22 at 
java.util.Iterator.forEachRemaining(Iterator.java:116)
Sep 16 14:07:22 at 
java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
Sep 16 14:07:22 at 
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
Sep 16 14:07:22 at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
Sep 16 14:07:22 at 
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
Sep 16 14:07:22 at 
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
Sep 16 14:07:22 at 
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
Sep 16 14:07:22 at 
java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
Sep 16 14:07:22 at 
org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:82)
Sep 16 14:07:22 at 
org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:73)
Sep 16 14:07:22 at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:220)
Sep 16 14:07:22 at 
org.junit.platform.launcher.core.DefaultLauncher.lambda$execute$6(DefaultLauncher.java:188)
Sep 16 14:07:22 at 
org.junit.platform.launcher.core.DefaultLauncher.withInterceptedStreams(DefaultLauncher.java:202)
Sep 16 14:07:22 at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:181)
Sep 16 14:07:22 at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:128)
Sep 16 14:07:22 at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:150)
Sep 16 14:07:22 at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:120)
Sep 16 14:07:22 at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
Sep 16 14:07:22 at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
Sep 16 14:07:22 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
Sep 16 14:07:22 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Sep 16 14:07:22 Caused by: java.sql.SQ

[jira] [Created] (FLINK-24312) 'New File Sink s3 end-to-end test' fails due to timeout

2021-09-16 Thread Xintong Song (Jira)
Xintong Song created FLINK-24312:


 Summary: 'New File Sink s3 end-to-end test' fails due to timeout
 Key: FLINK-24312
 URL: https://issues.apache.org/jira/browse/FLINK-24312
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream, Connectors / FileSystem
Affects Versions: 1.12.5
Reporter: Xintong Song
 Fix For: 1.12.6


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24194&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=12970



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [flink-statefun-playground] branch dev updated (dffc594 -> cccd603)

2021-09-16 Thread Fernando DalSotto
Unsubscribe

On Thu, Sep 16, 2021, 9:59 PM  wrote:

> This is an automated email from the ASF dual-hosted git repository.
>
> sjwiesman pushed a change to branch dev
> in repository
> https://gitbox.apache.org/repos/asf/flink-statefun-playground.git.
>
>
> from dffc594  [FLINK-24301] use Async transport on all statefun
> playground examples
>  add 330247b  [FLINK-24284][js] Add a JavaScript greeter
>  add 9edb3eb  [FLINK-24284][js] Add a JavaScript showcase
>  add cccd603  [FLINK-24284][js] Use the latest merged version of the
> JS SDK
>
> No new revisions were added by this update.
>
> Summary of changes:
>  javascript/.gitignore  |   2 +
>  {python => javascript}/greeter/.dockerignore   |   0
>  javascript/greeter/.gitignore  |   3 +
>  {go => javascript}/greeter/Dockerfile  |  20 +-
>  {go => javascript}/greeter/README.md   |   6 +-
>  .../greeter/apache-flink-statefun-3.2-SNAPSHOT.tgz | Bin 0 -> 49694 bytes
>  {python => javascript}/greeter/arch.png| Bin
>  {python => javascript}/greeter/docker-compose.yml  |   0
>  javascript/greeter/functions.js| 120 +
>  .../greeter}/input-example.json|   0
>  {python => javascript}/greeter/module.yaml |   6 +-
>  javascript/greeter/package.json|  12 +
>  javascript/showcase/.gitignore |   3 +
>  javascript/showcase/README.md  |  33 +++
>  .../apache-flink-statefun-3.2-SNAPSHOT.tgz | Bin 0 -> 49694 bytes
>  {python => javascript}/showcase/docker-compose.yml |   0
>  {python => javascript}/showcase/input-example.json |   0
>  {python => javascript}/showcase/module.yaml|   4 +-
>  javascript/showcase/package.json   |  15 ++
>  javascript/showcase/showcase/showcase.js   | 283
> +
>  .../showcase/showcase/showcase_custom_types.js |  14 +-
>  .../showcase}/showcase/showcase_custom_types.proto |   0
>  .../showcase/showcase/showcase_custom_types_pb.js  | 215 
>  23 files changed, 704 insertions(+), 32 deletions(-)
>  create mode 100644 javascript/.gitignore
>  copy {python => javascript}/greeter/.dockerignore (100%)
>  create mode 100644 javascript/greeter/.gitignore
>  copy {go => javascript}/greeter/Dockerfile (79%)
>  copy {go => javascript}/greeter/README.md (89%)
>  create mode 100644
> javascript/greeter/apache-flink-statefun-3.2-SNAPSHOT.tgz
>  copy {python => javascript}/greeter/arch.png (100%)
>  copy {python => javascript}/greeter/docker-compose.yml (100%)
>  create mode 100644 javascript/greeter/functions.js
>  copy {python/showcase => javascript/greeter}/input-example.json (100%)
>  copy {python => javascript}/greeter/module.yaml (93%)
>  create mode 100644 javascript/greeter/package.json
>  create mode 100644 javascript/showcase/.gitignore
>  create mode 100644 javascript/showcase/README.md
>  create mode 100644
> javascript/showcase/apache-flink-statefun-3.2-SNAPSHOT.tgz
>  copy {python => javascript}/showcase/docker-compose.yml (100%)
>  copy {python => javascript}/showcase/input-example.json (100%)
>  copy {python => javascript}/showcase/module.yaml (93%)
>  create mode 100644 javascript/showcase/package.json
>  create mode 100644 javascript/showcase/showcase/showcase.js
>  copy python/showcase/showcase_custom_types.proto =>
> javascript/showcase/showcase/showcase_custom_types.js (71%)
>  copy {python => javascript/showcase}/showcase/showcase_custom_types.proto
> (100%)
>  create mode 100644
> javascript/showcase/showcase/showcase_custom_types_pb.js
>


Re: [DISCUSS] Drop Scala Shell

2021-09-16 Thread Konstantin Knauf
Thanks for bringing this up.

+1 for dropping the Scala Shell or moving it out of Apache Flink if there
is interest in the community to develop it further.

On Thu, Sep 16, 2021 at 5:26 PM Seth Wiesman  wrote:

> +1
>
> The scala shell requires time and expertise that the contributors cannot
> provide. It will be great to let the broader community have the opportunity
> to foster and mature the scala shell outside the constraints imposed by the
> ASF.
>
> On Thu, Sep 16, 2021 at 9:02 AM Jeff Zhang  wrote:
>
> > Hi Martijn,
> >
> > It is a pity that the flink community doesn't have resources to maintain
> > scala shell which is useful for interactive user experience. We
> > (Zeppelin)didn't use flink scala shell module directly, instead we
> > customized the flink scala shell. So we are still interested in scala
> shell
> > module, for us if no one want to maintain scala shell, it would be better
> > to have it as flink 3rd party library, maybe in flink-packages.org
> >
> >
> > Martijn Visser  于2021年9月16日周四 下午7:21写道:
> >
> > > Hi everyone,
> > >
> > > As was discussed quite a while ago, the community has planned to drop
> > > support for Scala 2.11 [1]. I'm proposing to also drop the Scala Shell,
> > > which was briefly discussed in the email thread for dropping Scala 2.11
> > > [2].
> > >
> > > The Scala Shell doesn't work on Scala 2.12 [3] and there hasn't been
> much
> > > traction to get this working and it looks like the Flink community /
> > > committers don't want to maintain it anymore. Alternatively, if there's
> > > still someone interested in the Scala Shell, it could also be moved to
> a
> > > 3rd party repository. Removing the Scala Shell would/could help in
> > getting
> > > Scala 2.11 dropped quicker.
> > >
> > > Let me know your thoughts.
> > >
> > > Best regards,
> > >
> > > [1] https://issues.apache.org/jira/browse/FLINK-20845
> > > [2]
> > >
> > >
> >
> https://lists.apache.org/thread.html/ra817c5b54e3de48d80e5b4e0ae67941d387ee25cf9779f5ae37d0486%40%3Cdev.flink.apache.org%3E
> > > [3] https://issues.apache.org/jira/browse/FLINK-10911
> > >
> > >
> > > Martijn Visser | Product Manager
> > >
> > > mart...@ververica.com
> > >
> > > 
> > >
> > >
> > > Follow us @VervericaData
> > >
> > > --
> > >
> > > Join Flink Forward  - The Apache Flink
> > > Conference
> > >
> > > Stream Processing | Event Driven | Real Time
> > >
> >
> >
> > --
> > Best Regards
> >
> > Jeff Zhang
> >
>


-- 

Konstantin Knauf

https://twitter.com/snntrable

https://github.com/knaufk


Re: [DISCUSS] Drop Scala Shell

2021-09-16 Thread Seth Wiesman
+1

The scala shell requires time and expertise that the contributors cannot
provide. It will be great to let the broader community have the opportunity
to foster and mature the scala shell outside the constraints imposed by the
ASF.

On Thu, Sep 16, 2021 at 9:02 AM Jeff Zhang  wrote:

> Hi Martijn,
>
> It is a pity that the flink community doesn't have resources to maintain
> scala shell which is useful for interactive user experience. We
> (Zeppelin)didn't use flink scala shell module directly, instead we
> customized the flink scala shell. So we are still interested in scala shell
> module, for us if no one want to maintain scala shell, it would be better
> to have it as flink 3rd party library, maybe in flink-packages.org
>
>
> Martijn Visser  于2021年9月16日周四 下午7:21写道:
>
> > Hi everyone,
> >
> > As was discussed quite a while ago, the community has planned to drop
> > support for Scala 2.11 [1]. I'm proposing to also drop the Scala Shell,
> > which was briefly discussed in the email thread for dropping Scala 2.11
> > [2].
> >
> > The Scala Shell doesn't work on Scala 2.12 [3] and there hasn't been much
> > traction to get this working and it looks like the Flink community /
> > committers don't want to maintain it anymore. Alternatively, if there's
> > still someone interested in the Scala Shell, it could also be moved to a
> > 3rd party repository. Removing the Scala Shell would/could help in
> getting
> > Scala 2.11 dropped quicker.
> >
> > Let me know your thoughts.
> >
> > Best regards,
> >
> > [1] https://issues.apache.org/jira/browse/FLINK-20845
> > [2]
> >
> >
> https://lists.apache.org/thread.html/ra817c5b54e3de48d80e5b4e0ae67941d387ee25cf9779f5ae37d0486%40%3Cdev.flink.apache.org%3E
> > [3] https://issues.apache.org/jira/browse/FLINK-10911
> >
> >
> > Martijn Visser | Product Manager
> >
> > mart...@ververica.com
> >
> > 
> >
> >
> > Follow us @VervericaData
> >
> > --
> >
> > Join Flink Forward  - The Apache Flink
> > Conference
> >
> > Stream Processing | Event Driven | Real Time
> >
>
>
> --
> Best Regards
>
> Jeff Zhang
>


Re: [DISCUSS] Drop Scala Shell

2021-09-16 Thread Jeff Zhang
Hi Martijn,

It is a pity that the flink community doesn't have resources to maintain
scala shell which is useful for interactive user experience. We
(Zeppelin)didn't use flink scala shell module directly, instead we
customized the flink scala shell. So we are still interested in scala shell
module, for us if no one want to maintain scala shell, it would be better
to have it as flink 3rd party library, maybe in flink-packages.org


Martijn Visser  于2021年9月16日周四 下午7:21写道:

> Hi everyone,
>
> As was discussed quite a while ago, the community has planned to drop
> support for Scala 2.11 [1]. I'm proposing to also drop the Scala Shell,
> which was briefly discussed in the email thread for dropping Scala 2.11
> [2].
>
> The Scala Shell doesn't work on Scala 2.12 [3] and there hasn't been much
> traction to get this working and it looks like the Flink community /
> committers don't want to maintain it anymore. Alternatively, if there's
> still someone interested in the Scala Shell, it could also be moved to a
> 3rd party repository. Removing the Scala Shell would/could help in getting
> Scala 2.11 dropped quicker.
>
> Let me know your thoughts.
>
> Best regards,
>
> [1] https://issues.apache.org/jira/browse/FLINK-20845
> [2]
>
> https://lists.apache.org/thread.html/ra817c5b54e3de48d80e5b4e0ae67941d387ee25cf9779f5ae37d0486%40%3Cdev.flink.apache.org%3E
> [3] https://issues.apache.org/jira/browse/FLINK-10911
>
>
> Martijn Visser | Product Manager
>
> mart...@ververica.com
>
> 
>
>
> Follow us @VervericaData
>
> --
>
> Join Flink Forward  - The Apache Flink
> Conference
>
> Stream Processing | Event Driven | Real Time
>


-- 
Best Regards

Jeff Zhang


[jira] [Created] (FLINK-24311) Support expression reduction for JSON construction functions

2021-09-16 Thread Jira
Ingo Bürk created FLINK-24311:
-

 Summary: Support expression reduction for JSON construction 
functions
 Key: FLINK-24311
 URL: https://issues.apache.org/jira/browse/FLINK-24311
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Ingo Bürk


For JSON construction functions such as JSON_OBJECT, we currently disable 
expression reduction. This is because these functions have special semantics 
where they behave differently depending on their call context. For example, 
JSON_OBJECT returns a JSON string. This means that a call like

{code:java}
JSON_OBJECT('A' VALUE JSON_OBJECT('B' VALUE 'C')){code}
would result in

{code:java}
{"A": "{\"B\": \"C\"}"}{code}
However, this is not user-friendly, so such nested calls are treated 
differently and instead result in the likely more intended outcome

{code:java}
{"A": {"B": "C"}}{code}
To make this work, the function looks at its operands and checks whether each 
operand is itself a RexCall to a JSON construction function. If it is, the 
operand is inserted as a raw node instead.

This creates a problem during expression reduction: the RexCall is replaced 
with a RexLiteral carrying the JSON string value. The function looking at the 
operands can then no longer tell that the operand originated from such a 
RexCall, and yields the unintended result once again. To prevent this, we 
currently disable expression reduction for these functions.

We should aim to once again allow such expressions to be reduced while still 
preserving the intended behavior. See [this 
comment|https://github.com/apache/flink/pull/17186#issuecomment-920783089] for 
a rough idea of how this could be achieved.
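
The escaped-string vs. raw-node distinction above can be illustrated with a minimal sketch; the helper names below are hypothetical and do not correspond to Flink's internal planner code:

```java
public class JsonNestingSketch {

    // Treat the operand as an opaque string value: it gets escaped and quoted.
    // This is what happens once expression reduction has turned the inner
    // JSON_OBJECT call into a plain string literal.
    public static String asEscapedString(String key, String jsonValue) {
        return "{\"" + key + "\": \"" + jsonValue.replace("\"", "\\\"") + "\"}";
    }

    // Treat the operand as raw JSON: the intended nesting behavior, achieved
    // by recognizing the operand as another JSON construction call and
    // splicing it in as a raw node.
    public static String asRawNode(String key, String jsonValue) {
        return "{\"" + key + "\": " + jsonValue + "}";
    }

    public static void main(String[] args) {
        String inner = "{\"B\": \"C\"}"; // result of JSON_OBJECT('B' VALUE 'C')
        System.out.println(asEscapedString("A", inner)); // {"A": "{\"B\": \"C\"}"}
        System.out.println(asRawNode("A", inner));       // {"A": {"B": "C"}}
    }
}
```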



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSS] Drop Scala Shell

2021-09-16 Thread Martijn Visser
Hi everyone,

As was discussed quite a while ago, the community has planned to drop
support for Scala 2.11 [1]. I'm proposing to also drop the Scala Shell,
which was briefly discussed in the email thread for dropping Scala 2.11 [2].

The Scala Shell doesn't work on Scala 2.12 [3] and there hasn't been much
traction to get this working and it looks like the Flink community /
committers don't want to maintain it anymore. Alternatively, if there's
still someone interested in the Scala Shell, it could also be moved to a
3rd party repository. Removing the Scala Shell would/could help in getting
Scala 2.11 dropped quicker.

Let me know your thoughts.

Best regards,

[1] https://issues.apache.org/jira/browse/FLINK-20845
[2]
https://lists.apache.org/thread.html/ra817c5b54e3de48d80e5b4e0ae67941d387ee25cf9779f5ae37d0486%40%3Cdev.flink.apache.org%3E
[3] https://issues.apache.org/jira/browse/FLINK-10911


Martijn Visser | Product Manager

mart...@ververica.com




Follow us @VervericaData

--

Join Flink Forward  - The Apache Flink
Conference

Stream Processing | Event Driven | Real Time


[jira] [Created] (FLINK-24310) A bug in the BufferingSink example in the doc

2021-09-16 Thread Jun Qin (Jira)
Jun Qin created FLINK-24310:
---

 Summary: A bug in the BufferingSink example in the doc
 Key: FLINK-24310
 URL: https://issues.apache.org/jira/browse/FLINK-24310
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: Jun Qin


The following line in the BufferingSink on [this 
page|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/datastream/fault-tolerance/state/#operator-state]
 has a bug:
{code:java}
if (bufferedElements.size() == threshold) {
{code}
It should be {{>=}} instead of {{==}}, because when restoring from a 
checkpoint during downscaling, the task may get more elements than the 
threshold.
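
A minimal, self-contained sketch of the corrected check (this is not the actual Flink BufferingSink example class, just an illustration of why {{>=}} matters after downscaling):

```java
import java.util.ArrayList;
import java.util.List;

public class BufferingSinkSketch {
    private final int threshold;
    private final List<String> bufferedElements = new ArrayList<>();
    private final List<String> flushed = new ArrayList<>();

    public BufferingSinkSketch(int threshold) {
        this.threshold = threshold;
    }

    // Simulates restoring state from a checkpoint: during downscaling a
    // subtask may receive MORE elements than the threshold.
    public void restore(List<String> fromCheckpoint) {
        bufferedElements.addAll(fromCheckpoint);
    }

    public void invoke(String value) {
        bufferedElements.add(value);
        // The fix: '>=' instead of '=='. With '==', a buffer restored above
        // the threshold would skip past it and never flush again.
        if (bufferedElements.size() >= threshold) {
            flushed.addAll(bufferedElements);
            bufferedElements.clear();
        }
    }

    public int flushedCount() {
        return flushed.size();
    }

    public static void main(String[] args) {
        BufferingSinkSketch sink = new BufferingSinkSketch(2);
        sink.restore(List.of("a", "b", "c")); // 3 elements > threshold of 2
        sink.invoke("d");                     // with '==' this would never flush
        System.out.println(sink.flushedCount()); // prints 4
    }
}
```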



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-24309) NoClassDefFoundError occurs when using spring-data-redis for event notification in a flink cluster environment

2021-09-16 Thread WangYu (Jira)
WangYu created FLINK-24309:
--

 Summary: NoClassDefFoundError occurs when using spring-data-redis 
for event notification in a flink cluster environment
 Key: FLINK-24309
 URL: https://issues.apache.org/jira/browse/FLINK-24309
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.13.0
 Environment: Flink 1.13.0 zookeeper cluster

1 JobManager

3 TaskManagers

Every TaskManager has 8 slots

And parallelism of every subtask is 6
Reporter: WangYu


I used spring-data-redis in my Flink application to implement notifications for 
subscribing to refresh events. The code is as follows:

{code:java}
@Bean
public RedisMessageListenerContainer redisContainer(
        RedisConnectionFactory redisConnectionFactory,
        MessageListenerAdapter commonListenerAdapter) {
    RedisMessageListenerContainer container = new RedisMessageListenerContainer();
    container.setConnectionFactory(redisConnectionFactory);
    container.addMessageListener(
            commonListenerAdapter, new ChannelTopic(ApplicationConstants.REDIS_TOPIC_NAME));
    return container;
}

@Bean
MessageListenerAdapter commonListenerAdapter(RedisReceiver redisReceiver) {
    MessageListenerAdapter messageListenerAdapter =
            new MessageListenerAdapter(redisReceiver, "onMessage");
    messageListenerAdapter.setSerializer(jacksonSerializer());
    return messageListenerAdapter;
}
{code}

It works well in the local environment: the application can receive 
notifications of refresh events through Redis.

But when I deployed the application to the Flink cluster, I got an error 
message:

2021-09-16 16:22:05,197 WARN  io.netty.channel.AbstractChannelHandlerContext    
           [] - An exception 'java.lang.NoClassDefFoundError: 
io/netty/util/internal/logging/InternalLogLevel' [enable DEBUG level for full 
stacktrace] was thrown by a user handler's exceptionCaught() method while 
handling the following exception:
java.lang.NoClassDefFoundError: 
org/springframework/data/redis/connection/DefaultMessage at 
org.springframework.data.redis.connection.lettuce.LettuceMessageListener.message(LettuceMessageListener.java:43)
 
~[blob_p-e4b9f89a7dac271b36297079ae0306cc0273f659-a280a62d8bc228a1b65e682dcf2f8472:?]
 at 
org.springframework.data.redis.connection.lettuce.LettuceMessageListener.message(LettuceMessageListener.java:29)
 
~[blob_p-e4b9f89a7dac271b36297079ae0306cc0273f659-a280a62d8bc228a1b65e682dcf2f8472:?]
 at 
io.lettuce.core.pubsub.PubSubEndpoint.notifyListeners(PubSubEndpoint.java:191) 
~[blob_p-e4b9f89a7dac271b36297079ae0306cc0273f659-a280a62d8bc228a1b65e682dcf2f8472:?]
 at 
io.lettuce.core.pubsub.PubSubEndpoint.notifyMessage(PubSubEndpoint.java:180) 
~[blob_p-e4b9f89a7dac271b36297079ae0306cc0273f659-a280a62d8bc228a1b65e682dcf2f8472:?]
 at 
io.lettuce.core.pubsub.PubSubCommandHandler.doNotifyMessage(PubSubCommandHandler.java:217)
 
~[blob_p-e4b9f89a7dac271b36297079ae0306cc0273f659-a280a62d8bc228a1b65e682dcf2f8472:?]
 at 
io.lettuce.core.pubsub.PubSubCommandHandler.decode(PubSubCommandHandler.java:123)
 
~[blob_p-e4b9f89a7dac271b36297079ae0306cc0273f659-a280a62d8bc228a1b65e682dcf2f8472:?]
 at 
io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:591) 
~[blob_p-e4b9f89a7dac271b36297079ae0306cc0273f659-a280a62d8bc228a1b65e682dcf2f8472:?]
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
 
[blob_p-e4b9f89a7dac271b36297079ae0306cc0273f659-a280a62d8bc228a1b65e682dcf2f8472:?]
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
 
[blob_p-e4b9f89a7dac271b36297079ae0306cc0273f659-a280a62d8bc228a1b65e682dcf2f8472:?]
 at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
 
[blob_p-e4b9f89a7dac271b36297079ae0306cc0273f659-a280a62d8bc228a1b65e682dcf2f8472:?]
 at 
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
 
[blob_p-e4b9f89a7dac271b36297079ae0306cc0273f659-a280a62d8bc228a1b65e682dcf2f8472:?]
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
 
[blob_p-e4b9f89a7dac271b36297079ae0306cc0273f659-a280a62d8bc228a1b65e682dcf2f8472:?]
 at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
 
[blob_p-e4b9f89a7dac271b36297079ae0306cc0273f659-a280a62d8bc228a1b65e682dcf2f8472:?]
 at 
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)

[jira] [Created] (FLINK-24308) Translate KafkaSink docs to Chinese

2021-09-16 Thread Fabian Paul (Jira)
Fabian Paul created FLINK-24308:
---

 Summary: Translate KafkaSink docs to Chinese
 Key: FLINK-24308
 URL: https://issues.apache.org/jira/browse/FLINK-24308
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.14.0
Reporter: Fabian Paul


With https://issues.apache.org/jira/browse/FLINK-23664 only the English 
documentation was updated. We also have to update the Chinese docs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)