Re: Dynamic Windowing with PyFlink

2023-05-17 Thread Dian Fu
Hi Nawaz,

>> My concern is, as Flink does not support dynamic windows, is this
>> approach going against Flink Architecture.
Per my understanding, the session window can be seen as a kind of dynamic
window. Besides, Flink also supports user-defined windows, with which users
should also be able to implement dynamic windows. So I think what you are
doing is a good solution and does not go against the Flink architecture.

Regards,
Dian
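
For reference, a minimal PyFlink sketch of the approach discussed above (the window size kept in keyed state inside a KeyedProcessFunction and cleared when the bucket is complete). The element layout, key and resizing rule are illustrative placeholders, not Nawaz's actual code:

```python
# Sketch of a "dynamic window" built on KeyedProcessFunction (PyFlink >= 1.15
# assumed). The resizing rule and element layout are placeholders.
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.functions import KeyedProcessFunction, RuntimeContext
from pyflink.datastream.state import ListStateDescriptor, ValueStateDescriptor


class DynamicWindowFunction(KeyedProcessFunction):

    def open(self, runtime_context: RuntimeContext):
        # Elements buffered for the current bucket, plus the current bucket size.
        self.buffer = runtime_context.get_list_state(
            ListStateDescriptor("buffer", Types.LONG()))
        self.window_size = runtime_context.get_state(
            ValueStateDescriptor("window_size", Types.INT()))

    def process_element(self, value, ctx):
        size = self.window_size.value() or 10  # default bucket size
        self.buffer.add(value[1])

        elements = list(self.buffer.get())
        if len(elements) >= size:
            # Bucket complete: emit it, derive the next size (placeholder rule),
            # and clear the keyed state to reset the bucket.
            yield value[0], sum(elements)
            self.window_size.update(max(1, int(value[1]) % 20))
            self.buffer.clear()
        else:
            self.window_size.update(size)


env = StreamExecutionEnvironment.get_execution_environment()
env.from_collection(
        [("user-1", i) for i in range(100)],
        type_info=Types.TUPLE([Types.STRING(), Types.LONG()])) \
   .key_by(lambda e: e[0]) \
   .process(DynamicWindowFunction(),
            output_type=Types.TUPLE([Types.STRING(), Types.LONG()])) \
   .print()
env.execute("dynamic-window-sketch")
```

Because everything here lives in keyed state, it is checkpointed and restored like any other state, which is why this approach does not interfere with Flink's checkpointing mechanism.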

On Tue, May 16, 2023 at 7:08 PM Nawaz Nayeem via user wrote:

> Hey, I’ve been trying to emulate the behavior of a dynamic window, as
> Flink does not support dynamic window sizes. My operator inherits from
> KeyedProcessFunction, and I’m only using keyed state to manipulate the
> window_size. I’m clearing the keyed state when my bucket (window) is
> complete, to reset the bucket size.
>
> My concern is: as Flink does not support dynamic windows, does this approach
> go against the Flink architecture? Will it break the checkpointing
> mechanism in a distributed setup? Note that I’m only using keyed state
> for maintaining and implementing the dynamic window.
>
>
> Any feedback would be appreciated.
> Thank you.
>


Re: Versioned view created with Flink SQL does not work correctly

2023-05-17 Thread Shammon FY
Hi,

The images in your email do not display, so the specific error message cannot be seen.

Best,
Shammon FY


On Thu, May 18, 2023 at 10:15 AM arkey w  wrote:

> Flink version: 1.14.5
> When using versioned tables in our project we planned to use a versioned
> view, but after creating it, it could not be used correctly. Verifying with
> the example from the official docs (Versioned Tables | Apache Flink)
> failed in the same way. The SQL used to create it is as follows:
> Create the fact table:
> [image: image.png]
>
> Create the versioned view:
> [image: image.png]
> [image: image.png]
>
>
> The temporal join result reports an error:
> [image: image.png]
>
> When describing (DESC) the view, it has neither a primary key nor an
> event-time field, and the join fails because of that.
> Am I doing something wrong? How can I use a versioned view correctly?
>
>


Versioned view created with Flink SQL does not work correctly

2023-05-17 Thread arkey w
Flink version: 1.14.5
When using versioned tables in our project we planned to use a versioned view,
but after creating it, it could not be used correctly. Verifying with the example
from the official docs (Versioned Tables | Apache Flink) failed in the same way.
The SQL used to create it is as follows:
Create the fact table:
[image: image.png]

Create the versioned view:
[image: image.png]
[image: image.png]


The temporal join result reports an error:
[image: image.png]

When describing (DESC) the view, it has neither a primary key nor an event-time
field, and the join fails because of that.
Am I doing something wrong? How can I use a versioned view correctly?
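
Since the images do not come through, here is the documented versioned-view pattern for reference: a deduplication query over the changelog, from which the planner infers the primary key and the event-time (version) attribute. A minimal PyFlink sketch with illustrative table and column names, not the poster's actual SQL:

```python
# Sketch of the "versioned view" deduplication pattern. The source needs an
# event-time attribute (watermark); the planner then infers `currency` as the
# primary key and `update_time` as the version of the view.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE currency_rates (
        currency    STRING,
        rate        DOUBLE,
        update_time TIMESTAMP(3),
        WATERMARK FOR update_time AS update_time - INTERVAL '5' SECOND
    ) WITH ('connector' = 'datagen')
""")

t_env.execute_sql("""
    CREATE VIEW versioned_rates AS
    SELECT currency, rate, update_time
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY currency
                                  ORDER BY update_time DESC) AS rownum
        FROM currency_rates
    )
    WHERE rownum = 1
""")

# A temporal join then references the view with FOR SYSTEM_TIME AS OF, e.g.
#   SELECT o.*, r.rate
#   FROM orders AS o
#   JOIN versioned_rates FOR SYSTEM_TIME AS OF o.order_time AS r
#   ON o.currency = r.currency
```

Note that the view only becomes versioned when the ORDER BY column in the deduplication query is an event-time attribute of the underlying table.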


Re: [ANNOUNCE] Apache Flink Kubernetes Operator 1.5.0 released

2023-05-17 Thread Márton Balassi
Thanks, awesome! :-)

On Wed, May 17, 2023 at 2:24 PM Gyula Fóra  wrote:

> The Apache Flink community is very happy to announce the release of Apache
> Flink Kubernetes Operator 1.5.0.
>
> The Flink Kubernetes Operator allows users to manage their Apache Flink
> applications and their lifecycle through native k8s tooling like kubectl.
>
> Release highlights:
>  - Autoscaler improvements
>  - Operator stability, observability improvements
>
> Release blogpost:
>
> https://flink.apache.org/2023/05/17/apache-flink-kubernetes-operator-1.5.0-release-announcement/
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Maven artifacts for Flink Kubernetes Operator can be found at:
> https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator
>
> Official Docker image for Flink Kubernetes Operator applications can be
> found at: https://hub.docker.com/r/apache/flink-kubernetes-operator
>
> The full release notes are available in Jira:
> https://issues.apache.org/jira/projects/FLINK/versions/12352931
>
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
>
> Regards,
> Gyula Fora
>



Re: MSI Auth to Azure Storage Account with Flink Apache Operator not working

2023-05-17 Thread Surendra Singh Lilhore
Hi Derocco,

Good to hear that it is working. Let me create a Jira ticket and update the
document.

-Surendra


On Wed, May 17, 2023 at 9:29 PM DEROCCO, CHRISTOPHER  wrote:

> Surendra,
>
>
>
> Your recommended config change fixed my issue. Azure Managed Service
> Identity works for me now and I can write checkpoints to ADLSGen2 storage.
> My client id is the managed identity that is attached to the azure
> Kubernetes nodepools. For anyone else facing this issue, my configurations
> to get this working in the Kubernetes yaml are:
>
>
>
> flinkConfigurations:
>
>   fs.azure.createRemoteFileSystemDuringInitialization: "true"
>
>   fs.azure.account.oauth.provider.type.<STORAGE ACCOUNT>.dfs.core.windows.net: org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider
>
>   fs.azure.account.oauth2.msi.tenant.<STORAGE ACCOUNT>.dfs.core.windows.net: <TENANT ID>
>
>   fs.azure.account.oauth2.client.id.<STORAGE ACCOUNT>.dfs.core.windows.net: <CLIENT ID>
>
>   fs.azure.account.oauth2.client.endpoint.<STORAGE ACCOUNT>.dfs.core.windows.net: https://login.microsoftonline.com/<TENANT ID>/oauth2/token
>
>
>
> Also this environment variable has to be added to the Kubernetes yaml
> configuration
>
>
>
>   containers:
>     # Do not change the main container name
>     - name: flink-main-container
>       env:
>         - name: ENABLE_BUILT_IN_PLUGINS
>           value: flink-azure-fs-hadoop-1.16.1.jar
>
>
>
>
>
> This azure managed service identity configuration should be added to the
> flink docs. I couldn’t find anywhere that the
> fs.azure.account.oauth.provider.type had to be set as
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider
>
>
>
>
>
> From: Surendra Singh Lilhore
> Sent: Tuesday, May 16, 2023 11:46 PM
> To: Ivan Webber
> Cc: DEROCCO, CHRISTOPHER ; Shammon FY ;
> user@flink.apache.org
> Subject: Re: MSI Auth to Azure Storage Account with Flink Apache
> Operator not working
>
>
>
> Hi DEROCCO,
>
>
>
> Flink uses shaded jars for the Hadoop Azure Storage plugin, so in order to
> correct the ClassNotFoundException, you need to adjust the configuration.
> Please configure the MSITokenProvider as shown below.
>
>
>
> fs.azure.account.oauth.provider.type:
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider
>
>
>
> Thanks
>
> Surendra
>
>
>
>
>
> On Wed, May 17, 2023 at 5:32 AM Ivan Webber via user <
> user@flink.apache.org> wrote:
>
> When you create your cluster you probably need to ensure the following
> settings are set. I briefly looked into MSI but ended up using Azure Key
> Vault with CSI-storage driver for initial prototype (
> https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/aks/csi-secrets-store-driver.md#upgrade-an-existing-aks-cluster-with-azure-key-vault-provider-for-secrets-store-csi-driver-support
> 
> ).
>
>
>
> For me it helped to think about it as Hadoop configuration.
>
>
>
> If you do get MSI working I would be interested in hearing what made it
> work for you, so be sure to update the docs or put it on this thread.
>
>
>
> To create from scratch
>
> Create an AKS cluster with the required settings.
>
> ```bash
>
> # create an AKS cluster with pod-managed identity and Azure CNI
>
> az aks create --resource-group $RESOURCE_GROUP --name $CLUSTER
> --enable-managed-identity --network-plugin azure --enable-pod-identity
>
> ```
>
>
>
> I hope that is somehow helpful.
>
>
>
> Best of luck,
>
>
>
> Ivan
>
>
>
> From: DEROCCO, CHRISTOPHER
> Sent: Monday, May 8, 2023 3:40 PM
> To: Shammon FY
> Cc: user@flink.apache.org
> Subject: [EXTERNAL] RE: MSI Auth to Azure Storage Account with Flink
> Apache Operator not working
>
>
>
>
> Shammon,
>
>
>
> I’m still having trouble setting the package in my cluster environment. I 
> have these lines added to my dockerfile
>
> mkdir ./plugins/azure-fs-hadoop
>
> cp ./opt/flink-azure-fs-hadoop-1.16.0.jar ./plugins/azure-fs-hadoop/
>
>
>
> according to the flink docs here (
> https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/deployment/filesystems/azure/
> 
> )
>
> This should enable the flink-azure-fs-hadoop jar in the environment which
> has the classes to 

RE: MSI Auth to Azure Storage Account with Flink Apache Operator not working

2023-05-17 Thread DEROCCO, CHRISTOPHER
Surendra,

Your recommended config change fixed my issue. Azure Managed Service Identity 
works for me now and I can write checkpoints to ADLSGen2 storage. My client id 
is the managed identity that is attached to the azure Kubernetes nodepools. For 
anyone else facing this issue, my configurations to get this working in the 
Kubernetes yaml are:

flinkConfigurations:
  fs.azure.createRemoteFileSystemDuringInitialization: "true"
  fs.azure.account.oauth.provider.type.<STORAGE ACCOUNT>.dfs.core.windows.net: org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider
  fs.azure.account.oauth2.msi.tenant.<STORAGE ACCOUNT>.dfs.core.windows.net: <TENANT ID>
  fs.azure.account.oauth2.client.id.<STORAGE ACCOUNT>.dfs.core.windows.net: <CLIENT ID>
  fs.azure.account.oauth2.client.endpoint.<STORAGE ACCOUNT>.dfs.core.windows.net: https://login.microsoftonline.com/<TENANT ID>/oauth2/token

Also this environment variable has to be added to the Kubernetes yaml 
configuration

  containers:
    # Do not change the main container name
    - name: flink-main-container
      env:
        - name: ENABLE_BUILT_IN_PLUGINS
          value: flink-azure-fs-hadoop-1.16.1.jar


This azure managed service identity configuration should be added to the flink 
docs. I couldn’t find anywhere that the fs.azure.account.oauth.provider.type 
had to be set as 
org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider


From: Surendra Singh Lilhore 
Sent: Tuesday, May 16, 2023 11:46 PM
To: Ivan Webber 
Cc: DEROCCO, CHRISTOPHER ; Shammon FY ; 
user@flink.apache.org
Subject: Re: MSI Auth to Azure Storage Account with Flink Apache Operator not 
working

Hi DEROCCO,

Flink uses shaded jars for the Hadoop Azure Storage plugin, so in order to 
correct the ClassNotFoundException, you need to adjust the configuration. 
Please configure the MSITokenProvider as shown below.

fs.azure.account.oauth.provider.type: 
org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider


Thanks
Surendra


On Wed, May 17, 2023 at 5:32 AM Ivan Webber via user <user@flink.apache.org> wrote:
When you create your cluster you probably need to ensure the following settings 
are set. I briefly looked into MSI but ended up using Azure Key Vault with 
CSI-storage driver for initial prototype 
(https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/aks/csi-secrets-store-driver.md#upgrade-an-existing-aks-cluster-with-azure-key-vault-provider-for-secrets-store-csi-driver-support).

For me it helped to think about it as Hadoop configuration.

If you do get MSI working I would be interested in hearing what made it work 
for you, so be sure to update the docs or put it on this thread.

 To create from scratch
Create an AKS cluster with the required settings.
```bash
# create an AKS cluster with pod-managed identity and Azure CNI
az aks create --resource-group $RESOURCE_GROUP --name $CLUSTER 
--enable-managed-identity --network-plugin azure --enable-pod-identity
```

I hope that is somehow helpful.

Best of luck,

Ivan

From: DEROCCO, CHRISTOPHER
Sent: Monday, May 8, 2023 3:40 PM
To: Shammon FY
Cc: user@flink.apache.org
Subject: [EXTERNAL] RE: MSI Auth to Azure Storage Account with Flink Apache 
Operator not working


Shammon,



I’m still having trouble setting the package in my cluster environment. I have 
these lines added to my dockerfile

mkdir ./plugins/azure-fs-hadoop

cp ./opt/flink-azure-fs-hadoop-1.16.0.jar ./plugins/azure-fs-hadoop/

according to the flink docs here 
(https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/deployment/filesystems/azure/)
This should enable the flink-azure-fs-hadoop jar in the environment which has 
the classes to enable the adls2 MSI authentication.
I also have the following dependency in my pom to add it to the FAT Jar.


<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-azure-fs-hadoop</artifactId>
    <version>${flink.version}</version>
</dependency>


However, I still get the class not found error and the flink job is not able to 
authenticate to the azure storage account to store its checkpoints. I’m not 
sure what other configuration 

RE: MSI Auth to Azure Storage Account with Flink Apache Operator not working

2023-05-17 Thread DEROCCO, CHRISTOPHER
Ivan,

How did you use Azure Key Vault with CSI because the flink operator uses a 
configmap and not a Kubernetes secret to create the flink-conf file? I have 
also tried using pod-identities as well as the new workload identity 
(https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview) to no 
avail. It seems to be an issue with configuring 
flink-azure-fs-hadoop-1.16.0.jar with using the flink operator.

From: Ivan Webber 
Sent: Tuesday, May 16, 2023 8:01 PM
To: DEROCCO, CHRISTOPHER ; Shammon FY 
Cc: user@flink.apache.org
Subject: RE: MSI Auth to Azure Storage Account with Flink Apache Operator not 
working

When you create your cluster you probably need to ensure the following settings 
are set. I briefly looked into MSI but ended up using Azure Key Vault with 
CSI-storage driver for initial prototype 
(https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/aks/csi-secrets-store-driver.md#upgrade-an-existing-aks-cluster-with-azure-key-vault-provider-for-secrets-store-csi-driver-support).

For me it helped to think about it as Hadoop configuration.

If you do get MSI working I would be interested in hearing what made it work 
for you, so be sure to update the docs or put it on this thread.

 To create from scratch
Create an AKS cluster with the required settings.
```bash
# create an AKS cluster with pod-managed identity and Azure CNI
az aks create --resource-group $RESOURCE_GROUP --name $CLUSTER 
--enable-managed-identity --network-plugin azure --enable-pod-identity
```

I hope that is somehow helpful.

Best of luck,

Ivan

From: DEROCCO, CHRISTOPHER
Sent: Monday, May 8, 2023 3:40 PM
To: Shammon FY
Cc: user@flink.apache.org
Subject: [EXTERNAL] RE: MSI Auth to Azure Storage Account with Flink Apache 
Operator not working


Shammon,



I’m still having trouble setting the package in my cluster environment. I have 
these lines added to my dockerfile

mkdir ./plugins/azure-fs-hadoop

cp ./opt/flink-azure-fs-hadoop-1.16.0.jar ./plugins/azure-fs-hadoop/

according to the flink docs here 
(https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/deployment/filesystems/azure/)
This should enable the flink-azure-fs-hadoop jar in the environment which has 
the classes to enable the adls2 MSI authentication.
I also have the following dependency in my pom to add it to the FAT Jar.


<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-azure-fs-hadoop</artifactId>
    <version>${flink.version}</version>
</dependency>


However, I still get the class not found error and the flink job is not able to 
authenticate to the azure storage account to store its checkpoints. I’m not 
sure what other configuration pieces I’m missing. Has anyone had successful 
with writing checkpoints to Azure ADLS2gen Storage with managed service 
identity (MSI) authentication.?



From: Shammon FY <zjur...@gmail.com>
Sent: Friday, May 5, 2023 8:38 PM
To: DEROCCO, CHRISTOPHER <cd9...@att.com>
Cc: user@flink.apache.org
Subject: Re: MSI Auth to Azure Storage Account with Flink Apache Operator not 
working

Hi DEROCCO,

I think you can check the startup command of the job on k8s to see if the jar 
file is in the classpath.

If your job is DataStream, you need to add hadoop azure dependency in your 
project, and if it is an SQL job, you need to include this jar file in your 
Flink release package. Or you can also add this package in your cluster 
environment.

Best,
Shammon FY


On Fri, May 5, 2023 at 10:21 PM DEROCCO, CHRISTOPHER <cd9...@att.com> wrote:
How can I add the package to the flink job or check if it is there?

From: Shammon FY <zjur...@gmail.com>
Sent: Thursday, May 4, 2023 9:59 PM
To: DEROCCO, CHRISTOPHER <cd9...@att.com>
Cc: user@flink.apache.org
Subject: Re: MSI Auth to Azure Storage Account with Flink Apache Operator not 
working

Hi DEROCCO,

I think you need to check whether there is a hadoop-azure jar file in the 
classpath of your flink job. From an error message 'Caused by: 
java.lang.ClassNotFoundException: Class 
org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider 

[ANNOUNCE] Apache Flink Kubernetes Operator 1.5.0 released

2023-05-17 Thread Gyula Fóra
The Apache Flink community is very happy to announce the release of Apache
Flink Kubernetes Operator 1.5.0.

The Flink Kubernetes Operator allows users to manage their Apache Flink
applications and their lifecycle through native k8s tooling like kubectl.

Release highlights:
 - Autoscaler improvements
 - Operator stability, observability improvements

Release blogpost:
https://flink.apache.org/2023/05/17/apache-flink-kubernetes-operator-1.5.0-release-announcement/

The release is available for download at:
https://flink.apache.org/downloads.html

Maven artifacts for Flink Kubernetes Operator can be found at:
https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator

Official Docker image for Flink Kubernetes Operator applications can be
found at: https://hub.docker.com/r/apache/flink-kubernetes-operator

The full release notes are available in Jira:
https://issues.apache.org/jira/projects/FLINK/versions/12352931

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Regards,
Gyula Fora



RocksDB segfault on state restore

2023-05-17 Thread Gyula Fóra
Hi All!

We are encountering an error on a larger stateful job (around 1 TB+ of state)
on restore from a RocksDB checkpoint. The taskmanagers keep crashing with a
segfault coming from the RocksDB native code, and it seems to be related to the
FlinkCompactionFilter mechanism.

The gist with the full error report:
https://gist.github.com/gyfora/f307aa570d324d063e0ade9810f8bb25

The core part is here:
V  [libjvm.so+0x79478f]  Exceptions::
(Thread*, char const*, int, oopDesc*)+0x15f
V  [libjvm.so+0x960a68]  jni_Throw+0x88
C  [librocksdbjni-linux64.so+0x222aa1]
 JavaListElementFilter::NextUnexpiredOffset(rocksdb::Slice const&, long,
long) const+0x121
C  [librocksdbjni-linux64.so+0x6486c1]
 rocksdb::flink::FlinkCompactionFilter::ListDecide(rocksdb::Slice const&,
std::string*) const+0x81
C  [librocksdbjni-linux64.so+0x648bea]
 rocksdb::flink::FlinkCompactionFilter::FilterV2(int, rocksdb::Slice
const&, rocksdb::CompactionFilter::ValueType, rocksdb::Slice const&,
std::string*, std::string*) const+0x14a

Has anyone encountered a similar issue before?

Thanks
Gyula
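
For context, the FlinkCompactionFilter frames in that trace belong to the native filter that RocksDB invokes during compaction when state TTL is configured with compaction-filter cleanup. A rough PyFlink sketch of that kind of configuration, purely for orientation (illustrative state name and TTL, not the job above):

```python
# Illustration: state TTL with cleanup in the RocksDB compaction filter is the
# configuration that activates rocksdb::flink::FlinkCompactionFilter.
from pyflink.common.time import Time
from pyflink.common.typeinfo import Types
from pyflink.datastream.state import ListStateDescriptor, StateTtlConfig

ttl_config = StateTtlConfig \
    .new_builder(Time.seconds(7 * 24 * 3600)) \
    .set_update_type(StateTtlConfig.UpdateType.OnCreateAndWrite) \
    .cleanup_in_rocksdb_compact_filter(1000) \
    .build()

descriptor = ListStateDescriptor("events", Types.STRING())
descriptor.enable_time_to_live(ttl_config)
# The descriptor would then be used in a KeyedProcessFunction's open() method.
```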


Flink SQL: Chinese data from CASE WHEN written to Doris comes out garbled

2023-05-17 Thread casel.chen
I am using Flink SQL to write data from a MySQL table into a Doris table. A CASE
WHEN expression that maps the transaction type uses Chinese string literals, and
after writing, those values come back garbled when queried in Doris, while other
Chinese columns copied from MySQL are written correctly. How can the garbled
characters produced by this SQL be fixed?
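
The statement itself is not included in the report; as an illustration only, the pattern described looks roughly like the PyFlink sketch below, with hypothetical table and column names and with 'datagen'/'print' standing in for the real MySQL source and Doris sink. The CASE WHEN string literals are the values reported as garbled, while columns copied through from MySQL are fine:

```python
# Illustration of the described pattern (CASE WHEN with Chinese literals).
# Table/column names are hypothetical; stand-in connectors keep it runnable.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE mysql_source (
        order_id   BIGINT,
        trade_type STRING,
        remark     STRING
    ) WITH ('connector' = 'datagen', 'number-of-rows' = '10')
""")

t_env.execute_sql("""
    CREATE TABLE doris_sink (
        order_id      BIGINT,
        trade_type_cn STRING,
        remark        STRING
    ) WITH ('connector' = 'print')
""")

t_env.execute_sql("""
    INSERT INTO doris_sink
    SELECT
        order_id,
        CASE trade_type WHEN 'TRANSFER' THEN '转账'
                        WHEN 'REFUND'   THEN '退款'
                        ELSE '其他' END AS trade_type_cn,
        remark  -- column copied through from the source, reported as fine
    FROM mysql_source
""").wait()
```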

Flink Sql erroring at runtime Flink 1.16

2023-05-17 Thread neha goyal
Hello,

It looks like there is a bug in Flink 1.16's IF operator. If I use the UPPER or
TRIM functions (there might be more such functions), I get the exception below.
These functions used to work fine with Flink 1.13.
select
  if(
address_id = 'a',
'default',
upper(address_id)
  ) as address_id
from
  feature_test

Sample input sent to my Kafka source: {"address_id":"mydata"}
It should be reproducible. Please try it.

2023-05-05 23:30:24
java.io.IOException: Failed to deserialize consumer record due to
at StreamExecCalc$14237.processElement_split1961(Unknown Source)
at StreamExecCalc$14237.processElement(Unknown Source)
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:82)
... 28 more
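
For anyone who wants to try to reproduce this, a self-contained PyFlink sketch that approximates the report with an in-memory table in place of the Kafka source (Flink 1.16.x assumed; the query is the one from the message above):

```python
# Attempted reproduction of the reported IF(...) / UPPER(...) problem,
# using from_elements instead of the original Kafka source.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.create_temporary_view(
    "feature_test",
    t_env.from_elements([("mydata",), ("a",)], ["address_id"]))

t_env.execute_sql("""
    SELECT
      IF(address_id = 'a', 'default', UPPER(address_id)) AS address_id
    FROM feature_test
""").print()
```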


Flink integration with Redis

2023-05-17 Thread 湘晗刚
Flink writes data to Redis. How do I ensure data accuracy? For example, I want to
keep several months of data for every user. If I use state, I would first save to
the state and then just SET it to Redis, but if the user data is too large I will
need a lot of memory. Is there a better way?
Thanks in advance
Kobe24
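
A rough PyFlink sketch of the pattern described above (keep only a small per-user aggregate in keyed state and SET it to Redis), assuming the third-party redis-py client; the key format and aggregation are illustrative, and this is a sketch of the approach rather than an official connector:

```python
# Sketch: per-user aggregate in Flink keyed state, pushed to Redis on update.
# Assumes the redis-py package is installed on the workers; names illustrative.
import json

import redis
from pyflink.common.typeinfo import Types
from pyflink.datastream.functions import KeyedProcessFunction, RuntimeContext
from pyflink.datastream.state import ValueStateDescriptor


class RedisAggregatingFunction(KeyedProcessFunction):

    def open(self, runtime_context: RuntimeContext):
        # The running aggregate lives in Flink state (checkpointed and restored),
        # so only a small value per user is kept, not months of raw events.
        self.total = runtime_context.get_state(
            ValueStateDescriptor("total", Types.LONG()))
        self.client = redis.Redis(host="redis", port=6379)

    def process_element(self, value, ctx):
        user_id, amount = value
        new_total = (self.total.value() or 0) + amount
        self.total.update(new_total)
        # Only the aggregate is written to Redis.
        self.client.set(f"user:{user_id}:total", json.dumps(new_total))
        yield user_id, new_total
```

Used keyed by user, e.g. `stream.key_by(lambda e: e[0]).process(RedisAggregatingFunction(), output_type=Types.TUPLE([Types.STRING(), Types.LONG()]))`.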