Could you give more details?
Thanks



------------------ Original Message ------------------
From: <appleyu...@foxmail.com>
Date: Oct 3, 2020 (Sat) 9:30
To: "David Anderson" <dander...@apache.org>
Cc: "user" <user@flink.apache.org>
Subject: Re: need help about "incremental checkpoint", Thanks



Where is the actual path?
I can only get one path from the Web UI.


Is it possible that the error in step 5 is due to a fault in my code?


------------------ Original Message ------------------
From: <753743...@qq.com>
Date: Oct 3, 2020 (Sat) 9:13
To: "David Anderson" <dander...@apache.org>
Cc: "user" <user@flink.apache.org>
Subject: Re: need help about "incremental checkpoint", Thanks



Thanks~!!


I have compared your command with mine in step 5.
Mine is:
       "flink run -s hdfs://Desktop:9000/tmp/flinkck/df6d62a43620f258155b8538ef0ddf1b/chk-22 -c StateWordCount datastream_api-1.0-SNAPSHOT.jar"
Yours is:
$ bin/flink run -s hdfs://Desktop:9000/tmp/flinkck/1de98c1611c134d915d19ded33aeab54/chk-3 <jar file> [args]
They are the same.
Could you tell me where I am wrong?
------------------------------------------------------------------------------------------------------------------------
Maybe the error is not caused by this command?
"Unexpected state handle type, expected: 
class org.apache.flink.runtime.state.KeyGroupsStateHandle, 
but found: 
class org.apache.flink.runtime.state.IncrementalRemoteKeyedStateHandle"
----------------------------------------------------------------------------------------------------------------------------------
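The exception above is a state-backend mismatch: an incremental RocksDB checkpoint is restored via IncrementalRemoteKeyedStateHandle, while a heap-based backend (FsStateBackend or MemoryStateBackend) expects KeyGroupsStateHandle. One way to keep the checkpointing run and the resume run from diverging is to pin the backend in the cluster configuration instead of in code; a minimal flink-conf.yaml fragment, assuming Flink 1.11 option keys and reusing this thread's example checkpoint directory:

```yaml
# flink-conf.yaml: every job submitted to this cluster uses RocksDB,
# so the run that writes chk-22 and the run that resumes from it match.
state.backend: rocksdb
state.backend.incremental: true   # enable incremental checkpoints
state.checkpoints.dir: hdfs://Desktop:9000/tmp/flinkck
```

With this in place, a job that does not call setStateBackend() in code still checkpoints with RocksDB, and "flink run -s <path>" restores with the same backend.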


Thanks~


------------------ Original Message ------------------
From: "David Anderson" <dander...@apache.org>
Date: Oct 3, 2020 (Sat) 0:05
To: <753743...@qq.com>
Cc: "user" <user@flink.apache.org>
Subject: Re: need help about "incremental checkpoint", Thanks



If hdfs://Desktop:9000/tmp/flinkck/1de98c1611c134d915d19ded33aeab54/chk-3 was written by the RocksDbStateBackend, then you can use it to recover, provided the new job is also using the RocksDbStateBackend. The command would be

$ bin/flink run -s hdfs://Desktop:9000/tmp/flinkck/1de98c1611c134d915d19ded33aeab54/chk-3 <jar file> [args]

The ":" character is meant to indicate that you should not use the literal string "checkpointMetaDataPath", but rather replace it with the actual path. Do not include the ":" character.
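To make sure both the original run and the resumed run really use the RocksDbStateBackend, the backend can also be set explicitly in the job. A minimal sketch against the Flink 1.11 Java API, reusing this thread's example HDFS path (compiling it needs the flink-statebackend-rocksdb dependency, and running it needs a cluster):

```java
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BackendSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Second constructor argument enables incremental checkpoints.
        // Both the first submission and the "flink run -s ..." resume must
        // go through this same setup so the state handle types match.
        env.setStateBackend(new RocksDBStateBackend("hdfs://Desktop:9000/tmp/flinkck", true));
        env.enableCheckpointing(5000); // checkpoint every 5 seconds
        // ... build the rest of the job here, then call env.execute(...)
    }
}
```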

David

On Fri, Oct 2, 2020 at 5:58 PM <753743...@qq.com> wrote:
>
> I have read the official document
> https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/state/checkpoints.html#directory-structure
>
> At the end of the page linked above, it says:
>
> $ bin/flink run -s :checkpointMetaDataPath [:runArgs]
>
> I tried that command in my previous experiment, but still no luck.
> And why does the official command have ":" after "run -s"?
> I guess the ":" is not necessary.
>
> Could you tell me the right command to recover (resume) from an incremental checkpoint (RocksdbStateBackend)?
>
> Many thanks!
>
> ------------------ Original Message ------------------
> From: <appleyu...@foxmail.com>
> Date: Oct 2, 2020 (Fri) 11:41
> To: "David Anderson" <dander...@apache.org>
> Cc: "user" <user@flink.apache.org>
> Subject: Re: need help about "incremental checkpoint", Thanks
>
> Thanks for your replies!
>
> Could you tell me the right command to recover from a checkpoint manually, using the RocksDB files?
>
> I understand that checkpoints are for automatic recovery,
> but in this experiment I stopped the job by force (by typing "error" four times into nc -lk 9999).
> Is there a way to recover from an incremental checkpoint manually (with the RocksdbStateBackend)?
>
> I can only find hdfs://Desktop:9000/tmp/flinkck/1de98c1611c134d915d19ded33aeab54/chk-3 in my Web UI (I guess this is only used for the FsStateBackend).
>
> Thanks for your help!
>
> ------------------ Original Message ------------------
> From: "David Anderson" <dander...@apache.org>
> Date: Oct 2, 2020 (Fri) 11:24
> To: <appleyu...@foxmail.com>
> Cc: "user" <user@flink.apache.org>
> Subject: Re: need help about "incremental checkpoint", Thanks
>
>> Write in RocksDbStateBackend.
>> Read in FsStateBackend.
>> It's NOT a match.
>
> Yes, that is right. Also, this does not work:
>
> Write in FsStateBackend
> Read in RocksDbStateBackend
>
> For questions and support in Chinese, you can use the user...@flink.apache.org mailing list. See the instructions at https://flink.apache.org/zh/community.html for how to join the list.
>
> Best,
> David
>
> On Fri, Oct 2, 2020 at 4:45 PM <appleyu...@foxmail.com> wrote:
>>
>> Thanks for your replies!
>>
>> My English is poor, so let me check my understanding of your reply:
>>
>> Write in RocksDbStateBackend.
>> Read in FsStateBackend.
>> It's NOT a match.
>> So I'm wrong in step 5?
>> Is my understanding above correct?
>>
>> Thanks for your help.
>>
>> ------------------ Original Message ------------------
>> From: "David Anderson" <dander...@apache.org>
>> Date: Oct 2, 2020 (Fri) 10:35
>> To: <appleyu...@foxmail.com>
>> Cc: "user" <user@flink.apache.org>
>> Subject: Re: need help about "incremental checkpoint", Thanks
>>
>> It looks like you were trying to resume from a checkpoint taken with the FsStateBackend into a revised version of the job that uses the RocksDbStateBackend. Switching state backends in this way is not supported: checkpoints and savepoints are written in a state-backend-specific format, and can only be read by the same backend that wrote them.
>>
>> It is possible, however, to migrate between state backends using the State Processor API [1].
>>
>> [1] https://ci.apache.org/projects/flink/flink-docs-stable/dev/libs/state_processor_api.html
>>
>> Best,
>> David
>>
>> On Fri, Oct 2, 2020 at 4:07 PM <appleyu...@foxmail.com> wrote:
>>>
>>> I want to do an experiment with "incremental checkpoint".
>>>
>>> My code is:
>>>
>>> https://paste.ubuntu.com/p/DpTyQKq6Vk/
>>>
>>> pom.xml is:
>>>
>>> <?xml version="1.0" encoding="UTF-8"?>
>>> <project xmlns="http://maven.apache.org/POM/4.0.0"
>>>          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>>          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
>>>     <modelVersion>4.0.0</modelVersion>
>>>
>>>     <groupId>example</groupId>
>>>     <artifactId>datastream_api</artifactId>
>>>     <version>1.0-SNAPSHOT</version>
>>>     <build>
>>>         <plugins>
>>>             <plugin>
>>>                 <groupId>org.apache.maven.plugins</groupId>
>>>                 <artifactId>maven-compiler-plugin</artifactId>
>>>                 <version>3.1</version>
>>>                 <configuration>
>>>                     <source>1.8</source>
>>>                     <target>1.8</target>
>>>                 </configuration>
>>>             </plugin>
>>>
>>>             <plugin>
>>>                 <groupId>org.scala-tools</groupId>
>>>                 <artifactId>maven-scala-plugin</artifactId>
>>>                 <version>2.15.2</version>
>>>                 <executions>
>>>                     <execution>
>>>                         <goals>
>>>                             <goal>compile</goal>
>>>                             <goal>testCompile</goal>
>>>                         </goals>
>>>                     </execution>
>>>                 </executions>
>>>             </plugin>
>>>         </plugins>
>>>     </build>
>>>
>>>     <dependencies>
>>>         <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-streaming-scala -->
>>>         <dependency>
>>>             <groupId>org.apache.flink</groupId>
>>>             <artifactId>flink-streaming-scala_2.11</artifactId>
>>>             <version>1.11.1</version>
>>>             <!--<scope>provided</scope>-->
>>>         </dependency>
>>>
>>>         <!--<dependency>-->
>>>         <!--<groupId>org.apache.flink</groupId>-->
>>>         <!--<artifactId>flink-streaming-java_2.12</artifactId>-->
>>>         <!--<version>1.11.1</version>-->
>>>         <!--<scope>compile</scope>-->
>>>         <!--</dependency>-->
>>>
>>>         <dependency>
>>>             <groupId>org.apache.flink</groupId>
>>>             <artifactId>flink-clients_2.11</artifactId>
>>>             <version>1.11.1</version>
>>>         </dependency>
>>>
>>>         <dependency>
>>>             <groupId>org.apache.flink</groupId>
>>>             <artifactId>flink-statebackend-rocksdb_2.11</artifactId>
>>>             <version>1.11.2</version>
>>>             <!--<scope>test</scope>-->
>>>         </dependency>
>>>
>>>         <dependency>
>>>             <groupId>org.apache.hadoop</groupId>
>>>             <artifactId>hadoop-client</artifactId>
>>>             <version>3.3.0</version>
>>>         </dependency>
>>>
>>>         <dependency>
>>>             <groupId>org.apache.flink</groupId>
>>>             <artifactId>flink-core</artifactId>
>>>             <version>1.11.1</version>
>>>         </dependency>
>>>
>>>         <!--<dependency>-->
>>>         <!--<groupId>org.slf4j</groupId>-->
>>>         <!--<artifactId>slf4j-simple</artifactId>-->
>>>         <!--<version>1.7.25</version>-->
>>>         <!--<scope>compile</scope>-->
>>>         <!--</dependency>-->
>>>
>>>         <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-cep -->
>>>         <dependency>
>>>             <groupId>org.apache.flink</groupId>
>>>             <artifactId>flink-cep_2.11</artifactId>
>>>             <version>1.11.1</version>
>>>         </dependency>
>>>
>>>         <dependency>
>>>             <groupId>org.apache.flink</groupId>
>>>             <artifactId>flink-cep-scala_2.11</artifactId>
>>>             <version>1.11.1</version>
>>>         </dependency>
>>>
>>>         <dependency>
>>>             <groupId>org.apache.flink</groupId>
>>>             <artifactId>flink-scala_2.11</artifactId>
>>>             <version>1.11.1</version>
>>>         </dependency>
>>>
>>>         <dependency>
>>>             <groupId>org.projectlombok</groupId>
>>>             <artifactId>lombok</artifactId>
>>>             <version>1.18.4</version>
>>>             <!--<scope>provided</scope>-->
>>>         </dependency>
>>>     </dependencies>
>>> </project>
>>>
>>> The error I got is:
>>>
>>> https://paste.ubuntu.com/p/49HRYXFzR2/
>>>
>>> Some of the above error output is:
>>>
>>> Caused by: java.lang.IllegalStateException: Unexpected state handle type, expected: class org.apache.flink.runtime.state.KeyGroupsStateHandle, but found: class org.apache.flink.runtime.state.IncrementalRemoteKeyedStateHandle
>>>
>>> The steps are:
>>>
>>> 1. mvn clean scala:compile compile package
>>>
>>> 2. nc -lk 9999
>>>
>>> 3. flink run -c wordcount_increstate datastream_api-1.0-SNAPSHOT.jar
>>>    Job has been submitted with JobID df6d62a43620f258155b8538ef0ddf1b
>>>
>>> 4. Input the following contents in nc -lk 9999:
>>>
>>> before
>>> error
>>> error
>>> error
>>> error
>>>
>>> 5. flink run -s hdfs://Desktop:9000/tmp/flinkck/df6d62a43620f258155b8538ef0ddf1b/chk-22 -c StateWordCount datastream_api-1.0-SNAPSHOT.jar
>>>
>>> Then the above error happens.
>>>
>>> Please help, thanks!
>>>
>>>
>>> I have tried to subscribe to user@flink.apache.org,
>>> but got no replies. If possible, please also send your valuable replies to appleyu...@foxmail.com, thanks.
>>>
