The following is the environment I use:
1. flink.version: 1.9.0
2. java version "1.8.0_212"
3. scala version: 2.11.12
When I wrote the following code in Scala, I got the following error:
// set up the batch execution environment
val bbSettings =
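The snippet is cut off here. For reference, a minimal sketch of the usual
Flink 1.9 Blink-planner batch setup that the truncated line appears to begin,
following the Flink 1.9 documentation (the rest of the original code and the
actual error message are unknown):

// set up the batch table environment (Blink planner, Flink 1.9)
import org.apache.flink.table.api.EnvironmentSettings
import org.apache.flink.table.api.TableEnvironment

val bbSettings = EnvironmentSettings.newInstance()
  .useBlinkPlanner()
  .inBatchMode()
  .build()
val bbTableEnv = TableEnvironment.create(bbSettings)

With Scala 2.11 this needs flink-table-api-scala-bridge_2.11 and
flink-table-planner-blink_2.11 on the classpath; a missing planner dependency
is a common cause of failures at exactly this line.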
> release-notes/flink-1.7.html#savepoints-being-used-for-recovery
>
>
> Regards,
>
> Kien
>
>
> On 1/23/2019 6:57 PM, Ben Yan wrote:
> > hi:
> >
> > Can I delete this savepoint directory immediately after the job
> > resumes running from the savepoint directory?
> >
> > Best
> > Ben
>
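The release note Kien links to is the crux: since Flink 1.7, savepoints can be
used for recovery, so deleting the directory the moment the job is back up can
break recovery until the job has completed checkpoints of its own. A hedged
sketch of the safer sequence using the standard Flink CLI (savepoint path and
jar name are placeholders):

# resume the job from the savepoint
bin/flink run -s hdfs:///savepoints/savepoint-abc123 my-job.jar

# after the restored job has completed at least one new checkpoint,
# dispose of the savepoint through Flink instead of deleting files by hand
bin/flink savepoint -d hdfs:///savepoints/savepoint-abc123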
reason, what other ways could quickly restore this state data? For example,
could some kind of offline task rebuild the state from offline data, so that
streaming jobs can be launched from the recovered state?
Best
Ben
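What Ben describes here, an offline job that writes state a streaming job can
then start from, is essentially what the State Processor API added in Flink 1.9
provides; it did not yet exist when this mail was written, and it requires the
flink-state-processor-api dependency. A rough sketch, with all data, UIDs, and
paths as illustrative assumptions:

import org.apache.flink.api.common.state.{ValueState, ValueStateDescriptor}
import org.apache.flink.api.java.ExecutionEnvironment
import org.apache.flink.api.java.functions.KeySelector
import org.apache.flink.api.java.tuple.Tuple2
import org.apache.flink.configuration.Configuration
import org.apache.flink.runtime.state.memory.MemoryStateBackend
import org.apache.flink.state.api.functions.KeyedStateBootstrapFunction
import org.apache.flink.state.api.{OperatorTransformation, Savepoint}

// writes one ValueState entry per key from the offline records
class CountBootstrapper
    extends KeyedStateBootstrapFunction[String, Tuple2[String, java.lang.Long]] {
  @transient private var count: ValueState[java.lang.Long] = _

  override def open(conf: Configuration): Unit =
    count = getRuntimeContext.getState(
      new ValueStateDescriptor[java.lang.Long]("count", classOf[java.lang.Long]))

  override def processElement(
      in: Tuple2[String, java.lang.Long],
      ctx: KeyedStateBootstrapFunction[String, Tuple2[String, java.lang.Long]]#Context): Unit =
    count.update(in.f1)
}

object BootstrapStateFromOfflineData {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // stand-in for the real offline data source
    val offline = env.fromElements(
      Tuple2.of("user-a", java.lang.Long.valueOf(3L)),
      Tuple2.of("user-b", java.lang.Long.valueOf(7L)))

    val bootstrap = OperatorTransformation
      .bootstrapWith(offline)
      .keyBy(new KeySelector[Tuple2[String, java.lang.Long], String] {
        override def getKey(t: Tuple2[String, java.lang.Long]): String = t.f0
      })
      .transform(new CountBootstrapper)

    Savepoint
      // for the sketch; a job like the one in this thread would use RocksDBStateBackend
      .create(new MemoryStateBackend(), 128)            // max parallelism: assumption
      .withOperator("stateful-operator-uid", bootstrap) // must match the streaming job's uid
      .write("hdfs:///savepoints/bootstrapped")         // target path: assumption

    env.execute("bootstrap state from offline data")
  }
}

The streaming job can then be started from hdfs:///savepoints/bootstrapped like
from any other savepoint.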
Ben Yan wrote on Sat, Dec 8, 2018 at 11:08 AM:
> I h
yarn/local/usercache/yarn/appcache/application_1544101169829_0038/
total 12
drwx--x--- 13 yarn hadoop 4096 Dec 8 02:29 filecache
drwxr-xr-x 2 yarn hadoop 4096 Dec 8 02:53 localState
drwxr-xr-x 2 yarn hadoop 4096 Dec 8 02:53 rocksdb-lib-7ef4471db8d3b8c1bdcfa4dba4d95a36
Ben Yan wrote on Sat, Dec 8, 2018:
> if the listed file (the link target)
> does exist before linking, which it should in my opinion because it is
> listed in the directory.
>
> On 7. Dec 2018, at 16:33, Ben Yan wrote:
>
> The version of the recovered checkpoint is also 1.7.0 .
>
> Stefan Richter wrote on Fri, Dec 7, 2018:
> iteration is driven by listing the
> existing files. Can you somehow monitor which files and directories are
> created during the restore attempt?
>
> On 7. Dec 2018, at 15:53, Ben Yan wrote:
>
> hi ,Stefan
>
> Thank you for your explanation. I used Flink 1.6.2, which is with
> Did you use
> a different Flink version before (which worked)?
>
> Best,
> Stefan
>
> On 7. Dec 2018, at 11:28, Ben Yan wrote:
>
> Hi. I am using Flink 1.7.0, with RocksDB and HDFS as the state backend,
> but recently I found the following exception when the job resumed from
Thanks. If you need any more information, please let me know and I will
provide it.
Piotr Nowojski wrote on Fri, Dec 7, 2018 at 7:31 PM:
> Adding back user mailing list.
>
> Andrey, could you take a look at this?
>
> Piotrek
>
> On 7 Dec 2018, at 12:28, Ben Yan wrote:
Hi. I am using Flink 1.7.0, with RocksDB and HDFS as the state backend,
but recently I found the following exception when the job resumed from the
checkpoint. Task-local state is always considered a secondary copy; the
ground truth of the checkpoint state is the primary copy in the distributed store.
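That last sentence describes task-local recovery: the local copy only
accelerates restores, and Flink always falls back to the primary copy in the
distributed store. A sketch of the related flink-conf.yaml keys (values are
illustrative):

state.backend: rocksdb
state.checkpoints.dir: hdfs:///flink/checkpoints
# keep a secondary copy of keyed state on the TaskManager for faster restores
state.backend.local-recovery: true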
Hi Stephan,
Will [ https://issues.apache.org/jira/browse/FLINK-5479 ] (Per-partition
watermarks in FlinkKafkaConsumer should consider idle partitions) be
included in 1.6? We are seeing more and more users hit this issue on the
mailing lists.
Thanks.
Ben
2018-06-05 5:29 GMT+08:00 Che Lui Shum:
>
> On Apr 10, 2018, at 7:32 PM, Ben Yan <yan.xiao.bin.m...@gmail.com> wrote:
>
> Hi Chesnay:
>
> I think it would be better without such a limitation. I want to
> ask about another problem. When I use BucketingSink (with AWS S3), the
> filenames of a few f
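The question is cut off at this point. For context, BucketingSink assembles
part-file names from configurable prefixes and suffixes around a subtask index
and counter; a brief sketch of the knobs involved (bucket path and date format
are assumptions):

import org.apache.flink.streaming.connectors.fs.bucketing.{BucketingSink, DateTimeBucketer}

// one directory per hour; part files named like <partPrefix>-<subtask>-<counter>
val sink = new BucketingSink[String]("s3://my-bucket/output")
sink.setBucketer(new DateTimeBucketer[String]("yyyy-MM-dd--HH"))
sink.setPartPrefix("part")
sink.setInProgressPrefix("_")      // marks files still being written
sink.setPendingPrefix("_pending-") // marks files waiting for a checkpoint
sink.setPendingSuffix(".pending")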
> On Apr 10, 2018, at 6:29 PM, Ben Yan <yan.xiao.bin.m...@gmail.com> wrote:
>
> Hi Fabian.
>
> If I use a ProcessFunction, I can get it. But I want to know how
> to get the Kafka timestamp in methods like flatMap and map.
<https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/process_function.html#the-processfunction>
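The ProcessFunction approach from the thread can be wrapped once so that
downstream map and flatMap operators see the timestamp as ordinary data; a
small sketch (pairing each element with its timestamp is my own framing, not
from the thread):

import org.apache.flink.streaming.api.functions.ProcessFunction
import org.apache.flink.util.Collector

// pairs each element with its record timestamp (the Kafka timestamp when
// the source attaches one); ctx.timestamp() may be null if no timestamp
// was assigned, which would surface here as an unboxing NPE
class WithTimestamp[T] extends ProcessFunction[T, (T, Long)] {
  override def processElement(value: T,
                              ctx: ProcessFunction[T, (T, Long)]#Context,
                              out: Collector[(T, Long)]): Unit =
    out.collect((value, ctx.timestamp()))
}

// usage: stream.process(new WithTimestamp[MyEvent]).map { case (e, ts) => ... }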
>
> 2018-03-30 7:45 GMT+02:00 Ben Yan <yan.xiao.bin.m...@gmail.com>:
hi,
Is that what you mean?
See:
https://issues.apache.org/jira/browse/FLINK-8500?page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel=16377145#comment-16377145