>>> Regards,
>>> Vinay Patil
>>>
>>> On Thu, Jun 29, 2017 at 6:01 PM, gerryzhou [via Apache Flink User
>>> Mailing List archive.] <ml+s2336050n1406...@n4.nabble.com> wrote:
>>>
>>>> Hi, Vinay,
>>>> I observed a similar problem in flink 1.3.0 with rocksdb. I wonder
>>>> how to use FRocksDB as you mentioned above. Thanks.
>>>>
>>>> --
>>>> View this message in context:
>>>> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Re-Checkpointing-with-RocksDB-as-statebackend-tp11752p14063.html
>>>> Sent from the Apache Flink User Mailing List archive. mailing list
>>>> archive at Nabble.com.
>>> > The memory is continuously increasing and data is not getting flushed
>>> > to disk.
>>> >
>>> > I have attached the snapshot for reference.
>>> >
>>> > Also the data processed till now is only 17GB and above 120GB memory is
>>> > getting used.
>>> >
>>> > Is there any change wrt RocksDB configurations?
>>> >
>>> > <http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/file/n14013/TM_Memory_Usage.png>
>>> >
>>> > Regards,
>>> > Vinay Patil
>>> >
>>> > --
>>> > View this message in context:
>>> > http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Re-Checkpointing-with-RocksDB-as-statebackend-tp11752p14013.html
>>> > Sent from the Apache Flink User Mailing List archive. mailing list
>>> > archive at Nabble.com.
>>>>>>>>>>> Best,
>>>>>>>>>>> Stefan
>>>>>>>>>>>
>>>>>>>>>>> On 14.03.2017 at 15:31, Vishnu Viswanath <[hidden email]> wrote:
>> The bucketing sink performs rename operations during the checkpoint and if
>> it tries to rename a file that is not yet consistent that would cause a
>> FileNotFound exception which would fail the checkpoint.
>>
>>
>>
>> Stephan,
>>
>>
>>>>>>>>> being so slow after a
>>>>>>>>> certain size-per-key that it basically brings down the streaming
>>>>>>>>> program and the snapshots.
>>>>>>>>>
>>>>>>>>> We
>>>>>>>>>> of times.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> 2) The bucketing sink performs multiple renames over the
>>>>>>>>>> lifetime of a file, occurring when a checkpoint starts and then
>>>>>>>
>>>>>>> <[hidden email]> wrote:
>>>>>>>
>>>>>>>> @vinay Can you try to not set the buffer timeout at all? I am
>>>>>>>> actually not sure what would be the effect of setting it to a negative
>>>>>>>> value, that can be a cause of problems...
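For readers following along: the buffer timeout being discussed is set on the StreamExecutionEnvironment. A minimal sketch of the settings in question (class name and values are illustrative; API names as in Flink 1.x):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BufferTimeoutSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Default is 100 ms: a partially filled network buffer is flushed
        // downstream after at most this delay.
        env.setBufferTimeout(100);

        // 0 would flush after every record (lowest latency, most overhead);
        // -1 flushes only when a buffer is full (highest throughput).
        // env.setBufferTimeout(-1);
    }
}
```

Per the Flink documentation, -1 is the documented way to disable the periodic flush and flush only on full buffers; leaving the timeout unset keeps the 100 ms default.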
>>>>>>> <[hidden email]> wrote:
>>>>>>>
>>>>>>>> Vinay,
>>>>>>>>
>>>>>>>> The bucketing sink performs rename operations during the checkpoint
>>>>>>>> and if it tries to rename a file that is not yet consistent that would
>>>>>>>> cause a FileNotFound exception which would fail the checkpoint.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Stephan,
>>>>>>
>>>>>>
>>>>
>>>> Currently my aws fork contains some very specific assumptions about the
>>>> pipeline that will in general only hold for my pipeline. This is because
>>>> there were still some open questions that I had about how to solve
>>>>
>>> case. I will comment on the Jira issue with more specifics.
>>>
>>>
>>>
>>> Seth Wiesman
>>>
>>>
>>>
>>> *From: *vinay patil <vinay18.pa...@gmail.com>
>>> *Reply-To: *"user@flink.apache.org" <user@flink.apache.org>
>>> *Date: *Monday, February 27, 2017 at 1:05 PM
>>> *To: *"user@flink.apache.org" <user@flink.apache.org>
Reply-To: "user@flink.apache.org" <user@flink.apache.org>
Date: Monday, February 27, 2017 at 1:05 PM
To: "user@flink.apache.org" <user@flink.apache.org>
Subject: Re: Checkpointing with RocksDB as statebackend
Hi Seth,
Thank you for your suggestion.
But if the issu
> *Reply-To: *"user@flink.apache.org" <user@flink.apache.org>
> *Date: *Saturday, February 25, 2017 at 10:50 AM
> *To: *"user@flink.apache.org" <user@flink.apache.org>
> *Subject: *Re: Checkpointing with RocksDB as statebackend
>
>
>
> Hi Stephan,
>
> Just to avoid the confusion
Seth Wiesman

From: vinay patil <vinay18.pa...@gmail.com>
Reply-To: "user@flink.apache.org" <user@flink.apache.org>
Date: Saturday, February 25, 2017 at 10:50 AM
To: "user@flink.apache.org" <user@flink.apache.org>
Subject: Re: Checkpointing with RocksDB as statebackend
>>>>> and 3 it
>>>>> is stuck at the Kafka source after 50%
>>>>> (The data sent till now by Kafka source 1 is 65GB and sent by source 2
>>>>> is 15GB)
>>>>>
>>>>> Within 10 minutes 15M records were processed, and for the next 16 minutes
>>>>> the pipeline is stuck; I don't see any progress beyond 15M because of
>>>>> checkpoints getting failed consistently.
>>>>>
>>>>> <http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/file/n11882/Checkpointing_Failed.png>
>>>>>
>>>>> --
>>>>> View this message in context:
>>>>> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Re-Checkpointing-with-RocksDB-as-statebackend-tp11752p11882.html
>>>>> Sent from the Apache Flink User Mailing List archive. mailing list
>>>>> archive at Nabble.com.
asynchronously.
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Re-Checkpointing-with-RocksDB-as-statebackend-tp11752p11879.html
Sent from the Apache Flink User Mailing List archive. mailing list archive at
Nabble.com.
> ted among all TM's.
> Why does this happen?
>
> --
> View this message in context:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Re-Checkpointing-with-RocksDB-as-statebackend-tp11752p11831.html
> Sent from the Apache Flink User Mailing List archive. mailing list archive
> at Nabble.com.
hread.run(Thread.java:745)
>
> Regards,
> Vinay Patil
>
>
>
> --
> View this message in context: http://apache-flink-user-
> mailing-list-archive.2336050.n4.nabble.com/Re-
> Checkpointing-with-RocksDB-as-statebackend-tp11752p11799.html
> Sent from the Apache Flink User Mailing List archive. mailing list archive
> at Nabble.com.
>
>>> ally, a large amount of memory is consumed by RocksDB to store
>>> necessary indices. To avoid the unlimited growth in the memory
>>> consumption, you can put these indices into the block cache (set
>>> CacheIndexAndFilterBlock to true) and limit the block cache size.
>>>
>>> You can also increase the number of background threads to improve the
>>> performance of flushes and compactions (via MaxBackgroundFlushes and
>>> MaxBackgroundCompactions).
>>>
>>> In YARN clusters, task managers will be killed if their memory
>>> utilization exceeds the allocation size. Currently Flink does not count
>>> the memory used by RocksDB in the allocation. We are working on
>>> fine-grained resource allocation (see FLINK-5131). It may help to avoid
>>> such problems.
>>>
>>> Hope the information helps you.
>>>
>>> Regards,
>>> Xiaogang
>>>
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Re-Checkpointing-with-RocksDB-as-statebackend-tp11752p11759.html
Sent from the Apache Flink User Mailing List archive. mailing list archive at
Nabble.com.
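The tuning described above maps onto Flink's RocksDB OptionsFactory hook roughly as follows. This is only a sketch: the thread counts and the 256 MB cache cap are illustrative, and class/method names follow the Flink 1.x and RocksDB JNI APIs of that era.

```java
import org.apache.flink.contrib.streaming.state.OptionsFactory;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;

public class TunedRocksDBOptions implements OptionsFactory {

    @Override
    public DBOptions createDBOptions(DBOptions currentOptions) {
        // More background threads to speed up flushes and compactions.
        return currentOptions
                .setMaxBackgroundFlushes(4)
                .setMaxBackgroundCompactions(4);
    }

    @Override
    public ColumnFamilyOptions createColumnFamilyOptions(ColumnFamilyOptions currentOptions) {
        BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        // Keep index/filter blocks inside the bounded block cache instead of
        // letting them grow without limit outside it.
        tableConfig.setCacheIndexAndFilterBlocks(true);
        tableConfig.setBlockCacheSize(256L * 1024 * 1024); // 256 MB cap
        return currentOptions.setTableFormatConfig(tableConfig);
    }
}
```

The factory would then be registered on the backend with rocksDbBackend.setOptions(new TunedRocksDBOptions()).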
>
> ----------
> From: Vinay Patil <[hidden email]>
> Sent: Friday, February 17, 2017, 21:19
> To: user <user@flink.apache.org>
> Subject: Re: Checkpointing with RocksDB as statebackend
>
> Hi Guys,
>
> There seems to be some issue with RocksDB memory utilization.
>
> Within a few minutes of the job run, the physical memory usage increases by
> 4-5 GB and it keeps on increasing.
>
> I have tried different options for Max
>> > processed. (For Ex: In 13 minutes 20M records were processed and when
>> > the checkpoint took time there was no progress for the next 10 minutes)
>> >
>> > I have even tried to set max checkpoint timeout to 3 min, but in that
>> > case as well multiple checkpoints were getting failed.
>> >
>> > I have set RocksDB FLASH_SSD_OPTION
>> > What could be the issue?
>> >
>> > P.S. I am writing to 3 S3 sinks
>> >
>> > checkpointing_issue.PNG
>> > <http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/file/n11640/checkpointing_issue.PNG>
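For context, the FLASH_SSD option referred to above is one of Flink's predefined RocksDB option sets. A sketch of how it is typically wired up (the bucket URI and checkpoint interval are illustrative; names per the Flink 1.x API):

```java
import org.apache.flink.contrib.streaming.state.PredefinedOptions;
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SsdRocksDBSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint data goes to the given URI; RocksDB's local working
        // directories default to the task manager temp directories.
        RocksDBStateBackend backend =
                new RocksDBStateBackend("s3://my-bucket/checkpoints"); // illustrative URI
        backend.setPredefinedOptions(PredefinedOptions.FLASH_SSD_OPTIMIZED);

        env.setStateBackend(backend);
        env.enableCheckpointing(60_000); // checkpoint every 60 s
    }
}
```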
> > --
> > View this message in context:
> > http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Checkpointing-with-RocksDB-as-statebackend-tp11640.html
> > Sent from the Apache Flink User Mailing List archive. mailing list
> > archive at Nabble.com.