Re: cannot unsubscribe

2022-07-14 Thread Ryan Trollip
Mikael

I'm also not receiving the confirmation email.

On Thu, Jul 14, 2022, 10:53 AM אריאל מסלאטון 
wrote:

> Thanks for trying to assist, Mikael!
> Unfortunately I didn't receive any confirmation email.
> I checked all of my Gmail labels for it: spam, forums, etc.
>
> On Thu, Jul 14, 2022 at 11:48, Mikael <mikael.arons...@gmail.com> wrote:
>
>> Hi!
>>
>> No need to do that ;)
>>
>> I assume you don't get a confirmation email either? Any chance it may
>> end up in spam or something? It should not, but you never know. If I
>> remember correctly, you need to confirm your unsubscription, so you
>> should receive an email.
>>
>> Mikael
>>
>>
>> On 2022-07-14 10:43, אריאל מסלאטון wrote:
>>
>> Hi Mikael,
>>
>> I've verified it multiple times...
>> I can send screenshots to prove it, if it will help :)
>>
>> On Thu, Jul 14, 2022 at 11:37, Mikael <mikael.arons...@gmail.com> wrote:
>>
>>> Hi!
>>>
>>> It should work OK; make sure you use the same email as you used to
>>> subscribe so there is no hiccup there.
>>>
>>> Mikael
>>> On 2022-07-14 10:28, אריאל מסלאטון wrote:
>>>
>>> I tried that, multiple times...
>>> Still receiving emails
>>>
>>> On Thu, Jul 14, 2022 at 10:29, Pavel Tupitsyn <ptupit...@apache.org> wrote:
>>>
 Please send any text to user-unsubscr...@ignite.apache.org

 On Thu, Jul 14, 2022 at 10:24 AM אריאל מסלאטון <arielmasla...@gmail.com> wrote:

> I have been trying to unsubscribe from the mailing list but I keep getting
> emails.
> I sent unsubscribe messages to every mailing list and it didn't help.
>
> Can you please assist?
> Regards
>
> --
> *054-2116997*
> *arielmasla...@gmail.com *
>

>>>
>>> --
>>> *054-2116997*
>>> *arielmasla...@gmail.com *
>>>
>>>
>>
>> --
>> *054-2116997*
>> *arielmasla...@gmail.com *
>>
>>
>
> --
> *054-2116997*
> *arielmasla...@gmail.com *
>


Re: cannot unsubscribe

2022-07-14 Thread Ryan Trollip
Same issue

On Thu, Jul 14, 2022, 9:24 AM אריאל מסלאטון  wrote:

> I have been trying to unsubscribe from the mailing list but I keep getting emails.
> I sent unsubscribe messages to every mailing list and it didn't help.
>
> Can you please assist?
> Regards
>
> --
> *054-2116997*
> *arielmasla...@gmail.com *
>


Re: Defrag?

2021-07-03 Thread Ryan Trollip
A rebuild of the cache reduced the size of the data dramatically.
Apparently Ignite is not doing anything to rebalance or clean up pages.
I can't see how anyone seriously using Ignite native persistence will avoid
this problem.

I wonder if this impacts indexing also, and whether it could be part of the
poor performance we are having with Ignite native persistence.


On Wed, Jun 30, 2021, 8:27 AM Ryan Trollip  wrote:

> Hey Ilya
>
> It's the data tables that keep growing not the WAL.
> We will try to rebuild the cache and see if that fixes the issue
>
> On Mon, Jun 28, 2021 at 8:46 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> Is it WAL (wal/) that is growing or checkpoint space (db/)? If the
>> latter, any specific caches that are growing unbounded?
>>
>> If the latter, you can try creating a new cache, moving the relevant data
>> to this new cache, switching to using it, and then dropping the old cache -
>> that should reclaim the space.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> On Mon, Jun 28, 2021 at 17:34, Ryan Trollip wrote:
>>
>>> Is this why the native disk storage just keeps growing and does not
>>> reduce after we delete from ignite using SQL?
>>> We are up to 80GB on disk now on some instances. We implemented a custom
>>> archiving feature to move older data out of the Ignite cache to a PostgreSQL
>>> database, but when we delete that data from the Ignite instance, the disk
>>> data size Ignite is using stays the same, and then keeps growing, and
>>> growing...
>>>
>>> On Thu, Jun 24, 2021 at 7:10 PM Denis Magda  wrote:
>>>
>>>> Ignite fellows,
>>>>
>>>> I remember some of us worked on the persistence defragmentation
>>>> features. Has it been merged?
>>>>
>>>> @Valentin Kulichenko  probably you know
>>>> the latest state.
>>>>
>>>> -
>>>> Denis
>>>>
>>>> On Thu, Jun 24, 2021 at 11:59 AM Ilya Kasnacheev <
>>>> ilya.kasnach...@gmail.com> wrote:
>>>>
>>>>> Hello!
>>>>>
>>>>> You can probably drop the entire cache and then re-populate it via
>>>>> loadCache(), etc.
>>>>>
>>>>> Regards,
>>>>> --
>>>>> Ilya Kasnacheev
>>>>>
>>>>>
>>>>> On Wed, Jun 23, 2021 at 21:47, Ryan Trollip wrote:
>>>>>
>>>>>> Thanks, Ilya, we may have to consider moving back to non-native
>>>>>> storage and caching more selectively as the performance degrades when 
>>>>>> there
>>>>>> is a lot of write/delete activity or tables with large amounts of rows.
>>>>>> This is with SQL with indexes and the use of query plans etc.
>>>>>>
>>>>>> Is there any easy way to rebuild the entire native database after
>>>>>> hours? e.g. with a batch run on the weekends?
>>>>>>
>>>>>> On Wed, Jun 23, 2021 at 7:39 AM Ilya Kasnacheev <
>>>>>> ilya.kasnach...@gmail.com> wrote:
>>>>>>
>>>>>>> Hello!
>>>>>>>
>>>>>>> I don't think there's anything ready to use, but "killing
>>>>>>> performance" from fragmentation is also not something reported too 
>>>>>>> often.
>>>>>>>
>>>>>>> Regards,
>>>>>>> --
>>>>>>> Ilya Kasnacheev
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Jun 16, 2021 at 04:39, Ryan Trollip wrote:
>>>>>>>> We see continual very large growth to data with ignite native. We
>>>>>>>> have a very chatty use case that's creating and deleting stuff often. 
>>>>>>>> The
>>>>>>>> data on disk just keeps growing at an explosive rate. So much so we 
>>>>>>>> ported
>>>>>>>> this to a DB to see the difference and the DB is much smaller. I was
>>>>>>>> searching to see if someone has the same issue. This is also killing
>>>>>>>> performance.
>>>>>>>>
>>>>>>>> Found this:
>>>>>>>>
>>>>>>>> https://cwiki.apache.org/confluence/display/IGNITE/IEP-47%3A+Native+persistence+defragmentation
>>>>>>>>
>>>>>>>> Apparently, there is no auto-rebalancing or cleanup of pages?
>>>>>>>>
>>>>>>>> Has anyone implemented a workaround to rebuild the cache and
>>>>>>>> indexes say on a weekly basis to get it to behave reasonably?
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>
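[Editor's sketch] Ilya's suggestion above (copy everything into a fresh cache, switch to it, then drop the fragmented one) can be sketched roughly as follows. This is an illustrative, untested sketch, not the project's documented procedure: it assumes a running node with native persistence enabled, the ignite-core dependency on the classpath, and made-up cache names (`orders`, `orders-new`).

```java
import javax.cache.Cache;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class CacheRebuild {
    public static void main(String[] args) {
        // Assumes a node starts with native persistence configured elsewhere.
        Ignite ignite = Ignition.start();

        IgniteCache<Object, Object> old = ignite.cache("orders");            // fragmented cache
        IgniteCache<Object, Object> fresh = ignite.getOrCreateCache("orders-new");

        // Copy all entries; iterating an IgniteCache runs a scan query under the hood.
        for (Cache.Entry<Object, Object> e : old)
            fresh.put(e.getKey(), e.getValue());

        // Drop the old cache; this releases its partition files and reclaims disk space.
        old.destroy();

        // Application code would now switch to "orders-new".
    }
}
```

For large caches, an IgniteDataStreamer would be faster than per-entry put(), and writers should be quiesced during the copy, otherwise entries written mid-copy can be lost.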


Re: Defrag?

2021-06-30 Thread Ryan Trollip
Hey Ilya

It's the data tables that keep growing not the WAL.
We will try to rebuild the cache and see if that fixes the issue

On Mon, Jun 28, 2021 at 8:46 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> Is it WAL (wal/) that is growing or checkpoint space (db/)? If the latter,
> any specific caches that are growing unbounded?
>
> If the latter, you can try creating a new cache, moving the relevant data
> to this new cache, switching to using it, and then dropping the old cache -
> that should reclaim the space.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Mon, Jun 28, 2021 at 17:34, Ryan Trollip wrote:
>
>> Is this why the native disk storage just keeps growing and does not
>> reduce after we delete from ignite using SQL?
>> We are up to 80GB on disk now on some instances. We implemented a custom
>> archiving feature to move older data out of the Ignite cache to a PostgreSQL
>> database, but when we delete that data from the Ignite instance, the disk
>> data size Ignite is using stays the same, and then keeps growing, and
>> growing...
>>
>> On Thu, Jun 24, 2021 at 7:10 PM Denis Magda  wrote:
>>
>>> Ignite fellows,
>>>
>>> I remember some of us worked on the persistence defragmentation
>>> features. Has it been merged?
>>>
>>> @Valentin Kulichenko  probably you know
>>> the latest state.
>>>
>>> -
>>> Denis
>>>
>>> On Thu, Jun 24, 2021 at 11:59 AM Ilya Kasnacheev <
>>> ilya.kasnach...@gmail.com> wrote:
>>>
>>>> Hello!
>>>>
>>>> You can probably drop the entire cache and then re-populate it via
>>>> loadCache(), etc.
>>>>
>>>> Regards,
>>>> --
>>>> Ilya Kasnacheev
>>>>
>>>>
>>>> On Wed, Jun 23, 2021 at 21:47, Ryan Trollip wrote:
>>>>
>>>>> Thanks, Ilya, we may have to consider moving back to non-native
>>>>> storage and caching more selectively as the performance degrades when 
>>>>> there
>>>>> is a lot of write/delete activity or tables with large amounts of rows.
>>>>> This is with SQL with indexes and the use of query plans etc.
>>>>>
>>>>> Is there any easy way to rebuild the entire native database after
>>>>> hours? e.g. with a batch run on the weekends?
>>>>>
>>>>> On Wed, Jun 23, 2021 at 7:39 AM Ilya Kasnacheev <
>>>>> ilya.kasnach...@gmail.com> wrote:
>>>>>
>>>>>> Hello!
>>>>>>
>>>>>> I don't think there's anything ready to use, but "killing
>>>>>> performance" from fragmentation is also not something reported too often.
>>>>>>
>>>>>> Regards,
>>>>>> --
>>>>>> Ilya Kasnacheev
>>>>>>
>>>>>>
>>>>>> On Wed, Jun 16, 2021 at 04:39, Ryan Trollip wrote:
>>>>>>
>>>>>>> We see continual very large growth to data with ignite native. We
>>>>>>> have a very chatty use case that's creating and deleting stuff often. 
>>>>>>> The
>>>>>>> data on disk just keeps growing at an explosive rate. So much so we 
>>>>>>> ported
>>>>>>> this to a DB to see the difference and the DB is much smaller. I was
>>>>>>> searching to see if someone has the same issue. This is also killing
>>>>>>> performance.
>>>>>>>
>>>>>>> Found this:
>>>>>>>
>>>>>>> https://cwiki.apache.org/confluence/display/IGNITE/IEP-47%3A+Native+persistence+defragmentation
>>>>>>>
>>>>>>> Apparently, there is no auto-rebalancing or cleanup of pages?
>>>>>>>
>>>>>>> Has anyone implemented a workaround to rebuild the cache and indexes
>>>>>>> say on a weekly basis to get it to behave reasonably?
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>


Re: Defrag?

2021-06-28 Thread Ryan Trollip
Is this why the native disk storage just keeps growing and does not reduce
after we delete from Ignite using SQL?
We are up to 80GB on disk now on some instances. We implemented a custom
archiving feature to move older data out of the Ignite cache to a PostgreSQL
database, but when we delete that data from the Ignite instance, the disk
data size Ignite is using stays the same, and then keeps growing, and
growing...

On Thu, Jun 24, 2021 at 7:10 PM Denis Magda  wrote:

> Ignite fellows,
>
> I remember some of us worked on the persistence defragmentation features.
> Has it been merged?
>
> @Valentin Kulichenko  probably you know
> the latest state.
>
> -
> Denis
>
> On Thu, Jun 24, 2021 at 11:59 AM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> You can probably drop the entire cache and then re-populate it via
>> loadCache(), etc.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> On Wed, Jun 23, 2021 at 21:47, Ryan Trollip wrote:
>>
>>> Thanks, Ilya, we may have to consider moving back to non-native storage
>>> and caching more selectively as the performance degrades when there is a
>>> lot of write/delete activity or tables with large amounts of rows. This is
>>> with SQL with indexes and the use of query plans etc.
>>>
>>> Is there any easy way to rebuild the entire native database after hours?
>>> e.g. with a batch run on the weekends?
>>>
>>> On Wed, Jun 23, 2021 at 7:39 AM Ilya Kasnacheev <
>>> ilya.kasnach...@gmail.com> wrote:
>>>
>>>> Hello!
>>>>
>>>> I don't think there's anything ready to use, but "killing performance"
>>>> from fragmentation is also not something reported too often.
>>>>
>>>> Regards,
>>>> --
>>>> Ilya Kasnacheev
>>>>
>>>>
>>>> On Wed, Jun 16, 2021 at 04:39, Ryan Trollip wrote:
>>>>
>>>>> We see continual very large growth to data with ignite native. We have
>>>>> a very chatty use case that's creating and deleting stuff often. The data
>>>>> on disk just keeps growing at an explosive rate. So much so we ported this
>>>>> to a DB to see the difference and the DB is much smaller. I was searching
>>>>> to see if someone has the same issue. This is also killing performance.
>>>>>
>>>>> Found this:
>>>>>
>>>>> https://cwiki.apache.org/confluence/display/IGNITE/IEP-47%3A+Native+persistence+defragmentation
>>>>>
>>>>> Apparently, there is no auto-rebalancing or cleanup of pages?
>>>>>
>>>>> Has anyone implemented a workaround to rebuild the cache and indexes
>>>>> say on a weekly basis to get it to behave reasonably?
>>>>>
>>>>> Thanks
>>>>>
>>>>


Re: Defrag?

2021-06-23 Thread Ryan Trollip
Thanks, Ilya. We may have to consider moving back to non-native storage and
caching more selectively, as performance degrades when there is a lot of
write/delete activity or tables with large numbers of rows. This is with
SQL with indexes, the use of query plans, etc.

Is there any easy way to rebuild the entire native database after hours?
e.g. with a batch run on the weekends?

On Wed, Jun 23, 2021 at 7:39 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> I don't think there's anything ready to use, but "killing performance"
> from fragmentation is also not something reported too often.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Wed, Jun 16, 2021 at 04:39, Ryan Trollip wrote:
>
>> We see continual very large growth to data with ignite native. We have a
>> very chatty use case that's creating and deleting stuff often. The data on
>> disk just keeps growing at an explosive rate. So much so we ported this to
>> a DB to see the difference and the DB is much smaller. I was searching to
>> see if someone has the same issue. This is also killing performance.
>>
>> Found this:
>>
>> https://cwiki.apache.org/confluence/display/IGNITE/IEP-47%3A+Native+persistence+defragmentation
>>
>> Apparently, there is no auto-rebalancing or cleanup of pages?
>>
>> Has anyone implemented a workaround to rebuild the cache and indexes say
>> on a weekly basis to get it to behave reasonably?
>>
>> Thanks
>>
>


Defrag?

2021-06-15 Thread Ryan Trollip
We see continual very large growth of data with Ignite native persistence.
We have a very chatty use case that's creating and deleting stuff often. The
data on disk just keeps growing at an explosive rate, so much so that we
ported this to a DB to see the difference, and the DB is much smaller. I was
searching to see if someone else has the same issue. This is also killing
performance.

Found this:
https://cwiki.apache.org/confluence/display/IGNITE/IEP-47%3A+Native+persistence+defragmentation

Apparently, there is no auto-rebalancing or cleanup of pages?

Has anyone implemented a workaround to rebuild the cache and indexes, say on
a weekly basis, to get it to behave reasonably?

Thanks


Defrag, rebalance, rebuild

2021-06-14 Thread Ryan Trollip
Hey all

We see continual very large growth of data with Ignite native persistence.
We have a very chatty use case that's creating and deleting stuff often. The
data on disk just keeps growing at an explosive rate, so much so that we
ported this to a DB to see the difference, and the DB is much smaller. I was
searching to see if someone else has the same issue. This is also killing
performance.

Found this:
https://cwiki.apache.org/confluence/display/IGNITE/IEP-47%3A+Native+persistence+defragmentation

Apparently, there is no auto-rebalancing or cleanup of pages? That seems
fundamental.

Has anyone implemented a workaround to rebuild the cache and indexes, say on
a weekly basis, to get it to behave reasonably?

Thanks
Ryan


Unsubscribe

2021-02-18 Thread Ryan Trollip
I sent an unsubscribe request but am still getting emails. Please remove
me from the list.


Re: Disappointing SQL update performance - no stored procedures?

2021-01-21 Thread Ryan Trollip
Stephen, thanks. The team is going through all of the performance guides for
a second time. We also understand that this operates on a version of the H2
database and are looking at the documentation around that as well.
The performance bottleneck is in large transactions which comprise what, in
a SQL DB, would be a number of SELECT INTO statements: selects with joins,
and an insert of a few thousand rows per transaction, with a dozen
concurrent transactions.
We are running the client in Spring Boot, in the same JVM, but not running
Ignite as a compute node. Would that make any difference?
What we are trying tomorrow is the bulk insert or "streaming" features, to
see if we can get better performance from that.
After a considerable amount of profiling, we also noticed garbage
collection resource challenges, with 2nd and 3rd level cleanup causing large
delays now and again, and are looking into forcing collection in places and
ensuring all objects are ready for cleanup as soon as possible.

On Thu, Jan 21, 2021 at 2:39 AM Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> This is the General Performance Tips guide: general-perf-tips
> <https://ignite.apache.org/docs/latest/perf-and-troubleshooting/general-perf-tips>
>
> There’s no one-size-fits-all solution, but as a general point, Ignite is
> optimised to work as a cluster. Operating a single server node isn’t where
> you get the best performance relative to other solutions.
>
> The compute grid *is* Ignite’s stored procedure. Using colocated compute
> is one of the key mechanisms for getting the best performance as it avoids
> copying data across the network.
>
> Regards,
> Stephen
>
> On 20 Jan 2021, at 19:23, Ryan Trollip  wrote:
>
> Hey all
>
> For structured data, read access is great, but large transactional updates
> to data are not great for us. We are working with a single node on a larger
> machine, so nothing fancy at all.
> In working to troubleshoot this, our dev team has tried a large number of
> different configurations over the past weeks and made good progress tuning
> performance, but frankly, it's still nowhere near what I would expect from
> a standard SQL Server database with stored procedures, and it's using far
> more memory.
>
> One major issue is the back and forth with the client application. I've
> done some reading up on using the compute grid instead, but found this older
> article where they test the performance, and for large inserts it is many
> times slower than a SQL stored procedure.
> https://cs.ulb.ac.be/public/_media/teaching/ignite_2017.pdf
>
> Is this still the case? Or is there a better way to approach this that we
> are missing? Would code written for the compute grid solve these performance
> issues? Are compiled stored procedures planned for the future?
>
> Thanks
> Ryan
>
>
>
>
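[Editor's sketch] The "bulk insert or streaming" idea mentioned in this thread usually means IgniteDataStreamer, which batches updates per owning node instead of doing one round trip per row. A minimal, untested sketch, assuming a running cluster and a hypothetical cache named `records`:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class BulkLoad {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();   // assumes config defining a "records" cache

        // The streamer buffers entries and ships them to owning nodes in batches.
        try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("records")) {
            streamer.allowOverwrite(false);  // pure inserts are the fastest path
            for (long i = 0; i < 100_000; i++)
                streamer.addData(i, "row-" + i);
        }   // close() flushes any remaining buffered entries
    }
}
```

Note the data streamer is not transactional, so it fits initial and batch loads rather than the transactional workload itself.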


Disappointing SQL update performance - no stored procedures?

2021-01-20 Thread Ryan Trollip
Hey all

For structured data, read access is great, but large transactional updates
to data are not great for us. We are working with a single node on a larger
machine, so nothing fancy at all.
In working to troubleshoot this, our dev team has tried a large number of
different configurations over the past weeks and made good progress tuning
performance, but frankly, it's still nowhere near what I would expect from
a standard SQL Server database with stored procedures, and it's using far
more memory.

One major issue is the back and forth with the client application. I've
done some reading up on using the compute grid instead, but found this older
article where they test the performance, and for large inserts it is many
times slower than a SQL stored procedure.
https://cs.ulb.ac.be/public/_media/teaching/ignite_2017.pdf

Is this still the case? Or is there a better way to approach this that we
are missing? Would code written for the compute grid solve these performance
issues? Are compiled stored procedures planned for the future?

Thanks
Ryan
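[Editor's sketch] The "stored procedure" analogue raised in the replies is colocated compute: ship a closure to the node that owns the key and do the reads and writes there, avoiding the client round trips described above. A rough, untested sketch, assuming a running cluster and a hypothetical `accounts` cache:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class ColocatedUpdate {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();   // assumes config defining an "accounts" cache

        long accountId = 42L;

        // The closure runs on the node that owns key 42, so the get/put are
        // local: one network hop for the job instead of one per statement.
        ignite.compute().affinityRun("accounts", accountId, () -> {
            IgniteCache<Long, Double> cache = Ignition.localIgnite().cache("accounts");
            Double balance = cache.get(accountId);
            cache.put(accountId, (balance == null ? 0.0 : balance) + 100.0);
        });
    }
}
```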


Re: Any custom eviction policies to flush data from memory to disk

2021-01-20 Thread Ryan Trollip
Understood, thanks for the explanation Stephen

On Tue, Jan 19, 2021 at 10:00 AM Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> Ignite *pages* its data. So if you access a record, it will transparently
> be copied from disk to memory. It doesn’t proactively go to disk and pull
> in records you might need.
>
> On 19 Jan 2021, at 16:18, Ryan Trollip  wrote:
>
> Stephen
>
> It seems the problem is that how Ignite deals with this was not clear to
> me. To make sure I understand this correctly:
> Ignite with native persistence on automatically overflows pages to disk
> using the least recently used (LRU) policy (which is great).
> Pages could hold a partial row of a table, but that's OK because a SQL
> query touching that row will pull from RAM and disk automatically under
> the covers; i.e. it's all auto-magically done for us.
>
> What is still not clear is: as branches are deleted, or we scale up servers
> and more memory is made available, will it rotate back into memory from
> disk what was rotated out, in reverse LRU order?
>
> Thanks!!
> Ryan
>
>
> On Tue, Jan 19, 2021 at 7:28 AM Stephen Darlington <
> stephen.darling...@gridgain.com> wrote:
>
>> As long as new branches — to use your analogy — are in memory, why does
>> it matter that a few others are too? The least recently used branches will
>> automatically (LRU) be purged from memory if space is needed for new
>> branches.
>>
>> In fact, if you’re worried about available memory, a *time based* eviction
>> policy won’t work. What if you expect to have 100 branches and size your
>> cluster appropriately. Suddenly, 1000 branches are created. Boom!
>>
>> With a space-based eviction policy — as is the default with Ignite native
>> persistence — that works just fine.
>>
>> You could create a cache with an eviction policy. When the records are
>> deleted after a week, you can have a process listening to delete events and
>> copy the record to a different cache in a small data region.
>>
>> So what you’re asking for is possible, it’s just more complicated and
>> less effective than the alternative.
>>
>> On 19 Jan 2021, at 14:04, Ryan Trollip  wrote:
>>
>> Stephen
>>
>> Let's use an analogy of projects in source control. Let's say we have a
>> very active community of developers, they are creating 100 new branches a
>> day. Each branch has a few thousand objects and associated properties etc.
>> but these developers don't clean up by deleting branches.
>> We want new branches to be cached in memory and available high
>> performance read and write, but older branches to go to disk to save on
>> memory hardware needs, since many are abandoned.
>> The policy could read something like this: Branches that have not been
>> accessed in 1 week, move to disk. On branch access, if on disk, move back
>> to RAM.
>>
>> Thanks
>> Ryan
>>
>> On Tue, Jan 19, 2021 at 2:33 AM Stephen Darlington <
>> stephen.darling...@gridgain.com> wrote:
>>
>>> I guess I’m still not clear why you need to explicitly remove them from
>>> memory.
>>>
>>> By virtue of using native persistence, they’re already on disk. If you
>>> load new data, the old entries will eventually be flushed from memory (but
>>> remain on disk). What do you gain by removing entries from memory at a
>>> specific time?
>>>
>>> Regards,
>>> Stephen
>>>
>>> > On 19 Jan 2021, at 06:02, Naveen  wrote:
>>> >
>>> > Hi Stephen
>>> >
>>> > On the same mail chain: we also have data like OTPs (one-time
>>> > passwords) which are not relevant after a while, but we don't want to
>>> > expire or delete them, just get them flushed to disk. Likewise, we
>>> > have other requirements where data is relevant only for a certain
>>> > duration and is not important later on.
>>> > That's the whole idea of exploring eviction policies
>>> >
>>> > Naveen
>>> >
>>> >
>>> >
>>> > --
>>> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>>
>>>
>>
>>
>
>


Re: Any custom eviction policies to flush data from memory to disk

2021-01-19 Thread Ryan Trollip
Stephen

It seems the problem is that how Ignite deals with this was not clear to me.
To make sure I understand this correctly:
Ignite with native persistence on automatically overflows pages to disk
using the least recently used (LRU) policy (which is great).
Pages could hold a partial row of a table, but that's OK because a SQL query
touching that row will pull from RAM and disk automatically under the
covers; i.e. it's all auto-magically done for us.

What is still not clear is: as branches are deleted, or we scale up servers
and more memory is made available, will it rotate back into memory from disk
what was rotated out, in reverse LRU order?

Thanks!!
Ryan
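[Editor's sketch] The LRU behaviour discussed here can be illustrated with a toy stand-in. This is plain Java, not Ignite's actual page-replacement code (which operates on fixed-size data pages, not cache entries): a LinkedHashMap in access order evicts the least recently used entry once a capacity cap is exceeded.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy LRU cache: NOT Ignite's implementation, just the policy it applies
// when page memory fills up and pages must be rotated out to disk.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true);   // accessOrder=true: get() moves an entry to the tail
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry when over capacity
    }

    public static void main(String[] args) {
        LruCache<String, Integer> ram = new LruCache<>(2);
        ram.put("branch-a", 1);
        ram.put("branch-b", 2);
        ram.get("branch-a");      // touch a, so b is now least recently used
        ram.put("branch-c", 3);   // over capacity: b is evicted
        System.out.println(ram.keySet()); // [branch-a, branch-c]
    }
}
```

The same logic explains the "rotate back in" question: previously evicted data is not proactively reloaded; it re-enters memory only when accessed again.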


On Tue, Jan 19, 2021 at 7:28 AM Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> As long as new branches — to use your analogy — are in memory, why does it
> matter that a few others are too? The least recently used branches will
> automatically (LRU) be purged from memory if space is needed for new
> branches.
>
> In fact, if you’re worried about available memory, a *time based* eviction
> policy won’t work. What if you expect to have 100 branches and size your
> cluster appropriately. Suddenly, 1000 branches are created. Boom!
>
> With a space-based eviction policy — as is the default with Ignite native
> persistence — that works just fine.
>
> You could create a cache with an eviction policy. When the records are
> deleted after a week, you can have a process listening to delete events and
> copy the record to a different cache in a small data region.
>
> So what you’re asking for is possible, it’s just more complicated and less
> effective than the alternative.
>
> On 19 Jan 2021, at 14:04, Ryan Trollip  wrote:
>
> Stephen
>
> Let's use an analogy of projects in source control. Let's say we have a
> very active community of developers, they are creating 100 new branches a
> day. Each branch has a few thousand objects and associated properties etc.
> but these developers don't clean up by deleting branches.
> We want new branches to be cached in memory and available high performance
> read and write, but older branches to go to disk to save on memory hardware
> needs, since many are abandoned.
> The policy could read something like this: Branches that have not been
> accessed in 1 week, move to disk. On branch access, if on disk, move back
> to RAM.
>
> Thanks
> Ryan
>
> On Tue, Jan 19, 2021 at 2:33 AM Stephen Darlington <
> stephen.darling...@gridgain.com> wrote:
>
>> I guess I’m still not clear why you need to explicitly remove them from
>> memory.
>>
>> By virtue of using native persistence, they’re already on disk. If you
>> load new data, the old entries will eventually be flushed from memory (but
>> remain on disk). What do you gain by removing entries from memory at a
>> specific time?
>>
>> Regards,
>> Stephen
>>
>> > On 19 Jan 2021, at 06:02, Naveen  wrote:
>> >
>> > Hi Stephen
>> >
>> > On the same mail chain: we also have data like OTPs (one-time
>> > passwords) which are not relevant after a while, but we don't want to
>> > expire or delete them, just get them flushed to disk. Likewise, we have
>> > other requirements where data is relevant only for a certain duration
>> > and is not important later on.
>> > That's the whole idea of exploring eviction policies
>> >
>> > Naveen
>> >
>> >
>> >
>> > --
>> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>>
>>
>
>


Re: Any custom eviction policies to flush data from memory to disk

2021-01-19 Thread Ryan Trollip
Stephen

Let's use an analogy of projects in source control. Let's say we have a
very active community of developers, they are creating 100 new branches a
day. Each branch has a few thousand objects and associated properties etc.
but these developers don't clean up by deleting branches.
We want new branches to be cached in memory and available for high-performance
reads and writes, but older branches to go to disk to save on memory hardware
needs, since many are abandoned.
The policy could read something like this: Branches that have not been
accessed in 1 week, move to disk. On branch access, if on disk, move back
to RAM.

Thanks
Ryan

On Tue, Jan 19, 2021 at 2:33 AM Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> I guess I’m still not clear why you need to explicitly remove them from
> memory.
>
> By virtue of using native persistence, they’re already on disk. If you
> load new data, the old entries will eventually be flushed from memory (but
> remain on disk). What do you gain by removing entries from memory at a
> specific time?
>
> Regards,
> Stephen
>
> > On 19 Jan 2021, at 06:02, Naveen  wrote:
> >
> > Hi Stephen
> >
> > On the same mail chain: we also have data like OTPs (one-time passwords)
> > which are not relevant after a while, but we don't want to expire or
> > delete them, just get them flushed to disk. Likewise, we have other
> > requirements where data is relevant only for a certain duration and is
> > not important later on.
> > That's the whole idea of exploring eviction policies
> >
> > Naveen
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>


Re: Any custom eviction policies to flush data from memory to disk

2021-01-18 Thread Ryan Trollip
Thanks Stephen!

Sigh... we moved from third-party persistence to Ignite persistence some
time back to simplify the architecture.
I guess we are going to reverse that, set up a separate region for the data
in question, or programmatically create our own eviction events.

Thanks much for the quick response!
Ryan
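[Editor's sketch] A separate data region, as mentioned above, is configured via DataRegionConfiguration. A minimal, untested configuration sketch (region name and sizes are made up for illustration):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class RegionConfig {
    public static void main(String[] args) {
        // Small persistent region for "archive" data: RAM use is capped and
        // the overflow lives on disk, paged back in only on access.
        DataRegionConfiguration archive = new DataRegionConfiguration()
            .setName("archiveRegion")
            .setPersistenceEnabled(true)
            .setMaxSize(256L * 1024 * 1024);   // 256 MB of page memory for this region

        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.setDataRegionConfigurations(archive);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storage);

        Ignition.start(cfg);
        // Caches then opt in with CacheConfiguration.setDataRegionName("archiveRegion").
    }
}
```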

On Mon, Jan 18, 2021 at 9:18 AM Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> If you’re using third-party persistence (or some other way to shift data
> into Ignite), you can use *Expiry* policies:
>
> https://ignite.apache.org/docs/latest/configuring-caches/expiry-policies
>
> Using this, records that are not used in a given period of time can be
> removed.
>
> Eviction is based on *space*, expiry on *time*.
>
> Regards,
> Stephen
>
> On 18 Jan 2021, at 16:13, Ryan Trollip  wrote:
>
> Hey Stephen
>
> We have groups of data that are used actively for a while, then not used
> for a while, but that may become active again later.
> The intention here is to keep/cache active data in memory with ignite and
> to somehow "archive" less active data to disk (free up its memory) based on
> a policy, table, region or something, without going back and forth to a
> separate database/warehouse implementation to off-load and on-load this
> data from disk into the ignite cache.
>
> Thanks
> Ryan
>
>
>
> On Mon, Jan 18, 2021 at 6:30 AM Stephen Darlington <
> stephen.darling...@gridgain.com> wrote:
>
>> What is your use case here? Why you need to evict data from memory on a
>> schedule rather than when it needs the memory?
>>
>> (The short version is that you can’t, but maybe if we understood what
>> you’re trying to do we can figure something out.)
>>
>> Regards,
>> Stephen
>>
>> > On 18 Jan 2021, at 11:46, Naveen  wrote:
>> >
>> > Hi
>> >
>> > Apart from LRU, we don't have any other eviction policy.
>> > If we want to flush or evict a record with a TTL, we need to build a new
>> > eviction policy, right? Or are there other ways of achieving this?
>> >
>> > Thanks
>> > Naveen
>> >
>> >
>> >
>> > --
>> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>>
>>
>
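[Editor's sketch] The expiry-policy pointer in this thread maps to the JCache expiry API. An untested sketch of a cache whose entries are removed a week after last access (the cache name is illustrative; with third-party persistence the record would remain in the backing store):

```java
import java.util.concurrent.TimeUnit;

import javax.cache.expiry.Duration;
import javax.cache.expiry.TouchedExpiryPolicy;

import org.apache.ignite.configuration.CacheConfiguration;

public class ExpiryConfig {
    public static CacheConfiguration<String, byte[]> branchCache() {
        return new CacheConfiguration<String, byte[]>("branches")
            // "Touched": the one-week clock restarts on every read or write.
            .setExpiryPolicyFactory(
                TouchedExpiryPolicy.factoryOf(new Duration(TimeUnit.DAYS, 7)))
            // Proactively remove expired entries instead of waiting for access.
            .setEagerTtl(true);
    }
}
```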
>


Re: Any custom eviction policies to flush data from memory to disk

2021-01-18 Thread Ryan Trollip
Hey Stephen

We have groups of data that are used actively for a while, then not used for
a while, but that may become active again later.
The intention here is to keep/cache active data in memory with ignite and
to somehow "archive" less active data to disk (free up its memory) based on
a policy, table, region or something, without going back and forth to a
separate database/warehouse implementation to off-load and on-load this
data from disk into the ignite cache.

Thanks
Ryan



On Mon, Jan 18, 2021 at 6:30 AM Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> What is your use case here? Why you need to evict data from memory on a
> schedule rather than when it needs the memory?
>
> (The short version is that you can’t, but maybe if we understood what
> you’re trying to do we can figure something out.)
>
> Regards,
> Stephen
>
> > On 18 Jan 2021, at 11:46, Naveen  wrote:
> >
> > Hi
> >
> > Apart from LRU, we don't have any other eviction policy.
> > If we want to flush or evict a record with a TTL, we need to build a new
> > eviction policy, right? Or are there other ways of achieving this?
> >
> > Thanks
> > Naveen
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>


Performance tuning - large transaction

2021-01-14 Thread Ryan Trollip
Hey all

Looking for an independent consultant who can help us with some Ignite
performance tuning and a general architecture/configuration health check.
Part-time, a couple of hours here and there, or in a block of time. Please
contact me directly.

Thanks
Ryan