Ignition Start - Timeout if connection is unsuccessful

2019-09-10 Thread Mahesh Renduchintala
Hello

We are currently using Ignition.start to obtain a handle to the thick client.

>> ignite = Ignition.start(cfg);

As I understand it, this API blocks until the connection is successfully
established.

However, in scenarios where the thick client is unable to connect properly,
it would be preferable to have a timeout option, as specified below:
>> ignite = Ignition.start(cfg, timeout);

Is this already available today? If not, can you take it as an enhancement
request for 2.8?

The reason I ask is that in some scenarios, when a thick client comes up for
the very first time, we see the thick client attempting to connect to the
Ignite servers in an almost infinite loop.
I raised this infinite-loop connection issue before:
http://apache-ignite-users.70518.x6.nabble.com/client-reconnect-working-td28570.html
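
In the meantime, a caller-side timeout can be approximated by running the
blocking start on a separate thread. A minimal sketch (the 30-second timeout
is an assumed value; note the start attempt may keep running in the
background if it times out):

import java.util.concurrent.*;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

ExecutorService exec = Executors.newSingleThreadExecutor();
Future<Ignite> fut = exec.submit(() -> Ignition.start(cfg));
try {
    ignite = fut.get(30, TimeUnit.SECONDS); // bound the blocking start
} catch (TimeoutException e) {
    fut.cancel(true); // best-effort interrupt; the join attempt may continue
    // handle/report startup failure here
} catch (InterruptedException | ExecutionException e) {
    // startup failed for another reason
}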

regards
mahesh






Re: IgniteCache.invoke deadlock example

2019-09-10 Thread Evangelos Morakis
Hi Andrei, 
Thanks a lot for your reply. Regarding the dummy code I provided, I take it
you mean the following?

Cache<String, Person> personsCache = ...

// THREAD 1
personsCache.invoke("personKey1", new EntryProcessor<String, Person, Object>() {

    @Override public Object process(MutableEntry<String, Person> entry,
            Object... args) {

        Person person = entry.getValue();
        entry.setValue(person.setOccupation("foo"));

        // some additional calculation here involving key "personKey2" <- - - some other key?

        return null;
    }
});

// THREAD 2
personsCache.invoke("personKey2", new EntryProcessor<String, Person, Object>() {

    @Override public Object process(MutableEntry<String, Person> entry,
            Object... args) {

        Person person = entry.getValue();
        entry.setValue(person.setOccupation("foo"));

        // some additional calculation here involving key "personKey1" <- - - some other key?

        return null;
    }
});

 The 2 threads WILL deadlock in that situation. 
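
For concreteness, here is a self-contained sketch of that cross-key pattern
(hypothetical cache and key names; since the outcome depends on thread
timing, it may take several runs to actually hang):

import javax.cache.processor.EntryProcessor;
import javax.cache.processor.MutableEntry;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class InvokeDeadlockSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<String, Integer> cache = ignite.getOrCreateCache("persons");
        cache.put("personKey1", 0);
        cache.put("personKey2", 0);

        // Each thread locks its own key via invoke(), then touches the OTHER
        // key from inside the processor -- the documented anti-pattern.
        new Thread(() -> crossInvoke(cache, "personKey1", "personKey2")).start();
        new Thread(() -> crossInvoke(cache, "personKey2", "personKey1")).start();
        // With unlucky timing, each thread holds one entry lock and waits
        // forever for the other (a classic ABBA deadlock).
    }

    static void crossInvoke(IgniteCache<String, Integer> cache, String own, String other) {
        cache.invoke(own, new EntryProcessor<String, Integer, Object>() {
            @Override public Object process(MutableEntry<String, Integer> entry, Object... args) {
                entry.setValue(entry.getValue() + 1);
                // Updating another key while this entry's lock is held:
                cache.invoke(other, new EntryProcessor<String, Integer, Object>() {
                    @Override public Object process(MutableEntry<String, Integer> e, Object... a) {
                        e.setValue(e.getValue() + 1);
                        return null;
                    }
                });
                return null;
            }
        });
    }
}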

So, could those keys of the persons cache (for example) that I have marked
above be the ones that the documentation (and hence your explanation as
well) designates as “other keys”?

Thanks
Evangelos Morakis


> On 9 Sep 2019, at 20:40, Andrei Aleksandrov  wrote:
> 
> Hello,
> 
> When you use an entry processor, you lock only the provided key. So when
> you try to work with other keys (different from the provided one) that are
> being processed in other threads, a deadlock is possible: another thread
> can take a lock on those other keys and wait for the provided one, while
> the entry processor waits for those other keys. It's a typical deadlock.
> 
> Sorry, I will not provide an example, but I hope my explanation is clear.
> 
> BR,
> Andrei
> 
> 9/7/2019 6:31 PM, Evangelos Morakis wrote:
>> 
>> Dear igniters, 
>> 
>> I would like to elicit your expert advice regarding how Ignite
>> differentiates between a call to 1) IgniteCompute.affinityRun(...) and
>> 2) IgniteCache.invoke(...) as far as deadlocks are concerned. According to
>> the documentation, the main difference is that method 2 above operates
>> within a lock. Specifically, the doc quotes:
>> “EntryProcessors are executed atomically within a lock on the given cache 
>> key.”
>> Now it even comes with a warning that is meant to show how it is supposed to 
>> be used (or conversely NOT to be used):
>> “You should not access other keys from within the EntryProcessor logic as it 
>> may cause a deadlock.”
>> But to what kind of keys does this phrase “other keys” refer? The
>> remaining keys of the passed-in cache? For example:
>>  Assume a persons cache...
>> Cache<String, Person> personsCache = ...
>> 
>> personsCache.invoke("personKey", new EntryProcessor<String, Person, Object>() {
>>     @Override public Object process(MutableEntry<String, Person> entry,
>>             Object... args) {
>>         Person person = entry.getValue();
>>         entry.setValue(person.setOccupation("foo"));
>>         return null;
>>     }
>> });
>> In other words, can someone provide an example, based on the above dummy
>> code, that would make invoke deadlock, so that I can understand what the
>> documentation refers to?
>> 
>> Thanks 
>> 
>> Evangelos Morakis
>> 


Re: Altered sql table (adding new columns) does not reflect in Spark shell

2019-09-10 Thread Shravya Nethula
Thank you Andrei.


Regards,

Shravya Nethula,

BigData Developer,


Hyderabad.


From: Andrei Aleksandrov 
Sent: Tuesday, September 10, 2019 8:22 PM
To: user@ignite.apache.org 
Subject: Re: Altered sql table (adding new columns) does not reflect in Spark 
shell


Hi,

Yes, I can confirm that this is the issue. I filed the following ticket for it:

https://issues.apache.org/jira/browse/IGNITE-12159

BR,
Andrei

9/7/2019 10:00 PM, Shravya Nethula wrote:
Hi,

I created and altered the table using the following queries:

a. CREATE TABLE person (id LONG, name VARCHAR(64), age LONG, city_id DOUBLE, 
zip_code LONG, PRIMARY KEY (name)) WITH "backups=1"
b. ALTER TABLE person ADD COLUMN (first_name VARCHAR(64), last_name VARCHAR(64))

The changes (columns added by the ALTER TABLE statement above) are correct
when verified from GridGain.

However, when I use the Spark shell, I couldn't find the columns added
through the ALTER TABLE statement (query (b) above).
Is there any configuration that I am missing? (Attached ignite-config file for 
reference)

Executed the following commands in Spark shell:

Step 1: Connected to Spark shell:
/usr/hdp/2.6.5.1100-53/spark2/bin/spark-shell --jars 
/opt/jar/ignite-core-2.7.0.jar,/opt/jar/ignite-spark-2.7.0.jar,/opt/jar/ignite-spring-2.7.0.jar,"/opt/jar/commons-logging-1.1.3.jar","/opt/jar/spark-core_2.11-2.3.0.jar","/opt/jar/spring-core-4.3.18.RELEASE.jar","/opt/jar/spring-beans-4.3.18.RELEASE.jar","/opt/jar/spring-aop-4.3.18.RELEASE.jar","/opt/jar/spring-context-4.3.18.RELEASE.jar","/opt/jar/spring-tx-4.3.18.RELEASE.jar","/opt/jar/spring-jdbc-4.3.18.RELEASE.jar","/opt/jar/spring-expression-4.3.18.RELEASE.jar","/opt/jar/cache-api-1.0.0.jar","/opt/jar/annotations-13.0.jar","/opt/jar/ignite-shmem-1.0.0.jar","/opt/jar/ignite-indexing-2.7.0.jar","/opt/jar/lucene-analyzers-common-7.4.0.jar","/opt/jar/lucene-core-7.4.0.jar","/opt/jar/h2-1.4.197.jar","/opt/jar/commons-codec-1.11.jar","/opt/jar/lucene-queryparser-7.4.0.jar","/opt/jar/spark-sql_2.11-2.3.0.jar"
 --driver-memory 4g

Step 2: Ran the import statements:

import org.apache.ignite.{ Ignite, Ignition }

import org.apache.ignite.spark.IgniteDataFrameSettings._

import org.apache.spark.sql.{DataFrame, Row, SQLContext}

val CONFIG = "file:///opt/ignite-config.xml"

Step 3: Read a table

var df = spark.read.format(FORMAT_IGNITE).option(OPTION_CONFIG_FILE, 
CONFIG).option(OPTION_TABLE, "person").load()

df.show();
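
A quick way to see exactly which columns the Spark side resolved is to print
the schema of the loaded frame (continuing the session above):

// The altered columns should appear here if the relation picked up
// the new schema.
df.printSchema()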





Regards,

Shravya Nethula,

BigData Developer,


Hyderabad.


Data from multiple MySQL DBs

2019-09-10 Thread Kurt Semba
Hi all,

I need to sync data from multiple MySQL databases into Ignite.

All those MySQL databases follow the same schema / same tables but obviously 
contain different data. They are separate instances of the same DB and we want 
to pull all that data into Ignite to have a central store to query against.

What would be the best strategy to solve this? Two options come to mind
(see the sketch below):

1. Create a dedicated cache for each DB instance and then find a way to
create a UNION SQL query over all those caches?
2. Find a way to define all the MySQL DBs as data sources in the Spring XML
cluster config file and hope that Ignite pulls the same type of data from
all DBs?
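
For option 1, a minimal sketch (hypothetical cache, bean, and POJO names,
assuming the standard CacheJdbcPojoStore): give each MySQL instance its own
cache backed by its own data source, then query across the caches with
UNION ALL:

import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.configuration.CacheConfiguration;

// One cache per MySQL instance, each with its own JDBC store/data source.
CacheConfiguration<Long, Person> ccfg1 = new CacheConfiguration<>("personsDb1");
CacheJdbcPojoStoreFactory<Long, Person> store1 = new CacheJdbcPojoStoreFactory<>();
store1.setDataSourceBean("mysqlDs1"); // hypothetical Spring bean for DB #1
// (type/field mappings via store1.setTypes(...) omitted)
ccfg1.setCacheStoreFactory(store1);
ccfg1.setReadThrough(true);
// ... repeat for "personsDb2"/"mysqlDs2", then preload with loadCache().

// Cross-cache SQL afterwards (each cache name acts as a schema):
//   SELECT * FROM "personsDb1".PERSON
//   UNION ALL
//   SELECT * FROM "personsDb2".PERSON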

I’m open to suggestions 😊
Thanks
Kurt


Re: Altered sql table (adding new columns) does not reflect in Spark shell

2019-09-10 Thread Andrei Aleksandrov

Hi,

Yes, I can confirm that this is the issue. I filed the following ticket for it:

https://issues.apache.org/jira/browse/IGNITE-12159

BR,
Andrei

9/7/2019 10:00 PM, Shravya Nethula wrote:

Hi,

I created and altered the table using the following queries:

a. CREATE TABLE person (id LONG, name VARCHAR(64), age LONG, city_id
DOUBLE, zip_code LONG, PRIMARY KEY (name)) WITH "backups=1"
b. ALTER TABLE person ADD COLUMN (first_name VARCHAR(64), last_name
VARCHAR(64))

The changes (columns added by the ALTER TABLE statement above) are correct
when verified from GridGain.

However, when I use the Spark shell, I couldn't find the columns added
through the ALTER TABLE statement (query (b) above).
Is there any configuration that I am missing? (Attached ignite-config
file for reference)

Executed the following commands in the Spark shell:

Step 1: Connected to Spark shell:
/usr/hdp/2.6.5.1100-53/spark2/bin/spark-shell --jars 
/opt/jar/ignite-core-2.7.0.jar,/opt/jar/ignite-spark-2.7.0.jar,/opt/jar/ignite-spring-2.7.0.jar,"/opt/jar/commons-logging-1.1.3.jar","/opt/jar/spark-core_2.11-2.3.0.jar","/opt/jar/spring-core-4.3.18.RELEASE.jar","/opt/jar/spring-beans-4.3.18.RELEASE.jar","/opt/jar/spring-aop-4.3.18.RELEASE.jar","/opt/jar/spring-context-4.3.18.RELEASE.jar","/opt/jar/spring-tx-4.3.18.RELEASE.jar","/opt/jar/spring-jdbc-4.3.18.RELEASE.jar","/opt/jar/spring-expression-4.3.18.RELEASE.jar","/opt/jar/cache-api-1.0.0.jar","/opt/jar/annotations-13.0.jar","/opt/jar/ignite-shmem-1.0.0.jar","/opt/jar/ignite-indexing-2.7.0.jar","/opt/jar/lucene-analyzers-common-7.4.0.jar","/opt/jar/lucene-core-7.4.0.jar","/opt/jar/h2-1.4.197.jar","/opt/jar/commons-codec-1.11.jar","/opt/jar/lucene-queryparser-7.4.0.jar","/opt/jar/spark-sql_2.11-2.3.0.jar" 
--driver-memory 4g


Step 2: Ran the import statements:

import org.apache.ignite.{ Ignite, Ignition }

import org.apache.ignite.spark.IgniteDataFrameSettings._

import org.apache.spark.sql.{DataFrame, Row, SQLContext}

val CONFIG = "file:///opt/ignite-config.xml"

Step 3: Read a table

var df = spark.read.format(FORMAT_IGNITE).option(OPTION_CONFIG_FILE, 
CONFIG).option(OPTION_TABLE, "person").load()


df.show();





Regards,

Shravya Nethula,

BigData Developer,


Hyderabad.



Re: Job Stealing node not stealing jobs

2019-09-10 Thread Pascoe Scholle
Thanks for the prompt response. I have looked at the
WeightedRandomLoadBalancingSpi. It does not look like one can set the
number of parallel jobs though, and this is a big requirement. Also, it is
inevitable that some nodes will sit idle, due to the nature of the jobs
that will be deployed on them, and job stealing just seems like the perfect
solution. Regardless, I have used the code provided for the job stealing
SPI on the docs page and it isn't functioning as intended.
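
For reference, here is a minimal sketch of the configuration the docs
describe; the threshold values are assumptions, and both SPIs must be set
on every node that should participate in stealing:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.collision.jobstealing.JobStealingCollisionSpi;
import org.apache.ignite.spi.failover.jobstealing.JobStealingFailoverSpi;

IgniteConfiguration cfg = new IgniteConfiguration();

JobStealingCollisionSpi colSpi = new JobStealingCollisionSpi();
colSpi.setActiveJobsThreshold(4); // max jobs executing in parallel on this node
colSpi.setWaitJobsThreshold(0);   // allow stealing as soon as jobs queue up
cfg.setCollisionSpi(colSpi);

// Without the matching failover SPI, stolen jobs cannot be re-routed.
cfg.setFailoverSpi(new JobStealingFailoverSpi());

Ignite ignite = Ignition.start(cfg);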


On Tue, 10 Sep 2019 at 11:34, Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> I don’t know the answer to your job stealing question, but I do wonder if
> that’s the right configuration for your requirements. Why not use the
> weighted load balancer (https://apacheignite.readme.io/docs/load-balancing)?
> That’s designed to work in cases where nodes are of differing sizes.
>
> Regards,
> Stephen
>
> On 10 Sep 2019, at 10:19, Pascoe Scholle 
> wrote:
>
> Hello,
>
> is there any update on this?
>
> We have not been able to resolve this issue
>
> Kind regards
>
>
> On Wed, 04 Sep 2019 at 07:44, Pascoe Scholle 
> wrote:
>
>> Hi,
>>
>> attached a small scala project. Just set the build path to src after
>> building and compiling with sbt.
>>
>> We want to execute processes that happen outside the JVM. These processes
>> can be extremely memory intensive which is why I am limiting the
>> number of parallel jobs that can be executed on a machine.
>>
>> I have one desktop that has a lot more memory available and can thus
>> execute more jobs in parallel. As all jobs take roughly the same amount of
>> time, this machine will have completed its jobs much faster. I want it to
>> then take jobs from the nodes started on weaker machines once it has
>> completed all its tasks.
>>
>> Does that make sense?
>>
>> Hope this helps.
>>
>> BR,
>> Pascoe
>>
>> On Tue, 3 Sep 2019 at 17:29, Andrei Aleksandrov 
>> wrote:
>>
>>> Hi,
>>>
>>> Some remarks about the job stealing SPI:
>>>
>>> 1) You have some nodes that can process the tasks of some compute job.
>>> 2) Tasks are executed in the public thread pool by default:
>>> https://apacheignite.readme.io/docs/thread-pools#section-public-pool
>>> 3) If one node's thread pool is busy, a task of the compute job can be
>>> executed on another node.
>>>
>>> It will not work in the following cases:
>>>
>>> 1) You choose a specific node for your compute task.
>>> 2) You do an affinity call (the same as above, but the node is chosen
>>> by affinity mapping).
>>>
>>> Regarding your case:
>>>
>>> It's not clear to me what exactly you are trying to do. Possibly job
>>> stealing didn't kick in because your weak node began executing some tasks
>>> in the public pool but simply takes longer than the faster node.
>>>
>>> Could you please share your full reproducer for investigation?
>>>
>>> BR,
>>> Andrei
>>>
>>> 9/3/2019 1:43 PM, Pascoe Scholle wrote:
>>> > Hi there,
>>> >
>>> > I have asked this question before, but under a different and already
>>> > resolved topic, so I posted the question under a more suitable title.
>>> > I hope that's ok.
>>> >
>>> > We have tried to configure two compute server nodes, one of which is
>>> > running on a weaker machine. The node running on the more powerful
>>> > machine always finishes its tasks far before
>>> > the weaker node and then sits idle.
>>> >
>>> > The node is not even sending a steal request, so I must have
>>> > configured something wrong.
>>> >
>>> > I have attached the code for both nodes; if you could kindly point out
>>> > what I am missing, I would really appreciate it!
>>> >
>>> >
>>>
>>
>
>


Re: Job Stealing node not stealing jobs

2019-09-10 Thread Stephen Darlington
I don’t know the answer to your job stealing question, but I do wonder if 
that’s the right configuration for your requirements. Why not use the weighted 
load balancer (https://apacheignite.readme.io/docs/load-balancing)? That’s
designed to work
in cases where nodes are of differing sizes.
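
A minimal sketch of that configuration (the weight is an assumed value;
each node advertises its own weight, and heavier nodes receive
proportionally more jobs):

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.loadbalancing.weightedrandom.WeightedRandomLoadBalancingSpi;

IgniteConfiguration cfg = new IgniteConfiguration();

WeightedRandomLoadBalancingSpi spi = new WeightedRandomLoadBalancingSpi();
spi.setUseWeights(true);
spi.setNodeWeight(20); // e.g. 20 on the big desktop, 10 (the default) elsewhere
cfg.setLoadBalancingSpi(spi);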

Regards,
Stephen

> On 10 Sep 2019, at 10:19, Pascoe Scholle  wrote:
> 
> Hello,
> 
> is there any update on this?
> 
> We have not been able to resolve this issue
> 
> Kind regards
> 
> 
> On Wed, 04 Sep 2019 at 07:44, Pascoe Scholle wrote:
> Hi,
> 
> attached a small scala project. Just set the build path to src after building 
> and compiling with sbt.
> 
> We want to execute processes that happen outside the JVM. These processes can 
> be extremely memory intensive which is why I am limiting the 
> number of parallel jobs that can be executed on a machine.
> 
> I have one desktop that has a lot more memory available and can thus execute 
> more jobs in parallel. As all jobs take roughly the same amount of time, this 
> machine will have completed its jobs much faster. I want it to then take jobs 
> from the nodes started on weaker machines once it has completed all its tasks.
> 
> Does that make sense?
> 
> Hope this helps.
> 
> BR,
> Pascoe
> 
> On Tue, 3 Sep 2019 at 17:29, Andrei Aleksandrov wrote:
> Hi,
> 
> Some remarks about the job stealing SPI:
> 
> 1) You have some nodes that can process the tasks of some compute job.
> 2) Tasks are executed in the public thread pool by default:
> https://apacheignite.readme.io/docs/thread-pools#section-public-pool
> 3) If one node's thread pool is busy, a task of the compute job can be
> executed on another node.
> 
> It will not work in the following cases:
> 
> 1) You choose a specific node for your compute task.
> 2) You do an affinity call (the same as above, but the node is chosen by
> affinity mapping).
> 
> Regarding your case:
> 
> It's not clear to me what exactly you are trying to do. Possibly job
> stealing didn't kick in because your weak node began executing some tasks
> in the public pool but simply takes longer than the faster node.
> 
> Could you please share your full reproducer for investigation?
> 
> BR,
> Andrei
> 
> 9/3/2019 1:43 PM, Pascoe Scholle wrote:
> > Hi there,
> >
> > I have asked this question before, but under a different and already
> > resolved topic, so I posted the question under a more suitable title.
> > I hope that's ok.
> >
> > We have tried to configure two compute server nodes, one of which is
> > running on a weaker machine. The node running on the more powerful
> > machine always finishes its tasks far before
> > the weaker node and then sits idle.
> >
> > The node is not even sending a steal request, so I must have 
> > configured something wrong.
> >
> > I have attached the code for both nodes; if you could kindly point out
> > what I am missing, I would really appreciate it!
> >
> >




Re: Job Stealing node not stealing jobs

2019-09-10 Thread Pascoe Scholle
Hello,

is there any update on this?

We have not been able to resolve this issue

Kind regards


On Wed, 04 Sep 2019 at 07:44, Pascoe Scholle 
wrote:

> Hi,
>
> attached a small scala project. Just set the build path to src after
> building and compiling with sbt.
>
> We want to execute processes that happen outside the JVM. These processes
> can be extremely memory intensive which is why I am limiting the
> number of parallel jobs that can be executed on a machine.
>
> I have one desktop that has a lot more memory available and can thus
> execute more jobs in parallel. As all jobs take roughly the same amount of
> time, this machine will have completed its jobs much faster. I want it to
> then take jobs from the nodes started on weaker machines once it has
> completed all its tasks.
>
> Does that make sense?
>
> Hope this helps.
>
> BR,
> Pascoe
>
> On Tue, 3 Sep 2019 at 17:29, Andrei Aleksandrov 
> wrote:
>
>> Hi,
>>
>> Some remarks about the job stealing SPI:
>>
>> 1) You have some nodes that can process the tasks of some compute job.
>> 2) Tasks are executed in the public thread pool by default:
>> https://apacheignite.readme.io/docs/thread-pools#section-public-pool
>> 3) If one node's thread pool is busy, a task of the compute job can be
>> executed on another node.
>>
>> It will not work in the following cases:
>>
>> 1) You choose a specific node for your compute task.
>> 2) You do an affinity call (the same as above, but the node is chosen
>> by affinity mapping).
>>
>> Regarding your case:
>>
>> It's not clear to me what exactly you are trying to do. Possibly job
>> stealing didn't kick in because your weak node began executing some tasks
>> in the public pool but simply takes longer than the faster node.
>>
>> Could you please share your full reproducer for investigation?
>>
>> BR,
>> Andrei
>>
>> 9/3/2019 1:43 PM, Pascoe Scholle wrote:
>> > Hi there,
>> >
>> > I have asked this question before, but under a different and already
>> > resolved topic, so I posted the question under a more suitable title.
>> > I hope that's ok.
>> >
>> > We have tried to configure two compute server nodes, one of which is
>> > running on a weaker machine. The node running on the more powerful
>> > machine always finishes its tasks far before
>> > the weaker node and then sits idle.
>> >
>> > The node is not even sending a steal request, so I must have
>> > configured something wrong.
>> >
>> > I have attached the code for both nodes; if you could kindly point out
>> > what I am missing, I would really appreciate it!
>> >
>> >
>>
>


Re: Cache expiry policy not deleting records from disk(native persistence)

2019-09-10 Thread Shiva Kumar
I have filed a bug, https://issues.apache.org/jira/browse/IGNITE-12152, but
this is the same as https://issues.apache.org/jira/browse/IGNITE-10862.
Any idea on the timeline of these tickets?
In the documentation
https://apacheignite.readme.io/v2.7/docs/expiry-policies
it says that when native persistence is enabled "*expired entries are removed
from both memory and disk tiers*", but on disk it just marks the pages as
unwanted; the disk space used by these unwanted pages will be reused to
store new pages, but the pages themselves are not removed from disk, so the
space they occupy is never released.

Here is the developers' discussion link:
http://apache-ignite-developers.2346864.n4.nabble.com/How-to-free-up-space-on-disc-after-removing-entries-from-IgniteCache-with-enabled-PDS-td39839.html
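
One way to watch this from code — a sketch assuming the Java API and that
region metrics are enabled; the allocated size should plateau while freed
pages get reused, rather than shrink:

import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;

// Requires DataRegionConfiguration.setMetricsEnabled(true) on the region.
// getTotalAllocatedPages() stays flat (or grows) even as entries expire,
// since freed pages are only reused, not returned to the OS.
for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
    System.out.printf("%s: allocated pages=%d, fill factor=%.2f%n",
        m.getName(), m.getTotalAllocatedPages(), m.getPagesFillFactor());
}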


On Mon, Sep 9, 2019 at 11:53 PM Shiva Kumar 
wrote:

> Hi
> I have deployed Ignite on Kubernetes and configured two separate
> persistent volumes for WAL and persistence.
> The issue I am facing is the same as
> https://issues.apache.org/jira/browse/IGNITE-10862
>
> Thanks
> Shiva
>
> On Mon, 9 Sep, 2019, 10:47 PM Andrei Aleksandrov, 
> wrote:
>
>> Hello,
>>
>> I guess that the generated WAL takes up this disk space. Please read
>> about the WAL here:
>>
>> https://apacheignite.readme.io/docs/write-ahead-log
>>
>> Please provide the size of every folder under /opt/ignite/persistence.
>>
>> BR,
>> Andrei
>> 9/6/2019 9:45 PM, Shiva Kumar wrote:
>>
>> Hi all,
>> I have set the cache expiry policy like this:
>>
>> <bean class="org.apache.ignite.configuration.CacheConfiguration">
>>   ...
>>   <property name="expiryPolicyFactory">
>>     <bean class="javax.cache.expiry.CreatedExpiryPolicy"
>>           factory-method="factoryOf">
>>       <constructor-arg>
>>         <bean class="javax.cache.expiry.Duration">
>>           <constructor-arg value="MINUTES"/>
>>           <constructor-arg value="10"/>
>>         </bean>
>>       </constructor-arg>
>>     </bean>
>>   </property>
>> </bean>
>>
>>
>>
>> Then I batch-inserted records into one of the tables created with the
>> above cache template.
>> Over about 10 minutes I ingested ~1.5GB of data, and after 10 minutes the
>> records started reducing (expiring) as I monitored from sqlline.
>>
>> 0: jdbc:ignite:thin://192.168.*.*:10800> select count(ID) from DIMENSIONS;
>> COUNT(ID)
>> 248896
>> 1 row selected (0.86 seconds)
>> 0: jdbc:ignite:thin://192.168.*.*:10800> select count(ID) from DIMENSIONS;
>> COUNT(ID)
>> 222174
>> 1 row selected (0.313 seconds)
>> 0: jdbc:ignite:thin://192.168.*.*:10800> select count(ID) from DIMENSIONS;
>> COUNT(ID)
>> 118154
>> 1 row selected (0.15 seconds)
>> 0: jdbc:ignite:thin://192.168.*.*:10800> select count(ID) from DIMENSIONS;
>> COUNT(ID)
>> 76061
>> 1 row selected (0.106 seconds)
>> 0: jdbc:ignite:thin://192.168.*.*:10800> select count(ID) from DIMENSIONS;
>> COUNT(ID)
>> 41671
>> 1 row selected (0.063 seconds)
>> 0: jdbc:ignite:thin://192.168.*.*:10800> select count(ID) from DIMENSIONS;
>> COUNT(ID)
>> 18455
>> 1 row selected (0.037 seconds)
>> 0: jdbc:ignite:thin://192.168.*.*:10800> select count(ID) from DIMENSIONS;
>> COUNT(ID)
>> 0
>> 1 row selected (0.014 seconds)
>>
>>
>> But in the meantime, the disk space used by the persistence store stayed
>> at the same level instead of decreasing.
>>
>>
>> [ignite@ignite-cluster-ign-shiv-0 ignite]$ while true ; do df -h
>> /opt/ignite/persistence/; sleep 1s; done
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/vdj 15G 1.6G 14G 11% /opt/ignite/persistence
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/vdj 15G 1.6G 14G 11% /opt/ignite/persistence
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/vdj 15G 1.6G 14G 11% /opt/ignite/persistence
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/vdj 15G 1.6G 14G 11% /opt/ignite/persistence
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/vdj 15G 1.6G 14G 11% /opt/ignite/persistence
>> Filesystem Size Used Avail Use% Mounted on
>> /dev/vdj 15G 1.6G 14G 11% /opt/ignite/persistence
>> Filesystem Size Used Avail Use% Mo

Data region LRU offheap algo not working

2019-09-10 Thread rick_tem
Hi,

I am trying to find out why the RANDOM_LRU eviction algorithm doesn't seem
to work with the following config (logs attached as well). After the log
entry below appears,

2019-09-09 11:04:03.557 WARN  [sys-stripe-5-#6%TemenosGrid%]
IgniteCacheDatabaseSharedManager - Page-based evictions started. Consider
increasing 'maxSize' on Data Region configuration: 1G_Region

memory steadily decreases over the next few minutes. What information in
the log will help me determine how many pages are freed, etc.?
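
For reference, a minimal sketch of the region configuration implied by that
warning (sizes and threshold are assumed values; note that page-based
eviction only applies to data regions without native persistence):

import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.DataRegionConfiguration;

DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("1G_Region")
    .setMaxSize(1024L * 1024 * 1024)                      // 1 GB cap
    .setPageEvictionMode(DataPageEvictionMode.RANDOM_LRU) // evict whole data pages
    .setEvictionThreshold(0.9);                           // start evicting at 90% full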

Thanks,
Rick

dateRepo1.out
dataRepo2.out

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/