Re: Setting custom Log location - log4j

2017-08-10 Thread userx
Hi Val,

Can you help me out with the configuration change for log4j? I have
provided the value of LOG_HOME as an environment variable in Eclipse. Here is
what I have:











I am having a similar problem setting the persistentStorePath property:







If I put a breakpoint in the setter for persistentStorePath, the value I receive
there is ${LOG_HOME}/PersistentStore
rather than C:\\users, which is the value of the environment variable LOG_HOME.

This is quite a problem for me.
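For reference, here is a rough sketch of setting the path programmatically instead of via the ${LOG_HOME} placeholder (assuming the Ignite 2.1 PersistentStoreConfiguration API; untested, and the path is only illustrative):

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;

// Resolve LOG_HOME ourselves instead of relying on the ${LOG_HOME} placeholder,
// which is not being expanded in this setup.
String logHome = System.getenv("LOG_HOME");   // e.g. C:\\users

PersistentStoreConfiguration psCfg = new PersistentStoreConfiguration();
psCfg.setPersistentStorePath(logHome + "/PersistentStore");

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setPersistentStoreConfiguration(psCfg);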







--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Setting-custom-Log-location-log4j-tp16106p16118.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Streaming test

2017-08-10 Thread Jessie Lin
Val, thanks for pointing it out. Now I call the atomicLong() method from
Service#execute() and it's working. Thank you very much!

Jessie

On Thu, Aug 10, 2017 at 3:08 PM, vkulichenko 
wrote:

> Jessie,
>
> You still call atomicLong() method from Service#init(). As I already
> mentioned, this is causing the startup hang. You should move
> IgniteAtomicLong creation out of init() method to avoid it.
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Streaming-test-tp14039p16113.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
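For anyone who finds this thread later, a minimal sketch of the pattern Val describes (class and counter names are illustrative, not taken from the real project): the IgniteAtomicLong is created lazily in execute() rather than in init(), so service deployment does not block waiting for data structure initialization.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicLong;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class CounterService implements Service {
    @IgniteInstanceResource
    private Ignite ignite;

    private IgniteAtomicLong counter;

    @Override public void init(ServiceContext ctx) {
        // Do NOT create the atomic long here: deployment may hang waiting
        // for data structures to initialize.
    }

    @Override public void execute(ServiceContext ctx) throws Exception {
        // Create (or get) the atomic long once the service is already running.
        counter = ignite.atomicLong("counter", 0, true);

        while (!ctx.isCancelled()) {
            counter.incrementAndGet();
            Thread.sleep(1000);
        }
    }

    @Override public void cancel(ServiceContext ctx) {
        // No-op for this sketch.
    }
}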


Re: Running Spark SQL on Spark Thrift Server with Ignite

2017-08-10 Thread vkulichenko
Ravi,

If you need to speed up SQL, you should make sure Ignite uses indexes to
execute queries. I think you can do the following:
- Create a Hive RDD and map it to an RDD of key-value pairs.
- Create a new IgniteRDD on top of a cache and use the IgniteRDD#savePairs method
to load the data from Hive into Ignite.
- Use the IgniteRDD#sql method to execute queries.

Note that SQL needs to be configured in Ignite (i.e. you need to specify
queryable fields, indexes, etc.). More information here:
https://apacheignite.readme.io/docs/sql-queries
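A rough sketch of those steps in Java (assuming the ignite-spark module; the cache name, the Person key/value type, and the pre-built Hive pair RDD are all illustrative):

import org.apache.ignite.spark.JavaIgniteContext;
import org.apache.ignite.spark.JavaIgniteRDD;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// sc is an existing JavaSparkContext; hivePairs is a JavaPairRDD<Long, Person>
// built from the Hive table (both assumed to exist already).
JavaIgniteContext<Long, Person> ic =
    new JavaIgniteContext<>(sc, "config/example-ignite.xml");

// IgniteRDD backed by the "personCache" cache; Person must be configured
// as a queryable type (indexed fields etc.) in the cache configuration.
JavaIgniteRDD<Long, Person> igniteRdd = ic.fromCache("personCache");

// Load the Hive data into Ignite.
igniteRdd.savePairs(hivePairs);

// Run an indexed SQL query directly against the cache.
Dataset<Row> adults = igniteRdd.sql("select name from Person where age > ?", 18);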

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Running-Spark-SQL-on-Spark-Thrift-Server-with-Ignite-tp16087p16115.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Apache Tez support for Ignite

2017-08-10 Thread vkulichenko
Ravi,

Have you seen the Hadoop Accelerator?
https://apacheignite-fs.readme.io/docs/hadoop-accelerator

It also provides a custom implementation of the MR engine.
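For example, switching a Hadoop job to the Ignite MR engine mostly comes down to two properties; a sketch in Java, where the host and port are assumptions about your setup (11211 is the default job tracker port used in the Hadoop Accelerator docs):

import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();

// Use Ignite's in-memory MapReduce engine instead of YARN/classic MR.
conf.set("mapreduce.framework.name", "ignite");

// Address of an Ignite node accepting MR jobs.
conf.set("mapreduce.jobtracker.address", "localhost:11211");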

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Apache-Tez-support-for-Ignite-tp16086p16114.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Visor: cache counts are twice of what is expected

2017-08-10 Thread vkulichenko
Hi Roger,

It's a known issue that is already fixed in master. You can build from master or
check the latest nightly build:
https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/lastSuccessfulBuild/

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Visor-cache-counts-are-twice-of-what-is-expected-tp16110p16112.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Setting custom Log location - log4j

2017-08-10 Thread vkulichenko
The path for log files is ${IGNITE_HOME}/work/log/, as specified in the log4j
configuration file. If the log4j logger is used and the configuration was not
changed, then most likely the change you made to the IGNITE_HOME property was not
picked up by the process. You can check this in the log - Ignite prints the
IGNITE_HOME value on startup.

However, if the purpose is only to change the log file location, then it's better
to modify the file appender in ignite-log4j.xml.
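If the logger is configured programmatically, a minimal sketch would look like this (assuming the ignite-log4j module and a copy of ignite-log4j.xml whose file appender points to your custom location; the path below is only illustrative):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.logger.log4j.Log4JLogger;

IgniteConfiguration cfg = new IgniteConfiguration();

// Point Ignite at a log4j config whose file appender writes to the custom location.
// Note: the Log4JLogger(String) constructor may throw IgniteCheckedException.
cfg.setGridLogger(new Log4JLogger("config/custom-ignite-log4j.xml"));

Ignition.start(cfg);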

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Setting-custom-Log-location-log4j-tp16106p16111.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Visor: cache counts are twice of what is expected

2017-08-10 Thread Roger Fischer (CW)
Hello,

The cache counts shown in Visor seem to be twice the expected number.

I am using ver. 2.1.0#20170720-sha1:a6ca5c8a, with native persistence.

For a replicated cache, with 363 objects loaded (select count(*) returns 363):

Nodes for: FabricCache(@c0)
| Node ID8(@), IP             | CPUs | Heap Used | CPU Load | Up Time      | Size (Total / Heap / Off-Heap / Off-Heap Memory) | Hi/Mi/Rd/Wr |
| E90ED5E1(@n2), 10.24.51.150 | 4    | 20.79 %   | 0.57 %   | 00:22:30:440 | 726 / 363 / 363 / 0                              | 0/0/0/0     |
| 31CC5BE0(@n1), 10.24.51.187 | 4    | 19.38 %   | 0.33 %   | 00:22:36:115 | 726 / 363 / 363 / 0                              | 0/0/0/0     |
| 5BB20689(@n0), 10.24.51.190 | 4    | 18.40 %   | 0.40 %   | 00:22:42:711 | 726 / 363 / 363 / 0                              | 0/0/0/0     |

Each node shows 363 objects on the heap and 363 objects off-heap, for a total 
of 726 (twice as many as expected).

Similarly, for a partitioned cache with one backup, 4.8M objects are loaded
(select count(*) ...):

Nodes for: StatsCache(@c1)
| Node ID8(@), IP             | CPUs | Heap Used | CPU Load | Up Time      | Size (Total / Heap / Off-Heap / Off-Heap Memory) | Hi/Mi/Rd/Wr |
| E90ED5E1(@n2), 10.24.51.150 | 4    | 27.44 %   | 0.67 %   | 01:06:31:092 | 6496068 / 3248034 / 3248034 / 0                  | 0/0/0/0     |
| 31CC5BE0(@n1), 10.24.51.187 | 4    | 33.12 %   | 0.33 %   | 01:06:36:770 | 6236714 / 3118357 / 3118357 / 0                  | 0/0/0/0     |
| 5BB20689(@n0), 10.24.51.190 | 4    | 20.17 %   | 0.43 %   | 01:06:43:367 | 6467218 / 3233609 / 3233609 / 0                  | 0/0/0/0     |

The expectation is that each node would have 2/3 of the objects (1/3 as primary
and another 1/3 as backup). That would be about 3.2M objects per node.

But each node shows about 6.4M objects, twice as many as expected. Again, it seems
that each object is counted both on-heap and off-heap.

Is Visor reporting incorrectly, or are objects stored twice?

Thanks...

Roger



Re: Streaming test

2017-08-10 Thread Jessie Lin
Val, please see the attached thread dump.
It was taken after a server was started with "bin\ignite.bat
config\ignite-writebehind.xml" and the service initialization didn't
complete.
Thank you very much for helping out!

"srvc-deploy-#33%null%" #59 prio=5 os_prio=0 tid=0x577b8000
nid=0x1ef8 waiting on condition [0x608fe000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0xc09fbdc8> (a
java.util.concurrent.CountDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at org.apache.ignite.internal.util.IgniteUtils.await(IgniteUtils.java:7419)
at
org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor.awaitInitialization(DataStructuresProcessor.java:1112)
at
org.apache.ignite.internal.processors.datastructures.DataStructuresProcessor.atomicLong(DataStructuresProcessor.java:517)
at
org.apache.ignite.internal.IgniteKernal.atomicLong(IgniteKernal.java:3436)
at com.sample.SampleServiceImpl.init(SampleServiceImpl.java:63)

Jessie

On Wed, Aug 9, 2017 at 2:29 PM, vkulichenko 
wrote:

> I can't reproduce it, your project works fine for me. Can you attach thread
> dumps?
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Streaming-test-tp14039p16088.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
12484:
2017-08-10 12:50:26
Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.91-b14 mixed mode):

"sys-#56%null%" #82 prio=5 os_prio=0 tid=0x58ad8000 nid=0x33dc waiting 
on condition [0x5b49f000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0xc0277e70> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

"sys-#55%null%" #81 prio=5 os_prio=0 tid=0x58ad7800 nid=0x1c50 waiting 
on condition [0x5ab9e000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0xc0277e70> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

"sys-#54%null%" #80 prio=5 os_prio=0 tid=0x58ad6800 nid=0x2a40 waiting 
on condition [0x5b5ff000]
   java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0xc0277e70> (a 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

"sys-#53%null%" #79 prio=5 os_prio=0 tid=0x586f1000 nid=0x3f3c waiting 
on condition [0x5a84e000]
   

Re: Persistent store and eviction policy.

2017-08-10 Thread Denis Magda
Folks,

I’ve updated the documentation to avoid any misunderstanding - "If Ignite
Persistence is enabled then the page-based evictions have no effect because the
oldest pages will be evicted from RAM automatically if there is not enough
space available."

https://apacheignite.readme.io/v2.1/docs/evictions 


BTW, why can't we make the eviction customizable when the store is used? I don't
see any issue with that from the user perspective.

—
Denis

> On Aug 10, 2017, at 2:18 AM, Ivan Rakov  wrote:
> 
> Hi!
> 
> 
> I'm afraid the description of page-based eviction in the documentation is not quite
> correct.
> Page-based eviction (RANDOM_LRU or RANDOM_2_LRU) can be activated only if the
> persistent store is disabled. It defines the algorithm for choosing the page in RAM
> whose contents will be removed completely.
> On the other hand, when the persistent store is enabled, eviction from RAM to
> disk is enabled by default and is not customizable by the user. So the answer to
> question 1 is no - you don't need to specify anything in the configuration to
> make disk eviction work.
> 
> About question 2 - you will lose the partition after a crash of N1. Losing a
> partition is always an undesirable scenario. By default, the partition will be
> reset, and you can't say for sure which version of the partition (the old one from
> disk or the new one from another node) will be resolved as actual after N1 rejoins.
> A safe solution is to configure backup nodes; your data will be safe on N3.
> Another safe solution is to set a safe partition loss policy (e.g.
> PartitionLossPolicy.READ_WRITE_SAFE). All reads and writes on the lost partition
> will throw an exception until the crashed node N1 returns to the topology. After
> that, the "1" entry will be recovered.
> 
> 
> Thanks for raising this topic, we'll fix the documentation soon.
> Best Regards, 
> Ivan Rakov
> 
> On 10.08.2017 5:38, userx wrote:
>> Hi team,
>> 
>> I was going through the documentation of durable memory at
>> https://apacheignite.readme.io/docs/durable-memory 
>> 
>> 
>> As per the documentation, durable memory comes into picture when
>> PersistentStore configuration is enabled. Now durable memory uses both
>> RAM (hot data) and disk (a superset). When the RAM part reaches a threshold
>> (80% by default, as per the documentation), durable memory retains only hot
>> data in RAM and the rest on disk.
>> 
>> QUESTION 1
>> So does that mean that there is a default eviction policy that comes into
>> effect? Or does the user explicitly have to specify it in the
>> configuration? What happens if the eviction policy is not specified in the
>> configuration?
>> 
>> Suppose there are 2 nodes N1 (different physical box) and N2 (different
>> physical box) and the data is distributed in PARTITIONED mode and persistent
>> store is enabled. 
>> 
>> Here is the example of entries
>> 
>> N1-> "1","X"
>> N2-> "2","Y"
>> 
>> QUESTION 2
>> Suppose N1 crashes and does not come up at all for, say, 5
>> hours. Is "1" retrievable at all during that time if N1 went down after an
>> entry was made to its WAL file, or do we lose "1"? If the entry could not be
>> made in the WAL file, and we had configured a backup node N3 (a different
>> physical box), would that have saved "1"?
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> --
>> View this message in context: 
>> http://apache-ignite-users.70518.x6.nabble.com/Persistent-store-and-eviction-policy-tp16092.html
>>  
>> 
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
> 
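To make Ivan's two suggestions concrete, a minimal cache configuration sketch (the cache name and backup count are illustrative):

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.PartitionLossPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<String, String> ccfg = new CacheConfiguration<>("myCache");

ccfg.setCacheMode(CacheMode.PARTITIONED);

// Keep a copy of every partition on one more node, so data survives a single crash.
ccfg.setBackups(1);

// Fail reads/writes on lost partitions instead of silently resetting them.
ccfg.setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE);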



Re: Persistent store and eviction policy.

2017-08-10 Thread userx
Thank you.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Persistent-store-and-eviction-policy-tp16092p16107.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Setting custom Log location - log4j

2017-08-10 Thread userx
Hi all,

I intend to use log4j for Apache Ignite logging. The steps I followed are

1) Add the Maven dependency:

   <dependency>
       <groupId>org.apache.ignite</groupId>
       <artifactId>ignite-log4j</artifactId>
       <version>2.0.0</version>
   </dependency>

2) set the following in IgniteConfiguration file





3) When I start the server, it generates logs at the $IGNITE_HOME/work location.
I put in a breakpoint to check whether Log4JLogger is getting instantiated, and
it is.

4) Then I set an environment variable in Eclipse, say LOG_HOME, and replaced
IGNITE_HOME with it.
When I restarted the server, it was still writing to IGNITE_HOME.

How can I change the location of my Ignite logs to a custom location set up in
Eclipse environment variables? I intend to use the same strategy to put the WAL
at a custom location as well.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Setting-custom-Log-location-log4j-tp16106.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite fails to allocate more memory then initially allocated when maxSize property provided in MemoryPolicyConfiguration

2017-08-10 Thread afedotov
Hi,

The provided configuration works as expected on Ignite 2.1.
On my side, with this configuration, I got three memory segment allocations.

Have you tried checking cache.size() after populating the cache?
Please provide the full configuration files so that I can check them.

Kind regards,
Alex.
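For comparison, here is roughly the same policy expressed programmatically (a sketch against the Ignite 2.1 API; the 1 GB limit and the policy name are taken from your description, everything else is illustrative):

import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;
import org.apache.ignite.configuration.MemoryPolicyConfiguration;

MemoryPolicyConfiguration plc = new MemoryPolicyConfiguration();
plc.setName("1GB_Region_Eviction");
plc.setInitialSize(256L * 1024 * 1024);              // start with 256 MB
plc.setMaxSize(1024L * 1024 * 1024);                 // grow up to 1 GB
plc.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);

MemoryConfiguration memCfg = new MemoryConfiguration();
memCfg.setMemoryPolicies(plc);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setMemoryConfiguration(memCfg);

// The cache must reference this policy via CacheConfiguration#setMemoryPolicyName.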

On Thu, Aug 10, 2017 at 6:58 PM, smironchyk [via Apache Ignite Users] <
ml+s70518n16101...@n6.nabble.com> wrote:

> Hi! I am trying to configure and test my custom memory policy for the
> simple 1 client - 1 server node topology.
>
> In order to do this I added memory configuration for my server node like
> this with 1GB_Region_Eviction memory policy configured.
>
> 
> 
>
> 
>
> 
> 
> 
>  value="1GB_Region_Eviction"/>
> 
> 
> 
>  value="RANDOM_2_LRU"/>
> 
>
> 
> 
> 
> 
>
> My client node connects to the server and starts putting new entries of size
> ~2 MB into the cache at a 1-second interval. See the client config below.
> 
> 
>
> 
>
> 
> 
> 
> 
> 
> 
>  value="1GB_Region_Eviction"/>
>
>  value="FULL_SYNC"/>
> 
> 
> 
>  factory-method="factoryOf">
> 
> 
> 
> 
> 
> 
> ...
> 
>
> I've got a message in the server node log like this:
> [12:34:16,622][INFO][sys-stripe-1-#2%null%][PageMemoryNoStoreImpl]
> Allocated next memory segment [plcName=1GB_Region_Eviction, chunkSize=268.4
> MB]
> After this, no more memory chunks are allocated by the node, and my maxSize is
> ignored by the cache.
> What am I doing wrong here? Please advise.
>
> my-client-example-memory-policies.xml
> 
> my-example-memory-policies.xml
> 
> MyMemoryPoliciesExample.java
> 
>
>




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-fails-to-allocate-more-memory-then-initially-allocated-when-maxSize-property-provided-in-Memon-tp16101p16105.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: Apache Tez support for Ignite

2017-08-10 Thread ravi
Hi,

Thanks for the details. I have sent the subscription email and received the
confirmation. The reason I asked about Tez and LLAP is that the Apache Hive
community has deprecated MR as an execution engine and is moving towards Tez and
LLAP. Will Ignite have an equivalent in-memory Tez/LLAP implementation, similar
to the in-memory MapReduce?

Regards
Ravi 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Apache-Tez-support-for-Ignite-tp16086p16104.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: EOFException in Hadoop Datanode when writing to Ignite

2017-08-10 Thread Mikhail
Hi Rodrigo,

I'm not sure how Flink works, but to write to IGFS you need to use a special
implementation of HDFS:

  <property>
    <name>fs.igfs.impl</name>
    <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
  </property>
  <property>
    <name>fs.AbstractFileSystem.igfs.impl</name>
    <value>org.apache.ignite.hadoop.fs.v2.IgniteHadoopFileSystem</value>
  </property>

Somehow you need to make Flink use these implementations; if it uses its
own implementation, it won't work.
I don't know how to configure Flink for this; I think it's a question for
the Flink community.
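As an illustration only (not Flink-specific), this is how IGFS is usually opened through the plain Hadoop FileSystem API once those properties are set; the igfs://igfs@localhost:10500/ URI assumes the default IGFS name and IPC port, and exception handling is omitted:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

Configuration conf = new Configuration();
conf.set("fs.igfs.impl", "org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem");

// Default IGFS instance name and IPC endpoint; adjust for your cluster.
FileSystem fs = FileSystem.get(new URI("igfs://igfs@localhost:10500/"), conf);

fs.mkdirs(new Path("/flink-output"));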

Thanks,
Mikhail.





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/EOFException-in-Hadoop-Datanode-when-writing-to-Ignite-tp15221p16103.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Activating Cluster taking too long

2017-08-10 Thread ezhuravlev
Hi,

Please share the full logs from all nodes so I can help investigate your
problem.

Evgenii



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Activating-Cluster-taking-too-long-tp16093p16099.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Persistent store and eviction policy.

2017-08-10 Thread Alexey Kukushkin
Hi,
1. Persistence and data eviction are alternative options for handling
out-of-memory scenarios. Persistence makes Ignite fill all available RAM and
move the oldest pages to the "disk" part of the cache when there is not enough
memory.
A data eviction policy makes Ignite completely remove some entries or memory
pages from the cache, depending on the policy. Persistence comes at a cost in
performance but gives a virtually unlimited, reliable cache. Use one or the
other.

2. No, if you have no backups configured and a node goes down, the data becomes
unavailable. Persistence will not help - only the partitions that a node owns are
persisted on that node. Configure backups to address the issue.
Best regards, Alexey


On Thursday, August 10, 2017, 5:38:50 AM GMT+3, userx  
wrote:

Hi team,

I was going through the documentation of durable memory at
https://apacheignite.readme.io/docs/durable-memory

As per the documentation, durable memory comes into picture when
PersistentStore configuration is enabled. Now durable memory uses both
RAM (hot data) and disk (a superset). When the RAM part reaches a threshold
(80% by default, as per the documentation), durable memory retains only hot
data in RAM and the rest on disk.

QUESTION 1
So does that mean that there is a default eviction policy that comes into
effect? Or does the user explicitly have to specify it in the
configuration? What happens if the eviction policy is not specified in the
configuration?

Suppose there are 2 nodes N1 (different physical box) and N2 (different
physical box) and the data is distributed in PARTITIONED mode and persistent
store is enabled. 

Here is the example of entries

N1-> "1","X"
N2-> "2","Y"

QUESTION 2
Suppose N1 crashes and does not come up at all for, say, 5
hours. Is "1" retrievable at all during that time if N1 went down after an
entry was made to its WAL file, or do we lose "1"? If the entry could not be
made in the WAL file, and we had configured a backup node N3 (a different
physical box), would that have saved "1"?







--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Persistent-store-and-eviction-policy-tp16092.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Activating Cluster taking too long

2017-08-10 Thread iostream
Hi!

I am experimenting with v2.1 persistence store enabled.

1. Created 8 caches and pumped data into them.
2. Restarted the Ignite cluster.
3. Waited for all server nodes to join the cluster.
4. Called Ignite.active(true).

I observed that the cluster activation time is more than 1 hour with the
following configuration after restarting my Ignite cluster. Is this expected
behaviour? Please advise what can be configured to reduce the activation time.
Thanks!

*Number of clients* - 8
*Number of ignite servers* - 8
*Number of caches* - 8
*Disk usage for persistence store per server node* = around 50 GB

*Cache configuration* -

cacheConfig.setAtomicityMode(TRANSACTIONAL);
cacheConfig.setCacheMode(PARTITIONED);
cacheConfig.setBackups(1);
cacheConfig.setCopyOnRead(TRUE);
cacheConfig.setPartitionLossPolicy(IGNORE);
cacheConfig.setQueryParallelism(2);
cacheConfig.setReadFromBackup(TRUE);
cacheConfig.setRebalanceBatchSize(524288);
cacheConfig.setRebalanceThrottle(100);
cacheConfig.setRebalanceTimeout(1);
cacheConfig.setIndexedTypes(A.class, B.class);
cacheConfig.setOnheapCacheEnabled(FALSE);

*Client and Server Configuration* -


<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">




































*What I found in logs* -

  ^-- CPU [cur=0.1%, avg=0.98%, GC=0%]
^-- PageMemory [pages=2268513]
^-- Heap [used=409MB, free=59.99%, comm=1023MB]
^-- Non heap [used=69MB, free=95.43%, comm=71MB]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=7, qSize=0]
^-- Outbound messages queue [size=0]
[07:32:26,656][WARNING][exchange-worker-#50%null%][diagnostic] Failed to
wait for partition map exchange [topVer=AffinityTopologyVersion [topVer=16,
minorTopVer=1], node=dcb07329-c5d6-404c-b4b1-3c0225e99a62]. Dumping pending
objects that might be the cause: 
[07:32:35,344][INFO][tcp-disco-ip-finder-cleaner-#4%null%][TcpDiscoveryZookeeperIpFinder]
ZooKeeper IP Finder resolved addresses: [/10.120.201.127:47500,
/10.120.132.193:47500, /10.120.204.163:47500, /10.120.199.162:47500,
/10.120.201.180:47500, /10.120.199.166:47500, /127.0.0.1:47500,
/10.120.194.122:47500, /10.120.190.154:47500]
[07:32:36,657][WARNING][exchange-worker-#50%null%][diagnostic] Failed to
wait for partition map exchange [topVer=AffinityTopologyVersion [topVer=16,
minorTopVer=1], node=dcb07329-c5d6-404c-b4b1-3c0225e99a62]. Dumping pending
objects that might be the cause: 
[07:32:46,658][WARNING][exchange-worker-#50%null%][diagnostic] Failed to
wait for partition map exchange [topVer=AffinityTopologyVersion [topVer=16,
minorTopVer=1], node=dcb07329-c5d6-404c-b4b1-3c0225e99a62]. Dumping pending
objects that might be the cause: 
[07:32:56,659][WARNING][exchange-worker-#50%null%][diagnostic] Failed to
wait for partition map exchange [topVer=AffinityTopologyVersion [topVer=16,
minorTopVer=1], node=dcb07329-c5d6-404c-b4b1-3c0225e99a62]. Dumping pending
objects that might be the cause: 








--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Activating-Cluster-taking-too-long-tp16093.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.