Ignite 2.7.6: Memory Leak with Direct buffers in TCPCommunication SPI

2020-02-25 Thread Mahesh Renduchintala
Hi,

We have been searching for the cause of a memory leak in our Ignite server nodes
for many weeks now.
The memory leak is reproducible; the scenario is below.

Scenario

  *   Setup
 *   Our Ignite servers hold about 50GB of data. Two servers were baselined.
 *   There are about 10 client nodes connected.
 *   Off-heap memory: 64GB per node; heap: 48GB (-Xms/-Xmx parameters). A sketch of this sizing follows the list.
  *   Test
 *   Start all 10 client nodes.
 *   Wait for the 10 nodes to be connected.
 *   Sleep for 30 mins.
 *   Stop all 10 nodes (docker rm -f X).
 *   Loop steps 1 to 4 infinitely.
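
A minimal sketch of the sizing above (not from the original mail; it assumes the
default data region and standard JVM heap flags; the actual configuration was not
shared):

    // Hypothetical sketch: 64GB off-heap per node in the default data region,
    // plus a 48GB JVM heap set on the command line (-Xms48g -Xmx48g).
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class ServerSizing {
        public static void main(String[] args) {
            DataStorageConfiguration storage = new DataStorageConfiguration();
            storage.getDefaultDataRegionConfiguration()
                   .setMaxSize(64L * 1024 * 1024 * 1024); // 64GB off-heap per node

            Ignition.start(new IgniteConfiguration().setDataStorageConfiguration(storage));
        }
    }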

Observations

 *   With every iteration, about 1GB of memory disappears from the heap of the
servers.
 *   After about 100 iterations, the server nodes crash reporting OOM.

Workaround

 *   The memory leak does not occur when directBuffer is set to false, as
sketched below, meaning the TCP communication buffers are allocated on-heap only.
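
A minimal sketch of the workaround (the original configuration snippet did not
survive in the archive; this assumes the standard TcpCommunicationSpi setters):

    // Hypothetical sketch: force TcpCommunicationSpi to allocate on-heap NIO
    // buffers instead of direct (off-heap) ones.
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

    public class OnHeapCommBuffers {
        public static void main(String[] args) {
            TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
            commSpi.setDirectBuffer(false); // use on-heap buffers for communication

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setCommunicationSpi(commSpi);

            Ignition.start(cfg);
        }
    }

In Spring XML the same is done by setting the directBuffer property to false on
the TcpCommunicationSpi bean.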
Hope you can reproduce this problem at your end.
It occurs with any default configuration of Ignite servers, which is why I am
not sending mine.

regards
Mahesh



 *   The only workaround we found is to use on-heap buffers (directBuffer =
false), as noted above.


Re: TEST mail

2020-02-25 Thread Denis Magda
What are you trying to share in the body? Are you attaching something? Try to
stick to plain text.

-
Denis


On Tue, Feb 25, 2020 at 9:35 AM Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> my mails with body contents are not showing up in ignite user list. Sent
> the same mail multiple times but no luck.
> Anyone knows the issue?
>
> On Tue, Feb 25, 2020 at 10:45 PM Rick Alexander 
> wrote:
>
>> Test MAIL reply
>>
>> On Tue, Feb 25, 2020, 12:00 PM Prasad Bhalerao <
>> prasadbhalerao1...@gmail.com> wrote:
>>
>>>
>>>


Re: Read through not working as expected in case of Replicated cache

2020-02-25 Thread Denis Magda
Ignite Dev team,

This sounds like an issue in our replicated cache implementation rather
than expected behavior, especially since partitioned caches don't have
this peculiarity.

Who can explain why write-through needs to be enabled for replicated caches
to reload an entry from an underlying database properly/consistently?

-
Denis


On Tue, Feb 25, 2020 at 5:11 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> I think this is by design. You may suggest edits on readme.io.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, 24 Feb 2020 at 17:28, Prasad Bhalerao <
> prasadbhalerao1...@gmail.com>:
>
>> Hi,
>>
>> Is this a bug or the cache is designed to work this way?
>>
>> If it is as-designed, can this behavior be updated in ignite
>> documentation?
>>
>> Thanks,
>> Prasad
>>
>> On Wed, Oct 30, 2019 at 7:19 PM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> I have discussed this with fellow Ignite developers, and they say read
>>> through for a replicated cache works correctly only when either:
>>>
>>> - writeThrough is enabled and all changes go through it, or
>>> - the database contents do not change for already-read keys.
>>>
>>> I can see that neither is met in your case, so you can expect the
>>> behavior that you are seeing.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Tue, 29 Oct 2019 at 18:18, Akash Shinde :
>>>
 I am using Ignite 2.6 version.

 I am starting 3 server nodes with a replicated cache and 1 client node.
 Cache configuration is as follows.
 Read-through is enabled but write-through is disabled. Load data by key is
 implemented in the cache loader as given below.

 Steps to reproduce issue:
 1) Delete an entry from cache using IgniteCache.remove() method. (Entry
 is just removed from cache but present in DB as write-through is false)
 2) Invoke IgniteCache.get() method for the same key in step 1.
 3) Now query the cache from the client node. Every invocation returns
 different results: sometimes it returns the reloaded entry, sometimes it
 returns results without the reloaded entry.

 Looks like read-through is not replicating the reloaded entry on all
 nodes in case of REPLICATED cache.

 So to investigate further I changed the cache mode to PARTITIONED and
 set the backup count to 3 i.e. total number of nodes present in cluster (to
 mimic REPLICATED behavior).
 This time it worked as expected.
 Every invocation returned the same result with reloaded entry.
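
 A minimal sketch of the reproduction steps above (not from the original mail;
 it assumes a started Ignite client node and reuses the key/value types from the
 cache configuration that follows):

    // Hypothetical sketch of steps 1-3 above.
    static void reproduce(Ignite ignite, DefaultDataAffinityKey key) {
        IgniteCache<DefaultDataAffinityKey, NetworkData> cache =
            ignite.cache(CacheName.NETWORK_CACHE.name());

        cache.remove(key);              // 1) removed from the cache only; still in the DB (no write-through)
        NetworkData v = cache.get(key); // 2) read-through reloads the entry via the cache store

        // 3) Repeated SQL queries from the client node then return inconsistent
        //    results, because the reloaded entry does not reach every node of the
        //    REPLICATED cache.
    }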

 private CacheConfiguration networkCacheCfg() {
     CacheConfiguration networkCacheCfg = new CacheConfiguration<>(CacheName.NETWORK_CACHE.name());

     networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
     networkCacheCfg.setWriteThrough(false);
     networkCacheCfg.setReadThrough(true);
     networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
     networkCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
     //networkCacheCfg.setBackups(3);
     networkCacheCfg.setCacheMode(CacheMode.REPLICATED);

     Factory storeFactory = FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
     networkCacheCfg.setCacheStoreFactory(storeFactory);

     networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class, NetworkData.class);
     networkCacheCfg.setSqlIndexMaxInlineSize(65);

     RendezvousAffinityFunction affinityFunction = new RendezvousAffinityFunction();
     affinityFunction.setExcludeNeighbors(false);
     networkCacheCfg.setAffinity(affinityFunction);

     networkCacheCfg.setStatisticsEnabled(true);
     //networkCacheCfg.setNearConfiguration(nearCacheConfiguration());

     return networkCacheCfg;
 }

 @Override
 public V load(K k) throws CacheLoaderException {
     V value = null;
     DataSource dataSource = springCtx.getBean(DataSource.class);

     try (Connection connection = dataSource.getConnection();
          PreparedStatement statement = connection.prepareStatement(loadByKeySql)) {
         //statement.setObject(1, k.getId());
         setPreparedStatement(statement, k);

         try (ResultSet rs = statement.executeQuery()) {
             if (rs.next())
                 value = rowMapper.mapRow(rs, 0);
         }
     }
     catch (SQLException e) {
         throw new CacheLoaderException(e.getMessage(), e);
     }

     return value;
 }


 Thanks,

 Akash




Re: NodeOrder in GridCacheVersion

2020-02-25 Thread Prasad Bhalerao
Hi,

> Ignite Version: 2.6
> No of nodes: 4
>
> I am getting the following exception while committing a transaction.
>
> Although I am just reading the value from this cache inside the transaction,
> and I am sure that the cache and the "cache entry" being read are not modified
> outside this transaction on any other node.
>
> So I debugged the code and found out that it fails in the following code on 2
> nodes out of 4.
>
> GridDhtTxPrepareFuture#checkReadConflict -
> GridCacheEntryEx#checkSerializableReadVersion
>
> The GridCacheVersion values failing the equals check are given below for 2
> different caches. I can see that it is failing because of a change in the
> nodeOrder of the cache version.
>
> 1) Can someone please explain the significance of the nodeOrder w.r.t. the grid
> and the cache? When does it change?
> 2) How to solve this problem?
>
> Cache : Addons (Node 2)
> serReadVer of entry read inside Transaction: GridCacheVersion
> [topVer=194120123, order=4, nodeOrder=2]
> version on node3: GridCacheVersion [topVer=194120123, order=4, nodeOrder=1]
>
> Cache : Subscription  (Node 3)
> serReadVer of entry read inside Transaction:  GridCacheVersion
> [topVer=194120123, order=1, nodeOrder=2]
> version on node2:  GridCacheVersion [topVer=194120123, order=1,
> nodeOrder=10]
>
>
> *EXCEPTION:*
>
> Caused by:
> org.apache.ignite.internal.transactions.IgniteTxOptimisticCheckedException:
> Failed to prepare transaction, read/write conflict
>


>
> Thanks,
> Prasad
>
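
For reference, a minimal sketch (not from the thread) of the read-only
OPTIMISTIC/SERIALIZABLE pattern described above; this is the path on which
GridDhtTxPrepareFuture#checkReadConflict re-validates the recorded read versions
during the prepare phase and fails with "read/write conflict" if an entry's
GridCacheVersion has changed. The cache name "Addons" is taken from the thread;
everything else is illustrative:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.transactions.Transaction;
    import org.apache.ignite.transactions.TransactionConcurrency;
    import org.apache.ignite.transactions.TransactionIsolation;

    public class ReadOnlyTx {
        public static Object readInTx(Ignite ignite, Object key) {
            IgniteCache<Object, Object> addons = ignite.cache("Addons");

            try (Transaction tx = ignite.transactions().txStart(
                    TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
                Object value = addons.get(key); // serReadVer is recorded for this entry here
                tx.commit();                    // read version re-checked during prepare
                return value;
            }
        }
    }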


NodeOrder in GridCacheVersion

2020-02-25 Thread Prasad Bhalerao
Hi,

1) What is nodeOrder in grid cache version?
2) When does it change?
3) How does it affect transaction?

I think my transaction is failing due to this change in nodeOrder.

GridCacheVersion [topVer=194120123, order=4, nodeOrder=2]

Thanks,
Prasad


Re: TEST mail

2020-02-25 Thread Prasad Bhalerao
My mails with body content are not showing up in the Ignite user list. I sent
the same mail multiple times but no luck.
Does anyone know what the issue is?

On Tue, Feb 25, 2020 at 10:45 PM Rick Alexander 
wrote:

> Test MAIL reply
>
> On Tue, Feb 25, 2020, 12:00 PM Prasad Bhalerao <
> prasadbhalerao1...@gmail.com> wrote:
>
>>
>>


Re: TEST mail

2020-02-25 Thread Rick Alexander
Test MAIL reply

On Tue, Feb 25, 2020, 12:00 PM Prasad Bhalerao 
wrote:

>
>


TEST mail

2020-02-25 Thread Prasad Bhalerao



Re: ERROR: h2 Unsupported connection setting "MULTI_THREADED"

2020-02-25 Thread Ilya Kasnacheev
Hello!

I recommend tweaking your dependency imports to make sure you exclude all
H2 versions except the one needed by Apache Ignite.

With Maven, mvn dependency:tree to the rescue.

H2 is currently the heart of Ignite's SQL and, as such, it is not
negotiable.

Regards,
-- 
Ilya Kasnacheev


Tue, 25 Feb 2020 at 16:19, Andrew Munn :

> Yes I'm using spring boot.  Can Ignite be updated to work with the latest
> h2?
>
> On Fri, Feb 21, 2020, 6:25 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> I've heard about issues with e.g. Spring Boot overriding h2 database
>> version and breaking our runtime. I'm not sure who else does that.
>>
>> Regards,
>>
>> --
>> Ilya Kasnacheev
>>
>>
>> Thu, 20 Feb 2020 at 19:24, Andrew Munn :
>>
>>> Thanks.  Adding this runtime dependency to build.gradle fixed it:
>>>
>>> dependencies {
>>>     runtime("com.h2database:h2:1.4.197")
>>>     ...
>>>     compile group: 'org.apache.ignite', name: 'ignite-spring', version: '2.7.6'
>>>     compile group: 'org.apache.ignite', name: 'ignite-core', version: '2.7.6'
>>> }
>>>
>>> But I suspect this should be getting enforced automatically if using h2
>>> ver1.4.200 breaks something.  Am I specifying the Ignite dependency
>>> incorrectly?
>>>
>>>
>>> On Thu, Feb 20, 2020 at 4:08 AM Taras Ledkov 
>>> wrote:
>>>
 Hi,

 Ignite uses H2 version 1.4.197 (see [1]).


 [1]. https://github.com/apache/ignite/blob/master/parent/pom.xml#L74

 On 20.02.2020 4:36, Andrew Munn wrote:
 > I'm building/running my client app w/Gradle and I'm seeing this
 > error.  Am I overloading the Ignite H2 fork with the real H2 or
 > something?  It appears I have the latest h2:
 >
 > [.gradle]$ find ./ -name *h2*
 > ./caches/modules-2/metadata-2.82/descriptors/com.h2database
 > ./caches/modules-2/metadata-2.82/descriptors/com.h2database/h2
 > ./caches/modules-2/files-2.1/com.h2database
 > ./caches/modules-2/files-2.1/com.h2database/h2
 >
 ./caches/modules-2/files-2.1/com.h2database/h2/1.4.200/6178ecda6e9fea8739a3708729efbffd88be43e3/h2-1.4.200.pom
 >
 ./caches/modules-2/files-2.1/com.h2database/h2/1.4.200/f7533fe7cb8e99c87a43d325a77b4b678ad9031a/h2-1.4.200.jar
 >
 >
 >
 > 2020-02-19 19:59:28.229 ERROR 102356 --- [   main]
 > o.a.i.internal.IgniteKernal%dev-cluster  : Exception during start
 > processors, node will be stopped and close connections
 > org.apache.ignite.internal.processors.query.IgniteSQLException:
 Failed
 > to initialize system DB connection:
 >
 jdbc:h2:mem:b52dce26-ba01-4051-9130-e087e19fab4f;LOCK_MODE=3;MULTI_THREADED=1;DB_CLOSE_ON_EXIT=FALSE;DEFAULT_LOCK_TIMEOUT=1;FUNCTIONS_IN_SCHEMA=true;OPTIMIZE_REUSE_RESULTS=0;QUERY_CACHE_SIZE=0;MAX_OPERATION_MEMORY=0;BATCH_JOINS=1;ROW_FACTORY="org.apache.ignite.internal.processors.query.h2.opt.GridH2PlainRowFactory";DEFAULT_TABLE_ENGINE=org.apache.ignite.internal.processors.query.h2.opt.GridH2DefaultTableEngine
 > at
 >
 org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.systemConnection(IgniteH2Indexing.java:434)

 > ~[ignite-indexing-2.7.6.jar:2.7.6]
 > at
 >
 org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSystemStatement(IgniteH2Indexing.java:699)

 > ~[ignite-indexing-2.7.6.jar:2.7.6]
 > at
 >
 org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.createSchema0(IgniteH2Indexing.java:646)

 > ~[ignite-indexing-2.7.6.jar:2.7.6]
 > at
 >
 org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.start(IgniteH2Indexing.java:3257)

 > ~[ignite-indexing-2.7.6.jar:2.7.6]
 > at
 >
 org.apache.ignite.internal.processors.query.GridQueryProcessor.start(GridQueryProcessor.java:248)

 > ~[ignite-core-2.7.6.jar:2.7.6]
 > at
 >
 org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1700)

 > ~[ignite-core-2.7.6.jar:2.7.6]
 > at
 > org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1017)
 > ~[ignite-core-2.7.6.jar:2.7.6]
 > at
 >
 org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)

 > ~[ignite-core-2.7.6.jar:2.7.6]
 > at
 >
 org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730)

 > ~[ignite-core-2.7.6.jar:2.7.6]
 > at
 > org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158)
 > ~[ignite-core-2.7.6.jar:2.7.6]
 > at
 >
 org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1076)

 > ~[ignite-core-2.7.6.jar:2.7.6]
 > at
 > org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:962)
 

Re: Long running query

2020-02-25 Thread Vladimir Pligin
Hi,

The most useful information here is the plan that the RDBMS uses.
Is it possible to share it?

I suppose that using either the IDX_ORG_MEDP_DRUG index or the IDX_DRUG_ID
index alone is not entirely correct.
My bet is that an index on (DRUG_ID, ORG_ID) could help here, but that needs to
be checked.

It's also important to clarify whether you really need a left join here.
An inner join would make it possible to use a separate "where" clause, so that
different indexes serve the join and the filtering.
What do you think, does that make sense here?
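
For illustration only, a sketch of creating the suggested composite index through
the Ignite SQL DDL API; the cache name "SQL_PUBLIC_DRUGS", the table name "DRUGS"
and the index name are placeholders, since the original query and schema were not
included in this message:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.cache.query.SqlFieldsQuery;

    public class CreateIndexSketch {
        public static void create(Ignite ignite) {
            // Composite index covering the join/filter columns suggested above.
            ignite.cache("SQL_PUBLIC_DRUGS").query(new SqlFieldsQuery(
                "CREATE INDEX IF NOT EXISTS IDX_DRUG_ORG ON DRUGS (DRUG_ID, ORG_ID)")).getAll();
        }
    }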



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ERROR: h2 Unsupported connection setting "MULTI_THREADED"

2020-02-25 Thread Andrew Munn
Yes I'm using spring boot.  Can Ignite be updated to work with the latest
h2?

On Fri, Feb 21, 2020, 6:25 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> I've heard about issues with e.g. Spring Boot overriding h2 database
> version and breaking our runtime. I'm not sure who else does that.
>
> Regards,
>
> --
> Ilya Kasnacheev
>
>
> Thu, 20 Feb 2020 at 19:24, Andrew Munn :
>
>> Thanks.  Adding this runtime dependency to build.gradle fixed it:
>>
>> dependencies {
>>     runtime("com.h2database:h2:1.4.197")
>>     ...
>>     compile group: 'org.apache.ignite', name: 'ignite-spring', version: '2.7.6'
>>     compile group: 'org.apache.ignite', name: 'ignite-core', version: '2.7.6'
>> }
>>
>> But I suspect this should be getting enforced automatically if using h2
>> ver1.4.200 breaks something.  Am I specifying the Ignite dependency
>> incorrectly?
>>
>>
>> On Thu, Feb 20, 2020 at 4:08 AM Taras Ledkov 
>> wrote:
>>
>>> Hi,
>>>
>>> Ignite uses H2 version 1.4.197 (see [1]).
>>>
>>>
>>> [1]. https://github.com/apache/ignite/blob/master/parent/pom.xml#L74
>>>
>>> On 20.02.2020 4:36, Andrew Munn wrote:
>>> > I'm building/running my client app w/Gradle and I'm seeing this
>>> > error.  Am I overloading the Ignite H2 fork with the real H2 or
>>> > something?  It appears I have the latest h2:
>>> >
>>> > [.gradle]$ find ./ -name *h2*
>>> > ./caches/modules-2/metadata-2.82/descriptors/com.h2database
>>> > ./caches/modules-2/metadata-2.82/descriptors/com.h2database/h2
>>> > ./caches/modules-2/files-2.1/com.h2database
>>> > ./caches/modules-2/files-2.1/com.h2database/h2
>>> >
>>> ./caches/modules-2/files-2.1/com.h2database/h2/1.4.200/6178ecda6e9fea8739a3708729efbffd88be43e3/h2-1.4.200.pom
>>> >
>>> ./caches/modules-2/files-2.1/com.h2database/h2/1.4.200/f7533fe7cb8e99c87a43d325a77b4b678ad9031a/h2-1.4.200.jar
>>> >
>>> >
>>> >
>>> > 2020-02-19 19:59:28.229 ERROR 102356 --- [   main]
>>> > o.a.i.internal.IgniteKernal%dev-cluster  : Exception during start
>>> > processors, node will be stopped and close connections
>>> > org.apache.ignite.internal.processors.query.IgniteSQLException: Failed
>>> > to initialize system DB connection:
>>> >
>>> jdbc:h2:mem:b52dce26-ba01-4051-9130-e087e19fab4f;LOCK_MODE=3;MULTI_THREADED=1;DB_CLOSE_ON_EXIT=FALSE;DEFAULT_LOCK_TIMEOUT=1;FUNCTIONS_IN_SCHEMA=true;OPTIMIZE_REUSE_RESULTS=0;QUERY_CACHE_SIZE=0;MAX_OPERATION_MEMORY=0;BATCH_JOINS=1;ROW_FACTORY="org.apache.ignite.internal.processors.query.h2.opt.GridH2PlainRowFactory";DEFAULT_TABLE_ENGINE=org.apache.ignite.internal.processors.query.h2.opt.GridH2DefaultTableEngine
>>> > at
>>> >
>>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.systemConnection(IgniteH2Indexing.java:434)
>>>
>>> > ~[ignite-indexing-2.7.6.jar:2.7.6]
>>> > at
>>> >
>>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSystemStatement(IgniteH2Indexing.java:699)
>>>
>>> > ~[ignite-indexing-2.7.6.jar:2.7.6]
>>> > at
>>> >
>>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.createSchema0(IgniteH2Indexing.java:646)
>>>
>>> > ~[ignite-indexing-2.7.6.jar:2.7.6]
>>> > at
>>> >
>>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.start(IgniteH2Indexing.java:3257)
>>>
>>> > ~[ignite-indexing-2.7.6.jar:2.7.6]
>>> > at
>>> >
>>> org.apache.ignite.internal.processors.query.GridQueryProcessor.start(GridQueryProcessor.java:248)
>>>
>>> > ~[ignite-core-2.7.6.jar:2.7.6]
>>> > at
>>> >
>>> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1700)
>>>
>>> > ~[ignite-core-2.7.6.jar:2.7.6]
>>> > at
>>> > org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1017)
>>> > ~[ignite-core-2.7.6.jar:2.7.6]
>>> > at
>>> >
>>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
>>>
>>> > ~[ignite-core-2.7.6.jar:2.7.6]
>>> > at
>>> >
>>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730)
>>>
>>> > ~[ignite-core-2.7.6.jar:2.7.6]
>>> > at
>>> > org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158)
>>> > ~[ignite-core-2.7.6.jar:2.7.6]
>>> > at
>>> >
>>> org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1076)
>>>
>>> > ~[ignite-core-2.7.6.jar:2.7.6]
>>> > at
>>> > org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:962)
>>> > ~[ignite-core-2.7.6.jar:2.7.6]
>>> > at
>>> > org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:861)
>>> > ~[ignite-core-2.7.6.jar:2.7.6]
>>> > at
>>> > org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:731)
>>> > ~[ignite-core-2.7.6.jar:2.7.6]
>>> > at
>>> > org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:700)
>>> > ~[ignite-core-2.7.6.jar:2.7.6]
>>> > at org.apache.ignite.Ignition.start(Ignition.java:348)
>>> > ~[ignite-core-2.7.6.ja

Re: Read through not working as expected in case of Replicated cache

2020-02-25 Thread Ilya Kasnacheev
Hello!

I think this is by design. You may suggest edits on readme.io.

Regards,
-- 
Ilya Kasnacheev


Mon, 24 Feb 2020 at 17:28, Prasad Bhalerao :

> Hi,
>
> Is this a bug or the cache is designed to work this way?
>
> If it is as-designed, can this behavior be updated in ignite documentation?
>
> Thanks,
> Prasad
>
> On Wed, Oct 30, 2019 at 7:19 PM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> I have discussed this with fellow Ignite developers, and they say read
>> through for a replicated cache works correctly only when either:
>>
>> - writeThrough is enabled and all changes go through it, or
>> - the database contents do not change for already-read keys.
>>
>> I can see that neither is met in your case, so you can expect the
>> behavior that you are seeing.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Tue, 29 Oct 2019 at 18:18, Akash Shinde :
>>
>>> I am using Ignite 2.6 version.
>>>
>>> I am starting 3 server nodes with a replicated cache and 1 client node.
>>> Cache configuration is as follows.
>>> Read-through is enabled but write-through is disabled. Load data by key is
>>> implemented in the cache loader as given below.
>>>
>>> Steps to reproduce issue:
>>> 1) Delete an entry from cache using IgniteCache.remove() method. (Entry
>>> is just removed from cache but present in DB as write-through is false)
>>> 2) Invoke IgniteCache.get() method for the same key in step 1.
>>> 3) Now query the cache from the client node. Every invocation returns
>>> different results: sometimes it returns the reloaded entry, sometimes it
>>> returns results without the reloaded entry.
>>>
>>> Looks like read-through is not replicating the reloaded entry on all
>>> nodes in case of REPLICATED cache.
>>>
>>> So to investigate further I changed the cache mode to PARTITIONED and
>>> set the backup count to 3 i.e. total number of nodes present in cluster (to
>>> mimic REPLICATED behavior).
>>> This time it worked as expected.
>>> Every invocation returned the same result with reloaded entry.
>>>
>>> private CacheConfiguration networkCacheCfg() {
>>>     CacheConfiguration networkCacheCfg = new CacheConfiguration<>(CacheName.NETWORK_CACHE.name());
>>>
>>>     networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>>>     networkCacheCfg.setWriteThrough(false);
>>>     networkCacheCfg.setReadThrough(true);
>>>     networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
>>>     networkCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
>>>     //networkCacheCfg.setBackups(3);
>>>     networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
>>>
>>>     Factory storeFactory = FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
>>>     networkCacheCfg.setCacheStoreFactory(storeFactory);
>>>
>>>     networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class, NetworkData.class);
>>>     networkCacheCfg.setSqlIndexMaxInlineSize(65);
>>>
>>>     RendezvousAffinityFunction affinityFunction = new RendezvousAffinityFunction();
>>>     affinityFunction.setExcludeNeighbors(false);
>>>     networkCacheCfg.setAffinity(affinityFunction);
>>>
>>>     networkCacheCfg.setStatisticsEnabled(true);
>>>     //networkCacheCfg.setNearConfiguration(nearCacheConfiguration());
>>>
>>>     return networkCacheCfg;
>>> }
>>>
>>> @Override
>>> public V load(K k) throws CacheLoaderException {
>>>     V value = null;
>>>     DataSource dataSource = springCtx.getBean(DataSource.class);
>>>
>>>     try (Connection connection = dataSource.getConnection();
>>>          PreparedStatement statement = connection.prepareStatement(loadByKeySql)) {
>>>         //statement.setObject(1, k.getId());
>>>         setPreparedStatement(statement, k);
>>>
>>>         try (ResultSet rs = statement.executeQuery()) {
>>>             if (rs.next())
>>>                 value = rowMapper.mapRow(rs, 0);
>>>         }
>>>     }
>>>     catch (SQLException e) {
>>>         throw new CacheLoaderException(e.getMessage(), e);
>>>     }
>>>
>>>     return value;
>>> }
>>>
>>>
>>> Thanks,
>>>
>>> Akash
>>>
>>>


RE: Re: Sequence with ODBC

2020-02-25 Thread Abhay Gupta
Can you please suggest some alternate way of generating the same in the
application which is good for a primary key, like a UUID, where the issue is
that it is 128-bit.

Regards,

Abhay Gupta

From: Igor Sapego
Sent: 25 February 2020 17:37
To: user
Subject: Re: Sequence with ODBC

There is no such option currently, AFAIK

Best Regards,
Igor

On Tue, Feb 25, 2020 at 3:02 PM Abhay Gupta wrote:

> Hi,
>
> Do we have a way to have an autoincrement field in the database for use in the
> Thin Client / UNIX ODBC? The atomic sequence help is available with Java when
> the Java Ignite client is used, but it does not say whether the same is
> available through the thin client protocol or not.
>
> Regards,
>
> Abhay Gupta
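
For reference, a minimal sketch of the thick-client Java atomic sequence API
referred to above (the facility that is not exposed over the thin client/ODBC
protocol). The sequence name is a placeholder:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteAtomicSequence;
    import org.apache.ignite.Ignition;

    public class SequenceSketch {
        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start()) {
                // Create the cluster-wide sequence if it does not exist yet (initial value 0).
                IgniteAtomicSequence seq = ignite.atomicSequence("orderIdSeq", 0, true);

                long id = seq.incrementAndGet(); // compact 64-bit id, unlike a 128-bit UUID
                System.out.println("Next id: " + id);
            }
        }
    }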


Re: Sequence with ODBC

2020-02-25 Thread Igor Sapego
There is no such option currently, AFAIK

Best Regards,
Igor


On Tue, Feb 25, 2020 at 3:02 PM Abhay Gupta  wrote:

> Hi ,
>
>
>
> Do we have a way to have an autoincrement field in the database for use in the
> Thin Client / UNIX ODBC? The atomic sequence help is available with Java when
> the Java Ignite client is used, but it does not say whether the same is
> available through the thin client protocol or not.
>
>
>
> Regards,
>
>
>
> Abhay Gupta
>
>
>


Sequence with ODBC

2020-02-25 Thread Abhay Gupta
Hi,

Do we have a way to have an autoincrement field in the database for use in the
Thin Client / UNIX ODBC? The atomic sequence help is available with Java when
the Java Ignite client is used, but it does not say whether the same is
available through the thin client protocol or not.

Regards,

Abhay Gupta


Re: Is Apache ignite support tiering or it only support caching??

2020-02-25 Thread Stephen Darlington
Ignite supports persistence to disk. It’s configured using data regions. So you 
could have one data region that’s entirely in memory and another that’s on 
disk. We don’t call them “tiers” but that’s effectively what they allow.
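
A minimal sketch of that idea (not from the original mail; region names and
sizes are placeholders):

    // Two data regions: one purely in RAM, one persisted to disk, which gives
    // the "tiering" effect described above.
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.DataRegionConfiguration;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class TieredRegions {
        public static void main(String[] args) {
            DataRegionConfiguration inMemory = new DataRegionConfiguration()
                .setName("inMemoryRegion")
                .setMaxSize(4L * 1024 * 1024 * 1024);  // 4GB, RAM only

            DataRegionConfiguration persisted = new DataRegionConfiguration()
                .setName("persistedRegion")
                .setMaxSize(1L * 1024 * 1024 * 1024)   // 1GB page cache in RAM
                .setPersistenceEnabled(true);          // pages also stored on disk

            DataStorageConfiguration storage = new DataStorageConfiguration()
                .setDataRegionConfigurations(inMemory, persisted);

            Ignition.start(new IgniteConfiguration().setDataStorageConfiguration(storage));
        }
    }

A cache is then pinned to a region via CacheConfiguration.setDataRegionName(...).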

> On 25 Feb 2020, at 05:57, Preet  wrote:
> 
> I am new to ignite. I think in data regions we are specifying how much size
> we want to allocate and it is from RAM. Yaa,I agree that we can de slice or
> dice memory but how to specify other tier than RAM. What if I want to
> allocate 1gb data region from dev/sda or any available device ?? 
> 
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/




Re: How to access IGFS file written one node from other node in cluster ??

2020-02-25 Thread aealexsandrov
As was mentioned above, IGFS is deprecated and not recommended for use.

Just use native HDFS API for working with files:

https://hadoop.apache.org/docs/stable/api/org/apache/hadoop/fs/FileSystem.html

If you have some SQL layer over HDFS (Hive, Impala), you can set up an Ignite
cache store on top of it.

https://apacheignite.readme.io/docs/3rd-party-store
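
A minimal sketch of the plain HDFS API approach (the namenode URI and the path
are placeholders, not from the original mail); a file written from one node is
then visible from any other node that talks to the same HDFS:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsReadWrite {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new URI("hdfs://namenode:8020"), new Configuration());
            Path path = new Path("/tmp/example.txt");

            try (FSDataOutputStream out = fs.create(path, true)) { // write (overwrite if present)
                out.writeUTF("written on node A");
            }

            try (FSDataInputStream in = fs.open(path)) {           // read back, possibly on node B
                System.out.println(in.readUTF());
            }

            fs.close();
        }
    }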



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ClusterTopologyServerNotFoundException

2020-02-25 Thread prudhvibiruda
Hi,
Even I am getting the same exception with CacheMode.REPLICATED.
But my requirement is that my Ignite node shouldn't wait for other nodes in
the cluster. In our case, even when one node is down, the other should keep
working. That's why we didn't define the baseline topology.
Can you give any alternate solution for this?
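
For reference, a minimal sketch (an assumption on my side, not from the thread;
it presumes native persistence is enabled) of activating the cluster and setting
the baseline to the currently alive server nodes, so the cluster does not wait
for absent nodes:

    import org.apache.ignite.Ignite;

    public class BaselineSetup {
        public static void set(Ignite ignite) {
            ignite.cluster().active(true); // activate the cluster (required with native persistence)

            // Use the current topology version as the baseline; only nodes present
            // now will own partitions until the baseline is changed again.
            ignite.cluster().setBaselineTopology(ignite.cluster().topologyVersion());
        }
    }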
 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/