Hi,
I tried this scenario with version 2.7.6 and the issue is still there in
2.7.6.
Also, I cannot go with version 2.7.6 due to IGNITE-10884. That
issue (IGNITE-10884) is fixed but the fix is not yet released.
Could you please let me know what the workaround for the replicated cache
issue is?
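
In the meantime, would it be acceptable to explicitly re-put the value after a
read-through get(), so the reloaded entry goes through the normal write path and
gets replicated to all nodes? A rough sketch of what I have in mind (ignite and
key are just placeholders):

    IgniteCache<DefaultDataAffinityKey, NetworkData> cache =
            ignite.cache(CacheName.NETWORK_CACHE.name());

    // get() triggers the cache loader on a miss
    NetworkData value = cache.get(key);

    // an explicit put() uses the write path, which FULL_SYNC replicates to all nodes
    if (value != null)
        cache.put(key, value);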

Thanks,
Akash


On Tue, Oct 29, 2019 at 8:53 PM Ilya Kasnacheev <ilya.kasnach...@gmail.com>
wrote:

> Hello!
>
> I remember that we had this issue. Have you tried 2.7.6 yet?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Tue, Oct 29, 2019 at 18:18, Akash Shinde <akashshi...@gmail.com>:
>
>> I am using Ignite version 2.6.
>>
>> I am starting 3 server nodes with a replicated cache and 1 client node.
>> Cache configuration is as follows.
>> Read-through is enabled but write-through is disabled. Load-by-key is
>> implemented in the cache loader as shown below.
>>
>> Steps to reproduce the issue:
>> 1) Delete an entry from the cache using IgniteCache.remove(). (The entry
>> is removed only from the cache and is still present in the DB because
>> write-through is false.)
>> 2) Invoke IgniteCache.get() for the same key as in step 1.
>> 3) Now query the cache from the client node. Every invocation returns a
>> different result: sometimes it returns the reloaded entry, sometimes it
>> returns the result without the reloaded entry.
>>
>> It looks like read-through is not replicating the reloaded entry to all
>> nodes in the case of a REPLICATED cache (a rough sketch of these steps is below).
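>>
>> Roughly, the sequence I run is (key is a placeholder; plain gets shown
>> here instead of the actual query):
>>
>> IgniteCache<DefaultDataAffinityKey, NetworkData> cache =
>>         ignite.cache(CacheName.NETWORK_CACHE.name());
>>
>> cache.remove(key);               // 1) removed from cache, still present in DB
>> NetworkData v = cache.get(key);  // 2) miss -> read-through reloads the entry
>>
>> // 3) repeated reads from the client node return inconsistent results
>> for (int i = 0; i < 5; i++)
>>     System.out.println(cache.get(key));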
>>
>> To investigate further, I changed the cache mode to PARTITIONED and set
>> the backup count to 3, i.e. the total number of nodes in the cluster (to
>> mimic REPLICATED behavior).
>> This time it worked as expected:
>> every invocation returned the same result, with the reloaded entry.
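>>
>> For that test the only changes relative to the configuration below were, roughly:
>>
>> networkCacheCfg.setCacheMode(CacheMode.PARTITIONED);
>> networkCacheCfg.setBackups(3);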
>>
>> private CacheConfiguration networkCacheCfg() {
>>     CacheConfiguration networkCacheCfg =
>>             new CacheConfiguration<>(CacheName.NETWORK_CACHE.name());
>>     networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>>     networkCacheCfg.setWriteThrough(false);
>>     networkCacheCfg.setReadThrough(true);
>>     networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
>>     networkCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
>>     //networkCacheCfg.setBackups(3);
>>     networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
>>
>>     Factory<NetworkDataCacheLoader> storeFactory =
>>             FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
>>     networkCacheCfg.setCacheStoreFactory(storeFactory);
>>
>>     networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class, NetworkData.class);
>>     networkCacheCfg.setSqlIndexMaxInlineSize(65);
>>
>>     RendezvousAffinityFunction affinityFunction = new RendezvousAffinityFunction();
>>     affinityFunction.setExcludeNeighbors(false);
>>     networkCacheCfg.setAffinity(affinityFunction);
>>
>>     networkCacheCfg.setStatisticsEnabled(true);
>>     //networkCacheCfg.setNearConfiguration(nearCacheConfiguration());
>>
>>     return networkCacheCfg;
>> }
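>>
>> The cache configuration is registered on each node in the usual way, roughly
>> like this (igniteConfiguration() is just a placeholder for our node configuration):
>>
>> IgniteConfiguration cfg = igniteConfiguration();
>> cfg.setCacheConfiguration(networkCacheCfg());
>> Ignite ignite = Ignition.start(cfg);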
>>
>> // CacheLoader.load(): invoked on a read-through miss; loads a single row from the DB by key
>> @Override
>> public V load(K k) throws CacheLoaderException {
>>     V value = null;
>>     DataSource dataSource = springCtx.getBean(DataSource.class);
>>     try (Connection connection = dataSource.getConnection();
>>          PreparedStatement statement = connection.prepareStatement(loadByKeySql)) {
>>         //statement.setObject(1, k.getId());
>>         setPreparedStatement(statement, k);
>>         try (ResultSet rs = statement.executeQuery()) {
>>             if (rs.next()) {
>>                 value = rowMapper.mapRow(rs, 0);
>>             }
>>         }
>>     } catch (SQLException e) {
>>         throw new CacheLoaderException(e.getMessage(), e);
>>     }
>>
>>     return value;
>> }
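>>
>> For context, NetworkDataCacheLoader is our CacheStore implementation; roughly it
>> looks like the sketch below (the exact class declaration is from memory; only the
>> load path matters here since write-through is disabled):
>>
>> public class NetworkDataCacheLoader<K, V> extends CacheStoreAdapter<K, V> {
>>
>>     // load(K) as shown above
>>
>>     @Override public void write(Cache.Entry<? extends K, ? extends V> entry) {
>>         // no-op: write-through is disabled
>>     }
>>
>>     @Override public void delete(Object key) {
>>         // no-op: write-through is disabled
>>     }
>> }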
>>
>>
>> Thanks,
>>
>> Akash
>>
>>
