Ignite Dev team,

This sounds like an issue in our replicated cache implementation rather
than expected behavior, especially since partitioned caches don't show
this peculiarity.

Who can explain why write-through needs to be enabled for replicated caches
to reload an entry from the underlying database consistently?

-
Denis


On Tue, Feb 25, 2020 at 5:11 AM Ilya Kasnacheev <ilya.kasnach...@gmail.com>
wrote:

> Hello!
>
> I think this is by design. You may suggest edits on readme.io.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, Feb 24, 2020 at 17:28, Prasad Bhalerao <prasadbhalerao1...@gmail.com>:
>
>> Hi,
>>
>> Is this a bug, or is the cache designed to work this way?
>>
>> If it is by design, can this behavior be documented in the Ignite
>> documentation?
>>
>> Thanks,
>> Prasad
>>
>> On Wed, Oct 30, 2019 at 7:19 PM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> I have discussed this with fellow Ignite developers, and they say
>>> read-through for a replicated cache works correctly only when either:
>>>
>>> - write-through is enabled and all changes go through it, or
>>> - the database contents do not change for keys that were already read.
>>>
>>> I can see that neither condition is met in your case, so the behavior
>>> you are seeing is expected.
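>>>
>>> For the first option, a minimal sketch of the relevant configuration,
>>> assuming a placeholder cache name and your existing store factory (an
>>> illustration, not taken verbatim from your setup):
>>>
>>> CacheConfiguration<DefaultDataAffinityKey, NetworkData> cfg =
>>>         new CacheConfiguration<>("networkCache");
>>> cfg.setCacheMode(CacheMode.REPLICATED);
>>> cfg.setReadThrough(true);
>>> // With write-through enabled, every cache update also goes to the
>>> // underlying store, so the store and all replicas stay consistent.
>>> cfg.setWriteThrough(true);
>>> cfg.setCacheStoreFactory(
>>>         FactoryBuilder.factoryOf(NetworkDataCacheLoader.class));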
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Tue, Oct 29, 2019 at 18:18, Akash Shinde <akashshi...@gmail.com>:
>>>
>>>> I am using Ignite version 2.6.
>>>>
>>>> I am starting 3 server nodes with a replicated cache and 1 client node.
>>>> The cache configuration is as follows:
>>>> read-through is enabled but write-through is disabled. Load-by-key is
>>>> implemented in the cache loader as given below.
>>>>
>>>> Steps to reproduce the issue (see the sketch after these steps):
>>>> 1) Delete an entry from the cache using IgniteCache.remove(). (The
>>>> entry is removed from the cache only; it is still present in the DB
>>>> because write-through is disabled.)
>>>> 2) Invoke IgniteCache.get() for the same key as in step 1, so that
>>>> read-through reloads the entry from the DB.
>>>> 3) Now query the cache from the client node. Every invocation returns
>>>> different results: sometimes the reloaded entry is included, sometimes
>>>> it is not.
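>>>>
>>>> A minimal sketch of the reproduction (the key variable is a
>>>> placeholder for any key present in the DB):
>>>>
>>>> IgniteCache<DefaultDataAffinityKey, NetworkData> cache =
>>>>         ignite.cache(CacheName.NETWORK_CACHE.name());
>>>> cache.remove(key);               // gone from the cache, still in the DB
>>>> NetworkData v = cache.get(key);  // read-through reloads it from the DB
>>>> // Repeated queries from the client node now return inconsistent
>>>> // results: the reloaded entry is sometimes present, sometimes missing.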
>>>>
>>>> It looks like read-through is not replicating the reloaded entry to
>>>> all nodes in the case of a REPLICATED cache.
>>>>
>>>> To investigate further, I changed the cache mode to PARTITIONED and
>>>> set the backup count to 3, i.e. the total number of nodes in the
>>>> cluster, to mimic REPLICATED behavior (sketch below).
>>>> This time it worked as expected:
>>>> every invocation returned the same result, including the reloaded entry.
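>>>>
>>>> Relative to the configuration below, the only changes for that
>>>> experiment were roughly these two lines:
>>>>
>>>> networkCacheCfg.setCacheMode(CacheMode.PARTITIONED);
>>>> // With 3 server nodes, 3 backups put a copy of every partition on
>>>> // every node, mimicking REPLICATED.
>>>> networkCacheCfg.setBackups(3);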
>>>>
>>>> private CacheConfiguration networkCacheCfg() {
>>>>     CacheConfiguration networkCacheCfg =
>>>>             new CacheConfiguration<>(CacheName.NETWORK_CACHE.name());
>>>>     networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
>>>>     networkCacheCfg.setWriteThrough(false);
>>>>     networkCacheCfg.setReadThrough(true);
>>>>     networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
>>>>     networkCacheCfg.setWriteSynchronizationMode(
>>>>             CacheWriteSynchronizationMode.FULL_SYNC);
>>>>     //networkCacheCfg.setBackups(3);
>>>>     networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
>>>>     Factory<NetworkDataCacheLoader> storeFactory =
>>>>             FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
>>>>     networkCacheCfg.setCacheStoreFactory(storeFactory);
>>>>     networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class,
>>>>             NetworkData.class);
>>>>     networkCacheCfg.setSqlIndexMaxInlineSize(65);
>>>>     RendezvousAffinityFunction affinityFunction =
>>>>             new RendezvousAffinityFunction();
>>>>     affinityFunction.setExcludeNeighbors(false);
>>>>     networkCacheCfg.setAffinity(affinityFunction);
>>>>     networkCacheCfg.setStatisticsEnabled(true);
>>>>     //networkCacheCfg.setNearConfiguration(nearCacheConfiguration());
>>>>     return networkCacheCfg;
>>>> }
>>>>
>>>> @Override
>>>> public V load(K k) throws CacheLoaderException {
>>>>     V value = null;
>>>>     DataSource dataSource = springCtx.getBean(DataSource.class);
>>>>     try (Connection connection = dataSource.getConnection();
>>>>          PreparedStatement statement =
>>>>                  connection.prepareStatement(loadByKeySql)) {
>>>>         //statement.setObject(1, k.getId());
>>>>         setPreparedStatement(statement, k);
>>>>         try (ResultSet rs = statement.executeQuery()) {
>>>>             if (rs.next()) {
>>>>                 value = rowMapper.mapRow(rs, 0);
>>>>             }
>>>>         }
>>>>     } catch (SQLException e) {
>>>>         throw new CacheLoaderException(e.getMessage(), e);
>>>>     }
>>>>     return value;
>>>> }
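>>>>
>>>> For reference, the loader class around load() is wired up roughly like
>>>> this (a skeleton only; the CacheStoreAdapter base class and the
>>>> resource-injection annotation are assumptions, and only the springCtx
>>>> field appears in the code above):
>>>>
>>>> public class NetworkDataCacheLoader
>>>>         extends CacheStoreAdapter<DefaultDataAffinityKey, NetworkData> {
>>>>     // Ignite injects the Spring ApplicationContext into cache store
>>>>     // instances created by the store factory.
>>>>     @SpringApplicationContextResource
>>>>     private ApplicationContext springCtx;
>>>>
>>>>     // load(K) is shown above; write/delete are no-ops because
>>>>     // write-through is disabled for this cache.
>>>>     @Override
>>>>     public void write(Cache.Entry<? extends DefaultDataAffinityKey,
>>>>             ? extends NetworkData> entry) { /* no-op */ }
>>>>
>>>>     @Override
>>>>     public void delete(Object key) { /* no-op */ }
>>>> }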
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> Akash
>>>>
>>>>
