Ivan, thanks for stepping in.

Prasad, is Ivan's assumption correct that you query the data with SQL under
the observed circumstances? My guess is that you were referring to the
key-value APIs, given that the issue goes away when write-through is
enabled.

-
Denis


On Fri, Feb 28, 2020 at 2:30 PM Ivan Pavlukhin <vololo...@gmail.com> wrote:

> As I understand it, the issue here is the combination of read-through and
> SQL. SQL queries do not read from the underlying storage when read-through
> is configured. The observed result happens because a query from a
> client node over a REPLICATED cache picks a random server node (a kind of
> load-balancing) to retrieve data. The following happens in the described
> case:
> 1. The value is loaded into the cache from the underlying storage on the
> primary node when cache.get is called.
> 2. The query is executed multiple times; when the chosen node is the
> primary node, the value is observed. On other nodes the value is absent.
>
> Actually, the behavior for a PARTITIONED cache is similar, but the
> inconsistency is not observed because SQL queries read data from the
> primary node there. If the primary node leaves the cluster, an SQL
> query will not see the value anymore, so the same inconsistency will
> appear.
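The primary-node-only loading that Ivan describes can be checked directly. Below is a hedged, non-runnable sketch (it assumes a running 3-server cluster, the thread's NETWORK_CACHE name, a hypothetical client-config.xml, and a key that is present in the database but absent from the cache): after a read-through get(), a broadcast localPeek should report the value only on the key's primary node.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CachePeekMode;

public class ReadThroughPrimaryOnlyCheck {
    public static void main(String[] args) {
        // Client node; assumes the 3-server cluster from the thread is running.
        Ignite client = Ignition.start("client-config.xml");
        Object key = args[0]; // placeholder: a key present in the DB, absent in cache

        // Read-through loads the value, but only onto the key's primary node.
        client.cache("NETWORK_CACHE").get(key);

        // Run on every server node: localPeek reports whether the entry is
        // stored locally. With REPLICATED + read-through, only the primary
        // node should print a non-null value, matching step 1 above.
        client.compute(client.cluster().forServers()).broadcast(() -> {
            IgniteCache<Object, Object> c =
                Ignition.localIgnite().cache("NETWORK_CACHE");
            System.out.println(c.localPeek(key, CachePeekMode.ALL));
        });
    }
}
```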
>
> Best regards,
> Ivan Pavlukhin
>
> Fri, Feb 28, 2020 at 13:23, Prasad Bhalerao <
> prasadbhalerao1...@gmail.com>:
> >
> > Can someone please comment on this?
> >
> > On Wed, Feb 26, 2020 at 6:04 AM Denis Magda <dma...@apache.org> wrote:
> >
> > > Ignite Dev team,
> > >
> > > This sounds like an issue in our replicated cache implementation rather
> > > than expected behavior, especially if partitioned caches don't have
> > > this peculiarity.
> > >
> > > Who can explain why write-through needs to be enabled for replicated
> caches
> > > to reload an entry from an underlying database properly/consistently?
> > >
> > > -
> > > Denis
> > >
> > >
> > > On Tue, Feb 25, 2020 at 5:11 AM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com
> > > >
> > > wrote:
> > >
> > > > Hello!
> > > >
> > > > I think this is by design. You may suggest edits on readme.io.
> > > >
> > > > Regards,
> > > > --
> > > > Ilya Kasnacheev
> > > >
> > > >
> > > > Mon, Feb 24, 2020 at 17:28, Prasad Bhalerao <
> > > > prasadbhalerao1...@gmail.com>:
> > > >
> > > >> Hi,
> > > >>
> > > >> Is this a bug, or is the cache designed to work this way?
> > > >>
> > > >> If it is by design, can this behavior be noted in the Ignite
> > > >> documentation?
> > > >>
> > > >> Thanks,
> > > >> Prasad
> > > >>
> > > >> On Wed, Oct 30, 2019 at 7:19 PM Ilya Kasnacheev <
> > > >> ilya.kasnach...@gmail.com> wrote:
> > > >>
> > > >>> Hello!
> > > >>>
> > > >>> I have discussed this with fellow Ignite developers, and they say
> > > >>> read-through for a replicated cache works correctly only when either:
> > > >>>
> > > >>> - writeThrough is enabled and all changes go through it, or
> > > >>> - database contents do not change for already-read keys.
> > > >>>
> > > >>> I can see that neither is met in your case, so you can expect the
> > > >>> behavior that you are seeing.
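Concretely, the first of Ilya's conditions means keeping the same store but switching write-through on, so that every cache mutation (including removes) also hits the database and the two stay consistent. A hedged configuration sketch, reusing names from the thread's own config (note this is a fragment, not a definitive fix, and NetworkDataCacheLoader would have to implement the full CacheStore write/delete methods, not just load, for write-through to work):

```java
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class SafeReadThroughConfig {
    static CacheConfiguration<Object, Object> networkCacheCfg() {
        CacheConfiguration<Object, Object> cfg =
            new CacheConfiguration<>("NETWORK_CACHE");
        cfg.setCacheMode(CacheMode.REPLICATED);
        cfg.setReadThrough(true);
        // First condition: write-through on, so all changes go through the store.
        cfg.setWriteThrough(true);
        cfg.setCacheStoreFactory(
            FactoryBuilder.factoryOf(NetworkDataCacheLoader.class));
        return cfg;
    }
}
```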
> > > >>>
> > > >>> Regards,
> > > >>> --
> > > >>> Ilya Kasnacheev
> > > >>>
> > > >>>
> > > >>> Tue, Oct 29, 2019 at 18:18, Akash Shinde <akashshi...@gmail.com>:
> > > >>>
> > > >>>> I am using Ignite 2.6 version.
> > > >>>>
> > > >>>> I am starting 3 server nodes with a replicated cache and 1 client
> > > node.
> > > >>>> Cache configuration is as follows.
> > > >>>> Read-through is enabled but write-through is false. Load-by-key is
> > > >>>> implemented in the cache loader as given below.
> > > >>>>
> > > >>>> Steps to reproduce the issue:
> > > >>>> 1) Delete an entry from the cache using IgniteCache.remove(). (The
> > > >>>> entry is removed from the cache only; it is still present in the DB
> > > >>>> because write-through is false.)
> > > >>>> 2) Invoke IgniteCache.get() for the same key as in step 1.
> > > >>>> 3) Now query the cache from the client node. Every invocation
> > > >>>> returns different results: sometimes the reloaded entry is present,
> > > >>>> sometimes it is not.
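The three reproduction steps can be sketched as a client-side snippet (a hypothetical repro, not runnable standalone: it assumes a cache handle obtained from a client node joined to the 3-server cluster, and the SQL string is a placeholder over the thread's NetworkData type):

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class ReplicatedReadThroughRepro {
    // 'cache' = ignite.cache("NETWORK_CACHE") on a client node;
    // 'key' = a key that exists in the backing database.
    static void reproduce(IgniteCache<Object, Object> cache, Object key) {
        cache.remove(key); // 1) removed from cache, still in DB (no write-through)
        cache.get(key);    // 2) read-through reloads it, on the primary node only

        for (int i = 0; i < 10; i++) {
            // 3) each execution may be served by a different server node, so
            // the count flips depending on whether the chosen node holds the
            // reloaded entry.
            System.out.println(cache.query(
                new SqlFieldsQuery("select count(*) from NetworkData")).getAll());
        }
    }
}
```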
> > > >>>>
> > > >>>> It looks like read-through does not replicate the reloaded entry to
> > > >>>> all nodes in the case of a REPLICATED cache.
> > > >>>>
> > > >>>> To investigate further, I changed the cache mode to PARTITIONED and
> > > >>>> set the backup count to 3, i.e. the total number of nodes in the
> > > >>>> cluster (to mimic REPLICATED behavior). This time it worked as
> > > >>>> expected: every invocation returned the same result with the
> > > >>>> reloaded entry.
> > > >>>>
> > > >>>> private CacheConfiguration networkCacheCfg() {
> > > >>>>     CacheConfiguration networkCacheCfg =
> > > >>>>         new CacheConfiguration<>(CacheName.NETWORK_CACHE.name());
> > > >>>>     networkCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> > > >>>>     networkCacheCfg.setWriteThrough(false);
> > > >>>>     networkCacheCfg.setReadThrough(true);
> > > >>>>     networkCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
> > > >>>>     networkCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
> > > >>>>     //networkCacheCfg.setBackups(3);
> > > >>>>     networkCacheCfg.setCacheMode(CacheMode.REPLICATED);
> > > >>>>     Factory<NetworkDataCacheLoader> storeFactory =
> > > >>>>         FactoryBuilder.factoryOf(NetworkDataCacheLoader.class);
> > > >>>>     networkCacheCfg.setCacheStoreFactory(storeFactory);
> > > >>>>     networkCacheCfg.setIndexedTypes(DefaultDataAffinityKey.class,
> > > >>>>         NetworkData.class);
> > > >>>>     networkCacheCfg.setSqlIndexMaxInlineSize(65);
> > > >>>>     RendezvousAffinityFunction affinityFunction =
> > > >>>>         new RendezvousAffinityFunction();
> > > >>>>     affinityFunction.setExcludeNeighbors(false);
> > > >>>>     networkCacheCfg.setAffinity(affinityFunction);
> > > >>>>     networkCacheCfg.setStatisticsEnabled(true);
> > > >>>>     //networkCacheCfg.setNearConfiguration(nearCacheConfiguration());
> > > >>>>     return networkCacheCfg;
> > > >>>> }
> > > >>>>
> > > >>>> @Override
> > > >>>> public V load(K k) throws CacheLoaderException {
> > > >>>>     V value = null;
> > > >>>>     DataSource dataSource = springCtx.getBean(DataSource.class);
> > > >>>>     try (Connection connection = dataSource.getConnection();
> > > >>>>          PreparedStatement statement =
> > > connection.prepareStatement(loadByKeySql)) {
> > > >>>>         //statement.setObject(1, k.getId());
> > > >>>>         setPreparedStatement(statement,k);
> > > >>>>         try (ResultSet rs = statement.executeQuery()) {
> > > >>>>             if (rs.next()) {
> > > >>>>                 value = rowMapper.mapRow(rs, 0);
> > > >>>>             }
> > > >>>>         }
> > > >>>>     } catch (SQLException e) {
> > > >>>>
> > > >>>>         throw new CacheLoaderException(e.getMessage(), e);
> > > >>>>     }
> > > >>>>
> > > >>>>     return value;
> > > >>>> }
> > > >>>>
> > > >>>>
> > > >>>> Thanks,
> > > >>>>
> > > >>>> Akash
> > > >>>>
> > > >>>>
> > >
>
