0 less than on the start).
>
> Am I missing something?
>
>
>
> Thanks,
>
> Stan
>
>
>
> *From: *Roman Novichenok
> *Sent: *October 2, 2018 10:21 PM
> *To: *user@ignite.apache.org
> *Subject: *Re: Partition Loss Policy options
>
>
>
> Anton,
Thanks. I understand relying on the user to determine whether data is up to
date when Ignite is used as a cache. With native persistence, Ignite is the
source of the data. If some partitions become unavailable, there's no way
for the data to become outdated. It feels like there should be a configuration
setting
I was going over failure recovery scenarios, trying to understand the logic
behind the lost-partitions functionality. In the case of native persistence,
Ignite fully manages data persistence and availability. If enough nodes in
the cluster become unavailable, resulting in partitions being marked lost, Ignite
ke
with persistence enabled.
2. creates 2 caches with setBackups(1)
3. populates the caches with 1000 elements
4. runs a SQL query and prints the result size: 1000
5. stops 2 nodes
6. runs the SQL query from step 4 and prints the result size: something less
than 1000.
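For anyone trying to reproduce this, a rough single-JVM sketch of the steps above might look like the following. All names here (instance names, cache name, node count) are my own illustrative choices, and the original repro presumably runs the nodes as separate processes; the sketch requires ignite-core on the classpath.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PartitionLossRepro {
    public static void main(String[] args) {
        // 1. Start 3 server nodes with native persistence enabled.
        for (int i = 0; i < 3; i++) {
            IgniteConfiguration cfg = new IgniteConfiguration()
                .setIgniteInstanceName("node" + i);
            DataStorageConfiguration ds = new DataStorageConfiguration();
            ds.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
            cfg.setDataStorageConfiguration(ds);
            Ignition.start(cfg);
        }

        Ignite ignite = Ignition.ignite("node0");
        ignite.cluster().active(true); // persistence requires explicit activation

        // 2. Create a cache with one backup (the repro uses two such caches).
        CacheConfiguration<Integer, Integer> cacheCfg =
            new CacheConfiguration<Integer, Integer>("testCache")
                .setBackups(1)
                .setIndexedTypes(Integer.class, Integer.class);
        IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(cacheCfg);

        // 3. Populate with 1000 entries.
        for (int i = 0; i < 1000; i++)
            cache.put(i, i);

        // 4. Run a SQL query and print the result size (expected: 1000).
        System.out.println(
            cache.query(new SqlFieldsQuery("select _key from Integer"))
                .getAll().size());

        // 5. Stop 2 nodes non-gracefully, so some partitions lose all copies.
        Ignition.stop("node1", true);
        Ignition.stop("node2", true);

        // 6. Run the same query again; the reported behavior is a partial
        // result rather than an exception.
        System.out.println(
            cache.query(new SqlFieldsQuery("select _key from Integer"))
                .getAll().size());
    }
}
```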
Thanks,
Roman
On Tue, Oct 2, 2018 at 2:07 PM
I was looking at scan queries. cache.query(new ScanQuery()) returns partial
results.
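For reference, the scan-query call in question looks roughly like this (the helper and names are illustrative, not from the original mail):

```java
import java.util.List;

import javax.cache.Cache;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ScanQuery;

class ScanQueryExample {
    // Returns the number of entries currently readable. With lost
    // partitions this is a partial count unless the partition loss
    // policy blocks reads.
    static int scanSize(IgniteCache<Integer, Integer> cache) {
        List<Cache.Entry<Integer, Integer>> rows =
            cache.query(new ScanQuery<Integer, Integer>()).getAll();
        return rows.size();
    }
}
```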
On Tue, Oct 2, 2018 at 1:37 PM akurbanov wrote:
> Hello Roman,
>
> Correct me if I'm mistaken: you are talking about SQL queries. That was
> fixed under https://issues.apache.org/jira/browse/IGNITE-8927, primary
PartitionLossPolicy controls the access level to the cache in case any
partitions for this cache are not available in the cluster. As far as I
can see, this policy is only consulted for cache put/get operations. Is
there a way to prevent queries from returning results (force an exception)
when cache d