Re: partition loss policy exception??

2020-09-03 Thread Igor Belyakov
When you use the Java thin client, you should expect
"org.apache.ignite.internal.client.thin.ClientServerError" to be thrown
when a "get" operation hits a lost partition. Here is an example:

org.apache.ignite.internal.client.thin.ClientServerError: Ignite failed to
process request [626]: class
org.apache.ignite.internal.processors.cache.CacheInvalidStateException:
Failed to execute cache operation (all partition owners have left the grid,
partition data has been lost) [cacheName=testcache, part=625,
key=UserKeyCacheObjectImpl [part=625, val=625, hasValBytes=false]] (server
status code [1])
at
org.apache.ignite.internal.client.thin.TcpClientChannel.processNextResponse(TcpClientChannel.java:326)
at
org.apache.ignite.internal.client.thin.TcpClientChannel.receive(TcpClientChannel.java:234)
at
org.apache.ignite.internal.client.thin.TcpClientChannel.service(TcpClientChannel.java:171)
at
org.apache.ignite.internal.client.thin.ReliableChannel.service(ReliableChannel.java:160)
at
org.apache.ignite.internal.client.thin.ReliableChannel.affinityService(ReliableChannel.java:222)
at
org.apache.ignite.internal.client.thin.TcpClientCache.cacheSingleKeyOperation(TcpClientCache.java:509)
at
org.apache.ignite.internal.client.thin.TcpClientCache.get(TcpClientCache.java:111)
at com.test.App.main(App.java:56)
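
For reference, here is a minimal sketch of handling this on the client side.
It assumes a server listening on the default thin-client port 10800 and a
cache named "testcache", and it catches the public ClientException on the
assumption that ClientServerError is surfaced as a ClientException subclass
in your version (please verify against your release):

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.ClientException;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class LostPartitionRead {
    public static void main(String[] args) {
        ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");

        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<Integer, Integer> cache = client.cache("testcache");

            try {
                System.out.println(cache.get(625));
            }
            catch (ClientException e) {
                // Reads of keys in lost partitions fail on the server side;
                // the error is propagated back to the thin client.
                System.err.println("Read failed (partition likely lost): " + e.getMessage());
            }
        }
    }
}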

Igor

On Wed, Sep 2, 2020 at 8:35 AM kay  wrote:

> Hello, I have a question about PartitionLossPolicy.
>
> I configured partitionLossPolicy as 'READ_ONLY_SAFE' for a specific cache,
> and a partition loss occurred for some reason I don't know.
>
> Then, when I read from the lost partition using the Java thin client, what
> happens?
>
> Does it throw an exception? If so, what is the exception's name?
>
> I'll be waiting for a reply.
>
> Thank you so much.
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


partition loss policy exception??

2020-09-01 Thread kay
Hello, I have a question about PartitionLossPolicy.

I configured partitionLossPolicy as 'READ_ONLY_SAFE' for a specific cache,
and a partition loss occurred for some reason I don't know.

Then, when I read from the lost partition using the Java thin client, what
happens?

Does it throw an exception? If so, what is the exception's name?

I'll be waiting for a reply.

Thank you so much.

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: partition loss policy

2020-05-28 Thread akorensh
Hi,

See: https://apacheignite.readme.io/docs/partition-loss-policies

Thanks, Alex
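
To make the linked docs concrete, here is a minimal sketch of selecting a
policy on a cache (the cache name and backup count are illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.PartitionLossPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class PolicyExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // With READ_ONLY_SAFE, writes fail while any partition is lost,
            // and reads of keys in lost partitions throw an exception.
            CacheConfiguration<Integer, String> ccfg =
                new CacheConfiguration<Integer, String>("testcache")
                    .setBackups(1)
                    .setPartitionLossPolicy(PartitionLossPolicy.READ_ONLY_SAFE);

            ignite.getOrCreateCache(ccfg);
        }
    }
}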



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


partition loss policy

2020-05-28 Thread kay
Hello, I got a question about the partition loss policy for a cache.

I'm not sure what partition loss is or why this policy is needed.

Suppose there are 4 nodes and backups=1, and 2 nodes crash and die. Then
there are only 2 nodes left. Some data may be lost (is this partition
loss?) if both its primary and backup copies were stored on the crashed
nodes.

If some data are lost and I use the Java thin client to get that data, it
will return null.

So I don't think a partition loss policy is necessary for my case. Is that
right? Do I understand it correctly?
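
Note: even in this case the policy matters, because with a *_SAFE policy the
cache stays restricted after a loss until the lost partitions are explicitly
reset. A minimal sketch, assuming a server node or thick client and a cache
named "testcache":

import java.util.Collections;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class LossRecovery {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, Integer> cache = ignite.cache("testcache");

            // Which partitions are currently marked lost (empty if none).
            System.out.println("Lost partitions: " + cache.lostPartitions());

            // Clears the LOST state cluster-wide, once the crashed nodes are
            // back or the data has been accepted as gone.
            ignite.resetLostPartitions(Collections.singleton("testcache"));
        }
    }
}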



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Partition Loss Policy mismatch blocks server restart

2019-10-23 Thread Evgeniy Rudenko
   
> [Quoted Spring XML configuration; the markup was stripped by the mailing
> list archive. The surviving fragments show QueryIndex entries on the
> "proto", "kafkaTime", and "ingestTime" fields, a CacheKeyConfiguration, and
> a TcpDiscoverySpi whose TcpDiscoveryVmIpFinder lists 10.207.86.89,
> 10.207.86.52, 10.207.86.37, 10.207.86.99, 10.207.86.51, and 10.207.86.112,
> each with port range 47500..47509.]
>
> *From: *Evgeniy Rudenko 
> *Reply-To: *"user@ignite.apache.org" 
> *Date: *Tuesday, October 22, 2019 at 2:27 AM
> *To: *"user@ignite.apache.org" 
> *Subject: *Re: Partition Loss Policy mismatch blocks server restart
>
>
>
> But where do you define AlphaCaseTelProtobufCache? It is missing from the
> attached config.
>
> This is expected behaviour if it is a new cache that you are adding in the
> code of the server node, because you already have a cache with a different
> PartitionLossPolicy in that group in your running cluster. Just change its
> loss policy to READ_WRITE_SAFE and the server node will be able to join.
>
>
>
> On Mon, Oct 21, 2019 at 7:53 PM Conrad Mukai (cmukai) 
> wrote:
>
> Here is a log of the failed start up:
>
>
>
> cp: can't stat '/opt/ignite/apache-ignite/libs/optional/ignite-spring': No
> such file or directory
>
> cp: can't stat '/opt/ignite/apache-ignite/libs/optional/ignite-indexing':
> No such file or directory
>
> [21:54:08]__  
>
> [21:54:08]   /  _/ ___/ |/ /  _/_  __/ __/
>
> [21:54:08]  _/ // (7 7// /  / / / _/
>
> [21:54:08] /___/\___/_/|_/___/ /_/ /___/
>
> [21:54:08]
>
> [21:54:08] ver. 2.7.6#20190911-sha1:21f7ca41
>
> [21:54:08] 2019 Copyright(C) Apache Software Foundation
>
> [21:54:08]
>
> [21:54:08] Ignite documentation: http://ignite.apache.org
>
> [21:54:08]
>
> [21:54:08] Quiet mode.
>
> [21:54:08]   ^-- Logging to file
> '/opt/ignite/apache-ignite/work/log/ignite-7e6a9d33.log'
>
> [21:54:08]   ^-- Logging by 'Log4JLogger [quiet=true,
> config=/opt/ignite/apache-ignite/config/ignite-log4j.xml]'
>
> [21:54:08]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
> or "-v" to ignite.{sh|bat}
>
> [21:54:08]
>
> [21:54:08] OS: Linux 3.10.0-957.1.3.el7.x86_64 amd64
>
> [21:54:08] VM information: OpenJDK Runtime Environment 1.8.0_212-b04
> IcedTea OpenJDK 64-Bit Server VM 25.212-b04
>
> [21:54:08] Configured plugins:
>
> [21:54:08]   ^-- None
>
> [21:54:08]
>
> [21:54:08] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
> [tryS

Re: Partition Loss Policy mismatch blocks server restart

2019-10-22 Thread Evgeniy Rudenko
But where do you define AlphaCaseTelProtobufCache? It is missing from the
attached config.

This is expected behaviour if it is a new cache that you are adding in the
code of the server node, because you already have a cache with a different
PartitionLossPolicy in that group in your running cluster. Just change its
loss policy to READ_WRITE_SAFE and the server node will be able to join.
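
For illustration, the fix in code form: a sketch of a cache configuration
whose loss policy matches the rest of the group (the names are taken from the
error message; your actual configuration may differ):

import org.apache.ignite.cache.PartitionLossPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class GroupPolicyFix {
    public static void main(String[] args) {
        // Every cache in a cache group must declare the same
        // PartitionLossPolicy. READ_WRITE_SAFE matches the existing
        // AlphaCaseTelProtobufCache1 in group_data_loom.
        CacheConfiguration<Object, Object> ccfg =
            new CacheConfiguration<>("AlphaCaseTelProtobufCache")
                .setGroupName("group_data_loom")
                .setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE);

        System.out.println(ccfg.getName() + " -> " + ccfg.getPartitionLossPolicy());
    }
}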

On Mon, Oct 21, 2019 at 7:53 PM Conrad Mukai (cmukai) 
wrote:

> Here is a log of the failed start up:
>
>
>
> cp: can't stat '/opt/ignite/apache-ignite/libs/optional/ignite-spring': No
> such file or directory
>
> cp: can't stat '/opt/ignite/apache-ignite/libs/optional/ignite-indexing':
> No such file or directory
>
> [21:54:08]__  
>
> [21:54:08]   /  _/ ___/ |/ /  _/_  __/ __/
>
> [21:54:08]  _/ // (7 7// /  / / / _/
>
> [21:54:08] /___/\___/_/|_/___/ /_/ /___/
>
> [21:54:08]
>
> [21:54:08] ver. 2.7.6#20190911-sha1:21f7ca41
>
> [21:54:08] 2019 Copyright(C) Apache Software Foundation
>
> [21:54:08]
>
> [21:54:08] Ignite documentation: http://ignite.apache.org
>
> [21:54:08]
>
> [21:54:08] Quiet mode.
>
> [21:54:08]   ^-- Logging to file
> '/opt/ignite/apache-ignite/work/log/ignite-7e6a9d33.log'
>
> [21:54:08]   ^-- Logging by 'Log4JLogger [quiet=true,
> config=/opt/ignite/apache-ignite/config/ignite-log4j.xml]'
>
> [21:54:08]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
> or "-v" to ignite.{sh|bat}
>
> [21:54:08]
>
> [21:54:08] OS: Linux 3.10.0-957.1.3.el7.x86_64 amd64
>
> [21:54:08] VM information: OpenJDK Runtime Environment 1.8.0_212-b04
> IcedTea OpenJDK 64-Bit Server VM 25.212-b04
>
> [21:54:08] Configured plugins:
>
> [21:54:08]   ^-- None
>
> [21:54:08]
>
> [21:54:08] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
> [tryStop=false, timeout=0, super=AbstractFailureHandler
> [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED,
> SYSTEM_CRITICAL_OPERATION_TIMEOUT
>
> [21:54:08] Message queue limit is set to 0 which may lead to potential
> OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due
> to message queues growth on sender and receiver sides.
>
> [21:54:08] Security status [authentication=off, tls/ssl=off]
>
> [21:54:08] Automatically adjusted max WAL archive size to 8.0 GiB (to
> override, use DataStorageConfiguration.setMaxWalArhiveSize)
>
> [2019-10-20 21:54:10,107][ERROR][main][IgniteKernal] Exception during
> start processors, node will be stopped and close connections
>
> class org.apache.ignite.IgniteCheckedException: Partition Loss Policy
> mismatch for caches related to the same group [groupName=group_data_loom,
> existingCache=AlphaCaseTelProtobufCache1,
> existingPartitionLossPolicy=READ_WRITE_SAFE,
> startingCache=AlphaCaseTelProtobufCache, startingPartitionLossPolicy=IGNORE]
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheUtils.validateCacheGroupsAttributesMismatch(GridCacheUtils.java:1052)
>
> at
> org.apache.ignite.internal.processors.cache.ClusterCachesInfo.validateCacheGroupConfiguration(ClusterCachesInfo.java:1965)
>
> at
> org.apache.ignite.internal.processors.cache.ClusterCachesInfo.onStart(ClusterCachesInfo.java:152)
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCachesOnStart(GridCacheProcessor.java:762)
>
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.onReadyForRead(GridCacheProcessor.java:737)
>
> at
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.notifyMetastorageReadyForRead(GridCacheDatabaseSharedManager.java:409)
>
> at
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.readMetastore(GridCacheDatabaseSharedManager.java:675)
>
> at
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.notifyMetaStorageSubscribersOnReadyForRead(GridCacheDatabaseSharedManager.java:4730)
>
> at
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1048)
>
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
>
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730)
>
> at
> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158)
>
> at
> org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1076)
>
> at
> org.apache.ignite.internal.IgnitionEx.start(

Re: Partition Loss Policy mismatch blocks server restart

2019-10-21 Thread Conrad Mukai (cmukai)
Here is a log of the failed start up:

cp: can't stat '/opt/ignite/apache-ignite/libs/optional/ignite-spring': No such 
file or directory
cp: can't stat '/opt/ignite/apache-ignite/libs/optional/ignite-indexing': No 
such file or directory
[21:54:08]__  
[21:54:08]   /  _/ ___/ |/ /  _/_  __/ __/
[21:54:08]  _/ // (7 7// /  / / / _/
[21:54:08] /___/\___/_/|_/___/ /_/ /___/
[21:54:08]
[21:54:08] ver. 2.7.6#20190911-sha1:21f7ca41
[21:54:08] 2019 Copyright(C) Apache Software Foundation
[21:54:08]
[21:54:08] Ignite documentation: http://ignite.apache.org
[21:54:08]
[21:54:08] Quiet mode.
[21:54:08]   ^-- Logging to file 
'/opt/ignite/apache-ignite/work/log/ignite-7e6a9d33.log'
[21:54:08]   ^-- Logging by 'Log4JLogger [quiet=true, 
config=/opt/ignite/apache-ignite/config/ignite-log4j.xml]'
[21:54:08]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or 
"-v" to ignite.{sh|bat}
[21:54:08]
[21:54:08] OS: Linux 3.10.0-957.1.3.el7.x86_64 amd64
[21:54:08] VM information: OpenJDK Runtime Environment 1.8.0_212-b04 IcedTea 
OpenJDK 64-Bit Server VM 25.212-b04
[21:54:08] Configured plugins:
[21:54:08]   ^-- None
[21:54:08]
[21:54:08] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler 
[tryStop=false, timeout=0, super=AbstractFailureHandler 
[ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED, 
SYSTEM_CRITICAL_OPERATION_TIMEOUT
[21:54:08] Message queue limit is set to 0 which may lead to potential OOMEs 
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to 
message queues growth on sender and receiver sides.
[21:54:08] Security status [authentication=off, tls/ssl=off]
[21:54:08] Automatically adjusted max WAL archive size to 8.0 GiB (to override, 
use DataStorageConfiguration.setMaxWalArhiveSize)
[2019-10-20 21:54:10,107][ERROR][main][IgniteKernal] Exception during start 
processors, node will be stopped and close connections
class org.apache.ignite.IgniteCheckedException: Partition Loss Policy mismatch 
for caches related to the same group [groupName=group_data_loom, 
existingCache=AlphaCaseTelProtobufCache1, 
existingPartitionLossPolicy=READ_WRITE_SAFE, 
startingCache=AlphaCaseTelProtobufCache, startingPartitionLossPolicy=IGNORE]
at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.validateCacheGroupsAttributesMismatch(GridCacheUtils.java:1052)
at 
org.apache.ignite.internal.processors.cache.ClusterCachesInfo.validateCacheGroupConfiguration(ClusterCachesInfo.java:1965)
at 
org.apache.ignite.internal.processors.cache.ClusterCachesInfo.onStart(ClusterCachesInfo.java:152)
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCachesOnStart(GridCacheProcessor.java:762)
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.onReadyForRead(GridCacheProcessor.java:737)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.notifyMetastorageReadyForRead(GridCacheDatabaseSharedManager.java:409)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.readMetastore(GridCacheDatabaseSharedManager.java:675)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.notifyMetaStorageSubscribersOnReadyForRead(GridCacheDatabaseSharedManager.java:4730)
at 
org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1048)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1730)
at 
org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1158)
at 
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1076)
at 
org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:962)
at 
org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:861)
at 
org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:731)
at 
org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:700)
at org.apache.ignite.Ignition.start(Ignition.java:348)
at 
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:301)
[2019-10-20 21:54:10,115][ERROR][main][IgniteKernal] Got exception while 
starting (will rollback startup routine).
class org.apache.ignite.IgniteCheckedException: Partition Loss Policy mismatch 
for caches related to the same group [groupName=group_data_loom, 
existingCache=AlphaCaseTelProtobufCache1, 
existingPartitionLossPolicy=READ_WRITE_SAFE, 
startingCache=AlphaCaseTelProtobufCache, startingPartitionLossPolicy=IGNORE]
at 
org.apache.ignite.internal.processors.cache.GridCacheUtils.validateCacheGroupsAttributesMismatch(GridCacheUtils

Re: Partition Loss Policy mismatch blocks server restart

2019-10-21 Thread Evgeniy Rudenko
Hi Conrad

All caches in the group should have the same partitionLossPolicy. Cache
with different partitionLossPolicy should not be allowed to join the group.

Could you tell us which version of Ignite you are using? Also, could you
attach the full logs and full XML configurations so we can check?

On Mon, Oct 21, 2019 at 1:16 AM Conrad Mukai (cmukai) 
wrote:

> I set up a cluster of server nodes with the following cacheConfiguration:
>
> [Quoted Spring XML cacheConfiguration; the markup was stripped by the
> mailing list archive apart from a CacheConfiguration bean declaration.]
>
> Apparently an application uploaded a client configuration with a cache
> group and now I cannot restart the cluster with the original configuration.
> I get the following error:
>
>
>
> Caused by: class org.apache.ignite.IgniteCheckedException: Partition Loss
> Policy mismatch for caches related to the same group
> [groupName=group_data_loom, existingCache=AlphaCaseTelProtobufCache1,
> existingPartitionLossPolicy=READ_WRITE_SAFE,
> startingCache=AlphaCaseTelProtobufCache, startingPartitionLossPolicy=IGNORE]
>
>
>
> The first question is: how can I restart my cluster? The second question
> is: is this really due to the client configuration, and if so, why is it
> possible for a client to break the entire service restart?
>
>
>
> Thanks in advance,
>
> Conrad
>
>
>


-- 
Best regards,
Evgeniy


Partition Loss Policy mismatch blocks server restart

2019-10-20 Thread Conrad Mukai (cmukai)
I set up a cluster of server nodes with the following cacheConfiguration:

[Spring XML configuration stripped by the mailing list archive.]

Apparently an application uploaded a client configuration with a cache group 
and now I cannot restart the cluster with the original configuration. I get the 
following error:

Caused by: class org.apache.ignite.IgniteCheckedException: Partition Loss 
Policy mismatch for caches related to the same group 
[groupName=group_data_loom, existingCache=AlphaCaseTelProtobufCache1, 
existingPartitionLossPolicy=READ_WRITE_SAFE, 
startingCache=AlphaCaseTelProtobufCache, startingPartitionLossPolicy=IGNORE]

The first question is: how can I restart my cluster? The second question is:
is this really due to the client configuration, and if so, why is it possible
for a client to break the entire service restart?

Thanks in advance,
Conrad



RE: Partition Loss Policy options

2018-10-10 Thread Stanislav Lukyanov
Oh, sure, you’re right.
It turns out it only works this way for in-memory caches; it needs to be fixed
for persistence.
Filed https://issues.apache.org/jira/browse/IGNITE-9841 for this.
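
For context, a minimal sketch of enabling native persistence, the mode where
the SAFE policies are not yet honored per IGNITE-9841 (default settings
assumed):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistentNode {
    public static void main(String[] args) {
        // Persistence is what distinguishes this case from the in-memory one.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration()
            .setDefaultDataRegionConfiguration(
                new DataRegionConfiguration().setPersistenceEnabled(true));

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storageCfg);

        Ignite ignite = Ignition.start(cfg);

        // With persistence enabled, the cluster starts inactive and must be
        // activated before caches can be used.
        ignite.cluster().active(true);
    }
}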

Thanks for reporting this!

Stan

From: Roman Novichenok
Sent: October 9, 2018 22:46
To: user@ignite.apache.org
Subject: Re: Partition Loss Policy options

Stan,
thanks for looking into it.  I agree with you that this is the observed 
behaviour, but it is not what I would expect.

My expectation would be to get an exception when I attempt to query unavailable 
partitions.  Ideally this behavior would be dependent on the 
PartitionLossPolicy.  When READ_ONLY_SAFE or READ_WRITE_SAFE policy is 
selected, and the query condition does not explicitly specify which partitions 
it is interested in, then query should fail. Query could implicitly specify 
partitions by including indexed value in the where clause.  Simpler 
implementation could just raise exceptions on queries when policy is ..._SAFE 
and some partitions are unavailable.

Thanks again,
Roman

On Tue, Oct 9, 2018 at 2:54 PM Stanislav Lukyanov  
wrote:
Hi,
 
I’ve tried your test and it works as expected, with some partitions lost and 
the final size being ~850 (~150 less than on the start).
Am I missing something?
 
Thanks,
Stan
 
From: Roman Novichenok
Sent: October 2, 2018 22:21
To: user@ignite.apache.org
Subject: Re: Partition Loss Policy options
 
Anton,
thanks for quick response.  Not sure if I'm setting wrong expectations.  Just 
tried sql query and that exhibited the same behavior.  Created a pull request 
with a test: https://github.com/novicr/ignite/pull/3.  
 
The test goes through the following steps:
1. creates a 4 node cluster with persistence enabled.  
2. creates 2 caches - with setBackups(1)
3. populates caches with 1000 elements
4. runs a sql query and prints result size: 1000
5. stops 2 nodes 
6. runs sql query from step 4 and prints result size: (something less than 
1000).
 
Thanks,
Roman
 
 
On Tue, Oct 2, 2018 at 2:07 PM Roman Novichenok  
wrote:
I was looking at scan queries. cache.query(new ScanQuery()) returns partial 
results. 
 
On Tue, Oct 2, 2018 at 1:37 PM akurbanov  wrote:
Hello Roman,

Correct me if I'm mistaken, you are talking about SQL queries. That was
fixed under https://issues.apache.org/jira/browse/IGNITE-8927, primary
ticket is https://issues.apache.org/jira/browse/IGNITE-8834, will be
delivered in 2.7 release.

Regards,
Anton



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
 



Re: Partition Loss Policy options

2018-10-09 Thread Roman Novichenok
Stan,
thanks for looking into it. I agree that this is the observed behaviour, but
it is not what I would expect.

My expectation would be to get an exception when I attempt to query
unavailable partitions. Ideally this behaviour would depend on the
PartitionLossPolicy: when the READ_ONLY_SAFE or READ_WRITE_SAFE policy is
selected and the query condition does not explicitly specify which partitions
it is interested in, the query should fail. A query could implicitly specify
partitions by including an indexed value in the WHERE clause. A simpler
implementation could just raise an exception on queries when the policy is
..._SAFE and some partitions are unavailable.
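
As a sketch of the "explicitly specify partitions" idea: SQL queries can
already be pinned to a partition set, in which case the result is
well-defined regardless of other partitions. The cache name, value type, and
partition numbers below are illustrative:

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class PinnedQuery {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, Integer> cache = ignite.cache("c1");

            // The query runs only against partitions 0-2; data in other,
            // possibly lost, partitions is never silently included or skipped.
            SqlFieldsQuery qry = new SqlFieldsQuery("select _key, _val from Integer")
                .setPartitions(0, 1, 2);

            for (List<?> row : cache.query(qry).getAll())
                System.out.println(row);
        }
    }
}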

Thanks again,
Roman

On Tue, Oct 9, 2018 at 2:54 PM Stanislav Lukyanov 
wrote:

> Hi,
>
>
>
> I’ve tried your test and it works as expected, with some partitions lost
> and the final size being ~850 (~150 less than on the start).
>
> Am I missing something?
>
>
>
> Thanks,
>
> Stan
>
>
>
> *From: *Roman Novichenok 
> *Sent: *October 2, 2018 22:21
> *To: *user@ignite.apache.org
> *Subject: *Re: Partition Loss Policy options
>
>
>
> Anton,
>
> thanks for quick response.  Not sure if I'm setting wrong expectations.
> Just tried sql query and that exhibited the same behavior.  Created a pull
> request with a test: https://github.com/novicr/ignite/pull/3.
>
>
>
> The test goes through the following steps:
>
> 1. creates a 4 node cluster with persistence enabled.
>
> 2. creates 2 caches - with setBackups(1)
>
> 3. populates caches with 1000 elements
>
> 4. runs a sql query and prints result size: 1000
>
> 5. stops 2 nodes
>
> 6. runs sql query from step 4 and prints result size: (something less than
> 1000).
>
>
>
> Thanks,
>
> Roman
>
>
>
>
>
> On Tue, Oct 2, 2018 at 2:07 PM Roman Novichenok <
> roman.noviche...@gmail.com> wrote:
>
> I was looking at scan queries. cache.query(new ScanQuery()) returns
> partial results.
>
>
>
> On Tue, Oct 2, 2018 at 1:37 PM akurbanov  wrote:
>
> Hello Roman,
>
> Correct me if I'm mistaken, you are talking about SQL queries. That was
> fixed under https://issues.apache.org/jira/browse/IGNITE-8927, primary
> ticket is https://issues.apache.org/jira/browse/IGNITE-8834, will be
> delivered in 2.7 release.
>
> Regards,
> Anton
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>


RE: Partition Loss Policy options

2018-10-09 Thread Stanislav Lukyanov
Hi,

I’ve tried your test and it works as you describe, with some partitions lost
and the final size being ~850 (~150 less than at the start).
Am I missing something?

Thanks,
Stan

From: Roman Novichenok
Sent: October 2, 2018 22:21
To: user@ignite.apache.org
Subject: Re: Partition Loss Policy options

Anton,
thanks for quick response.  Not sure if I'm setting wrong expectations.  Just 
tried sql query and that exhibited the same behavior.  Created a pull request 
with a test: https://github.com/novicr/ignite/pull/3.  

The test goes through the following steps:
1. creates a 4 node cluster with persistence enabled.  
2. creates 2 caches - with setBackups(1)
3. populates caches with 1000 elements
4. runs a sql query and prints result size: 1000
5. stops 2 nodes 
6. runs sql query from step 4 and prints result size: (something less than 
1000).

Thanks,
Roman


On Tue, Oct 2, 2018 at 2:07 PM Roman Novichenok  
wrote:
I was looking at scan queries. cache.query(new ScanQuery()) returns partial 
results. 

On Tue, Oct 2, 2018 at 1:37 PM akurbanov  wrote:
Hello Roman,

Correct me if I'm mistaken, you are talking about SQL queries. That was
fixed under https://issues.apache.org/jira/browse/IGNITE-8927, primary
ticket is https://issues.apache.org/jira/browse/IGNITE-8834, will be
delivered in 2.7 release.

Regards,
Anton



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Partition Loss Policy options

2018-10-02 Thread Roman Novichenok
Anton,
thanks for the quick response. I'm not sure if I'm setting the wrong
expectations. I just tried a SQL query and it exhibited the same behavior. I
created a pull request with a test: https://github.com/novicr/ignite/pull/3.

The test goes through the following steps (a condensed sketch of them appears
after the list):
1. creates a 4-node cluster with persistence enabled.
2. creates 2 caches with setBackups(1).
3. populates the caches with 1000 elements.
4. runs a SQL query and prints the result size: 1000.
5. stops 2 nodes.
6. runs the SQL query from step 4 and prints the result size (something less
than 1000).
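
For reference, a condensed sketch of these steps on a single host (instance
names are illustrative, and only one cache is shown):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PartitionLossRepro {
    private static IgniteConfiguration cfg(int idx) {
        return new IgniteConfiguration()
            .setIgniteInstanceName("node-" + idx)
            .setDataStorageConfiguration(new DataStorageConfiguration()
                .setDefaultDataRegionConfiguration(
                    new DataRegionConfiguration().setPersistenceEnabled(true)));
    }

    public static void main(String[] args) {
        // 1. Start a 4-node cluster with persistence enabled.
        Ignite ignite = Ignition.start(cfg(0));
        for (int i = 1; i < 4; i++)
            Ignition.start(cfg(i));

        ignite.cluster().active(true);

        // 2. Create a cache with one backup (a second cache would be analogous).
        CacheConfiguration<Integer, Integer> cc =
            new CacheConfiguration<Integer, Integer>("c1")
                .setBackups(1)
                .setIndexedTypes(Integer.class, Integer.class);

        // 3. Populate it with 1000 elements.
        IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(cc);
        for (int k = 0; k < 1000; k++)
            cache.put(k, k);

        SqlFieldsQuery count = new SqlFieldsQuery("select count(*) from Integer");

        // 4. Prints [[1000]].
        System.out.println(cache.query(count).getAll());

        // 5. Stop 2 nodes; partitions whose primary and backup both lived on
        //    them are now lost.
        Ignition.stop("node-2", true);
        Ignition.stop("node-3", true);

        // 6. Prints something less than 1000 (the reported behaviour).
        System.out.println(cache.query(count).getAll());
    }
}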

Thanks,
Roman


On Tue, Oct 2, 2018 at 2:07 PM Roman Novichenok 
wrote:

> I was looking at scan queries. cache.query(new ScanQuery()) returns
> partial results.
>
> On Tue, Oct 2, 2018 at 1:37 PM akurbanov  wrote:
>
>> Hello Roman,
>>
>> Correct me if I'm mistaken, you are talking about SQL queries. That was
>> fixed under https://issues.apache.org/jira/browse/IGNITE-8927, primary
>> ticket is https://issues.apache.org/jira/browse/IGNITE-8834, will be
>> delivered in 2.7 release.
>>
>> Regards,
>> Anton
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Partition Loss Policy options

2018-10-02 Thread Roman Novichenok
I was looking at scan queries. cache.query(new ScanQuery()) returns partial
results.

On Tue, Oct 2, 2018 at 1:37 PM akurbanov  wrote:

> Hello Roman,
>
> Correct me if I'm mistaken, you are talking about SQL queries. That was
> fixed under https://issues.apache.org/jira/browse/IGNITE-8927, primary
> ticket is https://issues.apache.org/jira/browse/IGNITE-8834, will be
> delivered in 2.7 release.
>
> Regards,
> Anton
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Partition Loss Policy options

2018-10-02 Thread akurbanov
Hello Roman,

Correct me if I'm mistaken: you are talking about SQL queries. That was
fixed under https://issues.apache.org/jira/browse/IGNITE-8927 (the primary
ticket is https://issues.apache.org/jira/browse/IGNITE-8834) and will be
delivered in the 2.7 release.

Regards,
Anton



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Partition Loss Policy options

2018-10-02 Thread Roman Novichenok
PartitionLossPolicy controls the level of access to a cache when some of its
partitions are not available on the cluster. As far as I can see, this policy
is only consulted for cache put/get operations. Is there a way to prevent
queries from returning results (i.e., force an exception) when cache data is
partially unavailable?
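
One possible workaround, sketched under the assumption of a server node or
thick client and a cache named "c1": check lostPartitions() yourself and fail
before running the query:

import java.util.Collection;
import javax.cache.CacheException;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ScanQuery;

public class GuardedScan {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, Integer> cache = ignite.cache("c1");

            // Refuse to run the query while any partition of the cache is lost.
            Collection<Integer> lost = cache.lostPartitions();

            if (!lost.isEmpty())
                throw new CacheException("Refusing to query; lost partitions: " + lost);

            cache.query(new ScanQuery<Integer, Integer>()).getAll().forEach(e ->
                System.out.println(e.getKey() + " -> " + e.getValue()));
        }
    }
}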

thanks,
Roman