Re: GridTimeoutProcessor - Timeout has occurred - Too frequent messages

2019-03-08 Thread userx
Hi,

Any thoughts on the same?





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Re: TPS does not increase even though new server nodes added

2019-03-08 Thread javastuff....@gmail.com
Check out the thread dump below:

   java.lang.Thread.State: BLOCKED (on object monitor)
at java.io.PrintStream.println(PrintStream.java:805)
- waiting to lock <0x85d6d798> (a java.io.PrintStream)
at com.example.ignite.service.IgniteCacheImpl.doInTransaction(IgniteCacheImpl.java:57)
at com.example.ignite.cotroller.ItemIgniteController.cache(ItemIgniteController.java:34)

Stop writing to the output stream. System.out.println is a synchronized
operation, and the threads above are blocked waiting on the PrintStream monitor.
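To illustrate the advice (this is a sketch, not code from the thread): System.out.println serializes all callers on the shared PrintStream lock, which is exactly the BLOCKED state in the dump. One way around it is to hand messages to a queue drained by a single writer thread, so the hot path never touches the stream:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncLog {
    private static final BlockingQueue<String> QUEUE = new LinkedBlockingQueue<>();

    static {
        // Only this daemon thread ever touches stdout, so application
        // threads never contend on the PrintStream monitor.
        Thread writer = new Thread(() -> {
            try {
                while (true)
                    System.out.println(QUEUE.take());
            } catch (InterruptedException ignored) { }
        });
        writer.setDaemon(true);
        writer.start();
    }

    // Called from transaction code instead of System.out.println.
    // offer() on an unbounded queue returns immediately without blocking.
    public static boolean log(String msg) {
        return QUEUE.offer(msg);
    }
}
```

In practice a logging framework with an async appender does the same thing; the point is just to get blocking I/O out of doInTransaction.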

Thanks,
-Sam 





isolated cluster configuration

2019-03-08 Thread javastuff....@gmail.com
Hi,

We have a requirement for 2 separate cache clusters isolated from each other.
We have 2 separate configuration files and Java programs to initialize them.
We achieved this by using non-intersecting IPs and ports for the different
clusters while using TCP discovery.

However, we need to achieve the same using DB discovery. Is there a way to
configure 2 separate cache clusters using DB discovery?
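For reference, a sketch of the TCP-discovery isolation described above (names and ports here are made up, not from the post). Each cluster lists only its own discovery port, so the rings never intersect; with JDBC-based discovery the analogous move would be pointing each cluster's TcpDiscoveryJdbcIpFinder at its own DataSource so they never see each other's address table:

```java
import java.util.Arrays;

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class IsolatedClusters {
    static IgniteConfiguration clusterCfg(String name, int discoPort) {
        // Each cluster advertises only its own discovery port.
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList("127.0.0.1:" + discoPort));

        TcpDiscoverySpi disco = new TcpDiscoverySpi()
            .setLocalPort(discoPort)
            .setLocalPortRange(0); // do not probe neighbouring ports

        disco.setIpFinder(ipFinder);

        return new IgniteConfiguration()
            .setIgniteInstanceName(name)
            .setDiscoverySpi(disco);
    }

    public static void main(String[] args) {
        // Two configurations whose discovery rings cannot overlap.
        IgniteConfiguration clusterA = clusterCfg("clusterA", 47500);
        IgniteConfiguration clusterB = clusterCfg("clusterB", 48500);
        // Ignition.start(clusterA) / Ignition.start(clusterB) would then
        // form two fully isolated clusters on the same hosts.
    }
}
```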

Thanks,
Sam  





Re: Affinity aware version of ICompute.Apply() in Ignite C# client

2019-03-08 Thread Raymond Wilson
Thanks Pavel :)

Sent from my iPhone

On 8/03/2019, at 9:47 PM, Pavel Tupitsyn  wrote:

>  - Is this a capability defined in the Java client but not yet available
in the C# client?
There is no such API in Java AFAIK

>  - If not, is the standard practice to include the argument into the
compute func state that is sent to the node to execute it?
Yes


Thanks,
Pavel



On Thu, Mar 7, 2019 at 11:19 PM Raymond Wilson 
wrote:

> I have a current workflow where I use ICompute.Apply() and
> ICompute.BroadCast to execute a compute function with an argument and
> return a response. In the former I want any one of a number of nodes in the
> projection to execute the compute func, in the latter I want all the nodes
> to execute the compute func.
>
> I have a new workflow where I want to execute a compute func with an
> argument on a node based on an affinity key.
>
> ICompute defines AffinityCall() which takes a cache name, an affinity key
> relevant to the cache, and a compute func, then returns a result. However,
> unlike Apply and Broadcast it does not accept an argument parameter.
>
> Looking through the ICompute interface it seems that there is no similar
> method available that does take an argument.
>
> Two questions:
>
> - Is this a capability defined in the Java client but not yet available in
> the C# client?
> - If not, is the standard practice to include the argument into the
> compute func state that is sent to the node to execute it?
>
> Thanks,
> Raymond.
>
>
>


Re: Why does write behind require to enable write through?

2019-03-08 Thread relax ken
Thanks! Very helpful! That is all I wanted to know

On Fri, 8 Mar 2019, 09:04 Ivan Pavlukhin wrote:

> Hi,
>
> Ignite CacheConfiguration inherits the setWriteThrough method from the
> JCache API [1]. Enabled writeThrough in JCache means that the cache is
> backed by some other storage to which writes are propagated along with
> writes to the cache. Unfortunately (or not?) JCache uses the write-through
> term while Ignite supports a write-behind propagation policy. Perhaps the
> API is not clear enough, but I think JCache conformance is the clue here.
>
> [1]
> https://static.javadoc.io/javax.cache/cache-api/1.1.0/javax/cache/configuration/MutableConfiguration.html#setWriteThrough-boolean-
>
> On Tue, 5 Mar 2019 at 16:59, relax ken wrote:
> >
> > Thanks Ilya. I guess conceptually there are many explanations and
> > definitions of those two on the Internet, which may agree, disagree, or
> > reach consensus on some points. My question is more about their impact
> > when they are true or false in Ignite.
> >
> > For example, if it's always the case, why doesn't Ignite just
> > encapsulate this kind of assumption, take care of it, and automatically
> > set write-through to true when write-behind is set to true? Why does
> > Ignite give this option and warning? Will there be any difference when
> > write-behind is true but write-through is not? I am trying to understand
> > these options more deeply to avoid any unexpected behaviour.
> >
> > On Tue, Mar 5, 2019 at 1:46 PM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
> >>
> >> Hello!
> >>
> >> It is because write-behind is a kind of write-through, like random
> >> access memory is a kind of computer memory.
> >>
> >> Regards,
> >> --
> >> Ilya Kasnacheev
> >>
> >>
> >> On Tue, 5 Mar 2019 at 14:43, relax ken wrote:
> >>>
> >>> Hi,
> >>>
> >>> I am new to Ignite. When I enable write behind, I always get a warning
> "Write-behind mode for the cache store also requires
> CacheConfiguration.setWriteThrough(true) property." Why does write behind
> require write through when I am using write behind only?
> >>>
> >>> Here is my configuration
> >>>
> >>> CacheConfiguration cconfig = new
> CacheConfiguration<>();
> >>> cconfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
> >>> cconfig.setCacheMode(CacheMode.PARTITIONED);
> >>> cconfig.setName(Constants.DocCacheName);
> >>> cconfig.setExpiryPolicyFactory(TouchedExpiryPolicy.factoryOf(new
> Duration(TimeUnit.SECONDS, cacheConfig.cacheExpirySecs)));
> >>>
> >>> cconfig.setWriteBehindEnabled(true);
> >>>
> cconfig.setWriteBehindFlushFrequency(cacheConfig.writeBehindFlushIntervalSec);
> // MS
> >>>
> >>>
> >>> Thanks
> >>>
> >>> Ken
>
>
>
> --
> Best regards,
> Ivan Pavlukhin
>


Re: cache is moved to a read-only state

2019-03-08 Thread Aat
visor> cache -slp -c=marketData
Lost partitions for cache: marketData (12)
+=+
| Interval |Partitions|
+=+
| 86-816   | 86, 115, 241, 469, 632, 677, 719, 781, 791, 816, |
| 892-1014 | 892, 1014|
+-+
visor>






Re: cache is moved to a read-only state

2019-03-08 Thread Aat
from git :


PartitionLossPolicy partLossPlc = grp.config().getPartitionLossPolicy();

if (grp.needsRecovery() && !recovery) {
    if (!read && (partLossPlc == READ_ONLY_SAFE || partLossPlc == READ_ONLY_ALL))
        return new IgniteCheckedException("Failed to write to cache " +
            "(cache is moved to a read-only state): " + cctx.name());
}

and in my config I set:

    .setPartitionLossPolicy(READ_ONLY_ALL)

/**
 * All writes to the cache will be failed with an exception. All reads
 * will proceed as if all partitions were in a consistent state. The
 * result of reading from a lost partition is undefined and may be
 * different on different nodes in the cluster.
 */
READ_ONLY_ALL,


What's the best configuration? IGNORE?

I'm going to test it.
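Separately from the policy choice: once the nodes owning the lost partitions are back, the cache stays read-only until the lost state is reset explicitly. A sketch (not from the thread; the cache name is taken from the logs above):

```java
import java.util.Collections;

import org.apache.ignite.Ignite;

public class ResetLost {
    static void recover(Ignite ignite) {
        // Clears the "lost" flag on marketData's partitions once their
        // owners have rejoined; writes are then allowed again.
        ignite.resetLostPartitions(Collections.singleton("marketData"));
    }
}
```

With the IGNORE policy no exception is thrown in the first place, but reads from formerly lost partitions may silently return stale or missing data, so the trade-off is safety versus availability.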








Re: cache is moved to a read-only state

2019-03-08 Thread Aat
My MarketData service is split into four publishers that put values into the
same cache. Maybe the cache does not support concurrent writes?
__

public class CacheConfig extends CacheConfiguration {

    public CacheConfig() {
        setCacheMode(CacheMode.PARTITIONED);
        setGroupName("quotationFeed");

        setStoreByValue(false);
        setWriteThrough(false);
        setReadThrough(false);

        setBackups(1);
        setPartitionLossPolicy(READ_ONLY_ALL);
        setWriteSynchronizationMode(PRIMARY_SYNC);

        setRebalanceMode(CacheRebalanceMode.SYNC);
        setRebalanceThrottle(100);
        setRebalanceBatchSize(2 * 1024 * 1024);

        setStatisticsEnabled(true);
        setManagementEnabled(true);
    }
}
__
etPair(BELDER, B.KBC9M6000C)]
javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: Failed to write to cache (cache is moved to a read-only state): marketData
    at org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1337)
    at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1758)
    at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.putAll(IgniteCacheProxyImpl.java:1171)
    at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.putAll(GatewayProtectedCacheProxy.java:868)
    at com.socgen.folab.applications.qpw.domain.processor.RtMarketDataContainer.dispatchToCache(RtMarketDataContainer.java:243)
    at com.socgen.folab.applications.qpw.domain.processor.RtMarketDataContainer$3.run(RtMarketDataContainer.java:179)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.ignite.IgniteCheckedException: Failed to write to cache (cache is moved to a read-only state): marketData
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFutureAdapter.validateCache(GridDhtTopologyFutureAdapter.java:97)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapOnTopology(GridNearAtomicUpdateFuture.java:638)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:248)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAll0(GridDhtAtomicCache.java:1105)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.putAll0(GridDhtAtomicCache.java:653)
    at org.apache.ignite.internal.processors.cache.GridCacheAdapter.putAll(GridCacheAdapter.java:2932)
    at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.putAll(IgniteCacheProxyImpl.java:1168)
    ... 8 common frames omitted
__
I tried the streaming package but I got another issue (stream closed)...

Comment: one service flushes these messages while the others continue to
publish.

Any ideas?







Re: Can I update specific field of a binaryobject

2019-03-08 Thread Justin Ji
Besides the question above, I have another question.
If I use a composed key like this:

public class DpKey implements Serializable {
    // key = devId + "_" + dpId
    private String key;

    @AffinityKeyMapped
    private String devId;

    // getters and setters
}

Now I need to add records like this:

IgniteCache cache = ignite.cache("cache");
cache.query(new SqlFieldsQuery("MERGE INTO t_cache(id, devId, dpId)" +
    " values (1, 'devId001', '001'), (2, 'devId002', '002')"));

I also need to get values with igniteCache.get(key), but I do not know what
the key is.

Can I assign a key when inserting a record?
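For reference, a fleshed-out sketch of the DpKey class above, deriving the composed key exactly as the comment in the post describes (key = devId + "_" + dpId). Whether a SQL MERGE populates the key object this way depends on how the key fields are mapped in the table definition, so treat this purely as the client-side construction:

```java
import java.io.Serializable;

public class DpKey implements Serializable {
    private final String key;    // key = devId + "_" + dpId, as in the post
    private final String devId;  // @AffinityKeyMapped in the original class

    public DpKey(String devId, String dpId) {
        this.devId = devId;
        this.key = devId + "_" + dpId;
    }

    public String getKey()   { return key; }
    public String getDevId() { return devId; }

    // Cache keys must define equality, otherwise get() cannot match put().
    @Override public boolean equals(Object o) {
        return o instanceof DpKey && key.equals(((DpKey) o).key);
    }

    @Override public int hashCode() { return key.hashCode(); }
}
```

With this in place, igniteCache.get(new DpKey("devId001", "001")) would look up the record inserted for devId001, assuming the key fields line up with the SQL schema.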









GridTimeoutProcessor - Timeout has occurred - Too frequent messages

2019-03-08 Thread userx
Hi team,

I am using Ignite 2.7 and started the Ignite server nodes with the following
configuration:
igniteInstanceName=XXX
 pubPoolSize=16
 svcPoolSize=4
 callbackPoolSize=40
 stripedPoolSize=8
 sysPoolSize=32
 mgmtPoolSize=4
 igfsPoolSize=40
 dataStreamerPoolSize=8
 utilityCachePoolSize=40
 utilityCacheKeepAliveTime=6
 p2pPoolSize=2
 qryPoolSize=40
 igniteHome=null
 igniteWorkDir=/XXX/Ignite/Work/XXX
 mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@3e694b3f
 nodeId=2aa67c80-f252-439b-978c-8c36dd0a6af2
 marsh=BinaryMarshaller []
 marshLocJobs=false
 daemon=false
 p2pEnabled=false
 netTimeout=5000
 sndRetryDelay=1000
 sndRetryCnt=3
 metricsHistSize=1
 metricsUpdateFreq=2000
 metricsExpTime=9223372036854775807
 discoSpi=TcpDiscoverySpi [addrRslvr=null
 sockTimeout=0
 ackTimeout=0
 marsh=null
 reconCnt=10
 reconDelay=2000
 maxAckTimeout=60
 forceSrvMode=false
 clientReconnectDisabled=false
 internalLsnr=null]
 segPlc=STOP
 segResolveAttempts=2
 waitForSegOnStart=true
 allResolversPassReq=true
 segChkFreq=1
 commSpi=TcpCommunicationSpi [connectGate=null

connPlc=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$FirstConnectionPolicy@783ec989
 enableForcibleNodeKill=false
 enableTroubleshootingLog=false
 locAddr=null
 locHost=null
 locPort=47100
 locPortRange=100
 shmemPort=-1
 directBuf=true
 directSndBuf=true
 idleConnTimeout=60
 connTimeout=5000
 maxConnTimeout=60
 reconCnt=10
 sockSndBuf=32768
 sockRcvBuf=32768
 msgQueueLimit=1024
 slowClientQueueLimit=1024
 nioSrvr=null
 shmemSrv=null
 usePairedConnections=true
 connectionsPerNode=1
 tcpNoDelay=true
 filterReachableAddresses=false
 ackSndThreshold=16
 unackedMsgsBufSize=0
 sockWriteTimeout=2000
 boundTcpPort=-1
 boundTcpShmemPort=-1
 selectorsCnt=16
 selectorSpins=0
 addrRslvr=null
 ctxInitLatch=java.util.concurrent.CountDownLatch@1ddd3478[Count = 1]
 stopping=false]
 evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@f973499
 colSpi=NoopCollisionSpi []
 deploySpi=LocalDeploymentSpi []
 indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@50825a02
 addrRslvr=null

encryptionSpi=org.apache.ignite.spi.encryption.noop.NoopEncryptionSpi@68809cc7
 clientMode=false
 rebalanceThreadPoolSize=8
 txCfg=TransactionConfiguration [txSerEnabled=false
 dfltIsolation=REPEATABLE_READ
 dfltConcurrency=PESSIMISTIC
 dfltTxTimeout=0
 txTimeoutOnPartitionMapExchange=0
 pessimisticTxLogSize=0
 pessimisticTxLogLinger=1
 tmLookupClsName=null
 txManagerFactory=null
 useJtaSync=false]
 cacheSanityCheckEnabled=true
 discoStartupDelay=6
 deployMode=SHARED
 p2pMissedCacheSize=100
 locHost=null
 timeSrvPortBase=31100
 timeSrvPortRange=100
 failureDetectionTimeout=6
 sysWorkerBlockedTimeout=null
 clientFailureDetectionTimeout=3
 metricsLogFreq=6
 hadoopCfg=null
 connectorCfg=ConnectorConfiguration [jettyPath=null
 host=null
 port=11211
 noDelay=true
 directBuf=false
 sndBufSize=32768
 rcvBufSize=32768
 idleQryCurTimeout=60
 idleQryCurCheckFreq=6
 sndQueueLimit=0
 selectorCnt=4
 idleTimeout=7000
 sslEnabled=false
 sslClientAuth=false
 sslCtxFactory=null
 sslFactory=null
 portRange=100
 threadPoolSize=40
 msgInterceptor=null]
 odbcCfg=null
 warmupClos=null
 atomicCfg=AtomicConfiguration [seqReserveSize=1000
 cacheMode=PARTITIONED
 backups=1
 aff=null
 grpName=null]
 classLdr=null
 sslCtxFactory=null
 platformCfg=null
 binaryCfg=null
 memCfg=null
 pstCfg=null
 dsCfg=DataStorageConfiguration [sysRegionInitSize=41943040
 sysRegionMaxSize=104857600
 pageSize=4096
 concLvl=16
 dfltDataRegConf=DataRegionConfiguration [name=default
 maxSize=8589934592
 initSize=268435456
 swapPath=null
 pageEvictionMode=DISABLED
 evictionThreshold=0.9
 emptyPagesPoolSize=100
 metricsEnabled=true
 metricsSubIntervalCount=5
 metricsRateTimeInterval=6
 persistenceEnabled=true
 checkpointPageBufSize=2147483648]
 dataRegions=null
 storagePath=/XXX/Ignite/IgnitePersistentStore/XXX
 checkpointFreq=60
 lockWaitTime=1
 checkpointThreads=4
 checkpointWriteOrder=SEQUENTIAL
 walHistSize=20
 maxWalArchiveSize=8589934592
 walSegments=32
 walSegmentSize=536870912
 walPath=/XXX/Ignite/WalStore/XXX
 walArchivePath=/XXX/Ignite/WalStore/XXX/archive
 metricsEnabled=false
 walMode=FSYNC
 walTlbSize=131072
 walBuffSize=536870912
 walFlushFreq=3
 walFsyncDelay=1000
 walRecordIterBuffSize=67108864
 alwaysWriteFullPages=false

fileIOFactory=org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIOFactory@2b97cc1f
 metricsSubIntervalCnt=5
 metricsRateTimeInterval=6
 walAutoArchiveAfterInactivity=-1
 writeThrottlingEnabled=true
 walCompactionEnabled=false
 walCompactionLevel=1
 checkpointReadLockTimeout=0]
 activeOnStart=true
 autoActivation=true
 longQryWarnTimeout=3000
 sqlConnCfg=null
 cliConnCfg=ClientConnectorConfiguration [host=null
 port=10800
 portRange=100
 sockSndBufSize=0
 sockRcvBufSize=0
 tcpNoDelay=true
 maxOpenCursorsPerConn=128
 threadPoolSize=40
 idleTimeout=0
 jdbcEnabled=true
 odbcEnabled=true
 thinCliEnabl

Re: Why does write behind require to enable write through?

2019-03-08 Thread Ivan Pavlukhin
Hi,

Ignite CacheConfiguration inherits the setWriteThrough method from the
JCache API [1]. Enabled writeThrough in JCache means that the cache is
backed by some other storage to which writes are propagated along with
writes to the cache. Unfortunately (or not?) JCache uses the write-through
term while Ignite supports a write-behind propagation policy. Perhaps the
API is not clear enough, but I think JCache conformance is the clue here.

[1] 
https://static.javadoc.io/javax.cache/cache-api/1.1.0/javax/cache/configuration/MutableConfiguration.html#setWriteThrough-boolean-
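A configuration sketch of what the warning asks for (the store class and flush interval are placeholders, not from the thread): write-behind rides on the write-through machinery, so both flags and a store factory must be set together.

```java
import javax.cache.configuration.FactoryBuilder;

import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindConfig {
    // Placeholder store; a real one would read/write a database.
    public static class MyCacheStore extends CacheStoreAdapter<String, String> {
        @Override public String load(String key) { return null; }
        @Override public void write(javax.cache.Cache.Entry<? extends String,
            ? extends String> entry) { /* persist entry */ }
        @Override public void delete(Object key) { /* remove from storage */ }
    }

    static CacheConfiguration<String, String> cacheCfg() {
        CacheConfiguration<String, String> cfg = new CacheConfiguration<>("docCache");
        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyCacheStore.class));
        cfg.setWriteThrough(true);               // required: write-behind is a
                                                 // mode of write-through
        cfg.setWriteBehindEnabled(true);         // makes the propagation async
        cfg.setWriteBehindFlushFrequency(5_000); // milliseconds, not seconds
        return cfg;
    }
}
```

Note the flush frequency is in milliseconds; the variable name writeBehindFlushIntervalSec in the quoted configuration below suggests seconds, which would flush far more often than intended.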

On Tue, 5 Mar 2019 at 16:59, relax ken wrote:
>
> Thanks Ilya. I guess conceptually there are many explanations and
> definitions of those two on the Internet, which may agree, disagree, or
> reach consensus on some points. My question is more about their impact
> when they are true or false in Ignite.
>
> For example, if it's always the case, why doesn't Ignite just encapsulate
> this kind of assumption, take care of it, and automatically set
> write-through to true when write-behind is set to true? Why does Ignite
> give this option and warning? Will there be any difference when
> write-behind is true but write-through is not? I am trying to understand
> these options more deeply to avoid any unexpected behaviour.
>
> On Tue, Mar 5, 2019 at 1:46 PM Ilya Kasnacheev  
> wrote:
>>
>> Hello!
>>
>> It is because write-behind is a kind of write-through, like random access
>> memory is a kind of computer memory.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> On Tue, 5 Mar 2019 at 14:43, relax ken wrote:
>>>
>>> Hi,
>>>
>>> I am new to Ignite. When I enable write behind, I always get a warning 
>>> "Write-behind mode for the cache store also requires 
>>> CacheConfiguration.setWriteThrough(true) property." Why does write behind 
>>> require write through when I am using write behind only?
>>>
>>> Here is my configuration
>>>
>>> CacheConfiguration cconfig = new CacheConfiguration<>();
>>> cconfig.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>>> cconfig.setCacheMode(CacheMode.PARTITIONED);
>>> cconfig.setName(Constants.DocCacheName);
>>> cconfig.setExpiryPolicyFactory(TouchedExpiryPolicy.factoryOf(new 
>>> Duration(TimeUnit.SECONDS, cacheConfig.cacheExpirySecs)));
>>>
>>> cconfig.setWriteBehindEnabled(true);
>>> cconfig.setWriteBehindFlushFrequency(cacheConfig.writeBehindFlushIntervalSec);
>>>  // MS
>>>
>>>
>>> Thanks
>>>
>>> Ken



-- 
Best regards,
Ivan Pavlukhin


Re: Affinity aware version of ICompute.Apply() in Ignite C# client

2019-03-08 Thread Pavel Tupitsyn
>  - Is this a capability defined in the Java client but not yet available
in the C# client?
There is no such API in Java AFAIK

>  - If not, is the standard practice to include the argument into the
compute func state that is sent to the node to execute it?
Yes
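In Java the same pattern looks like the sketch below (the job class, cache name, and argument are made-up names, not from the thread): the "argument" travels as a field of the serialized closure, since affinityCall() itself takes no argument parameter.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.lang.IgniteCallable;

public class AffinityApply {
    // The argument is carried as job state and is serialized with the job
    // when it is shipped to the node owning the affinity key's partition.
    static class MyJob implements IgniteCallable<String> {
        private final String arg;

        MyJob(String arg) { this.arg = arg; }

        @Override public String call() {
            return "processed " + arg;
        }
    }

    static String runNearData(Ignite ignite, Object affinityKey, String arg) {
        // Executes on whichever node owns affinityKey's partition in "myCache".
        return ignite.compute().affinityCall("myCache", affinityKey, new MyJob(arg));
    }
}
```

The C# equivalent captures the argument in a field of the IComputeFunc implementation the same way.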


Thanks,
Pavel



On Thu, Mar 7, 2019 at 11:19 PM Raymond Wilson 
wrote:

> I have a current workflow where I use ICompute.Apply() and
> ICompute.BroadCast to execute a compute function with an argument and
> return a response. In the former I want any one of a number of nodes in the
> projection to execute the compute func, in the latter I want all the nodes
> to execute the compute func.
>
> I have a new workflow where I want to execute a compute func with an
> argument on a node based on an affinity key.
>
> ICompute defines AffinityCall() which takes a cache name, an affinity key
> relevant to the cache, and a compute func, then returns a result. However,
> unlike Apply and Broadcast it does not accept an argument parameter.
>
> Looking through the ICompute interface it seems that there is no similar
> method available that does take an argument.
>
> Two questions:
>
> - Is this a capability defined in the Java client but not yet available in
> the C# client?
> - If not, is the standard practice to include the argument into the
> compute func state that is sent to the node to execute it?
>
> Thanks,
> Raymond.
>
>
>


Can I update specific field of a binaryobject

2019-03-08 Thread BinaryTree
Hi Igniters,

As far as I know, igniteCache.put(K, V) replaces the whole value of K with V,
but sometimes I would like to update just one field of V instead of the whole
object.
I know that I can update a specific field via igniteCache.query(SqlFieldsQuery),
but the how-ignite-sql-works page says that Ignite generates SELECT queries
internally before UPDATE or DELETE on a set of records, so it may not perform
as well as igniteCache.put(K, V). Is there a way to update a specific field
without a query?
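One common query-free approach (a sketch, not from the thread; Person, AgeUpdater, and the cache types are made-up names) is to send an EntryProcessor to the node that owns the key via IgniteCache.invoke(): only the small processor object crosses the network, and it mutates the entry in place.

```java
import java.io.Serializable;

import javax.cache.processor.EntryProcessor;
import javax.cache.processor.MutableEntry;

import org.apache.ignite.IgniteCache;

public class FieldUpdate {
    static class Person implements Serializable {
        String name;
        int age;
    }

    // Travels to the primary node for the key and runs there.
    static class AgeUpdater implements EntryProcessor<String, Person, Void>, Serializable {
        private final int newAge;

        AgeUpdater(int newAge) { this.newAge = newAge; }

        @Override public Void process(MutableEntry<String, Person> e, Object... args) {
            Person p = e.getValue();
            if (p != null) {
                p.age = newAge;  // touch just one field
                e.setValue(p);   // write the updated object back to the cache
            }
            return null;
        }
    }

    static void updateAge(IgniteCache<String, Person> cache, String key, int age) {
        cache.invoke(key, new AgeUpdater(age));
    }
}
```

If deserializing the value at all is a concern, the same invoke() can be done on cache.withKeepBinary() and the field rewritten through a BinaryObject builder, though the full value is still rewritten in storage either way.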

cache is moved to a read-only state

2019-03-08 Thread Aat
Hello, after 5 minutes I get this exception:

javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: Failed to write to cache (cache is moved to a read-only state): marketData
    at org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1337)
    at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:1758)
    at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.putAll(IgniteCacheProxyImpl.java:1171)
    at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.putAll(GatewayProtectedCacheProxy.java:868)
    at com.socgen.folab.applications.qpw.domain.processor.RtMarketDataContainer.dispatchToCache(RtMarketDataContainer.java:243)
    at com.socgen.folab.applications.qpw.domain.processor.RtMarketDataContainer$3.run(RtMarketDataContainer.java:179)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.ignite.IgniteCheckedException: Failed to write
to cache (cache is moved to a read-only state): marketData


Thanks for your help.
__



