java.lang.OutOfMemoryError: Java heap space

2017-02-21 Thread Saifullah Zahid
Hi,

Following are the metrics for my local node, but I am getting an out-of-memory
exception (details below).

Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=80a92be0, name=null, uptime=00:01:00:001]
^-- H/N/C [hosts=1, nodes=1, CPUs=8]
^-- CPU [cur=13.4%, avg=14.68%, GC=0%]
^-- Heap [used=3431MB, free=58.11%, comm=8192MB]
^-- Non heap [used=47MB, free=-1%, comm=49MB] <=== Don't know what this means?
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=0, qSize=0]
^-- Outbound messages queue [size=0]
[12:26:45,026][INFO][exchange-worker-#28%null%][GridCacheProcessor] Started
cache [name=dcache, mode=PARTITIONED]
[12:26:45,041][INFO][exchange-worker-#28%null%][GridCachePartitionExchangeManager]
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
[topVer=1, minorTopVer=15], evt=DISCOVERY_CUSTOM_EVT,
node=80a92be0-c845-4a05-bc65-ce7bd3a595b5]
[12:26:54,914][WARNING][main][GridQueryProcessor] Neither key nor value
have property "ID" (is cache indexing configured correctly?) <=== the [Key]
attribute is defined on ID
[12:27:13,795][SEVERE][main][GridJobWorker] Failed to execute job due to
unexpected runtime exception
[jobId=d3ce5b46a51-80a92be0-c845-4a05-bc65-ce7bd3a595b5,
ses=GridJobSessionImpl [ses=GridTaskSessionImpl
[taskName=o.a.i.i.processors.cache.GridCacheAdapter$LoadCacheJobV2,
dep=LocalDeployment [super=GridDeployment [ts=1487748329540,
depMode=SHARED, clsLdr=sun.misc.Launcher$AppClassLoader@c387f44,
clsLdrId=52be5b46a51-80a92be0-c845-4a05-bc65-ce7bd3a595b5, userVer=0,
loc=true, sampleClsName=java.lang.String, pendingUndeploy=false,
undeployed=false, usage=0]],
taskClsName=o.a.i.i.processors.cache.GridCacheAdapter$LoadCacheJobV2,
sesId=c3ce5b46a51-80a92be0-c845-4a05-bc65-ce7bd3a595b5,
startTime=1487748405041, endTime=9223372036854775807,
taskNodeId=80a92be0-c845-4a05-bc65-ce7bd3a595b5,
clsLdr=sun.misc.Launcher$AppClassLoader@c387f44, closed=false, cpSpi=null,
failSpi=null, loadSpi=null, usage=1, fullSup=false, internal=true,
subjId=80a92be0-c845-4a05-bc65-ce7bd3a595b5, mapFut=IgniteFuture
[orig=GridFutureAdapter [resFlag=0, res=null, startTime=1487748405041,
endTime=0, ignoreInterrupts=false, state=INIT]]],
jobId=d3ce5b46a51-80a92be0-c845-4a05-bc65-ce7bd3a595b5]]
java.lang.OutOfMemoryError: Java heap space
at org.jsr166.ConcurrentHashMap8.initTable(ConcurrentHashMap8.java:2011)
at
org.jsr166.ConcurrentHashMap8.internalPutIfAbsent(ConcurrentHashMap8.java:1427)
at org.jsr166.ConcurrentHashMap8.putIfAbsent(ConcurrentHashMap8.java:2725)
at
org.apache.ignite.internal.processors.cache.GridCacheConcurrentMapImpl.putEntryIfObsoleteOrAbsent(GridCacheConcurrentMapImpl.java:134)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.putEntryIfObsoleteOrAbsent(GridDhtLocalPartition.java:280)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridCachePartitionedConcurrentMap.putEntryIfObsoleteOrAbsent(GridCachePartitionedConcurrentMap.java:89)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.entry0(GridCacheAdapter.java:971)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.entryEx(GridCacheAdapter.java:939)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.entryEx(GridDhtCacheAdapter.java:404)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.loadEntry(GridDhtCacheAdapter.java:547)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.access$300(GridDhtCacheAdapter.java:94)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter$4.apply(GridDhtCacheAdapter.java:501)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter$4.apply(GridDhtCacheAdapter.java:497)
at
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter$3.apply(GridCacheStoreManagerAdapter.java:528)
at
org.apache.ignite.internal.processors.platform.dotnet.PlatformDotNetCacheStore$6.applyx(PlatformDotNetCacheStore.java:231)
at
org.apache.ignite.internal.processors.platform.dotnet.PlatformDotNetCacheStore$6.applyx(PlatformDotNetCacheStore.java:226)
at
org.apache.ignite.internal.util.lang.IgniteInClosureX.apply(IgniteInClosureX.java:38)
at
org.apache.ignite.internal.processors.platform.dotnet.PlatformDotNetCacheStore.doInvoke(PlatformDotNetCacheStore.java:438)
at
org.apache.ignite.internal.processors.platform.dotnet.PlatformDotNetCacheStore.loadCache(PlatformDotNetCacheStore.java:219)
at
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadCache(GridCacheStoreManagerAdapter.java:512)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.localLoadCache(GridDhtCacheAdapter.java:497)
at
org.apache.ignite.internal.processors.cache.GridCacheProxyImpl.localLoadCache(GridCacheProxyImpl.java:228)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter

Re: Ignite cache range query using cache keys

2017-02-21 Thread Denis Magda

> On Feb 21, 2017, at 11:48 PM, diopek  wrote:
> 
> Hi Denis,
> we have an Ignite cache that has a key-value pair as the following:
> IgniteCache<Long, ArrayList<MyPojo>>
> and the cache configuration for this cache is as the following:
> CacheConfiguration<Long, ArrayList<MyPojo>> cacheCfg = new
> CacheConfiguration<>(cacheName);
>   cacheCfg.setStartSize(startSize);
>   cacheCfg.setCacheMode(CacheMode.LOCAL);
> 
> I wrote the query as the following:
> String sql = "_key >= ? && _key < ? ";
> IgniteCache<Long, ArrayList<MyPojo>> igniteCache =
> Ignition.ignite().cache(this.getCacheName());
> SqlQuery<Long, ArrayList<MyPojo>> sqlQuery = new
> SqlQuery<Long, ArrayList<MyPojo>>("ArrayList.class", sql);
> sqlQuery.setArgs(min, max);
> igniteCache.query(sqlQuery).getAll();
> 

Is there any significant reason why you store a list of MyPojos in the cache
rather than individual objects? Just a word of caution: if some day you decide
to query by MyPojo's fields, you will not be able to leverage indexes, because
the objects will be inside the lists.


> The above method call returns the following exception:
> Caused by: javax.cache.CacheException: Indexing is disabled for cache:
> MY_CACHE. Use setIndexedTypes or setTypeMetadata methods on
> CacheConfiguration to enable.
>   at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.validate(IgniteCacheProxy.java:852)
> 
> I think I need to add indexing by calling
> cacheConfig.setIndexedTypes(Long.class, Value.class)

Do it this way - cacheConfig.setIndexedTypes(Long.class, ArrayList.class).
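For reference, a minimal, untested sketch of that end to end, assuming a Long
key and an ArrayList value as in your snippet (the cache name and sample data
are made up, and I wrote the condition with plain SQL AND):

import java.util.ArrayList;
import java.util.Arrays;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.query.SqlQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class KeyRangeQueryExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Long, ArrayList> cacheCfg = new CacheConfiguration<>("MY_CACHE");
            cacheCfg.setCacheMode(CacheMode.LOCAL);

            // Register the key/value types for SQL; this is what enables the _key range query.
            cacheCfg.setIndexedTypes(Long.class, ArrayList.class);

            IgniteCache<Long, ArrayList> cache = ignite.getOrCreateCache(cacheCfg);

            for (long i = 0; i < 100; i++)
                cache.put(i, new ArrayList<>(Arrays.asList("value-" + i)));

            SqlQuery<Long, ArrayList> qry = new SqlQuery<>(ArrayList.class, "_key >= ? and _key < ?");
            qry.setArgs(10L, 20L);

            // Expect the 10 entries with keys 10..19.
            System.out.println(cache.query(qry).getAll().size());
        }
    }
}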

—
Denis

> But my value class is ArrayList, and I am not sure how to specify the *.class
> for this generic collection type. Can you please advise? Thanks,
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-cache-range-query-using-cache-keys-tp10717p10785.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: Ignite cache range query using cache keys

2017-02-21 Thread diopek
Basically, we just need indexing on keys, as we only query via keys (which are
of Long type).
The value is an ArrayList, which we don't need to query.
Is there any way to add an index only on keys?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-cache-range-query-using-cache-keys-tp10717p10786.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite cache range query using cache keys

2017-02-21 Thread diopek
Hi Denis,
we have an Ignite cache that has a key-value pair as the following:
IgniteCache<Long, ArrayList<MyPojo>>
and the cache configuration for this cache is as the following:
CacheConfiguration<Long, ArrayList<MyPojo>> cacheCfg = new
CacheConfiguration<>(cacheName);
cacheCfg.setStartSize(startSize);
cacheCfg.setCacheMode(CacheMode.LOCAL);

I wrote the query as the following:
String sql = "_key >= ? && _key < ? ";
IgniteCache<Long, ArrayList<MyPojo>> igniteCache =
Ignition.ignite().cache(this.getCacheName());
SqlQuery<Long, ArrayList<MyPojo>> sqlQuery = new
SqlQuery<Long, ArrayList<MyPojo>>("ArrayList.class", sql);
sqlQuery.setArgs(min, max);
igniteCache.query(sqlQuery).getAll();

The above method call returns the following exception:
Caused by: javax.cache.CacheException: Indexing is disabled for cache:
MY_CACHE. Use setIndexedTypes or setTypeMetadata methods on
CacheConfiguration to enable.
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.validate(IgniteCacheProxy.java:852)

I think I need to add indexing by calling
cacheConfig.setIndexedTypes(Long.class, Value.class).
But my value class is ArrayList, and I am not sure how to specify the *.class
for this generic collection type. Can you please advise? Thanks,



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-cache-range-query-using-cache-keys-tp10717p10785.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: GROUP_CONCAT function is unsupported

2017-02-21 Thread zaid
Related to the question:

http://apache-ignite-users.70518.x6.nabble.com/List-of-supported-SQL-functions-same-as-H2-td10722.html



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/GROUP-CONCAT-function-is-unsupported-tp10757p10784.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


IGNITE-2680

2017-02-21 Thread Anil
Hi,

IGNITE-2680 says it is resolved. Will it be available in 1.9?

I see the following code in 1.8:

@Override public void setQueryTimeout(int timeout) throws SQLException {
    ensureNotClosed();

    throw new SQLFeatureNotSupportedException("Query timeout is not supported.");
}

Thanks


Re: NOT IN in ignite

2017-02-21 Thread Anil
Hi Val,

I agree with you.

Controlling the query execution plan per query is useful in this case.
collocated = true does not make sense for queries without a join, even though
the caches are collocated. What do you say?

I feel the query executor should be intelligent enough to apply collocation
per query.
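For reference, on the Java API the collocated flag can already be set per
query; a rough, untested sketch (the table and column names are made up for
illustration):

import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class CollocatedQueryExample {
    public static void run(IgniteCache<?, ?> cache) {
        // setCollocated(true) tells the SQL engine the grouped/joined data is already
        // collocated, so it can skip the extra distributed reduce step.
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "select customerId, count(*) from Orders group by customerId")
            .setCollocated(true);

        for (List<?> row : cache.query(qry).getAll())
            System.out.println(row);
    }
}

The JDBC driver would presumably need an equivalent per-statement knob, which
is the limitation discussed above.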

Thanks.

On 22 February 2017 at 06:09, vkulichenko 
wrote:

> Anil,
>
> OK, so you're talking about setting collocated flag on per query level in
> JDBC driver, right? This makes sense, but it seems to be a limitation of
> JDBC API rather than Ignite implementation. How would you provide a
> parameter when creating a statement and/or executing a query? Do you have
> any ideas how to do this?
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/NOT-IN-in-ignite-tp9861p10777.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


what is the message guaranteed level about distributed messaging

2017-02-21 Thread ght230
I think there are 3 message delivery guarantee levels:

1. A message will be received at least once.
2. A message will be received exactly once.
3. A message will be received at most once.

I want to know which delivery guarantee level applies to distributed
messaging.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/what-is-the-message-guaranteed-level-about-distributed-messaging-tp10781.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Monitoring Cache - Data counters, Cache Data Size

2017-02-21 Thread bintisepaha
Hi Val, 

I saw that MBean, but it reports the same number as the local MBean. And if
I go to each node, the CacheCluster and CacheLocal cache sizes match. I do
not see a sum total across all nodes.

Thanks,
Binti



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Monitoring-Cache-Data-counters-Cache-Data-Size-tp3203p10780.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-02-21 Thread bintisepaha
Attaching the console log file too; it has the above error.
Unfortunately we have lost the files for older updates to this key; they
rolled over.
Ignite-Console-1.zip

But the "Transaction has been already completed" error could also happen
because we call an explicit rollback in case of any runtime exception:

tx.rollback();

We do get genuine timeouts in the code, but at that time, we never get this
exception.

Hope this helps.

Thanks,
Binti





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p10779.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-02-21 Thread bintisepaha
Hi, I will try this code the next time this issue happens.

Attaching one node's full logs. It has a lot of info.
Ignite-11221.gz
  

However, I found this in the console logs for the first exception that
occurred. Is this usual?

 14764 Feb 17, 2017 1:16:04 PM org.apache.ignite.logger.java.JavaLogger
error
 14765 SEVERE: Failed to execute job
[jobId=fcedb9c4a51-9aa7d7c5-f6fa-4bdd-9473-0439b889d46f,
ses=GridJobSessionImpl [ses=GridTaskSessionImpl
[taskName=com.tudor.datagridI.server.tradegen.OrderHolderSaveRunnable,
dep=LocalDeployment [super=GridDeployment[ts=1486832074045,
depMode=SHARED, clsLdr=sun.misc.Launcher$AppClassLoader@18b4aac2,
clsLdrId=678f81e2a51-b602d584-5565-434b-9727-94e218108073, userVer=0,
loc=true, sampleClsName=java.lang.String, pendingUndeploy=false,
undeployed=false, usage=0]],   
taskClsName=com.tudor.datagridI.server.tradegen.OrderHolderSaveRunnable,
sesId=ecedb9c4a51-9aa7d7c5-f6fa-4bdd-9473-0439b889d46f,
startTime=1487355352478, endTime=9223372036854775807,
taskNodeId=9aa7d7c5-f6fa-4bdd-9473-0439b889d46f, clsLdr=sun.misc  
.Launcher$AppClassLoader@18b4aac2, closed=false, cpSpi=null, failSpi=null,
loadSpi=null, usage=1, fullSup=false,
subjId=9aa7d7c5-f6fa-4bdd-9473-0439b889d46f, mapFut=IgniteFuture
[orig=GridFutureAdapter [resFlag=0, res=null, startTime=1487355352483,   
endTime=0, ignoreInterrupts=false, state=INIT]]],
jobId=fcedb9c4a51-9aa7d7c5-f6fa-4bdd-9473-0439b889d46f]]
 14766 class org.apache.ignite.IgniteException: Transaction has been already
completed.
 14767 at
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:908)
 14768 at
org.apache.ignite.internal.processors.cache.transactions.TransactionProxyImpl.rollback(TransactionProxyImpl.java:299)
 14769 at
com.tudor.datagridI.server.cache.transaction.IgniteCacheTransaction.rollback(IgniteCacheTransaction.java:19)
 14770 at
com.tudor.datagridI.server.tradegen.OrderHolderSaveRunnable.processOrderHolders(OrderHolderSaveRunnable.java:509)
 
 14771 at
com.tudor.datagridI.server.tradegen.OrderHolderSaveRunnable.run(OrderHolderSaveRunnable.java:105)
 14772 at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C4V2.execute(GridClosureProcessor.java:2184)
 14773 at
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:509)
 14774 at
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6521)
 14775 at
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:503)
 14776 at
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:456)
 14777 at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
 14778 at
org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1161)
 
 14779 at
org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1766)
 14780 at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1238)
 14781 at
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:866)
 14782 at
org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:106)
 14783 at
org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:829)
 14784 at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 14785 at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 14786 at java.lang.Thread.run(Thread.java:745)
 14787 Caused by: class org.apache.ignite.IgniteCheckedException:
Transaction has been already completed.
 14788 at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.finishDhtLocal(IgniteTxHandler.java:776)
 14789 at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.finish(IgniteTxHandler.java:718)
 
 14790 at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxFinishRequest(IgniteTxHandler.java:681)
 
 14791 at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$3.apply(IgniteTxHandler.java:156)
 14792 at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler$3.apply(IgniteTxHandler.java:154)
 14793 at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:748)
 14794 at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:353)
 14795 at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:277)
 14796  

Re: NOT IN in ignite

2017-02-21 Thread vkulichenko
Anil,

OK, so you're talking about setting collocated flag on per query level in
JDBC driver, right? This makes sense, but it seems to be a limitation of
JDBC API rather than Ignite implementation. How would you provide a
parameter when creating a statement and/or executing a query? Do you have
any ideas how to do this?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/NOT-IN-in-ignite-tp9861p10777.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: unicode character with apache ignite with multiple nodes

2017-02-21 Thread vkulichenko
Hi,

Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.


babak wrote
> I'm trying to cache my unicode data using Apache Ignite. When I use one
> node, I don't have any problems, but when I use multiple nodes, my unicode
> data gets corrupted.
> 
> For example, consider that I have two values ĂĆ, Ɇɜɷ in my data. When I use
> one node my SQL group-by query returns these two values, but when I add a new
> node my group-by result has these 4 records: ĂĆ, Ɇɜɷ, ??, ???
> 
> How can I solve this problem?

Do you have a unit test that reproduces this issue? Can you share it?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/unicode-character-with-apache-ignite-with-multiple-nodes-tp10765p10776.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Monitoring Cache - Data counters, Cache Data Size

2017-02-21 Thread vkulichenko
Binti,

These metrics are exposed via CacheClusterMetricsMXBeanImpl.
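If JMX is inconvenient, the same cluster-wide numbers can also be read
programmatically; a minimal sketch (the cache name is made up):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMetrics;
import org.apache.ignite.cache.CachePeekMode;

public class ClusterCacheSizeExample {
    public static void main(String[] args) {
        // Attach to an already-running node (or start a client node instead).
        Ignite ignite = Ignition.ignite();
        IgniteCache<Object, Object> cache = ignite.cache("myCache");

        // size(PRIMARY) counts primary entries across the whole cluster,
        // unlike localSize()/the local MBean, which are per node.
        int totalEntries = cache.size(CachePeekMode.PRIMARY);

        // metrics() returns a cluster-wide snapshot of the cache metrics.
        CacheMetrics metrics = cache.metrics();

        System.out.println("entries=" + totalEntries + ", gets=" + metrics.getCacheGets());
    }
}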

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Monitoring-Cache-Data-counters-Cache-Data-Size-tp3203p10775.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: List of supported SQL functions (same as H2?)

2017-02-21 Thread Denis Magda
Sergi,

Is there any technical reason that prevents us from supporting GROUP_CONCAT?

—
Denis

> On Feb 21, 2017, at 2:46 AM, zaid  wrote:
> 
> 
> Yes, everything that is supported in H2 available in Ignite.
> 
> GROUP_CONCAT is not working for me: 
> 
> Please find below code snippet from ignite-indexing module: 
> 
> class file: 
> 
> GridSqlAggregateFunction 
> 
> line no: 84 
> 
>  switch (type) { 
>case GROUP_CONCAT: 
>throw new UnsupportedOperationException(); 
> Regards.
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/List-of-supported-SQL-functions-same-as-H2-tp10722p10756.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: simple file based persistent store support TTL

2017-02-21 Thread vkulichenko
What is the exception message? Your trace is truncated.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/simple-file-based-persistent-store-support-TTL-tp10355p10773.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Questions on Ignite Clients

2017-02-21 Thread vkulichenko
abdul shareef wrote
> Thanks Val. Is there a way to disable the startup message on the client
> nodes?

Not completely, but you can disable the ASCII logo by setting the
-DIGNITE_NO_ASCII=true system property.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Questions-on-Ignite-Clients-tp10304p10772.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: persist periodically

2017-02-21 Thread vkulichenko
Shawn,

If it's a standalone node, you can just terminate the process. There is a
shutdown hook that will gracefully stop the node. For example, the kill
command (without -9) will do the job.

For an embedded node, call Ignition.stop().
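A minimal sketch of the embedded case (the config path is illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class EmbeddedNodeShutdown {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start("examples/config/example-ignite.xml");

        // ... application work ...

        // cancel = false: let in-flight jobs and cache operations finish, then
        // stop the local node gracefully (the same thing the shutdown hook does
        // when the process is killed without -9).
        Ignition.stop(false);
    }
}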

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/persist-periodically-tp10621p10771.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: getOrCreateCache hang

2017-02-21 Thread Matt Warner
There were some coding errors in the previous attachment. The previous code
still illustrates the deadlock, but the attached correctly stores data in
tables.

I've also included the test input file, for completeness. testCode.gz
  



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/getOrCreateCache-hang-tp10737p10770.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: getOrCreateCache hang

2017-02-21 Thread Matt Warner
There is only one Ignite server in my testing and two clients.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/getOrCreateCache-hang-tp10737p10769.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: getOrCreateCache hang

2017-02-21 Thread Matt Warner
Attached is a file containing the outputs (stack trace, Ignite log) and two
Maven test files.  test+output.gz
  

I'm hoping you can tell me I'm just doing something silly to provoke this...

Thanks!

Matt



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/getOrCreateCache-hang-tp10737p10768.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


How to configure MongoDB Client? I'm getting NotSerializableException

2017-02-21 Thread Mauricio Arroqui
Hi, 

I have the same problem mentioned in the post below, but I use mongoDb
instead of CacheJdbcBlobStore. 
http://apache-ignite-users.70518.x6.nabble.com/How-to-use-CacheJdbcBlobStore-Getting-NotSerializableException-td431.html

Here is an example with Mongo, but it uses Embedded MongoDB, whose main
goal is unit testing:
https://github.com/gridgain/gridgain-advanced-examples/blob/master/src/main/java/org/gridgain/examples/datagrid/store/CacheMongoStore.java

I read that you solved the issue for CacheJdbcBlobStore:
https://issues.apache.org/jira/browse/IGNITE-960

Are you planning to do the same for Mongo? Am I missing something?

Thanks in advance.
Mauricio

Here's my bean configuration (the Spring XML was stripped by the mail archive):

Abstract cache store (SimulationCacheMongoStore extends this one):
public abstract class CacheMongoStore extends CacheStoreAdapter
implements Serializable, LifecycleAware {

/**
 * MongoDB port.
 */
private static final int MONGOD_PORT = 27017;

/**
 * MongoDB executable for embedded MongoDB store.
 */
private MongoClient mongoClient;

/**
 * Mongo data store.
 */
protected Datastore morphia;

@Override
public void start() throws IgniteException {

mongoClient = new MongoClient();
morphia = new Morphia().createDatastore(mongoClient, "test");
}

@Override
public void stop() throws IgniteException {
if (mongoClient != null) {
mongoClient.close();

}
}

}
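As a side note, the usual workaround for the NotSerializableException is to
give the CacheConfiguration a serializable store factory instead of a store
instance that holds a live MongoClient. A rough sketch, assuming
SimulationCacheMongoStore is declared as a CacheStoreAdapter<Long, Object> with
a public no-arg constructor (the key/value types and cache name are made up):

import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.configuration.CacheConfiguration;

public class SimulationCacheConfig {
    public static CacheConfiguration<Long, Object> create() {
        CacheConfiguration<Long, Object> cfg = new CacheConfiguration<>("simulationCache");

        // Only the serializable factory is shipped to the nodes; each node then
        // builds its own MongoClient in the store's start() method.
        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(SimulationCacheMongoStore.class));
        cfg.setReadThrough(true);
        cfg.setWriteThrough(true);

        return cfg;
    }
}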








--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-configure-MongoDB-Client-I-m-getting-NotSerializableException-tp10767.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-02-21 Thread Andrey Gura
There is nothing suspicious in the stack trace.

You can check whether the key is locked using the IgniteCache.isLocalLocked()
method. For remote nodes you can run a task that performs this check.
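A rough sketch of such a check (the cache name and key type are illustrative,
and it assumes an Ignite node or client is already started in this JVM):

import java.util.Collection;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteCallable;

public class LockCheckTask {
    public static void main(String[] args) {
        Ignite ignite = Ignition.ignite();
        final long suspectKey = 12345L; // the key that appears stuck

        // Ask every node whether it sees a local lock on the key.
        Collection<Boolean> results = ignite.compute().broadcast(new IgniteCallable<Boolean>() {
            @Override public Boolean call() {
                IgniteCache<Long, Object> cache = Ignition.localIgnite().cache("orderCache");
                return cache.isLocalLocked(suspectKey, false); // false = locked by any thread
            }
        });

        System.out.println("Lock held on some node: " + results.contains(Boolean.TRUE));
    }
}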

Could you please provide full logs for analysis?

On Tue, Feb 21, 2017 at 6:48 PM, bintisepaha  wrote:
> Andrey, thanks for getting back.
> I am attaching the stack trace. I don't think the cause is a deadlock, but
> the trace is long, so maybe I am missing something; let me know if you find
> anything useful.
>
> We cannot reproduce this issue ourselves, as there are no errors on the prior
> successful update. It feels like the txn was marked successful, but on one
> of the keys the lock was not released. And later, when we try to access the
> key, it's locked, hence the exceptions.
>
> No messages in the logs for long running txns or futures.
> By killing the node that holds the key, the lock is not released.
>
> Is there a way to query ignite to see if the locks are being held on a
> particular key? Any code we can run to salvage such locks?
>
> Any other suggestions?
>
> Thanks,
> Binti
>
>
>
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p10764.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-02-21 Thread bintisepaha
Andrey, thanks for getting back.
I am attaching the stack trace. I don't think the cause is a deadlock, but
the trace is long, so maybe I am missing something; let me know if you find
anything useful.

We cannot reproduce this issue ourselves, as there are no errors on the prior
successful update. It feels like the txn was marked successful, but on one
of the keys the lock was not released. And later, when we try to access the
key, it's locked, hence the exceptions.

No messages in the logs for long running txns or futures. 
By killing the node that holds the key, the lock is not released.

Is there a way to query ignite to see if the locks are being held on a
particular key? Any code we can run to salvage such locks?

Any other suggestions?

Thanks,
Binti






--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p10764.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignit Cache Stopped

2017-02-21 Thread Andrey Gura
I think it is just an H2 wrapper for string values.

On Tue, Feb 21, 2017 at 8:21 AM, Anil  wrote:
> Thanks Andrey.
>
> I see the node is down even though the GC log looks good. I will try to reproduce.
>
> May I know what the org.h2.value.ValueString objects in the attached
> screenshot are?
>
> Thanks.
>
> On 20 February 2017 at 18:37, Andrey Gura  wrote:
>>
>> Anil,
>>
>> No, it doesn't. Only client should left topology in this case.
>>
>> On Mon, Feb 20, 2017 at 3:44 PM, Anil  wrote:
>> > Hi Andrey,
>> >
>> > Does client ignite gc impact ignite cluster topology ?
>> >
>> > Thanks
>> >
>> > On 17 February 2017 at 22:56, Andrey Gura  wrote:
>> >>
>> >> From GC logs at the end of files I see Full GC pauses like this:
>> >>
>> >> 2017-02-17T04:29:22.118-0800: 21122.643: [Full GC (Allocation Failure)
>> >>  10226M->8526M(10G), 26.8952036 secs]
>> >>[Eden: 0.0B(512.0M)->0.0B(536.0M) Survivors: 0.0B->0.0B Heap:
>> >> 10226.0M(10.0G)->8526.8M(10.0G)], [Metaspace:
>> >> 77592K->77592K(1120256K)]
>> >>
>> >> Your heap is exhausted. During GC, discovery doesn't receive
>> >> heartbeats and nodes are stopped due to segmentation. Please check your
>> >> nodes' logs for the NODE_SEGMENTED pattern. If that is your case, try to
>> >> tune GC or reduce the load on GC (see [1] for details).
>> >>
>> >> [1] https://apacheignite.readme.io/docs/jvm-and-system-tuning
>> >>
>> >> On Fri, Feb 17, 2017 at 6:35 PM, Anil  wrote:
>> >> > Hi Andrey,
>> >> >
>> >> > The query execution time is very high when the limit is 1+250.
>> >> >
>> >> > 10 GB of heap memory for both client and servers. I have attached the
>> >> > gc
>> >> > logs of 4 servers. Could you please take a look ? thanks.
>> >> >
>> >> >
>> >> > On 17 February 2017 at 20:52, Anil  wrote:
>> >> >>
>> >> >> Hi Andrey,
>> >> >>
>> >> >> I checked GClogs  and everything looks good.
>> >> >>
>> >> >> Thanks
>> >> >>
>> >> >> On 17 February 2017 at 20:45, Andrey Gura  wrote:
>> >> >>>
>> >> >>> Anil,
>> >> >>>
>> >> >>> IGNITE-4003 isn't related with your problem.
>> >> >>>
>> >> >>> I think that nodes are going out of topology due to long GC pauses.
>> >> >>> You can easily check this using GC logs.
>> >> >>>
>> >> >>> On Fri, Feb 17, 2017 at 6:04 PM, Anil  wrote:
>> >> >>> > Hi,
>> >> >>> >
>> >> >>> > We noticed that whenever long-running queries are fired, nodes go
>> >> >>> > out of topology and the entire Ignite cluster goes down.
>> >> >>> >
>> >> >>> > In my case, a filter criterion could match 5L records, and each API
>> >> >>> > request fetches 250 of them. As the page number increases, the query
>> >> >>> > execution time grows and the entire cluster goes down.
>> >> >>> >
>> >> >>> > Is https://issues.apache.org/jira/browse/IGNITE-4003 related to
>> >> >>> > this?
>> >> >>> >
>> >> >>> > Can we set a separate thread pool for query execution, compute jobs,
>> >> >>> > and other services instead of the common public thread pool?
>> >> >>> >
>> >> >>> > Thanks
>> >> >>> >
>> >> >>> >
>> >> >>
>> >> >>
>> >> >
>> >
>> >
>
>


Re: Task was not deployed error

2017-02-21 Thread Nikolai Tikhonov
Hi!

Try to enable peer class loading (by setting
IgniteConfiguration#setPeerClassLoadingEnabled to true), or deploy the task
classes on all nodes manually (put the jars with the classes into every node's
ignite/libs folder). It would be great if you could share a Maven project that
reproduces this case.
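A minimal sketch of the first option (the same two properties can also be set
in the Spring XML config):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DeploymentMode;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PeerClassLoadingConfig {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Ship task/closure classes from the submitting node automatically, so they
        // do not have to be copied into every node's libs folder. This flag must be
        // set consistently on all nodes (servers and clients).
        cfg.setPeerClassLoadingEnabled(true);
        cfg.setDeploymentMode(DeploymentMode.SHARED); // matches depMode=SHARED in the error

        try (Ignite ignite = Ignition.start(cfg)) {
            // submit the compute tasks as before
        }
    }
}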

On Mon, Feb 20, 2017 at 11:56 PM, rflachlan  wrote:

> Hello,
> I am new to Ignite, and am stuck on something probably very basic.
>
> I have a compute job that needs to (a) run 1000 tasks; (b) calculate some
> statistics; (c) run another set of the same 1000 tasks (same classes, and
> methods, different initial parameters). The program correctly runs the
> first
> 1000 tasks across my test cluster of three machines (three Ignite servers;
> program run as a client on one of the machines). But then, when it tries to
> run the second set of tasks, I get errors. For the Ignite server running on
> the local machine, it starts running again (correctly). For the servers on
> the other two machines, there is a pause for ~6s, then "SEVERE" errors. In
> the servers, these are reported as:
>
> [20:35:41,697][SEVERE][pub-#286%null%][GridJobProcessor] Task was not
> deployed or was redeployed since task execution
> [taskName=com.lachlan.maven.SongABCIg.SongABCIg$1,
> taskClsName=com.lachlan.maven.SongABCIg.SongABCIg$1, codeVer=0,
> clsLdrId=e925b3d5a51-5aa311c1-4c5d-47a4-a4ff-2a27e008d841,
> seqNum=1487622853278, depMode=SHARED, dep=null]
> class org.apache.ignite.IgniteDeploymentException: Task was not deployed
> or
> was redeployed since task execution
> [taskName=com.lachlan.maven.SongABCIg.SongABCIg$1,
> taskClsName=com.lachlan.maven.SongABCIg.SongABCIg$1, codeVer=0,
> clsLdrId=e925b3d5a51-5aa311c1-4c5d-47a4-a4ff-2a27e008d841,
> seqNum=1487622853278, depMode=SHARED, dep=null]
> at
> org.apache.ignite.internal.processors.job.GridJobProcessor.
> processJobExecuteRequest(GridJobProcessor.java:1159)
> at
> org.apache.ignite.internal.processors.job.GridJobProcessor$
> JobExecutionListener.onMessage(GridJobProcessor.java:1894)
> at
> org.apache.ignite.internal.managers.communication.
> GridIoManager.invokeListener(GridIoManager.java:1082)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.
> processRegularMessage0(GridIoManager.java:710)
> at
> org.apache.ignite.internal.managers.communication.
> GridIoManager.access$1700(GridIoManager.java:102)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager$5.run(
> GridIoManager.java:673)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
>
> These errors then cause a second set of errors in the client:
>[GridTaskWorker] Failed to obtain remote job policy for result from
> ComputeTask.result(..) 
>
>
>
>
> I have implemented my code in a method that schedules the tasks and uses
> IgniteCallable to collate results. I then calculate the statistics, and
> call the method again. The Ignite code is:
>
> try (Ignite ignite = Ignition.start(cfg)) {
>     IgniteCluster cluster = ignite.cluster();
>     Collection<IgniteCallable<...>> calls = new ArrayList<>();
>     for (int i = 0; i < 1000; i++)
>         calls.add(new IgniteCallable<...>() {
>             ...
>         });
>     Collection<...> res = ignite.compute(cluster.forRemotes()).call(calls);
>
>
> I'm obviously doing something fairly obvious incorrectly here. I'd greatly
> appreciate any advice.
>
> Rob
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Task-was-not-deployed-error-tp10743.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: How to support Sql string operations IN and Regex

2017-02-21 Thread Pavel Tupitsyn
IN queries and their alternative with a temporary table are described here:
https://apacheignite.readme.io/docs/sql-performance-and-debugging#section-sql-performance-and-usability-considerations

Ignite uses the H2 SQL engine, which supports regular expressions:
http://www.h2database.com/html/functions.html#regexp_like
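A rough, untested sketch of both approaches in Java, reusing the table and
column names from your query (the regex is copied from your message):

import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class InAndRegexQueries {
    public static void run(IgniteCache<?, ?> orderCache) {
        // IN alternative: join against an inlined table() so a single array argument
        // replaces a variable-length IN list.
        SqlFieldsQuery inQry = new SqlFieldsQuery(
            "select o.OrderId, o.OrderName from OrderEntity o " +
            "join table(name varchar = ?) t on o.OrderName = t.name")
            .setArgs(new Object[] {new String[] {"A", "B", "C", "D", "E"}});

        // Regex match via H2's REGEXP_LIKE function (Java regex syntax).
        SqlFieldsQuery regexQry = new SqlFieldsQuery(
            "select OrderId, OrderName from OrderEntity " +
            "where REGEXP_LIKE(OrderName, '^(?:A|B)-*(?:C|D)-(?:E|F)$')");

        for (List<?> row : orderCache.query(inQry).getAll())
            System.out.println(row);
        for (List<?> row : orderCache.query(regexQry).getAll())
            System.out.println(row);
    }
}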

Pavel

On Tue, Feb 21, 2017 at 5:12 PM, mrinalkamboj 
wrote:

> Following is my current query for *OrderCache*, using a Sql *like* operator
>
> *var fieldsQuery = orderCache.QueryFields(new SqlFieldsQuery("select Top 10
> OrderId, OrderName from OrderEntity Where OrderName like '%' || ? || '%'
> Order By OrderId", orderNameValue));*
>
> As per my understanding this will help in creating query for StartsWith,
> EndsWith, Contains, as the '%' can be accordingly placed, but how shall we
> create a query for IN operator, where we need to supply a collection and
> check the existence of a value. Can the current placeholder take a
> collection or would it be:
>
> *OrderName IN 'A,B,C,D,E'*
>
> Also how to support the regular expression, I need all the values for a
> field which match the following Regular expression:
>
> @"^(?:A|B)-*(?:C|D)-(?:E|F)$"
>
> Following values shall match:
>
> A-K-C-E
> B-D-F
>
> Following shall not match:
>
> E-D-F
>
> Any pointer that can help me devise these queries
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/How-to-support-Sql-string-operations-
> IN-and-Regex-tp10760.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


How to support Sql string operations IN and Regex

2017-02-21 Thread mrinalkamboj
Following is my current query for *OrderCache*, using a Sql *like* operator

*var fieldsQuery = orderCache.QueryFields(new SqlFieldsQuery("select Top 10
OrderId, OrderName from OrderEntity Where OrderName like '%' || ? || '%'
Order By OrderId", orderNameValue));*

As per my understanding, this will help in creating queries for StartsWith,
EndsWith, and Contains, as the '%' can be placed accordingly. But how shall we
create a query for the IN operator, where we need to supply a collection and
check for the existence of a value? Can the current placeholder take a
collection, or would it be:

*OrderName IN 'A,B,C,D,E'*

Also, how do I support regular expressions? I need all the values for a
field which match the following regular expression:

@"^(?:A|B)-*(?:C|D)-(?:E|F)$"

Following values shall match:

A-K-C-E
B-D-F

Following shall not match:

E-D-F

Any pointers that can help me devise these queries would be appreciated.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-support-Sql-string-operations-IN-and-Regex-tp10760.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: getOrCreateCache hang

2017-02-21 Thread Nikolai Tikhonov
Hi,

Could you share logs and thread dumps from all nodes in the cluster?
If you can create and share a Maven project which reproduces your problem,
that would be great!

Thanks,
Nikolay

On Mon, Feb 20, 2017 at 8:35 PM, Matt Warner 
wrote:

> I'm experiencing Ignite client hangs when calling getOrCreateCache when
> both
> are starting simultaneously. The stack trace shows the clients are hung in
> the getOrCreateCache method, which is why I'm focusing here.
>
> This seems like a deadlock when both clients are trying to simultaneously
> call getOrCreateCache.
>
> The setup is a vanilla Ignite installation running (./bin/ignite.sh) and
> two
> clients (IgniteConfiguration setClientMode(true)). Both go through the same
> setup, albeit in separate jar files (and separate PIDs):
>
> TcpDiscoverySpi spi = new TcpDiscoverySpi();
> TcpDiscoveryVmIpFinder ipFinder = new
> TcpDiscoveryVmIpFinder();
> ipFinder.setAddresses(Arrays.asList("127.0.0.1"));
> spi.setIpFinder(ipFinder);
> IgniteConfiguration cfg = new IgniteConfiguration();
> cfg.setDiscoverySpi(spi);
> cfg.setClientMode(true);
> try (Ignite ignite = Ignition.start(cfg)) {
> CacheConfiguration<> cacheCfg = new
> CacheConfiguration<>(CACHE_NAME);
> cacheCfg.setAtomicityMode(ATOMIC);
> cacheCfg.setReadThrough(true);
> cacheCfg.setWriteThrough(true);
> cacheCfg.setWriteBehindEnabled(false);
> cache = ignite.getOrCreateCache(cacheCfg);  <--
> Hangs here
> //
> }
>
> "main" #1 prio=5 os_prio=31 tid=0x7fdd01009800 nid=0xc07 waiting on
> condition [0x70218000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x0007964b0e58> (a
> org.apache.ignite.internal.processors.cache.GridCacheProcessor$
> DynamicCacheStartFuture)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.
> java:175)
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.
> parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.
> doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.
> acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at
> org.apache.ignite.internal.util.future.GridFutureAdapter.
> get0(GridFutureAdapter.java:160)
> at
> org.apache.ignite.internal.util.future.GridFutureAdapter.
> get(GridFutureAdapter.java:118)
> at
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache(
> IgniteKernal.java:2586)
>
>
> My apologies in advance if this is a well-known problem. I've been
> searching
> and am stumped.
>
> Thanks!
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/getOrCreateCache-hang-tp10737.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Cache write behind optimization

2017-02-21 Thread Tolga Kavukcu
Hi Yakov,

I am trying to process data on its primary node, determined via the
mapKeyToNode function of the cache's affinity function, so I expect there to
be no remote access. I will try to summarize the problem into a reproducible
code piece.
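In the meantime, for comparison, a rough sketch of the affinityRun() way of
pinning the work to the key's primary node, so the get() is served locally
(the cache name and key type are illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteRunnable;

public class AffinityLocalGet {
    public static void process(Ignite ignite, final long recordKey) {
        // The closure runs on the node that owns recordKey's partition, so the
        // cache.get() below does not block on a GridPartitionedSingleGetFuture
        // waiting for a remote node.
        ignite.compute().affinityRun("scenarioCache", recordKey, new IgniteRunnable() {
            @Override public void run() {
                IgniteCache<Long, Object> cache = Ignition.localIgnite().cache("scenarioCache");
                Object record = cache.get(recordKey);
                // ... process record locally ...
            }
        });
    }
}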

Thanks for your help.

On Tue, Feb 21, 2017 at 11:09 AM, Yakov Zhdanov  wrote:

> Tolga, this looks like you do a cache.get() and the key resides on a remote
> node. So, yes, the local node waits for a response from the remote node.
>
> --Yakov
>
> 2017-02-21 10:23 GMT+03:00 Tolga Kavukcu :
>
>> Hi Val,Everyone
>>
>> I was able to overcome the write-behind issue and can process extremely
>> fast on a single node. But when I switched to multi-node with partitioned
>> mode, my threads wait on some condition. There are 16 threads processing
>> data and all wait at the same trace. Adding the thread dump.
>>
>>  java.lang.Thread.State: WAITING (parking)
>> at sun.misc.Unsafe.park(Native Method)
>> - parking to wait for  <0x000711093898> (a
>> org.apache.ignite.internal.processors.cache.distributed.dht.
>> GridPartitionedSingleGetFuture)
>> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>> at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAn
>> dCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>> at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcqu
>> ireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquir
>> eSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.get
>> 0(GridFutureAdapter.java:161)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.get
>> (GridFutureAdapter.java:119)
>> at org.apache.ignite.internal.processors.cache.distributed.dht.
>> atomic.GridDhtAtomicCache.get0(GridDhtAtomicCache.java:487)
>> at org.apache.ignite.internal.processors.cache.GridCacheAdapter
>> .get(GridCacheAdapter.java:4629)
>> at org.apache.ignite.internal.processors.cache.GridCacheAdapter
>> .get(GridCacheAdapter.java:1386)
>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>> .get(IgniteCacheProxy.java:1118)
>> at com.intellica.evam.engine.cache.dao.ScenarioCacheDao.getCurr
>> entScenarioRecord(ScenarioCacheDao.java:35)
>>
>> What might be the reason for the problem? Does it wait for a response
>> from the other node?
>>
>> -Regards.
>>
>> On Fri, Feb 10, 2017 at 7:31 AM, Tolga Kavukcu 
>> wrote:
>>
>>> Hi Val,
>>>
>>> Thanks for your tip; with enough memory I believe the write-behind queue
>>> can handle peak times.
>>>
>>> Thanks.
>>>
>>> Regards.
>>>
>>> On Thu, Feb 9, 2017 at 10:44 PM, vkulichenko <
>>> valentin.kuliche...@gmail.com> wrote:
>>>
 Hi Tolga,

 There is a back-pressure mechanism to ensure that node doesn't run out
 of
 memory because of too long write behind queue. You can try increasing
 writeBehindFlushSize property to relax it.

 -Val



 --
 View this message in context: http://apache-ignite-users.705
 18.x6.nabble.com/Cache-write-behind-optimization-tp10527p10531.html
 Sent from the Apache Ignite Users mailing list archive at Nabble.com.

>>>
>>>
>>>
>>> --
>>>
>>> *Tolga KAVUKÇU*
>>>
>>
>>
>>
>> --
>>
>> *Tolga KAVUKÇU*
>>
>
>


-- 

*Tolga KAVUKÇU*


GROUP_CONCAT function is unsupported

2017-02-21 Thread zaid
Hi,
Is there any reason why the GROUP_CONCAT function is unsupported?

Please find below a code snippet from the ignite-indexing module
(class GridSqlAggregateFunction, line 84):

switch (type) {
    case GROUP_CONCAT:
        throw new UnsupportedOperationException();

Regards.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/GROUP-CONCAT-function-is-unsupported-tp10757.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: List of supported SQL functions (same as H2?)

2017-02-21 Thread zaid

Yes, everything that is supported in H2 is available in Ignite.

GROUP_CONCAT is not working for me.

Please find below a code snippet from the ignite-indexing module
(class GridSqlAggregateFunction, line 84):

switch (type) {
    case GROUP_CONCAT:
        throw new UnsupportedOperationException();

Regards.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/List-of-supported-SQL-functions-same-as-H2-tp10722p10756.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Flink Streamer Compatibility.

2017-02-21 Thread dkarachentsev
ignite-flink is a thin, one-class module, and yes, it's compatible with the
newest Apache Flink versions.

-Dmitry.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Flink-Streamer-Compatibility-tp10727p10755.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cache write behind optimization

2017-02-21 Thread Yakov Zhdanov
Tolga, this looks like you do a cache.get() and the key resides on a remote
node. So, yes, the local node waits for a response from the remote node.

--Yakov

2017-02-21 10:23 GMT+03:00 Tolga Kavukcu :

> Hi Val,Everyone
>
> I was able to overcome the write-behind issue and can process extremely
> fast on a single node. But when I switched to multi-node with partitioned
> mode, my threads wait on some condition. There are 16 threads processing
> data and all wait at the same trace. Adding the thread dump.
>
>  java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000711093898> (a org.apache.ignite.internal.
> processors.cache.distributed.dht.GridPartitionedSingleGetFuture)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.
> parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.
> doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at java.util.concurrent.locks.AbstractQueuedSynchronizer.
> acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at org.apache.ignite.internal.util.future.GridFutureAdapter.
> get0(GridFutureAdapter.java:161)
> at org.apache.ignite.internal.util.future.GridFutureAdapter.
> get(GridFutureAdapter.java:119)
> at org.apache.ignite.internal.processors.cache.distributed.
> dht.atomic.GridDhtAtomicCache.get0(GridDhtAtomicCache.java:487)
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(
> GridCacheAdapter.java:4629)
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(
> GridCacheAdapter.java:1386)
> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(
> IgniteCacheProxy.java:1118)
> at com.intellica.evam.engine.cache.dao.ScenarioCacheDao.
> getCurrentScenarioRecord(ScenarioCacheDao.java:35)
>
> What might be the reason for the problem? Does it wait for a response from
> the other node?
>
> -Regards.
>
> On Fri, Feb 10, 2017 at 7:31 AM, Tolga Kavukcu 
> wrote:
>
>> Hi Val,
>>
>> Thanks for your tip; with enough memory I believe the write-behind queue
>> can handle peak times.
>>
>> Thanks.
>>
>> Regards.
>>
>> On Thu, Feb 9, 2017 at 10:44 PM, vkulichenko <
>> valentin.kuliche...@gmail.com> wrote:
>>
>>> Hi Tolga,
>>>
>>> There is a back-pressure mechanism to ensure that node doesn't run out of
>>> memory because of too long write behind queue. You can try increasing
>>> writeBehindFlushSize property to relax it.
>>>
>>> -Val
>>>
>>>
>>>
>>> --
>>> View this message in context: http://apache-ignite-users.705
>>> 18.x6.nabble.com/Cache-write-behind-optimization-tp10527p10531.html
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>
>>
>>
>>
>> --
>>
>> *Tolga KAVUKÇU*
>>
>
>
>
> --
>
> *Tolga KAVUKÇU*
>