Re: ContinuousQuery giving notification for different dataset
Hi Fatih. The documentation for the initial query says: "an initial query that will be executed before the continuous query gets registered in the cluster and before you start to receive the updates". So it will only be executed once. To further filter events you must implement that logic in the remote filter. Your filter implementation always returns true. On Mon, May 8, 2017 at 12:01 AM, fatih wrote: > When I try ContinuousQuery I am getting events for the records I am not > subscribed to. > > Entity Class > public class Profile implements Serializable{ > @QuerySqlField(index = true) > private String number; > @QuerySqlField(index = true) > private Double dataUsage; > } > > Update Profile Method > void updateAnyProfile(Double newDataUsage){ > SqlQuery qry = new SqlQuery(Profile.class,"select * from Profile > where > dataUsage < 30"); > List> res = > profileCache.query(qry).getAll(); > Profile profile = res.iterator().next().getValue(); > profile.setDataUsage(newDataUsage); > veonProfileCache.put(profile.getNumber(), profile); > } > > Query method > > QueryCursor> > detectChangesNoRemoteFilter(double start, double end) throws Exception{ > ContinuousQuery qry = new > ContinuousQuery<>(); > SqlQuery sqlQry = new SqlQuery(Profile.class,"dataUsage > > ? and dataUsage > < ? "); > sqlQry.setArgs(start,end); > qry.setInitialQuery(sqlQry); > qry.setLocalListener((evts) -> > evts.forEach(e -> { > System.out.println("Inserting/Updating > profiles with more than > %"+start+" key=" + e.getKey() + ", dataUsage=" + > e.getValue().field("dataUsage")); > })); > qry.setRemoteFilterFactory(new > Factory>() { > public CacheEntryEventFilter > create() { > return new CacheEntryEventFilter BinaryObject>() { > public boolean > evaluate(CacheEntryEvent BinaryObject> e) > throws > CacheEntryListenerException { > return true; > } > }; > } > }); > > IgniteCache cache = > Ignition.ignite().cache("profileCache").withKeepBinary(); > QueryCursor> cursor = > cache.query(qry); > return cursor; > } > > And registering the ContinuousQuery as below > QueryCursor> > cursor30 = > > profileService.detectChangesNoRemoteFilter(30,33); > > > Updating a profile with dataUSage is less than 30 to 51 and still getting > notifications which i should not since the data change is irrelevant to my > initial query. If that is the case then ContinuousQueries are not working > as > mentioned in the document > profileService.updateAnyProfile(51D); > > > From https://apacheignite.readme.io/docs/continuous-queries > Continuous queries enable you to listen to data modifications occurring on > Ignite caches. Once a continuous query is started, you will get notified of > all the data changes that fall into your query filter if any. > > > > > -- > View this message in context: http://apache-ignite- > developers.2346864.n4.nabble.com/ContiniousQuery-giving- > notification-for-different-dataset-tp17538.html > Sent from the Apache Ignite Developers mailing list archive at Nabble.com. > -- Alper Tekinalp Software Developer Evam Streaming Analytics Atatürk Mah. Turgut Özal Bulv. Gardenya 5 Plaza K:6 Ataşehir 34758 İSTANBUL Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 www.evam.com.tr <http://www.evam.com>
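For illustration, a remote filter that applies the same dataUsage bounds as the initial query could look roughly like the sketch below. This is only a sketch, not a confirmed fix: it assumes the cache key is the profile number (String), that the cache is accessed with withKeepBinary() so values arrive as BinaryObject, and that start and end are the same effectively-final bounds passed to detectChangesNoRemoteFilter(); the field name dataUsage is taken from the quoted entity.

// Extra imports needed on top of those already used by the quoted code.
import javax.cache.configuration.Factory;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryEventFilter;
import javax.cache.event.CacheEntryListenerException;
import org.apache.ignite.binary.BinaryObject;

// Pass only events whose dataUsage falls inside the (start, end) range,
// mirroring the bounds of the initial SQL query.
qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<String, BinaryObject>>() {
    @Override public CacheEntryEventFilter<String, BinaryObject> create() {
        return new CacheEntryEventFilter<String, BinaryObject>() {
            @Override public boolean evaluate(CacheEntryEvent<? extends String, ? extends BinaryObject> e)
                throws CacheEntryListenerException {
                Double usage = e.getValue().field("dataUsage");

                return usage != null && usage > start && usage < end;
            }
        };
    }
});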
Re: Client got stuck on get operation
Hi. Can someone point me in a direction as to why a thread can get stuck at the trace above? Where should I look? How can I investigate the issue? On Mon, Mar 20, 2017 at 12:57 PM, Alper Tekinalp wrote: > Hi all. > > > We have 3 ignite servers. Server 1 works as standalone. Server 2 and 3 > connects eachother as server but connects server 1 as client. In a point of > the time server 3 got stucked at: > > at sun.misc.Unsafe.park(Native Method) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) > at java.util.concurrent.locks.AbstractQueuedSynchronizer. > parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834) > at java.util.concurrent.locks.AbstractQueuedSynchronizer. > doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994) > at java.util.concurrent.locks.AbstractQueuedSynchronizer. > acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303) > at org.apache.ignite.internal.util.future.GridFutureAdapter. > get0(GridFutureAdapter.java:161) > at org.apache.ignite.internal.util.future.GridFutureAdapter. > get(GridFutureAdapter.java:119) > at org.apache.ignite.internal.processors.cache.distributed. > dht.atomic.GridDhtAtomicCache.get0(GridDhtAtomicCache.java:488) > at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get( > GridCacheAdapter.java:4665) > at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get( > GridCacheAdapter.java:1388) > at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get( > IgniteCacheProxy.java:1121) > at sun.reflect.GeneratedMethodAccessor634.invoke(Unknown Source) > at sun.reflect.DelegatingMethodAccessorImpl.invoke( > DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at com.evam.cache.client.CachePassthroughInvocationHand > ler.invokeInternal(CachePassthroughInvocationHandler.java:99) > at com.evam.cache.client.CachePassthroughInvocationHandler.invoke( > CachePassthroughInvocationHandler.java:78) > at com.sun.proxy.$Proxy56.get(Unknown Source) > > when getting record from server 1. Long after that server 2 also got > stucked at the same trace and also server 2 and server 3 disconnects from > each other. > > We investigated gc logs and there is not an unusual behaviour. One things > is server 1 logs errors as follows when server 2 and server 3 disconnects: > > [ERROR] 2017-03-18 22:01:21.881 [sys-stripe-82-#83%cache-server%] msg - > Received message without registered handler (will ignore) > [msg=GridNearSingleGetRequest [futId=1490866022968, key=BinaryObjectImpl > [arr= true, ctx=false, start=0], partId=199, flags=1, > topVer=AffinityTopologyVersion [topVer=33, minorTopVer=455], > subjId=53293ebb-f01b-40b6-a060-bec4209e9c8a, taskNameHash=0, createTtl=0, > accessTtl=-1], node=53293ebb-f01b-40b6-a060-bec4209e9c8a, > locTopVer=AffinityTopologyVersion > [topVer=33, minorTopVer=2937], msgTopVer=AffinityTopologyVersion > [topVer=33, minorTopVer=455], cacheDesc=null] > Registered listeners: > > > Where should we look for the main cause of the problem? What can cause > such a behaviour? There seems nothing wrong on server 1 logs etc. > > We use ignite 1.8.3. > > -- > Alper Tekinalp > > Software Developer > Evam Streaming Analytics > > Atatürk Mah. Turgut Özal Bulv. > Gardenya 5 Plaza K:6 Ataşehir > 34758 İSTANBUL > > Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 > www.evam.com.tr > <http://www.evam.com> > -- Alper Tekinalp Software Developer Evam Streaming Analytics Atatürk Mah. Turgut Özal Bulv.
Gardenya 5 Plaza K:6 Ataşehir 34758 İSTANBUL Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 www.evam.com.tr <http://www.evam.com>
Client got stuck on get operation
Hi all. We have 3 ignite servers. Server 1 works as standalone. Servers 2 and 3 connect to each other as servers but connect to server 1 as clients. At one point in time server 3 got stuck at: at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834) at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303) at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:161) at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:119) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.get0(GridDhtAtomicCache.java:488) at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4665) at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1388) at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:1121) at sun.reflect.GeneratedMethodAccessor634.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.evam.cache.client.CachePassthroughInvocationHandler.invokeInternal(CachePassthroughInvocationHandler.java:99) at com.evam.cache.client.CachePassthroughInvocationHandler.invoke(CachePassthroughInvocationHandler.java:78) at com.sun.proxy.$Proxy56.get(Unknown Source) when getting a record from server 1. Long after that, server 2 also got stuck at the same trace, and servers 2 and 3 disconnected from each other. We investigated the GC logs and there is no unusual behaviour. One thing is that server 1 logs errors as follows when servers 2 and 3 disconnect: [ERROR] 2017-03-18 22:01:21.881 [sys-stripe-82-#83%cache-server%] msg - Received message without registered handler (will ignore) [msg=GridNearSingleGetRequest [futId=1490866022968, key=BinaryObjectImpl [arr= true, ctx=false, start=0], partId=199, flags=1, topVer=AffinityTopologyVersion [topVer=33, minorTopVer=455], subjId=53293ebb-f01b-40b6-a060-bec4209e9c8a, taskNameHash=0, createTtl=0, accessTtl=-1], node=53293ebb-f01b-40b6-a060-bec4209e9c8a, locTopVer=AffinityTopologyVersion [topVer=33, minorTopVer=2937], msgTopVer=AffinityTopologyVersion [topVer=33, minorTopVer=455], cacheDesc=null] Registered listeners: Where should we look for the main cause of the problem? What can cause such behaviour? There seems to be nothing wrong in the server 1 logs etc. We use Ignite 1.8.3. -- Alper Tekinalp Software Developer Evam Streaming Analytics Atatürk Mah. Turgut Özal Bulv. Gardenya 5 Plaza K:6 Ataşehir 34758 İSTANBUL Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 www.evam.com.tr <http://www.evam.com>
[jira] [Created] (IGNITE-4765) Different partition mapping for caches that use Fair Affinity Function
Alper Tekinalp created IGNITE-4765: -- Summary: Different partition mapping for caches that use Fair Affinity Function Key: IGNITE-4765 URL: https://issues.apache.org/jira/browse/IGNITE-4765 Project: Ignite Issue Type: Bug Reporter: Alper Tekinalp Caches created on different topology versions with the fair affinity function may have different partition mappings. The cause of this problem is that the fair affinity function calculates partition mappings based on previous assignments. When partitions are rebalanced, the previous assignments for a cache are known and the new assignment is calculated based on them. But when you create a new cache there are no previous assignments and the calculation is different. Reproduction steps: - Start node1 - Start cache1 - Start node2 - Partitions for cache1 will be rebalanced - Start cache2 - Check partition mapping for both cache1 and cache2 -- This message was sent by Atlassian JIRA (v6.3.15#6346)
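For illustration, the steps above can be expressed as roughly the following sketch (Ignite 1.x API, where FairAffinityFunction is still available; grid names, cache names and the partition count are arbitrary):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.fair.FairAffinityFunction;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class FairAffinityRepro {
    public static void main(String[] args) {
        // Start node1 and create cache1 on the initial topology version.
        Ignite node1 = Ignition.start(new IgniteConfiguration().setGridName("node1"));

        CacheConfiguration<Integer, Integer> cfg1 = new CacheConfiguration<>("cache1");
        cfg1.setAffinity(new FairAffinityFunction(32));
        node1.createCache(cfg1);

        // Start node2 -- partitions of cache1 are rebalanced onto it.
        Ignite node2 = Ignition.start(new IgniteConfiguration().setGridName("node2"));

        // Create cache2 on the new topology version.
        CacheConfiguration<Integer, Integer> cfg2 = new CacheConfiguration<>("cache2");
        cfg2.setAffinity(new FairAffinityFunction(32));
        node2.createCache(cfg2);

        // Compare the partition-to-node mapping of the two caches.
        for (int p = 0; p < 32; p++) {
            ClusterNode n1 = node1.affinity("cache1").mapPartitionToNode(p);
            ClusterNode n2 = node1.affinity("cache2").mapPartitionToNode(p);

            if (!n1.id().equals(n2.id()))
                System.out.println("Partition " + p + " is mapped to different nodes");
        }
    }
}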
Re: Same Affinity For Same Key On All Caches
Hi. As I investigated, the issue occurs when different nodes create the caches. Say I have 2 nodes, node1 and node2, and 2 caches, cache1 and cache2. If I create cache1 on node1 and create cache2 on node2 with the same FairAffinityFunction and the same partition count, keys can map to different nodes in different caches. You can find my test code and results as an attachment. So is that a bug? Is there a way to force the same mappings although the caches are created on different nodes? On Fri, Feb 24, 2017 at 9:37 AM, Alper Tekinalp wrote: > Hi. > > Thanks for your comments. Let me investigate the issue deeper. > > Regards. > > On Thu, Feb 23, 2017 at 11:00 PM, Dmitriy Setrakyan > wrote: > >> If you use the same (or default) configuration for the affinity, then the >> same key in different caches will always end up on the same node. This is >> guaranteed. >> >> D. >> >> On Thu, Feb 23, 2017 at 8:09 AM, Andrey Mashenkov < >> andrey.mashen...@gmail.com> wrote: >> >>> Val, >>> >>> Yes, with same affinity function entries with same key should be saved in >>> same nodes. >>> As far as I know, primary node is assinged automatically by Ignite. And >>> I'm >>> not sure that >>> there is a guarantee that 2 entries from different caches with same key >>> will have same primary and backup nodes. >>> So, get operation for 1-st key can be local while get() for 2-nd key will >>> be remote. >>> >>> >>> On Thu, Feb 23, 2017 at 6:49 PM, Valentin Kulichenko < >>> valentin.kuliche...@gmail.com> wrote: >>> >>> > Actually, this should work this way out of the box, as long as the same >>> > affinity function is configured for all caches (that's true for default >>> > settings). >>> > >>> > Andrey, am I missing something? >>> > >>> > -Val >>> > >>> > On Thu, Feb 23, 2017 at 7:02 AM, Andrey Mashenkov < >>> > andrey.mashen...@gmail.com> wrote: >>> > >>> > > Hi Alper, >>> > > >>> > > You can implement you own affinityFunction to achieve this. >>> > > In AF you should implement 2 mappings: key to partition and >>> partition to >>> > > node. >>> > > >>> > > First mapping looks trivial, but second doesn't. >>> > > Even if you will lucky to do it, there is no way to choose what node >>> wil >>> > be >>> > > primary and what will be backup for a partition, >>> > > that can be an issue. >>> > > >>> > > >>> > > On Thu, Feb 23, 2017 at 10:44 AM, Alper Tekinalp >>> wrote: >>> > > >>> > > > Hi all. >>> > > > >>> > > > Is it possible to configures affinities in a way that partition for >>> > same >>> > > > key will be on same node? So calling >>> > > > ignite.affinity(CACHE).mapKeyToNode(KEY).id() with same key for >>> any >>> > > cache >>> > > > will return same node id. Is that possible with a configuration >>> etc.? >>> > > > >>> > > > -- >>> > > > Alper Tekinalp >>> > > > >>> > > > Software Developer >>> > > > Evam Streaming Analytics >>> > > > >>> > > > Atatürk Mah. Turgut Özal Bulv. >>> > > > Gardenya 5 Plaza K:6 Ataşehir >>> > > > 34758 İSTANBUL >>> > > > >>> > > > Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 >>> > > > www.evam.com.tr >>> > > > <http://www.evam.com> >>> > > > >>> > > >>> > > >>> > > >>> > > -- >>> > > Best regards, >>> > > Andrey V. Mashenkov >>> > > >>> > >>> >>> >>> >>> -- >>> Best regards, >>> Andrey V. Mashenkov >>> >> >> > > > -- > Alper Tekinalp > > Software Developer > Evam Streaming Analytics > > Atatürk Mah. Turgut Özal Bulv.
> Gardenya 5 Plaza K:6 Ataşehir > 34758 İSTANBUL > > Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 > www.evam.com.tr > <http://www.evam.com> > -- Alper Tekinalp Software Developer Evam Streaming Analytics Atatürk Mah. Turgut Özal Bulv. Gardenya 5 Plaza K:6 Ataşehir 34758 İSTANBUL Tel: +90 216 455 01 53
Re: Same Affinity For Same Key On All Caches
Hi. Thanks for your comments. Let me investigate the issue deeper. Regards. On Thu, Feb 23, 2017 at 11:00 PM, Dmitriy Setrakyan wrote: > If you use the same (or default) configuration for the affinity, then the > same key in different caches will always end up on the same node. This is > guaranteed. > > D. > > On Thu, Feb 23, 2017 at 8:09 AM, Andrey Mashenkov < > andrey.mashen...@gmail.com> wrote: > >> Val, >> >> Yes, with same affinity function entries with same key should be saved in >> same nodes. >> As far as I know, primary node is assinged automatically by Ignite. And >> I'm >> not sure that >> there is a guarantee that 2 entries from different caches with same key >> will have same primary and backup nodes. >> So, get operation for 1-st key can be local while get() for 2-nd key will >> be remote. >> >> >> On Thu, Feb 23, 2017 at 6:49 PM, Valentin Kulichenko < >> valentin.kuliche...@gmail.com> wrote: >> >> > Actually, this should work this way out of the box, as long as the same >> > affinity function is configured for all caches (that's true for default >> > settings). >> > >> > Andrey, am I missing something? >> > >> > -Val >> > >> > On Thu, Feb 23, 2017 at 7:02 AM, Andrey Mashenkov < >> > andrey.mashen...@gmail.com> wrote: >> > >> > > Hi Alper, >> > > >> > > You can implement you own affinityFunction to achieve this. >> > > In AF you should implement 2 mappings: key to partition and partition >> to >> > > node. >> > > >> > > First mapping looks trivial, but second doesn't. >> > > Even if you will lucky to do it, there is no way to choose what node >> wil >> > be >> > > primary and what will be backup for a partition, >> > > that can be an issue. >> > > >> > > >> > > On Thu, Feb 23, 2017 at 10:44 AM, Alper Tekinalp >> wrote: >> > > >> > > > Hi all. >> > > > >> > > > Is it possible to configures affinities in a way that partition for >> > same >> > > > key will be on same node? So calling >> > > > ignite.affinity(CACHE).mapKeyToNode(KEY).id() with same key for any >> > > cache >> > > > will return same node id. Is that possible with a configuration >> etc.? >> > > > >> > > > -- >> > > > Alper Tekinalp >> > > > >> > > > Software Developer >> > > > Evam Streaming Analytics >> > > > >> > > > Atatürk Mah. Turgut Özal Bulv. >> > > > Gardenya 5 Plaza K:6 Ataşehir >> > > > 34758 İSTANBUL >> > > > >> > > > Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 >> > > > www.evam.com.tr >> > > > <http://www.evam.com> >> > > > >> > > >> > > >> > > >> > > -- >> > > Best regards, >> > > Andrey V. Mashenkov >> > > >> > >> >> >> >> -- >> Best regards, >> Andrey V. Mashenkov >> > > -- Alper Tekinalp Software Developer Evam Streaming Analytics Atatürk Mah. Turgut Özal Bulv. Gardenya 5 Plaza K:6 Ataşehir 34758 İSTANBUL Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 www.evam.com.tr <http://www.evam.com>
Same Affinity For Same Key On All Caches
Hi all. Is it possible to configure affinities in a way that the partition for the same key will be on the same node? So calling ignite.affinity(CACHE).mapKeyToNode(KEY).id() with the same key for any cache will return the same node id. Is that possible with a configuration etc.? -- Alper Tekinalp Software Developer Evam Streaming Analytics Atatürk Mah. Turgut Özal Bulv. Gardenya 5 Plaza K:6 Ataşehir 34758 İSTANBUL Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 www.evam.com.tr <http://www.evam.com>
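For illustration, the check meant here could look like the sketch below, assuming ignite is an already started node and both caches are given identically configured affinity functions (cache names, the key and the partition count are arbitrary):

import java.util.UUID;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

// Both caches use an identically configured affinity function.
CacheConfiguration<Integer, String> cfgA = new CacheConfiguration<>("cacheA");
cfgA.setAffinity(new RendezvousAffinityFunction(false, 1024));

CacheConfiguration<Integer, String> cfgB = new CacheConfiguration<>("cacheB");
cfgB.setAffinity(new RendezvousAffinityFunction(false, 1024));

ignite.getOrCreateCache(cfgA);
ignite.getOrCreateCache(cfgB);

// With identical affinity configuration the same key is expected to map
// to the same primary node in both caches.
int key = 42;
UUID nodeA = ignite.affinity("cacheA").mapKeyToNode(key).id();
UUID nodeB = ignite.affinity("cacheB").mapKeyToNode(key).id();

System.out.println("Same primary node for key " + key + ": " + nodeA.equals(nodeB));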
Re: NullPointerException on ScanQuery
Hi Val. Do you have an idea what is causing the NPE in the first place? As Andrey says, the exception is caused by cluster nodes having different topology versions. > What is null? > Node is null in the following code in org.apache.ignite.internal.processors.cache.query.ScanQueryFallbackClosableIterator.init(): 710:final ClusterNode node = nodes.poll(); 711: 712:if (*node*.isLocal()) { and nodes is found as: private Queue fallbacks(AffinityTopologyVersion topVer) { Deque fallbacks = new LinkedList<>(); Collection owners = new HashSet<>(); for (ClusterNode node : cctx.topology().owners(part, topVer)) { if (node.isLocal()) fallbacks.addFirst(node); else fallbacks.add(node); owners.add(node); } for (ClusterNode node : cctx.topology().moving(part)) { if (!owners.contains(node)) fallbacks.add(node); } return fallbacks; } It is clear that if topology versions are different an NPE is expected. But my claim is that an NPE can occur without any change in topology versions. In the example/dummy code the NPE happens before any change in topology versions on the cluster. But I could not convince him about that :) Or I am missing something really badly. Thanks all of you. Regards. -- Alper Tekinalp Software Developer Evam Streaming Analytics Atatürk Mah. Turgut Özal Bulv. Gardenya 5 Plaza K:6 Ataşehir 34758 İSTANBUL Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 www.evam.com.tr <http://www.evam.com>
Cache Metrics
Hi all. I have the following code: IgniteConfiguration igniteConfiguration = new IgniteConfiguration(); igniteConfiguration.setGridName("alper"); Ignite start = Ignition.start(igniteConfiguration); CacheConfiguration configuration = new CacheConfiguration(); configuration.setAtomicityMode(CacheAtomicityMode.ATOMIC) .setCacheMode(CacheMode.PARTITIONED) .setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED) .setRebalanceMode(CacheRebalanceMode.SYNC) .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC) .setRebalanceThrottle(100) .setRebalanceBatchSize(2*1024*1024) .setBackups(1) .setName("cemil") .setEagerTtl(false); start.getOrCreateCache(configuration); IgniteCache cemil = start.getOrCreateCache("cemil"); cemil.put("1", "10"); cemil.put("2", "10"); cemil.put("3", "10"); cemil.put("4", "10"); System.out.println(cemil.metrics().getOffHeapAllocatedSize()); System.out.println(cemil.metrics().getOffHeapBackupEntriesCount()); System.out.println(cemil.metrics().getOffHeapGets()); System.out.println(cemil.metrics().getOffHeapHits()); System.out.println(cemil.metrics().getOffHeapMisses()); System.out.println(cemil.metrics().getOffHeapPuts()); System.out.println(cemil.metrics().getOffHeapEvictions()); System.out.println(cemil.metrics().getOffHeapHitPercentage()); They all print 0. Is that normal? Am I doing something wrong? -- Alper Tekinalp Software Developer Evam Streaming Analytics Atatürk Mah. Turgut Özal Bulv. Gardenya 5 Plaza K:6 Ataşehir 34758 İSTANBUL Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 www.evam.com.tr <http://www.evam.com>
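One thing worth checking, although it is not confirmed anywhere in this thread: cache statistics are disabled by default, and the hit/miss/get/put counters in metrics() generally stay at zero until statistics are enabled on the cache configuration, e.g.:

// Possible cause to verify: enable cache statistics on the configuration
// built above so that metrics() actually accumulates counter values.
configuration.setStatisticsEnabled(true);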
Re: NullPointerException on ScanQuery
Hi all. Is there any comment on this? -- Alper Tekinalp Software Developer Evam Streaming Analytics Atatürk Mah. Turgut Özal Bulv. Gardenya 5 Plaza K:6 Ataşehir 34758 İSTANBUL Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 www.evam.com.tr <http://www.evam.com>
Re: NullPointerException on ScanQuery
> > Hi, > Hi Andrey. First of all thanks for your responses. First of all you have races in your code. For example your start two > threads and destroy caches before thread is finished and it leads to > cache closed error. It is of course reasonable to get "cache closed" errors in that case and I am OK with that. > Moreover, you stops application before any thread > finished and it leads to topology changing and NPE. > Correct me if I am wrong but I don't think that is true. The NPE occurs before stopping the application. Hence, before any topology changes. I can understand this error occurring on a topology change, but that is not the case here. > The second, I don't understand why do you use threads at all? Usually > you should start Ignite cluster, connect to it using client node and > run scan query. > > Starting of several instances of Ignite in one JVM makes sense only > for test purposes. Anyway, you should start Ignite instances, creates > caches and after it run threads with your tasks. Actually this is test code with which I try to simulate our real use case. In our application we have two server nodes running internally, and we run scan queries on those server nodes. In some cases both of the servers destroy the cache, and I expect to get the cache closed errors. But not NPEs, because there is no change in topology. Again thanks for your responses. Regards. -- Alper Tekinalp Software Developer Evam Streaming Analytics Atatürk Mah. Turgut Özal Bulv. Gardenya 5 Plaza K:6 Ataşehir 34758 İSTANBUL Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 www.evam.com.tr <http://www.evam.com>
Re: NullPointerException on ScanQuery
Hi Andrey. Were you able to look at the code? Regards. On Thu, Dec 8, 2016 at 10:05 AM, Alper Tekinalp wrote: > Hi. > > Could you please share your reproducer example? > > > I added classes to repoduce the error. It also throws cache closed errors > I am ok with it. But others. > > -- > Alper Tekinalp > > Software Developer > Evam Streaming Analytics > > Atatürk Mah. Turgut Özal Bulv. > Gardenya 5 Plaza K:6 Ataşehir > 34758 İSTANBUL > > Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 > www.evam.com.tr > <http://www.evam.com> > -- Alper Tekinalp Software Developer Evam Streaming Analytics Atatürk Mah. Turgut Özal Bulv. Gardenya 5 Plaza K:6 Ataşehir 34758 İSTANBUL Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 www.evam.com.tr <http://www.evam.com>
Re: NullPointerException on ScanQuery
Hi all. It seems one method uses affinity: cctx.affinity().primaryPartitions(n.id(), topologyVersion()); other uses topology API. cctx.topology().owners(part, topVer) Are these two fully consistent? On Tue, Dec 6, 2016 at 4:58 PM, Alper Tekinalp wrote: > Hi all. > > We have 2 servers and a cache X. On both servers a method is running > reqularly and run a ScanQurey on that cache. We get partitions for that > query via > > ignite.affinity(cacheName).primaryPartitions(ignite.cluster().localNode()) > > and run the query on each partitions. When cache has been destroyed by > master server on second server we get: > > javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: > null > at org.apache.ignite.internal.processors.cache. > IgniteCacheProxy.query(IgniteCacheProxy.java:740) > at com.intellica.evam.engine.event.future.FutureEventWorker. > processFutureEvents(FutureEventWorker.java:117) > at com.intellica.evam.engine.event.future.FutureEventWorker.run( > FutureEventWorker.java:66) > Caused by: class org.apache.ignite.IgniteCheckedException: null > at org.apache.ignite.internal.processors.query.GridQueryProcessor. > executeQuery(GridQueryProcessor.java:1693) > at org.apache.ignite.internal.processors.cache. > IgniteCacheProxy.query(IgniteCacheProxy.java:494) > at org.apache.ignite.internal.processors.cache. > IgniteCacheProxy.query(IgniteCacheProxy.java:732) > ... 2 more > Caused by: java.lang.NullPointerException > at org.apache.ignite.internal.processors.cache.query. > GridCacheQueryAdapter$ScanQueryFallbackClosableIterator.init( > GridCacheQueryAdapter.java:712) > at org.apache.ignite.internal.processors.cache.query. > GridCacheQueryAdapter$ScanQueryFallbackClosableIterator.( > GridCacheQueryAdapter.java:677) > at org.apache.ignite.internal.processors.cache.query. > GridCacheQueryAdapter$ScanQueryFallbackClosableIterator.( > GridCacheQueryAdapter.java:628) > at org.apache.ignite.internal.processors.cache.query. > GridCacheQueryAdapter.executeScanQuery(GridCacheQueryAdapter.java:548) > at org.apache.ignite.internal.processors.cache. > IgniteCacheProxy$2.applyx(IgniteCacheProxy.java:497) > at org.apache.ignite.internal.processors.cache. > IgniteCacheProxy$2.applyx(IgniteCacheProxy.java:495) > at org.apache.ignite.internal.util.lang.IgniteOutClosureX. > apply(IgniteOutClosureX.java:36) > at org.apache.ignite.internal.processors.query.GridQueryProcessor. > executeQuery(GridQueryProcessor.java:1670) > ... 4 more > > for a while until cache is closed on that server too. > > The corresponding line is: > > 710:final ClusterNode node = nodes.poll(); > 711: > 712:if (*node*.isLocal()) { > > Obviously node is null. nodes is a dequeue fill by following method: > > private Queue fallbacks(AffinityTopologyVersion > topVer) { > Deque fallbacks = new LinkedList<>(); > Collection owners = new HashSet<>(); > > for (ClusterNode node : cctx.topology().owners(part, topVer)) { > if (node.isLocal()) > fallbacks.addFirst(node); > else > fallbacks.add(node); > > owners.add(node); > } > > for (ClusterNode node : cctx.topology().moving(part)) { > if (!owners.contains(node)) > fallbacks.add(node); > } > > return fallbacks; > } > > There errors occurs before cache closed on second server. So checking if > cache closed is not enough. > > Why when we take partitions for local node we get some partitions but > ignite cant find any owner for that partition? > Is our method for getting partitions wrong? > Is there any way to avoid that? > > Best regards. 
> -- > Alper Tekinalp > > Software Developer > Evam Streaming Analytics > > Atatürk Mah. Turgut Özal Bulv. > Gardenya 5 Plaza K:6 Ataşehir > 34758 İSTANBUL > > Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 > www.evam.com.tr > <http://www.evam.com> > -- Alper Tekinalp Software Developer Evam Streaming Analytics Atatürk Mah. Turgut Özal Bulv. Gardenya 5 Plaza K:6 Ataşehir 34758 İSTANBUL Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 www.evam.com.tr <http://www.evam.com>
NullPointerException on ScanQuery
Hi all. We have 2 servers and a cache X. On both servers a method runs regularly and executes a ScanQuery on that cache. We get partitions for that query via ignite.affinity(cacheName).primaryPartitions(ignite.cluster().localNode()) and run the query on each partition. When the cache has been destroyed by the master server, on the second server we get: javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: null at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:740) at com.intellica.evam.engine.event.future.FutureEventWorker.processFutureEvents(FutureEventWorker.java:117) at com.intellica.evam.engine.event.future.FutureEventWorker.run(FutureEventWorker.java:66) Caused by: class org.apache.ignite.IgniteCheckedException: null at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:1693) at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:494) at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:732) ... 2 more Caused by: java.lang.NullPointerException at org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter$ScanQueryFallbackClosableIterator.init(GridCacheQueryAdapter.java:712) at org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter$ScanQueryFallbackClosableIterator.(GridCacheQueryAdapter.java:677) at org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter$ScanQueryFallbackClosableIterator.(GridCacheQueryAdapter.java:628) at org.apache.ignite.internal.processors.cache.query.GridCacheQueryAdapter.executeScanQuery(GridCacheQueryAdapter.java:548) at org.apache.ignite.internal.processors.cache.IgniteCacheProxy$2.applyx(IgniteCacheProxy.java:497) at org.apache.ignite.internal.processors.cache.IgniteCacheProxy$2.applyx(IgniteCacheProxy.java:495) at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36) at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:1670) ... 4 more for a while, until the cache is closed on that server too. The corresponding line is: 710:final ClusterNode node = nodes.poll(); 711: 712:if (*node*.isLocal()) { Obviously node is null. nodes is a deque filled by the following method: private Queue fallbacks(AffinityTopologyVersion topVer) { Deque fallbacks = new LinkedList<>(); Collection owners = new HashSet<>(); for (ClusterNode node : cctx.topology().owners(part, topVer)) { if (node.isLocal()) fallbacks.addFirst(node); else fallbacks.add(node); owners.add(node); } for (ClusterNode node : cctx.topology().moving(part)) { if (!owners.contains(node)) fallbacks.add(node); } return fallbacks; } These errors occur before the cache is closed on the second server. So checking whether the cache is closed is not enough. Why, when we take the partitions for the local node, do we get some partitions for which Ignite can't find any owner? Is our method for getting partitions wrong? Is there any way to avoid that? Best regards. -- Alper Tekinalp Software Developer Evam Streaming Analytics Atatürk Mah. Turgut Özal Bulv. Gardenya 5 Plaza K:6 Ataşehir 34758 İSTANBUL Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 www.evam.com.tr <http://www.evam.com>
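For illustration, the per-partition scan described above corresponds roughly to this sketch (ignite is an already started node; the cache name and value types are placeholders):

import javax.cache.Cache;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

IgniteCache<Object, Object> cache = ignite.cache("X");

// Partitions for which the local node is currently primary.
int[] parts = ignite.affinity("X").primaryPartitions(ignite.cluster().localNode());

for (int part : parts) {
    ScanQuery<Object, Object> qry = new ScanQuery<>();
    qry.setPartition(part);

    try (QueryCursor<Cache.Entry<Object, Object>> cur = cache.query(qry)) {
        for (Cache.Entry<Object, Object> entry : cur) {
            // process entry
        }
    }
}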
Server Node Stops Unexpectedly
Hi all. On one of our clusters we have 3 server nodes. Twice this week one of the nodes in the cluster stopped. We are getting errors stating that the node is stopping. We listen for node joined and left events, and before the errors we see logs saying node x left the cluster. I suspect that the reason is some kind of network timeout or similar, because the other nodes are up and running. 1 - Is it possible that on a network timeout nodes stop themselves? 2 - Is there any parameter or property that we can increase to avoid such timeouts? 3 - What could be another reason that nodes stop themselves? I will try to collect more information like logs etc. Regards. -- Alper Tekinalp Software Developer Evam Streaming Analytics Atatürk Mah. Turgut Özal Bulv. Gardenya 5 Plaza K:6 Ataşehir 34758 İSTANBUL Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 www.evam.com.tr <http://www.evam.com>
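For illustration only, not a confirmed fix: the discovery-related setting most commonly raised for unreliable networks is failureDetectionTimeout, e.g.:

import org.apache.ignite.configuration.IgniteConfiguration;

// Hypothetical tuning: raise the failure detection timeout (10 seconds by
// default) so that short network hiccups are less likely to drop a node.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setFailureDetectionTimeout(30_000); // 30 seconds; the value is illustrative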
Re: Threads got stuck
Hi. After restarting servers, that issue did not occur again. I guess for some reason the cluster did not start normally. Thanks responding so quick. Regards. On Tue, Oct 25, 2016 at 11:31 AM, Yakov Zhdanov wrote: > Alper, thanks for clarification, this will definitely help after we get the > info I requested. This is the only way to go with the investigation. > > --Yakov > > 2016-10-25 11:20 GMT+03:00 Alper Tekinalp : > > > Hi Yakov. > > > > I should also mention that we load cache data from one server and wait > the > > data to be replicated to others. Can that cause such a situation, too? > > > > On Tue, Oct 25, 2016 at 11:14 AM, Yakov Zhdanov > > wrote: > > > >> Alper, > >> > >> There can be multiple reasons. > >> > >> Can you please reproduce the issue one more time, collect and share the > >> following with us: > >> > >> 1. collect all the logs from all the nodes - clients and servers > >> 2. take threaddumps of all JVMs (from all nodes) with jstack -l > >> > >> --Yakov > >> > >> 2016-10-25 10:49 GMT+03:00 Alper Tekinalp : > >> > >>> Hi. > >>> > >>> There is also a few logs as : > >>> > >>> Failed to register marshalled class for more than 10 times in a row > >>> (may affect performance). > >>> > >>> Can it be releated? > >>> > >>> On Tue, Oct 25, 2016 at 10:32 AM, Alper Tekinalp > wrote: > >>> > >>>> Hi all. > >>>> > >>>> We have 3 servers and cache configuration like: > >>>> > >>>> >>>> name="DEFAULT"> > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> >>>> value="#{evamProperties['topology.cache.partition.size']}"/> > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> For our worker threads we check heartbeat and if a thread did not sent > >>>> heart beat for 10 minutes we consider it as stucked and interrrupt and > >>>> recreate it. > >>>> > >>>> As I can see all our worker threads are stucked in cache.put() state > >>>> and interrupted and recreated regularly. > >>>> > >>>> What can be the reason we are stucked at put? Following is stacktrace > >>>> for interruption error. > >>>> > >>>> javax.cache.CacheException: class org.apache.ignite. > IgniteInterruptedException: > >>>> Failed to wait for asynchronous operation permit (thread got > interrupted). > >>>> at org.apache.ignite.internal.processors.cache. > GridCacheUtils.c > >>>> onvertToCacheException(GridCacheUtils.java:1502) > >>>> at org.apache.ignite.internal.processors.cache. > IgniteCacheProxy > >>>> .cacheException(IgniteCacheProxy.java:2021) > >>>> at org.apache.ignite.internal.processors.cache. > IgniteCacheProxy > >>>> .put(IgniteCacheProxy.java:1221) > >>>> at com.intellica.project.helper.ee.ConfigManagerHelperEE. > setSta > >>>> te(ConfigManagerHelperEE.java:90) > >>>> at com.intellica.project.helper.ee. > StateMachineConfigManagerEEI > >>>> mpl.store(StateMachineConfigManagerEEImpl.java:53) > >>>> at com.evelopers.unimod.runtime.AbstractEventProcessor. > storeCon > >>>> fig(AbstractEventProcessor.java:175) > >>>> at com.evelopers.unimod.runtime.AbstractEventProcessor. > process( > >>>> AbstractEventProcessor.java:130) > >>>> at com.evelopers.unimod.runtime.AbstractEventProcessor. > process( > >>>> AbstractEventProcessor.java:80) > >>>> at com.evelopers.unimod.runtime.ModelEngine.process( > ModelEngine > >>>> .java:199) > >>>> at com.evelopers.unimod.runtime.StrictHandler.handle( > StrictHand > >>>> ler.java:46) > >>>> at com.intellica.evam.engine.server.worker. > AbstractScenarioWork > >>>> er.runScenarioLogic(Abstrac
Re: Threads got stuck
Hi Yakov. I should also mention that we load cache data from one server and wait the data to be replicated to others. Can that cause such a situation, too? On Tue, Oct 25, 2016 at 11:14 AM, Yakov Zhdanov wrote: > Alper, > > There can be multiple reasons. > > Can you please reproduce the issue one more time, collect and share the > following with us: > > 1. collect all the logs from all the nodes - clients and servers > 2. take threaddumps of all JVMs (from all nodes) with jstack -l > > --Yakov > > 2016-10-25 10:49 GMT+03:00 Alper Tekinalp : > >> Hi. >> >> There is also a few logs as : >> >> Failed to register marshalled class for more than 10 times in a row (may >> affect performance). >> >> Can it be releated? >> >> On Tue, Oct 25, 2016 at 10:32 AM, Alper Tekinalp wrote: >> >>> Hi all. >>> >>> We have 3 servers and cache configuration like: >>> >>> >> name="DEFAULT"> >>> >>> >>> >>> >>> >>> >>> >> value="#{evamProperties['topology.cache.partition.size']}"/> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> For our worker threads we check heartbeat and if a thread did not sent >>> heart beat for 10 minutes we consider it as stucked and interrrupt and >>> recreate it. >>> >>> As I can see all our worker threads are stucked in cache.put() state and >>> interrupted and recreated regularly. >>> >>> What can be the reason we are stucked at put? Following is stacktrace >>> for interruption error. >>> >>> javax.cache.CacheException: class >>> org.apache.ignite.IgniteInterruptedException: >>> Failed to wait for asynchronous operation permit (thread got interrupted). >>> at org.apache.ignite.internal.processors.cache.GridCacheUtils.c >>> onvertToCacheException(GridCacheUtils.java:1502) >>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy >>> .cacheException(IgniteCacheProxy.java:2021) >>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy >>> .put(IgniteCacheProxy.java:1221) >>> at com.intellica.project.helper.ee.ConfigManagerHelperEE.setSta >>> te(ConfigManagerHelperEE.java:90) >>> at com.intellica.project.helper.ee.StateMachineConfigManagerEEI >>> mpl.store(StateMachineConfigManagerEEImpl.java:53) >>> at com.evelopers.unimod.runtime.AbstractEventProcessor.storeCon >>> fig(AbstractEventProcessor.java:175) >>> at com.evelopers.unimod.runtime.AbstractEventProcessor.process( >>> AbstractEventProcessor.java:130) >>> at com.evelopers.unimod.runtime.AbstractEventProcessor.process( >>> AbstractEventProcessor.java:80) >>> at com.evelopers.unimod.runtime.ModelEngine.process(ModelEngine >>> .java:199) >>> at com.evelopers.unimod.runtime.StrictHandler.handle(StrictHand >>> ler.java:46) >>> at com.intellica.evam.engine.server.worker.AbstractScenarioWork >>> er.runScenarioLogic(AbstractScenarioWorker.java:172) >>> at com.intellica.evam.engine.server.worker.AbstractScenarioWork >>> er.runScenario(AbstractScenarioWorker.java:130) >>> at com.intellica.evam.engine.server.worker.AsyncWorker.processE >>> vent(AsyncWorker.java:156) >>> at com.intellica.evam.engine.server.worker.AsyncWorker.run(Asyn >>> cWorker.java:88) >>> Caused by: class org.apache.ignite.IgniteInterruptedException: Failed >>> to wait for asynchronous operation permit (thread got interrupted). >>> at org.apache.ignite.internal.util.IgniteUtils$2.apply(IgniteUt >>> ils.java:747) >>> at org.apache.ignite.internal.util.IgniteUtils$2.apply(IgniteUt >>> ils.java:745) >>> ... 
14 more >>> Caused by: java.lang.InterruptedException >>> at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquir >>> eSharedInterruptibly(AbstractQueuedSynchronizer.java:1301) >>> at java.util.concurrent.Semaphore.acquire(Semaphore.java:317) >>> at org.apache.ignite.internal.processors.cache.GridCacheAdapter >>> .asyncOpAcquire(GridCacheAdapter.java:4597) >>> at org.apache.ignite.internal.processors.cache.distributed.dht. &g
Re: Threads got stuck
Hi. There is also a few logs as : Failed to register marshalled class for more than 10 times in a row (may affect performance). Can it be releated? On Tue, Oct 25, 2016 at 10:32 AM, Alper Tekinalp wrote: > Hi all. > > We have 3 servers and cache configuration like: > > name="DEFAULT"> > > > > > > > value="#{evamProperties['topology.cache.partition.size']}"/> > > > > > > > > > > For our worker threads we check heartbeat and if a thread did not sent > heart beat for 10 minutes we consider it as stucked and interrrupt and > recreate it. > > As I can see all our worker threads are stucked in cache.put() state and > interrupted and recreated regularly. > > What can be the reason we are stucked at put? Following is stacktrace for > interruption error. > > javax.cache.CacheException: class > org.apache.ignite.IgniteInterruptedException: > Failed to wait for asynchronous operation permit (thread got interrupted). > at org.apache.ignite.internal.processors.cache.GridCacheUtils. > convertToCacheException(GridCacheUtils.java:1502) > at org.apache.ignite.internal.processors.cache.IgniteCacheProxy. > cacheException(IgniteCacheProxy.java:2021) > at org.apache.ignite.internal.processors.cache. > IgniteCacheProxy.put(IgniteCacheProxy.java:1221) > at com.intellica.project.helper.ee.ConfigManagerHelperEE.setState( > ConfigManagerHelperEE.java:90) > at com.intellica.project.helper.ee.StateMachineConfigManagerEEImp > l.store(StateMachineConfigManagerEEImpl.java:53) > at com.evelopers.unimod.runtime.AbstractEventProcessor. > storeConfig(AbstractEventProcessor.java:175) > at com.evelopers.unimod.runtime.AbstractEventProcessor.process( > AbstractEventProcessor.java:130) > at com.evelopers.unimod.runtime.AbstractEventProcessor.process( > AbstractEventProcessor.java:80) > at com.evelopers.unimod.runtime.ModelEngine.process( > ModelEngine.java:199) > at com.evelopers.unimod.runtime.StrictHandler.handle( > StrictHandler.java:46) > at com.intellica.evam.engine.server.worker.AbstractScenarioWorker. > runScenarioLogic(AbstractScenarioWorker.java:172) > at com.intellica.evam.engine.server.worker.AbstractScenarioWorker. > runScenario(AbstractScenarioWorker.java:130) > at com.intellica.evam.engine.server.worker.AsyncWorker. > processEvent(AsyncWorker.java:156) > at com.intellica.evam.engine.server.worker.AsyncWorker.run( > AsyncWorker.java:88) > Caused by: class org.apache.ignite.IgniteInterruptedException: Failed to > wait for asynchronous operation permit (thread got interrupted). > at org.apache.ignite.internal.util.IgniteUtils$2.apply( > IgniteUtils.java:747) > at org.apache.ignite.internal.util.IgniteUtils$2.apply( > IgniteUtils.java:745) > ... 14 more > Caused by: java.lang.InterruptedException > at java.util.concurrent.locks.AbstractQueuedSynchronizer. > acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1301) > at java.util.concurrent.Semaphore.acquire(Semaphore.java:317) > at org.apache.ignite.internal.processors.cache.GridCacheAdapter. > asyncOpAcquire(GridCacheAdapter.java:4597) > at org.apache.ignite.internal.processors.cache.distributed. > dht.atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:683) > at org.apache.ignite.internal.processors.cache.distributed. > dht.atomic.GridDhtAtomicCache.updateAsync0(GridDhtAtomicCache.java:1014) > at org.apache.ignite.internal.processors.cache.distributed. > dht.atomic.GridDhtAtomicCache.putAsync0(GridDhtAtomicCache.java:484) > at org.apache.ignite.internal.processors.cache. 
> GridCacheAdapter.putAsync(GridCacheAdapter.java:2541) > at org.apache.ignite.internal.processors.cache.distributed. > dht.atomic.GridDhtAtomicCache.put(GridDhtAtomicCache.java:461) > at org.apache.ignite.internal.processors.cache. > GridCacheAdapter.put(GridCacheAdapter.java:2215) > at org.apache.ignite.internal.processors.cache. > IgniteCacheProxy.put(IgniteCacheProxy.java:1214) > ... 11 more > > > -- > Alper Tekinalp > > Software Developer > Evam Streaming Analytics > > Atatürk Mah. Turgut Özal Bulv. > Gardenya 5 Plaza K:6 Ataşehir > 34758 İSTANBUL > > Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 > www.evam.com.tr > <http://www.evam.com> > -- Alper Tekinalp Software Developer Evam Streaming Analytics Atatürk Mah. Turgut Özal Bulv. Gardenya 5 Plaza K:6 Ataşehir 34758 İSTANBUL Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 www.evam.com.tr <http://www.evam.com>
Threads got stuck
Hi all. We have 3 servers and a cache configuration like: For our worker threads we check heartbeats, and if a thread has not sent a heartbeat for 10 minutes we consider it stuck and interrupt and recreate it. As far as I can see, all our worker threads get stuck in cache.put() and are interrupted and recreated regularly. What can be the reason we are stuck at put? Following is the stack trace for the interruption error. javax.cache.CacheException: class org.apache.ignite.IgniteInterruptedException: Failed to wait for asynchronous operation permit (thread got interrupted). at org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1502) at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.cacheException(IgniteCacheProxy.java:2021) at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1221) at com.intellica.project.helper.ee.ConfigManagerHelperEE.setState(ConfigManagerHelperEE.java:90) at com.intellica.project.helper.ee.StateMachineConfigManagerEEImpl.store(StateMachineConfigManagerEEImpl.java:53) at com.evelopers.unimod.runtime.AbstractEventProcessor.storeConfig(AbstractEventProcessor.java:175) at com.evelopers.unimod.runtime.AbstractEventProcessor.process(AbstractEventProcessor.java:130) at com.evelopers.unimod.runtime.AbstractEventProcessor.process(AbstractEventProcessor.java:80) at com.evelopers.unimod.runtime.ModelEngine.process(ModelEngine.java:199) at com.evelopers.unimod.runtime.StrictHandler.handle(StrictHandler.java:46) at com.intellica.evam.engine.server.worker.AbstractScenarioWorker.runScenarioLogic(AbstractScenarioWorker.java:172) at com.intellica.evam.engine.server.worker.AbstractScenarioWorker.runScenario(AbstractScenarioWorker.java:130) at com.intellica.evam.engine.server.worker.AsyncWorker.processEvent(AsyncWorker.java:156) at com.intellica.evam.engine.server.worker.AsyncWorker.run(AsyncWorker.java:88) Caused by: class org.apache.ignite.IgniteInterruptedException: Failed to wait for asynchronous operation permit (thread got interrupted). at org.apache.ignite.internal.util.IgniteUtils$2.apply(IgniteUtils.java:747) at org.apache.ignite.internal.util.IgniteUtils$2.apply(IgniteUtils.java:745) ... 14 more Caused by: java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1301) at java.util.concurrent.Semaphore.acquire(Semaphore.java:317) at org.apache.ignite.internal.processors.cache.GridCacheAdapter.asyncOpAcquire(GridCacheAdapter.java:4597) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:683) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAsync0(GridDhtAtomicCache.java:1014) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.putAsync0(GridDhtAtomicCache.java:484) at org.apache.ignite.internal.processors.cache.GridCacheAdapter.putAsync(GridCacheAdapter.java:2541) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put(GridDhtAtomicCache.java:461) at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2215) at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1214) ... 11 more -- Alper Tekinalp Software Developer Evam Streaming Analytics Atatürk Mah. Turgut Özal Bulv.
Gardenya 5 Plaza K:6 Ataşehir 34758 İSTANBUL Tel: +90 216 455 01 53 Fax: +90 216 455 01 54 www.evam.com.tr <http://www.evam.com>
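For illustration, the heartbeat watchdog described in the message above could be sketched roughly as follows; all class, method and field names are made up and not taken from the real code base:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

public class WorkerWatchdog {
    /** Workers are considered stuck after 10 minutes without a heartbeat. */
    private static final long STUCK_THRESHOLD_MS = 10 * 60 * 1000L;

    private final ConcurrentMap<Thread, Long> heartbeats = new ConcurrentHashMap<>();

    /** Called by a worker thread from its main loop. */
    public void beat() {
        heartbeats.put(Thread.currentThread(), System.currentTimeMillis());
    }

    /** Called periodically by a monitor thread; recreate builds a replacement worker. */
    public void checkWorkers(Function<Thread, Thread> recreate) {
        long now = System.currentTimeMillis();

        for (Map.Entry<Thread, Long> e : heartbeats.entrySet()) {
            if (now - e.getValue() > STUCK_THRESHOLD_MS) {
                Thread stuck = e.getKey();

                // Interrupting the stuck thread is what produces the
                // IgniteInterruptedException seen in the stack trace above.
                stuck.interrupt();
                heartbeats.remove(stuck);

                Thread fresh = recreate.apply(stuck);
                fresh.start();
                heartbeats.put(fresh, now);
            }
        }
    }
}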
Client Connecting Issues
Hi all. We have two sets of applications: server and client applications. Servers run all the time, and with the client applications we connect to the server, do things and disconnect, etc. We have some issues with those connections: 1 - We use TcpDiscoveryVmIpFinder to connect to the cluster, but establishing the connection takes some time. We give clients a default of 20 seconds to connect; after that we start the client applications in offline mode. Is that normal? Is there a way to decrease that time? 2 - Most of the time, during the life of the client apps, the client disconnects from the cluster, goes into offline mode and never reconnects without a restart. After a restart it disconnects again after some time. Most of the time we connect the client apps through a VPN, and we don't have the chance to improve or edit the network configuration or quality. Our SPI config for the client is: private IgniteConfiguration getIgniteConfiguration() { TcpDiscoverySpi spi = new TcpDiscoverySpi(); TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder(); ipFinder.setAddresses("IP of oldest node in cluster"); spi.setIpFinder(ipFinder); IgniteConfiguration igniteConfiguration = new IgniteConfiguration(); igniteConfiguration.setClientMode(true) .setPeerClassLoadingEnabled(false) .setIncludeEventTypes(EventType.EVTS_DISCOVERY) .setDiscoverySpi(spi) .setAddressResolver(new AmazonAddressResolver()); return igniteConfiguration; } Do you have any suggestions? What can we do, especially for the second case? Regards. -- *Alper Tekinalp* *Software Developer* *Evam Stream Analytics* *Atatürk Mah.Turgut Özal Bulvarı Gardenya 1 Plaza 42/B K:4 Ataşehir / İSTANBUL* *Tlf : +90216 688 45 46 <%2B90216%20688%2045%2046> Fax : +90216 688 45 47 <%2B90216%20688%2045%2047> Gsm:+90 536 222 76 01 <%2B90%20553%20489%2044%2099>* *www.evam.com <http://www.evam.com/>* <http://www.evam.com>
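On the second issue, for illustration only: a common pattern is to catch the disconnect on a failed cache operation and wait for the automatic reconnect before retrying, roughly as below (cache, key and value are placeholders):

import javax.cache.CacheException;
import org.apache.ignite.IgniteClientDisconnectedException;

try {
    cache.put(key, value);
}
catch (CacheException e) {
    if (e.getCause() instanceof IgniteClientDisconnectedException) {
        IgniteClientDisconnectedException cause = (IgniteClientDisconnectedException)e.getCause();

        // Blocks until the client reconnects to the cluster, then retry.
        cause.reconnectFuture().get();
        cache.put(key, value);
    }
    else
        throw e;
}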
Re: Contribution merged: IGNITE-2394 Cache loading from storage is called on client nodes
Hi Denis, I want to thank you and all devs for your support. Will try to contribute more. Best regards. 2016-05-17 17:26 GMT+03:00 Denis Magda : > Alper, let me thank you from the community side for the contribution you’ve > done. > > The contribution fixed an unpleasant issue when loading from a cache store > was triggered on a client side (for non local caches) causing useless > transferring of data from a storage to the client side. > > Waiting for more contributions from your side. > > Regards, > Denis -- Alper Tekinalp atekinalp.blogspot.com
Server node info when client disconnects
Hi. I have a server and a client. I connect to the server from the client with a static IP. When the server is closed I get a client node disconnected event. Is there an easy way to know the UUID of the server? Best regards. -- Alper Tekinalp Software Developer Evam Stream Analytics Atatürk Mah.Turgut Özal Bulvarı Gardenya 1 Plaza 42/B K:4 Ataşehir / İSTANBUL Tlf : +90216 688 45 46 Fax : +90216 688 45 47 Gsm:+90 536 222 76 01 www.evam.com
Connecting Amazon cluster with client from local
Hi all. Is there a way to connect to an Amazon cluster from a local client? I tried to use static IP discovery with the public IP of a node. It seems to find the cluster but can't connect. As I understand from the error messages, the client tries to use the internal IP of the server node for other things. How can I work around that? Is there a suitable way to implement this? -- Alper Tekinalp Software Developer Evam Stream Analytics Atatürk Mah.Turgut Özal Bulvarı Gardenya 1 Plaza 42/B K:4 Ataşehir / İSTANBUL Tlf : +90216 688 45 46 Fax : +90216 688 45 47 Gsm:+90 536 222 76 01 www.evam.com
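For illustration, an address resolver that maps internal addresses to public ones (similar in spirit to the AmazonAddressResolver mentioned in the post above) could be sketched as follows; the IP addresses are placeholders and a real mapping would come from configuration:

import java.net.InetSocketAddress;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.ignite.configuration.AddressResolver;

public class PublicIpAddressResolver implements AddressResolver {
    /** Internal (VPC) address -> public address; placeholder values. */
    private final Map<String, String> internalToPublic = new HashMap<>();

    public PublicIpAddressResolver() {
        internalToPublic.put("10.0.0.5", "203.0.113.10");
    }

    @Override public Collection<InetSocketAddress> getExternalAddresses(InetSocketAddress addr) {
        String pub = internalToPublic.get(addr.getHostString());

        if (pub == null)
            return Collections.singleton(addr);

        return Collections.singleton(new InetSocketAddress(pub, addr.getPort()));
    }
}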
Re: Hello
Hi. You can choose an issue from this list: https://issues.apache.org/jira/issues/?jql=labels%20%3D%20newbie%20and%20project%20%3D%20%22IGNITE%22 It contains issues labelled as newbie, which are a good starting point. Good luck and welcome. On Thu, Mar 10, 2016 at 10:31 AM, Ryan Zhao wrote: > Hi everyone! > > My name is Ryan, and I'm very interested in this project and willing to > contribute to it. > > Let me know if I can do something useful! -- Alper Tekinalp Software Developer Evam Stream Analytics Atatürk Mah.Turgut Özal Bulvarı Gardenya 1 Plaza 42/B K:4 Ataşehir / İSTANBUL Tlf : +90216 688 45 46 Fax : +90216 688 45 47 Gsm:+90 536 222 76 01 www.evam.com