Possible starvation in striped pool in one of the nodes

2022-06-08 Thread Lo, Marcus
=org.apache.ignite.internal.util.typedef.G >>> Possible starvation in striped pool. Thread name: sys-stripe-7-#8%Ignite% Queue: [Message closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8, ordered=false, timeout=0, skipOnTimeout=false, msg=GridDhtAtomicSingleUpda

Re: When the client frequently has FullGC, it blocks all requests from the server. "Possible starvation in striped pool"

2019-05-23 Thread Ilya Kasnacheev
ing occurred is a large number of "[2019-05-21T16:36:04,880][WARN > ][grid-timeout-worker-#10343][G] >>> Possible starvation in striped pool." > > Please refer to the attachment for the full log, 10.110.118.53 in the log > is the FullGC test node. > > What parameters

Re: ignite zk: Possible starvation in striped pool

2019-01-22 Thread wangsan
Thank you! I see that this is the communication SPI, not the discovery SPI. But on the other nodes there are many ZK session timeout or ZK reconnect failure messages. And the starvation message only prints on the node which hosts the ZK server (not a cluster, just three ZK nodes on one machine) in the same

Re: ignite zk: Possible starvation in striped pool

2019-01-22 Thread Denis Mekhanikov
communication SPI, and doesn't have anything to do with ZooKeeper. Denis Tue, 22 Jan 2019 at 15:38, wangsan : > 10:38:31.577 [grid-timeout-worker-#55%DAEMON-NODE-10-153-106-16-8991%] > WARN > o.a.ignite.internal.util.typedef.G - >>> Possible starvation in striped > po

ignite zk: Possible starvation in striped pool

2019-01-22 Thread wangsan
10:38:31.577 [grid-timeout-worker-#55%DAEMON-NODE-10-153-106-16-8991%] WARN o.a.ignite.internal.util.typedef.G - >>> Possible starvation in striped pool. Thread name: sys-stripe-9-#10%DAEMON-NODE-10-153-106-16-8991% Queue: [] Deadlock: false Completed: 17156 Thread [

Re: Possible starvation in striped pool

2018-07-20 Thread Ilya Kasnacheev
Hello! At this point I recommend debugging which statements are run on Oracle and why they take so long. Also, I have noticed: appDataSource - is it behind some kind of connection pool? I am afraid it is possible that this data source is single-threaded in the absence of a connection pool, hence you

Re: Possible starvation in striped pool

2018-07-18 Thread Shailendrasinh Gohil
Here you go...

Re: Possible starvation in striped pool

2018-07-18 Thread Ilya Kasnacheev
Hello again! I have just noticed the following stack trace: "flusher-0-#588%AppCluster%" #633 prio=5 os_prio=0 tid=0x7f18d424f800 nid=0xe1bb runnable [0x7f197c1cd000] java.lang.Thread.State: RUNNABLE at java.net.SocketInputStream.socketRead0(Native Method) at

Re: Possible starvation in striped pool

2018-07-18 Thread Ilya Kasnacheev
Hello! Can you please share the configuration of your Apache Ignite nodes, especially the cache stores of your caches. I have just noticed that you're actually waiting on a cache store lock. Regards, -- Ilya Kasnacheev 2018-07-17 19:11 GMT+03:00 Shailendrasinh Gohil <

Re: Possible starvation in striped pool

2018-07-17 Thread Shailendrasinh Gohil
We are using a TreeMap for all the putAll operations. We also tried the streamer API to create the automatic batches. Still the issue is the same. -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Possible starvation in striped pool

2018-07-17 Thread Sambhaji Sawant
Hello, the same issue occurred when trying to put an object into the cache using the cache.put method. After changing put to putAsync the issue was solved. I have read that when you use the putAll method you should pass a sorted collection to it so that it avoids deadlock. Is that true? On Tue, Jul 17, 2018, 8:22 PM ilya.kasnacheev
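The put vs. putAsync distinction above can be sketched with plain JDK types (no Ignite on the classpath; the class and method names below are illustrative stand-ins, not Ignite's API): a synchronous call occupies the calling thread until the write completes, while the async variant returns a future immediately and the caller attaches a callback instead of blocking. In Ignite the same shape applies to `IgniteCache.put` vs `IgniteCache.putAsync`, which matters when the caller is itself a striped-pool thread.

```java
import java.util.concurrent.*;

// Plain-JDK analogy for blocking put vs non-blocking putAsync.
public class AsyncPut {
    static final ExecutorService POOL = Executors.newSingleThreadExecutor();

    // Stand-in for a cache write that takes some time on a remote node.
    static String write(String key, String val) {
        return key + "=" + val;
    }

    // Synchronous: the caller's thread is tied up until write() returns.
    static String putSync(String key, String val) {
        return write(key, val);
    }

    // Asynchronous: returns immediately; the write runs on another thread.
    static CompletableFuture<String> putAsync(String key, String val) {
        return CompletableFuture.supplyAsync(() -> write(key, val), POOL);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(putSync("k", "1"));
        // Attach a callback instead of blocking the calling thread.
        putAsync("k", "2").thenAccept(System.out::println);
        POOL.shutdown();
        POOL.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```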

Re: Possible starvation in striped pool

2018-07-17 Thread ilya.kasnacheev
Hello! I have noticed that you are using putAll in your code. Apache Ignite is susceptible to deadlocks in the same fashion as regular multi-threaded code: i.e., if you take multiple locks (as putAll does, on the partitions for its keys), you can get a deadlock unless you maintain a consistent sequence of locks,
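The lock-ordering point above can be demonstrated with plain JDK locks (no Ignite; the per-partition locks here are hypothetical stand-ins for the partition locks putAll takes): if every caller acquires the locks for its keys in one global order, a circular wait between two batches becomes impossible.

```java
import java.util.*;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of deadlock avoidance via a global lock order (sorted keys).
public class OrderedLocking {
    // One lock per "partition" (illustrative stand-ins, not Ignite's).
    static final Map<Integer, ReentrantLock> LOCKS = new HashMap<>();
    static {
        for (int i = 0; i < 8; i++) LOCKS.put(i, new ReentrantLock());
    }

    // Acquires the locks for a batch of keys and returns the acquisition
    // order. Sorting first means every caller takes locks in the same
    // global order, which rules out circular waits between batches.
    static List<Integer> lockOrderFor(Collection<Integer> keys) {
        List<Integer> order = new ArrayList<>(new TreeSet<>(keys));
        for (Integer k : order) LOCKS.get(k).lock();
        try {
            return order;
        } finally {
            for (Integer k : order) LOCKS.get(k).unlock();
        }
    }

    public static void main(String[] args) {
        // Two callers with the same keys in different insertion orders
        // still acquire the partition locks identically.
        System.out.println(lockOrderFor(Arrays.asList(3, 1, 2)));
        System.out.println(lockOrderFor(Arrays.asList(2, 3, 1)));
    }
}
```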

Re: Possible starvation in striped pool

2018-07-16 Thread Shailendrasinh Gohil
Please find attached thread dump as requested. ServerThreadDump0716.txt -- Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Possible starvation in striped pool

2018-07-16 Thread Ilya Kasnacheev
Hello! Can you please provide the thread dump of problematic cluster after removal of close statements on caches? Regards, -- Ilya Kasnacheev 2018-07-16 17:21 GMT+03:00 Shailendrasinh Gohil < shailendrasinh.go...@salientcrgt.com>: > Thanks again for the response. > > We have tried removing

Re: Possible starvation in striped pool

2018-07-16 Thread Shailendrasinh Gohil
Thanks again for the response. We have tried removing the close statements but the result was the same. And yes, other threads access the cache from the same Dao. We also tried both atomicityMode settings to see if there was any improvement. We also have write-behind enabled for the large tables with frequent get

Re: Possible starvation in striped pool

2018-07-13 Thread Ilya Kasnacheev
Hello! I can see here that you are trying to destroy a cache: at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177) at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140) at

Re: Possible starvation in striped pool

2018-07-12 Thread Shailendrasinh Gohil
Thank you for your response. Please find attached thread dumps for client and server nodes. ClientThreadDump.txt ThreadDumpServer1.txt

Re: Possible starvation in striped pool

2018-07-12 Thread Ilya Kasnacheev
ta from cache. We see the below issue when there are more than 2 > users performing the similar operation on their own data. This was not the > performance we expected from the documentation. > > > > WARN [org.apache.ignite.internal.util.typedef.G] - >>> Possible > starvati

Possible starvation in striped pool

2018-07-12 Thread Gohil, Shailendrasinh (INTL)
their data from cache. We see the below issue when there are more than 2 users performing a similar operation on their own data. This was not the performance we expected from the documentation. WARN [org.apache.ignite.internal.util.typedef.G] - >>> Possible starvation in striped pool.

Re: Possible starvation in striped pool. message

2017-08-30 Thread ezhuravlev
peerClassLoading is used only for compute, for example for sharing job classes between nodes; it does not work for objects that are put into the cache. If you want to work without these classes on the nodes, take a look at BinaryObjects: https://apacheignite.readme.io/v2.0/docs/binary-marshaller Evgenii --

Re: Possible starvation in striped pool. message

2017-08-29 Thread kestas
http://apache-ignite-users.70518.x6.nabble.com/Possible-starvation-in-striped-pool-message-tp15993p16482.html Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: Possible starvation in striped pool. message

2017-08-09 Thread Yakov Zhdanov
ache.get(1).getSome(); > > > > -- > View this message in context: http://apache-ignite-users. > 70518.x6.nabble.com/Possible-starvation-in-striped-pool- > message-tp15993p16081.html > Sent from the Apache Ignite Users mailing list archive at Nabble.com. >

Re: Possible starvation in striped pool. message

2017-08-07 Thread kestas
Yes, this seems to appear when we start working with large objects. Is there a way to solve this? Does it affect cache put/get performance directly? -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Possible-starvation-in-striped-pool-message

Re: Possible starvation in striped pool. message

2017-08-04 Thread slava.koptilin
()/putAll() methods from different threads. In that case, you need to sort the collection of keys first, because a batch operation on the same entries in a different order could lead to a deadlock. Thanks. -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Possible-starvation
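The sorting advice above amounts to one line of plain JDK code: copy the batch into a `TreeMap` before calling putAll, so every caller iterates (and therefore locks) the keys in the same ascending order. The `cache.putAll` call itself is left as a comment because it would need a running Ignite node; everything else is standard library.

```java
import java.util.*;

// Sort a putAll batch so all callers lock partitions in one global order.
public class SortedBatch {
    // A HashMap iterates entries in an unspecified order, so two threads
    // building "the same" batch may lock partitions differently; a TreeMap
    // fixes one global iteration order (natural ordering of the keys).
    static <V> SortedMap<Integer, V> sortedBatch(Map<Integer, V> batch) {
        return new TreeMap<>(batch);
    }

    public static void main(String[] args) {
        Map<Integer, String> batch = new HashMap<>();
        batch.put(42, "b");
        batch.put(7, "a");
        batch.put(99, "c");
        SortedMap<Integer, String> sorted = sortedBatch(batch);
        // cache.putAll(sorted);  // hypothetical Ignite call
        System.out.println(sorted.keySet());  // ascending key order
    }
}
```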

Possible starvation in striped pool. message

2017-08-04 Thread kestas
Hi, sometimes we get this message in the logs. What does it mean? Jul 26, 2017 11:43:25 AM org.apache.ignite.logger.java.JavaLogger warning WARNING: >>> Possible starvation in striped pool. Thread name: sys-stripe-3-#4%null% Queue: [] Deadlock: false Completed: 17 Thread [

Re: Possible starvation in striped pool

2017-07-18 Thread Andrey Mashenkov
you use? >> Would you please share full logs? >> >> On Fri, Jul 14, 2017 at 1:24 PM, Alper Tekinalp <al...@evam.com> wrote: >> >>> Hi. >>> >>> What does following log means: >>> >>> [WARN ] 2017-07-12 23:00:50.786 [grid-tim

Re: Possible starvation in striped pool

2017-07-14 Thread Andrey Mashenkov
, Jul 14, 2017 at 1:24 PM, Alper Tekinalp <al...@evam.com> wrote: > Hi. > > What does following log means: > > [WARN ] 2017-07-12 23:00:50.786 [grid-timeout-worker-#71%cache-server%] G > - >>> Possible starvation in striped pool: sys-stripe-10-#11%cache-server% >

Possible starvation in striped pool

2017-07-14 Thread Alper Tekinalp
Hi. What does the following log mean: [WARN ] 2017-07-12 23:00:50.786 [grid-timeout-worker-#71%cache-server%] G - >>> Possible starvation in striped pool: sys-stripe-10-#11%cache-server% [Message closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8, ordered=false,