Ignite node is down due to full RAM usage

2019-03-22 Thread praveeng
Hi,

Ignite version: 1.8
One of the Ignite nodes in our 3-node cluster went down due to full RAM usage.

At that point in time, I observed the following logs on this node:

[00:32:02,119][INFO
][grid-timeout-worker-#7%CasinoApacheIgniteServices%][IgniteKernal%CasinoApacheIgniteServices]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=9f8df386, name=CasinoApacheIgniteServices,
uptime=23:21:45:744]
^-- H/N/C [hosts=8, nodes=8, CPUs=44]
^-- CPU [cur=8.33%, avg=1.6%, GC=0%]
^-- Heap [used=3886MB, free=36.65%, comm=6134MB]
^-- Non heap [used=78MB, free=85.96%, comm=529MB]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=16, qSize=0]
^-- Outbound messages queue [size=0]

[00:33:24,674][WARN
][exchange-worker-#23%CasinoApacheIgniteServices%][GridCachePartitionExchangeManager]
Failed to wait for partition map exchange [topVer=AffinityTopologyVersion
[topVer=84, minorTopVer=0], node=9f8df386-2886-451f-b1ff-53713878d432].
Dumping pending objects that might be the cause:


SAR stats for memory usage on this date: 

-- Mar 6
12:00:01 AM  kbmemfree  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive   kbinact  kbdirty
12:10:01 PM     170120   16090232     98.95          0   3393384   8222696    45.02   9887268   2088504       60
01:50:01 PM     168176   16092176     98.97          0   2120848   8224724    45.03  10804712   1596792       48
03:10:01 PM     199128   16061224     98.78          0    991832   8224904    45.04  11384652   1241284      436
04:10:01 PM     153060   16107292     99.06          0    229984   8224880    45.04  11255628   1627600      208
04:20:01 PM     165580   16094772     98.98          0     78572   8224828    45.03  11338592   1560944       52
04:30:01 PM     153508   16106844     99.06          0     29740   8224872    45.03  11436544   1579468       44
04:40:01 PM     162184   16098168     99.00          0     33152   8224892    45.04  11606584   1580388       24
11:10:01 PM     370956   15889396     97.72          0     74816   8225312    45.04  11927676   1610828       36
11:20:01 PM     348576   15911776     97.86          0     69012   8225272    45.04  11929820   1602748       48
11:30:01 PM     359132   15901220     97.79          0     27060   8225308    45.04  11912656   1577848       36
11:40:01 PM     340252   15920100     97.91          0     24908   8225272    45.04  11910516   1577668       32
11:50:01 PM     308340   15952012     98.10          0     39208   8242284    45.13  11914564   1589208       48
Average:        253568   16006784     98.44          0   2317289   8226063    45.04  10368276   1955525      142
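A note on reading these numbers: sar's %memused counts page cache as "used", so a rough application-level figure adds kbbuffers and kbcached back to the free side (a heuristic sketch only; on newer kernels, MemAvailable in /proc/meminfo is the accurate measure):

```python
# 12:10 row from the sar output above (values in KB).
kbmemfree, kbmemused = 170120, 16090232
kbbuffers, kbcached = 0, 3393384

total = kbmemfree + kbmemused
reclaimable_free = kbmemfree + kbbuffers + kbcached   # free + reclaimable page cache
app_used_pct = 100 * (total - reclaimable_free) / total

print(f"app-level usage: {app_used_pct:.1f}% (vs 98.95% raw %memused)")
```

By the later rows kbcached has shrunk below 100 MB, so at that point the memory really was exhausted rather than merely cached.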

Please find the attached file for the cache configuration:
ignite-clb-cache-config_dev.xml

Please find the memory snapshot captured by the AppDynamics tool in the attachment:
memorySnapshot.JPG

Following is my analysis.
When data is evicted from on-heap to off-heap, there is not much space left in off-heap. Because of that, off-heap memory usage is full and the application has become slow and unresponsive.

Also, the data in off-heap is not expiring, which is why there is not much free memory left in RAM.
After I restarted the application on this node, RAM usage dropped to 25%; it is currently at 45%.

Can you please check and advise?

Thanks,
Praveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite node is down due to full RAM usage

2019-03-26 Thread praveeng
Hi,

As we can't upgrade Java to 1.8, we can't use the latest Ignite version.
If this were a heap memory issue, I would have got an OOM error in the logs, and a heap dump might have been generated automatically. This could be because the data in off-heap is not expiring and RAM is used up completely.
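On the heap-dump point: by default HotSpot does not write a dump on OutOfMemoryError; it only does so when started with the flags below (the dump path is illustrative), so the absence of a dump alone does not rule out a heap problem:

```
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/log/ignite/dumps
```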

Thanks,
Praveen





Inserting data into Ignite thread is hung

2018-10-22 Thread praveeng
Hi ,

Version: 1.8
Java: 1.7.0_45-b18 [we can't upgrade Java to 1.8 due to an internal restriction.]

We are seeing an Ignite thread hang when we try to insert data.
This does not happen very frequently.
We don't see any exceptions in Ignite except the one below, and it appeared on only one server. Can you please check and advise?

*Exception in ignite server:*

[02:15:14,320][WARN
][grid-nio-worker-1-#10%CasinoApacheIgniteServices%][TcpCommunicationSpi]
Failed to process selector key (will close): GridSelectorNioSessionImpl
[selectorIdx=1, queueSize=0, writeBuf=java.nio.DirectByteBuffer[pos=0
lim=32768 cap=32768], readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768
cap=32768], recovery=GridNioRecoveryDescriptor [acked=2060041, resendCnt=0,
rcvCnt=2060037, sentCnt=2060041, reserved=true, lastAck=2060032,
nodeLeft=false, node=TcpDiscoveryNode
[id=62c088cc-f6a8-49b2-8eee-911f147d99b2, addrs=[10.166.161.111, 127.0.0.1],
sockAddrs=[/127.0.0.1:0, gi2p1xrcds001.gi02.bpty/10.166.161.111:0],
discPort=0, order=109, intOrder=61, lastExchangeTime=1539342673260,
loc=false, ver=1.8.0#20161205-sha1:9ca40dbe, isClient=true], connected=true,
connectCnt=0, queueLimit=10, reserveCnt=68], super=GridNioSessionImpl
[locAddr=/10.166.186.158:9090, rmtAddr=/10.166.161.111:45006,
createTime=1539921738152, closeTime=0, bytesSent=6593820,
bytesRcvd=20615839, sndSchedTime=1539929665103, lastSndTime=1539921785427,
lastRcvTime=1539929665103, readsPaused=false,
filterChain=FilterChain[filters=[GridNioCodecFilter
[parser=o.a.i.i.util.nio.GridDirectParser@40e27086, directMode=true],
GridConnectionBytesVerifyFilter, SSL filter], accepted=true]]
[02:15:14,320][WARN
][grid-nio-worker-1-#10%CasinoApacheIgniteServices%][TcpCommunicationSpi]
Closing NIO session because of unhandled exception [cls=class
o.a.i.i.util.nio.GridNioException, msg=Connection timed out]

When the hung thread timed out, we saw the stack trace below in the client service.

02:08:40,697 - ERROR - pool-45-thread-1 - CASINO_DATASYNC_LOGGER Exception
in Syncing the Data for LMTEMPLATECONFIG
javax.cache.CacheException: class
org.apache.ignite.IgniteInterruptedException: null
at
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1440)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.cacheException(IgniteCacheProxy.java:2183)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.putAll(IgniteCacheProxy.java:1430)
at
com.pg.casino.service.datasync.DataSyncManager.syncDataIntoIgnite(DataSyncManager.java:835)
at
com.pg.casino.service.datasync.LMDataSyncManager.syncGenericLobbyTemplateConfig(LMDataSyncManager.java:327)
at
com.pg.casino.service.datasync.DataSyncCallable.call(DataSyncCallable.java:118)
at
com.pg.casino.service.datasync.DataSyncCallable.call(DataSyncCallable.java:22)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: class org.apache.ignite.IgniteInterruptedException: null
at
org.apache.ignite.internal.util.IgniteUtils$3.apply(IgniteUtils.java:766)
at
org.apache.ignite.internal.util.IgniteUtils$3.apply(IgniteUtils.java:764)
... 11 more
Caused by: java.lang.InterruptedException
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:996)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:160)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:118)
at
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.putAll(GridDhtAtomicCache.java:704)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.putAll(IgniteCacheProxy.java:1423)


*cache info in Ignite xml :*

[cache configuration XML was stripped by the mail archive]

*Ignite config:*

[Spring beans XML was stripped by the mail archive; only the spring-beans schema declaration survived]

Re: Inserting data into Ignite thread is hung

2018-10-22 Thread praveeng
Hi ,

I forgot to mention the points below.

1. This issue happens every time at the same place in the code, for the same cache.
2. The cache we are inserting into is not used by any other service. [The cache instance is created in other services, but they never make any calls to it. At present only client1 inserts data into the cache; the other clients just create the cache instance without making any calls.]


Thanks,
Praveen





Re: Inserting data into Ignite thread is hung

2018-10-22 Thread praveeng
Hi,

I don't think there is a network issue, since at that moment other clients are able to read data from the Ignite cluster. But sometimes the inserting thread hangs for 15 minutes and only then proceeds.

If the inserting thread hangs, we cancel it explicitly with the code below; that's why you see IgniteInterruptedException in the logs. But why does the thread hang while inserting the data [max 10 records]?
if (futureReturn.size() > 0) {
    for (Future<Boolean> future : futureReturn) {
        try {
            Boolean result = future.get(20, TimeUnit.MINUTES);
            if (!result) {
                dataSyncSucces = false;
                break;
            }
        } catch (Exception e) {
            future.cancel(true);
            dataSyncSucces = false;
        }
    }
}

I have observed the same issue in local services as well, but in a different piece of code.

Thanks,
Praveen





Re: Inserting data into Ignite thread is hung

2018-10-25 Thread praveeng
Hi,

We are getting this issue continuously in production.
I am attaching the thread dumps of my services (Ignite client: threaddump14168.txt; Ignite servers: 3 files).
Can anyone please analyze them and suggest how to fix this issue?

threaddump14168.txt
CAI.7z

Thanks,
Praveen





The thread which is inserting data into Ignite is hung

2018-04-14 Thread praveeng
Hi,

We observed an issue with Ignite today: a client thread waits forever when it tries to create a cache, insert data, and read the size of the cache [REPLICATED caches; the backups value for these caches is 2147483647]. We hit this issue only a few times, but it occurs once or twice daily in our environment. The logs are below; can anyone please check and advise?

Java version: 1.7.0_45 [we can't upgrade to a newer version.]
Ignite version: 1.8
Ignite server nodes: 3
Unique Ignite client services: 6

cache configuration:

[cache configuration XML was stripped by the mail archive]

DS - Client Service 1:
Thread stack trace from the thread dump when it occurred the first time:
-
When we sync data from the DS service to the Ignite server, the client thread waits forever. The stack trace below is from the thread dump of this client service.

"pool-160-thread-1" prio=10 tid=0x7f39800bc800 nid=0x404e waiting on
condition [0x7f396f9f8000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00078ac134c8> (a
org.apache.ignite.internal.ComputeTaskInternalFuture)
at
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:160)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:118)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.size(GridCacheAdapter.java:3833)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.size(IgniteCacheProxy.java:997)
at
com.pg.casino.service.datasync.DataSyncManager.clearFeedCaches(DataSyncManager.java:446)
at
com.pg.casino.service.datasync.DataSyncCallable.call(DataSyncCallable.java:119)
at
com.pg.casino.service.datasync.DataSyncCallable.call(DataSyncCallable.java:21)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)

Thread stack trace from the thread dump when it occurred the second time:
-
After the first occurrence we restarted this DS client service and it started the same task again, but this time it took 20 minutes to insert 30 rows into Ignite. The DS service logs are below. We spawn 2 threads from this client service and insert the data into different caches in Ignite. No service other than this DS service inserts or modifies data in the Ignite server.

pool-4-thread-2:

09:38:08,632 - INFO  - pool-4-thread-2 - IG_LOGGER DSIgniteCacheManager- got
cache mapping for-bt
09:38:07,283 - INFO  - pool-4-thread-2 - DS_LOGGER DataSyncManager->syncBL()
is started 

thread stack trace in thread dump:
"pool-4-thread-2" prio=10 tid=0x00651800 nid=0x6619 waiting on
condition [0x7fd25e232000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x000790c4df90> (a
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture)
at
java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:994)
at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
at
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:160)
at
org.apache.ignite.internal.util.future.GridFutu

Re: The thread which is inserting data into Ignite is hung

2018-06-03 Thread praveeng
Hi All,

We identified and fixed the following issues; since then we haven't seen the problem.
1. A query cursor was not closed after running a query.
2. backups changed from 2147483646 to 1 or 2 [I am not sure whether this really caused any issue.]
3. TCP port 9090 was open from client to server but not from server to client.

Thanks,
Praveen





High cpu on ignite server nodes

2018-06-16 Thread praveeng
Hi ,

We have recently been observing 100% CPU utilization on the Ignite servers.
Ignite version: 1.8
Server config: RAM 16GB, cores: 8
Please find the attached thread dump and cache XML; can you please check whether there is any issue?

cai003_june14_tdump.zip
ignite-cl-cache-config.xml

Thanks,
Praveen 





Re: High cpu on ignite server nodes

2018-06-16 Thread praveeng
Hi ,

The following threads are consuming the most CPU.

  PID USER  PR  NIVIRTRESSHR S %CPU %MEM TIME+ COMMAND
36708 gmedia20   0 11.031g 6.749g  14720 R 53.5 43.5 267:15.83 java
36709 gmedia20   0 11.031g 6.749g  14720 R 51.5 43.5 266:53.53 java
34433 gmedia20   0 11.031g 6.749g  14720 R 48.8 43.5 268:50.46 java
35687 gmedia20   0 11.031g 6.749g  14720 R 48.8 43.5 270:04.85 java
36706 gmedia20   0 11.031g 6.749g  14720 R 48.8 43.5 266:05.29 java
36712 gmedia20   0 11.031g 6.749g  14720 R 48.5 43.5 268:34.80 java
36713 gmedia20   0 11.031g 6.749g  14720 R 48.5 43.5 270:16.17 java
37366 gmedia20   0 11.031g 6.749g  14720 R 48.5 43.5 269:50.37 java
37367 gmedia20   0 11.031g 6.749g  14720 R 48.5 43.5 267:32.84 java
48957 gmedia20   0 11.031g 6.749g  14720 R 48.2 43.5 266:50.72 java
36707 gmedia20   0 11.031g 6.749g  14720 R 47.8 43.5 268:30.12 java
36714 gmedia20   0 11.031g 6.749g  14720 R 47.5 43.5 266:10.18 java
37811 gmedia20   0 11.031g 6.749g  14720 R 47.5 43.5 267:44.25 java
34438 gmedia20   0 11.031g 6.749g  14720 R 47.2 43.5 269:45.54 java
34439 gmedia20   0 11.031g 6.749g  14720 R 46.8 43.5 268:24.83 java
36710 gmedia20   0 11.031g 6.749g  14720 R 46.5 43.5 269:43.68 java
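To map these `top -H` rows to stack traces, note that the `nid=0x...` field in a JVM thread dump is the hex form of the thread PID column above; a quick sketch of the conversion:

```python
# Convert high-CPU thread PIDs from `top -H` to the hex nid values
# that appear in a JVM thread dump, so each hot thread can be matched
# to its stack trace.
pids = [36708, 36709, 34433, 35687]  # from the top output above

for pid in pids:
    print(f"PID {pid} -> search thread dump for nid=0x{pid:x}")
```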

Thanks,
Praveen





RE: High cpu on ignite server nodes

2018-06-16 Thread praveeng
Hi Stan,

The high CPU usage is on all servers.
One question: what is the default expiry policy for a cache if we don't set one?
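For reference, when no factory is configured the JCache default is EternalExpiryPolicy, i.e. entries never expire on their own. A 30-minute created-entry expiry can be declared in the Spring XML roughly like this (a sketch; the cache name and surrounding CacheConfiguration bean are assumed from your config):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="playerSessionInfoCacheIgnite"/>
    <!-- Entries expire 30 minutes after creation -->
    <property name="expiryPolicyFactory">
        <bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
            <constructor-arg>
                <bean class="javax.cache.expiry.Duration">
                    <constructor-arg value="MINUTES"/>
                    <constructor-arg value="30"/>
                </bean>
            </constructor-arg>
        </bean>
    </property>
</bean>
```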

Following are the stats of one cache collected from ignitevisor.



Cache 'playerSessionInfoCacheIgnite(@c18)':
+--+
| Name(@) | playerSessionInfoCacheIgnite(@c18) |
| Nodes   | 7  |
| Total size Min/Avg/Max  | 0 / 53201.29 / 151537  |
|   Heap size Min/Avg/Max | 0 / 21857.29 / 50001   |
|   Off-heap size Min/Avg/Max | 0 / 31344.00 / 101536  |
+--+

Nodes for: playerSessionInfoCacheIgnite(@c18)
+=+
|Node ID8(@), IP| CPUs | Heap Used | CPU Load |   Up Time   
|   Size   | Hi/Mi/Rd/Wr |
+=+
| 54F7EA58(@n4), ip.ip.ip.ip1   | 4| 43.81 %   | 2.43 %   | 24:20:08:018
| Total: 1000  | Hi: 0   |
|   |  |   |  | 
|   Heap: 1000 | Mi: 0   |
|   |  |   |  | 
|   Off-Heap: 0| Rd: 0   |
|   |  |   |  | 
|   Off-Heap Memory: 0 | Wr: 0   |
+---+--+---+--+--+--+-+
| D3A97470(@n7), ip.ip.ip.ip2   | 8| 41.88 %   | 0.27 %   | 02:26:29:576
| Total: 151536| Hi: 0   |
|   |  |   |  | 
|   Heap: 5| Mi: 0   |
|   |  |   |  | 
|   Off-Heap: 101536   | Rd: 0   |
|   |  |   |  | 
|   Off-Heap Memory: 100mb | Wr: 0   |
+---+--+---+--+--+--+-+
| 6BA0FEA2(@n5), ip.ip.ip.ip3   | 8| 25.74 %   | 0.30 %   | 02:29:02:915
| Total: 151529| Hi: 0   |
|   |  |   |  | 
|   Heap: 5| Mi: 0   |
|   |  |   |  | 
|   Off-Heap: 101529   | Rd: 0   |
|   |  |   |  | 
|   Off-Heap Memory: 100mb | Wr: 0   |
+---+--+---+--+--+--+-+
| E41C47FD(@n6), ip.ip.ip.ip4   | 8| 38.53 %   | 0.30 %   | 02:27:35:184
| Total: 66344 | Hi: 0   |
|   |  |   |  | 
|   Heap: 50001| Mi: 0   |
|   |  |   |  | 
|   Off-Heap: 16343| Rd: 0   |
|   |  |   |  | 
|   Off-Heap Memory: 16mb  | Wr: 0   |
+---+--+---+--+--+--+-+
| D487DD7A(@n3), ip.ip.ip.ip5   | 4| 36.07 %   | 1.90 %   | 24:27:24:711
| Total: 1000  | Hi: 0   |
|   |  |   |  | 
|   Heap: 1000 | Mi: 0   |
|   |  |   |  | 
|   Off-Heap: 0| Rd: 0   |
|   |  |   |  | 
|   Off-Heap Memory: 0 | Wr: 0   |
+---+--+---+--+--+--+-+
| A30CC6D1(@n2), ip.ip.ip.ip6   | 4| 29.72 %   | 0.50 %   | 24:33:45:581
| Total: 0 | Hi: 0   |
|   |  |   |  | 
|   Heap: 0| Mi: 0   |
|   |  |   |  | 
|   Off-Heap: 0| Rd: 0   |
|   |  |   |  | 
|   Off-Heap Memory: 0 | Wr: 0   |
+---+--+---+--+--+--+-+
| 4FA3F3EB(@n1), ip.ip.ip.ip7   | 4| 28.57 %   | 1.77 %   | 24:38:18:856
| Total: 1000  | Hi: 0   |
|   |  |   |  | 
|   Heap: 1000 | Mi: 0   |
|  

RE: High cpu on ignite server nodes

2018-06-22 Thread praveeng
Hi Stan,

Thanks for your analysis.
We increased the on-heap cache size to 50 and added an expiry policy [30 mins]. The expiry policy is expiring entries, and the cache never reaches its max size.

But now we see high heap usage, so GCs are happening frequently. A full GC happened only once in 2 days; since then only minor GCs have been occurring. Heap usage stays above 59% on all nodes and climbs to 94% within 40 to 60 minutes; once a GC runs, it drops back to 60%.


Following are the gc logs.
Desired survivor size 10485760 bytes, new threshold 1 (max 15)
 [PSYoungGen: 2086889K->10213K(2086912K)] 5665084K->3680259K(6281216K),
0.0704050 secs] [Times: user=0.54 sys=0.00, real=0.07 secs]
2018-06-22T09:53:38.873-0400: 374604.772: Total time for which application
threads were stopped: 0.0794010 seconds
2018-06-22T09:55:00.332-0400: 374686.231: Total time for which application
threads were stopped: 0.0084890 seconds
2018-06-22T09:55:00.340-0400: 374686.239: Total time for which application
threads were stopped: 0.0075450 seconds
2018-06-22T09:55:00.348-0400: 374686.247: Total time for which application
threads were stopped: 0.0078560 seconds
2018-06-22T09:55:26.847-0400: 374712.746: Total time for which application
threads were stopped: 0.0090060 seconds
2018-06-22T10:00:26.857-0400: 375012.756: Total time for which application
threads were stopped: 0.0105490 seconds
2018-06-22T10:02:48.740-0400: 375154.639: Total time for which application
threads were stopped: 0.0093160 seconds
2018-06-22T10:02:48.748-0400: 375154.647: Total time for which application
threads were stopped: 0.000 seconds
2018-06-22T10:02:48.757-0400: 375154.656: Total time for which application
threads were stopped: 0.0092110 seconds
2018-06-22T10:05:26.867-0400: 375312.766: Total time for which application
threads were stopped: 0.0098100 seconds
2018-06-22T10:05:52.775-0400: 375338.674: Total time for which application
threads were stopped: 0.0083580 seconds
2018-06-22T10:05:52.783-0400: 375338.682: Total time for which application
threads were stopped: 0.0074860 seconds
2018-06-22T10:05:52.790-0400: 375338.689: Total time for which application
threads were stopped: 0.0073980 seconds
2018-06-22T10:06:48.756-0400: 375394.655: Total time for which application
threads were stopped: 0.0086660 seconds
2018-06-22T10:06:48.764-0400: 375394.662: Total time for which application
threads were stopped: 0.0076080 seconds
2018-06-22T10:06:48.771-0400: 375394.670: Total time for which application
threads were stopped: 0.0076890 seconds
2018-06-22T10:07:05.603-0400: 375411.501: Total time for which application
threads were stopped: 0.0077390 seconds
2018-06-22T10:07:05.610-0400: 375411.509: Total time for which application
threads were stopped: 0.0074570 seconds
2018-06-22T10:07:05.617-0400: 375411.516: Total time for which application
threads were stopped: 0.0073410 seconds
2018-06-22T10:07:05.626-0400: 375411.525: Total time for which application
threads were stopped: 0.0072380 seconds
2018-06-22T10:07:05.633-0400: 375411.532: Total time for which application
threads were stopped: 0.0073070 seconds
2018-06-22T10:10:26.876-0400: 375612.775: Total time for which application
threads were stopped: 0.0091690 seconds
2018-06-22T10:15:26.887-0400: 375912.786: Total time for which application
threads were stopped: 0.0111650 seconds
2018-06-22T10:20:26.897-0400: 376212.796: Total time for which application
threads were stopped: 0.0099680 seconds
2018-06-22T10:22:30.917-0400: 376336.816: Total time for which application
threads were stopped: 0.0085330 seconds
2018-06-22T10:25:26.907-0400: 376512.806: Total time for which application
threads were stopped: 0.0094760 seconds
2018-06-22T10:26:04.247-0400: 376550.145: Total time for which application
threads were stopped: 0.0077120 seconds
2018-06-22T10:26:04.254-0400: 376550.153: Total time for which application
threads were stopped: 0.0075380 seconds
2018-06-22T10:26:04.262-0400: 376550.161: Total time for which application
threads were stopped: 0.0073460 seconds
2018-06-22T10:30:26.918-0400: 376812.817: Total time for which application
threads were stopped: 0.0107140 seconds
2018-06-22T10:35:26.929-0400: 377112.827: Total time for which application
threads were stopped: 0.0102250 seconds
2018-06-22T10:40:26.939-0400: 377412.838: Total time for which application
threads were stopped: 0.0096620 seconds
2018-06-22T10:41:06.178-0400: 377452.077: Total time for which application
threads were stopped: 0.0085630 seconds
2018-06-22T10:41:06.186-0400: 377452.085: Total time for which application
threads were stopped: 0.0079250 seconds
2018-06-22T10:41:06.194-0400: 377452.092: Total time for which application
threads were stopped: 0.0074940 seconds
2018-06-22T10:42:57.088-0400: 377562.987: Total time for which application
threads were stopped: 0.0090560 seconds
2018-06-22T10:42:57.096-0400: 377562.995: Total time for which applic
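From the first young-collection line in the log above, the survivor threshold has dropped to 1, and each collection tenures a sizeable chunk into the old generation; the promoted amount can be read straight off the before/after figures (a quick check using the quoted numbers):

```python
# Figures from the quoted GC line:
#   [PSYoungGen: 2086889K->10213K(2086912K)] 5665084K->3680259K(6281216K)
young_before, young_after = 2086889, 10213   # young gen occupancy, KB
heap_before, heap_after = 5665084, 3680259   # whole-heap occupancy, KB

old_before = heap_before - young_before      # old gen before the collection
old_after = heap_after - young_after         # old gen after the collection
promoted_kb = old_after - old_before         # objects tenured this cycle

print(f"promoted per young GC: {promoted_kb} KB (~{promoted_kb / 1024:.0f} MB)")
```

Roughly 90 MB promoted per young GC would steadily fill the old generation and explain the sawtooth between 60% and 94% heap usage.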