Re: Regarding Connection timed out observed during client startup

2021-01-27 Thread Ilya Kasnacheev
Hello!

I'm actually not sure. You can try running the same query in local mode
(query.setLocal(true)) to check whether there is any difference. However,
you will need to run it against a server node as well: when it is run
against a client, the client becomes the reducer and connects to some
server nodes.

I don't think you should be diagnosing this issue by counting "Established
outgoing communication connection" messages.

Regards,
-- 
Ilya Kasnacheev


Tue, Jan 26, 2021 at 23:38, VeenaMithare:

> Hi Ilya,
>
> >>Generally a reducer node will need to connect to all map nodes while
> doing
> SQL query.
>
> >>You will not see this message if connection was already up.
> Here are examples of 'outgoing communication' messages when the count was
> incorrect:
>
> Example 1:
> 2021-01-15T17:04:35,905 INFO  o.a.i.s.c.t.TcpCommunicationSpi
> [grid-nio-worker-tcp-comm-0-#60%Instance%]: Established outgoing
> communication connection [locAddr=/a.b.c.153:59010,
> rmtAddr=/a.b.c.199:47130]
> 2021-01-15T17:04:51,682 INFO  o.a.i.s.c.t.TcpCommunicationSpi
> [grid-nio-worker-tcp-comm-1-#61%Instance%]: Established outgoing
> communication connection [locAddr=/a.b.c.153:61374,
> rmtAddr=/a.b.c.201:47130]
> 2021-01-15T17:04:51,840 INFO  o.a.i.s.c.t.TcpCommunicationSpi
> [grid-nio-worker-tcp-comm-2-#62%Instance%]: Established outgoing
> communication connection [locAddr=/x.y.z.153:52558,
> rmtAddr=machine003.cmc.local/x.y.z.202:47130]
>
>
> Example 2 :
> 2021-01-15T17:17:25,876 INFO  o.a.i.s.c.t.TcpCommunicationSpi
> [grid-nio-worker-tcp-comm-0-#60%Instance%]: Established outgoing
> communication connection [locAddr=/a.b.c.153:59786,
> rmtAddr=/a.b.c.199:47130]
> 2021-01-15T17:17:41,720 INFO  o.a.i.s.c.t.TcpCommunicationSpi
> [grid-nio-worker-tcp-comm-1-#61%Instance%]: Established outgoing
> communication connection [locAddr=/a.b.c.153:62150,
> rmtAddr=/a.b.c.201:47130]
> 2021-01-15T17:17:41,878 INFO  o.a.i.s.c.t.TcpCommunicationSpi
> [grid-nio-worker-tcp-comm-2-#62%Instance%]: Established outgoing
> communication connection [locAddr=/x.y.z.153:53334,
> rmtAddr=machine003.cmc.local/x.y.z.202:47130]
>
> Here is an example of 'outgoing communication' messages when the count was
> correct:
>
> Example 3:
> 2021-01-15T17:30:12,638 INFO  o.a.i.s.c.t.TcpCommunicationSpi
> [grid-nio-worker-tcp-comm-0-#60%Instance%]: Established outgoing
> communication connection [locAddr=/a.b.c.153:60534,
> rmtAddr=/a.b.c.199:47130]
> 2021-01-15T17:30:28,403 INFO  o.a.i.s.c.t.TcpCommunicationSpi
> [grid-nio-worker-tcp-comm-1-#61%Instance%]: Established outgoing
> communication connection [locAddr=/a.b.c.153:62898,
> rmtAddr=/a.b.c.201:47130]
>
> 1. We are on a 3-node cluster (a.b.c.199, a.b.c.201, x.y.z.202). When the
> count is right, it doesn't even try to make a connection to x.y.z.202; it
> is happy with the connections made to a.b.c.199 and a.b.c.201.
>
> As mentioned, in the cases where the count is incorrect, the connection to
> x.y.z.202 is made between cache.query() and cursor.getAll().
>
>
> 2. The cache used in the query here is a REPLICATED cache. As per the
> documentation here, a query on a REPLICATED cache would not be run against
> multiple server nodes:
> https://apacheignite-sql.readme.io/docs/select
> Would the reducer still be used?
>
> regards,
> Veena.
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: 2.8.1 : Continuous Query Initial query not returning any result sometimes

2021-01-27 Thread VeenaMithare
Hi Andrei,

Did you get a chance to look at my comments?

Regarding doing the continuous query as per this example:
https://github.com/gridgain/gridgain/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheContinuousQueryExample.java

Our code is pretty much as shown in the example. I cannot use cache.query()
in a try-with-resources block, since I want the continuous query to live
for the lifetime of the client; hence the cursor is closed only when the
client is stopped.

The changed code is in InitialQueryProject-1.zip

regards,
Veena.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: [2.9.1]in SYS.METRICS View some data is always 0

2021-01-27 Thread Ilya Kasnacheev
Hello!

These values are per-node, as far as I know. Is it possible that you have
connected to a node which does not handle any queries (as a reducer, anyway)?

Regards,
-- 
Ilya Kasnacheev


Tue, Jan 26, 2021 at 13:48, 38797715 <38797...@qq.com>:

> Hi,
>
> We found that in the SYS.METRICS view some data is always 0, such as
> QueryCompleted, QueryExecuted, QuerySumTime and QueryMaximumTime. Is this
> a bug? Or is some configuration needed? Or have the related functions not
> been implemented yet?
>
>


Performance of Ignite as key-value datastore. C++ Thin Client.

2021-01-27 Thread jjimeno
Hi everyone,

For our project, we have the following requirements:

- One single cache 
- Ability to lock a list of cache entries.
- Large transactions. A typical one is to commit 1.2 million keys (a single
PutAll call) with a total size of around 600MB.
- Persistence

In our proof of concept, we've got everything implemented and running:

- One server node 2.9.1. Native persistence is enabled for default data
region.
- One client application using one Ignite C++ Thin Client to connect to the
server node.
- Both server and client are on the same machine for now.

With this scenario, we're currently evaluating Ignite vs RocksDB. We would
really like to choose Ignite because of its scalability, but we are facing
a problem related to its performance:

In Ignite, a single transaction commit of 1.2 million keys and 600 MB takes
around 70 seconds to complete, while RocksDB takes no more than 12 seconds.
Moreover, if a second local node is added to the cluster, the application
is not even able to complete the transaction (it was stopped after 10
minutes).

The default data region's page size has been increased to 16 KB, and
persistence has been enabled.
The cache is PARTITIONED with TRANSACTIONAL atomicity mode.
Because of the requirement to lock keys, the transaction is performed as
PESSIMISTIC + READ_COMMITTED.

The rest of the configuration values are the default ones (No backup,
PRIMARY_SYNC, no OnHeapCache, etc)

So, my questions are:

- Taking the requirements into account, is Ignite a good option?
- Are these times what one might expect?
- If not, any advice on improving them?
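One generic mitigation often discussed for huge transactions like the one
above is to split a single giant PutAll into several smaller, still-sorted
batches. This is not from the original mail and it does change the
transactional semantics (each batch commits separately); the `split` helper
and the batch size are purely illustrative, sketched with plain Java
collections only:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

public class BatchSplit {
    // Split a sorted map into batches of at most batchSize entries,
    // preserving key order within and across batches.
    static <K, V> List<SortedMap<K, V>> split(SortedMap<K, V> src, int batchSize) {
        List<SortedMap<K, V>> out = new ArrayList<>();
        TreeMap<K, V> cur = new TreeMap<>(src.comparator());
        for (Map.Entry<K, V> e : src.entrySet()) {
            cur.put(e.getKey(), e.getValue());
            if (cur.size() == batchSize) {
                out.add(cur);
                cur = new TreeMap<>(src.comparator());
            }
        }
        if (!cur.isEmpty()) out.add(cur);
        return out;
    }

    public static void main(String[] args) {
        TreeMap<Integer, String> big = new TreeMap<>();
        for (int i = 0; i < 10; i++) big.put(i, "v" + i);
        // 10 entries with batch size 4 -> batches of 4, 4 and 2.
        System.out.println(split(big, 4).size()); // prints 3
    }
}
```

Each resulting batch would then be written in its own (smaller) transaction.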

Configuration files for both server nodes have been attached.  Thanks
everyone in advance for your help and time,

first-node.xml
second-node.xml

Josemari



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Performance of Ignite as key-value datastore. C++ Thin Client.

2021-01-27 Thread Zhenya Stanilovsky



Hi jjimeno,

* I doubt the correctness of «messageQueueLimit»; please remove it and try
  once more.
* 16 KB per page is questionable; I suggest trying the default.
* Why is there no progress with 2 nodes? Can you attach logs from both
  nodes showing the transaction degradation?
* Take [1] into account.
* You may be colliding with page replacement; is 4 GB enough for all your
  data?
* Have you read the performance tricks? [2]

[1] https://issues.apache.org/jira/browse/IGNITE-13997
[2] https://www.gridgain.com/docs/latest/perf-troubleshooting-guide/general-perf-tips

Re: [2.9.1]in SYS.METRICS View some data is always 0

2021-01-27 Thread 38797715

Hello Ilya,

The test method is as follows:

1. Start 2 nodes on localhost.
2. Use the CREATE TABLE statement to create a table.
3. Use the COPY command to load some data.
4. Access the cluster through sqlline.
5. Execute: select count(*) from T;
6. Execute: select * from sys.metrics WHERE name LIKE '%cache.T%';

At this point you will find that the relevant data are all 0, but the
value of OffHeapEntriesCount is still correct.


If you use sqlline to access another node, the result is the same.

The configuration file used to start the cluster followed here, but the XML
content was stripped by the list archive.
On 2021/1/27 6:24 PM, Ilya Kasnacheev wrote:

Hello!

These values are per-node, as far as I know. Is it possible that you 
have connected to a node which does not handle any queries (as 
reducer, anyway)?


Regards,
--
Ilya Kasnacheev




Re: Performance of Ignite as key-value datastore. C++ Thin Client.

2021-01-27 Thread jjimeno
Hi Zhenya,

Thanks for your quick response.

1. If it is not set, the following message appears on node startup:
[13:22:33] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
2. No improvement.
3. I've got the same question :) Here are the cluster logs until it
crashes: ignite-9f92ab96.log
4. Yes, I'm aware of that, since I reported it... but there is only one
transaction in the test.
5. Yes, 4 GB is large enough. There is only one single transaction of 600 MB.
6. Yes, in fact that's why I modified the page size.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re[2]: Performance of Ignite as key-value datastore. C++ Thin Client.

2021-01-27 Thread Zhenya Stanilovsky

1. Yes, I know about this message; don't pay attention to it (just remove
the limit). Ilya, is this param safe to use?
2. Please rerun your nodes with the -DIGNITE_QUIET=false JVM param (the
logs will be more informative).
3. You have a long-running tx in your logs (IGNITE_QUIET will help detect
why it hangs); you can configure a default timeout [1].
4. If the tx hangs once more, please attach new logs and bump this thread.

[1]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/TransactionConfiguration.html#setDefaultTxTimeout-long-
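For reference, setting the default timeout from [1] in the Spring XML
configuration might look roughly like this (a sketch: the placement inside
your existing IgniteConfiguration bean and the 60-second value are
assumptions):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="transactionConfiguration">
        <bean class="org.apache.ignite.configuration.TransactionConfiguration">
            <!-- Abort transactions that run longer than 60 s (value in ms). -->
            <property name="defaultTxTimeout" value="60000"/>
        </bean>
    </property>
</bean>
```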

Re: Re[2]: Performance of Ignite as key-value datastore. C++ Thin Client.

2021-01-27 Thread Ilya Kasnacheev
Hello!

It seems to me that you should never use messageQueueLimit: if you ever
saturate your network, it will break down your cluster.

Regards,
-- 
Ilya Kasnacheev


Wed, Jan 27, 2021 at 15:55, Zhenya Stanilovsky wrote the message quoted
above.


Re: Re[2]: Performance of Ignite as key-value datastore. C++ Thin Client.

2021-01-27 Thread jjimeno
2. I've found this:

[16:02:23,836][WARNING][client-connector-#139][GridDhtColocatedCache]
Unordered map java.util.LinkedHashMap is used for putAll operation on cache
vds. This can lead to a distributed deadlock. Switch to a sorted map like
TreeMap instead.

... but I'm using a C++ std::map. It's hard to believe it is being mapped
to an unordered map in Java, and how would a deadlock be possible if keys
are not repeated? I've modified my code to use Put instead of PutAll, and
the transaction finishes... in 312 sec.

Is there a way of fixing this?

3. I think the fix to the problem is not increasing a timeout but reducing
the commit time. More than 300 sec for a transaction of 600 MB is not what
I would expect from Ignite.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Re[2]: Performance of Ignite as key-value datastore. C++ Thin Client.

2021-01-27 Thread jjimeno
Hello! Ok... thanks Ilya!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


[2.7.6] Unsupported protocol version: -76 with thin client.

2021-01-27 Thread maxi628
Hello everyone.

I'm having an issue that appears randomly in my internal product testing.
It looks like it's related to
https://issues.apache.org/jira/browse/IGNITE-13401
but the stack trace looks different.
Basically, I'm building a BinaryObject which ends up fetching
BinaryObjectMetadata from the server, and it fails with that exception.

Upgrading to Ignite 2.9.1, where it's patched, isn't an option for me.
The BinaryObjectMetadata is user-configured via some interfaces.
Are there any workarounds for this?
If we had a way to calculate the size of the metadata before sending it, we
could add a dummy field in those cases where it ends up matching
GridBinaryMarshaller.OBJ on the second byte of the stream.

Here's the datagram in hex (not preserved in the archive).

And a photo from the debugger showing the problematic value:
Screen_Shot_2021-01-27_at_10.png

Thanks everyone.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Re[2]: Performance of Ignite as key-value datastore. C++ Thin Client.

2021-01-27 Thread Pavel Tupitsyn
> how a deadlock is possible if keys are no repeated
Deadlock is possible when two putAll operations are executed
at the same time on the same keys, but key order is different in the
provided map.

> LinkedHashMap is used for putAll operation
> but I'm using a c++ std::map
Unfortunately, this warning is misleading when a non-Java client is used.
Make sure that the order of entries is always the same when doing a putAll
operation.
If your current test does not perform multiple operations in parallel, this
is not an issue.

On Wed, Jan 27, 2021 at 6:40 PM jjimeno  wrote:

> Hello! Ok... thanks Ilya!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: [2.7.6] Unsupported protocol version: -76 with thin client.

2021-01-27 Thread Alex Plehanov
Hello,

Can you please share your stack trace? In 2.7.6, I think this bug affects
only cache configuration retrieval and binary metadata retrieval. To avoid
problems with binary metadata retrieval, you should not use types whose
typeId (IgniteClient.binary().typeId()) starts with byte 103 as cache keys
or values.

If you can't upgrade the Ignite server to 2.9.1, perhaps you can upgrade
only the client? This would also solve the problem.
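Alex's "starts with byte 103" condition could be checked up front. A
plain-Java sketch, assuming the relevant byte is the low-order byte of the
int typeId (the exact byte position is an assumption worth verifying
against IGNITE-13401; the helper is hypothetical):

```java
public class TypeIdCheck {
    // GridBinaryMarshaller.OBJ == 103: a typeId whose first serialized
    // byte equals this marker can trip the 2.7.6 protocol bug.
    static final int OBJ_MARKER = 103;

    // Assumes the typeId int is written little-endian, so the first
    // byte on the wire is the low-order byte of the int.
    static boolean isRisky(int typeId) {
        return (typeId & 0xFF) == OBJ_MARKER;
    }

    public static void main(String[] args) {
        System.out.println(isRisky(0x1267)); // low byte 0x67 == 103 -> prints true
        System.out.println(isRisky(0x1268)); // prints false
    }
}
```

Such a check could be run over the type names in use to find which cache
key/value classes need renaming or a workaround.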

Wed, Jan 27, 2021 at 19:46, maxi628 wrote the message quoted above.


Re: Re[2]: Performance of Ignite as key-value datastore. C++ Thin Client.

2021-01-27 Thread jjimeno
Hello,

I understand these deadlocks when there is more than one PutAll at the same
time, but in this case there is only one, and it's always sorted.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: [2.7.6] Unsupported protocol version: -76 with thin client.

2021-01-27 Thread maxi628
Here's the stacktrace:

org.apache.ignite.binary.BinaryObjectException: Unsupported protocol version: 43
    at org.apache.ignite.internal.binary.BinaryUtils.checkProtocolVersion(BinaryUtils.java:798)
    at org.apache.ignite.internal.binary.BinaryReaderExImpl.<init>(BinaryReaderExImpl.java:221)
    at org.apache.ignite.internal.binary.BinaryReaderExImpl.<init>(BinaryReaderExImpl.java:186)
    at org.apache.ignite.internal.binary.BinaryReaderExImpl.<init>(BinaryReaderExImpl.java:165)
    at org.apache.ignite.internal.client.thin.ClientUtils.binaryMetadata(ClientUtils.java:156)
    at org.apache.ignite.internal.client.thin.TcpIgniteClient$ClientBinaryMetadataHandler.lambda$metadata0$2(TcpIgniteClient.java:293)
    at org.apache.ignite.internal.client.thin.TcpClientChannel.receive(TcpClientChannel.java:189)
    at org.apache.ignite.internal.client.thin.ReliableChannel.service(ReliableChannel.java:126)
    at org.apache.ignite.internal.client.thin.TcpIgniteClient$ClientBinaryMetadataHandler.metadata0(TcpIgniteClient.java:288)
    at org.apache.ignite.internal.client.thin.TcpIgniteClient$ClientBinaryMetadataHandler.metadata(TcpIgniteClient.java:270)
    at org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1266)
    at org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.serializeTo(BinaryObjectBuilderImpl.java:209)
    at org.apache.ignite.internal.binary.builder.BinaryObjectBuilderImpl.build(BinaryObjectBuilderImpl.java:190)

So basically it fails while calling binaryObjectBuilder.build().



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: [2.7.6] Unsupported protocol version: -76 with thin client.

2021-01-27 Thread jjimeno
Hello,

Thanks for your answer.
The server is already on version 2.9.1, and the C++ thin client is built
from the master branch to get transaction support.

José 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: [2.9.1]in SYS.METRICS View some data is always 0

2021-01-27 Thread akorensh
Hi, there is no view called METRICS, only NODE_METRICS; see below.

Here are the docs for those views:
https://ignite.apache.org/docs/latest/monitoring-metrics/system-views

It looks like you might be referring to an SQL view, in which case there is
a realtime one, which only shows values while a query is executing, and a
historical one, which shows data for queries that have already run:

https://ignite.apache.org/docs/latest/monitoring-metrics/system-views#sql_queries
https://ignite.apache.org/docs/latest/monitoring-metrics/system-views#sql_queries_history



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: [2.9.1]in SYS.METRICS View some data is always 0

2021-01-27 Thread 38797715

Hello,

I know about SQL_QUERIES and SQL_QUERY_HISTORY.
I enabled org.apache.ignite.spi.metric.sql.SqlViewMetricExporterSpi in the
configuration file, which adds a new METRICS view.
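For readers following along, enabling that exporter in the Spring XML
configuration might look roughly like this (a sketch: the metricExporterSpi
property name and the surrounding bean layout are assumptions based on
IgniteConfiguration):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="metricExporterSpi">
        <list>
            <!-- Exposes metric registries through the SYS.METRICS SQL view. -->
            <bean class="org.apache.ignite.spi.metric.sql.SqlViewMetricExporterSpi"/>
        </list>
    </property>
</bean>
```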

On 2021/1/28 7:02 AM, akorensh wrote the message quoted above.

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Performance of Ignite as key-value datastore. C++ Thin Client.

2021-01-27 Thread jjimeno
Hi again,

As a test, I just disabled persistence on both nodes. The already mentioned
transaction of 1.2 million keys and 600 MB in size takes 298 sec.

Remember that with one single node and persistence enabled it takes 70 sec,
so just adding a second node makes the test more than 4 times slower.

Is this really the performance that Ignite can offer? Please don't take me
wrong, I'm just asking, not criticizing. I want to be sure I'm not doing
anything wrong and that the timings I get are the expected ones...

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/