Re: Index not getting applied

2019-03-06 Thread ashishb888
You can make good use of Index Hints.

e.g. SELECT * FROM Person USE INDEX(index_age) WHERE salary > 15 AND age
< 35;
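To check whether a hint is actually picked up, you can run EXPLAIN on the hinted query (a sketch reusing the table and index names from the example above); if the hint works, the plan should reference the index instead of a full-scan comment like /* PUBLIC.PERSON.__SCAN_ */:

```sql
EXPLAIN SELECT * FROM Person USE INDEX(index_age)
WHERE salary > 15 AND age < 35;
```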



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Index not getting applied

2019-03-06 Thread mausam
Hi,

I am using Ignite Cache with Persistence. While executing queries over large
tables, I noticed the below message repeatedly getting logged.

[WARNING][query-#100%IGNITE%][IgniteH2Indexing] Query execution is too long
[time=4863 ms, sql='SELECT
PUBLIC.__Z0._KEY __C0_0,
PUBLIC.__Z0._VAL __C0_1
FROM PUBLIC.TABLE_A __Z0
WHERE (__Z0.COL_A IN('AB', 'CD')) AND ((__Z0.COL_B IN('123', '1234')) AND
((__Z0.COL_C IN('XY', 'XYZ')) AND ((__Z0.COL_D = ?3) AND ((__Z0.COL_E = ?1)
AND (__Z0.COL_F = ?2)', plan=
SELECT
PUBLIC.__Z0._KEY AS __C0_0,
PUBLIC.__Z0._VAL AS __C0_1
FROM PUBLIC.TABLE_A __Z0
/* PUBLIC.TABLE_A.__SCAN_ */
WHERE (__Z0.COL_A IN('AB', 'CD'))
AND ((__Z0.COL_B IN('123', '1234'))
AND ((__Z0.COL_C IN('XY', 'XYZ'))
AND ((__Z0.COL_D = ?3)
AND ((__Z0.COL_E = ?1)
AND (__Z0.COL_F = ?2)
, parameters=[A, 1, 2]]

To resolve this, I created the below Index
CREATE INDEX IF NOT EXISTS TABLE_A_index1
 ON TABLE_A
 (COL_A ASC, COL_B ASC, COL_C ASC, COL_D ASC, COL_E ASC, COL_F ASC)
 PARALLEL 8;

Even after this, the slow queries kept coming.

On further analysis using EXPLAIN, I identified that the index is applied
properly when the below query is fired.

SELECT
  COL_A
FROM PUBLIC.TABLE_A __Z0
WHERE (__Z0.COL_A IN('AB', 'CD'))
AND ((__Z0.COL_B IN('123', '1234'))
AND ((__Z0.COL_C IN('XY', 'XYZ'))
AND ((__Z0.COL_D = ?3)
AND ((__Z0.COL_E = ?1)
AND (__Z0.COL_F = ?2)

It doesn't get applied when I fire the below query:

SELECT
  COL_Z
FROM PUBLIC.TABLE_A __Z0
WHERE (__Z0.COL_A IN('AB', 'CD'))
AND ((__Z0.COL_B IN('123', '1234'))
AND ((__Z0.COL_C IN('XY', 'XYZ'))
AND ((__Z0.COL_D = ?3)
AND ((__Z0.COL_E = ?1)
AND (__Z0.COL_F = ?2)

It means *the index is getting applied only if I select a column which is
present in the index; if I try to select any other column, which is not in
the index, the index doesn't get applied.*
Are any of you aware of this kind of behaviour? Is this normal with Ignite?
Do we have a workaround to make the index work?
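One possible workaround, in the spirit of the index-hint suggestion at the top of this digest, is to force the index explicitly. This is only a sketch reusing the index name from the CREATE INDEX statement above; whether the hint actually helps here should be verified with EXPLAIN:

```sql
SELECT COL_Z
FROM PUBLIC.TABLE_A USE INDEX(TABLE_A_INDEX1)
WHERE COL_A IN ('AB', 'CD')
  AND COL_B IN ('123', '1234')
  AND COL_C IN ('XY', 'XYZ')
  AND COL_D = ? AND COL_E = ? AND COL_F = ?;
```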






Ignite client connection issue "Cache doesn't exists ..." even though cache servers and caches are up and running.

2019-03-06 Thread Hemasundara Rao
Hi,
 We have a Micro Service client to an Ignite cluster throwing the exception
"Cache doesn't exists ..." even though the cache servers and caches are up and
running.
 Could you please let us know how to resolve this issue?

Thanks and Regards,
Hemasundara Rao Pottangi  | Senior Project Leader

HotelHub LLP
Phone: +91 80 6741 8700
Cell: +91 99 4807 7054
Email: hemasundara@hotelhub.com
Website: www.hotelhub.com 
--

HotelHub LLP is a service provider working on behalf of Travel Centric
Technology Ltd, a company registered in the United Kingdom.
DISCLAIMER: This email message and all attachments are confidential and may
contain information that is Privileged, Confidential or exempt from
disclosure under applicable law. If you are not the intended recipient, you
are notified that any dissemination, distribution or copying of this email
is strictly prohibited. If you have received this email in error, please
notify us immediately by return email to
noti...@travelcentrictechnology.com and
destroy the original message. Opinions, conclusions and other information
in this message that do not relate to the official business of Travel
Centric Technology Ltd or HotelHub LLP, shall be understood to be neither
given nor endorsed by either company.


Re: SQL XML configuration for timestamp problematic when using sybase as third party persistence

2019-03-06 Thread Chandni
To add: the Java type configuration I am currently using is
java.sql.Types.TIMESTAMP
and java.sql.Timestamp.





SQL XML configuration for timestamp problematic when using sybase as third party persistence

2019-03-06 Thread Chandni
Hi

I am using Sybase as third party persistence and using XML to configure
cache store factory.

Sybase datetime displays dates in the default format:
/mm/dd HH:mm:ss AM/PM

whereas Ignite expects a different format with nanoseconds.

This causes BinaryMarshaller to fail in deserialising objects for
tables that have a datetime field.

Can someone please suggest how we can use a custom date format with third-
party persistence?
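For context, the timestamp field mapping normally lives in the cache store's JdbcType configuration; a minimal sketch is below. The schema, table, column, class, and field names here are invented for illustration, and it assumes the Spring util namespace is declared in the XML:

```xml
<bean class="org.apache.ignite.cache.store.jdbc.JdbcType">
  <property name="databaseSchema" value="dbo"/>
  <property name="databaseTable" value="MY_TABLE"/>
  <property name="valueType" value="com.example.MyValue"/>
  <property name="valueFields">
    <list>
      <!-- JdbcTypeField(sqlType, dbColumnName, javaType, javaFieldName) -->
      <bean class="org.apache.ignite.cache.store.jdbc.JdbcTypeField">
        <constructor-arg><util:constant static-field="java.sql.Types.TIMESTAMP"/></constructor-arg>
        <constructor-arg value="CREATED_AT"/>
        <constructor-arg value="java.sql.Timestamp"/>
        <constructor-arg value="createdAt"/>
      </bean>
    </list>
  </property>
</bean>
```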





Re: Cannot exchange messages between two nodes.

2019-03-06 Thread Ropugg
Thanks all.
I resolved it: the second node didn't call remoteListen because of an inner logic issue.






Re: queries, we are evaluating to use Apache Ignite as caching layer on top of underlying Cassandra database.

2019-03-06 Thread Prasad Bhalerao
Just out of curiosity, how are you planning to load 1 TB of data into the
cache: using a datastreamer or a cachestore?
What's the expected time to load the cache?
Since you are not keeping backups, how are you going to handle the
situation when one of the nodes crashes? This can happen in a prod env, so
what is the expected downtime for you?

 To load 15-20 GB of data (from Oracle tables) into different caches, which
with backup count 1 is around 40 GB, my application takes 28-30 minutes.


Please share your results if possible.

Thanks,
Prasad

On Wed 6 Mar, 2019, 2:58 PM Navneet Kumar wrote:

> Ilya,
> Thanks for your quick response. I have gone through the capacity planning
> link shared by you.
> 1,000,000,000 Total objects(Records)
> 1,024 bytes per object (1 KB)
> 0 backup
> 4 nodes
>
> Total number of objects X object size (only primary copy since back up is
> set 0. Better remove the back up property from XML):
> 1,000,000,000 x 1,024 = 1,024,000,000,000 bytes (1,024,000 MB)
>
> No Indexes used. I know if it is used it will take 30% of overhead more.
>
> Approximate additional memory required by the platform:
> 300 MB x 1(No of node in the cluster) = 300 MB
>
> Total size:
> 1024000 + 300 = 1024300 MB
>
> Hence the anticipated total memory consumption would be just over ~ 1024.3
> GB
>
>
> So what I want to know that my use case is I want to load full 1 billion
> subscriber records on datagrid(Apache Ignite) and will read from there. No
> disk swapping once my data is loaded in memory.
> Let me know my calculation is correct or do I need to add some more memory.
> I have a single node cluster as of now and I am not using any index and no
> back up.
>
> Regards
> Navneet
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Exception on node startup: Attempted to release write lock while not holding it

2019-03-06 Thread Dmitry Lazurkin
Thank you for the reply, Ilya.

> Have you tried to start this on Nightly Build? Can you try that?

No, this was on 2.7.0#20181130-sha1:256ae401. I will try the Nightly Build.

> If it still would not work, can you share your DB+wal files?
I think yes, but it's 15 GB.

On 3/6/19 6:59 PM, Ilya Kasnacheev wrote:
> Hello!
>
> Have you tried to start this on Nightly Build? Can you try that?
>
> If it still would not work, can you share your DB+wal files?
>
> Regards,
> -- 
> Ilya Kasnacheev
>
>
Tue, Mar 5, 2019 at 20:37, Dmitry Lazurkin:
>
> Ignite version: 2.7.0#20181130-sha1:256ae401
>
>



Re: Exception on node startup: Attempted to release write lock while not holding it

2019-03-06 Thread Ilya Kasnacheev
Hello!

Have you tried to start this on Nightly Build? Can you try that?

If it still would not work, can you share your DB+wal files?

Regards,
-- 
Ilya Kasnacheev


Tue, Mar 5, 2019 at 20:37, Dmitry Lazurkin:

> Ignite version: 2.7.0#20181130-sha1:256ae401
>
>
>


Re: BLOB with JDBC Driver

2019-03-06 Thread Taras Ledkov

Hi,

However, the SQL type BLOB isn't supported by Ignite.
Please use the BINARY SQL type.
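A minimal sketch of that suggestion (the table and column names here are invented); on the Java side the value would then be bound with PreparedStatement.setBytes:

```sql
-- BLOB is not a supported Ignite SQL type; BINARY maps to byte[]
CREATE TABLE IF NOT EXISTS file_store (
  id INT PRIMARY KEY,
  payload BINARY
);

INSERT INTO file_store (id, payload) VALUES (?, ?);
```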

On 06.03.2019 18:40, Taras Ledkov wrote:

Hi,

JDBC Blob is supported by the JDBC v2 driver (the thick driver based on the
Ignite client node).

The thin JDBC driver doesn't support Blob yet.

On 06.03.2019 13:50, KR Kumar wrote:
Hi - I am trying out the JDBC driver with Ignite SQL tables. How do I insert
a blob into my cache table through JDBC?

Thanx and Regards,
KR Kumar






--
Taras Ledkov
Mail-To: tled...@gridgain.com



Re: Cannot exchange messages between two nodes.

2019-03-06 Thread Ilya Kasnacheev
Hello!

Can you make a small reproducer project, share it with us?

Regards,
-- 
Ilya Kasnacheev


Wed, Mar 6, 2019 at 16:08, Ropugg:

> The two nodes have the same subscriber.
> The only difference is the order of starting the nodes.
> We call the first started node Node1; the second started node is
> Node2.
> When Node1 sends a message, Node2 handles it as expected in
> remoteListen(@Nullable Object topic, IgniteBiPredicate p).
> But when Node2 sends a message, Node1 can receive it but never executes
> the IgniteBiPredicate of remoteListen.
> You can check the screenshots; they prove that Node1 receives the message
> but doesn't handle it.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: queries, we are evaluating to use Apache Ignite as caching layer on top of underlying Cassandra database.

2019-03-06 Thread Ilya Kasnacheev
Hello!

You can try to see how many entries will fit in your single node (enable
page eviction, see how much you can get before eviction is started), then
scale it up after adding some buffer space.

Regards,
-- 
Ilya Kasnacheev


Wed, Mar 6, 2019 at 12:28, Navneet Kumar:

> Ilya,
> Thanks for your quick response. I have gone through the capacity planning
> link shared by you.
> 1,000,000,000 Total objects(Records)
> 1,024 bytes per object (1 KB)
> 0 backup
> 4 nodes
>
> Total number of objects X object size (only primary copy since back up is
> set 0. Better remove the back up property from XML):
> 1,000,000,000 x 1,024 = 1,024,000,000,000 bytes (1,024,000 MB)
>
> No Indexes used. I know if it is used it will take 30% of overhead more.
>
> Approximate additional memory required by the platform:
> 300 MB x 1(No of node in the cluster) = 300 MB
>
> Total size:
> 1024000 + 300 = 1024300 MB
>
> Hence the anticipated total memory consumption would be just over ~ 1024.3
> GB
>
>
> So what I want to know that my use case is I want to load full 1 billion
> subscriber records on datagrid(Apache Ignite) and will read from there. No
> disk swapping once my data is loaded in memory.
> Let me know my calculation is correct or do I need to add some more memory.
> I have a single node cluster as of now and I am not using any index and no
> back up.
>
> Regards
> Navneet
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Connection to cluster failed. Error: Latest topology update failed.

2019-03-06 Thread Ilya Kasnacheev
Hello!

It might take some time to rewind the WAL. Consider starting it with -v (or
-DIGNITE_QUIET=false).

Regards,
-- 
Ilya Kasnacheev


Wed, Mar 6, 2019 at 16:26, James Wang 王升平 (edvance CN)
<james.w...@edvancesecurity.com>:

> Hi Support,
>
>
>
> My lab VMs were shut down by a power outage. After I restarted them, I found
> the console hanging at
>
>
>
> [21:26:43] Topology snapshot [ver=35, locNode=2931204b, servers=3,
> clients=0, state=ACTIVE, CPUs=12, offheap=12.0GB, heap=3.0GB]
>
> [21:26:43]   ^-- Baseline [id=3, size=3, online=3, offline=0]
>
>
>
> If I use control.sh --baseline, the command hangs for a long time and prompts:
>
> Connection to cluster failed.
>
> Error: Latest topology update failed.
>
>
>
> Please advise whether it indeed needs a lot of time to start the cluster.
>
>
>
> Thank you.
>
>
>
> Best Regards,
>
> James Wang
>
> M/WeChat: +86 135 1215 1134
>
>
>
>
>
> This message contains information that is deemed confidential and
> privileged. Unless you are the addressee (or authorized to receive for the
> addressee), you may not use, copy or disclose to anyone the message or any
> information contained in the message. If you have received the message in
> error, please advise the sender by reply e-mail and delete the message.
>


Re: Performance degradation in case of high volumes

2019-03-06 Thread Ilya Kasnacheev
Hello!

I'm afraid that at this point even the indexes do not fit in RAM, so every
insert needs to fetch index pages from disk, change them, and put them back
to make room for newer ones. Please consider having larger DataRegions.
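A sketch of how a larger default data region could be configured (the 64 GB figure is an arbitrary example, not a recommendation; size it so that at least the index pages stay resident):

```xml
<property name="dataStorageConfiguration">
  <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <property name="defaultDataRegionConfiguration">
      <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
        <property name="persistenceEnabled" value="true"/>
        <!-- example figure only: 64 GB off-heap for the default region -->
        <property name="maxSize" value="#{64L * 1024 * 1024 * 1024}"/>
      </bean>
    </property>
  </bean>
</property>
```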

Regards,
-- 
Ilya Kasnacheev


Wed, Mar 6, 2019 at 16:26, Antonio Conforti:

> Hello Ilya,
>
> Thanks for the reply.
>
> To answer your questions, today I re-ran a test that I stopped at
> about 22 million entries:
>
> a) the approximate hardware and data region config used in the test:
>
> 2 hosts each with:
>
> Processor total  : 2
>
> Processor: 0
> Name : Intel Xeon
> Speed: 3300 MHz
> Bus  : 100 MHz
> Core : 8
> Thread   : 16
> Socket   : 1
> Level2 Cache : 2048 KBytes
> Level3 Cache : 25600 KBytes
>
> Processor: 1
> Name : Intel Xeon
> Stepping : 4
> Speed: 3300 MHz
> Bus  : 100 MHz
> Core : 8
> Thread   : 16
> Socket   : 2
> Level2 Cache : 2048 KBytes
> Level3 Cache : 25600 KBytes
>
> MEMORY:
> Memory installed : 196608 MBytes
>
> # getconf PAGESIZE: 4096
>
> vm.swappiness = 5
>
>
>
>
>
> Disk:
>
> Cache Board Present: True
> Cache Status: OK
> Cache Ratio: 25% Read / 75% Write
> Drive Write Cache: Disabled
> Total Cache Size: 512 MB
> Total Cache Memory Available: 304 MB
>
> Logical Drive: 1
> Size: 279.4 GB
> Fault Tolerance: RAID 1
> Heads: 255
> Sectors Per Track: 32
> Cylinders: 65535
> Strip Size: 256 KB
> Full Stripe Size: 256 KB
> Status: OK
> Caching:  Enabled
> Rotational Speed: 15000
> PHY Transfer Rate: 6.0Gbps
> PHY Transfer Rate: 6.0Gbps
> type ext4
> Block size:  4096
>
>
> From the attached xml config the default data storage configuration is:
>
>
> datafeed-persistent-store.xml
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2315/datafeed-persistent-store.xml>
>
> 
>
> 
>
>
>  About the memory below the top command for server and client nodes before
> start test:
>
>
>
>
> HOST_1
>
>
>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2315/HOST1_top_preTest.png>
>
>
> HOST_2
>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2315/HOST2_top_preTest.png>
>
>
> and below the top command for server and client nodes when test is stopped:
>
> HOST_1
>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2315/HOST1_top_postTest.png>
>
>
> HOST_2
>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2315/HOST2_top_postTest.png>
>
>
>
> b) the approximate size of each data entry is 656 bytes
> c) the size of the db/ directory at about 22 million entries is:
>
> For HOST_1:
> 6.7G  Work_1/
> 5.3G  Work_3/
> 6.3G  Work_5/
> 7.2G  Work_7/
>
>
>
> For HOST_2:
> 6.5G  Work_2
> 6.5G  Work_4
> 6.6G  Work_6
> 5.9G  Work_8
>
> Thanks,
>
> Antonio
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: BLOB with JDBC Driver

2019-03-06 Thread Ilya Kasnacheev
Hello!

I don't think we support blobs at the moment. Use setBytes to store data as
byte[].

Regards,
-- 
Ilya Kasnacheev


Wed, Mar 6, 2019 at 13:58, KR Kumar:

> Hi - I am trying out the JDBC driver with Ignite SQL tables. How do I insert
> a blob into my cache table through JDBC?
>
> Thanx and Regards,
> KR Kumar
>
>
>
>


Connection to cluster failed. Error: Latest topology update failed.

2019-03-06 Thread edvance CN
Hi Support,

My lab VMs were shut down by a power outage. After I restarted them, I found
the console hanging at

[21:26:43] Topology snapshot [ver=35, locNode=2931204b, servers=3, clients=0, 
state=ACTIVE, CPUs=12, offheap=12.0GB, heap=3.0GB]
[21:26:43]   ^-- Baseline [id=3, size=3, online=3, offline=0]

If I use control.sh --baseline, the command hangs for a long time and prompts:
Connection to cluster failed.
Error: Latest topology update failed.

Please advise whether it indeed needs a lot of time to start the cluster.

Thank you.

Best Regards,
James Wang
M/WeChat: +86 135 1215 1134


This message contains information that is deemed confidential and privileged. 
Unless you are the addressee (or authorized to receive for the addressee), you 
may not use, copy or disclose to anyone the message or any information 
contained in the message. If you have received the message in error, please 
advise the sender by reply e-mail and delete the message.


Re: Performance degradation in case of high volumes

2019-03-06 Thread Antonio Conforti
Hello Ilya,

Thanks for the reply.

To answer your questions, today I re-ran a test that I stopped at
about 22 million entries:

a) the approximate hardware and data region config used in the test:

2 hosts each with:

Processor total  : 2

Processor: 0
Name : Intel Xeon
Speed: 3300 MHz
Bus  : 100 MHz
Core : 8
Thread   : 16
Socket   : 1
Level2 Cache : 2048 KBytes
Level3 Cache : 25600 KBytes

Processor: 1
Name : Intel Xeon
Stepping : 4
Speed: 3300 MHz
Bus  : 100 MHz
Core : 8
Thread   : 16
Socket   : 2
Level2 Cache : 2048 KBytes
Level3 Cache : 25600 KBytes

MEMORY:
Memory installed : 196608 MBytes

# getconf PAGESIZE: 4096

vm.swappiness = 5





Disk:

Cache Board Present: True
Cache Status: OK
Cache Ratio: 25% Read / 75% Write
Drive Write Cache: Disabled
Total Cache Size: 512 MB
Total Cache Memory Available: 304 MB

Logical Drive: 1
Size: 279.4 GB
Fault Tolerance: RAID 1
Heads: 255
Sectors Per Track: 32
Cylinders: 65535
Strip Size: 256 KB
Full Stripe Size: 256 KB
Status: OK
Caching:  Enabled
Rotational Speed: 15000
PHY Transfer Rate: 6.0Gbps
PHY Transfer Rate: 6.0Gbps
type ext4
Block size:  4096


From the attached XML config, the default data storage configuration is:


datafeed-persistent-store.xml
<http://apache-ignite-users.70518.x6.nabble.com/file/t2315/datafeed-persistent-store.xml>
Regarding memory, below is the top command output for the server and client
nodes before the test starts:




HOST_1
<http://apache-ignite-users.70518.x6.nabble.com/file/t2315/HOST1_top_preTest.png>

HOST_2
<http://apache-ignite-users.70518.x6.nabble.com/file/t2315/HOST2_top_preTest.png>

and below is the top command output for the server and client nodes when the
test was stopped:

HOST_1
<http://apache-ignite-users.70518.x6.nabble.com/file/t2315/HOST1_top_postTest.png>

HOST_2
<http://apache-ignite-users.70518.x6.nabble.com/file/t2315/HOST2_top_postTest.png>


b) the approximate size of each data entry is 656 bytes
c) the size of the db/ directory at about 22 million entries is:

For HOST_1:
6.7G  Work_1/
5.3G  Work_3/
6.3G  Work_5/
7.2G  Work_7/



For HOST_2:
6.5G  Work_2
6.5G  Work_4
6.6G  Work_6
5.9G  Work_8

Thanks,

Antonio







Re: Cannot exchange messages between two nodes.

2019-03-06 Thread Ropugg
The two nodes have the same subscriber.
The only difference is the order of starting the nodes.
We call the first started node Node1; the second started node is Node2.
When Node1 sends a message, Node2 handles it as expected in
remoteListen(@Nullable Object topic, IgniteBiPredicate p).
But when Node2 sends a message, Node1 can receive it but never executes
the IgniteBiPredicate of remoteListen.
You can check the screenshots; they prove that Node1 receives the message
but doesn't handle it.





BLOB with JDBC Driver

2019-03-06 Thread KR Kumar
Hi - I am trying out the JDBC driver with Ignite SQL tables. How do I insert
a blob into my cache table through JDBC?

Thanx and Regards,
KR Kumar





Re: Backup make DataStreamer performance decreased a lot.

2019-03-06 Thread Ilya Kasnacheev
Hello!

So I have re-run it with backups.

2 nodes with 1 backup will load 15M entries slightly faster than 1 node
which loads 30M entries without backups.

So I would say that having backups is actually slightly faster.

However, storing 15M entries without backups on 1 node is still 4-5x faster.

Regards,
-- 
Ilya Kasnacheev


Mon, Mar 4, 2019 at 14:45, ilya.kasnacheev:

> Hello!
>
> Actually, now I understand that I only had one node, so backups were not
> applicable.
>
> I will re-run it, but as you can see, since adding more data slows things
> down superlinearly, you can expect that adding backups also decreases
> performance superlinearly.
>
> Regards,
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: queries, we are evaluating to use Apache Ignite as caching layer on top of underlying Cassandra database.

2019-03-06 Thread Navneet Kumar
Ilya,
Thanks for your quick response. I have gone through the capacity planning
link you shared.
1,000,000,000 Total objects(Records)
1,024 bytes per object (1 KB)
0 backup
4 nodes

Total number of objects X object size (only the primary copy, since backup
is set to 0; better to remove the backup property from the XML):
1,000,000,000 x 1,024 = 1,024,000,000,000 bytes (1,024,000 MB)

No indexes used. I know that using them would add about 30% more overhead.

Approximate additional memory required by the platform:
300 MB x 1 (number of nodes in the cluster) = 300 MB

Total size:
1024000 + 300 = 1024300 MB

Hence the anticipated total memory consumption would be just over ~ 1024.3
GB


What I want to know: my use case is to load the full 1 billion subscriber
records into the data grid (Apache Ignite) and read from there, with no
disk swapping once my data is loaded in memory.
Let me know if my calculation is correct or whether I need to add some more
memory. I have a single-node cluster as of now, and I am not using any
indexes or backups.

Regards
Navneet





Re: Re: TPS does not increase even though new server nodes added

2019-03-06 Thread c c
Hi,
We have provided thread dumps from all clients and servers. Would you mind
taking a look at them?
Thanks very much.

yu...@toonyoo.net wrote on Wed, Mar 6, 2019 at 9:29 AM:

> You can see the full thread dump in the attachment; the filename is
> dump.zip.
>
> --
> yu...@toonyoo.net
>
>
> *From:* Ilya Kasnacheev 
> *Date:* 2019-03-04 16:21
> *To:* user 
> *Subject:* Re: Re: TPS does not increase even though new server nodes
> added
> Hello!
>
> Is it a single node dump? Looks like it is idle.
>
> Please provide dumps from all clients and servers during data load.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, Mar 4, 2019 at 10:53, yu...@toonyoo.net:
>
>> Here are the full thread dumps; the attachment file name is 1.txt.
>>
>>
>> --
>> yu...@toonyoo.net
>>
>>
>> *From:* Ilya Kasnacheev 
>> *Date:* 2019-03-04 15:32
>> *To:* user 
>> *Subject:* Re: TPS does not increase even though new server nodes added
>> Hello!
>>
>> Can you provide full thread dumps from all nodes during max load?
>> --
>> Ilya Kasnacheev
>>
>>
>> Mon, Mar 4, 2019 at 10:01, c c:
>>
>>> More information: the cache is partitioned and full-sync.
>>>
>>> c c wrote on Mon, Mar 4, 2019 at 2:54 PM:
>>>
 Hi,
 We are working on Ignite 2.7.0. We just put entities into the Ignite
 cache with transactions and backups (2) enabled. We can get 6000 TPS with 3
 server nodes. Then we tested on 5 server nodes, but TPS does not increase.
 We operate Ignite via a client node embedded in the application. Except for
 "publicThreadPoolSize" and "systemThreadPoolSize", other configurations
 were left unchanged. What problem could it be?
thanks

>>>