Hi,
First off, thanks for your help.
In the test, I'm using a single-server-node cluster with the official 2.9.1
version. The client is the C++ Thin Client with transaction support (commit
685c1b70ca from the master branch).
The test is very simple:
#include <cstdint>
#include <map>
#include <utility>

#include <ignite/thin/ignite_client.h>

struct Blob
{
    int8_t m_blob[512]; // 512-byte value payload
};

IgniteClientConfiguration cfg;
cfg.SetEndPoints("127.0.0.1:10800"); // endpoint assumed: single local node

IgniteClient client = IgniteClient::Start(cfg);
CacheClient<int32_t, Blob> cache = client.GetOrCreateCache<int32_t, Blob>("vds");
cache.Clear();

// 2,000,000 entries of 512 B each: the values alone total ~1GB
std::map<int32_t, Blob> map;
for (int32_t i = 0; i < 2000000; ++i)
    map.insert(std::make_pair(i, Blob()));

ClientTransactions transactions = client.ClientTransactions();
ClientTransaction tx = transactions.TxStart(PESSIMISTIC, READ_COMMITTED);
cache.PutAll(map);
tx.Commit();
As you can see, the total size of the transaction (not counting keys) is
2M * 512B = ~1GB. If we limit the loop to 1.9M entries, it works... and I've
found where the problem is:
<http://apache-ignite-users.70518.x6.nabble.com/file/t3059/bug.png>
As you can see, "doubleCap" is an int. Once "cap" grows past INT_MAX / 2
(roughly 1GB), doubling it overflows to a negative value, so the capacity is
never actually doubled; the buffer instead grows by the minimum amount
needed, which means a ~1GB reallocation (and copy) every time a new
key-value entry is added to the TCP message.
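
To make the failure mode concrete, here is a minimal sketch of that
"double, or take the required size" growth policy; GrowCapacity and every
name except doubleCap are mine, not Ignite's:

#include <climits>
#include <cstdio>

// Hypothetical stand-in for the growth logic in the screenshot:
// double the capacity, or use the required size if that is larger.
static int GrowCapacity(int cap, int required)
{
    // Signed overflow is technically undefined behavior; in practice it
    // wraps to a negative value once cap exceeds INT_MAX / 2.
    int doubleCap = cap * 2;
    return doubleCap > required ? doubleCap : required;
}

int main()
{
    int cap = 1 << 30;        // 1 GiB: already past INT_MAX / 2
    int required = cap + 516; // one more 512-byte entry plus overhead
    printf("new capacity: %d\n", GrowCapacity(cap, required));
    // doubleCap wrapped negative, so the result is just 'required':
    // from here on the buffer grows one entry at a time, reallocating
    // and copying ~1GB on every insertion.
    return 0;
}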
Using a signed int to store the capacity in your C++ Thin Client implicitly
limits the maximum transaction size to about 1GB. Maybe you should consider
using uint64_t instead...
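
For illustration, the same growth policy with a 64-bit capacity (again just
a sketch, not a patch against the actual code):

#include <cstdint>

// Same hypothetical policy with a 64-bit capacity: doubling cannot
// wrap for any realistic buffer size (safe up to ~4.6 exabytes).
static int64_t GrowCapacity64(int64_t cap, int64_t required)
{
    int64_t doubleCap = cap * 2;
    return doubleCap > required ? doubleCap : required;
}

In the meantime, splitting the single PutAll into several smaller batches
inside the transaction, so that each request message stays well below 1GB,
should work around the limit.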