Hello!
I think you need a core pool of server nodes, a larger pool of client nodes
with the client connector enabled to handle thin client connections, and
then supply all of these clients' addresses to the thin clients.
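A minimal Java sketch of the last step, assuming the Java thin client; the host names are placeholders for whichever nodes run a client connector:

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientAddresses {
    public static void main(String[] args) {
        // Hypothetical host names: list the client-connector address of
        // every node that should accept thin client connections.
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("client-node-1:10800", "client-node-2:10800");

        try (IgniteClient client = Ignition.startClient(cfg)) {
            // The thin client can fail over between the supplied addresses.
            System.out.println(client.cacheNames());
        }
    }
}
```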
Regards,
--
Ilya Kasnacheev
Thu, 3 Dec 2020 at 23:29, Wolfgang Meyerle <
wolfgang.meye...@googlemail.c
Hi,
use:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/IgniteConfiguration.html#setConsistentId-java.io.Serializable-
to set the consistent id.
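For illustration, a minimal sketch of calling that setter; the id value "node-1" is an arbitrary example:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ConsistentIdExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Any Serializable value works; a stable, human-readable
        // string per node is a common choice.
        cfg.setConsistentId("node-1");

        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println(ignite.cluster().localNode().consistentId());
        }
    }
}
```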
Thanks, Alex
--
Finally found the issue.
In the thin client configuration I had provided only one port to connect
to the server nodes. As soon as I supplied the default port range
(10800-10900), the connections behaved better and the issue was resolved.
However I tried it out at the cluster on the c
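A sketch of what the fixed client configuration might look like in the Java thin client, assuming a placeholder host name; listing several candidate ports covers the case where the server's connector did not bind to 10800 itself but to another port in the default range:

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class PortRangeAddresses {
    public static void main(String[] args) {
        // If 10800 is busy on the server, the client connector falls back
        // to the next free port in its range, so the client should list
        // the candidate ports, not just the first one.
        ClientConfiguration cfg = new ClientConfiguration().setAddresses(
            "server-host:10800",
            "server-host:10801",
            "server-host:10802");

        try (IgniteClient client = Ignition.startClient(cfg)) {
            System.out.println("Connected: " + client.cacheNames());
        }
    }
}
```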
Hi,
The connection parameters are set by the thin client connector; see:
https://ignite.apache.org/docs/latest/thin-clients/getting-started-with-thin-clients#configuring-thin-client-connector
All attributes are listed here:
https://ignite.apache.org/releases/2.9.0/javadoc/org/apache/ignite/co
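On the server side, the two attributes relevant to the port-range fix above can be set like this (a minimal sketch using `ClientConnectorConfiguration`):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.ClientConnectorConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ConnectorConfigExample {
    public static void main(String[] args) {
        ClientConnectorConfiguration connCfg = new ClientConnectorConfiguration()
            .setPort(10800)      // first port the connector tries to bind
            .setPortRange(100);  // fall back to 10801..10900 if 10800 is busy

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setClientConnectorConfiguration(connCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("Server node started with thin client connector.");
        }
    }
}
```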
Hi,
today I tried to use the thin client to connect to two server nodes and
test the performance.
Basically I tried to push random data from nodes where the Ignite
database is not running.
I used the put_all method of the C++ thin client API to do that with
concurrent threads.
However af
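For reference, a Java-thin-client equivalent of that load pattern (the server addresses, cache name, batch sizes, and thread count are all placeholder assumptions; `IgniteClient` is documented as thread-safe, so one client instance is shared across threads):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ConcurrentPutAll {
    public static void main(String[] args) throws Exception {
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("server-1:10800", "server-2:10800");

        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<Long, byte[]> cache = client.getOrCreateCache("load-test");

            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (int t = 0; t < 4; t++) {
                final long base = t * 10_000L; // disjoint key range per thread

                pool.submit(() -> {
                    Map<Long, byte[]> batch = new HashMap<>();
                    for (long k = base; k < base + 1_000; k++) {
                        byte[] payload = new byte[128];
                        ThreadLocalRandom.current().nextBytes(payload);
                        batch.put(k, payload);
                    }
                    cache.putAll(batch); // one bulk network operation per batch
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);
        }
    }
}
```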