Does localEntries(CachePeekMode.PRIMARY) guarantee that keys will not overlap
for a partitioned cache?
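For intuition, here is a minimal sketch in plain Java (no Ignite dependency; the hash-based partition assignment is my simplification) of why iterating only primary entries on each node should cover every key exactly once, with no overlap:

```java
import java.util.*;

public class PrimaryPartitionSketch {
    public static void main(String[] args) {
        int nodes = 3;
        List<String> keys = Arrays.asList("a", "b", "c", "d", "e", "f", "g");

        // Each key has exactly one primary node, derived here from its hash.
        Map<Integer, Set<String>> primaryByNode = new HashMap<>();
        for (String key : keys) {
            int primary = Math.floorMod(key.hashCode(), nodes);
            primaryByNode.computeIfAbsent(primary, n -> new HashSet<>()).add(key);
        }

        // The union of per-node primary sets covers all keys, with no overlaps:
        // union size equals total entry count equals the number of keys.
        Set<String> union = new HashSet<>();
        int total = 0;
        for (Set<String> local : primaryByNode.values()) {
            union.addAll(local);
            total += local.size();
        }
        System.out.println(union.size() == keys.size() && total == keys.size());
    }
}
```

The analogy holds only for a stable topology; during rebalancing a key's primary assignment can move between nodes.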
Thanks.
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Backup count = 1 and WAL mode = background.
Hi,
Is it safe to update local node entries in parallel with a data streamer? Here
is the benchmark code; it seems that addData() is much faster than individual
put().
val res = Node.ignite.compute(project.indexNodes).broadcast {
    var n = 0L
    Node.ignite.dataStreamer(project.index.name).use { ds ->
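On the thread-safety part of the question, a rough plain-Java illustration (all names are mine; an AtomicLong stands in for a streamer-like sink being fed by several producer threads) that concurrent producers into a single thread-safe sink lose no updates:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class ParallelSinkSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for a streamer: a thread-safe sink counting accepted entries.
        AtomicLong accepted = new AtomicLong();
        int threads = 4, perThread = 10_000;

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < perThread; i++) {
                    accepted.incrementAndGet(); // analogous to ds.addData(k, v)
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        // All 4 * 10_000 updates arrive despite parallel producers.
        System.out.println(accepted.get());
    }
}
```

Whether Ignite's data streamer gives the same guarantee for a specific overwrite mode is exactly what the question asks and is not settled by this sketch.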
You can also look at this topic, probably related to yours, with a code sample:
http://apache-ignite-users.70518.x6.nabble.com/Embedded-ignite-and-baseline-upgrade-questions-td30822.html
Thanks, it's definitely clear now that rebalancing should be triggered from
code if a node removal is detected. Assuming that the number of backups is > 0
and only one node is removed, it looks like a safe case. But what if the backup
count = 0 (a bad idea, but the risk may be acceptable in some cases) and we need to
In Kotlin, != uses equals() under the hood, so it works here as expected.
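For comparison, the Java analogue (Kotlin's structural != compiles down to a negated equals() call, while Java's != compares references):

```java
public class EqualsSketch {
    public static void main(String[] args) {
        String a = new String("node-1");
        String b = new String("node-1");

        // Reference comparison: two distinct objects, so != is true in Java.
        System.out.println(a != b);
        // Structural comparison: what Kotlin's != delegates to (negated here).
        System.out.println(!a.equals(b));
    }
}
```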
I can't use control.sh to manage the topology (because Ignite is embedded) and
am trying to implement it in code. So the actual initialization sequence is:
1) Start all fresh nodes, wait for all specified persistence nodes to be
Also, when starting the second node I am getting this:
2019/12/27 23:39:36.665 [disco-pool-#55] WARN BaselineTopology of joining
node (dev-1) is not compatible with BaselineTopology in the cluster.
Branching history of cluster BlT ([95475834]) doesn't contain branching
point hash of joining node BlT
Actually, you don't even need these two lines or the jul-to-slf4j library.
Ignite will work fine with the rest.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Logging-Using-Logback-and-setting-logging-level-tp5652p5674.html
Sent from the Apache Ignite Users mailing list
I am using Logback with Ignite.
You need to add the following to pom.xml:
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.1.7</version>
</dependency>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-slf4j</artifactId>
    <version>1.6.0</version>
</dependency>
org.slf4j
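For completeness, a minimal logback.xml that these dependencies would pick up from the classpath (this file is my assumption of a typical setup, not from the original post):

```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```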
I've tried the code from
http://smartkey.co.uk/development/securing-an-apache-ignite-cluster/ too.
The second (blocked) node just throws an exception, as in my example with
DiscoverySpiNodeAuthenticator:
2016/05/23 17:28:08.062 [main] ERROR Failed to start manager:
GridManagerAdapter [enabled=true,
Thanks, now it waits for task completion. I've also implemented a shutdown
check during the tasks to allow correct termination.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Graceful-shutdown-tp4911p5088.html
Sent from the Apache Ignite Users mailing list
Hi,
I need to implement a token-based security authenticator. The best place I've
found so far is to extend the discovery SPI:
TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
…
discoSpi.setAuthenticator(new TokenAuthenticator());
igniteConfig.setDiscoverySpi(discoSpi);
Then I've implemented a simple
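The core token check inside such an authenticator might look like the following sketch (plain Java; the Ignite DiscoverySpiNodeAuthenticator interface is elided, and the expectedToken value and constant-time comparison are my assumptions, not from the original post):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class TokenCheckSketch {
    // Hypothetical shared secret that every legitimate node must present.
    private static final byte[] EXPECTED =
        "s3cret-cluster-token".getBytes(StandardCharsets.UTF_8);

    // Stand-in for the authenticator's node check: accept a joining node
    // only if its presented credential matches the expected token.
    static boolean authenticate(String presentedToken) {
        byte[] presented = presentedToken.getBytes(StandardCharsets.UTF_8);
        // Constant-time comparison avoids leaking token contents via timing.
        return MessageDigest.isEqual(EXPECTED, presented);
    }

    public static void main(String[] args) {
        System.out.println(authenticate("s3cret-cluster-token"));
        System.out.println(authenticate("wrong-token"));
    }
}
```

In the real authenticator this check would run inside authenticateNode(), returning a security context on success and rejecting the node otherwise.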
Thanks for the clarification.
So if the originator dies, all started tasks die too, right?
In this case I see a solution like this:
1) Create the task on the initiator node.
2) Put the node id + task data into an Ignite cache.
3) Create a service which returns task liveness/status/result for the taskId
originated
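A rough in-memory sketch of steps 2 and 3 above (plain Java, with a ConcurrentHashMap standing in for the Ignite cache and a plain method standing in for the service; all names here are hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TaskRegistrySketch {
    enum Status { RUNNING, DONE, FAILED }

    // Step 2: taskId -> (originating node id, status), kept in a shared map
    // that plays the role of the Ignite cache.
    static final Map<String, String[]> TASKS = new ConcurrentHashMap<>();

    static void register(String taskId, String nodeId) {
        TASKS.put(taskId, new String[] { nodeId, Status.RUNNING.name() });
    }

    // Step 3: the "service" call reporting liveness/status for a taskId.
    static String status(String taskId) {
        String[] entry = TASKS.get(taskId);
        return entry == null ? "UNKNOWN" : entry[1];
    }

    public static void main(String[] args) {
        register("task-42", "node-A");
        System.out.println(status("task-42"));
        TASKS.get("task-42")[1] = Status.DONE.name();
        System.out.println(status("task-42"));
        System.out.println(status("task-99"));
    }
}
```

With a real Ignite cache the entries would survive the originator's death, which is the point of the scheme: any node can answer the status query.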
Hi,
I have some confusion about implementing async task execution on the cluster.
There is our own API server embedded along with the Ignite core in one application.
API users make HTTP calls to some host as an entry point and can be switched
to another one on failover.
1) The REST API server accepts