[akka-user] Event sourcing user database with akka persistence and sharding
Hi folks, I am just starting with event sourcing and still struggle to get domain problems solved. Maybe a user database is not a good fit for ES, dunno :) What do I want to achieve:

* Users can be created (having an immutable UUID and an email address, which is used as the username)
* The email can be changed
* The email should be unique (since it is used as the login username)
* Users need to be looked up by email (for login) and by UUID (for everything else)

*Lookup by email and change email*

Since the user database is potentially unbounded, I wanted to use sharding over the UUID to be able to split the state held in memory across nodes. So I would end up with multiple ES event streams, one for each user UUID. So getting the current user state by ID is done. But how do I look up a user by email? Would it be a common solution to have a secondary event stream, one for each email, and have that be updated whenever a new user is created or a user changes their email address?

*Uniqueness of email*

As I have read, uniqueness is expensive to guarantee. I thought about not creating the user object directly, but instead, on registration, sending an email to the given address that contains a verification link. This link does not just contain a nonce, but an encrypted and signed JWT. Only when the user clicks that link is the user account actually created. This does not totally solve the uniqueness constraint, but it at least ensures that no two different persons can create an account with the same email address.

*Choosing persistenceId (and maybe also shardingId/entityId)*

Here is some code I have so far, but it is not totally working (for example, the akka.persistence.Update message is not properly propagated through the sharding). Also, when using sharding this fails, because multiple instances of UserProcessor share the same `persistenceId`.
But I did not know what to do, since when I include the user UUID in the `persistenceId` (which would make sense), the `EmailView` does not get all events to build up the email lookup dictionary. TL;DR: this code is probably total nonsense.

// UserProcessor.scala
package io.airfocus.user

import akka.actor.{Actor, ActorLogging, ActorRef}
import akka.persistence.PersistentActor
import de.choffmeister.auth.common.{PBKDF2, PasswordHasher, Plain}
import io.airfocus.common.{RandomString, WithPassivation}
import io.airfocus.model.User
import io.airfocus.model.UserProcessor._

import scala.concurrent.duration._

class UserProcessor(userView: ActorRef, emailView: ActorRef) extends Actor with ActorLogging with PersistentActor with WithPassivation {
  private val passwordHasher = new PasswordHasher("pbkdf2", "hmac-sha1" :: "1" :: "128" :: Nil, PBKDF2 :: Plain :: Nil)

  override def passivationTimeout: FiniteDuration = 5.seconds
  override def persistenceId: String = "users"

  override def receiveRecover: Receive = {
    // recovery replays persisted events, so match on UserCreated, not the command
    case ev: UserCreated =>
  }

  override def receiveCommand: Receive = withPassivation({
    case cmd @ CreateUser(userId, email, password) =>
      val user = User(
        id = userId,
        emails = Map(email -> Some(RandomString.base32(32))),
        passwordHash = passwordHasher.hash(password))
      persistAsync(UserCreated(user)) { ev =>
        userView ! io.airfocus.model.UserView.Update(userId)
        emailView ! io.airfocus.model.EmailView.Update(email)
        sender() ! ev.user
      }
  })

  override def preStart() = log.info("Starting {}", self.path)
  override def postStop() = log.info("Stopped {}", self.path)
}

// UserView.scala
package io.airfocus.user

import java.util.UUID

import akka.actor.{Actor, ActorLogging}
import akka.persistence.PersistentView
import io.airfocus.common.WithPassivation
import io.airfocus.model.User
import io.airfocus.model.UserProcessor._
import io.airfocus.model.UserView._

import scala.concurrent.duration._

class UserView extends Actor with ActorLogging with PersistentView with WithPassivation {
  var users = Map.empty[UUID, User]

  override def passivationTimeout: FiniteDuration = 5.seconds
  override def persistenceId: String = "users"
  override def viewId: String = s"users-user-${self.path.name}"
  override def autoUpdate: Boolean = true
  override def autoUpdateInterval: FiniteDuration = 1.second

  override def receive: Receive = withPassivation({
    case ev @ UserCreated(u) =>
      println(ev)
      users = users + (u.id -> u)
    case GetUser(userId) =>
      sender() ! users.get(userId)
    case Update(_) =>
      self ! akka.persistence.Update(await = true)
  })
}

// EmailView.scala
package io.airfocus.user

import java.util.UUID

import akka.actor.{Actor, ActorLogging}
import akka.persistence.PersistentView
import io.airfocus.common.WithPassivation
import io.airfocus.model.EmailView.{LookupEmail, Update}
import io.airfocus.model.UserProcessor._

import scala.concurrent.duration._

class EmailView extends Actor with ActorLogging with PersistentView with WithPassivation {
  var users = Map.empty[String, UUID]

  override def pass
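For what it's worth, the lookup and uniqueness semantics the `EmailView` is meant to provide can be prototyped independently of Akka. The following is only a sketch under my own assumptions (the name `EmailIndex` and its methods are made up, not part of any library): a map from email to user UUID, folded from creation and email-change events, that rejects a second registration of an already-taken address.

```scala
import java.util.UUID

// Hypothetical sketch: the email -> UUID index the EmailView would build from
// UserCreated / email-change events, reduced to plain Scala so the semantics
// are visible without any Akka machinery.
final case class EmailIndex(entries: Map[String, UUID] = Map.empty) {

  // Reserve an address for a newly created user; fails if it is already taken.
  def register(email: String, userId: UUID): Either[String, EmailIndex] =
    entries.get(email) match {
      case Some(_) => Left(s"email already taken: $email")
      case None    => Right(copy(entries = entries + (email -> userId)))
    }

  // Move a user's login email; the old address becomes available again.
  def changeEmail(oldEmail: String, newEmail: String, userId: UUID): Either[String, EmailIndex] =
    if (!entries.get(oldEmail).contains(userId)) Left(s"$oldEmail does not belong to $userId")
    else if (entries.contains(newEmail)) Left(s"email already taken: $newEmail")
    else Right(copy(entries = entries - oldEmail + (newEmail -> userId)))

  // Login-time lookup: email -> UUID.
  def lookup(email: String): Option[UUID] = entries.get(email)
}
```

Whether this index lives in one actor that sees every user event (simple, but a single bottleneck and a single event stream) or is itself sharded by email is exactly the trade-off the question above is circling around.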
[akka-user] Re: DistributedPubSub keeps sending to exited nodes
I have chatted with @ktoso about this and it seems my thoughts are not totally wrong. I will create a ticket on GitHub for this.

On Thursday, June 23, 2016, at 22:27:45 UTC+2, Christian Hoffmeister wrote:
>
> Hi,
>
> I have a sample application where two nodes join an Akka cluster. On both nodes there is an actor running, registered to the DistributedPubSubMediator, and both are sending a Ping once per second (that hits either itself or the other node).
>
> Then I gracefully shut down Node B (leave the cluster, wait for the MemberRemoved event, wait some seconds, terminate the actor system, wait for termination, exit the JVM).
>
> But even though both nodes see Node B leaving through gossip convergence at 22:16:58, Node A keeps the left Node B in the PubSub (you see this with the RECEIVE Pong(nr, ping sender, pong responder)).
>
> Why does PubSub not remove actors from exited nodes, but only actors from removed nodes (the removal of Node B on Node A happens 4 seconds after Node B exits)? In my case I only avoid losing messages because Node B waits another 10 seconds after it sees its own removal (which, again, is 4 seconds earlier than Node A sees Node B removed).
>
> Thanks in advance.
> Christian > > > Output Node A > > [info] Thu Jun 23 22:16:23 CEST 2016 RECEIVE Pong(0,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:23 CEST 2016 CLUSTER MemberUp(Member(address = > akka.tcp://airfocus@127.0.0.1:2551, status = Up)) > [info] Thu Jun 23 22:16:23 CEST 2016 CLUSTER LeaderChanged(Some(akka.tcp:// > airfocus@127.0.0.1:2551)) > [info] Thu Jun 23 22:16:23 CEST 2016 CLUSTER > RoleLeaderChanged(api,Some(akka.tcp://airfocus@127.0.0.1:2551)) > [info] Thu Jun 23 22:16:23 CEST 2016 CLUSTER > SeenChanged(true,Set(akka.tcp://airfocus@127.0.0.1:2551)) > [info] Thu Jun 23 22:16:24 CEST 2016 RECEIVE Pong(1,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:25 CEST 2016 RECEIVE Pong(2,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:26 CEST 2016 RECEIVE Pong(3,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:28 CEST 2016 RECEIVE Pong(4,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:29 CEST 2016 RECEIVE Pong(5,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:30 CEST 2016 RECEIVE Pong(6,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:31 CEST 2016 RECEIVE Pong(7,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:32 CEST 2016 RECEIVE Pong(8,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:33 CEST 2016 RECEIVE Pong(9,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:34 CEST 2016 RECEIVE Pong(10,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:35 CEST 2016 RECEIVE Pong(11,akka.tcp:// > 
airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:36 CEST 2016 RECEIVE Pong(12,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:37 CEST 2016 RECEIVE Pong(13,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:38 CEST 2016 RECEIVE Pong(14,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:39 CEST 2016 RECEIVE Pong(15,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:40 CEST 2016 RECEIVE Pong(16,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:41 CEST 2016 RECEIVE Pong(17,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:42 CEST 2016 RECEIVE Pong(18,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:43 CEST 2016 RECEIVE Pong(19,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:44 CEST 2016 RECEIVE Pong(20,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:45 CEST 2016 RECEIVE Pong(21,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:46 CEST 2016 RECEIVE Pong(22,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:47 CEST 2016 RECEIVE Pong(23,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:48 CEST 2016 RECEIVE Pong(24,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:49 CEST 2016 CLUSTER MemberJoined(Member(add
[akka-user] Re: DistributedPubSub keeps sending to exited nodes
Ok, my assumption is right (see https://github.com/akka/akka/blob/master/akka-cluster-tools/src/main/scala/akka/cluster/pubsub/DistributedPubSubMediator.scala#L669). Is there a way to work around this? I don't want to wait some arbitrary time interval after leaving before actually shutting down. If not just the removal but already the exiting of a node removed it from the PubSub registry, one would have a more robust way (without arbitrary waits) to gracefully leave a cluster/pubsub.

Another question arises here: wouldn't it be good to temporarily remove nodes from PubSub while they are unreachable and re-add them when they become reachable again?

On Thursday, June 23, 2016, at 22:27:45 UTC+2, Christian Hoffmeister wrote:
>
> Hi,
>
> I have a sample application where two nodes join an Akka cluster. On both nodes there is an actor running, registered to the DistributedPubSubMediator, and both are sending a Ping once per second (that hits either itself or the other node).
>
> Then I gracefully shut down Node B (leave the cluster, wait for the MemberRemoved event, wait some seconds, terminate the actor system, wait for termination, exit the JVM).
>
> But even though both nodes see Node B leaving through gossip convergence at 22:16:58, Node A keeps the left Node B in the PubSub (you see this with the RECEIVE Pong(nr, ping sender, pong responder)).
>
> Why does PubSub not remove actors from exited nodes, but only actors from removed nodes (the removal of Node B on Node A happens 4 seconds after Node B exits)? In my case I only avoid losing messages because Node B waits another 10 seconds after it sees its own removal (which, again, is 4 seconds earlier than Node A sees Node B removed).
>
> Thanks in advance.
> Christian > > > Output Node A > > [info] Thu Jun 23 22:16:23 CEST 2016 RECEIVE Pong(0,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:23 CEST 2016 CLUSTER MemberUp(Member(address = > akka.tcp://airfocus@127.0.0.1:2551, status = Up)) > [info] Thu Jun 23 22:16:23 CEST 2016 CLUSTER LeaderChanged(Some(akka.tcp:// > airfocus@127.0.0.1:2551)) > [info] Thu Jun 23 22:16:23 CEST 2016 CLUSTER > RoleLeaderChanged(api,Some(akka.tcp://airfocus@127.0.0.1:2551)) > [info] Thu Jun 23 22:16:23 CEST 2016 CLUSTER > SeenChanged(true,Set(akka.tcp://airfocus@127.0.0.1:2551)) > [info] Thu Jun 23 22:16:24 CEST 2016 RECEIVE Pong(1,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:25 CEST 2016 RECEIVE Pong(2,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:26 CEST 2016 RECEIVE Pong(3,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:28 CEST 2016 RECEIVE Pong(4,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:29 CEST 2016 RECEIVE Pong(5,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:30 CEST 2016 RECEIVE Pong(6,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:31 CEST 2016 RECEIVE Pong(7,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:32 CEST 2016 RECEIVE Pong(8,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:33 CEST 2016 RECEIVE Pong(9,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:34 CEST 2016 RECEIVE Pong(10,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:35 CEST 2016 RECEIVE Pong(11,akka.tcp:// > 
airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:36 CEST 2016 RECEIVE Pong(12,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:37 CEST 2016 RECEIVE Pong(13,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:38 CEST 2016 RECEIVE Pong(14,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:39 CEST 2016 RECEIVE Pong(15,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:40 CEST 2016 RECEIVE Pong(16,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:41 CEST 2016 RECEIVE Pong(17,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:42 CEST 2016 RECEIVE Pong(18,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:43 CEST 2016 RECEIVE Pong(19,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:44 CEST 2016 RECEIVE Pong(20,akka.tcp:// > airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551) > [info] Thu Jun 23 22:16:45 CEST 2016 RECEIVE Pong(21,akka.tcp:// > airfocu
[akka-user] DistributedPubSub keeps sending to exited nodes
Hi,

I have a sample application where two nodes join an Akka cluster. On both nodes there is an actor running, registered to the DistributedPubSubMediator, and both are sending a Ping once per second (that hits either itself or the other node).

Then I gracefully shut down Node B (leave the cluster, wait for the MemberRemoved event, wait some seconds, terminate the actor system, wait for termination, exit the JVM).

But even though both nodes see Node B leaving through gossip convergence at 22:16:58, Node A keeps the left Node B in the PubSub (you see this with the RECEIVE Pong(nr, ping sender, pong responder)).

Why does PubSub not remove actors from exited nodes, but only actors from removed nodes (the removal of Node B on Node A happens 4 seconds after Node B exits)? In my case I only avoid losing messages because Node B waits another 10 seconds after it sees its own removal (which, again, is 4 seconds earlier than Node A sees Node B removed).

Thanks in advance.
Christian

Output Node A

[info] Thu Jun 23 22:16:23 CEST 2016 RECEIVE Pong(0,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:23 CEST 2016 CLUSTER MemberUp(Member(address = akka.tcp://airfocus@127.0.0.1:2551, status = Up))
[info] Thu Jun 23 22:16:23 CEST 2016 CLUSTER LeaderChanged(Some(akka.tcp://airfocus@127.0.0.1:2551))
[info] Thu Jun 23 22:16:23 CEST 2016 CLUSTER RoleLeaderChanged(api,Some(akka.tcp://airfocus@127.0.0.1:2551))
[info] Thu Jun 23 22:16:23 CEST 2016 CLUSTER SeenChanged(true,Set(akka.tcp://airfocus@127.0.0.1:2551))
[info] Thu Jun 23 22:16:24 CEST 2016 RECEIVE Pong(1,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:25 CEST 2016 RECEIVE Pong(2,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:26 CEST 2016 RECEIVE Pong(3,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:28 CEST 2016 RECEIVE Pong(4,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:29 CEST 2016 RECEIVE Pong(5,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:30 CEST 2016 RECEIVE Pong(6,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:31 CEST 2016 RECEIVE Pong(7,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:32 CEST 2016 RECEIVE Pong(8,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:33 CEST 2016 RECEIVE Pong(9,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:34 CEST 2016 RECEIVE Pong(10,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:35 CEST 2016 RECEIVE Pong(11,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:36 CEST 2016 RECEIVE Pong(12,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:37 CEST 2016 RECEIVE Pong(13,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:38 CEST 2016 RECEIVE Pong(14,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:39 CEST 2016 RECEIVE Pong(15,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:40 CEST 2016 RECEIVE Pong(16,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:41 CEST 2016 RECEIVE Pong(17,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:42 CEST 2016 RECEIVE Pong(18,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:43 CEST 2016 RECEIVE Pong(19,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:44 CEST 2016 RECEIVE Pong(20,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:45 CEST 2016 RECEIVE Pong(21,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:46 CEST 2016 RECEIVE Pong(22,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:47 CEST 2016 RECEIVE Pong(23,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:48 CEST 2016 RECEIVE Pong(24,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:49 CEST 2016 CLUSTER MemberJoined(Member(address = akka.tcp://airfocus@127.0.0.1:2552, status = Joining))
[info] Thu Jun 23 22:16:49 CEST 2016 RECEIVE Pong(25,akka.tcp://airfocus@127.0.0.1:2551,akka.tcp://airfocus@127.0.0.1:2551)
[info] Thu Jun 23 22:16:49 CEST 2016 CLUSTER SeenChanged(true,Set(akka.tcp://airfocus@127.0.0.1:2551, akka.tcp://airfocus@127.0.0.1:2552))
[info] Thu Jun 23 22:16:49 CEST 2016 CLUST
[akka-user] Re: Dedicated seed nodes for akka cluster
So basically the known seed nodes are replaced by a known etcd cluster for the information sharing, if I understand this right? Somewhat like this?

* A joins, no one is there, so it puts itself into etcd and starts a single-node cluster
* B joins, finds A in etcd and joins A
* C joins, finds A
* A gets shut down
* After a timeout, B or C is put into etcd as the new seed node
* D joins, finds B or C

In the examples (https://github.com/rkrzewski/akka-cluster-etcd/blob/master/examples/cluster-monitor/src/main/resources/application.conf) I found that auto-downing is used. What happens in case of a temporary network separation? For example:

* A is in etcd as seed node
* A/B (can access each other, but neither etcd nor C/D)
* C/D (can access each other, but not A/B)

What happens now? And what happens when the network is fully up again and A/B/C/D/etcd can communicate again? Do the two islands recombine into a single cluster?

Greetings
/c

On Friday, March 18, 2016, at 00:13:51 UTC+1, Christian Hoffmeister wrote:
>
> Hello,
>
> I am just starting to dive into akka-cluster and have a question regarding the seed nodes:
>
> My test project consists of 4 projects so far:
>
> * PROTOCOL contains the messaging case classes
> * API contains a cluster node that also exposes a REST API (akka-cluster + akka-http)
> * AUTH contains a cluster node for auth stuff (akka-cluster)
> * USERS contains a cluster node to manage user data (akka-cluster)
>
> At the moment I just set one API and one AUTH instance as seed nodes (it could also have been some other nodes). Is it good practice to have another special project (called SEED), that does not do anything in particular, except for joining the cluster, to act as a seed node?
>
> From my first thoughts, this might be a good idea, since these nodes would have to be restarted less often than other nodes (that carry business logic); basically only when updating infrastructure like the host machine or Akka.
>
> Am I getting something wrong here?
>
> Greetings
> Christian

--
>> Read the docs: http://akka.io/docs/
>> Check the FAQ: http://doc.akka.io/docs/akka/current/additional/faq.html
>> Search the archives: https://groups.google.com/group/akka-user
---
You received this message because you are subscribed to the Google Groups "Akka User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at https://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.
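[Editor's note: the join sequence sketched in the reply above boils down to a small decision rule. The sketch below is purely illustrative plain Scala under my own assumptions — there is no etcd client here, and `registeredSeeds` merely stands in for whatever addresses are currently stored under the etcd key.]

```scala
// Hypothetical sketch of the etcd-based bootstrap rule described above.
sealed trait JoinDecision
case object BootstrapNewCluster extends JoinDecision              // first node: register itself, form a single-node cluster
final case class JoinExisting(seed: String) extends JoinDecision  // later nodes: join a registered seed

def decide(self: String, registeredSeeds: List[String]): JoinDecision =
  registeredSeeds.filterNot(_ == self) match {
    case Nil       => BootstrapNewCluster // nobody else is registered yet
    case seed :: _ => JoinExisting(seed)
  }
```

The split-brain scenario in the question is exactly where this rule alone is insufficient: after a partition heals, both islands may have a node registered in etcd, so the library additionally needs a conflict-resolution step (which is what the auto-downing setting in the example config interacts with).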
[akka-user] Dedicated seed nodes for akka cluster
Hello,

I am just starting to dive into akka-cluster and have a question regarding the seed nodes:

My test project consists of 4 projects so far:

* PROTOCOL contains the messaging case classes
* API contains a cluster node that also exposes a REST API (akka-cluster + akka-http)
* AUTH contains a cluster node for auth stuff (akka-cluster)
* USERS contains a cluster node to manage user data (akka-cluster)

At the moment I just set one API and one AUTH instance as seed nodes (it could also have been some other nodes). Is it good practice to have another special project (called SEED), that does not do anything in particular, except for joining the cluster, to act as a seed node?

From my first thoughts, this might be a good idea, since these nodes would have to be restarted less often than other nodes (that carry business logic); basically only when updating infrastructure like the host machine or Akka.

Am I getting something wrong here?

Greetings
Christian
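[Editor's note: if such a dedicated SEED project were used, the rest of the cluster would only need its stable addresses in configuration. A minimal sketch of the relevant settings, assuming Akka 2.4-style classic remoting; the `airfocus` system name is taken from the logs elsewhere in this archive, and the `seed-1`/`seed-2` hosts are placeholders:]

```hocon
akka {
  actor.provider = "akka.cluster.ClusterActorRefProvider"

  cluster {
    # Stable, rarely restarted dedicated seed nodes; ordinary worker
    # nodes (API, AUTH, USERS) would all point at the same two addresses.
    seed-nodes = [
      "akka.tcp://airfocus@seed-1:2551",
      "akka.tcp://airfocus@seed-2:2551"
    ]
  }
}
```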