[akka-user] How to revive actors when starting up (redeploying) another cluster instance

2017-12-29 Thread Sebastian Oliveri
I have in AWS a cluster, let's call it "Blue", with lots of actors alive in memory, each running its own scheduler periodically. When I deploy another instance of the cluster to Prod, let's call it "Green", I just change the routing to point to the new cluster instance

[akka-user] Node with ShardCoordinator killing itself

2017-10-05 Thread Sebastian Oliveri
I deployed to Prod but then noticed this situation and wonder if I am safe. It is a cluster deployed on AWS with persistent actors. I am using ClusterSharding for remoting, which creates a ShardCoordinator. I have also implemented a keep-majority split-brain resolver. The question is this: when there
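
The keep-majority strategy referenced in this thread boils down to one rule: after a partition, a side that cannot see a strict majority of the last-known members downs itself. A minimal pure sketch of that decision (illustrative names, not the actual split-brain-resolver implementation):

```scala
// Hypothetical sketch of the keep-majority decision rule used by
// split-brain resolvers: a partition survives only if it still sees
// a strict majority of the last-known cluster members.
object KeepMajority {
  // reachable: members this side of the partition can still see
  // total: cluster size before the partition occurred
  def shouldDownSelf(reachable: Int, total: Int): Boolean =
    reachable * 2 <= total // no strict majority => down ourselves
}
```

In a 5-node cluster split 3/2, the 2-node side downs itself. Note that an even split (2/2 of 4) downs both sides, which is one reason odd cluster sizes are usually recommended.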

Re: [akka-user] Removing programmatically and dynamically Node from Cluster

2017-09-15 Thread Sebastian Oliveri
Justin, I adapted your implementation, but as a local actor in every node, living in memory as long as the instance is up and running. I thought about possible edge cases of modeling it as a local actor but cannot come up with a case that would break the scenario. I am thinking

Re: [akka-user] Removing programmatically and dynamically Node from Cluster

2017-09-14 Thread Sebastian Oliveri
September 2017, 9:39:29 (UTC-3), Justin du coeur wrote: > > On Wed, Sep 13, 2017 at 5:55 PM, Sebastian Oliveri <seba...@gmail.com > > wrote: > >> Am I headed in the right direction? I was thinking more of a server that >> crashes than of a vertical network

Re: [akka-user] Removing programmatically and dynamically Node from Cluster

2017-09-13 Thread Sebastian Oliveri
Justin, Thanks so much for the code sample. First of all, let me say that I think I will have no more than 2 nodes at first. I think I was referring to exactly what you did, but yours is much smarter. The workaround I had in mind was to just "down" the unreachable member and that's all. In

[akka-user] Removing programmatically and dynamically Node from Cluster

2017-09-13 Thread Sebastian Oliveri
Hi, I have a cluster with a few nodes running cluster-sharded persistent actors that I am close to deploying in prod. I tested that once a node is unreachable, all the persistent actors inside it are unreachable as well until human intervention takes place to Down that unreachable node for the
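
The manual step described here (a human downs the unreachable node) is what the later replies in this thread automate. The decision bookkeeping can be sketched as pure state, assuming the real version would subscribe to cluster events (UnreachableMember/ReachableMember) and call Cluster(system).down on the surviving side; all names below are illustrative:

```scala
// Pure sketch of "auto-down an unreachable member after a grace
// period". The actual actor would feed this from Akka cluster
// membership events; names here are illustrative only.
final case class DownDecider(unreachableSince: Map[String, Long] = Map.empty) {
  // A node became unreachable at time `now` (epoch millis).
  def memberUnreachable(node: String, now: Long): DownDecider =
    copy(unreachableSince = unreachableSince + (node -> now))
  // The node came back before the grace period expired.
  def memberReachableAgain(node: String): DownDecider =
    copy(unreachableSince = unreachableSince - node)
  // Nodes that stayed unreachable longer than graceMillis: down them.
  def nodesToDown(now: Long, graceMillis: Long): Set[String] =
    unreachableSince.collect {
      case (node, since) if now - since >= graceMillis => node
    }.toSet
}
```

The grace period matters: downing immediately on the first unreachability event would turn every transient network hiccup into a permanent removal.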

Re: [akka-user] Exceptions and supervision when aggregate invariants are violated

2017-09-10 Thread Sebastian Oliveri
s thrown during the > weekend. > > /Patrick > > On Sep 5, 2017, at 16:09, Sebastian Oliveri <seba...@gmail.com > > wrote: > > > I am going to describe a concrete scenario I have: "*an item can be > removed from the menu*" > > class MenuActor

[akka-user] Exceptions and supervision when aggregate invariants are violated

2017-09-05 Thread Sebastian Oliveri
I am going to describe a concrete scenario I have: "*an item can be removed from the menu*" class MenuActor extends PersistentActor { var state: Option[Menu] = None override def receiveCommand: Receive = { case remove: RemoveItem => persist(MenuItemRemoved(remove.itemId)) {
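
One way to handle the invariant question raised here is to validate the command against current state before calling persist, so an invalid RemoveItem is answered with a rejection instead of an exception that trips supervision. A pure sketch using the message names from the thread (the Menu type is assumed, since the original snippet is truncated):

```scala
// Illustrative types; only RemoveItem/MenuItemRemoved come from the
// original thread, the rest is assumed.
final case class Menu(itemIds: Set[String])
final case class RemoveItem(itemId: String)
final case class MenuItemRemoved(itemId: String)

object MenuValidation {
  // Check the invariant first; only a Right result would be persisted.
  def validate(state: Option[Menu], cmd: RemoveItem): Either[String, MenuItemRemoved] =
    state match {
      case Some(menu) if menu.itemIds.contains(cmd.itemId) =>
        Right(MenuItemRemoved(cmd.itemId))
      case Some(_) => Left(s"item ${cmd.itemId} is not on the menu")
      case None    => Left("menu does not exist yet")
    }
}
```

Inside receiveCommand one would call persist only on Right and reply to the sender with the Left message otherwise, so invalid commands never become events.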

Re: [akka-user] Actor System hierarchy when using DDD

2017-09-04 Thread Sebastian Oliveri
Thanks Patrick, I will use the Backoff.onStop method to create the supervisors for the aggregates and try to restart them with increasing delays. Sebastian. -- >> Read the docs: http://akka.io/docs/ >> Check the FAQ: >>
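
For reference, Backoff.onStop wraps the child Props in a BackoffSupervisor that restarts the aggregate with exponentially growing delays. The schedule itself can be sketched as a pure function (deterministic; the real supervisor also applies a randomFactor jitter, omitted here):

```scala
// Sketch of the exponential delay schedule that Backoff.onStop-style
// supervision produces: minBackoff doubled per restart, capped at
// maxBackoff. Jitter from randomFactor is intentionally left out.
object BackoffSchedule {
  def delayMillis(restartCount: Int, minMillis: Long, maxMillis: Long): Long = {
    val raw = minMillis * math.pow(2.0, restartCount.toDouble)
    math.min(raw, maxMillis.toDouble).toLong
  }
}
```

So with a 1-second minimum and 30-second maximum, restarts happen after 1s, 2s, 4s, 8s, ... until the cap is reached.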

[akka-user] Actor System hierarchy when using DDD

2017-09-02 Thread Sebastian Oliveri
I've read that, for Akka-based software to be well organized and supervised, it is recommended to arrange actors as a tree instead of having many top-level actors hanging from the user guardian, and this is something I still don't get. I see the point as a performance tip when

[akka-user] AtLeastOnceDelivery: access to the original sender and error handling

2017-08-31 Thread Sebastian Oliveri
Hi! I am sort of new to Akka. I am implementing DDD around clustered actors. Since they are clustered, I need to make sure they receive messages at least once. Because of that I have an implicit actor in a Play controller which calls my target actor (the aggregate) through a "proxy"
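
The "proxy" described here is typically built on Akka Persistence's AtLeastOnceDelivery trait, whose core is bookkeeping: assign a deliveryId on deliver, keep the message until confirmDelivery, and redeliver anything unconfirmed. A pure, illustrative sketch of that bookkeeping (not the real trait):

```scala
// Illustrative model of at-least-once delivery bookkeeping.
final case class Pending(deliveryId: Long, message: String)

final case class DeliveryState(nextId: Long = 1L, pending: Map[Long, String] = Map.empty) {
  // deliver: assign an id and remember the message until confirmed
  def deliver(message: String): (Pending, DeliveryState) =
    (Pending(nextId, message), DeliveryState(nextId + 1, pending + (nextId -> message)))
  // confirmDelivery: drop the message once the destination acked it
  def confirm(deliveryId: Long): DeliveryState = copy(pending = pending - deliveryId)
  // anything still here is eligible for redelivery
  def unconfirmed: Map[Long, String] = pending
}
```

Because redelivery fires until the destination acknowledges, the target actor must deduplicate (messages may arrive more than once), which is why the deliveryId travels with the message.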