I have a cluster in AWS, let's call it "Blue", with lots of actors alive in
memory, each of them running its own scheduler at a fixed interval.
When I deploy another instance of the cluster to Prod, let's call it
"Green", I just change the routing to point to the new cluster instance
I deployed. But then I realized this situation might be a problem and
wonder if I am safe.
It is a cluster deployed on AWS with persistent actors. I am using
Cluster Sharding for remoting, which creates a ShardCoordinator. I have also
implemented a keep-majority split-brain strategy.
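(For reference: on recent Akka versions (2.6+) that ship the open-source split-brain resolver, keep-majority takes a few lines of configuration. This is a sketch; the stable-after value is illustrative, and setups from the era of this thread used the commercial Lightbend resolver instead.)

```hocon
akka.cluster.downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"

akka.cluster.split-brain-resolver {
  active-strategy = keep-majority
  # how long a member must remain unreachable before the strategy acts
  stable-after = 20s
}
```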
The question is this: when there
Justin,
I took your implementation but made it a local actor on every node, so it
lives in memory as long as the instance is up and running.
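(A minimal sketch of such a node-local actor, assuming classic Akka actors; the actor name, interval, and tick message are illustrative, not from the original posts:)

```scala
import akka.actor.{Actor, ActorSystem, Cancellable, Props}
import scala.concurrent.duration._

// Hypothetical node-local scheduler actor. It is started directly on the
// local ActorSystem (not through Cluster Sharding), so exactly one instance
// lives on each node for as long as that node is up.
class NodeLocalScheduler extends Actor {
  import context.dispatcher

  private val tick: Cancellable =
    context.system.scheduler.schedule(1.minute, 1.minute, self, "tick")

  override def postStop(): Unit = tick.cancel()

  def receive: Receive = {
    case "tick" => // run the node-local periodic work here
  }
}

// On each node's startup:
// val system = ActorSystem("Blue")
// system.actorOf(Props[NodeLocalScheduler], "node-local-scheduler")
```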
I thought about possible edge cases of modeling it as a local actor, but I
cannot come up with a case that would break the scenario.
I am thinking
In September 2017, at 9:39:29 (UTC-3), Justin du coeur wrote:
>
> On Wed, Sep 13, 2017 at 5:55 PM, Sebastian Oliveri <seba...@gmail.com> wrote:
>
>> Am I in the right direction? I was thinking more of a server that crashes
>> than of a network partition
Justin,
Thanks so much for the code sample.
First of all, let me say that I think I will have no more than 2 nodes at
first.
I think I was referring to exactly what you did, but yours is much smarter.
The workaround I had in mind was to just "down" the unreachable member and
that's all.
In
Hi,
I have a cluster with a few nodes running cluster-sharded persistent
actors that I am close to deploying in prod.
I tested that once a node is unreachable, all the persistent actors inside
it are unreachable as well until human intervention takes place to Down
that unreachable node for the
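(For readers following along: the manual "Down" step can be done via JMX, via Akka Management's HTTP endpoint, or programmatically. A sketch of the programmatic form, with a placeholder address, assuming classic remoting:)

```scala
import akka.actor.{ActorSystem, Address}
import akka.cluster.Cluster

object ManualDowning extends App {
  val system = ActorSystem("MyCluster")

  // Placeholder address of the unreachable member; the protocol is
  // "akka.tcp" for classic remoting ("akka" when using Artery).
  val unreachable = Address("akka.tcp", "MyCluster", "10.0.0.7", 2552)

  // Marks the member as Down so its shards can be re-allocated
  // onto the remaining nodes.
  Cluster(system).down(unreachable)
}
```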
> s thrown during the weekend.
>
> /Patrick
>
> On Sep 5, 2017, at 16:09, Sebastian Oliveri <seba...@gmail.com> wrote:
I am going to describe a concrete scenario I have: "*an item can be removed
from the menu*"
class MenuActor extends PersistentActor {
  override def persistenceId: String = self.path.name
  var state: Option[Menu] = None
  override def receiveCommand: Receive = {
    case remove: RemoveItem =>
      persist(MenuItemRemoved(remove.itemId)) { event =>
        state = state.map(_.withoutItem(event.itemId)) // withoutItem: hypothetical domain method on Menu
      }
  }
  override def receiveRecover: Receive = {
    case event: MenuItemRemoved =>
      state = state.map(_.withoutItem(event.itemId))
  }
}
Thanks Patrick,
I will use the Backoff.onStop method to create the supervisors for the
aggregates and try to restart them with incrementally increasing delays.
Sebastian.
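(A sketch of that Backoff.onStop supervisor, assuming the classic akka.pattern API; the child props, names, and timings are illustrative:)

```scala
import akka.actor.Props
import akka.pattern.{Backoff, BackoffSupervisor}
import scala.concurrent.duration._

object AggregateSupervision {
  // Placeholder for the real aggregate actor's Props.
  def childProps: Props = Props[MenuActor]

  // Restarts the child after it stops, waiting 3s, then 6s, 12s, 24s,
  // capped at 30s, with 20% jitter to avoid synchronized restart storms.
  val supervisorProps: Props = BackoffSupervisor.props(
    Backoff.onStop(
      childProps,
      childName = "menu",
      minBackoff = 3.seconds,
      maxBackoff = 30.seconds,
      randomFactor = 0.2
    )
  )
  // system.actorOf(supervisorProps, "menu-supervisor")
}
```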
--
>> Read the docs: http://akka.io/docs/
I've read that, for Akka-based software to be well organized and
supervised, it is recommended to arrange actors as a tree instead of having
top-level actors hanging off the user guardian, and this is something I
still don't get the point of. I see the point as a performance tip when
Hi! I am sort of new to Akka. I am implementing DDD around clustered
actors. Since they are clustered, I need to make sure they receive each
message at least once. Because of that, I have an implicit actor in a Play
controller which calls my target actor (the aggregate) through a
"proxy"