Greetings, 

We have an actor structure in which a cluster-sharded actor calculates a 
parametric matrix of about 7 MB and has to distribute it to 5 other nodes 
for consumption. We are working under the following constraints (a rough 
sketch of the resulting flow follows the list): 

1. The matrix has to be generated in one place.
2. The matrix is needed on all nodes to handle user load. 
3. The matrix will be periodically regenerated as its input variables 
change, and each new version then has to be sent to all nodes to handle 
user load. 
4. Saving the matrix in a database is probably not viable: it merely 
shifts the network load, and the database would grow very large very fast. 
The database stores only the input parameters. 
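
To make the setup concrete, here is a minimal sketch of the flow using 
plain DistributedPubSub. Names like MatrixGenerator, MatrixConsumer, and 
the "matrices" topic are simplified stand-ins, not our actual code:

import akka.actor.{Actor, ActorLogging}
import akka.cluster.pubsub.DistributedPubSub
import akka.cluster.pubsub.DistributedPubSubMediator.{Publish, Subscribe, SubscribeAck}

// Input parameters (the only thing persisted in the database).
final case class MatrixParams(modelId: String)
// The result: 900k doubles is roughly the 7 MB payload in question.
final case class ParametricMatrix(modelId: String, data: Array[Double])

// Cluster-sharded actor: computes the matrix in one place, then broadcasts it.
class MatrixGenerator extends Actor {
  private val mediator = DistributedPubSub(context.system).mediator

  def receive: Receive = {
    case params: MatrixParams =>
      mediator ! Publish("matrices", compute(params))
  }

  private def compute(p: MatrixParams): ParametricMatrix =
    ParametricMatrix(p.modelId, Array.ofDim[Double](900000)) // real math elided
}

// One per node: caches the latest matrix and serves user load from it.
class MatrixConsumer extends Actor with ActorLogging {
  DistributedPubSub(context.system).mediator ! Subscribe("matrices", self)

  def receive: Receive = {
    case SubscribeAck(_) =>
      log.info("subscribed to the matrices topic")
    case m: ParametricMatrix =>
      log.info("cached new matrix for {}", m.modelId)
      // user-load handling elided
  }
}

Every node runs one consumer, so each regeneration pushes the full ~7 MB 
matrix to each of the 5 nodes.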

We raised the Akka maximum message size to 10 MB to accommodate this, 
which feels a bit odd, but we didn't see another choice. Normally it works 
fine, even though passing ~7 MB messages around via DistributedPubSub 
seems odd to me. However, on startup the system has to start up 2000 of 
these all at once, and as a result the sharding coordinators scream at us 
about buffered messages. Everything eventually calms down and life 
resumes, but I would love to be able to do this without the bloodbath in 
the logs. 
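
For reference, assuming classic Netty remoting, the change amounts to 
something like this in application.conf (with Artery the equivalent 
setting would be akka.remote.artery.advanced.maximum-frame-size):

akka.remote.netty.tcp {
  maximum-frame-size = 10 MiB    # default is 128000b
  send-buffer-size = 12 MiB      # buffers should exceed the frame size
  receive-buffer-size = 12 MiB
}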

Can someone recommend an alternative strategy for distributing the 
parametric matrix, one that gets it to every node but doesn't cause the 
shard coordinator complaint bloodbath? 

Thanks in advance. 
Robert Simmons Jr. MSc.
