[akka-user] Re: Akka Cluster Project - Monolithic JAR or JARs per service?
*Awesome*. This is exactly the answer I was looking for, Ryan! I need to read and re-read your post a couple more times for it to sink in, but this seems like a great starting point. A blog article on this topic would really be appreciated, though! :-)

I'm of the opinion that while Typesafe has done an absolutely brilliant job at documenting the technology from a developer's point of view, documentation on the ops/production side of things really is a little lacking. I'm super glad that businesses like Conspire are giving back to the community like this - your 5-part blog series has been absolutely invaluable to us.

Thanks again, Ryan,

Kane

On Wednesday, 7 January 2015 02:39:57 UTC+11, Ryan Tanner wrote:
> We're also deploying an Akka cluster on CoreOS with Docker. We deploy the same fat JAR for every node (in fact, the exact same Docker image) and then change each node's behavior by setting its role via environment variables. [...]
[akka-user] Re: Akka Cluster Project - Monolithic JAR or JARs per service?
Glad to help!

On Wednesday, January 7, 2015 3:53:28 AM UTC-7, Kane Rogers wrote:
> *Awesome*. This is exactly the answer I was looking for, Ryan! [...]
[akka-user] Re: Akka Cluster Project - Monolithic JAR or JARs per service?
Hi Kane,

> Exploring the different options, one limitation that I can see is that ClusterPoolRouter requires the class of the actor that's going to be remotely deployed to the cluster to be *present on the class path of the router*. That is, if our front-ends are to create a worker on a remote machine to handle a request, the class for that worker must be in the JAR on the front-end machine. *Please correct me if I'm mistaken here.*

In my experience this is true. In my activator template http://typesafe.com/activator/template/play-akka-cluster-sample I used the same pattern you describe, building an *API* package that contains all the messages in order to distribute those classes to all services. However, this means that your spray-frontend has dependencies on *all* of the API packages. This could get messy if you have a lot of small services (versioning, backwards compatibility, etc.).

I'm really interested in other users' suggestions :)

cheers,
Muki

--
Read the docs: http://akka.io/docs/
Check the FAQ: http://doc.akka.io/docs/akka/current/additional/faq.html
Search the archives: https://groups.google.com/group/akka-user
---
You received this message because you are subscribed to the Google Groups "Akka User List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to akka-user+unsubscr...@googlegroups.com.
To post to this group, send email to akka-user@googlegroups.com.
Visit this group at http://groups.google.com/group/akka-user.
For more options, visit https://groups.google.com/d/optout.
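For readers following along: the shared *API*/dialect package that Muki and Kane both describe usually boils down to a sealed message protocol that every service depends on, while the services never depend on each other. A minimal sketch (all names here are hypothetical, not taken from the activator template or the Conspire codebase):

```scala
// Hypothetical contents of the shared commons/"API" module.
// A sealed trait keeps the set of cluster messages closed, so every
// service compiles against the exact same protocol classes.
sealed trait ClusterMessage extends Serializable

// Sent by the spray-frontend to a worker node
final case class JobRequest(jobId: String, payload: String) extends ClusterMessage

// Sent back by the workers
final case class JobResult(jobId: String, result: String) extends ClusterMessage
final case class JobFailed(jobId: String, reason: String) extends ClusterMessage
```

Note that sharing the message classes this way does not remove the classpath limitation discussed above: remotely deploying a worker via a pool router still requires the worker actor's own class, not just its messages, on the deploying node's classpath.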
[akka-user] Re: Akka Cluster Project - Monolithic JAR or JARs per service?
We're also deploying an Akka cluster on CoreOS with Docker. We deploy the same fat JAR for every node (in fact, the exact same Docker image) and then change each node's behavior by setting its role via environment variables. In our case, each role has a bootstrap class which sets up whatever role-specific actors might be needed (but *not* the worker actors) as well as the actors needed on every node. Cluster-aware routers running on our supervisor node are responsible for actually creating the worker actors as needed.

We still split each role into sub-projects in our SBT project. We then have an SBT project within that, called node, which aggregates all of the service-specific sub-projects into a single JAR and builds the Docker image using sbt-docker. This way, we can iterate quickly on code within a specific service and keep our compile/test times down. That node project also houses our multi-JVM end-to-end tests. Opposite that is our commons project, which every other project depends on. The service projects explicitly do *not* depend on each other, only on commons. This keeps them from getting too coupled.

Since our service isn't directly user-facing, we don't bother trying to upgrade a single service at a time; we just restart the whole thing and let it pull down the new Docker image if needed.

Most of this is written up on our blog: http://blog.conspire.com/post/64130417462/akka-at-conspire-part-1-how-we-built-our-backend-on

I've been meaning to write about our transition from Chef/Vagrant to CoreOS/Docker but I haven't found the time yet. Hopefully within the next few weeks (which, of course, I said a month ago).

On Sunday, January 4, 2015 6:42:21 PM UTC-7, Kane Rogers wrote:
> Hi, hAkkers!
>
> We're in the process of moving our distributed Akka service from the dark ages of remoting and manual management of IPs (shudder) into the wonderful new world of Akka Cluster.
>
> Currently, our project is split up something like this:
>
> - spray-frontend
> - worker-1
> - worker-2
> - worker-3
>
> where the spray-frontend forwards messages to the different workers, depending on the type of job. In our current environment, each of these projects is built as an individual fat JAR using sbt-assembly and deployed onto its own node. In our planned environment, we'll be deploying these fat JARs in Docker containers and letting CoreOS take care of distributing the nodes. We're toying with things like roles, ClusterPoolRouter and ClusterGroupRouter to take care of distributing work amongst the correct nodes - but nothing is set in stone yet.
>
> This raises the question - how should these nodes be deployed? I can see a couple of possibilities:
>
> - Docker container with a fat JAR per project (e.g. a spray-frontend container, a worker-1 container, etc.).
> - Docker container with a fat JAR containing all projects (e.g. one container containing code for spray-frontend AND worker-1, etc.). The role is then set via an environment variable, or a different main class is fired off on startup.
>
> Exploring the different options, one limitation that I can see is that ClusterPoolRouter requires the class of the actor that's going to be remotely deployed to the cluster to be *present on the class path of the router*. That is, if our front-ends are to create a worker on a remote machine to handle a request, the class for that worker must be in the JAR on the front-end machine. *Please correct me if I'm mistaken here.*
>
> The advantage we've found in splitting the project up into these different sub-projects is that tests are a lot quicker, the code is smaller, etc. Upgrades are also made easier, as only certain machines have to be upgraded/restarted if a component of the service is improved/fixed.
>
> We also have a shared project between the different services that contains the dialect (e.g. the different case classes for messages sent between services). This was a best practice that we read about when we first went down the Akka path a couple of years ago, but things may have changed since then!
>
> Any suggestions, past experience, pointers to articles to read, activator templates or even just general advice would be really appreciated!
>
> Thanks and kind regards,
> Kane
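The "same JAR everywhere, role from the environment" pattern Ryan describes can be sketched roughly as follows. This is only an illustration of the dispatch idea, not Conspire's actual code; the `CLUSTER_ROLE` variable name and the bootstrap strings are made up (though `akka.cluster.roles` is the real Akka setting the role value would normally also feed):

```scala
// Hypothetical entry point for the aggregated single "node" JAR.
// Every container runs this same main class; what the node becomes
// is decided by an environment variable (name is illustrative).
object NodeMain {

  // Decide what this node should bootstrap based on its role. In a
  // real app, each branch would start that role's top-level actors,
  // and the same value would be passed to Akka via akka.cluster.roles
  // so cluster-aware routers can target nodes with use-role.
  def bootstrapFor(role: String): String = role match {
    case "frontend" => "starting spray-frontend actors"
    case "worker"   => "starting worker-node supervisors"
    case other      => sys.error(s"unknown CLUSTER_ROLE: $other")
  }

  def main(args: Array[String]): Unit =
    println(bootstrapFor(sys.env.getOrElse("CLUSTER_ROLE", "frontend")))
}
```

With this shape, the Docker image stays identical for every node and the deployment layer (CoreOS unit files, in this thread's setup) only varies the environment variable.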