Anybody? Scalr guys, can you comment?
On Jul 10, 4:01 pm, rhythmandcode <[email protected]> wrote:
> Hey everybody, I've been scheming on the best way to use Scalr to set
> up a farm for our application. I'd like to present what I've been
> thinking and see if anyone has comments or critiques. Specifically,
> I'd like to know if anyone has done something similar and whether you
> had success or ran into problems.
>
> The basic overview: we are deploying a fairly standard Rails CMS app.
> I'm planning on the farm having 4 roles:
>
> 1. DB - Standard database setup.
> 2. Storage - One or more EC2 instances configured to export some EBS
> volumes as a Gluster file system, which will hold public assets and
> user-uploaded content.
> 3. app-rails - Standard Rails front ends. These instances will mount
> the GlusterFS partition exported by the storage role.
> 4. Memcache - Caching for app-rails.
>
> I'm planning to put an Elastic Load Balancer in front of the app-rails
> role.
>
> Some questions I have:
>
> Should the files for my Rails app live on the GlusterFS? This would
> let us deploy app changes to the storage nodes and then just restart
> the Rails processes on the app-rails servers. It seems like having
> multiple Rails processes pointed at the same app directory could be
> thorny. Does anybody know about that?
>
> The other alternative would be to set up the app-rails role to check
> out the app from Git each time an instance launches. Then, when we
> push a new version of the app, we push to each app-rails server and
> restart. Are there any big benefits to doing it this way?
>
> Any feedback on this would be appreciated.
>
> Thanks,
> Jeremy

You received this message because you are subscribed to the Google Groups "scalr-discuss" group.
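For whoever picks this up: the Gluster side of the Storage role (role 2) would look roughly like the sketch below with the current GlusterFS CLI. The hostnames (storage1/storage2), brick path, volume name "assets", and mount point are all placeholders, not anything Scalr-specific, and older GlusterFS releases configured volumes with hand-written volume spec files instead of this CLI — adapt to your version.

```shell
# Storage role: group the EBS-backed bricks into one replicated volume.
# (Hypothetical hostnames, brick paths, and volume name throughout.)
gluster volume create assets replica 2 \
    storage1:/ebs/brick1 storage2:/ebs/brick1
gluster volume start assets

# app-rails role: mount the volume where the app expects its shared files.
mkdir -p /mnt/assets
mount -t glusterfs storage1:/assets /mnt/assets

# Equivalent /etc/fstab entry so the mount survives reboots:
#   storage1:/assets  /mnt/assets  glusterfs  defaults,_netdev  0 0
```

With a replicated volume the app servers can keep serving uploads if one storage instance dies, which matters more here than raw throughput.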
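On the Git alternative Jeremy raises: a boot-time checkout can be written to be idempotent, so the same script handles both first launch and later pushes. A minimal sketch, assuming a Passenger-style `tmp/restart.txt` restart convention — the function name, paths, and that convention are my assumptions, not Scalr behavior:

```shell
# checkout_app REPO DIR [BRANCH]: clone on first run, hard-update afterwards.
# All names here are illustrative; swap in your repo URL and deploy path.
checkout_app() {
    repo=$1; dir=$2; branch=${3:-master}
    if [ -d "$dir/.git" ]; then
        # Instance already has a checkout: sync it to the pushed branch.
        git -C "$dir" fetch -q origin "$branch"
        git -C "$dir" reset -q --hard "origin/$branch"
    else
        # First launch: fresh clone of the requested branch.
        git clone -q --branch "$branch" "$repo" "$dir"
    fi
    # Passenger restarts the app on the next request after restart.txt is
    # touched; replace with your app server's restart command if different.
    mkdir -p "$dir/tmp"
    touch "$dir/tmp/restart.txt"
}
```

Run it from the instance's launch script; on a deploy, re-run it on each app-rails server instead of pushing files over the shared filesystem. That keeps each app server's working copy local (sidestepping the many-processes-on-one-directory question) while GlusterFS stays dedicated to uploads and public assets.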
