Hey Frédéric, thanks for commenting. I've been looking at how to store some of our stuff directly in S3, but it seems a little problematic for some of our files. Some of them are template files that will be modified by end users and then rendered by the app instances to deliver the final page. It seems like fetching a template from S3 on every render could be slow. Of course, I haven't tested that yet...
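For what it's worth, what I have in mind is roughly a fetch-plus-local-cache pattern, something like the sketch below (Python; the S3 call is stubbed out so this runs standalone — in production it would be a boto call like `bucket.get_key(key).get_contents_as_string()`, and the key name here is made up):

```python
import time

# Stand-in for the real S3 fetch (with boto, roughly
# bucket.get_key(key).get_contents_as_string()); stubbed so the sketch runs.
def fetch_template_from_s3(key):
    return "<h1>{{ title }}</h1>"  # placeholder template body

class TemplateCache:
    """Keep S3-fetched templates in local memory for a short TTL so we
    don't pay an S3 round trip on every render."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._entries = {}  # key -> (fetched_at, body)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is not None and time.time() - entry[0] < self.ttl:
            return entry[1]  # still fresh: serve from local memory
        body = fetch_template_from_s3(key)
        self._entries[key] = (time.time(), body)
        return body
```

With something like this, only the first render after a template edit (or TTL expiry) would pay the S3 latency, so maybe it's less of a problem than I feared.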
On Jul 14, 4:38 pm, Frédéric Sidler <[email protected]> wrote:
> Not from scalr, but here is our config. We don't use GlusterFS, because I
> didn't know it before ;-)
>
> At each deployment we upload the JS and CSS files to S3. Deployment is done
> with one simple command line. We use yuicompressor to optimize these files,
> and now we use CloudFront to deliver them. So no more latency problems, and
> the files are accessible from all app instances. With this config, app
> instances can scale and files are always accessible. Static files are thus
> served not by these instances but directly from S3 via CloudFront. Uploaded
> files we store directly in S3, without CloudFront (not necessary). We use
> Django for our development, and the boto library for the S3 uploads.
>
> For the DB, we decided to put mysql-proxy in front of it: writes go to the
> master, reads go to the slave(s). One Scalr user provides a mysql-proxy
> script that runs at each MySQL instance start and updates the mysql-proxy
> process with the IP address of the master and the IP address(es) of the
> slave(s). So the DB is also scalable.
>
> We have one separate instance for memcache, and one separate instance for
> notifications: a Twisted process checks an SQS queue for SMS, Jabber or
> Email notifications. These instances are not scalable yet, but could be:
> based on load average for memcache and on SQS queue size for Twisted.
>
> Hope this helps.
>
> On Tue, Jul 14, 2009 at 6:16 PM, rhythmandcode <[email protected]> wrote:
>
> > Anybody? Scalr guys, can you comment?
> >
> > On Jul 10, 4:01 pm, rhythmandcode <[email protected]> wrote:
> > > Hey everybody, I've been scheming on the best way to use Scalr to set
> > > up a farm for our application. I'd like to present what I've been
> > > thinking of and see if anyone has any comments or critiques.
> > > Specifically, I'd like to know if anyone has done something similar
> > > and whether you had success or ran into problems.
> > >
> > > The basic overview is that we are deploying a fairly standard Rails
> > > CMS app. I'm planning on the farm having 4 roles:
> > >
> > > 1. DB - Standard DB setup.
> > > 2. Storage - 1 or more EC2 instances configured to export some EBS
> > > stores as a Gluster file system, which will hold public assets and
> > > user-uploaded content.
> > > 3. app-rails - Standard Rails front ends. These instances will mount
> > > the GlusterFS partition exported by the storage role.
> > > 4. Memcache - Caching for app-rails.
> > >
> > > I'm planning to put an elastic load balancer in front of the
> > > app-rails role.
> > >
> > > Some questions I have:
> > >
> > > Should the files for my Rails app live on the GlusterFS? This would
> > > allow us to just deploy app changes to the storage nodes, then
> > > restart the Rails processes on the app-rails servers. It seems like
> > > having multiple Rails processes pointed at the same app directory
> > > could be thorny. Anybody know about that?
> > >
> > > The other alternative would be to set up the app-rails roles to check
> > > out the app from Git each time they launch. Then when we push a new
> > > version of the app, we push to each app-rails server and restart.
> > > Are there any big benefits to doing it this way?
> > >
> > > Any feedback on this would be appreciated.
> > >
> > > Thanks,
> > > Jeremy

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "scalr-discuss" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [email protected]
For more options, visit this group at http://groups.google.com/group/scalr-discuss?hl=en
-~----------~----~----~----~------~----~------~--~---
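P.S. Frédéric, if I read your deploy step right, the per-deployment asset push amounts to something like the sketch below (Python; the actual S3 write is stubbed out as a callable — with boto it would be roughly `bucket.new_key(dest).set_contents_from_filename(path)` plus a public-read ACL; the `assets/` prefix and paths are invented for illustration):

```python
import mimetypes
import os

def content_type_for(path):
    """Guess the Content-Type header so CloudFront serves assets correctly."""
    ctype, _ = mimetypes.guess_type(path)
    return ctype or "application/octet-stream"

def deploy_assets(paths, upload, prefix="assets/"):
    """Push each (already minified) file to S3 via the given upload callable.

    upload(dest_key, local_path, content_type) performs the actual S3 write;
    in production that callable would wrap boto, e.g.
    bucket.new_key(dest_key).set_contents_from_filename(local_path).
    """
    for path in paths:
        dest = prefix + os.path.basename(path)
        upload(dest, path, content_type_for(path))
```

Run after yuicompressor over the build output, and CloudFront picks the files up from the bucket. Is that about right?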
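And on the mysql-proxy setup: as I understand it, the routing rule mysql-proxy applies boils down to "writes to the master, reads round-robin across slaves." A toy Python illustration of that idea (mysql-proxy itself is actually scripted in Lua, and real statement classification is more involved than checking the first keyword; the IPs are placeholders):

```python
import itertools

class ReadWriteRouter:
    """Toy version of the read/write split mysql-proxy performs:
    writes go to the master, reads round-robin across the slaves.
    Classification here is naive (first SQL keyword only)."""

    WRITE_VERBS = {"insert", "update", "delete", "replace",
                   "create", "alter", "drop"}

    def __init__(self, master, slaves):
        self.master = master
        self._slaves = itertools.cycle(slaves) if slaves else None

    def route(self, sql):
        verb = sql.lstrip().split(None, 1)[0].lower()
        if verb in self.WRITE_VERBS or self._slaves is None:
            return self.master  # writes (or no slaves) hit the master
        return next(self._slaves)  # reads rotate across slaves
```

The nice part of your setup is that the startup script re-feeds mysql-proxy the current master/slave IPs, so the app never has to know the topology changed.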
