Hi Frédéric, thank you for the info.  So, I've done a little testing
on this and your suggestion to use S3 for hosting these files seems to
be a pretty good one.  Speed is quite acceptable.  However, now I've
run up against a different question.  I realize this is more of an
AWS/S3 question than a Scalr question, but I'm curious whether you've
hit the same problem.

We're trying to do some virtual hosting for our clients, and as such
have set up one bucket per client so that they can have vanity URLs
for their images (images.client-domain.com) in addition to their main
web site (www.client-domain.com), which is being served by our EC2
instances.  Apparently AWS still enforces a 100-bucket limit per
account, which means we could only support 100 clients.  Not good.
Have you run into this, and/or do you have any ideas on how to proceed?
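For context, the way we're creating these vanity buckets looks roughly
like this (just a sketch using boto; the helper names are ours, and the
bucket name has to match the CNAME exactly for S3 virtual hosting to work):

```python
def vanity_bucket_name(client_domain, subdomain="images"):
    # S3 virtual hosting requires the bucket name to match the vanity
    # hostname exactly, e.g. images.client-domain.com
    return "%s.%s" % (subdomain, client_domain)

def create_vanity_bucket(conn, client_domain):
    # conn is a boto S3Connection; one bucket per client is exactly
    # what runs us into the 100-bucket account limit
    return conn.create_bucket(vanity_bucket_name(client_domain))
```

On the DNS side each client then gets a CNAME pointing
images.client-domain.com at images.client-domain.com.s3.amazonaws.com.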

Thanks again,
Jeremy

On Jul 15, 3:07 pm, Frédéric Sidler <[email protected]> wrote:
> I understand. In our case we store the CSS, JS, and customer-uploaded
> files on S3, which is why the configuration we have works.
> I know there is latency when an internet user accesses S3 directly.
> But if I remember correctly, the latency is not the same when you
> access the file from inside the Amazon architecture, directly from an
> instance that resides in the same availability zone. I think you should
> give it a try or do a search.
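That's easy enough to measure — a quick sketch of a timing helper; the
commented usage line assumes a boto bucket object you already have:

```python
import time

def average_seconds(fetch, runs=5):
    # Call fetch() `runs` times and return the mean wall-clock duration,
    # so the same helper can compare EC2-internal vs. external latency
    timings = []
    for _ in range(runs):
        start = time.time()
        fetch()
        timings.append(time.time() - start)
    return sum(timings) / len(timings)

# Run this once from an EC2 instance and once from an outside machine:
# average_seconds(lambda: bucket.get_key("css/site.css").get_contents_as_string())
```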
>
> On Wed, Jul 15, 2009 at 7:16 PM, rhythmandcode <[email protected]> wrote:
>
>
>
> > Hey Frédéric, thanks for commenting.
>
> > I've been looking at how to store some of our stuff directly in S3,
> > but it seems a little problematic for some of our files.  Some of them
> > are template files that will be modified by end users, then rendered
> > by the app instances to deliver a final page.  It seems like storing
> > the template directly in S3 could be a little slow.  Of course, I
> > haven't tested it yet....
>
> > On Jul 14, 4:38 pm, Frédéric Sidler <[email protected]> wrote:
> > > Not from scalr, but here is our config.
> > > We don't use GlusterFS, because I didn't know about it at the time ;-)
>
> > > So at each deployment we upload the JS and CSS files to S3. Deployment
> > > is done with one simple command line. We use yuicompressor to optimize
> > > these files, and we now use CloudFront to deliver them, so no more
> > > latency problems, and the files are accessible from all app instances.
> > > With this config, app instances can scale and the files are always
> > > accessible. Static files are referenced by these instances but delivered
> > > directly from S3 via CloudFront. Uploaded files we store directly in S3
> > > without CloudFront (not necessary). We are using Django for our
> > > development, and we use the boto library for this.
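That deployment step might look something like the following — a hedged
sketch, not the actual script; the yuicompressor jar path is assumed, and
`bucket` is a boto bucket object:

```python
import os
import subprocess

def is_static_asset(fname):
    # Only JS and CSS go through the minify-and-upload pipeline
    return fname.endswith((".js", ".css"))

def compress_and_upload(bucket, local_dir, yui_jar="yuicompressor.jar"):
    for fname in os.listdir(local_dir):
        if not is_static_asset(fname):
            continue
        path = os.path.join(local_dir, fname)
        minified = path + ".min"
        # Minify with yuicompressor, then push the result to S3 so
        # CloudFront can pick it up
        subprocess.check_call(["java", "-jar", yui_jar, "-o", minified, path])
        key = bucket.new_key(fname)
        key.set_contents_from_filename(minified)
        key.set_acl("public-read")
```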
>
> > > For the DB, we decided to put mysql-proxy in front of it. Writes go to
> > > the master, reads go to the slave(s). One Scalr user provided a
> > > mysql-proxy script that runs at each MySQL instance start. It updates
> > > the mysql-proxy process with the IP address of the master and the IP
> > > address(es) of the slave(s), so the DB is also scalable.
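Since they're on Django, the same read/write split could alternatively
be done in the app itself with a Django database router instead of
mysql-proxy — a sketch, assuming a 'default' master alias and the slave
aliases listed below in DATABASES:

```python
import random

class MasterSlaveRouter(object):
    """Send writes to the master, spread reads across the slaves.

    Assumes settings.DATABASES defines 'default' (the master) plus
    the slave aliases named here.
    """
    slaves = ["slave1", "slave2"]

    def db_for_read(self, model, **hints):
        # Pick a slave at random for each read
        return random.choice(self.slaves)

    def db_for_write(self, model, **hints):
        # All writes go to the master
        return "default"
```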
>
> > > We have one separate instance for memcache and one separate instance
> > > for notifications, where a Twisted process checks an SQS queue for SMS,
> > > Jabber, or email notifications. These instances are not scalable yet,
> > > but they could be, based on load average for memcache and on SQS queue
> > > size for Twisted.
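The SQS-polling side of that notification process might look roughly
like this (a sketch, not the actual Twisted code; `handle` stands in for
whatever sends the SMS, Jabber, or email message):

```python
def drain_queue(queue, handle, batch=10):
    # Pull up to `batch` messages from a boto SQS queue, dispatch each
    # one, and delete it only after it has been handled successfully
    messages = queue.get_messages(batch)
    for msg in messages:
        handle(msg.get_body())
        queue.delete_message(msg)
    return len(messages)
```

Scaling on queue size, as suggested above, could then key off boto's
approximate `queue.count()`.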
>
> > > Hope this helps.
>
> > > On Tue, Jul 14, 2009 at 6:16 PM, rhythmandcode <[email protected]> wrote:
>
> > > > Anybody?  Scalr guys, can you comment?
>
> > > > On Jul 10, 4:01 pm, rhythmandcode <[email protected]> wrote:
> > > > > Hey everybody, I've been scheming on the best way to utilize Scalr
> > > > > to set up a farm for our application.  I'd like to present what I've
> > > > > been thinking of and see if anyone has any comments or critiques.
> > > > > Specifically, I'd like to know if anyone has done something similar
> > > > > and if you had success or ran into problems.
>
> > > > > The basic overview is that we are deploying a fairly standard rails
> > > > > CMS app.  I'm planning on the farm having 4 roles.
>
> > > > > 1.  DB - Standard DB setup.
> > > > > 2.  Storage - 1 or more EC2 instances configured to export some EBS
> > > > > stores as a Gluster file system which will hold public assets and
> > > > > user-uploaded content.
> > > > > 3.  app-rails - Standard Rails front ends.  These instances will
> > > > > mount the GlusterFS partition exported by the storage roles.
> > > > > 4.  Memcache - Caching for the app-rails role.
>
> > > > > I'm planning to put an elastic load balancer in front of the
> > > > > app-rails role.
>
> > > > > Some questions I have:
>
> > > > > Should the files for my Rails app live on the GlusterFS?  This
> > > > > would allow us to deploy app changes to the storage nodes and then
> > > > > just restart the Rails processes on the app-rails servers.  It seems
> > > > > like having multiple Rails processes pointed at the same app
> > > > > directory could be thorny.  Does anybody know about that?
>
> > > > > The other alternative would be to set up the app-rails roles to
> > > > > check out the app from Git each time they launch.  Then, when we
> > > > > push a new version of the app, we push to each app-rails server and
> > > > > restart.  Are there any big benefits to doing it this way?
>
> > > > > Any feedback on this would be appreciated.
>
> > > > > Thanks,
> > > > > Jeremy
>
>

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"scalr-discuss" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/scalr-discuss?hl=en
-~----------~----~----~----~------~----~------~--~---
