The problem is DNS convergence: it takes time when your LB fails, even if you have low TTLs. If you use an EIP you avoid that.
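
To illustrate the point, here is a hypothetical BIND-style zone fragment (example.com and both addresses are placeholders, not anything from this thread):

    ; hypothetical zone fragment; example.com and the addresses are placeholders
    ; 203.0.113.10 is the Elastic IP: it moves to the replacement LB instance,
    ; so answers cached by resolvers stay valid even before TTLs expire.
    example.com.      300  IN  A      203.0.113.10
    ; without an EIP you point at the LB's AWS hostname instead, and a
    ; replacement LB gets a new address that cached resolvers won't see
    ; until their TTLs run out:
    www.example.com.  60   IN  CNAME  ec2-198-51-100-7.compute-1.amazonaws.com.
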
I've had LBs fail and come back, and because of DNS caching at the browser and at local resolvers some people were still unable to reach the site. I've never had that issue when using EIPs. Also, when you use EIPs you can safely create A records at the root of your DNS zone. And if you have other DNS that isn't under Scalr's control, you can use A records instead of CNAMEs, which are faster to resolve.

On Jan 3, 2013, at 8:11 AM, RichBos <[email protected]> wrote:

> Hi Donovan,
>
> Interesting. I'm wondering why you would use an EIP, as the nature of Scalr
> DNS negates the need for one? It seems like an unnecessary step?
>
> That said, load (and/or instance size) isn't really a concern for us at the
> design stage. We're just wondering if the Nginx LB is actually needed for a
> multiple-instance LAMP farm, since even though the Scalr wiki advises
> otherwise it's unclear how traffic would be distributed without one, unless,
> as Srini states, there may be some low-latency round-robin DNS baked into
> the LAMP role?
>
> Richard.
>
> On Thursday, 3 January 2013 16:02:45 UTC, Donovan wrote:
> Use an nginx LB with an Elastic IP; point DNS at the Elastic IP. I've found
> micros work fine for LBs, even with HTTPS. Then I monitor. If the micro gets
> overloaded I skip smalls and go straight to c1.mediums.
>
> On Jan 3, 2013, at 6:54 AM, RichBos <[email protected]> wrote:
>
>> Hi, I'm just wondering if an Nginx load balancer is required for an x2-role
>> LAMP farm? (As it would be using x2 app instances.) I would expect so, but
>> the wiki documentation says no. However, as I understand it, wouldn't the
>> DNS be better pointed at the Nginx LB rather than at the LAMP (app) 'role'?
>> I'm only asking because, if not, how would (web/app) traffic be distributed
>> evenly between the x2 LAMP instances? Or have I misunderstood things?
>>
>> http://wiki.scalr.com/display/docs/Mixed+images+-+LAMP
>>
>> Any advice appreciated.
>>
>> Richard.
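
For reference, a minimal nginx LB config in the spirit of what Donovan describes above. This is a sketch, not Scalr's generated config; lamp_app is an arbitrary name and the upstream addresses are placeholders for the two LAMP app instances:

    # goes inside the http block, e.g. in a file under /etc/nginx/conf.d/
    # placeholder private IPs for the two LAMP app instances
    upstream lamp_app {
        server 10.0.1.11:80;
        server 10.0.1.12:80;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            # nginx round-robins requests across the upstream servers by default
            proxy_pass http://lamp_app;
            # pass the original host and client address through to the app tier
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

Since nginx distributes requests round-robin across the upstream block by default, this is what actually spreads traffic evenly between the two app instances, which is the behavior Richard was asking about.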
