I'm just scared of what the costs would look like with 2000 users on
websockets receiving very small packets of updated info (think a chat
server or something similar).  Maybe I should disable scaling and only
enable it once it's needed, assuming that can actually be done.  The
other possibility would be to split the websocket servers off from the
webserver itself and manually spin up new instances when CPU or network
load goes up.  I'm not sure how I would tell newly connected clients to
connect to the newly spun-up instances, though.
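
One rough idea for that discovery problem (untested sketch; the host
list, port, and counters here are made up for illustration): run a tiny
"discovery" endpoint that hands each new client the address of the
least-loaded websocket instance, so clients never hard-code a server.

  // discovery.js -- hypothetical sketch, not production code.
  // New clients GET /ws-host and connect to whatever URL comes back;
  // "spinning up" a new websocket instance just means adding it here.
  var http = require('http');

  // Hypothetical registry of websocket instances and their load.
  var instances = [
    { url: 'ws://ws1.example.com:8000', connections: 0 },
    { url: 'ws://ws2.example.com:8000', connections: 0 }
  ];

  http.createServer(function (req, res) {
    if (req.method === 'GET' && req.url === '/ws-host') {
      // Hand out the instance with the fewest connections.
      var target = instances[0];
      for (var i = 1; i < instances.length; i++) {
        if (instances[i].connections < target.connections) {
          target = instances[i];
        }
      }
      target.connections++; // optimistic; real servers would report back
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ url: target.url }));
    } else {
      res.writeHead(404);
      res.end();
    }
  }).listen(3000);

Existing clients would stay where they are; only new connections land
on the newly added instance.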


On Tue, Jan 14, 2014 at 9:12 PM, Grant Shipley <gship...@gmail.com> wrote:

> On Tue, Jan 14, 2014 at 8:30 PM, S. Dale Morrey <sdalemor...@gmail.com> wrote:
>
> > No, that actually sounds perfect, thank you.
> > I'm going to use the free tier for initial testing, then move to paid
> > when we go live.
> > Is there any way to scale based on latency/page load time rather than
> > by the number of connections?
> >
> >
> Not right now, but we are working on it.  A possible workaround, if you
> need this, is to disable auto-scaling and then run a job that checks
> latency from Pingdom or something similar.  Once your script sees that
> latency is bad, you can manually scale up by SSHing to the box and
> running a scale-up command.  A rough sketch of such a job follows.
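>
> Very rough, untested sketch (the URL, threshold, and scale-up command
> below are placeholders, not real product names):
>
>   // latency-watch.js -- hypothetical; ES5 so it runs on node 0.10.
>   // Probe the app once a minute; shell out to a placeholder
>   // scale-up script when responses get slow.
>   var execFile = require('child_process').execFile;
>   var http = require('http');
>
>   var TARGET = 'http://myapp.example.com/'; // placeholder URL
>   var THRESHOLD_MS = 2000;                  // placeholder threshold
>
>   function check() {
>     var start = Date.now();
>     http.get(TARGET, function (res) {
>       res.resume(); // drain the body so 'end' fires
>       res.on('end', function () {
>         if (Date.now() - start > THRESHOLD_MS) {
>           // Substitute whatever scale-up mechanism you actually use.
>           execFile('./scale-up.sh', function (err) {
>             if (err) console.error('scale-up failed:', err);
>           });
>         }
>       });
>     }).on('error', function (err) {
>       console.error('probe failed:', err);
>     });
>   }
>
>   setInterval(check, 60 * 1000);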
>
> --
> gs
>
> >
> > On Tue, Jan 14, 2014 at 8:01 PM, Grant Shipley <gship...@gmail.com> wrote:
> >
> > > On Tue, Jan 14, 2014 at 7:27 PM, S. Dale Morrey <sdalemor...@gmail.com> wrote:
> > >
> > > > OK, so this is not intended as flamebait or a troll or anything.
> > > > But earlier I mentioned that my site running on Drupal is
> > > > basically falling down under its own weight.
> > > >
> > > > I have an extremely limited budget up front.  I'm open to
> > > > completely dropping Drupal at this point and exploring other
> > > > options.
> > > >
> > > > One of the options I'm looking at is KeystoneJS.  It looks really
> > > > nice, and I figure if I go with it, I may as well go whole hog
> > > > and move providers as well.
> > > >
> > > > Keystone requires Node.js & Mongo.  For obvious reasons I would
> > > > greatly prefer to have a development environment and a production
> > > > environment.  Since OpenShift offers 3 servers, I can see myself
> > > > setting it up as "development: 1 box, all inclusive" and
> > > > "production: 2 boxes, 1 would be Node and 1 would be Mongo".
> > > >
> > > > I know we have someone from OpenShift on the list, so I figured I
> > > > would ask if that is feasible.  Also, is there any way to spin up
> > > > additional instances based on load, similar to AWS's Auto Scaling
> > > > feature?
> > > >
> > >
> > > That would be me.  Given that I work there, I will keep this as
> > > unbiased as possible, just tell you what it can and can't do, and
> > > let others chime in on the other areas.
> > >
> > > On the free tier, you can create 3 free gears (think containers).
> > > This doesn't really allow you to scale your application, because
> > > HAProxy would consume 1 gear, your app server 1 gear, and your
> > > database 1 gear.  Your app wouldn't have anywhere to scale up.  The
> > > free tier is set up so that it allows people to use the platform
> > > for smallish sites that don't need scaling.
> > >
> > > As far as development, staging, production, etc., you can do all
> > > of this with the free tier.  You would just create a separate gear
> > > for your dev instance, then add the remote git repository and push
> > > from that when you are ready for production.  You can also enable
> > > rollbacks for deployments, so if something goes wrong with a push,
> > > reverting is fairly easy.
> > >
> > > Also, on the free tier, by default, when you add a database to an
> > > application, the database is on the same gear as the application
> > > code.  In theory, you could create a scaled app on the free tier to
> > > separate your db from your app, but you would consume all of the
> > > free resources.
> > >
> > > As for the versions of the packages you are considering, the
> > > officially supported packages are nodejs 0.10 and mongodb 2.4.  You
> > > can, of course, create your own cartridges to run any
> > > version/binary that you want.
> > >
> > > On the bandwidth and disk space front: free gears get 512 MB of
> > > RAM, 1 GB of disk space, and unlimited bandwidth.  We do not
> > > monitor or cap bandwidth.
> > >
> > > If you had a paid account ($20.00 a month + $0.02 an hour for each
> > > gear above the three free ones), scaling works automatically based
> > > upon the number of concurrent HTTP requests your application has
> > > at any point in time.  I think the number is 20 concurrent
> > > connections, but I would have to double-check.  Once your
> > > application has that many connections, the platform adds another
> > > gear, rsyncs your code over from the head gear, deploys it, and
> > > then adds it to HAProxy.  It then continues to monitor to see if
> > > it can scale back down based upon the same metric.
> > >
> > > I hope that makes sense.
> > >
> > >
> > > > For the rest of the list, does structuring my environment this
> > > > way make sense?  Or would it be better to have the development
> > > > box talking to the production DB?
> > > > Also, has anyone actually used OpenShift to power a site that
> > > > experiences reasonably heavy loads?
> > > >
> > > > Thanks!
> > > >
> > >
> >
>

/*
PLUG: http://plug.org, #utah on irc.freenode.net
Unsubscribe: http://plug.org/mailman/options/plug
Don't fear the penguin.
*/
