On Mon, Mar 28, 2011 at 8:30 PM, Mohit Anchlia <mohitanch...@gmail.com> wrote:
> Apache 2:
>
> We use apache 2 and we have 2 data centers. Problem is that both data
> centers are active. So if a user uploads a file in, e.g., site A, that
> user can be directed to site B. Files are kept in sync asynchronously,
> and it could take as long as 1 hr to bring them in sync on the other
> data center. This presents a unique problem: how to provide user
> stickiness across data centers, with 2 different clusters of nodes,
> each cluster running with nodes only in that data center.
>
> I am sending this out in case people have solved it already or have
> some suggestions on how this can be done.

You don't provide enough information on how you keep both data centers
"active". My guess is that you are using DNS with multiple A records,
but who knows.

If you are using DNS, then my suggestion would be to allow users to visit:

www.myapp.com, which resolves to both data centers, and then, after
login, redirect them to:

dc1.myapp.com
- or -
dc2.myapp.com

The downside is that there is no automatic failover: the user would
have to go back to www.myapp.com and log back in to be redirected to
an accessible data center. For most cases, this would be acceptable.
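The zone records for that scheme might look something like this (the
IPs are hypothetical, taken from the documentation ranges; TTLs are up
to you, but a short TTL on www helps failover):

```zone
; www resolves to both data centers; dc1/dc2 pin the user to one.
www  300 IN A 203.0.113.10   ; data center 1 front end
www  300 IN A 198.51.100.10  ; data center 2 front end
dc1  300 IN A 203.0.113.10
dc2  300 IN A 198.51.100.10
```

Your login handler then issues a redirect to dc1.myapp.com or
dc2.myapp.com depending on which data center served the login.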

Another way to do it is to put an intelligent load balancer at each
data center. This load balancer would then send connections to its
backend servers. You can configure all the backend servers at both
data centers into each load balancer, and use sticky connections in
the load balancer to send the user to the correct backend (perhaps at
the other data center).
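If httpd itself is your balancer, a minimal sketch with
mod_proxy_balancer would look like this (the hostnames and route IDs
are hypothetical; the ROUTEID cookie pattern is the standard
mod_proxy_balancer stickiness setup):

```apache
# Requires mod_proxy, mod_proxy_http, mod_proxy_balancer, mod_headers.
# Tag each response with the worker's route so the browser carries it back.
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

<Proxy "balancer://appcluster">
    # Local backends in lbset 0 (tried first)...
    BalancerMember "http://app1.dc1.example.com" route=dc1a
    BalancerMember "http://app2.dc1.example.com" route=dc1b
    # ...remote backends in lbset 1, used only if the locals are down
    # or the ROUTEID cookie points at them.
    BalancerMember "http://app1.dc2.example.com" route=dc2a lbset=1
    BalancerMember "http://app2.dc2.example.com" route=dc2b lbset=1
    ProxySet stickysession=ROUTEID
</Proxy>

ProxyPass        "/" "balancer://appcluster/"
ProxyPassReverse "/" "balancer://appcluster/"
```

The mirror-image config goes on the other data center's balancer.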

If you have good enough load balancing software, then in the case of
no stickiness you could prefer the local backend servers over the
remote ones, avoiding a long hop to the other data center. You can
also use the presence of a cookie to determine stickiness, and stick a
user to a group of backend servers only AFTER they upload something.
Something like this:

1. User hits your site.
2. They are routed to a random data center.
3. They have no association, and thus are routed to a local backend.
4. User keeps browsing.
5. User performs a POST request to your upload script, which writes a
cookie to their browser that expires in 1h.
6. Subsequent requests hit the load balancer and are routed to the
preferred backend (where the upload occurred).
7. One hour later, the user is released and free to "float" again.
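The steps above can be sketched in httpd with mod_rewrite on the DC1
balancer (the /upload path, the STICKY cookie name, and the cluster
names are hypothetical; the CO flag's lifetime is in minutes, so 60
gives the one-hour window):

```apache
# Requires mod_rewrite and mod_proxy_balancer; this is the DC1 side.
RewriteEngine On

# Step 5: a POST to the upload script sets a 60-minute cookie naming this DC.
RewriteCond %{REQUEST_METHOD} =POST
RewriteRule "^/upload" - [CO=STICKY:dc1:.myapp.com:60]

# Step 6: requests carrying the other DC's cookie are proxied there.
RewriteCond %{HTTP_COOKIE} STICKY=dc2
RewriteRule "^/(.*)" "balancer://dc2cluster/$1" [P]

# Steps 3 and 7: no cookie (or our own) means the user stays local.
RewriteRule "^/(.*)" "balancer://dc1cluster/$1" [P]
```

The DC2 balancer gets the same config with dc1 and dc2 swapped. Once
the cookie expires, the browser stops sending it and the user floats
back to whichever data center DNS hands them.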

This is probably the best solution, as a user will always take the
shortest path unless there is a reason not to (the cookie). It also
lets you control the stickiness at the application level, so you can
be selective. Failover occurs automatically, as the browser will
connect to a live data center; however, their data may not live there.
Such is life.

This is not even close to all the options available to you, but these
are the two simplest ones I could come up with without any specifics.

---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscr...@httpd.apache.org
   "   from the digest: users-digest-unsubscr...@httpd.apache.org
For additional commands, e-mail: users-h...@httpd.apache.org
