I wasn't seeking to belittle your project... you have dismissed several load
balancing solutions simply because 70 percent of your traffic originates in
one location, and I was asking why you couldn't, worst-case scenario, just
have a 70/30 split in your load balancing if it's really not possible to
On 4/11/2011 1:04 PM, Mohit Anchlia wrote:
Yes, that's correct. After giving it some thought and seeing how
complicated this approach was becoming, we have decided to change our
API calls to include a user identifier in the query string. We use F5 LTM,
and we will persist user info at the F5 LTM with expiry set to 1 hr.
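Outside of F5 terms, the scheme above can be sketched as a pure function of the query-string identifier; the parameter name and site names below are illustrative, not anything from the thread:

```python
import hashlib
from urllib.parse import parse_qs, urlparse

SITES = ["dc-east", "dc-west"]  # hypothetical datacenters

def site_for_request(url, param="user"):
    """Derive a stable site choice from a user identifier in the query
    string, roughly what an LTM persistence rule keyed on that parameter
    would do."""
    user = parse_qs(urlparse(url).query).get(param, [None])[0]
    if user is None:
        return SITES[0]  # no identifier: fall back to a default site
    digest = hashlib.md5(user.encode()).hexdigest()
    return SITES[int(digest, 16) % len(SITES)]
```

Because the choice is a pure function of the identifier, every request for the same user lands on the same site without any shared state.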
On 4/11/2011 12:07 PM, Mohit Anchlia wrote:
Yes, it is a big deal for the business; otherwise I wouldn't be posting
here asking for suggestions. I respect everyone's input and am
thankful for it, but I need to see if it will work for us too :)
Agreed, it's not rocket science.
Also, is the 70 percent thing really, honestly, that huge of a deal? Send all
the traffic from the data center to one instance and the rest to the other.
It's not an even split, but it's not that far off.
Really, it seems to me like people are coming up with perfectly valid
solutions to your problem and
On Mon, Apr 11, 2011 at 9:56 AM, Adam Lee wrote:
Thanks for the post!
On Mon, Apr 11, 2011 at 8:51 AM, Roberto Spadim wrote:
> I was reading about this and remembered your email..
>
> http://www.clusterdb.com/mysql-cluster/scalabale-persistent-ha-nosql-memcache-storage-using-mysql-cluster/comment-page-1/#comment-30425
>
>
> 2011/4/5 Mohit Anchlia
Use CPAN Geo-IP and cache the info for 24 hours.
On Wed, Apr 6, 2011 at 11:49 AM, dormando wrote:
On Wed, 6 Apr 2011, Mohit Anchlia wrote:
Thanks! These points are on my list, but none of them are useful. The
reason, as I think I mentioned before, is that most of the servers
sending requests to us are hosted inside the company, but by a different
group. So geo-replication will not work in this case, since 70% of
requests come from one region
Sorry, those should have been private mails.
2011/4/5 dormando :
(First: Roberto, I swear, if you do that thing where you spam three e-mails
in a row one more time, I'm blocking you from the list. To be honest I do
that too, occasionally, but I limit myself to two responses and I try a lot
harder to be useful.)
Mohit: sorry for the confusion here. I hope you can see
Note that sessions and memcache have a problem... if your memcache runs
out of memory, some sessions are discarded.
You should use a NoSQL store (memcachedb, or membase).
2011/4/5 Roberto Spadim :
About repcache:
asynchronous data replication.
Why does my implementation work? I load balance per user:
all writes always go to only one server.
If a server goes down, all users fall back to the standalone master.
I'm using Linux servers (Arch Linux).
To sync, I start repcache and wait for it to get in sync (I'm using
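Roberto's routing scheme (one primary server per user, with a fallback to the standalone master when the primary is down) can be sketched as below; the server names and the health check are illustrative:

```python
import hashlib

SERVERS = ["cache-a:11211", "cache-b:11211"]  # hypothetical replicated pair
MASTER = "cache-a:11211"                      # standalone fallback master

def is_up(server, down=frozenset()):
    """Stand-in health check; a real one would attempt a TCP connect."""
    return server not in down

def server_for(user_id, down=frozenset()):
    """Pick one primary per user so all of that user's writes hit one server."""
    idx = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % len(SERVERS)
    primary = SERVERS[idx]
    if is_up(primary, down):
        return primary
    return MASTER  # all users fall back to the standalone master
```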
Hum, repcached doesn't have much documentation.
It's an older memcached version with a patch that adds replication;
on the command line you tell it which IP to replicate to.
When one server goes down and comes back up, it first syncs information
from the 'master'.
Any information changed on one server is sent to the other (I
Thanks! I am unable to find a design doc or some kind of "how it works"
doc on repcached. Do you have that info handy?
Also, how do I integrate it with Apache httpd?
Thanks again for the link.
On Tue, Apr 5, 2011 at 3:40 PM, Roberto Spadim wrote:
Hum, must the sync be done before users are allowed to use the service?
Try this: http://repcached.lab.klab.org/
Membase is a NoSQL store (it can save information to disk). I don't know if
it does replication like repcache, but there's some 'cluster' material in the docs:
http://techzone.couchbase.com/wiki/display/membase/Membase
One important thing: a user will be stuck on one site for one hour, which
will allow data replication to occur. So a user should not be load
balanced to the other site until the hour has expired.
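That one-hour stickiness can be sketched as a TTL on the user-to-site assignment. A plain dict stands in for whatever would really hold the assignment (memcached, or the F5); the names are illustrative:

```python
import time

STICKY_TTL = 3600  # pin a user to one site for one hour, per the requirement
_assignments = {}  # user_id -> (site, assigned_at)

def site_for(user_id, preferred_site, now=None):
    """Return the user's pinned site, pinning to preferred_site only when
    no unexpired assignment exists."""
    now = time.time() if now is None else now
    entry = _assignments.get(user_id)
    if entry is not None and now - entry[1] < STICKY_TTL:
        return entry[0]  # still pinned; ignore the balancer's preference
    _assignments[user_id] = (preferred_site, now)
    return preferred_site
```

Until the hour is up, a user who arrives at the "wrong" site can be redirected to the pinned one, giving the file replication time to finish.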
On Tue, Apr 5, 2011 at 3:30 PM, Mohit Anchlia wrote:
Currently there is no memcache. I was thinking of using memcache to
store per-site user session info and redirect a user who came to the wrong
site, but memcached does not seem to be meant for master/master or
master/slave either. Generally, industry practice in such use cases is
to use cookies, but we can
That is already in place, but the business requirement is to go
active/active, hence the need for a more complicated solution.
On Mon, Apr 4, 2011 at 3:32 PM, Brian Moon wrote:
> Use geo dns instead to stick users to a single datacenter and only fail over
> to the other data center when there is an issue.
BTW: Please do read my initial post again if you have time.
On Tue, Apr 5, 2011 at 3:11 PM, Mohit Anchlia wrote:
The problem here is that a lot of the traffic is generated internally by
servers hosted by other projects within the same company, and now this
needs to be load balanced. If we used geo, then 70% of our traffic would
be stuck on one site. If all our clients were browser based, it would
have been easier.
Bad design. Besides, it's not that easy :) If it was, I wouldn't have posted here.
On Mon, Apr 4, 2011 at 5:04 PM, Brian Moon wrote:
I haven't seen any project that solves this use case. Do you know of any?
On Tue, Apr 5, 2011 at 2:44 PM, Roberto Spadim wrote:
Since it's high latency, you will never get sub-millisecond, right? So
master/master will be > a millisecond for writes; reads are done locally.
Here the solution is master/slave.
2011/4/5 Brian Moon :
Products that do master/master across the WAN (or a high-latency LAN) and
can assure that a request to server A in datacenter 1 that sets a value
is available immediately (sub-millisecond) to server B in datacenter 2?
And that does not involve making a connection back across the WAN to get
the data
Hummm, I think it's not innovative; there are some open projects that
solve this. You should check before reinventing the wheel.
2011/4/5 Mohit Anchlia :
Thanks everyone for replying. There is no easy solution for the
requirements being imposed on us. Even though we have OC3, this still
may not work, since memcached's architecture seems to be hashing across
servers rather than a master/master type architecture.
I will have to come up with some other innov
On Apr 4, 9:28 pm, Roberto Spadim wrote:
> I'm using repcache without problems. If one server dies, the other has
> the same information, and when the other server comes up it automatically
> syncs with the 'master'.
> It works well with the PHP memcache session handler,
> but a good session handler could be a NoSQL store.
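For reference, the PHP memcache session handler mentioned above is typically enabled with a php.ini fragment along these lines (the host names are placeholders):

```ini
; pecl/memcache extension as the session store
session.save_handler = memcache
session.save_path = "tcp://cache-a:11211,tcp://cache-b:11211"
```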
Hum, a solution... not the memcache list's opinion, but a user's opinion:
use membase with replication on the server side, or repcache.
Replication on the client side will have async writes/reads and maybe
problems (like MySQL replication, for MySQL users).
Replication on the server side will be synchronous (like NDB cluster, for MySQL users).
I'm
It's not even a network problem, it's an application design problem.
Even if all you wanted to do was load balance a user session (user ID
& some other text-based info) across data centers, it's not that
trivial, particularly if you didn't plan for that up front. At the
bare minimum all you need i
That is not a bad design. That is drop dead easy.
You are asking this list and memcached to magically solve a problem
that is not realistically solvable with the current architecture of the
Internet at the scale you are likely to be running at.
Now, if you would like to invest in private OC3
You have full control over what resources your internal servers use.
Just assign them a datacenter and go.
Brian.
http://brian.moonspot.net
On 4/4/11 6:59 PM, Mohit Anchlia wrote:
Two solutions for replication...
Replication on the client side (memcache libs; I don't like this kind of
replication, since we can have 'dirty reads' if you use memcache as a
NoSQL database rather than a cache).
Replication on the server side (repcache, or membase; memcached won't
do it). This is the solution I like, bo
We are active/active as well. But we use geo DNS so that people only
get DNS for one data center. Having someone be able to hit any
datacenter in the world at any time without any temporary loss of
service is not reasonable. I don't care who you are. Even Google sticks
you to a geo-regional basis
Use geo dns instead to stick users to a single datacenter and only fail
over to the other data center when there is an issue. This will be much
less of a headache than trying to move cache data back and forth over
the net.
Brian.
http://brian.moonspot.net
On 4/4/11 3:03 PM, Mohit Anchlia wrote:
We have multiple data centers and are now planning to make this
application active/active, which means a user can be load balanced to
either site. A user generally uploads a file, and it should be accessible
on both sites. We expect it will take up to 1 hr to replicate files in
the worst-case scenario, and we are not able