The problem here is that a lot of traffic is generated internally by servers
hosted by other projects within the same company, and that traffic now needs
to be load balanced. If we used geo-based routing, 70% of our traffic would be
stuck on one site. If all our clients were browser based, this would have been
easier.
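To illustrate the "hash across the servers" point raised later in this thread: most memcached clients pick exactly one server per key via consistent hashing, so there is no second copy to fail over to. Below is a minimal, hypothetical sketch of a ketama-style hash ring (the server names and replica count are made up for illustration; real clients such as libmemcached do this internally):

```python
import hashlib
from bisect import bisect

# Hypothetical server pool -- names are illustrative only.
SERVERS = ["cache-sf.example.com", "cache-va.example.com", "cache-ny.example.com"]

def _hash(value: str) -> int:
    """Stable 32-bit hash of a string (MD5-based, as many memcached clients use)."""
    return int(hashlib.md5(value.encode()).hexdigest()[:8], 16)

def build_ring(servers, replicas=100):
    """Place each server at `replicas` points on the hash ring."""
    return sorted((_hash(f"{s}#{i}"), s) for s in servers for i in range(replicas))

def server_for_key(ring, key):
    """Walk clockwise from the key's hash to the next server point on the ring."""
    points = [point for point, _ in ring]
    idx = bisect(points, _hash(key)) % len(ring)
    return ring[idx][1]

ring = build_ring(SERVERS)
# Each key lives on exactly one server; if that server is in the wrong
# datacenter for a user, the client still goes there -- nothing replicates it.
owner = server_for_key(ring, "session:user:42")
```

This is why geo-routing users while the cache pool spans sites leads to cross-site fetches: the key's owner is fixed by the hash, not by where the request landed.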

Is there more than one app using the memcache?
Is it a web host? What kind of service/server?



2011/4/5 Mohit Anchlia <mohitanch...@gmail.com>:
> BTW: Please do read my initial post again if you have time.
>
> On Tue, Apr 5, 2011 at 3:11 PM, Mohit Anchlia <mohitanch...@gmail.com> wrote:
>> I haven't seen any project that solves this use case. Do you know of any?
>>
>> On Tue, Apr 5, 2011 at 2:44 PM, Roberto Spadim <robe...@spadim.com.br> wrote:
>>> hummm, i think it's not innovative; there are some open projects that
>>> solve this, you should check before reinventing the wheel
>>>
>>> 2011/4/5 Mohit Anchlia <mohitanch...@gmail.com>:
>>>> Thanks everyone for replying. There is no easy solution for the
>>>> requirements being imposed upon us. Even though we have an OC3, this
>>>> still may not work, since memcached uses a hash-across-the-servers
>>>> architecture rather than a master/master architecture.
>>>>
>>>> I will have to come up with some other innovative idea to solve this
>>>> particular complex requirement.
>>>>
>>>> On Mon, Apr 4, 2011 at 11:23 PM, Dustin <dsalli...@gmail.com> wrote:
>>>>>
>>>>> On Apr 4, 9:28 pm, Roberto Spadim <robe...@spadim.com.br> wrote:
>>>>>
>>>>>> i'm using repcached without problems; if one server dies, the other has
>>>>>> the same information, and when the other server comes back up it
>>>>>> automatically syncs with the 'master'.
>>>>>> it works well with the php memcache session handler,
>>>>>> but a good session handler could be a nosql database (membase), since
>>>>>> it's not a cache, it's a database...
>>>>>
>>>>>  Membase doesn't currently have cross datacenter master/master
>>>>> replication that can compensate for inconsistencies introduced by
>>>>> network outages or latency when a user is jumping back and forth
>>>>> between two data centers.  Anything that *can* is going to be much
>>>>> slower.
>>>>>
>>>>>  I think Brian's got it there.  Your best bet is to keep the users
>>>>> contained where networks are fast.  RTT between SF and VA is something
>>>>> like 20ms.  Replication doesn't help the situation.  You might as well
>>>>> pin the data for the user in one data center and just fetch it across
>>>>> the country every time (which is effectively what AP systems will do).
>>>>
>>>
>>>
>>>
>>> --
>>> Roberto Spadim
>>> Spadim Technology / SPAEmpresarial
>>>
>>
>



-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial
