Of course you can.

If you want to enforce a synchronizing action across multiple servers, you
can keep a separate list of single-instance memcached connections (one
per server) and iterate over those connections to perform any valid
memcached operation on each.

The push-back you are getting from respondents seems to me to be because
what you are asking for falls outside the 'natural' use of memcached,
namely letting the library hash keys to servers transparently. Wanting
synchronization is, however, a perfectly natural design goal.

Now, if you are building a dependent sequence of operations, you will
hit nearly the same issues as the classical deadly-embrace /
multi-process / multi-threaded deadlock scenarios.

To avoid those problems while still meeting whatever your long-term goal
is, you should ensure that every client acquires each resource [ I'm
assuming a CAS on a key, changing its value ] in an identical server
order; otherwise you are in for costly, complicated retry / reset /
fallback scenarios.

So what I'm suggesting is the following:
serverlist = [ server1, server2 ]

serverpool = serverlist

Session acquire:
    for server in serverlist: server->cas( LOCK )

pool = memcached_pool( serverpool )

value = pool->get( key )
pool->set( key, value )
. . .


Lock release? I'm not sure whether your model requires an explicit
release, or whether timeouts would suffice.

Does this make sense?
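A rough sketch of that acquire-in-identical-order loop, in the Java of your client code. This is only an illustration, not XMemcached API: the `FakeServer` class is an in-memory stand-in, and its `putIfAbsent` plays the role of memcached's atomic add/CAS taking a lock.

```java
import java.util.*;
import java.util.concurrent.*;

public class OrderedLockDemo {
    // Stand-in for one memcached server's keyspace; putIfAbsent is the
    // analogue of an atomic add/cas used as a lock. Hypothetical class.
    static class FakeServer {
        final String name;
        final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();
        FakeServer(String name) { this.name = name; }
        boolean tryLock(String key, String owner) {
            return store.putIfAbsent(key, owner) == null;
        }
        void unlock(String key, String owner) {
            store.remove(key, owner);
        }
    }

    // Acquire the lock key on every server in a fixed, sorted order, so
    // every client uses the identical order. On any failure, back out in
    // reverse order and report failure rather than hold partial locks.
    static boolean acquireAll(List<FakeServer> servers, String key, String owner) {
        List<FakeServer> ordered = new ArrayList<>(servers);
        ordered.sort(Comparator.comparing((FakeServer s) -> s.name));
        List<FakeServer> held = new ArrayList<>();
        for (FakeServer s : ordered) {
            if (s.tryLock(key, owner)) {
                held.add(s);
            } else {
                for (int i = held.size() - 1; i >= 0; i--) held.get(i).unlock(key, owner);
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        FakeServer s1 = new FakeServer("10.1.1.1:11211");
        FakeServer s2 = new FakeServer("10.1.1.2:11211");
        List<FakeServer> pool = Arrays.asList(s2, s1); // deliberately unsorted

        System.out.println(acquireAll(pool, "LOCK", "clientA")); // true
        System.out.println(acquireAll(pool, "LOCK", "clientB")); // false: clientA holds it
        s1.unlock("LOCK", "clientA"); s2.unlock("LOCK", "clientA");
        System.out.println(acquireAll(pool, "LOCK", "clientB")); // true
    }
}
```

The sorting step is the important part: no matter what order the server list arrives in, every client acquires in the same canonical order.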

Now suppose the clients used some other [ possibly random ] order to
acquire the "lock":

client A : A->server2->cas( lock )
client B : B->server1->cas( lock )

Each client now holds one lock and next needs the one the other holds:

client A : A->server1->cas( lock )   [ blocked: B holds server1's lock ]
client B : B->server2->cas( lock )   [ blocked: A holds server2's lock ]

Deadlocked until a release and update.
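That interleaving can be reproduced deterministically. A minimal sketch, again using `ConcurrentHashMap.putIfAbsent` per server as a stand-in for memcached's atomic add (all names hypothetical):

```java
import java.util.concurrent.ConcurrentHashMap;

public class DeadlockOrderDemo {
    // Each map stands in for one memcached server's keyspace.
    static final ConcurrentHashMap<String, String> server1 = new ConcurrentHashMap<>();
    static final ConcurrentHashMap<String, String> server2 = new ConcurrentHashMap<>();

    // putIfAbsent == atomic add: succeeds only if nobody holds the lock.
    static boolean lock(ConcurrentHashMap<String, String> server, String owner) {
        return server.putIfAbsent("LOCK", owner) == null;
    }

    public static void main(String[] args) {
        // Step 1: clients acquire in opposite orders.
        boolean a2 = lock(server2, "A"); // client A starts with server2
        boolean b1 = lock(server1, "B"); // client B starts with server1
        // Step 2: each now needs the lock the other already holds.
        boolean a1 = lock(server1, "A"); // fails: B holds server1's lock
        boolean b2 = lock(server2, "B"); // fails: A holds server2's lock
        System.out.println(a2 && b1 && !a1 && !b2); // true: the deadlock shape
    }
}
```

With identical acquisition order, step 1 and step 2 cannot interleave this way: whichever client gets the first server's lock also gets a clear run at the second.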

And this isn't the last of the complications. How do you handle
notification when a lock-holding client dies?
Can your application tolerate retries? Can it use timeouts on the locks?
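If timeouts are acceptable, here is one hedged sketch of an expiring lock, mimicking memcached's add-with-expiry semantics with an in-memory map (every name below is made up for illustration). A dead client's lock simply ages out instead of blocking everyone forever.

```java
import java.util.concurrent.ConcurrentHashMap;

public class ExpiringLockDemo {
    // Lock record: who holds it and when it expires. Mimics storing a
    // value with an expiry, as memcached's add(key, exp, value) does.
    static final class Entry {
        final String owner;
        final long deadlineMillis;
        Entry(String owner, long deadlineMillis) {
            this.owner = owner;
            this.deadlineMillis = deadlineMillis;
        }
    }

    static final ConcurrentHashMap<String, Entry> store = new ConcurrentHashMap<>();

    // 'now' is passed in explicitly so the behavior is deterministic.
    static boolean tryLock(String key, String owner, long ttlMillis, long now) {
        Entry fresh = new Entry(owner, now + ttlMillis);
        Entry prev = store.putIfAbsent(key, fresh);
        if (prev == null) return true;              // lock was free
        if (prev.deadlineMillis <= now) {
            // Previous holder's lock expired: steal it atomically, but only
            // if no other client replaced it between our read and write.
            return store.replace(key, prev, fresh);
        }
        return false;                               // lock still live
    }

    public static void main(String[] args) {
        long t0 = 0;
        System.out.println(tryLock("LOCK", "A", 1000, t0));        // true
        System.out.println(tryLock("LOCK", "B", 1000, t0 + 500));  // false: A's lock live
        System.out.println(tryLock("LOCK", "B", 1000, t0 + 1500)); // true: A's lock expired
    }
}
```

Against real memcached you would get the expiry for free from the server, but the retry/steal question in the middle branch is exactly the kind of design decision your application has to make.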

So I'm just making assumptions about the larger context of your
question, but I think the answer is: yes, you can force individual
operations on each server; if you want to do this, though, there are
likely more issues lurking behind your design goals.




On Sat, Oct 27, 2012 at 2:27 AM, Kiran Kumar <krn1...@gmail.com> wrote:
> Hi ,
>
> I have Two  Memcache Servers set up for my Application as shown below
>
> String location = "10.1.1.1:11211 10.1.1.2:11211";
> MemcachedClientBuilder builder = new
> XMemcachedClientBuilder(AddrUtil.getAddresses(location));
>
> During the memcache client operation, the key can be stored in either of the
> servers mentioned above.
>
> When I observed the logs, the setup showed two memcache servers
> recognized; I verified with the code below.
>
>
>
> Collection<InetSocketAddress> addressList =
> MemcachedClient.getAvaliableServers();
> for (InetSocketAddress address : addressList) {
> logger.error("THE address is"+address)
> }
>
>
> THE address is 10.1.1.1
> THE address is 10.1.1.2
>
>
>
>
> My question is: once I know the server IP and port, is it possible to get
> the key from a specific server?
>
> Means something like this i need
>
> MemcachedClinet.getKey("MyKEY") // from server 1
> MemcachedClinet.getKey("MyKEY") // from server 2
>
>
> Please let me know if this is possible ??
>
>
