Re: XMemcached network layout exception

2020-07-24 Thread John Reilly
There is probably an XMemcached mailing list that would be better for this
question, but I suspect the cause is that mcrouter does not support the
binary protocol (which your client is trying to use).
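If that is the cause, switching the client to the text protocol should avoid the decode errors. A configuration sketch, assuming the XMemcached builder API (the class names below are from that client as I recall them; the mcrouter address is a placeholder, so verify both against your versions):

```java
// Sketch: point XMemcached at mcrouter using the text protocol.
// mcrouter speaks the ASCII protocol, so BinaryCommandFactory must not be used.
import net.rubyeye.xmemcached.MemcachedClient;
import net.rubyeye.xmemcached.XMemcachedClientBuilder;
import net.rubyeye.xmemcached.command.TextCommandFactory;
import net.rubyeye.xmemcached.utils.AddrUtil;

public class McrouterClientConfig {
    public static MemcachedClient build() throws Exception {
        XMemcachedClientBuilder builder = new XMemcachedClientBuilder(
                AddrUtil.getAddresses("mcrouter-host:5000")); // placeholder address
        // Text is the default, but set it explicitly so a later change to
        // BinaryCommandFactory doesn't silently break mcrouter compatibility.
        builder.setCommandFactory(new TextCommandFactory());
        return builder.build();
    }
}
```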

On Fri, Jul 24, 2020 at 3:34 PM saravanakumar rajendran <
saravanakumarr...@gmail.com> wrote:

> We have a cluster of memcached servers with mcrouter handling routing.
>
> 17:14:43.060 [Xmemcached-Reactor-0] ERROR
> n.r.xmemcached.impl.MemcachedHandler - XMemcached network layout exception
> net.rubyeye.xmemcached.exception.MemcachedDecodeException: Not a proper
> response
> at
> net.rubyeye.xmemcached.command.binary.BaseBinaryCommand.readMagicNumber(BaseBinaryCommand.java:311)
> at
> net.rubyeye.xmemcached.command.binary.BaseBinaryCommand.readHeader(BaseBinaryCommand.java:184)
> at
> net.rubyeye.xmemcached.command.binary.BaseBinaryCommand.decode(BaseBinaryCommand.java:120)
> at
> net.rubyeye.xmemcached.codec.MemcachedDecoder.decode0(MemcachedDecoder.java:55)
> at
> net.rubyeye.xmemcached.codec.MemcachedDecoder.decode(MemcachedDecoder.java:50)
> at
> com.google.code.yanf4j.nio.impl.NioTCPSession.decode(NioTCPSession.java:281)
> at
> com.google.code.yanf4j.nio.impl.NioTCPSession.decodeAndDispatch(NioTCPSession.java:223)
> at
> com.google.code.yanf4j.nio.impl.NioTCPSession.readFromBuffer(NioTCPSession.java:194)
> at
> com.google.code.yanf4j.nio.impl.AbstractNioSession.onRead(AbstractNioSession.java:184)
> at
> com.google.code.yanf4j.nio.impl.AbstractNioSession.onEvent(AbstractNioSession.java:324)
> at
> com.google.code.yanf4j.nio.impl.SocketChannelController.dispatchReadEvent(SocketChannelController.java:54)
> at
> com.google.code.yanf4j.nio.impl.NioController.onRead(NioController.java:150)
> at com.google.code.yanf4j.nio.impl.Reactor.dispatchEvent(Reactor.java:310)
> at com.google.code.yanf4j.nio.impl.Reactor.run(Reactor.java:177)
> 17:14:43.061 [Xmemcached-Reactor-0] ERROR
> c.g.c.y.core.impl.AbstractSession - Decode error
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/memcached/CAJ__CS-oGWuzDeq%2BpYEWoXZ16_1ZLj28j7FAs_s5ZS-PcdrkuQ%40mail.gmail.com.


Re: get value, but only if cas token is not the provided one

2019-09-23 Thread John Reilly
On Mon, Sep 23, 2019 at 1:48 PM dormando  wrote:

> Gotcha. Thanks a ton for reaching out and putting up with my questions :)
>

Not at all :) - thank you for all your work on memcached.
mget/mset/mdelete will certainly be a great addition.


> One other thing mget might get you here is an easy probabilistic hot
> cache. I really like probabilistic algorithms since they require low or
> zero coordination :)
>
> With mget, you could:
>
> `mget foo slhvfc` etc
>
> s = size, required kind of :)
> l = last access time (fixing this in the PR today)
> h = if the item's been hit before (also being fixed today)
> v = value
> f = client flags (if you need them)
> c = cas value, for your later comparison/check
>
> Then for your hot cache:
>
> if (hit && "last access is < 5 seconds" && random(1000) == 1) {
>   insert_into_localcache(obj);
> }
>
> to autotune, that 1000 becomes a variable you sync out periodically (or
> simply stick in the local cache for 60 seconds unconditionally),
> decreasing or increasing to match a target hit ratio on the local cache.
>

That is a really nice idea - I'll definitely have to explore it.  An
effective hot cache like this may reduce or eliminate the need for
mitigating specific keys.  We could incorporate size into it so that larger
values get a higher weighting.
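dormando's pseudocode above could look something like the following in a Java client wrapper. This is only a sketch of the admission rule; the class and method names are invented, and the last-access age is assumed to come from the mget `l` flag:

```java
// Probabilistic hot-cache admission: only items accessed within the last
// 5 seconds are candidates, and of those, roughly 1 in sampleRate is
// admitted. sampleRate is the tunable that would be synced out periodically.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

public class LocalHotCache {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();
    private volatile int sampleRate = 1000; // autotuned to hit a target ratio

    /** Called after a successful fetch; lastAccessAgeSecs from the 'l' flag. */
    public void maybeAdmit(String key, Object value, long lastAccessAgeSecs) {
        if (lastAccessAgeSecs < 5
                && ThreadLocalRandom.current().nextInt(sampleRate) == 0) {
            cache.put(key, value);
        }
    }

    public Object get(String key) {
        return cache.get(key);
    }

    public void setSampleRate(int rate) {
        sampleRate = rate;
    }
}
```

Weighting by the `s` (size) flag would just mean scaling `sampleRate` per item before the random draw.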

Thanks,
John



Re: get value, but only if cas token is not the provided one

2019-09-23 Thread John Reilly
I am on the caching team at Box and I'm thinking about this potentially as
part of a mechanism for hot key mitigation rather than for general use for
all gets.  Assuming I have a mechanism for detecting hot keys and notifying
clients about them, I believe the clients could mitigate the impact of hot
keys using a small local cache for these cache entries but I would still
want to ensure the clients are not using stale data from the local cache.
Using mget in either of the ways we talked about before would work for this
scenario, but having the "fetch if cas does not match" flag would avoid the
second round trip in the case that the value changes.

Cheers,
John

On Sun, Sep 22, 2019 at 12:20 AM dormando  wrote:

> Can you give me more background on your workload?
>
> Correct me if I'm wrong but your aim was to use a local cache but check
> the memcached server on every fetch to find out if it's valid?
>
> If your objects are less than... like a kilobyte that's actually slower
> than not having the local cache at all. Think you need to be in the 32k+
> range to start seeing benefits.
>
> Best is to time bound it, at any rate. Max one validation per second for
> hot keys that get X requests per second each.
>
> I'm going to draft out two possible mget flags to see if they'd not be
> overcomplex:
>
> - Return value if CAS match
> - Return value if CAS not match.
>
> ... the latter of which would do what you want, but lets just take a quick
> look at your workload first so you don't end up with something more
> complicated and slower :) I'm not sure I can get the flags to make sense
> but I'll try.
>
> On Sat, 21 Sep 2019, John Reilly wrote:
>
> > Hi dormando, Sorry for the delayed response.  I finally got a chance to
> read through https://github.com/memcached/memcached/pull/484 .  It sounds
> great.
> >
> > In my case, I was thinking about using a local cache to mitigate the
> network impact of hot keys rather than per-request performance reasons, but
> I was hoping to do that without
> > the clients potentially using stale data from their local cache.  It
> might still be nice to have a flag on mget to fetch the value if it does
> not match a provided cas, but in
> > the absence of this flag I think it would work fine using mget to only
> get the cas, and doing a full fetch on cas mismatch.
> >
> > Cheers,
> > John
> >
> >
> >
> > On Tue, Sep 17, 2019 at 5:43 PM dormando  wrote:
> >   Hey,
> >
> >   Check this out: https://github.com/memcached/memcached/pull/484
> >
> >   You can't quite do this with the way metaget is now, though it's
> feasible
> >   to add some "value if cas match on mget" flag. I'd have to think it
> >   through first.
> >
> >   For local caches though, unless your object is huge, simply
> waiting on a
> >   round trip to memcached to see if it's up to date removes most of
> the
> >   value of having the local cache. With a local cache you have to
> check it
> >   first, then check if it's fresh, then use it. It's likely the same
> speed
> >   to just not have the local cache at that point so you can avoid
> the CPU
> >   burn of the initial hash/test or trade it for CPU/network used in
> pulling
> >   in the value and having a simple system.
> >
> >   However! If you have a limited size "hot cache" and you can
> asynchronously
> >   test if they need to update, you could (say once per second or
> whatever
> >   makes sense for how hot your objects are), kick off an async test
> which
> >   runs mget with options for no-bump (optionally), no value, and cas
> (no
> >   flags, size, etc) for a lightweight response of just the cas value.
> >
> >   If the cas doesn't match, re-issue for a full fetch. This works
> okay for
> >   high frequency items since an update would only leave them out of
> sync
> >   briefly. Polling kind of sucks but you'd only do this when it
> would reduce
> >   the number of requests back to origin anyway :)
> >
> >   I'm hoping to get metaget in mainline ASAP. Been hunting around for
> >   feedback :) Should be finishing up the code very soon, with merge
> once a
> >   bit more confident.
> >
> >   On Tue, 17 Sep 2019, John Reilly wrote:
> >
> > Hi all, I was just thinking it would be great to be able to cache
> the most used items in a local cache on the client side and I think this
> would be possible if
> >   there was a way
> >   > for the client to s

Re: get value, but only if cas token is not the provided one

2019-09-22 Thread John Reilly
Hi dormando,
Sorry for the delayed response.  I finally got a chance to read through
https://github.com/memcached/memcached/pull/484 .  It sounds great.

In my case, I was thinking about using a local cache to mitigate the
network impact of hot keys rather than per-request performance reasons, but
I was hoping to do that without the clients potentially using stale data
from their local cache.  It might still be nice to have a flag on mget to
fetch the value if it does not match a provided cas, but in the absence of
this flag I think it would work fine using mget to only get the cas, and
doing a full fetch on cas mismatch.
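The flow described here, fetch just the CAS token and pay for the second round trip only when it differs, can be sketched against a stand-in store interface. RemoteStore, Entry, and the method names are illustrative, not any real client's API:

```java
// CAS-validated local cache: one lightweight round trip (cas only) on every
// get, and a full value fetch only when the remote cas no longer matches
// the locally cached copy.
import java.util.HashMap;
import java.util.Map;

public class CasValidatingCache {
    public interface RemoteStore {
        long casOf(String key);   // cheap: cas token only (e.g. mget with just 'c')
        Entry fetch(String key);  // full fetch: value plus cas
    }

    public static final class Entry {
        public final Object value;
        public final long cas;
        public Entry(Object value, long cas) { this.value = value; this.cas = cas; }
    }

    private final Map<String, Entry> local = new HashMap<>();
    private final RemoteStore remote;

    public CasValidatingCache(RemoteStore remote) { this.remote = remote; }

    public Object get(String key) {
        Entry cached = local.get(key);
        if (cached != null && remote.casOf(key) == cached.cas) {
            return cached.value;           // still fresh: no value transferred
        }
        Entry fresh = remote.fetch(key);   // second round trip only on mismatch
        local.put(key, fresh);
        return fresh.value;
    }
}
```

The proposed "fetch if cas does not match" flag would collapse the two remote calls on the mismatch path into one.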

Cheers,
John



On Tue, Sep 17, 2019 at 5:43 PM dormando  wrote:

> Hey,
>
> Check this out: https://github.com/memcached/memcached/pull/484
>
> You can't quite do this with the way metaget is now, though it's feasible
> to add some "value if cas match on mget" flag. I'd have to think it
> through first.
>
> For local caches though, unless your object is huge, simply waiting on a
> round trip to memcached to see if it's up to date removes most of the
> value of having the local cache. With a local cache you have to check it
> first, then check if it's fresh, then use it. It's likely the same speed
> to just not have the local cache at that point so you can avoid the CPU
> burn of the initial hash/test or trade it for CPU/network used in pulling
> in the value and having a simple system.
>
> However! If you have a limited size "hot cache" and you can asynchronously
> test if they need to update, you could (say once per second or whatever
> makes sense for how hot your objects are), kick off an async test which
> runs mget with options for no-bump (optionally), no value, and cas (no
> flags, size, etc) for a lightweight response of just the cas value.
>
> If the cas doesn't match, re-issue for a full fetch. This works okay for
> high frequency items since an update would only leave them out of sync
> briefly. Polling kind of sucks but you'd only do this when it would reduce
> the number of requests back to origin anyway :)
>
> I'm hoping to get metaget in mainline ASAP. Been hunting around for
> feedback :) Should be finishing up the code very soon, with merge once a
> bit more confident.
>
> On Tue, 17 Sep 2019, John Reilly wrote:
>
> > Hi all, I was just thinking it would be great to be able to cache the
> most used items in a local cache on the client side and I think this would
> be possible if there was a way
> > for the client to send a request to get a key, but only if the cas value
> is not the same as the cas token of the value I already know about locally
> in the client.  I don't think
> > this is possible with either protocol today, but would be happy if told
> otherwise :)
> >
> > Also, can anyone think of a reason why this would not work - if it is
> not supported today, it might be a nice feature to add.
> >
> > Thanks,
> > John
> >



get value, but only if cas token is not the provided one

2019-09-17 Thread John Reilly
Hi all,
I was just thinking it would be great to be able to cache the most used
items in a local cache on the client side and I think this would be
possible if there was a way for the client to send a request to get a key,
but only if the cas value is not the same as the cas token of the value I
already know about locally in the client.  I don't think this is possible
with either protocol today, but would be happy if told otherwise :)

Also, can anyone think of a reason why this would not work - if it is not
supported today, it might be a nice feature to add.

Thanks,
John



Re: Apply log4j patch on spymemcached jars

2018-10-18 Thread John Reilly
You might have better luck on the spymemcached mailing list
spymemcac...@googlegroups.com (
https://groups.google.com/forum/#!forum/spymemcached)

The vulnerability appears to only impact log4j 2.x, not 1.2.x.

Regards,
John

On Wed, Oct 17, 2018 at 6:00 PM Deepthi Komatineni wrote:

>
> In our project we use spymemcached-2.11.1.jar, which uses Log4j 1.2.16
> 
>
> There is a security vulnerability observed in Apache Log4j 2.x before
> 2.8.2, when using the TCP socket server or UDP socket server to receive
> serialized log events from another application, a specially crafted binary
> payload can be sent that, when deserialised, can execute arbitrary code.
>
> How do I apply the Log4J security patch (
> https://www.cvedetails.com/cve/CVE-2017-5645/) on memcached jars? Would
> memcached do it or should I update the pom.xml in the memcached jar myself?
>
> Regards,
> Deepthi
>
>



Re: basic auth token support

2018-04-17 Thread John Reilly
Reload would be handy to have but not absolutely necessary.

For rotation, one would just set up their second token (the new one) at
some point in time. Any time after that clients can transition to the new
token.  Once all clients are transitioned to the new token, the original
token would then need to be removed on the memcached server.
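A sketch of what multiple-token support could look like on the server side. Nothing here is from memcached itself; TokenAuthenticator and its methods are invented for illustration. Accepting a set of tokens is exactly what makes rotation painless: add the new token, migrate clients, then remove the old one:

```java
// Auth check against several configured tokens at once. MessageDigest.isEqual
// is a time-constant byte comparison, and every token is checked even after a
// match so timing doesn't leak which token was valid.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

public class TokenAuthenticator {
    private final Set<String> tokens = new CopyOnWriteArraySet<>();

    public void addToken(String token)    { tokens.add(token); }
    public void removeToken(String token) { tokens.remove(token); }

    public boolean authenticate(String presented) {
        byte[] p = presented.getBytes(StandardCharsets.UTF_8);
        boolean ok = false;
        for (String t : tokens) {
            ok |= MessageDigest.isEqual(p, t.getBytes(StandardCharsets.UTF_8));
        }
        return ok;
    }
}
```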

Thanks,
John

On Mon, Apr 16, 2018 at 10:57 AM dormando <dorma...@rydia.net> wrote:

> Hey,
>
> Thanks for the feedback! That should be doable. I'm used to this being a
> pain with TLS ticket rotation/etc anyway. This'll probably end up
> requiring a reload mechanism but shouldn't be too messy, I guess?
>
> On Mon, 16 Apr 2018, John Reilly wrote:
>
> > Hi dormando, I would love to see this change.  One thing that would be
> great to have is support for multiple tokens for the purpose of key
> rotation.  If
> > there are roles, one could just assign 2 equivalent roles with different
> tokens, but in the absence of roles as you mentioned just having the
> ability to
> > define multiple tokens on the server level would work nicely.  This is
> an issue today with the redis password mechanism - once it is set, changing
> the
> > token across all clients and server at the same time is problematic.
> >
> > Of course, sasl already supports this so clients that want this
> capability can use sasl, but it would be nice to have it available in any
> new default
> > authentication mechanism.
> >
> > Thanks,
> > John
> >
> > On Wed, Apr 11, 2018 at 1:59 AM dormando <dorma...@rydia.net> wrote:
> >   Hey,
> >
> >   In the wake of all this exposed-internet fun, I want to do
> something I
> >   should've years ago; add support for a basic authentication token.
> >
> >   Currently, with binary protocol, you have the option of using
> SASL. This
> >   requires compiling against sasl, a client that both speaks binprot
> and
> >   sasl, and understand the sasl ecosystem enough to generate
> configurations,
> >   password files, hook it up to kerberos, or what have you. This is
> useful;
> >   I should also see if ascii can support it.
> >
> >   However, it's not simple. It can never be a default.
> >
> >   I propose to do more or less what redis does, except I'd call it a
> token
> >   instead of a password. Both ascii and binprot would support it.
> >
> >   There are two options I'm considering:
> >
> >   1) add a new command, "auth [token]", or "auth [length]\r\ntoken"
> >
> >   or:
> >
> >   2) if a connection is in an unauthenticated state, it will only
> accept a
> >   "set auth [etc]\r\ntoken" magic key.
> >
> >   It should be possible to extend this down the line if we want
> roles for
> >   tokens by just having multiple tokens on the server..
> >
> >   It would be passed by commandline (it would rewrite the string on
> start)
> >   and/or passed as a file to open and read on start. A restart would
> be
> >   required to change the token.
> >
> >   Plaintext only on both ends, no hashing. It should exist to help
> prevent
> >   accidents more than anything else. I will probably add a delay on
> failure
> >   to mitigate brute-force, but no other features.
> >
> >   The really hard part is adding support to clients, and perhaps in
> a few
> >   years distro's can start shipping with default or randomized auth
> tokens.
> >
> >   Open to feedback. Thanks!
> >   -Dormando
> >



Re: basic auth token support

2018-04-16 Thread John Reilly
Hi dormando,
I would love to see this change.  One thing that would be great to have is
support for multiple tokens for the purpose of key rotation.  If there are
roles, one could just assign 2 equivalent roles with different tokens, but
in the absence of roles as you mentioned just having the ability to define
multiple tokens on the server level would work nicely.  This is an issue
today with the redis password mechanism - once it is set, changing the
token across all clients and server at the same time is problematic.

Of course, sasl already supports this so clients that want this capability
can use sasl, but it would be nice to have it available in any new default
authentication mechanism.

Thanks,
John

On Wed, Apr 11, 2018 at 1:59 AM dormando  wrote:

> Hey,
>
> In the wake of all this exposed-internet fun, I want to do something I
> should've years ago; add support for a basic authentication token.
>
> Currently, with binary protocol, you have the option of using SASL. This
> requires compiling against sasl, a client that both speaks binprot and
> sasl, and understand the sasl ecosystem enough to generate configurations,
> password files, hook it up to kerberos, or what have you. This is useful;
> I should also see if ascii can support it.
>
> However, it's not simple. It can never be a default.
>
> I propose to do more or less what redis does, except I'd call it a token
> instead of a password. Both ascii and binprot would support it.
>
> There are two options I'm considering:
>
> 1) add a new command, "auth [token]", or "auth [length]\r\ntoken"
>
> or:
>
> 2) if a connection is in an unauthenticated state, it will only accept a
> "set auth [etc]\r\ntoken" magic key.
>
> It should be possible to extend this down the line if we want roles for
> tokens by just having multiple tokens on the server..
>
> It would be passed by commandline (it would rewrite the string on start)
> and/or passed as a file to open and read on start. A restart would be
> required to change the token.
>
> Plaintext only on both ends, no hashing. It should exist to help prevent
> accidents more than anything else. I will probably add a delay on failure
> to mitigate brute-force, but no other features.
>
> The really hard part is adding support to clients, and perhaps in a few
> years distro's can start shipping with default or randomized auth tokens.
>
> Open to feedback. Thanks!
> -Dormando
>



Re: REST API

2010-07-29 Thread John Reilly
You could easily develop an http-to-memcached proxy to allow this.  All the
partitioning logic could exist in the memcache client embedded in your
proxy.  This might make some sense because then you would not have to
implement the partitioning logic in your clients.  I would say the answer
to the question is no (memcached does not need to support http).
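As a sketch of that proxy idea, using only the JDK's built-in HttpServer. The map below stands in for the embedded memcached client that would do the real partitioning and fetching; class and context names are illustrative:

```java
// Minimal http-to-memcached proxy: GET [server]/mc/object/<id> returns the
// cached value with 200, or 404 on a miss. Swap cacheClient for a real
// memcached client to get partitioning without teaching every HTTP consumer
// the memcached protocol.
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ConcurrentHashMap;

public class McHttpProxy {
    static final ConcurrentHashMap<String, String> cacheClient = new ConcurrentHashMap<>();

    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/mc/object/", exchange -> {
            String key = exchange.getRequestURI().getPath()
                    .substring("/mc/object/".length());
            String value = cacheClient.get(key);
            if (value == null) {
                exchange.sendResponseHeaders(404, -1); // -1: no response body
            } else {
                byte[] body = value.getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            }
            exchange.close();
        });
        server.start();
        return server;
    }
}
```

The per-request HTTP overhead Gavin mentions further down the thread is the real cost of this approach.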


On Thu, Jul 29, 2010 at 6:54 AM, j.s. mammen mamm...@gmail.com wrote:

 Folks, let's not get bogged down by REST as defined by Roy Fielding in
 2000.

 My question was simple.
 Here it is again, rephrased.

 Do we need to implement a memcached layer whereby we can access the
 cached objects by using HTTP protocol. Here is an example of getting a
 cached object from a server
 GET [server]/mc/object/id1

 Hope the question is clearer now?

 On Jul 29, 4:30 pm, Henrik Schröder skro...@gmail.com wrote:
  I would assume he's talking about making memcached expose some sort of
  simple web service api over http.
 
  Although, you could argue that both the ascii protocol and binary
 protocol
  are RESTful; they sure seem to me to fit the definition pretty closely.
 
  /Henrik
 
 
 
  On Thu, Jul 29, 2010 at 12:56, Aaron Stone sodab...@gmail.com wrote:
   What's a ReST protocol? ReST is a model.
 
   On Wed, Jul 28, 2010 at 8:42 PM, jsm mamm...@gmail.com wrote:
What I meant was to add a REST protocol to memcached layer, just like
you have a binary protocol and ascii.
Its up to the user to decide which protocol to use when accessing
memcached objects.
Regards,
J.S.Mammen
 
On Jul 29, 1:49 am, Aaron Stone sodab...@gmail.com wrote:
On Wed, Jul 28, 2010 at 8:37 AM, jsm mamm...@gmail.com wrote:
 
 On Jul 28, 8:02 pm, Rajesh Nair rajesh.nair...@gmail.com wrote:
 Gavin,
 
 If you go by the strict sense of word, HTTP protocol is not a
   pre-requisite
 for REST service.
 It requires a protocol which supports linking entities through
 URIs.
It is
 very much possible to implement a RESTful service by coming up
 with
   own URI
 protocol for memcached messages
 
 something like :
 mc://memcached-cluster/messages/key
 
 and the transport layer can be pretty much the same TCP to not
 add
   any
 overhead.
 
 JSM,
 
 What is the value-add you are looking from the RESTful version of
 the
 memcached API?
 
  Basically, to be able to use it without binding to any particular
  language.
 
I read this as requesting memcached native support for structured
values (e.g. hashes, lists, etc.) -- is that what you meant?
 
Aaron
 
 Regards,
 Rajesh Nair
 
 On Wed, Jul 28, 2010 at 8:13 PM, Gavin M. Roy 
 g...@myyearbook.com
   wrote:
 
  Why add the HTTP protocol overhead?  REST/HTTP would add
 ~75Mbps of
  additional traffic at 100k gets per second by saying there's a
   rough 100
  byte overhead per request over the ASCII protocol.  I base the
 100
   bytes by
  the HTTP GET request, minimal request headers and minimal
 response
  headers. The binary protocol is very terse in comparison to the
   ASCII
  protocol.  In addition netcat or telnet works as good as curl
 for
   drop dead
  simplicity.  Don't get me wrong, it would be neat, but
 shouldn't be
  considered in moderately well used memcached environments.
 
  Regards,
 
  Gavin
 
  On Wed, Jul 28, 2010 at 8:43 AM, jsm mamm...@gmail.com
 wrote:
 
  Anyone writing or planning to write a REST API for memcached?
  If no such plan, I would be interested in writing a REST API.
  Any suggestions, comments welcome.



Re: memcached and tomcat

2009-11-20 Thread John Reilly
Hi,

java.util.concurrent.ConcurrentHashMap exists in 1.5 so it should run with
1.5 unless I missed something.  Are you sure that you are using a 1.5 jre?
Cheers,
John


On Fri, Nov 20, 2009 at 1:33 AM, Kitutech met...@gmail.com wrote:

 :( The application running in this Tomcat requires Java 1.5. Do you
 know of any other jars compiled for Java 1.5?


 On Nov 19, 5:34 pm, Dustin dsalli...@gmail.com wrote:
  On Nov 19, 1:31 am, Kitutech met...@gmail.com wrote:
 
   I use version jre1.5.0_15
 
It appears that de.javakaffee.web.msm.NodeAvailabilityCache requires
  java 1.6.  I don't know what
  de.javakaffee.web.msm.NodeAvailabilityCache is, but that's what your
  stack trace points to.
 
Do note that java 1.5 was EOL'd last month.



Re: memcached with java client VS java solutions like ehcache, oscache, jboss cache

2009-10-09 Thread John Reilly
Hi,
It doesn't have to be one or the other.  Sometimes it makes sense to use
both memcached and an in-process cache.  The in-process cache will be
faster, but you then have to think about memory usage and cache consistency.
 If you have some keys which are fetched more frequently than most, it may
make sense to try fetching from the in-process cache first and then from
memcached.
Personally I've used ehcache (with cache distribution via RMI) and
memcached, and the combination worked well for me - whether I used ehcache,
memcache or both together really depended on the use case.  This is
off-topic for this list, but regarding ehcache, I've tried using JGroups
distribution with ehcache but ran into problems with it; RMI with
multicast cache peer discovery is mature.  The RMI distribution gets
expensive if you have a large number of boxes though since each cache has to
talk to all its peer caches.
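The try-local-first pattern described above can be sketched like this. TwoTierCache and its names are illustrative: a bounded LRU map stands in for the in-process cache (ehcache in practice) and a function stands in for the memcached client, so both memory usage and the consistency trade-off stay visible:

```java
// Two-tier lookup: check the in-process cache first, fall through to the
// remote tier on a miss, and populate the local tier on the way back.
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

public class TwoTierCache {
    private final Map<String, Object> inProcess;
    private final Function<String, Object> memcachedLookup;

    public TwoTierCache(int localCapacity, Function<String, Object> memcachedLookup) {
        // Bounded, access-ordered LRU so local memory usage stays predictable.
        this.inProcess = new LinkedHashMap<String, Object>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
                return size() > localCapacity;
            }
        };
        this.memcachedLookup = memcachedLookup;
    }

    public Object get(String key) {
        Object value = inProcess.get(key);       // fast path: no network
        if (value == null) {
            value = memcachedLookup.apply(key);  // slow path: network hop
            if (value != null) {
                inProcess.put(key, value);
            }
        }
        return value;
    }
}
```

The consistency caveat from the text applies: entries in the local tier can go stale, so a real deployment needs a TTL or an invalidation scheme on the in-process map.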

Hope that helps.

Cheers,
John


On Fri, Oct 9, 2009 at 7:24 AM, samr snrs...@gmail.com wrote:


 Basically when a user requests for an artifact, the backend makes
 several webservice calls to build the response.
 A few responses from these webservice calls will not change over a
 period and would like to cache this in case
 customer comes back again with the same request.


 On Oct 9, 2:05 am, Dustin dsalli...@gmail.com wrote:
  On Oct 8, 11:30 pm, samr snrs...@gmail.com wrote:
 
   memcached is in C and hence would be faster.
 
C doesn't necessarily mean faster.  Many of them have at least an in-
  process offering which will probably perform like a java.util.Map with
  a mutex (with maybe a bit more overhead) when running in a single-node
  configuration.
 
That is to say, not running a C program will be faster than running
  a C program.
 
My java client (http://code.google.com/p/spymemcached/) offers a
  java.util.Map interface to memcached which will give you an easy kv
  store while allowing you to scale your memory independently of your
  JVMs.  (though, I tend to write more to the memcached interface and
  not the Map interface).
 
There are trade-offs.
 
   On feature wise the java solutions seem to be offering a lot.
 
A piece of software with a long list of features means that the
  features that you care about make up a smaller percentage of the
  product.  That doesn't necessarily speak to quality, but it at the
  very least makes me care less about choosing a product based on
  features going too far beyond my own requirements.
 
   Please suggest what should be considered in choosing the right option.
 
Based on the requirements you've stated, I'd suggest stating more
  requirements.  :)



Re: best memcached java client

2009-10-07 Thread John Reilly
+1

On Wed, Oct 7, 2009 at 1:23 PM, Nelz nelz9...@gmail.com wrote:


 As a user of Dustin's client, I corroborate what he said. It really is
 the best Java client around.

 - Nelz

 On Wed, Oct 7, 2009 at 11:46, Dustin dsalli...@gmail.com wrote:
 
 
  On Oct 7, 11:36 am, sudo fong.y...@gmail.com wrote:
  Which java client is most stable for working with memcached?
 
   Well, I'd have to suggest mine.  I maintain really high test
  coverage, good performance, and keep up with server features (or stay
  a bit ahead of them).
 
   http://code.google.com/p/spymemcached/
 
   Please ask more specific questions here or in the spymemcached
  group:
 
   http://groups.google.com/group/spymemcached



Re: distributed memcached

2009-07-03 Thread John Reilly
If you are doing this in Java, you can use ehcache server.  Actually, your
clients don't need to be java based.  The last time I looked there were no
clients for doing the partitioning of requests to servers.  I think there is
a diagram and description of this scenario on the ehcache web site.  I'm not
using ehcache server myself though, so I can't say how good it is.  I prefer
a combination of ehcache and memcache.
Cheers,
John


On Fri, Jul 3, 2009 at 7:22 AM, sven bommezijn sven.bommez...@gmail.com wrote:


 Well, I do not want to kick against any religion, but in grid
 computing (like for example Java Spaces) memory is the new harddisk.
 Since memory gets cheaper and processors are becoming faster I think
 there IS some cause for a memcached like mechanism to be used in a
 distributed, replicated way.

 I am thinking of a configuration where different memcached servers
 maintain a copy of themselves on secondary (or even more) machines.
 In the unlikely event of one primary server going down the client just
 refers to a secondary server.

 Memcached entries might replicate themselves after a put or replace
 operation is completed.
 Particularly when Memcached is used for sessions it is unlikely that a
 get operation will follow in the instant that a put or replace
 operation has completed on the same entry.

 I realize that this takes up extra network bandwidth and cycles but it
 definitely offers an interesting mechanism for load-balanced
 webservers hitting the same cache to retrieve session data. And if I
 am not mistaken such an architecture would still be O(1), that is,
 it would still grow linearly with web traffic.

 Anyone interested in a discussion about this? seems a nice optional
 feature for Memcached.