Hey,

I read the whole thread and thought about it for a bit... I'm not sure we
should do this. Especially not as an explicit solution to a security
problem with a shared hosting cluster. We can roadmap out a method of
multitenancy (Dustin's apparently done a proof of concept before, and I can
imagine it being pretty easy)... but more long term.

If you disable flush, there are still a hundred ways one customer can screw
another customer, or, if there's a site injection, gather information and
destroy things you'd really rather they didn't.

They can inject large faulty SETs to push things out of cache... or run
'stats cachedump' and fetch/delete random values from other customers... or
run 'stats sizes' in a loop and hang all your servers.

The -o discussion is good, but it's a separate discussion, and not related
to security fixes in a multitenancy situation. We should revisit that
as the engine work comes in, since we'd need a set of extended options we
could pass into the engines?

-Dormando

On Fri, 24 Jul 2009, Adrian Otto wrote:

>
> Dormando,
>
> Thanks for your reply. The use case is for using memcached from a hosting
> environment where multiple subscribers share the same source IP address
> because they run application code together on the same cluster of web servers.
> The clusters are large, typically in the hundreds of nodes range. In this
> arrangement it's possible for one subscriber to dump the cache belonging to
> another, even when they have their own memcached instance running.
>
> We are also aware of horror stories where app developers don't properly
> sanitize user input that gets sent to memcached, potentially resulting in the
> equivalent of an SQL injection. It's possible to flush the cache by exploiting
> such code to send a "flush_all" command, leading to rather serious database
> performance problems for busy sites when the cache is cold. Because
> we can't control the code that is run on our platform to protect us from this,
> we'd like a simple way to nip it in the bud right in memcached.
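>
> For illustration (hypothetical key name, and assuming the app pastes raw user
> input straight into the key it sends to memcached): if a "user id" field
> contains "123\r\nflush_all", the single request
>
> get userpref_123\r\nflush_all\r\n
>
> arrives at the server as two commands, "get userpref_123" followed by
> "flush_all", and the entire cache is emptied.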
>
> We recognize that we could implement a more elaborate method of partitioning
> access to memcached on a per-subscriber basis, but we just wanted something
> simple to let them use an individual memcached instance if they want to,
> accepting the security implications of the shared environment.
>
> The feature is optional, defaults to off, and it only adds a simple check of a
> boolean to bypass the code in normal configuration. Furthermore, flush_all
> should be infrequently used anyway, so the performance implication of the
> additional comparison should be moot. I appreciate the consideration.
>
> Thanks,
>
> Adrian
>
> On Jul 24, 2009, at 12:54 PM, dormando wrote:
>
> >
> > Hey,
> >
> > We've rejected a few similar patches in the past. Usually if folks need
> > this they have bigger problems... What is your particular use case?
> >
> > I could see this going in though. It squicks me out but I'm open to
> > opinions from the others :)
> >
> > -Dormando
> >
> > On Fri, 24 Jul 2009, Adrian Otto wrote:
> >
> > > Hi,
> > >
> > > I've attached a patch for an option flag, -F, to disable the
> > > flush_all command in memcached. It also includes:
> > >
> > > 1) A tiny tweak to testapp.c that allowed "make test" to pass
> > > 2) Fixed a minor bug in t/binary.t with a variable scope.
> > > 3) Fixed the memcached.spec file to include the protocol_binary.h and
> > > use the current version
> > >
> > > Please consider this for inclusion into future releases.
> > >
> > > It works like this:
> > >
> > > $ telnet localhost 11211
> > > Trying 127.0.0.1...
> > > Connected to localhost.localdomain (127.0.0.1).
> > > Escape character is '^]'.
> > > flush_all
> > > SERVER_ERROR flush_all command disabled
> > > quit
> > > Connection closed by foreign host.
> > > $
> > >
> > > I've also attached a SPEC file, adapted from DAG, that works with
> > > RHEL5 for 1.4.0. Please consider adding that as an additional file in
> > > the dist.
> > >
> > > Cheers,
> > >
> > > Adrian Otto
> > >
> > >
>
