Re: Issue 98 in memcached: RPM can't figure out how to increment versions.

2009-11-02 Thread memcached


Updates:
Status: Fixed

Comment #3 on issue 98 by dorma...@rydia.net: RPM can't figure out how to  
increment versions.

http://code.google.com/p/memcached/issues/detail?id=98

Meant to close this issue, sorry for the spam :(

--
You received this message because you are listed in the owner
or CC fields of this issue, or because you starred this issue.
You may adjust your issue notification preferences at:
http://code.google.com/hosting/settings


Re: Memcached 1.4.3-rc1

2009-11-02 Thread kroki

On 2 Nov, 14:14, dormando dorma...@rydia.net wrote:
 Next: *please* test it out if you can. We're scheduling 1.4.3 final to
 come out in six days.

Large multigets are broken in 1.4.3_rc1.  The fix is below:

From d8b4153bb65e8cbb685363b42ed7ff11ff49d4e0 Mon Sep 17 00:00:00 2001
From: Tomash Brechko tomash.brec...@gmail.com
Date: Mon, 2 Nov 2009 15:55:34 +0300
Subject: [PATCH] Fix test for large multiget.

---
 memcached.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/memcached.c b/memcached.c
index ed67eb0..3ab2053 100644
--- a/memcached.c
+++ b/memcached.c
@@ -3148,7 +3148,7 @@ static int try_read_command(conn *c) {
                     ++ptr;
                 }
 
-                if (strcmp(ptr, "get ") && strcmp(ptr, "gets ")) {
+                if (strncmp(ptr, "get ", 4) && strncmp(ptr, "gets ", 5)) {
                     conn_set_state(c, conn_closing);
                     return 1;
                 }
--
1.6.2.5
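
For context on why the one-character change matters: during a large
multiget the read buffer holds a partial command line like "get key1 key2
...", so an exact strcmp against "get " never matches and the connection
gets closed; only a prefix comparison does the right thing. A minimal
standalone illustration (not memcached code; the sample line is made up):

#include <stdio.h>
#include <string.h>

int main(void) {
    /* A partial command line as it might sit in the read buffer
     * during a large multiget (illustrative only). */
    const char *ptr = "get key1 key2 key3";

    /* Exact comparison: both calls return nonzero ("neither literal
     * matched"), so the old check closed the connection. Prints 1. */
    printf("%d\n", strcmp(ptr, "get ") && strcmp(ptr, "gets "));

    /* Prefix comparison: the first call returns 0 ("starts with get "),
     * so the check lets the connection keep reading. Prints 0. */
    printf("%d\n", strncmp(ptr, "get ", 4) && strncmp(ptr, "gets ", 5));
    return 0;
}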


Re: Memcached 1.4.3-rc1

2009-11-02 Thread kroki

On 2 Nov, 15:58, kroki tomash.brec...@gmail.com wrote:
 Large multigets are broken in 1.4.3_rc1.  The fix is below:

Also, if c->rcurr doesn't end in '\0' (I don't know if this is the
case), then we have to be sure that ptr[5] is not outside the buffer
(a malicious user could send lots of spaces).
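
A hedged sketch of one way to guard against that, assuming the code knows
how many bytes are actually in the buffer (the helper and the avail name
are made up, not existing memcached identifiers):

#include <string.h>

/* Prefix test that never reads past the 'avail' bytes known to be in
 * the buffer, so no terminating '\0' is required. */
static int has_prefix(const char *buf, size_t avail, const char *prefix) {
    size_t n = strlen(prefix);
    return avail >= n && memcmp(buf, prefix, n) == 0;
}

The check in the patch would then read something like
!has_prefix(ptr, avail, "get ") && !has_prefix(ptr, avail, "gets "),
with avail derived from how much of c->rcurr has actually been filled.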


Using memcached as a distributed file cache

2009-11-02 Thread Jay Paroline

I'm running this by you guys to make sure we're not trying something
completely insane. ;)

We already rely on memcached quite heavily to minimize load on our DB
with stunning success, but as a music streaming service, we also serve
up lots and lots of 5-6MB files, and right now we don't have a
distributed cache of any kind, just lots and lots of really fast
disks. Due to the nature of our content, we have some files that are
insanely popular, and a lot of long tail content that gets played
infrequently. I don't remember the exact numbers, but I'd guesstimate
that the top 50GB of our many TB of files accounts for 40-60% of our
streams on any given day.

What I'd love to do is get those popular files served from memory,
which should alleviate load on the disks considerably. Obviously the
file system cache does some of this already, but since it's not
distributed it uses the space a lot less efficiently than a
distributed cache would (say one popular file lives on 3 stream nodes,
it's going to be cached in memory 3 separate times instead of just
once).  We have multiple stream servers, obviously, and between them
we could probably scrounge up 50GB or more for memcached,
theoretically removing the disk load for all of the most popular
content.

My favorite memory cache is of course memcache, so I'm wondering if
this would be an appropriate use (with the slab size turned way up,
obviously). We're going to start doing some experiments with it, but
I'm wondering what the community thinks.
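
To make the idea concrete, here's a rough sketch of splitting a file into
item-sized chunks with libmemcached. It's purely illustrative: the key
naming, chunk size, and function are made up, and it assumes the default
~1MB item limit rather than a raised max item size (-I in 1.4.2+, if I
recall). Reads would walk the same keys with memcached_get and fall back
to disk on the first miss.

#include <libmemcached/memcached.h>
#include <stdio.h>
#include <string.h>

/* Stay under the default ~1MB item limit, leaving headroom for overhead. */
#define CHUNK_SIZE (1000 * 1024)

/* Hypothetical: store one file as track:<id>:chunk:<n> items. Returns the
 * number of chunks stored, or -1 so the caller can fall back to disk. */
static int cache_file_chunks(memcached_st *memc, const char *track_id,
                             const char *data, size_t len) {
    char key[128];
    size_t offset = 0;
    int chunk = 0;

    while (offset < len) {
        size_t n = len - offset < CHUNK_SIZE ? len - offset : CHUNK_SIZE;
        snprintf(key, sizeof(key), "track:%s:chunk:%d", track_id, chunk);
        memcached_return_t rc = memcached_set(memc, key, strlen(key),
                                              data + offset, n,
                                              (time_t)0, (uint32_t)0);
        if (rc != MEMCACHED_SUCCESS)
            return -1;
        offset += n;
        chunk++;
    }
    return chunk;
}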

Thanks,

Jay


Memcached 1.4.3-rc2

2009-11-02 Thread dormando

http://code.google.com/p/memcached/wiki/ReleaseNotes143rc2

Bug reported by Tomash, fixed by trond, reviewed by dustin, and now we
have a new tarball. Thanks, and please continue testing :)

-Dormando


Re: Using memcached as a distributed file cache

2009-11-02 Thread Jay Paroline

I'm not sure how well a reverse proxy would fit our needs, having
never used one before. The way we do streaming is a client sends a one-
time-use key to the stream server. The key is used to determine which
file should be streamed, and then the file is returned. The effect is
that no two requests are identical, and that code must be run for
every single request to verify the request and look up the appropriate
file. Is it possible or practical to use a reverse proxy in that way?

Jay

Adam Lee wrote:
 I'm guessing you might get better mileage out of using something written
 more for this purpose, e.g. squid set up as a reverse proxy.

 On Mon, Nov 2, 2009 at 4:35 PM, Jay Paroline boxmon...@gmail.com wrote:

 
  I'm running this by you guys to make sure we're not trying something
  completely insane. ;)
 
  We already rely on memcached quite heavily to minimize load on our DB
  with stunning success, but as a music streaming service, we also serve
  up lots and lots of 5-6MB files, and right now we don't have a
  distributed cache of any kind, just lots and lots of really fast
  disks. Due to the nature of our content, we have some files that are
  insanely popular, and a lot of long tail content that gets played
  infrequently. I don't remember the exact numbers, but I'd guesstimate
  that the top 50GB of our many TB of files accounts for 40-60% of our
  streams on any given day.
 
  What I'd love to do is get those popular files served from memory,
  which should alleviate load on the disks considerably. Obviously the
  file system cache does some of this already, but since it's not
  distributed it uses the space a lot less efficiently than a
  distributed cache would (say one popular file lives on 3 stream nodes,
  it's going to be cached in memory 3 separate times instead of just
  once).  We have multiple stream servers, obviously, and between them
  we could probably scrounge up 50GB or more for memcached,
  theoretically removing the disk load for all of the most popular
  content.
 
  My favorite memory cache is of course memcache, so I'm wondering if
  this would be an appropriate use (with the slab size turned way up,
  obviously). We're going to start doing some experiments with it, but
  I'm wondering what the community thinks.
 
  Thanks,
 
  Jay
 



 --
 awl


Re: Using memcached as a distributed file cache

2009-11-02 Thread dormando

You could put something like varnish in between that final step and your
client...

so key is pulled in, file is looked up, then file is fetched *through*
varnish. Of course I don't know offhand how much work it would be to make
your app deal with that fetch-through scenario.

Since these files are large, memcached probably isn't the best bet for
this.

On Mon, 2 Nov 2009, Jay Paroline wrote:


 I'm not sure how well a reverse proxy would fit our needs, having
 never used one before. The way we do streaming is a client sends a one-
 time-use key to the stream server. The key is used to determine which
 file should be streamed, and then the file is returned. The effect is
 that no two requests are identical, and that code must be run for
 every single request to verify the request and lookup the appropriate
 file. Is it possible or practical to use a reverse proxy in that way?

 Jay

 Adam Lee wrote:
  I'm guessing you might get better mileage out of using something written
  more for this purpose, e.g. squid set up as a reverse proxy.
 
  On Mon, Nov 2, 2009 at 4:35 PM, Jay Paroline boxmon...@gmail.com wrote:
 
  
   I'm running this by you guys to make sure we're not trying something
   completely insane. ;)
  
   We already rely on memcached quite heavily to minimize load on our DB
   with stunning success, but as a music streaming service, we also serve
   up lots and lots of 5-6MB files, and right now we don't have a
   distributed cache of any kind, just lots and lots of really fast
   disks. Due to the nature of our content, we have some files that are
   insanely popular, and a lot of long tail content that gets played
   infrequently. I don't remember the exact numbers, but I'd guesstimate
   that the top 50GB of our many TB of files accounts for 40-60% of our
   streams on any given day.
  
   What I'd love to do is get those popular files served from memory,
   which should alleviate load on the disks considerably. Obviously the
   file system cache does some of this already, but since it's not
   distributed it uses the space a lot less efficiently than a
   distributed cache would (say one popular file lives on 3 stream nodes,
   it's going to be cached in memory 3 separate times instead of just
   once).  We have multiple stream servers, obviously, and between them
   we could probably scrounge up 50GB or more for memcached,
   theoretically removing the disk load for all of the most popular
   content.
  
   My favorite memory cache is of course memcache, so I'm wondering if
   this would be an appropriate use (with the slab size turned way up,
   obviously). We're going to start doing some experiments with it, but
   I'm wondering what the community thinks.
  
   Thanks,
  
   Jay
  
 
 
 
  --
  awl



Re: Using memcached as a distributed file cache

2009-11-02 Thread Les Mikesell


dormando wrote:

You could put something like varnish inbetween that final step and your
client..

so key is pulled in, file is looked up, then file is fetched *through*
varnish. Of course I don't know offhand how much work it would be to make
your app deal with that fetch-through scenario.

Since these files are large memcached probably isn't the best bet for
this.


You could also redirect the client to the proxy/cache after computing 
the filename, but that exposes the name in a way that might be reusable.


--
  Les Mikesell
   lesmikes...@gmail.com



Re: Using memcached as a distributed file cache

2009-11-02 Thread dormando

 You could also redirect the client to the proxy/cache after computing the
 filename, but that exposes the name in a way that might be reusable.

perlbal is great for this... I think nginx might be able to do it too?
Internal reproxy. The server returns headers telling the load balancer
where to re-run the request. Mostly it's used for looking up mogilefs
addresses, but it could also be used to redirect files through caches and
such.
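
To make that concrete, a minimal sketch of the response the app might emit
after validating the one-time key (the URL, content type, and function
name are made up; X-REPROXY-URL is the header perlbal looks for, if memory
serves):

#include <stdio.h>

/* Hypothetical: emit headers only and let the balancer fetch the bytes
 * from internal storage itself, so the app never streams the file. */
static void send_reproxy_response(FILE *out, const char *resolved_url) {
    fprintf(out, "HTTP/1.0 200 OK\r\n");
    fprintf(out, "X-REPROXY-URL: %s\r\n", resolved_url);
    fprintf(out, "Content-Type: audio/mpeg\r\n");
    fprintf(out, "\r\n");
}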


Re: Using memcached as a distributed file cache

2009-11-02 Thread Mark Atwood



On Nov 2, 2009, at 1:35 PM, Jay Paroline wrote:


What I'd love to do is get those popular files served from memory,
which should alleviate load on the disks considerably. Obviously the
file system cache does some of this already, but since it's not
distributed it uses the space a lot less efficiently than a
distributed cache would (say one popular file lives on 3 stream nodes,
it's going to be cached in memory 3 separate times instead of just
once).  We have multiple stream servers, obviously, and between them
we could probably scrounge up 50GB or more for memcached,
theoretically removing the disk load for all of the most popular
content.


Take a look at the Apache module mod_memcached



--
Mark Atwood http://mark.atwood.name





Re: Using memcached as a distributed file cache

2009-11-02 Thread Vladimir Vuksan


Perhaps using tmpfs may be an option. One benefit of tmpfs is that you 
can create a filesystem larger than physical memory, and the virtual 
memory manager will swap unused items out to disk. You could then perhaps 
NFS-export the filesystem or do something else. Difficult to say without 
additional details.


Vladimir

Jay Paroline wrote:

We already rely on memcached quite heavily to minimize load on our DB
with stunning success, but as a music streaming service, we also serve
up lots and lots of 5-6MB files, and right now we don't have a
distributed cache of any kind, just lots and lots of really fast
disks. Due to the nature of our content, we have some files that are
insanely popular, and a lot of long tail content that gets played
infrequently. I don't remember the exact numbers, but I'd guesstimate
that the top 50GB of our many TB of files accounts for 40-60% of our
streams on any given day.

What I'd love to do is get those popular files served from memory,
which should alleviate load on the disks considerably. Obviously the
file system cache does some of this already, but since it's not
distributed it uses the space a lot less efficiently than a
distributed cache would (say one popular file lives on 3 stream nodes,
it's going to be cached in memory 3 separate times instead of just
once).  We have multiple stream servers, obviously, and between them
we could probably scrounge up 50GB or more for memcached,
theoretically removing the disk load for all of the most popular
content.