Re: Invalid configuration when doing configure

2010-11-14 Thread dormando
It's `build-essential`, iirc.
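
For a fresh Debian install over SSH, something along these lines should pull
in the compiler toolchain (note the package name is singular, as above):

    sudo apt-get update
    sudo apt-get install build-essential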

On Sun, 14 Nov 2010, Steven Veneralle wrote:

 hey Alex. for some reason Debian doesn't find that package. I am literally 
 coming off a fresh install, plus I'm having to do everything in the CLI remotely

 On Sun, Nov 14, 2010 at 2:42 PM, Alex Miller alexmiller.can...@gmail.com 
 wrote:
   Hi Steven,
 A simple solution to try is installing the build-essentials package either 
 through something like Synaptic or by running apt-get install 
 build-essentials

 Let us know if that worked for you.

 Good luck!

  - Alex

 --Alex Miller
 skype | alexmiller.canada
 email | alexmiller.can...@gmail.com



 On Sun, Nov 14, 2010 at 9:48 AM, Steven Demoire stevenvenera...@gmail.com 
 wrote:
   I get the following on my VPS when trying to do a ./configure on my
   Debian server
   Invalid configuration `i686-pc-linux-': machine `i686-pc-linux' not
   recognized







Re: MemCached Evictions

2010-11-04 Thread dormando
reclaims are good, evictions are bad
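
As a quick check against a live instance (local host/port assumed), watch
both counters over time:

    echo stats | nc 127.0.0.1 11211 | grep -E 'reclaimed|evictions'

If 'reclaimed' climbs while 'evictions' stays flat, expired items are being
reused and nothing live is being thrown out.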

On Thu, 4 Nov 2010, Kate Wang wrote:

 We are experiencing high reclaims instead of evictions. Could a slab 
 distribution shift cause that as well?

 If a shifted slab distribution could cause a high eviction rate, what's the 
 best way to fix it or avoid it?

 Thanks!

 On Thu, Nov 4, 2010 at 5:52 PM, Mikael Fridh fri...@gmail.com wrote:
   On Nov 4, 6:32 pm, rahul_kcle vishnudee...@gmail.com wrote:
   
For the last 2 weeks I have been seeing evictions happening on our memcached
boxes even though there is a lot of memory left. Here are the stats
from memcached
   
STAT bytes_read 434627188758
STAT bytes_written 357821569260
STAT limit_maxbytes 23622320128
STAT accepting_conns 1
STAT listen_disabled_num 0
STAT threads 5
STAT conn_yields 0
STAT bytes 14573115225
STAT curr_items 1853350
STAT total_items 66439158
STAT evictions 7000591
   
bytes is much less than limit_maxbytes.

 See stats slabs, possibly your slab distribution profile has shifted
 over time.

 Mikael






Re: Memory capacity overflow of memcached

2010-10-20 Thread dormando
Uhh, try 128MB? or higher?

Are you adjusting the slab factor size? The minimum amount of memory
memcached uses is 1MB * the number of slab classes + some misc stuff (the
hash table, buffers, etc). 48M should be enough tho...

Just in case; if you're asking what happens if you set the memory limit to
128M and you store 256M of data: that should work, and you should see
'evictions' increasing in the 'stats' output, as it'll eject the oldest
data to make room for newer data. You can start it with -M I think if you
want to have it get pissed off once it's full.
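
A minimal way to see both behaviors side by side (ports and sizes here are
just illustrative):

    memcached -d -m 128 -p 11211 -u nobody      # default: evicts oldest items when full
    memcached -d -m 128 -p 11212 -u nobody -M   # -M: return errors instead of evicting
    echo stats | nc 127.0.0.1 11211 | grep evictions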

On Wed, 20 Oct 2010, yashushi wrote:

 Hi Dormando,

 We have tried the setting with 48MB and 64MB, but the same problem
 still occurs.
 Please kindly help to clarify.
 Thank you.

 -Yashushi


 On Oct 18, 4:50 PM, dormando dorma...@rydia.net wrote:
   Hi,
 
  My team tried to use memcached to provide PHP caching recently but we
  found out there is a memory overflow problem.
 
  Here are the steps we have tested:
  (1) Start memcached with the max memory set to 4MB.
  (2) Add 900KB of data from the web to memcached 3 times.
  (3) Add 700KB of data from the web to memcached.
  (4) Add 500KB of data from the web to memcached.
  (5) Add 300KB of data from the web to memcached.
  (6) New data can still be sent to memcached even when the new data size
  is greater than memcached's remaining space.
 
  We used 'stats' to check bytes, and it shows the memory used is over
  4MB.
  Could you help to clarify what is going on?
   Thank you.
 
  The lowest the memory limit can go is about 48M. It ignores anything
  lower.
 
  -Dormando



Re: next planned release of memcached

2010-10-19 Thread dormando
soon [tm]

On Tue, 19 Oct 2010, Pavel Kushnirchuk wrote:

 Folks,

 Does anybody know when the next planned release of memcached will be?
 I am especially interested in a release of the Win32 branch.




Re: Problems about two memcached java clients: spy and gwhalin

2010-10-17 Thread dormando
I think you're supposed to read to the point where it says "queues stuff
in memory before sending to the server" and extrapolate that writing to
the queue too fast is a bad thing.

On Sun, 17 Oct 2010, Shi Yu wrote:

 Kelvin.

 This is the year 2010 and computer programs should not be that fragile.
 And I believe my code is just a fast, simple toy problem trying to find
 out why I failed too many times in my real problem. Before I posted my
 problem, I checked and searched many documents; I read through the API
 and there is no clear instruction telling me what I should do to
 prevent such an error. I don't have time to bug an API on purpose; I
 am doing NLP POS tagging and I have exactly 6 million stemmed words to
 store. Fortunately or unluckily for me, that number exactly triggers the
 failure, so I had to spend 6 hours finding out the reason. Actually the spy
 client is the first API I tried; as I pointed out in my first post, it
 is fast, however, there is an error. I don't think that, for a normal
 end-product API, the memory leak issue should be considered by the
 user.

 Shi

 On Sun, Oct 17, 2010 at 1:11 AM, Kelvin Edmison kel...@kindsight.net wrote:
  Shi,
 
   Be careful when you start calling it a buggy API, especially as you
  present the quality of code that you did in your initial test case.  Your
  bugs-per-LOC was pretty high.
 
  However, it seems that you did in fact stumble into a bug in the Spy client,
  but only because you did no error checking at all.
 
  Dustin,
   while trying to re-create this problem and point out the various errors in
  his code, I found that, in his test case, if I did not call Future.get() to
  verify the result of the set, the spyMemcached client leaked memory.  Given
  that the Spymemcached wiki says that fire-and-forget is a valid mode of
  usage, this appears to be a bug.
 
  Here's my testcase against spymemcached-2.5.jar:
  'java -cp .:./memcached-2.5.jar FutureResultLeak true' leaks memory and will
  eventually die OOM.
  ' java -cp .:./memcached-2.5.jar FutureResultLeak false' does not leak and
  runs to completion.
 
  Here's the code. It's based on Shi's testcase so he and I now share the
  blame for code quality :)
 
  --
  import net.spy.memcached.*;
  import java.lang.*;
  import java.net.*;
  import java.util.concurrent.*;
 
 public class FutureResultLeak {

   public static void main(String[] args) throws Exception {
     boolean leakMemory = false;
     if (args.length >= 1) {
       leakMemory = Boolean.valueOf(args[0]);
     }
     System.out.println("Testcase will " + (leakMemory ? "leak memory" : "not leak memory"));
     MemcachedClient mc = new MemcachedClient(new InetSocketAddress("localhost", 11211));
     mc.flush();
     System.out.println("Memcached flushed ...");
     int count = 0;
     int logInterval = 10;
     int itemExpiryTime = 600;
     long intervalStartTime = System.currentTimeMillis();
     for (int i = 0; i < 600; i++) {
       String a = "String" + i;
       String b = "Value" + i;

       Future<Boolean> f = mc.add(a, itemExpiryTime, b);
       if (!leakMemory) {
         f.get();
       }
       count++;
       if (count % logInterval == 0) {
         long elapsed = System.currentTimeMillis() - intervalStartTime;
         // elapsed is in milliseconds, so scale to items per second
         double itemsPerSec = logInterval * 1000.0 / elapsed;
         System.out.println(count + " elements added in " + elapsed + " (" + itemsPerSec + " per sec).");
         intervalStartTime = System.currentTimeMillis();
       }
     }

     System.out.println("done " + count + " records inserted");
     mc.shutdown(60, TimeUnit.SECONDS);
   }
 }
  --
 
 
  Regards,
   Kelvin
 
 
 
 
  On 17/10/10 12:28 AM, Shi Yu shee...@gmail.com wrote:
 
 And I ran with the following java command on a 64-bit Unix machine
 which has 8G of memory. I separated the Map into three parts; it still
 failed. TBH I think there is some bug in the spymemcached input
 method. With Whalin's API there is no problem at all with only a 2G heap
 size, just a little bit slower, but that's definitely better than being
 stuck for 6 hours on a bugged API.
 
  java -Xms4G -Xmx4G -classpath ./lib/spymemcached-2.5.jar Memcaceload
 
  Here is the error output:
 
  2010-10-16 22:40:50.959 INFO net.spy.memcached.MemcachedConnection:
  Added {QA sa=ocuic32.research/192.168.136.36:11211, #Rops=0, #Wops=0,
  #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect
  queue
  Memchaced flushed ...
  Cache loader created ...
  2010-10-16 22:40:50.989 INFO net.spy.memcached.MemcachedConnection:
  Connection state changed for sun.nio.ch.selectionkeyi...@25fa1bb6
  map1 loaded
  map2 loaded
  java.lang.OutOfMemoryError: Java heap space
          at sun.nio.cs.UTF_8.newEncoder(UTF_8.java:51)
        at java.lang.StringCoding$StringEncoder.<init>(StringCoding.java:215)
        at java.lang.StringCoding$StringEncoder.<init>(StringCoding.java:207)
          at java.lang.StringCoding.encode(StringCoding.java:266)
          at 

Re: Is memcache add() atomic on a multithreaded memcached?

2010-10-14 Thread dormando
 our 50+ consistent hashing cluster is very reliable under normal
 operations; incr/decr, get, set, multiget, etc. are not a problem. If
 we had a problem with keys on wrong servers in the continuum, we
 would see more problems, which we currently do not.
 The cluster is always under relatively high load (the number of
 connections, for example, is very high due to 160+ webservers in the
 front). We are now seeing, in a very few cases, that this
 locking mechanism does not work: two different clients manage to lock
 with the same object. (If you want to prevent multiple inserts in a
 database on the same primary key, you have to explicitly set one key
 valid for all clients and not a key with unique hashes in it.) It works
 millions of times as expected (we are generating a large number of
 user-triggered database inserts (~60/sec.) with this construct). But a
 handful of locks do not work and show the behaviour described. So now
 my question again: is it conceivable (even if very implausible) that
 a multithreaded memcached does not provide a 100% atomic add()?

restart memcached with -t 1 and see if it stops happening. I already said
it's not possible.
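
For reference, the locking pattern in question relies on add() succeeding for
exactly one client; a sketch against the text protocol (key and TTL made up):

    printf 'add lock:job42 0 30 1\r\n1\r\n' | nc 127.0.0.1 11211   # first client: STORED
    printf 'add lock:job42 0 30 1\r\n1\r\n' | nc 127.0.0.1 11211   # second client: NOT_STORED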


Re: Is memcache add() atomic on a multithreaded memcached?

2010-10-14 Thread dormando

 Yeah, right. :-) Restarting all memd instances is not an option. Can
 you explain why it is not possible?

Because we've programmed the commands with the full intent to be atomic.
If it's not, there's a bug... there's an issue with incr/decr that's been
fixed upstream but we've never had a reported issue with add.

I'm not sure what you want to hear. They're supposed to be atomic, yes.
- that much is in the wiki too.


Re: Is memcache add() atomic on a multithreaded memcached?

2010-10-14 Thread dormando
 On 14 Oct., 10:31, dormando dorma...@rydia.net wrote:
   Yeah, right. :-) Restarting all memd instances is not an option. Can
   you explain why it is not possible?
 
  Because we've programmed the commands with the full intent to be atomic.
  If it's not, there's a bug... there's an issue with incr/decr that's been
  fixed upstream but we've never had a reported issue with add.
 
  I'm not sure what you want to hear. They're supposed to be atomic, yes.
  - that much is in the wiki too.

 I sure thought that you designed memcached to behave exactly the same with
 1 or many threads, and it's good to hear that there is no pending bug
 concerning atomicity of add() on multiple threads. The reason why someone
 posts such a thing on the mailing list is to hear the opinion of a dev
 who has all the insight. :-)
 So please understand my obstinate behaviour.
 We are planning to run some tests concerning this behaviour; maybe I
 can provide more detail in the future. But it will be hard to find
 proof of a bug in this scenario. For that we have to build
 a test scenario with multiple instances trying to do an add() on
 the same key at the exact same time on a consistent hashing cluster.

Can you give more info about exactly what the app is doing? What version
you're on as well? I can squint at it again and see if there's some minute
case.

Need to know exactly what you're doing though. How long the key tends to
live, how many processes are hammering the same key, what you're setting
the timeout to, etc.

Your behavior's only obstinate because you keep asking if we're sure
it's atomic. Yes, it's supposed to be atomic; if you think you've found a
bug let's talk about bug hunting :P


Re: memcached timeout error because of slow response

2010-10-11 Thread dormando

 well, I just used this tool to force enough requests :)

 ok, I extended it a bit to generate min/max/avg times of the
 processes, and to execute an explicit memcache get.
 If you like, you can adopt my patches for redistribution:

 http://www.maiers.de/memcache/mc_conn_tester.pl

 IIRC there was also the same failure while doing a "get foo" from the
 script, but I'll verify this today.
 We'll do a kernel update on the memcache server today, so I'll test
 again after that.

I can't tell from your reply, did the script note failures during the get
stage consistently?


Troubleshooting client timeouts

2010-10-04 Thread dormando
Hey,

A common issue folks have is clients giving inexplicable timeout
errors. You ask the client "what were you doing? why do you hate me so?"
but it won't answer you.

http://code.google.com/p/memcached/wiki/Timeouts

I wrote a little utility while helping a friend diagnose similar issues.
So here it is (public domain) along with a wiki page on how to use it.
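
For a quick manual spot-check before reaching for the utility, timing a
single get by hand works too (host, port and key are placeholders):

    time printf 'get some_key\r\nquit\r\n' | nc 10.0.0.5 11211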

-Dormando


Re: memcached-1.4.5 without multithread support (or with `-t 0')

2010-10-04 Thread dormando
We took it out for a reason, + if you run with -t 1 you won't really see
contention. 'Cuz it's running single threaded and using futexes under
linux. Those don't have much of a performance hit until you do contend.

I know some paper just came out which showed people using multiple
memcached instances and scaling some kernel locks, along with the whole
redis ONE INSTANCE PER CORE IS AWESOME GUYS thing.

But I'd really love it if you would prove that this is better, and prove
that there is no loss when running multiple instances. This is all
conjecture.

I'm slowly chipping away at the 1.6 branch and some lock scaling patches,
which feels a lot more productive than anecdotally naysaying progress.

memcached -t 4 will run 140,000 sets and 300,000+ gets per second on a box
of mine. An unrefined patch on an older version from trond gets that to
400,000 sets and 630,000 gets. I expect to get that to be a bit higher.

I assume you have some 10GE memcached instances pushing 5gbps+ of traffic
in order for this patch to be worth your time?

Or are all of your keys 1 byte and you're fetching 1 million of them per
second?

On Mon, 4 Oct 2010, tudorica wrote:

 The current memcached-1.4.5 version I downloaded appears to always be
 built with multithreaded support (unless something subtle is happening
 during configure that I haven't noticed).  Would it be OK if I
 submitted a patch that allows a single-threaded memcached build? Here
 is the rationale: instead of peppering the code with expensive user-
 space locking and events (e.g. pthread_mutex_lock, and the producer-
 consumers), why not just have the alternative to deploy N instances of
 plain singlethreaded memcached distinct/isolated processes, where N is
 the number of available CPUs (e.g. each instance on a different port)?
 Each such memcached process will utilize 1/Nth of the memory that a
 `memcached -t N' would have otherwise utilized, and there would be no
 user-space locking (unlike when memcached is launched with `-t 1'),
 i.e. all locking is performed by the in-kernel network stack when
 traffic is demuxed onto the N sockets.  Sure, this would mean that the
 clients will have to deal with more memcached instances (albeit
 virtual), but my impression is that this is already the norm (see the
 consistent hashing libraries like libketama), and proper hashing (in
 the client) to choose the target memcached server (ip:port) is already
 commonplace.  The only down-side I may envision is clients utilizing
 non-uniform hash functions to choose the target memcached server, but
 that's their problem.
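
 A sketch of that deployment, with illustrative ports (with today's binary
 this still means `-t 1', one worker thread, which is what the proposed
 patch would remove):

     for port in 11211 11212 11213 11214; do
         memcached -d -t 1 -m 1024 -p $port -u nobody
     done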

 Regards,
 T



Re: memcached-1.4.5 without multithread support (or with `-t 0')

2010-10-04 Thread dormando
Hey,

No actually... in -t 1 mode the only producer/consumer is between the
accept thread and the worker thread. Once a connection is open the socket
events are local to the thread. Persistent connections would remove almost
all of the overhead aside from the futexes existing.

There's also work we're presently doing to scale that lock, but it's not
really necessary as nobody hits that limit.

There is loss... There's a lot of loss to running multiple instances:

- Management overhead of running multiple instances instead of one (yes I
know blah blah blah go hire a junior guy and have him set it up right.
it's harder than it looks).

- Less efficient multigets. Natural multigets are more efficient if you
use fewer instances. Less natural multigets don't care nearly as much, but
can suffer if you accidentally cluster too much data on a single (too
small) instance. I *have* seen this happen.

- Socket overhead. This is a big deal if you're running 8x or more
instances on a single box. Now in order to fetch individual keys spread
about, both clients and servers need to manage *much, much* more socket
crap. This includes the DMA allocation overhead I think you noted in
your paper. If you watch /proc/slabinfo between maintaining 100 sockets or
800 sockets you'll see that there's a large memory loss, and the kernel
has to do a lot more work to manage all of that extra shuffling. Memcached
itself will lose memory to connection states and buffers.

You can tune it down with various /proc knobs but adding 8x+ connection
overhead is a very real loss.
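
To eyeball that kernel-side cost, watching the socket-related slab caches
while connection counts vary is illustrative (cache names differ by kernel):

    watch -d 'grep -E "TCP|sock" /proc/slabinfo'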

On the note of a performance test suite... I've been trying to fix one up
to be that sort of thing. I have Matt's memcachetest able to saturate
memcached instead of itself, and I've gotten the redis test to do some
damage: http://dormando.livejournal.com/525147.html

...but it needs more work over the next few weeks.

Also, finally, I'm asking about a real actual usage. Academically it's
interesting that memcached should be able to run 1million+ sets per second
off of a 48 core box, but reality always *always* pins performance to
other areas.

Namely:

1) memory overhead. the more crap you can stuff in memcached the faster
your app goes per dollar.
2) other very important scientific-style advancements in cache usage are
more fixated on benefits shown from the binary protocol. Utilizing
multiset and asynchronous get/set stacking can shave real milliseconds off
of serving real responses.

Except people aren't using binprot, and they're finding bugs when they do
(it seems to generate more packets? But I haven't gotten that far yet in
fixing the benchmarks).

I wish we could focus over there for a while.

Btw, your paper was very good. There's no question that a lot of the
kernel-level fixes in there will have real world benefits.

I just question the hyperfocus here on an aspect of a practical
application that is never the actual problem.

On Mon, 4 Oct 2010, Tudor Marian wrote:

 I agree that if you have futexes on your platform and you don't contend (i.e. 
 don't even have to call into the kernel) the overhead is small(er), however, 
 there is also the overhead between the
 producer-consumer contexts, namely the event base and the `-t 1' thread 
 (again, unless I misread the code, in which case I apologize).

 I am not sure what paper you mean, but my thesis did deal with the 
 scalability of sockets (raw sockets in particular) and how a single-producer 
 multiple-consumer approach doesn't scale nearly as well as a
 ``one instance per core'' approach. Sure enough the details are more gory and I won't 
 get into them.  I'll try to find some time to test and compare and get back 
 to you with some numbers.  By the way, I do not see
 how there would be ``loss when running multiple instances'' since your 
 traffic is disjoint, and if you do have ``loss'' then your OS kernel is 
 broken.

 As a matter of fact, I do have a 10Gbps machine, actually two of them, each 
 with a dual socket Xeon X5570 and 2x Myri-10G NICs that I was planning on 
 using for tests. Would you be so kind to tell me if
 there's any standard performance test suite for memcached that is typically 
 used? Or should I just write my own trivial client---in particular, as you 
 mentioned, I am interested in the scalability of
 memcached (-t 4 versus proper singlethreaded/multi-process) with respect to 
 the key and/or value size.

 Regards,
 T

 On Mon, Oct 4, 2010 at 4:21 PM, dormando dorma...@rydia.net wrote:
   We took it out for a reason, + if you run with -t 1 you won't really see
   contention. 'Cuz it's running single threaded and using futexes under
   linux. Those don't have much of a performance hit until you do contend.

   I know some paper just came out which showed people using multiple
   memcached instances and scaling some kernel locks, along with the whole
   redis ONE INSTANCE PER CORE IS AWESOME GUYS thing.

   But I'd really love it if you would prove that this is better

Re: bytes_written growing fast but not cmd_set

2010-09-29 Thread dormando

 I have a memcached server in a production environment that is showing
 a strange behavior :
 bytes_written grows fast (~1.2MB per second), but the number of
 cmd_set does not change!

 How is that possible?

The bytes_* values refer to bytes written/read to the network. Fetches/etc
will tick the counters.
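
A quick way to watch that relationship on a live server (local instance
assumed):

    echo stats | nc 127.0.0.1 11211 | grep -E 'cmd_get|cmd_set|bytes_read|bytes_written'

If cmd_get ticks while cmd_set stays flat, the bytes_written growth is just
fetch traffic.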


Re: Memcached make many SI (Software Interrupts)

2010-09-28 Thread dormando
It's a big topic, here's the stuff to google:

- cat /proc/interrupts - look for eth0, eth1, etc. If you have one
interrupt assigned to eth0, you have a single-queue NIC.

If you have many interrupts that look like eth0-0, eth0-1, etc, you have
a multi-queue NIC. These can have their interrupts spread out more.

Use either irqbalance (probably a bad idea) or echoing values into
/proc/irq/nn/smp_affinity (google for help with this), to spread out the
interrupts. You can then experiment with using `taskset` to bind memcached
to the same CPUs as the interrupts, or to different CPU's and see if the
throughput changes.
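
A sketch of the manual approach (the IRQ number and CPU mask below are made
up; check /proc/interrupts for yours):

    grep eth0 /proc/interrupts                  # find the NIC's IRQ number(s)
    echo 1 > /proc/irq/24/smp_affinity          # pin IRQ 24 to CPU0 (hex bitmask)
    taskset -c 1-3 memcached -d -m 1024 -t 3 -u nobody   # keep memcached off CPU0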

- look up linux sysctl network tuning

This tends to give you crap like this:
net.ipv4.ip_local_port_range = 9500 65536
net.core.rmem_max = 1048576
net.core.wmem_max = 1048576
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 43690 4194304
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_orphans = 65536
net.ipv4.tcp_max_syn_backlog = 16384
#net.ipv4.tcp_synack_retries = 2
net.core.netdev_budget=1000
net.ipv4.tcp_max_tw_buckets = 1512000

Which will vary depending on whether you're using persistent connections
or not (i.e. how high the turnover is). Don't blindly copy/paste this stuff.

- Read up on all the ethtool options available for your NIC. Ensure the
defaults work.

NICs are all configured for a balance between packet latency and
throughput. The more interrupts they coalesce within the driver, the
higher potential latency of packet return. This can be really hard to go
through and the settings will vary by every NIC you fiddle with. I've
managed to get differences of 40k-60k pps by tuning these values. Often
the latency doesn't get much worse.
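
For example (values are illustrative, not recommendations; supported options
differ per driver):

    ethtool -c eth0               # show current interrupt coalescing settings
    ethtool -C eth0 rx-usecs 50   # coalesce more: fewer interrupts, slightly more latency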

- Use a recent kernel.

On a particular piece of recent hardware I doubled packet throughput by
moving from 2.6.27 to 2.6.32. I was not able to push 2.6.18 hard without
having it drop networking.

- Use a more recent kernel

http://kernelnewbies.org/Linux_2_6_35#head-94daf753b96280181e79a71ca4bb7f7a423e302a

I haven't played with this much yet, but it looks like a BFD, especially
if you're stuck with single-queue NIC's.

- Get a better NIC.

10ge NICs have awesomesauce features for shoveling more packets around.
There're different levels of how awesome straight-gbit NICs are as well. I
like the high-end intels more than most of the broadcoms, for instance.

- Don't think running multiple instances of memcached will make much of a
difference.

Maybe run more threads though, or try pinning them to a set of CPU's or a
particular CPU.

On Tue, 28 Sep 2010, Jay Paroline wrote:

 We've run into this exact same issue and narrowed it down to the NIC,
 but don't really know where to go from there. I'm going to look up
 Dormando's suggestions but if anyone else has experience with this and
 can point us in the right direction, it would be greatly appreciated.

 Thanks,

 Jay

 On Sep 27, 2:34 pm, dormando dorma...@rydia.net wrote:
   We have a 2 x quad core server with 32 GB of RAM. If many clients
   connect to this server (only memcached runs on it) the first core runs
   at nearly 100% usage from si (software interrupts) and so some clients
   can't reach the server.
   Memcached currently runs with 4 threads on version 1.4.2. All the
   other cores are 70% idle, so I ask: is there a possibility to
   improve the performance?
 
  This is an issue with how your network interrupts are being routed, not
  with how memcached is being threaded.
 
  Wish I had some good links offhand for this, because it's a little obscure
  to deal with. In short; you'll want to balance your network interrupts
  across cores. Google for blog posts about smp_affinity for network cards
  and irqbalance (which poorly tries to automatically do this).
 
  Depending on how many NIC's you have and if it's multiqueued or not you'll
  have to tune it differently. Linux 2.6.35 has some features for extending
  the speed of single-queued NIC's (find the pages discussing it on
  kernelnewbies.org).



Re: Memcached make many SI (Software Interrupts)

2010-09-27 Thread dormando
 We have a 2 x quad core server with 32 GB of RAM. If many clients
 connect to this server (only memcached runs on it) the first core runs
 at nearly 100% usage from si (software interrupts) and so some clients
 can't reach the server.
 Memcached currently runs with 4 threads on version 1.4.2. All the
 other cores are 70% idle, so I ask: is there a possibility to
 improve the performance?


This is an issue with how your network interrupts are being routed, not
with how memcached is being threaded.

Wish I had some good links offhand for this, because it's a little obscure
to deal with. In short; you'll want to balance your network interrupts
across cores. Google for blog posts about smp_affinity for network cards
and irqbalance (which poorly tries to automatically do this).

Depending on how many NIC's you have and if it's multiqueued or not you'll
have to tune it differently. Linux 2.6.35 has some features for extending
the speed of single-queued NIC's (find the pages discussing it on
kernelnewbies.org).



Re: Memcached 1.4.5 - memcached-init and memcached.sysv licenses

2010-09-25 Thread dormando
Uhhhm.

memcached-init is from:

commit 4b1b1ae76ef6e78dd3f1d753931ac8051ae99e9a
Author: Brad Fitzpatrick b...@danga.com
Date:   Tue Dec 30 19:56:33 2003 +

I assume brad copied the skeleton files and modified it. IANAL so I'm not
sure how much of it had to be changed to be able to flip the license, but
it seems like a trivial script.

The sysv one is from:

commit 275f8c40705526ac4514d5dcff2cbf0311b540ac
Author: Paul Lindner plind...@hi5.com
Date:   Fri May 4 11:23:02 2007 +

add rpm spec file, new sysv init script

So uh. Paul? Care to comment?

and finally, may I ask why you're finding this issue? Do you work for a
distro or are just curious? Or you're trying to package it up and sell it?
:P

On Tue, 21 Sep 2010, Tomasz Zieliński wrote:

 Hello,

 I've noticed that the two scripts mentioned in the subject are
 probably taken from some external
 sources and licensed under different licenses than Memcached itself.

 Memcached-init clearly says that it is taken from Debian. It's similar
 to /etc/init.d/skeleton on my Ubuntu 9.04,
 which belongs to the initscripts package, and that package is licensed
 under GPL3.
 Not sure which license applies to memcached-init, but I suppose
 it's either GPL2 or GPL3.

 Regarding memcached.sysv, I found a similar script here:
 http://killersoft-yum.googlecode.com/svn-history/r24/trunk/repo/SOURCES/memcached.sysv
 - not sure who copied it from who, but in case killersoft-yum was
 first - it's licensed under MIT license (http://code.google.com/p/
 killersoft-yum/ - http://www.opensource.org/licenses/mit-license.php).

 Can someone responsible for this area comment on the topic?

 --
 Tomasz Zielinski



Re: (tcp 11211) failed with: Connection timed out (110)

2010-09-07 Thread dormando
Upgrade back to 1.4.5 and look for the 'listen_disabled_num' value in the
stats output. If the number is increasing, you're hitting maxconns.

If not, you're probably seeing packet loss, or have a firewall in the way
that's maxing out.
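
E.g. run this a few times and see whether the counter moves (IP taken from
the report below):

    echo stats | nc 10.3.230.15 11211 | grep listen_disabled_num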

On Tue, 7 Sep 2010, the_fonz wrote:

 Guys,

 We are seeing tons of these messages reported;

 Memcache::get(): Server 10.3.230.15 (tcp 11211) failed with:
 Connection timed out (110)

 They are reported on all six of our webservers. Memcache is working
 but lots of connections are timing out as reported with the above
 messages.

 We are running memcached 1.2.5 with php-pecl-memcache-2.2.3 (we also
 tried with memcache 1.4.5 but had the same errors). We have PHP 5.2.6
 running on x86 Hardware with Red Hat 5.5.

 We have six web servers all running Apache 2.2 on prefork mode.
 Prefork MaxClients is set to 192.

 Our memcached config looks like this;

 PORT=11211
 USER=memcached
 MAXCONN=2048
 CACHESIZE=2048
 OPTIONS=

 We are running memcache between webservers on a LAN, no Firewall or
 iptables being used.

 Memcache is used to cache output of scripts most of the time. It could
 be HTML or XML that is delivered to client.

 Our tcp values look like this;

 tcp_fin_timeout = 60
 tcp_max_orphans = 65536
 tcp_orphan_retries = 0
 tcp_keepalive_probes = 9
 tcp_keepalive_time = 7200

 I am totally out of ideas, I can't see any dropped packets on the
 network, I just don't know what else to check.

 Any ideas as I am pulling my hair out!

 Thanks




Re: memcached permissions

2010-08-27 Thread dormando
We're still working on merging down 1.6... but if this exists outside as
an engine, nothing on our side blocks you from using it for now.

I sort of wonder a little about outright pulling it into the tree, since
that implies we have to maintain it.

On Fri, 27 Aug 2010, KaiGai Kohei wrote:

 BTW, how about getting this patch included?

 (2010/08/16 14:38), KaiGai Kohei wrote:
  The attached patch is a revised version of memcached permissions.
 
  The 'calculate' permission has gone, and INCR/DECR now requires
  both the 'read' and 'write' permissions.
  It means we should switch the domain of the client process when we
  need special treatment of inaccessible items; something like
  trusted procedures.
 
  Rest of the patch is not changed.
 
  (2010/08/05 9:20), KaiGai Kohei wrote:
  (2010/08/04 10:25), Kelvin Edmison wrote:
  I'm still not sure how allowing a 'calculate' permission would be helpful 
  in
  this case.  Incr and decr allow for incrementing and decrementing by any
  amount.  There does not seem to be any real difference between that and
  'write' to me.
 
  INCR and DECR allow users to set a numerical value according to an arithmetic
  rule, although SET allows setting a fully arbitrary value.
  If the administrator wants to allow users to modify a value in a limited way,
  he can grant the 'calculate' permission instead of the 'write' permission.
 
  If we were talking about an RDBMS, it would be possible to switch a client's
  privileges during the execution of a certain stored procedure.
  However, memcached does not have such a feature, so we need to consider
  more fine-grained permissions.
 
  BTW, I noticed a different viewpoint, although I hadn't reached the idea 
  before.
  Since memcached does not have stored procedures, it might be a correct 
  answer
  that the client switches its privileges when it tries to modify read-only
  values. Like set-uid programs in unix, SELinux has a feature to switch
  the privileges of a process at execve(2) time. It allows a small number of 
  trusted
  programs to write values, but prevents others from modifying items.
 
  If a strict security partitioning is desired, then perhaps a single
  reference counter isn't feasible.  Would it not be better, from a security
  point of view, to have individual counters for the different clients?
  The clients would have 'create|read|write' permissions, and any overall
  administrative app could have read-only permissions on all those counters 
  to
  collect and sum (or otherwise report) them?
 
  In a strict security partitioning environment, it seems to me that what you 
  introduced
  is reasonable.
 
  Thanks,
 
  Kelvin
 
  On 02/08/10 1:45 AM, KaiGai Koheikai...@ak.jp.nec.comwrote:
 
  (2010/07/30 22:55), Kelvin Edmison wrote:
  While I haven't yet read the patch, I would like to understand why 
  there is
  a need for a Calculate permission.  Why would someone be granted 
  'calculate'
  permission but not 'write' permission?
 
  Kelvin
 
  The issue depends on the individual user's security requirements.
  If they want not to leak anything across the security domains,
  they should grant the 'calculate' permission to everybody who
  already has both the 'read' and 'write' permissions.
  It is equivalent to these permissions.
  However, it may lack flexibility in the configuration of access
  controls if users don't consider 'INCR' and 'DECR' a risk of
  information leaks/manipulations.
  For example, it is not a rare case that we don't want to expose
  individual clients' items, but want to control a shared reference
  counter.
 
  Ideally, I'd like to define more fine-grained permissions to
  distinguish security sensitive operations from others.
  But here is a limitation of the protocol: we cannot automatically
  determine what is security sensitive data and what is not.
 
  Thanks,
 
  On 30/07/10 12:49 AM, KaiGai Koheikai...@ak.jp.nec.com wrote:
 
  I'll mainly submit the patch and message to SELinux community,
  but please don't hesitate to comment anything from memcached
  community.
  
 
  The attached patch adds policies to support access controls
  on key-value items managed by memcached with SELinux engine.
 
  Nowadays, various kinds of key-value stores support the memcached
  compatible protocol as a de-facto standard. So, it will be a
  reasonable start to consider the protocol to control accesses
  from clients; typically web applications.
 

  http://github.com/memcached/memcached/blob/master/doc/protocol.txt
 
  1) new permissions
 
  This patch adds 'kv_item' class with the following permissions
   - create
   - getattr
   - setattr
   - remove
   - relabelfrom
   - relabelto
   - read
   - write
   - append
   - calculate
 
 Most of the permissions work as their names suggest.
 On the 'SET' or 'CAS' operations, a new item is created when there
 is no item with the same key. In this case, the 'create' permission shall
 be checked. Elsewhere, the 'write' permission shall be 

Re: memcached permissions

2010-08-27 Thread dormando
Ahhah, thanks :)

Was uh, scared for a moment that the initial thread had been lost in time.

On Fri, 27 Aug 2010, KaiGai wrote:

 Sorry for the confusion.
 I intended to talk to the maintainer of the standard security policy in
 SELinux.

 It is my job to maintain the selinux_engine.so module. :-)

 Thanks,

 On Aug 27, 6:02 PM, dormando dorma...@rydia.net wrote:
We're still working on merging down 1.6... but if this exists outside as
an engine, nothing on our side blocks you from using it for now.
 
  I sort of wonder a little about outright pulling it into the tree, since
  that implies we have to maintain it.
 
 
 
  On Fri, 27 Aug 2010, KaiGai Kohei wrote:
  BTW, how about getting this patch included?
 
   (2010/08/16 14:38), KaiGai Kohei wrote:
The attached patch is a revised version of memcached permissions.
 
The 'calculate' permission has gone, and INCR/DECR now requires
both the 'read' and 'write' permissions.
It means we should switch the domain of the client process when we
need special treatment of inaccessible items; something like
trusted procedures.
 
Rest of the patch is not changed.
 
(2010/08/05 9:20), KaiGai Kohei wrote:
(2010/08/04 10:25), Kelvin Edmison wrote:
I'm still not sure how allowing a 'calculate' permission would be 
helpful in
this case.  Incr and decr allow for incrementing and decrementing by 
any
amount.  There does not seem to be any real difference between that 
and
'write' to me.
 
INCR and DECR allow users to set a numerical value according to an
arithmetic rule, although SET allows setting a fully arbitrary value.
If the administrator wants to allow users to modify a value in a
limited way, he can grant the 'calculate' permission instead of the
'write' permission.
 
If we were talking about an RDBMS, it would be possible to switch a client's
privileges during the execution of a certain stored procedure.
However, memcached does not have such a feature, so we need to consider
more fine-grained permissions.
 
BTW, I noticed a different viewpoint, although I hadn't reached the idea
before.
Since memcached does not have stored procedures, it might be a correct
answer that the client switches its privileges when it tries to modify
read-only values. Like set-uid programs in unix, SELinux has a feature
to switch the privileges of a process at execve(2) time. It allows a
small number of trusted programs to write values, but prevents others
from modifying items.
 
If a strict security partitioning is desired, then perhaps a single
reference counter isn't feasible.  Would it not be better, from a 
security
point of view, to have individual counters for the different clients?
The clients would have 'create|read|write' permissions, and any 
overall
administrative app could have read-only permissions on all those 
counters to
collect and sum (or otherwise report) them?
 
In a strict security partitioning environment, it seems to me that what
you introduced is reasonable.
 
Thanks,
 
Kelvin
 
On 02/08/10 1:45 AM, KaiGai Koheikai...@ak.jp.nec.com    wrote:
 
(2010/07/30 22:55), Kelvin Edmison wrote:
While I haven't yet read the patch, I would like to understand why 
there is
a need for a Calculate permission.  Why would someone be granted 
'calculate'
permission but not 'write' permission?
 
Kelvin
 
The issue depends on the individual user's security requirements.
If they want not to leak anything across the security domains,
they should grant the 'calculate' permission to everybody who
already has both the 'read' and 'write' permissions.
It is equivalent to these permissions.
However, it may lack flexibility in the configuration of access
controls if users don't consider 'INCR' and 'DECR' a risk of
information leaks/manipulations.
For example, it is not a rare case that we don't want to expose
individual clients' items, but want to control a shared reference
counter.
 
Ideally, I'd like to define more fine-grained permissions to
distinguish security sensitive operations from others.
But here is a limitation of the protocol: we cannot automatically
determine what is security sensitive data and what is not.
 
Thanks,
 
On 30/07/10 12:49 AM, KaiGai Koheikai...@ak.jp.nec.com     
wrote:
 
I'll mainly submit the patch and message to SELinux community,
but please don't hesitate to comment anything from memcached
community.

 
The attached patch adds policies to support access controls
on key-value items managed by memcached with SELinux engine.
 
Nowadays, various kinds of key-value stores support the memcached
compatible protocol as a de-facto standard. So, it will be a
reasonable start to consider the protocol to control accesses
from clients; typically web applications.
 
     
http

Re: what about the max expires time, 30days?

2010-08-26 Thread Dormando
Expire time turns into a unixtime date after 30 days. It's in the protocol.txt 
but possibly overlooked in the wiki :(
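
A sketch of working with that cutoff (key/value made up): anything over 30
days (2592000 seconds) is treated as an absolute unixtime, so for e.g. 60
days you send now + 60 days rather than a relative value:

    printf 'set k 0 %d 5\r\nhello\r\n' $(( $(date +%s) + 60*86400 )) | nc 127.0.0.1 11211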

On Aug 26, 2010, at 8:19 PM, kedy211 kedy...@gmail.com wrote:

 It's basically the question I mentioned in the topic. And I can NOT find the 
 official answer in the wiki.
 
 Sometimes I set the expire time as '86400 * 24 * 4'; the memcached set 
 call returns true, but the get returns false.
 
 I have tested the max expire time; the result is 86400 * 24 * 1.25, in other 
 words: 30 days.
 
 So what about the max time on your server? Does it relate to the hardware 
 or to the memcached version?
 
 My OS  env Info:
 Dell PE R610
 Intel(R) Xeon(R) CPU E5504  @ 2.00GHz * 2
 16G memory
 Linux mdev 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 
 x86_64 GNU/Linux
 Memcached ver: 1.0.1
 
 
 
 
 Best regards.
 
 Kedy


Re: memcached race condition problem

2010-08-25 Thread dormando

 [Quick Fix]
 Set memcached to only listen to a single interface.

 example memcached setting:
  memcached -U 0 -u nobody -p 11222 -t 4 -m 16000 -C -c 1000 -l 192.168.0.1 -v


 [Reproducing]
 Use attack script (thanks to mala): http://gist.github.com/522741

 w/ -l interface restriction: we have seen over 70 hours of stability
  - yes, you will see "Too many open connections", but that's not an
 issue here

 w/o -l interface restriction: memcached quits w/ attack script


 Please give us some feedback on the attached patch.  This should fix the
 race condition we have experienced.

 Lastly, we would like to thank the numerous contributors on twitter who
 helped us nail this problem.  http://togetter.com/li/41702

 Shigeki

Excellent! Thanks for the detailed report and the patch. I'll see if I can
verify it here and get it applied asap. Might move the mutex to the
threads file, but the idea is sound.

Since you're using the clock event to re-enable connections, this means
that hitting maxconns will lock up memcached for a worst case minimum of 1
full second?

Perhaps a conditional could be used to wake up the main thread faster?

On a similar note (but wouldn't block this patch at all), I've been
wanting to change the default behavior in 1.6 to never close the accept()
routine, but instead instant-drop excessive connections. It makes more
sense for what memcached is, and I'd like to see an option in 1.4.x for
enabling such behavior to test with.

That avoids the issue in another way entirely, but I think we'll need both
fixes :P

-Dormando


Re: memcached race condition problem

2010-08-25 Thread dormando
Hey,

 But if the main thread waits for the condition variable, who drives
 the main event loop?

 Preparing a pipe fd for notifying about max conns, and watching it from
 the event loop, should be the way.

I should've been more purposefully vague; lets find the easy way to make
it wake up faster. Using the pipe fd sounds like the right idea there.

 +1. Never saw an application which calls listen(fd, 0) to drop new
 connections. I thought that if the OS kernel reaches the max backlog,
 it starts dropping the new packets, so there's no need to handle such
 a case in the application, is there?

(repeating from IRC for posterity): I like shoving an "ERROR: Too many
connections" down the pipe before closing it. Gives client authors the
chance to throw useful errors for the developer. Otherwise it's likely to
manifest as failed gets/sets with no errors.

-Dormando


Re: is there a way to tell if a memcached client is still alive / connected?

2010-08-18 Thread dormando
sto

On Wed, 18 Aug 2010, Chad wrote:

 I am trying to find a way to check if the memcached client is still
 alive or not, but it seems there is no public API for me to do
 so. Can someone clarify this?

 Thanks.
 Chad



Re: is there a way to tell if a memcached client is still alive / connected?

2010-08-18 Thread dormando


On Wed, 18 Aug 2010, Matt Ingenthron wrote:

 dormando wrote:
  sto
 

 I'm pretty sure he sent it only once.  I think this is a problem with Google's
 SMTP and MTAs if I recall correctly.  This happened on a majordomo based list
 I am on, and it died down after a little bit.  I can't immediately find the
 reference though.

 Does yelling at it help?  ;)


no :(


Re: eliminating the taking lock for cqi_freelist

2010-08-17 Thread dormando
Can you resubmit as a unified diff?

On Tue, 17 Aug 2010, ilnarb wrote:

 I suggest you eliminate taking the lock on cqi_freelist.
 To do that, we should do all work on cqi_freelist in one thread
 -- the dispatcher thread in memcached.
 I made some changes in the queue code, including improvements for cache
 line usage.
 It will work on any platform.
 Here is the patch file (diff -C3 memcached-1.4.5/thread.c new
 thread.c):

 *** thread.c.orig   Sat Apr  3 11:07:16 2010
 --- thread.c        Tue Aug 17 14:09:28 2010
 ***************
 *** 12,16 ****
   #include <pthread.h>

 ! #define ITEMS_PER_ALLOC 64

   /* An item in the connection queue. */
 --- 12,17 ----
   #include <pthread.h>

 ! #define CACHE_LINE_SIZE 64
 ! #define ITEMS_PER_ALLOC 256

   /* An item in the connection queue. */
 ***************
 *** 29,35 ****
   struct conn_queue {
       CQ_ITEM *head;
       CQ_ITEM *tail;
       pthread_mutex_t lock;
 !     pthread_cond_t  cond;
   };

 --- 30,40 ----
   struct conn_queue {
       CQ_ITEM *head;
 +     char pad0[CACHE_LINE_SIZE - sizeof(CQ_ITEM *)];
 +     CQ_ITEM *divider;
 +     char pad1[CACHE_LINE_SIZE - sizeof(CQ_ITEM *)];
       CQ_ITEM *tail;
 +     char pad2[CACHE_LINE_SIZE - sizeof(CQ_ITEM *)];
       pthread_mutex_t lock;
 !     char pad3[CACHE_LINE_SIZE - sizeof(pthread_mutex_t)];
   };

 ***************
 *** 45,49 ****
   /* Free list of CQ_ITEM structs */
   static CQ_ITEM *cqi_freelist;
 - static pthread_mutex_t cqi_freelist_lock;

   static LIBEVENT_DISPATCHER_THREAD dispatcher_thread;
 --- 50,53 ----
 ***************
 *** 65,68 ****
 --- 69,75 ----
   static void thread_libevent_process(int fd, short which, void *arg);

 + static CQ_ITEM *cqi_new(void);
 + static void cqi_free(CQ_ITEM *item);
 +
   /*
    * Initializes a connection queue.
 ***************
 *** 70,76 ****
   static void cq_init(CQ *cq) {
       pthread_mutex_init(&cq->lock, NULL);
 !     pthread_cond_init(&cq->cond, NULL);
 !     cq->head = NULL;
 !     cq->tail = NULL;
   }

 --- 77,81 ----
   static void cq_init(CQ *cq) {
       pthread_mutex_init(&cq->lock, NULL);
 !     cq->head = cq->divider = cq->tail = cqi_new();
   }

 ***************
 *** 78,96 ****
    * Looks for an item on a connection queue, but doesn't block if there isn't
    * one.
 !  * Returns the item, or NULL if no item is available
    */
 ! static CQ_ITEM *cq_pop(CQ *cq) {
 !     CQ_ITEM *item;

       pthread_mutex_lock(&cq->lock);
 !     item = cq->head;
 !     if (NULL != item) {
 !         cq->head = item->next;
 !         if (NULL == cq->head)
 !             cq->tail = NULL;
       }
       pthread_mutex_unlock(&cq->lock);

 !     return item;
   }

 --- 83,105 ----
    * Looks for an item on a connection queue, but doesn't block if there isn't
    * one.
 !  * Returns 1 if there is a new item, or 0 if no item is available
    */
 ! static int cq_pop(CQ *cq, CQ_ITEM *item) {
 !     int res = 0;
 !
 !     if (NULL == cq->divider->next)
 !         return 0;

       pthread_mutex_lock(&cq->lock);
 !     if (NULL != cq->divider->next) {
 !         *item = *cq->divider->next;
 !         res = 1;
 !         cq->divider = cq->divider->next;
       }
       pthread_mutex_unlock(&cq->lock);

 !     item->next = NULL;
 !
 !     return res;
   }

 ***************
 *** 102,112 ****

       pthread_mutex_lock(&cq->lock);
 !     if (NULL == cq->tail)
 !         cq->head = item;
 !     else
 !         cq->tail->next = item;
 !     cq->tail = item;
 !     pthread_cond_signal(&cq->cond);
       pthread_mutex_unlock(&cq->lock);
   }

 --- 111,124 ----

       pthread_mutex_lock(&cq->lock);
 !     cq->tail->next = item;
 !     cq->tail = cq->tail->next;
       pthread_mutex_unlock(&cq->lock);
 +
 +     while (cq->head != cq->divider)
 +     {
 +         CQ_ITEM *tmp = cq->head;
 +         cq->head = cq->head->next;
 +         cqi_free(tmp);
 +     }
   }

 ***************
 *** 116,125 ****
   static CQ_ITEM *cqi_new(void) {
       CQ_ITEM *item = NULL;
 -     pthread_mutex_lock(&cqi_freelist_lock);
       if (cqi_freelist) {
           item = cqi_freelist;
           cqi_freelist = item->next;
       }
 -     pthread_mutex_unlock(&cqi_freelist_lock);

       if (NULL == item) {
 --- 128,135 ----
 ***************
 *** 139,148 ****
           item[i - 1].next = &item[i];

 -     pthread_mutex_lock(&cqi_freelist_lock);
       item[ITEMS_PER_ALLOC - 1].next = cqi_freelist;
       cqi_freelist = &item[1];
 -     pthread_mutex_unlock(&cqi_freelist_lock);
       }

       return item;
   }
 --- 149,158 ----
           item[i - 1].next = &item[i];

       item[ITEMS_PER_ALLOC - 1].next = cqi_freelist;
       cqi_freelist = &item[1];
       }

 +     item->next = NULL;
 +
       return item;
   }
 ***************
 *** 153,160 ****
    */
   static void cqi_free(CQ_ITEM *item) {
 -     pthread_mutex_lock(&cqi_freelist_lock);
       item->next = cqi_freelist;
       cqi_freelist = item;
 -     pthread_mutex_unlock(&cqi_freelist_lock);
   }

 --- 163,168 ----
 ***************
 *** 254,258 ****
   static void 

Re: Making memcached more secure

2010-08-07 Thread dormando


On Sat, 7 Aug 2010, Dustin wrote:


 On Aug 7, 7:52 am, Loganaden Velvindron logana...@gmail.com wrote:
  There seems to be a problem when I pasted it in gmail.
 
  Here's a link to the git diff:
 
  http://devio.us/~loganaden/memcached.git.diff

   This makes some sense to me.  That functionality is kind of a
 plague.  On one hand we've got people trying to use it for things it
 doesn't do and on the other hand, we've got people who configure
 memcached incorrectly and put themselves at risk.


I propose:

1.4.6 would come with a -D option or something which would disable
cachedump, etc. possibly also stats sizes.

1.6.0 will have them disabled by default, with a different option for
enabling them? Yeah people abuse the shit out of them and I was
entertaining the idea of randomizing the names every release or removing
it before, but I don't want to listen to the whining on both sides of the
fence.

Definitely don't think printing warnings will do much. Honestly we do warn
you in the docs, it's clear that you never provide a username/password,
and dozens of articles on running memcached in the cloud tell you to
firewall the damn thing.

What's funny is despite all this people still screw it up. Even if we make
it harder to run debug commands that's only limiting the sort of damage
you can do by a teeny bit.

Then in six months some security geek will write something to run 'stats
slabs/stats items' and bomb your weakest slabs with junk data until your
site goes away. Or like, connect to it until it hits maxconns, or guess at
common keys. Yawn.


Re: REST API

2010-07-31 Thread dormando


On Thu, 29 Jul 2010, j.s. mammen wrote:

 Folks, let's not get bogged down by REST as defined by Roy Fielding in
 2000.

 My question was simple.
 Here it is again, rephrased.

 Do we need to implement a memcached layer whereby we can access the
 cached objects using the HTTP protocol? Here is an example of getting a
 cached object from a server:
 GET [server]/mc/object/id1

 Hope the question is clearer now?

I'm not sure how this thread has gone on for so long? Adding a new API
won't fix this problem aside from happening to force you to write a new
set of clients that use the same hashing model and don't mess with the
blob on their way back.

We should be better about building a compatibility standard into the
clients so they have the ability to use matching hashing algorithms to
blind fetch binary blobs. Bonus points for standardizing compression.

If you're talking about using a raw REST approach on top of thin air
you'll be missing all of the client features that make memcached what it
is; a distributed cache.

If you take your existing java/C# clients and use a single server instead
of multiple, you should be able to fetch the same XML blobs through each
without having them mangle it?


Re: Using PCIe SSDs instead of RAM

2010-07-25 Thread dormando
 On Fri, Jul 23, 2010 at 8:47 AM, dormando dorma...@rydia.net wrote:
   I tried.

   Try the engine branch?

 I guess, I'll have to at some point.

 Just wanted to say that LRU was designed as an algorithm for a uniform cost 
 model, where all elements are almost equally important (have the same cost of 
 a miss) and the only thing that distinguishes them is
 the pattern of accesses. This is clearly not a good model for memcache, 
 where: some elements are totally unimportant as they have already expired, 
 some elements are larger than others, some are always
 processed in batches (multigets), and so on. In my opinion GC moves 
 reality closer to the model by removing unimportant elements, so if you want 
 LRU to work correctly you should at least perform GC.
 You could also try to modify LRU to model that one large item actually 
 occupies space that could be better utilized by several small elements (this 
 is also a simple change). If you feel comfortable without
 GC, I am OK with that, just do not suggest that GC is against LRU.

Alright, I'm sorry. I've been unfair to you (and a few others recently).
I've been unnecessarily grumpy. I tried to explain myself as fairly as
possible, and Dustin added the words that I apparently forgot already, in
that these things are better pressed through via SE's.

I get annoyed by these threads because:

- I really don't care for arguments on this level. When I said GC goes
against the LRU I mean that the LRU we have doesn't require GC. The whole
point of adding the LRU was so we could skip that part. I'm describing
*intent*, I'm just too tired to keep arguing these things.

- The thread hijacking is seriously annoying. If you want to ping us about
an ignored patch, start a new thread or necro your own old thread. :(

- Your original e-mail opened with "We run this in single threaded mode
and the performance is good enough for us so please merge it." I'm pretty
dumbfounded that people can take a project which is supposed to be the
performant underpinnings of the entire bloody internet and not do any sort
of performance testing.

I try to test things and I do have some hardware on hand but I'm still
trying to find the motivation in myself to do a thorough performance
run through of the engine branch. There's a lot of stuff going on in
there. This is time consuming and often frustrating work.

You did make a good attempt at building an efficient implementation, and
it's a very clever way to go about the business, but best case:

- You're adding logic to the most central global lock
- You're adding 16 bytes per object
- Plus some misc memory overhead (minor).

If they're not causing the locks to be problems, the memory efficiency
drop is an issue for many more people. If we make changes to the memory
requirements of the default engine, I really only want to entertain ideas
that make it *drop* requirements (we have some, need to start testing
them as the engine stuff gets out there).

The big picture is many users have small items, and if we push this change
many people will suffer.

Yes it's true that once those metrics expose an issue you technically
already have an issue, but it's not an instant dropoff. Easily calculable
with graphs and things like the evicted_time stats. Items dropping off
the end that haven't been touched in 365,000+ seconds aren't likely to
cause you a problem tomorrow or even next week, but watch for that number
to fall. This is also why the evicted and evicted_nonzero stats were
split. Eviction of an item with a 0 expiration is nearly meaningless.
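
Those per-slab counters show up under 'stats items', e.g. (local instance
assumed):

    echo 'stats items' | nc 127.0.0.1 11211 | grep evicted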

However, I can't seem to get this through without being rude to people,
and I apologize for that. I should've responded to your original message
with these *technical* problems instead of just harping on the idea that
it looks like you weren't using all of the available statistics properly.

I'm trying to chillax and get back to being a fun (albeit grumpy)
productive hacker dude. Sorry, all.

-Dormando


Re: Using PCIe SSDs instead of RAM

2010-07-25 Thread dormando


On Sun, 25 Jul 2010, Jakub Łopuszański wrote:

 Thanks for the explanation.
 I see that we have entirely different points of view, probably caused by 
 totally different identified sets of bottlenecks, different
 usage, different configurations, etc. (I assume that you have greater 
 experience, since mine is restricted to one company, with just 55
 memcache machines). For example you often talk about the locks and CPU usage, 
 while we observed that (not surprisingly to us) those O(1)
 operations are relatively insignificant compared to socket operations, which 
 take ages. 

 I agree that 16 extra bytes is a serious problem though. If I had time I 
 would definitely try to implement a version that uses just 8
 bytes or less (for example by reimplementing TTL buckets as an array of 
 pointers to items hashed by item address). This was just a proof
 of concept that you can have GC in O(1), which some ppl claimed to be 
 difficult, and which turned out to work very well for us at nk.pl.

 Sorry for the thread hijacking, and all.
It's not hard to make it work, it's hard to make it work for everyone.
There're lots of things that I could add to memcached in a day each, but
it would make it less accessable instead of more accessable at the end of
the day.


Re: Using PCIe SSDs instead of RAM

2010-07-23 Thread dormando
I tried.

Try the engine branch?

On Fri, 23 Jul 2010, Jakub Łopuszański wrote:

 While I agree with most of your thesis, I can't see how GC is against the LRU.

 I agree, that often accessed keys with short TTL seem strange, and so do 
 rarely accessed keys with long TTL. But there are lots of perfect reasons to 
 have such situation, and we do.
 GC does not work against the LRU (at least I can't see it), it cooperates. 
 Apparently LRU is never used, because you have smaller chances to run out of 
 memory, but I'd like to answer doubts of Brian Moon:
 in case whole memory is occupied you will not get sudden lack of memory, 
 but just the usual thing: LRU will start to evict the oldest items.
 I agree that monitoring hitrates and evictions makes sense, but you can 
 forecast problems much sooner if you monitor the number of unexpired items, as 
 well.
 The point is: GC does not forbid you from using your regular monitoring 
 tools, skills and procedures. It just gives you another tool: live monitoring 
 of unexpired items.
 I see nothing bad about it:)

 Scenario 1. You are releasing a new feature, and you want to scale the number 
 of servers according to the load. You can monitor memory usage as the users 
 join, extrapolate, and order new machines much
 sooner, than by monitoring evictions, as evictions indicate that you already 
 have a problem.
 Scenario 2. You need to steal machines from one cluster to help build another 
 one, and you have to decide if you can do so safely without risking that the 
 old cluster will run out of memory. Again, monitoring
 evictions cannot reliably tell you how many machines you can remove from the 
 cluster, while monitoring memory gives you perfectly accurate info.


Re: Using PCIe SSDs instead of RAM

2010-07-22 Thread dormando
http://code.google.com/p/memcached/wiki/NewServerMaint#Looks_Can_be_Deceiving

Think I'll write a separate page about managing memory, based off of the
slides from my mysqlconf presentation about monitoring memcached...

We're not ignoring you, the patch is against what the LRU is designed for.
Several people have argued to put garbage collection back into memcached,
but it just doesn't mix.

In the interest of being constructive, you should look back through the
mailing list for details on the storage engine branch, and if you really
want it to work, it'd be a good exercise to implement this as a custom
storage engine.

In the interest of being thorough; you proved your own patch unnecessary
by noting that the hitrate did not change. It just confirmed you weren't
having a problem.

The short notes of my slides are just:

- Note evictions over time
- Note hitrate over time
- Investigate changes to either via a traffic snapshot from maatkit,
either on your memcached server or from an app server. Or setup one app
server to log its memcached traffic. whatever you need to do.
- Note your DB load as well, and correlate *all* of these numbers.

You'll get way more useful information out of the *flow* through memcached
than from *what's inside it*. What's inside it doesn't matter, at all!

Keep your hitrate stable, investigate what your app is doing when it
changes. If there's nothing for you to fix and the hitrate is dropping, db
load is increasing, add more memcached servers. It's really really simple.
Honestly! Looking at just one stat and making that decision is pretty
weird.

In your case, you were seeing evictions despite 50% of your memory being
loaded with expired items. Neither of these things are a problem or even
matter, because:

- expired items are freed when they're fetched
- evicted items are picked off of the tail of the LRU

which means that *neither* the expired items or the evicted items are
being accessed at all. You have unexpired items which are being accessed
less frequently than stuff that's being expired!

It *could* indicate a problem, but simply garbage collecting will actually
*hide* it from you! You'll find it by analyzing your misses and sets. You
might then see that your app is uselessly setting hundreds of keys every
time a user loads their profile, or frontpage, or whatever. Those keys
then expire without ever being used again.

That should lead you into a *real* benefit of not wasting time setting
extraneous keys, or fetching keys that never exist, or finding places to
combine data or issue multigets more correctly.

With respect to your multiget note, I went over this in quite a bit of
detail: http://dormando.livejournal.com/521163.html

If you're multiget'ing related data, there's zero reason for it to hit
more than one memcached instance. Except maybe you're fetching mass
numbers of huge keys and it makes more sense for the TCP sessions to be
split up in parallel. I dunno.

In one final note, I'd really really appreciate it if you could stop
hijacking threads to promote your patch. It's pretty rude, as your garbage
collector issue has been discussed on the list several times.

On Thu, 22 Jul 2010, Jakub Łopuszański wrote:

 Well, I beg to differ.
 We used to have evictions > 0, actually around 200 (per whatever munin counts 
 them), so we used to think, that we have too small number of machines, and 
 kept adding them.
 After using the patch, the memory usage dropped by 80%, and we have no 
 evictions for a long time, which means that evictions were misleading, 
 and happened just because LRU sometimes kills fresh items,
 even though there are lots of outdated keys.

 Moreover it's not like RAM usage fluctuates wildly. It's kind of constant, 
 or at least periodic, so you can very accurately say if something bad 
 happened, as it would be instantly visible as a deviation
 from yesterday's charts. Before applying the patch, you could as well not 
 look at the chart at all, as it was more than sure that it always shows 100% 
 usage, which in my opinion gives no clue about what is
 actually going on.

 Even if you are afraid of wildly fluctuating charts, you will not solve the 
 problem by hiding it, and this is what actually happens if you don't have GC 
 -- the traffic, the number of outdated keys, they
 all fluctuate, but you just don't see it, if the chart always shows 100% 
 usage...

 2010/7/22 Brian Moon br...@moonspot.net
   On 7/22/10 5:46 AM, Jakub Łopuszański wrote:
 I see that my patch for garbage collection is still being 
 ignored, and
 your post gives me some idea about why it is so.
 I think that RAM is a real problem, because currently (without 
 GC) you
 have no clue about how much RAM you really need. So you can end up
 blindly buying more and more machines, which effectively means 
 that
 multiget works worse and worse (client issues one big multiget 
 but it
 gets 

Re: new Cache::Memcached::libmemcached/Memcached::libmemcached releases

2010-07-21 Thread dormando
Huzzah!

On Mon, 19 Jul 2010, Patrick Galbraith wrote:

 Hi all!

 I'm pleased to announce the release of Cache::Memcached::libmemcached 0.02011
 and Memcached::libmemcached 0.4201. Cache::Memcached::libmemcached uses
 Memcached::libmemcached, which is a Perl wrapper for libmemcached, which is
 now in sync with the latest libmemcached, 0.42. This means support of binary
 protocol as well as other current libmemcached features. I've uploaded both to
 CPAN, so once the mirrors update they will be available via CPAN.

 Changes in this release:

 Changes in 0.4201 (svn r163) 15th July 2010

  Sync with libmemcached 0.42
  Squashed various compile warnings
  Many updates to libmemcached API calls due to API changes
  Imported and merged existing svn tree (which was out of sync) with Daisuke
 Maki's official git tree



Re: Get multi error - too many keys returned

2010-07-21 Thread dormando
 I'm having an issue with my memcached farm. I'm using 3 memcached
 servers (v1.2.6) and python-memcached client (v1.43).

 When using get_multi function, memcached returns sometimes keys that I
 wasn't asking for. Memcached can return less data than expected (if
 key-value does not exist), but documentation says nothing about
 returning more data than it should. What can cause this?

Client bugs, or buggy keys I assume? Memcached the server most certainly
only returns the keys you ask for, so if you're getting other ones you
might be using the client object wrong and getting out of sync with the
protocol. Or the client's just buggy...

Sorry, don't have any ideas about what it could be in particular. make
sure you're sending keys that don't have newlines/spaces/etc in them and
that you're not clobbering the object somehow.
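
For reference, the ASCII protocol limits keys to 250 bytes and forbids
whitespace and control characters; here's a tiny illustrative validator
(Python 3, not tied to any particular client) for auditing the keys an app
generates:

    def valid_key(key):
        if not isinstance(key, bytes):
            key = key.encode('utf-8')
        if not key or len(key) > 250:
            return False
        # spaces and control characters desync the ASCII protocol
        return all(b > 32 and b != 127 for b in key)

    assert valid_key('user:1234')
    assert not valid_key('bad key\r\n')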

-Dormando


Re: [PATCH] to make memcached drop privileges completely when running as root

2010-07-20 Thread dormando
You don't need to run memcached as root to do that, you need to *start* it
as root.

If you look just under the setrlimit(RLIMIT_NOFILE code you see that the
privilege dropping happens.

So you fire up memcached *from* root, specifying -u memcached, and it will
do its root-y things and then drop privileges to that user.

On Tue, 20 Jul 2010, Loganaden Velvindron wrote:

 It's useful when you need to run memcached as root (-u root).


  if (setrlimit(RLIMIT_NOFILE, &rlim) != 0) {
     fprintf(stderr, "failed to set rlimit for open files. Try running a$
     exit(EX_OSERR);
     }

 for upping rlimit.

 Once it's done setting rlimit, root privileges are no longer needed.

 Additionally, it chroots the process to /var/empty. If the attacker somehow
 succeeds in finding an exploit, he cannot execute commands like /bin/sh, since
 he's jailed inside /var/empty.


 //Logan
 C-x-C-c
 On Tue, Jul 20, 2010 at 2:38 AM, dormando dorma...@rydia.net wrote:

Greetings,
   
We are a small company who are increasingly relying on
memcached for our big projects. We are very pleased with
its performance.
   
I've put this patch that
   
1) chroots to /var/empty
2) change from root to a simple user.
   
It effectively jails the process once it no longer needs root
privilege and allows an attacker very little room to play.
   
The patch has been working fine on our gentoo server for
quite some time.
   
Feedback is most welcomed, and we are more than willing to
improve the patch to fit your standards.

 I'm a little confused; there is already a method for memcached to drop
 user privileges, by specifying the -u option? What's the purpose of this
 that the other function doesn't do?




 --
 `` Real men run current !''


                                            






Re: [PATCH] to make memcached drop privileges completely when running as root

2010-07-20 Thread dormando
I'm not really sure how to delicately explain this so please have some
forgiveness :)

You can't really hardcode either of those things... That's replacing a
flexible feature with the inflexible sort you'd expect out of a
proprietary appliance.

I'm pretty sure the -u feature works the way you need it to, and
chroot'ing an application is perfectly doable with an init script. I think
we could take a patch with an example init script for chroot'ing it
(perhaps along with some directions).

A bit on the fence about adding an outright chroot command, since
different OS's have different ways of doing that, and hardcoding it
doesn't seem to be the best use here (tho someone correct me if I'm
wrong).

On Wed, 21 Jul 2010, Loganaden Velvindron wrote:

 Hi,

 _memcached is a dedicated user with home directory /var/empty,
 and the login shell is /sbin/nologin.

 /var/empty could be created by the package manager or install script.

 //Logan
 C-x-C-c


 On Wed, Jul 21, 2010 at 1:25 AM, Trond Norbye trond.nor...@gmail.com wrote:

 Why do you remove the ability for the user to specify the username it should 
 run as, and instead hardcode it to run as _memcached ?? In addition this 
 patch require /var/empty to exists, and I know of
 a number of platforms that don't have a /var/empty directory...

 Just my 0.5NOK

 Trond


 On 20. juli 2010, at 20.54, Loganaden Velvindron wrote:

   Greetings,

   I've investigated further, and this diff seems to be ok.

   What do you think ?

   //Logan
   C-x-C-c

   diff --git a/memcached.c b/memcached.c
   index 750c8b3..1d56a8f 100644
   --- a/memcached.c
   +++ b/memcached.c
   @@ -22,6 +22,8 @@
    #include <sys/uio.h>
    #include <ctype.h>
    #include <stdarg.h>
   +#include <unistd.h>
   +#include <grp.h>

    /* some POSIX systems need the following definition
    * to get mlockall flags out of sys/mman.h.  */
   @@ -4539,22 +4541,6 @@ int main (int argc, char **argv) {
           }
       }

   -    /* lose root privileges if we have them */
   -    if (getuid() == 0 || geteuid() == 0) {
   -        if (username == 0 || *username == '\0') {
   -            fprintf(stderr, "can't run as root without the -u switch\n");
   -            exit(EX_USAGE);
   -        }
   -        if ((pw = getpwnam(username)) == 0) {
   -            fprintf(stderr, "can't find the user %s to switch to\n", username);
   -            exit(EX_NOUSER);
   -        }
   -        if (setgid(pw->pw_gid) < 0 || setuid(pw->pw_uid) < 0) {
   -            fprintf(stderr, "failed to assume identity of user %s\n", username);
   -            exit(EX_OSERR);
   -        }
   -    }
   -
       /* Initialize Sasl if -S was specified */
       if (settings.sasl) {
           init_sasl();
   @@ -4675,6 +4661,30 @@ int main (int argc, char **argv) {
       }

       /* Drop privileges no longer needed */
   +    if (getuid() == 0 || geteuid() == 0) {
   +        if ((pw = getpwnam("_memcached")) == NULL) {
   +            fprintf(stderr, "user _memcached not found");
   +            exit(EX_NOUSER);
   +        }
   +
   +        if (chroot("/var/empty") == -1) {
   +            fprintf(stderr, "check permissions on /var/empty");
   +            exit(EX_OSERR);
   +        }
   +
   +        if (chdir("/") == -1) {
   +            fprintf(stderr, "cannot set new root");
   +            exit(EX_OSERR);
   +        }
   +
   +        if (setgroups(1, &pw->pw_gid) ||
   +            setresgid(pw->pw_gid, pw->pw_gid, pw->pw_gid) ||
   +            setresuid(pw->pw_uid, pw->pw_uid, pw->pw_uid)) {
   +            fprintf(stderr, "failed to switch to correct user");
   +            exit(EX_NOUSER);
   +        }
   +    }
       drop_privileges();

       /* enter the event loop */

   On Tue, Jul 20, 2010 at 10:53 AM, Loganaden Velvindron 
 logana...@gmail.com wrote:
 yep it makes sense.

 In this case, could we not remove this part and drop root at the 
 other location
 to gain the jail benefit ?


 //Logan
 C-x-C-c

 On Tue, Jul 20, 2010 at 10:24 AM, dormando dorma...@rydia.net wrote:
   You don't need to run memcached as root to do that, you need to *start* 
 it
   as root.

   If you look just under the setrlimit(RLIMIT_NOFILE code you see that the
   privilege dropping happens.

   So you fire up memcached *from* root, specifying -u memcached, and it will
   do its root-y things and then drop privileges to that user.

 On Tue, 20 Jul 2010, Loganaden Velvindron wrote:

  It's useful when you need to run memcached as root (-u root).
 
 
   if (setrlimit(RLIMIT_NOFILE, &rlim) != 0) {
      fprintf(stderr, "failed to set rlimit for open files. Try running a$
      exit

Re: I get 'Could NOT connect to memcache server' sometimes even when server is up

2010-07-12 Thread dormando

On Mon, 12 Jul 2010, Snehal Shinde wrote:

 I am using the stable 2.x one. I have set the timeout to 3 secs now. Let's see 
 how that goes
 Snehal

You might want to stick with 2 or 4 seconds to test :) putting it right on
the line with the SYN timeout will still give you inconsistent results...


Re: I get 'Could NOT connect to memcache server' sometimes even when server is up

2010-07-12 Thread dormando

 switched to 4 secs now. also if its a packet loss issue will it make sense to 
 reduce the $retry_interval from the default 15 secs to say 2 
 secs? http://us2.php.net/manual/en/memcache.setserverparams.php
 also the error that I get intermittently is COULD NOT CONNECT TO SERVER. Is 
 there a way to find out more? why it was not able to connect? If I restart 
 the memcached with -vvv option, will it give a lot
 more details in the error message? just trying to find a way to get more info

 snehal

top posting.

If you can, I'd recommend writing a command-line PHP nugget and running it in
a loop, reconnecting over and over. Run that under strace and log to a file,
so you can see what syscall was returning (RST, timeout, etc).

Running memcached in -vv or -vvv mode will create an intense amount of
logging.
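
As a rough equivalent in Python instead of PHP (standard library only; the
server address is a placeholder), something like this run under, say,
strace -f -o /tmp/trace will show which syscall the failed or slow connects
die on:

    import socket, time

    HOST, PORT = '10.0.0.5', 11211   # placeholder memcached address

    for i in range(10000):
        start = time.time()
        try:
            s = socket.create_connection((HOST, PORT), timeout=4)
            s.sendall(b'version\r\n')
            s.recv(128)
            s.close()
        except OSError as e:
            print('attempt %d failed after %.3fs: %s' % (i, time.time() - start, e))
            continue
        elapsed = time.time() - start
        if elapsed > 0.05:   # anything over ~50ms is suspicious on a LAN
            print('attempt %d slow: %.3fs' % (i, elapsed))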


Re: I get 'Could NOT connect to memcache server' sometimes even when server is up

2010-07-11 Thread dormando


On Sun, 11 Jul 2010, Snehal Shinde wrote:

 Yes Jay is right. My server and client config are in sync. The problem
 is only intermittent and so i felt maybe increasing the default
 timeout might help. Any idea how i can increase the default memcache
 timeout for the php client Memcache?

I forget offhand :/ if it's not listed in pecl.php.net/memcache then
you're a little SOL.

Are you on version 3.x or version 2.x? (I might've missed this from
earlier in the thread). I've heard rumors of random connect failures in
3.x.

Also, your network might be experiencing a really low rate of packet loss.
If you can change the timeout, set it to 2 seconds for a while.. if you
still get random timeouts, set it to 4 seconds (higher than the SYN
retry). If it never fails or almost never fails at 4 seconds then you have
some packet loss that might be fixable.

It should never take memcached longer than a few ms to set up a new
connection, so anything worse than that usually means the box is hosed
(swapping) or the network is hosed (maxed, packet loss, etc).


Re: What performance should I expect?

2010-07-09 Thread dormando
 Hello,
 I have a server that roughly has 2000 hits per second on
 memcached, and I have seen
 about 0.06% of GETs get response times higher than 10ms; is that
 expected?

http://code.google.com/p/memcached/wiki/NewPerformance

that's probably packet loss, system swapping, or your box is otherwise
overloaded. also depending on how you're measuring it your client box/etc
could be slightly broken.

-Dormando


Re: Disappearing Keys

2010-07-06 Thread dormando
Or you could disable the failover feature...

On Tue, 6 Jul 2010, Darryl Kuhn wrote:

 FYI - we made the change on one server and it does appear to have resolved 
 premature key expiration.

 Effectively what appears to have been happening was that every so often a 
 client was unable to connect to one or more of the memcached servers. When 
 this happened it changed the key distribution. Because
 the connection was persistent it meant that subsequent requests would use the 
 same connection handle with the reduced server pool. Turning off persistent 
 connections ensures that a if we are unable to
 connect to a server in one instance the failure does not persist for 
 subsequent connections.

 We'll be rolling this change out to the entire server pool and I'll give the 
 list another update with our findings.

 Thanks,
 Darryl

 On Fri, Jul 2, 2010 at 8:34 AM, Darryl Kuhn darryl.k...@gmail.com wrote:
   Found the reset call - that was me being an idiot (I actually 
 introduced it when I added logging to debug this issue)... That's been 
 removed however there was no flush command. Somebody else
   suggested it may have to do with the fact that we're running persistent 
 connections; and that if a failure occurred that failure would persist and 
 alter hashing rules for subsequent requests on
   that connection. I do see a limited number of connection failures 
 (~5-15) throughout the day. I'm going to alter the config to make connections 
 non-persistent and see if it makes a difference
   (however I'm doubtful this is the issue as we've run with memcache 
 server pools with a single instance - which would make it impossible to alter 
 the hashing distribution).

   I'll report back what I find - thanks for your continued input!

   -Darryl



Re: LRU mechanism question

2010-07-06 Thread dormando
Here's a more succinct and to the point page:

http://code.google.com/p/memcached/wiki/NewUserInternals
^ If your question isn't answered here ask for clarification and I'll
update the page.

Your problem is about the slab preallocation I guess.
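
To make that concrete, here's a rough sketch of the slab class sizing in
Python (the ~48-byte item header, ~96-byte smallest chunk, and 1.25 growth
factor are assumed 1.4.x defaults; pages are 1MB and, in these versions, a
page handed to one class was not given back for another class to reuse):

    ITEM_OVERHEAD = 48          # rough per-item header size, assumed
    size, classes = 96, []      # assumed smallest chunk, -f 1.25 growth
    while size <= 1024 * 1024:  # classes stop at the 1MB page size
        classes.append(size)
        size = int(size * 1.25)

    def slab_class_for(item_bytes):
        need = item_bytes + ITEM_OVERHEAD
        for chunk in classes:
            if need <= chunk:
                return chunk
        return 1024 * 1024      # oversized items use the largest class

    print(slab_class_for(512 * 1024))  # chunk size serving the 512K test items
    print(slab_class_for(256 * 1024))  # a different, smaller class for 256K

So the 512K arrays claim nearly all of the pages for their class, and after
the flush those pages still belong to that class, leaving little room for the
256K items' class.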

On Tue, 6 Jul 2010, Matt Ingenthron wrote:

 Hi Sergei,

 For various reasons (performance, avoiding memory fragmentation), memcached
 uses a memory allocation approach called slab allocation.  The memcached
 flavor of it can be found here:

 http://code.google.com/p/memcached/wiki/MemcachedSlabAllocator

 Chances are, your items didn't fit into the slabs defined.  There are some
 stats to see the details and you can potentially do some slab tuning.

 Hope that helps,

 - Matt

 siroga wrote:
  Hi,
  I just started playing with memcached. While doing very basic stuff I
  found one thing that confused me a lot.
  I have memcached running with default settings - 64M of memory for
  caching.
  1. Called flushALL to clean the cache.
  2. insert 100 byte arrays of 512K each - this should consume about 51M
  of memory so I should have enough space to keep all of them - and to
  verify that, call get() for each of them - as expected all arrays are
  present
  3. I call flushAll again - so cache should be clear
  4. insert 100 arrays of smaller size (256K). I also expected that I
  have enough memory to store them (overall I need about 26M), but
  surprisingly to me when calling get() only the last 15 were found in the
  cache!!!
 
  It looks like memcached still hold memory occupied by first 100
  arrays.
  Memcache-top says that only 3.8M out of 64 used.
 
  Any info/explanation on memcached memory management details is very
  welcomed. Sorry if it is a well known feature, but I did not find much
  on a wiki that would suggest explanation.
 
  Regards,
  Sergei
 
  Here is my test program (I got the same result using both danga and
  spy.memcached clients):

  MemCachedClient cl;

  @Test
  public void strange() throws Throwable
  {
      byte[] testLarge = new byte[1024*512];
      byte[] testSmall = new byte[1024*256];
      int COUNT = 100;
      cl.flushAll();
      Thread.sleep(1000);
      for (int i = 0; i < COUNT; i++)
      {
          cl.set("largekey" + i, testLarge, 600);
      }
      for (int i = 0; i < COUNT; i++)
      {
          if (null != cl.get("largekey" + i))
          {
              System.out.println("First not null " + i);
              break;
          }
      }
      Thread.sleep(1000);
      cl.flushAll();
      Thread.sleep(1000);
      for (int i = 0; i < COUNT; i++)
      {
          cl.set("smallkey" + i, testSmall, 600);
      }
      for (int i = 0; i < COUNT; i++)
      {
          if (null != cl.get("smallkey" + i))
          {
              System.out.println("First not null " + i);
              break;
          }
      }
  }
 




Re: Adding/Removing order of instances from a consistent array

2010-07-04 Thread dormando
 Hi guys,

 I was wondering,
 If I have an array of 10 machines , 1 instance per machine set up like
 this:

  $instances = array(
      "server1"  => array("host" => "111.111.111.111", "port" => 11211, "weight" => 6),
      "server2"  => array("host" => "111.111.111.111", "port" => 11211, "weight" => 6),
      "server3"  => array("host" => "111.111.111.111", "port" => 11211, "weight" => 6),
      "server4"  => array("host" => "111.111.111.111", "port" => 11211, "weight" => 6),
      "server5"  => array("host" => "111.111.111.111", "port" => 11211, "weight" => 6),
      "server6"  => array("host" => "111.111.111.111", "port" => 11211, "weight" => 6),
      "server7"  => array("host" => "111.111.111.111", "port" => 11211, "weight" => 6),
      "server8"  => array("host" => "111.111.111.111", "port" => 11211, "weight" => 6),
      "server9"  => array("host" => "111.111.111.111", "port" => 11211, "weight" => 6),
      "server10" => array("host" => "111.111.111.111", "port" => 11211, "weight" => 6)
  );
  [IPs of course are foobar]

 I run through the array and through the (php) client I addServer()
 foreach of the values.
 The client is configured to have consistent hashing

 Then I want to take out a single instance (permanently).. but i want
 to take out server3 for example.
 Would I be losing the same percentage/amount of keys if I were to remove
 the last server in the list (server10) ?
 Or, since the entire list just went up (in order), would the client
 think that now 4, is in 3'rd place.. and would look for keys usually
 found on server 3, on machine 4? (and so forth and so forth going down
 the list, thus losing ~70% of keys [very bad])

I probably shouldn't answer this without researching your client (what
client are you using exactly?), but for consistent hashing it's supposed
to sort the server list first... I'm pretty sure?

So you sort the supplied server list, then hash it out onto the big wheel.
Which makes adding/removing servers not expand or contract the final hash
table but instead shift around the areas where the sorted server would be
inserted or removed.
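
For illustration, a generic ketama-style sketch in Python (not the exact
algorithm of any particular .NET or PHP client): every server is hashed to
many points on the wheel, so removing server3 only reassigns the keys that
fell into its arcs, roughly a tenth of them, no matter where it sits in the
list.

    import bisect, hashlib

    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest()[:8], 16)

    def build_ring(servers, points_per_server=160):
        return sorted((_hash('%s-%d' % (srv, i)), srv)
                      for srv in sorted(servers)   # sort the server list first
                      for i in range(points_per_server))

    def server_for(ring, key):
        idx = bisect.bisect(ring, (_hash(key), '')) % len(ring)
        return ring[idx][1]

    ring = build_ring(['server%d:11211' % n for n in range(1, 11)])
    print(server_for(ring, 'user:1234'))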


Re: Scalability and benchmarks

2010-07-01 Thread dormando


On Thu, 1 Jul 2010, Vladimir Vuksan wrote:

 I believe the FUD you are referring to was the following presentation at Velocity

 Hidden Scalability Gotchas in Memcached and Friends

 http://en.oreilly.com/velocity2010/public/schedule/detail/13046

 There is a link to the PDF of slides so you can see what they talked about. 
 Here is the short link to it

 http://j.mp/cgIsE9

 It was an example of applying a model to a set of controlled measurements and 
 not necessarily picking on memcached. One of the findings was that contention 
 has increased between versions 1.2.8 and 1.4.5
 from about 2.5% to 9.8%. Another is that memcached didn't perform well beyond 
 6 threads which they attributed to locks. At Sun they coded some patches 
 where they replaced the single lock with multiple
 locks and you can view performance on slide 25.

 The measurements do show that memcached performs extremely well ie. 300k ops 
 on a single instance however memcached was not able to take advantage of 
 additional threads.

 Key takeaway from this talk was not that memcached doesn't scale but that it 
 can perform even better.

I don't want to turn this into a flamebait trollfest, but I've always been
seriously annoyed by this benchmark that those Sun/Oracle folks keep
doing. So I'm going to toss out some flamebait and pray that it doesn't
turn this thread much longer.

First, "Hidden scalability gotchas" - for the love of fuck, nobody gets
hit by that. Especially nobody attending Velocity. I'm not even sure
facebook bothered to scale that lock.

Given the title, the overtly academic content, and the lack of serious
discussion as to the application of such knowledge, we end up with stupid
threads like this. Imagine how many people are just walking away with that
poor idea of "holy shit I should use REDIS^WCassandra^Wetc because
memcached doesn't scale!" - without anyone to inform them that redis isn't
even multithreaded and cassandra apparently sucks at scaling down. Google
is absolutely loaded with people benchmarking various shit vs memcached
and declaring that memcached is slower, despite the issue being in the
*client software* or even the benchmark itself.

People can't understand the difference! Please don't confuse them more!

There are *many* more interesting topics with scalability gotchas, like
the memory reallocation problem that we're working on. That one *actually
affects people*, is mitigated through education, and will be solved
through code. Other NoSQL solutions are absolutely riddled with
usability bugs that have nothing to do with how many non-fsync'd writes
you can push through a cluster per second. What separates academic wank
from truly useful topics is whether or not you can take the subject (given
the context applied to it) and actually do anything with it.

There're dozens of real problems to pick on us about - I sorely wish
people would stop hyperfocusing on the ones that don't matter. Sorry if
I've picked on your favorite memcached alternative; I don't really care
for a rebuttal, I'm sure everyone's working on fixing their problems :P

-Dormando


Re: Disappearing Keys

2010-07-01 Thread dormando
 Dormando... Thanks for the response. I've moved one of our servers to use an 
 upgraded version running 1.4.5. Couple of things:
  *  I turned on logging last night
  *  I'm only running -vv at the moment; -vvv generated way more logging than 
 we could handle. As it stands we've generated ~6GB of logs since last night 
 (using -vv). I'm looking at ways of reducing log
 volume by logging only specific data or perhaps standing up 10 or 20 
 instances on one machine (using multiple ports) and turning on -vvv on only 
 one instance. Any suggestions there?

Oh. I thought given your stats output that you had reproduced it on a
server that was on a dev instance or local machine... but I guess that's
related to below. Running logs on a production instance with a lot of
traffic isn't that great of an idea, sorry about that :/

 Looking at the logs two things jump out at me.
  *  While I had -vvv turned on I saw stats reset command being issued 
 constantly (at least once a second). Nothing in the code that we have does 
 this - do you know if the PHP client does this perhaps? Is
 this something you've seen in the past?

No, you probably have some code that's doing something intensely wrong.
Now we should probably add a counter for the number of times a stats
reset has been called...

  *  Second with -vv on I get something like this:
  +  71 get resourceCategoryPath21:984097:
 71 sending key resourceCategoryPath21:984097:
 71 END
 71 set 
 popularProducts:2010-06-28:skinit.com:styleskins:en::2000:image_wall:0__type 
 0 86400 5
 71 STORED
 71 set 
 popularProducts:2010-06-28:skinit.com:styleskins:en::2000:image_wall:0 1 
 86400 130230
 59 get domain_host:www.bestbuyskins.com
 59 sending key domain_host:www.bestbuyskins.com
 59 END
  *  Two questions on the output - what's the 71 and 59? Second - I would 
 have thought I'd see an END after each get and set however you can see 
 that's not the case.

 Last question... other than trolling through code is there a good place to go 
 to understand how to parse out these log files (I'd prefer to self-help 
 rather than bugging you)?

Looks like you figured that out. The numbers are the file descriptors
(connections). END/STORED/etc are the responses.

Honestly I'm going to take a wild guess that something on your end is
constantly trying to reset the memcached instance.. it's probably doing a
flush_all then a stats reset which would hide the flush counter. Do
you see flush_all being called in the logs anywhere?

Go find where you're calling stats reset and make it stop... that'll
probably help bubble up what the real problem is.


Re: Scalability and benchmarks

2010-06-30 Thread dormando
 I've seen some FUD from people claiming that memcached doesn't scale
 very well on multiple CPUs, which surprised me.

 Is there an accepted benchmark we can use to examine performance in more
 detail?

 Does anybody have any testing results in that area?

While we don't have a standard set of benchmarks yet, we do test it,
people have tested it, and it's routinely heavily hammered in production
all over the place, including several top 10 websites.

The FUD is easy enough to dispel: just squint at their numbers a little
bit and think it through carefully.

Most of these are saying that you hit a wall scaling memcached past,
say, 300,000 requests per second on a single box. (though I think with the
latest 1.4 it's easier to hit 500,000+). Remember that 300,000 requests
per second at 1k per request is over 2.5 gigabits of outbound alone.

If your requests are much smaller than that, you might skid in under
1gbps. For sanity's sake though, take a realistic look at how many
requests per second you actually hit memcached with. 100,000+ per second
per box is unlikely but sometimes happens, and works fine.

Even large sites are likely doing less than that, as they will need more
memory before hosing a box.

"memcached doesn't scale!" is a pile of horseshit. The same industry
claims all the single-threaded NoSQL stuff "scales just fine". Figure out
what your own needs are for requests per second, then try to do that with
your hardware. Odds are good you'll be able to hit 10x-100x that mark.

I think sometime this year we'll be making it scale across CPU's better
than it presently does, but the only people who would ever notice would be
users with massive memcached instances on 10gbps ethernet servicing small
requests. I'm sure there's someone out there like that, but I doubt anyone
listening to the FUD would be one of them.

-Dormando


Re: Scalability and benchmarks

2010-06-30 Thread dormando
 With what kind of boxes would that be?

 With 300-500k/sec you're getting really close to lowlevel limitations of
 single network interfaces. With dell 1950's (with broadcom netextreme II 5708
 and dual xeon 5150) we were able to produce about 550-600,000 packets/second
 with traffic/load-generating software (pktgen in the linux kernel) to test
 ddos protection. And that was with fire-and-forget 64byte tcp or udp-packets.
 Not with traffic some server was actually producing useful responses to
 requests it received earlier.

 With a much more recent Dell R210 (broadcom netextreme II 5709 and core i3
 530) we were able to reach twice that though. That one was able to reach about
 1.1 million pps. But still, that's with a packet generator generating unusable
 traffic. If you actually have to read the requests, process them and produce a
 response with a body, reaching up to 500k requests/second even on higher grade
 hardware with multiple interfaces sounds pretty good to me.

For most hardware memcached is limited by the NIC. I'd welcome someone to
prove a simple case showing otherwise, at which time we'd prioritize an
easy fix :)

-Dormando


Re: Scalability and benchmarks

2010-06-30 Thread dormando
 
  For most hardware memcached is limited by the NIC. I'd welcome someone to
  prove a simple case showing otherwise, at which time we'd prioritize an
  easy fix :)

 Does that mean you should use multiple NICs on the servers and spread the
 clients over different networks?

It means you probably don't need to worry about it. Most people will start
nailing evictions and need to add more instances long before they overrun
the NIC.

Also uh, no, you'd probably use port bonding instead of doing something
crazy with client networking.


Re: Disappearing Keys

2010-06-27 Thread dormando

 Any thoughts on what might be going on here?

 As for vitals/system config:

 Here's a recent stats dump:
 STAT pid 19986
 STAT uptime 8526687
 STAT time 1277416425
 STAT version 1.2.8

You should probably upgrade, but I think this version has what you may
need...

If you can reproduce this against an idle memcached, start up an instance
with -vvv (three v's), which will tell you why a GET failed, as well as
all of the parameters of any SET's/GET's you run.

So you can see if the GET failed because of it expired, was evicted,
expired due to a flush command (which doesn't seem to be your case), etc.
You can also see if your PHP client is actually setting the timeout
correctly or not.


Re: same key on both memcached servers

2010-06-20 Thread dormando
This smells like you're configuring your clients wrong... The client list
must be _exactly the same_ on every app server. Don't reorder them or put
localhost in there.

It's also possible that appserv2 can't reach one of the memcached daemons
as well, and is putting keys on the only available server.

On Sun, 20 Jun 2010, Shaulian wrote:

 Relating to the Memcached FAQ (http://code.google.com/p/memcached/wiki/
 FAQ#How_does_memcached_work?), it is clear that when you use more than
 one server for memcache, each key exists only one time, and only on one
 server.

 We are using 2 servers for memcached and 3 servers for app.
 Lately we noticed that when we request the same key from AppServ1, it
 returns DIFFERENT value when requesting it from AppServ2.
 We tried to delete the key from BackOffice in AppServ1, but it
 still existed on AppServ2.
 All the above clearly indicates that the key exists on each server.

 We're using .NET memcached client library (http://sourceforge.net/
 projects/memcacheddotnet/).

 Anyone experienced such a thing ?



Re: Suggestions for deferring DB write using memcached

2010-06-03 Thread dormando
 We're building a system with heavy real-time write volume and looking
 for a way to decouple db writes from the user request path.

 We're exploring the approach of buffering updated entities in
 memcached and writing them back to the database asynchronously.  The
 primary problem that we're concerned about is how to ensure that the
 entity remains in the cache until the background process has a chance
 to write it.

 Any advice and/or references would be greatly appreciated.

You really probably want to use a job server... http://gearman.org/ -
write updates async to gearman and update caches in memcached if you want,
then have workers write them back to the DB as fast as your system can
handle.
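
A rough sketch of that flow (Python; the python-gearman and python-memcached
APIs are assumed here, and save_to_db() is a hypothetical stand-in for your
real DB write):

    import json
    import gearman    # python-gearman, API assumed
    import memcache   # python-memcached, API assumed

    mc = memcache.Client(['127.0.0.1:11211'])
    jobs = gearman.GearmanClient(['127.0.0.1:4730'])

    def update_entity(entity_id, data):
        mc.set('entity:%s' % entity_id, data)   # readers see the new value now
        jobs.submit_job('db_writeback',
                        json.dumps({'id': entity_id, 'data': data}),
                        background=True)        # request path never waits on the DB

    # Worker process, run separately:
    def writeback(worker, job):
        payload = json.loads(job.data)
        save_to_db(payload['id'], payload['data'])   # hypothetical DB write
        return ''

    w = gearman.GearmanWorker(['127.0.0.1:4730'])
    w.register_task('db_writeback', writeback)
    # w.work()  # blocks forever, draining the queue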


RE: Global Lock When Getting Stats?

2010-05-18 Thread dormando
 Actually sorry I should've been more descriptive. I plan on using a java
 client to get the stats from memcached so that we can eventually pipe the
 stats to our monitoring software. What would be the best java client?
 Spymemcached?

Probably,  yes.

 Also, is there any consideration in the future to provide stats via JMX?

Unless you're getting it from a client, no.

Again, please take a look at those protocol documents. Parsing the stats
is incredibly trivial and you probably don't have to pull in a whole
client library as a dependency just for that. Also, again, the document
describes the stats in detail.
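
To underline how trivial it is, here's a complete raw-socket example (Python
purely for brevity; the same few lines port straight to Java):

    import socket

    def get_stats(host='127.0.0.1', port=11211):
        s = socket.create_connection((host, port))
        s.sendall(b'stats\r\n')
        buf = b''
        while not buf.endswith(b'END\r\n'):
            buf += s.recv(4096)
        s.close()
        # each line looks like: STAT <name> <value>
        return dict(line.split(' ', 2)[1:]
                    for line in buf.decode().splitlines()
                    if line.startswith('STAT '))

    print(get_stats()['curr_connections'])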


RE: Global Lock When Getting Stats?

2010-05-17 Thread dormando
memcached.org/wiki

also inside the source tarball there's a protocol.txt that explains the
raw stats thoroughly. You seem to be referring to the php api's stats
call.

On Mon, 17 May 2010, Tim Sneed wrote:

 Great, thanks for the information! Is there any documentation available from
 the project team that explains the monitoring aspects of memcached? Such as
 the methods of receiving stats (if there are any aside from getStats())?






Re: Global Lock When Getting Stats?

2010-05-13 Thread dormando
 I was watching a Memcached video spoken by John Adams when I heard
 something that made me curious.  When one gets stats from memcached,
 does it really perform a global lock? Does anyone have any good test
 cases on what sort of impact there is with an increasing node size
 with respect to performance degradation?

'stats sizes' grabs a global lock and iterates the whole cache. we don't
recommend you run that one. All other stats commands are extremely fast.

 I am looking to do some research on expanding our product to support
 real-time monitoring, management, and analysis of memcached. But I'm a
 little concerned that this lock occurs during every stat collection,
 so much to the point that John Adams mentions that it can noticeably
 degrade the performance of memcached. I tried to do some research on
 my own but am finding very little performance analysis or any
 benchmark info with respect to constant polling (minimally 30 sec
 interval) of a enterprise-level memcached distributed system. Has
 anyone seen bad performance with respect to a large cluster and
 gathering stats? Any info would be great, thanks!

There is no major lock every stat collection. There's a minor lock and
reading the values is very fast.

A while ago twitter was running an extremely old version of memcached and
they took a long time to upgrade. That particular version had a bug in
stats collection that was *missing* a mutex lock, and would thus crash
sometimes. So they were afraid of running stats commands. That bug hasn't
existed for three years at least.

-Dormando


Re: Cacti templates

2010-04-20 Thread dormando
Ding. You have access.

On Tue, 20 Apr 2010, Xaprb wrote:

 Can someone give me SVN commit rights so I can add a reference to
 http://code.google.com/p/mysql-cacti-templates/wiki/MemcachedTemplates
 to the wiki?  My Google Code username is baron.schwartz.

 - Baron





Re: 48944634 Hits / 2869804 Items but just 127.8 KBytes used !

2010-04-17 Thread dormando
 STAT version 1.2.8

Consider upgrading to 1.4.x ;)

 STAT curr_items 25

You have at most 25 items in your cache. That sounds pretty small.

 STAT bytes 130845

Yeah, that's 128k. Your app's just not using the cache very much.

-Dormando




Re: 48944634 Hits / 2869804 Items but just 127.8 KBytes used !

2010-04-17 Thread dormando


On Sat, 17 Apr 2010, wminside wrote:

 I get memcached from the EPEL repository.

 Can I set a higher current items? I mean, is there anything I can do
 to make memcached use more memory, or is it just how vBulletin works?

Go whine at the vB authors about putting some real shit into the cache. vB
just puts the data store stuff into memcached, which is cute but they
can do an awful lot more with it.

Given how awful their queries tend to be and how cacheable user data is,
it's confusing why they don't bother.




preview of new wiki

2010-04-11 Thread dormando
Hey,

I promised myself I'd give it another once over for formatting and obvious
issues first, but I find myself not... caring as much.

So be warned, I have to go back through most of this and fix formatting +
typos + wording, and it's missing sections on almost every page. The
unlinked portions at the bottom are pages I'm still drafting locally.

http://memcached.org/wiki

Sometime tomorrow I'm going to flip this to be the new wiki start page. I
should have almost everything I'm going to do on the first pass by ...
sometime tomorrow. A lot of the sections need to be fleshed out more over
time, or point to bugs or issues we need to work on, like providing
easier debian/fedora packages for folks when the base OS lags behind.

From there I'll be fiddling and tweaking and adding little things all
week while talking to people at MySQL Conf.

I think the layout sucks a lot less. As always I/we're open to feedback.
if you would like to contribute too, ask for wiki bits. If you're going to
complain, we (or you) might as well fix it.

It's our goal to make this clear and thorough for anyone who wishes to get
started with memcached.

-Dormando




Re: sporadic high max response times and client timeout strategies

2010-04-07 Thread dormando
Just about all responses should happen sub-ms (excepting network
jitter).

Some stuff you can check for offhand:

- List versions of all related software you're running (memcached proper,
libmemcached, ruby client)
- Your full startup arguments to memcached
- Narrow down if these timeouts happen if it's initiating a new connection
to memcached, or when reusing a persistent connection, or both (may not be
easy).
- If your memcached is (hopefully) new enough, is 'listen_disabled_num'
under the `stats` command nonzero? If so, you're hitting maxconns and
memcached is blocking new connections until old ones disconnect. Seems
unlikely for your case.

Check dmesg and syslogs on the hosts to ensure iptables isn't complaining
and TIME_WAIT buckets aren't overflowing anywhere, clients or servers.

If all software is new and blah blah blah, would you mind running a test
using a pure client (ruby or whatever, just no libmemcached) over
localhost to see if you can reproduce the issue there.
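
For example, a pure-Python loop over localhost (standard library only) that
surfaces the worst-case response times:

    import socket, time

    s = socket.create_connection(('127.0.0.1', 11211))
    s.sendall(b'set bench:key 0 0 3\r\nxyz\r\n')
    s.recv(64)                              # expect STORED

    timings = []
    for i in range(100000):
        t0 = time.time()
        s.sendall(b'get bench:key\r\n')
        buf = b''
        while not buf.endswith(b'END\r\n'):
            buf += s.recv(4096)
        timings.append((time.time() - t0, i))

    for elapsed, i in sorted(timings, reverse=True)[:10]:
        print('request %d took %.3f ms' % (i, elapsed * 1000))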

thanks,
-Dormando

On Wed, 7 Apr 2010, Ryan Tomayko wrote:

 We have a few memcached machines doing ~1000 ops/s (9:1 get to set)
 each. These are fairly beefy, non-virtualized, 8-cpu servers with ~14G
 RAM (12G to memcached). They're actually our hot fileserver spares,
 which is why the hardware is so severely overallocated. CPU is
 essentially idle, load rarely goes over 0.2 or so. We've benchmarked
 the things at 100K ops/s over the network without any real tuning or
 tweaking.

 Response times average under 5ms with a few hundred active
 connections. Here's a graph of the min and avg response times reported
 by memslap as the number of connections increase from 1 to 250:

 http://img.skitch.com/20100407-c67xj7d2b1g979bumif9wm5ebd.png

 But we also get occasional 200ms response times in those runs. Here's
 the max response times for the same memslap runs graphed above:

 http://img.skitch.com/20100407-pj9djy5k432b2225nimd9qaqcq.png

 That's over the network, but I get similar spikeyness in max response
 time when I run the same tests over a loopback interface.

 I'm wondering, are occasional high max response times like this to be 
 expected?

 And, if so, would a low (say 10ms) client read timeout + retry be a
 good strategy for combatting them?

 We use libmemcached (via the memcached Ruby library) with a few
 hundred persistent connections to each memcached and are experimenting
 with different approaches for setting the receive timeout
 (MEMCACHED_BEHAVIOR_RCV_TIMEOUT). We started with 250ms because that
 seemed like a basically sane value, but the high rate of timeouts (and
 eventual host ejections) caused us to bump that up to 500ms, and then
 1s, until we settled on 1.5s. That will still timeout occassionally,
 but the frequency is much reduced -- more what I had expected with a
 ~250ms timeout.

 1.5s seems like an insanely high value. I figure we either have some
 kind of server configuration issue or we need to consider using the
 read timeout to guarantee consistent response times. I'm researching
 the former but was hoping someone on the list might have experience
 with the latter. Or even general advice on using client send/receive
 timeouts?

 Thanks,
 Ryan





Re: Read expires?

2010-04-07 Thread dormando
 I thought this was a FAQ -- or at least I have memory of this being discussed 
 here before.

 I have one process that uses memcached for throttling by setting a key with a 
 timeout.  If the set fails (NOT_STORED, as the FAQ describes)
 then the process waits.

 The question came up yesterday at work if a process can query the time 
 remaining for a given key.  I thought I remembered that there was no
 way to read the timeout value on an existing key, but I'm not having any luck 
 this morning finding discussion of this.  Can someone confirm
 this -- or point me to documentation/discussion about this?

No way to fetch this in the protocol... You can embed it in your object if
you want.
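
For instance, with the python-memcached API (the wrapper names here are made
up for illustration):

    import time
    import memcache   # python-memcached, API assumed

    mc = memcache.Client(['127.0.0.1:11211'])

    def set_with_ttl(key, value, ttl):
        # store the absolute expiry alongside the value
        mc.set(key, {'expires_at': time.time() + ttl, 'value': value}, ttl)

    def ttl_remaining(key):
        obj = mc.get(key)
        if obj is None:
            return None                          # expired or never set
        return obj['expires_at'] - time.time()

    set_with_ttl('throttle:user42', 1, 60)
    print(ttl_remaining('throttle:user42'))      # roughly 60.0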

 Also, is there a way to test that a key exists other than a get()? I know 
 with the throttling example an add() is used, but I don't want
 to add a new value if it doesn't exist.  Perhaps premature optimization, but 
 can a process check that a key exists and not transfer the
 value when the data is not needed?

No dice there either. You can do an 'add' of a zero byte object with an
expiration of 1 second. Or uh... if you set the expiration to be > 30 days
it's accepted as a date, so I think you can backdate it and have it expire
immediately? Can't remember offhand if that actually works.

Usually it's a premature optimization or a flow problem if you need to do
that though. Real cases have been rare.



Memcached release 1.4.5

2010-04-03 Thread dormando
Hey,

The team is happy to belatedly announce memcached 1.4.5. We're skipping
the -rc routine since this release is very late and contains only minor
bugfixes and one new statistic.

http://code.google.com/p/memcached/wiki/ReleaseNotes145

We will very shortly begin a long -rc cycle with 1.4.6, and are
stabilizing 1.6.0 for release. More of the rest of the open bugs will be
closed out for 1.4.6.

A number of us will be at MySQLConf (http://en.oreilly.com/mysql2010/)
next week. Speaking, dotorg-boothing, or getting drunk in the lobby. Come
say hi if you're brave enough.

-Dormando




Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-18 Thread dormando
 class does not have enough memory
relative to other slabs. If you can prove that happens often, you'll need
to check this out more carefully.

 Tomcat provides a PersistentManager ([1]) which allows storing sessions in 
 the database. But this manager backs up all sessions in batches every 10 
 seconds. For one thing scalability of the application is
 then directly dependent on the database (more than it is already if a 
 database is used) and there's a timeframe where sessions can be lost. If the 
 session backup frequency is shortened, the database is
 hit more often. Additionally sessions are stored in the database again and 
 again even if they were not changed at all. That was the reason why I decided 
 not to use this.

Yeah, I get why the original one is crap. I just don't see why the swing
from "it batches everything every 10 seconds because some guy lacked total
clue" to "holy _SHIT_ we can't touch the database ever!!!" is necessary.

In my old post I basically describe something where:

- A new session gets INSERT'ed into DB, and memcached.

- Fetched from memcached and updated back into memcached.

- On fetch, if you haven't synced with the DB in more than N minutes, sync
with the DB again.

- Alternatively a background thread could trawl through "select
session_id, sync_timestamp, last_known_fetch from session where ..."
every few minutes and sync stuff back to the DB if it's been changed (but
it doesn't have to keep checking if the last_accessed time isn't being
updated with the synced time).

- At some point background process or whatever DELETE's expired crap from
the DB.

So you still do some writes, but it should be vastly reduced compared to
updating the last accessed timestamp on every view, and reads against
the DB should almost never happen for active sessions that stay in cache.
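
A minimal sketch of that flow, assuming a pecl/memcached handle, a PDO
handle, and a MySQL-ish sessions(id, data) table; all of the names here
are illustrative, not from the thread:

  <?php
  define('SYNC_INTERVAL', 300); // sync to the DB at most every 5 minutes

  function load_session($id, Memcached $m, PDO $db) {
      $sess = $m->get("sess:$id");
      if ($sess === false) {
          // Cache miss: fall back to the journaled copy in the DB.
          $st = $db->prepare('SELECT data FROM sessions WHERE id = ?');
          $st->execute(array($id));
          $row = $st->fetch(PDO::FETCH_ASSOC);
          $sess = $row ? unserialize($row['data'])
                       : array('synced_at' => 0); // brand new session
          $m->set("sess:$id", $sess);
      }
      if (time() - $sess['synced_at'] > SYNC_INTERVAL) {
          // Periodic write-back: the DB sees one write every few minutes
          // per active session instead of one per page view.
          $sess['synced_at'] = time();
          $db->prepare('REPLACE INTO sessions (id, data) VALUES (?, ?)')
             ->execute(array($id, serialize($sess)));
          $m->set("sess:$id", $sess);
      }
      return $sess;
  }
  ?>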

Given the addition of the ADD bit, I guess you could build it so that in a
pinch you could just shut off DB journaling to deal with some overload
scenario.

-Dormando


Re: segfault on FC8

2010-03-18 Thread dormando
 Hi, I'm getting a segfault just after running memcached on Fedora 8,
 running in VMware.

 I just run
  # /usr/local/bin/memcached -u gri -m 1 -v
 in one console
 # telnet localhost 11211
 in another, and in the first I get an endless flow of
 "Catastrophic: event fd doesn't match conn fd!" lines.

 In syslog I see
 Mar 19 04:41:52 devel kernel: memcached[20924]: segfault at efffc ip
 0057f0ac sp bfedc3b8 error 4 in libc-2.7.so[512000+153000]

 Do you need some debug dumps?

Can you upgrade libevent? Fedora 8's pretty old and we've seen a segfault
or two with really old libevents and really new memcached's.



Re: Memcache as session server with high cache miss?

2010-03-13 Thread dormando
From the looks of it, you're not evicting anything. Which means memcached
isn't forgetting about anything you're telling it. For some other reason
your sessions are not being found.

From your description of the problem (sometimes logged in, sometimes not),
along with the high miss rate on memcached, it sounds like something else
is wrong with your configuration.

I know you've been asked this before, but:

- Can you quadruple check that both webservers have *identical
configurations*? Both webservers should list both memcached servers with
the same IPs, in the same order. If they're inconsistent, a user who hits
webserver1, then webserver2, then webserver1 would end up flipping between
memcacheds.

- Have you combed your error logs? Are you handling errors from your
client? It's possible that memcached or your client are throwing errors
and you're not noticing, or that it's failing to contact one server
sometimes and is then contacting the other.

- Have you tried enabling persistent connections from PHP? There should be
an option for it; a short sketch follows below this list. You have a high
rate of new connections and switching to persistent can reduce those
symptoms (such as firewalls, running out of local ports, etc).

- Your software is very old. Memcached is very old and I'm going to guess
that your pecl/memcached library is also very old. You're missing a lot of
counters and bug fixes that would help diagnose (or outright fix) issues
like this. Would you consider upgrading to the latest non-betas?
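
The persistent-connection sketch mentioned above, using the pecl/memcache
client from this thread (the IP is one from the session.save_path quoted
elsewhere in the thread; note the session handler itself manages sockets
from session.save_path, so this applies to code talking to memcached
directly):

  <?php
  // pconnect() keeps the socket open across requests within the same
  // Apache/FPM worker instead of reconnecting on every page view.
  $mc = new Memcache();
  $mc->pconnect('172.23.111.11', 11211);
  // non-persistent equivalent: $mc->connect('172.23.111.11', 11211);
  ?>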

On Sat, 13 Mar 2010, TheOnly92 wrote:

 Webserver1: http://paste2.org/p/715491
 Webserver2: http://paste2.org/p/715492

 Situation:
 1. User logs in.
 2. User clicks somewhere (still logged in)
 3. User clicks on another placed and gets redirected to the home page
 (appears logged out for this page)
 4. Refreshes and able to access the page again (logged in).

 On Mar 13, 1:35 pm, dormando dorma...@rydia.net wrote:
  Can you telnet to the instances, type stats, stats items, and stats
  slabs, then copy/paste all that into pastebin?
 
  echo stats | nc host 11211 > stats.txt works too
 
  Your version is very old... It's missing many statistical counters that
  could help us diagnose a problem. The extendedstats isn't printing an
  evictions counter, but I can't remember if that version even had one.
 
  Can you describe your problem in more detail? If I recall:
 
  - User logs in.
  - Clicks somewhere. now they're logged out?
  - They click somewhere else, and they're logged in again? Does this mean
  they found their original session again, or did you app log them in again?
 
  -Dormando
 
 
 
  On Fri, 12 Mar 2010, TheOnly92 wrote:
   I'm retrieving statistics via Memcache::extendedStats function, here
   are the basics:
 
   Session Server 1
   Version    1.2.2
   Uptime     398,954 sec
   Cache Hits 2,065,061
   Cache Misses       987,726 (47.83%)
   Current Items      381,928
   Data Read  4,318,055.02 KB
   Data Written       2,011,004.09 KB
   Current Storage    100,688.96 KB
   Maximum Storage    256.00 MB
   Current Connections        9
   Total Connections  5,278,414
   Session Server 2
   Version    1.2.2
   Uptime     398,943 sec
   Cache Hits 2,225,697
   Cache Misses       987,733 (44.38%)
   Current Items      381,919
   Data Read  4,323,893.05 KB
   Data Written       2,159,309.95 KB
   Current Storage    100,685.52 KB
   Maximum Storage    256.00 MB
   Current Connections        11
   Total Connections  5,278,282
 
   We are absolutely sure that both webservers are able to access the
   memcache server instances, we selected memcache because it was an easy
   configuration and setup without any changes of source code required,
   not to think that it is absolutely reliable. We just need to make
   sure that it works most of the time, but current situation is just
   unacceptable.



Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-13 Thread dormando
 Cool. Would it be possible to make this number configurable via a cmd line 
 switch?

You really don't want to mess with this value. It will bring you the
absolute opposite results of what you expect. Memcached does this search
while holding a global mutex lock, so no other threads are able to access
the allocator at the same time. It must be very fast, and most people will
just crank it and grind the system to a halt, so I'm unsure if we will
make this configurable.

The number's just in a couple places in items.c if you wish to recompile
it and test, but again that's absolutely not a good idea if you want it to
perform.
  
 That's right for the normal case. However, for the memcached-session-manager 
 I just implemented a feature so that sessions are only sent to memcached for 
 backup if session data was modified. To prevent
 expiration of the session in memcached, a background thread is updating 
 sessions in memcached that would expire ~20 seconds later in memcached (if 
 there would be some kind of touch operation I'd just use
 that one). When they are updated, they are set with an expiration of the 
 remaining expiration time in tomcat. This would introduce the issue, that the 
 update would push them to the head of the LRU, but
 their expiration might be e.g. only 5 minutes. So they would be expired but 
 won't be reached when the 50 items are checked for expiration.

Are sessions modified on every hit? or just fetched on every hit? I don't
see how this would be different since active sessions are still getting
put to the front of the LRU. When you modify a session isn't the
expiration time of that session extended usually? Or do users get logged
out after they've run out of their 5 free hours of AOL?

Honestly, if you have a background thread that can already find sessions
when they're about to expire, why not issue a DELETE to memcached when
they do expire? Or even a GET after they expire :)

 That's true, this is the drawback that I'd have to accept.

 Do you see other disadvantages with this approach (e.g. performance wise)?

Shouldn't be any particular performance disadvantage, other than severely
reducing the effectiveness of your cache and caching fewer items overall
:P

 Yes, for this I (the memcached-session-manager) should provide some stats on 
 size distribution.
 Or does memcached already provide this information via its stats?

`stats items` and `stats slabs` provide many statistics on individual
slabs. You can monitor this to see if a particular slab size is having a
higher number of evictions and doesn't have enough slabs assigned to it.
As well as how many items are in each slab and all that junk.
  
 In my case this is not that much an issue (users won't get logged out), as 
 sessions are served from local memory. Session are only in memcached for the 
 purpose of session failover. So this restart could be
 done when operations could be *sure* that no tomcat will die.

Tomcat sounds like such a pisser :P Even with your backup thing I'd
probably still add an option to allow it to journal to a database, and I
say this knowing how to get every last ounce of efficiency out of
memcached.

-Dormando


Re: reusing sockets between Perl memcached objects

2010-03-12 Thread dormando
We discovered this as well a few months ago... I don't think we found a
workaround :(

Maybe someone else has?

On Fri, 12 Mar 2010, Jonathan Swartz wrote:

 We make liberal use of the namespace option to
 Cache::Memcached::libmemcached, creating one object for each of our
 dozens of namespaces.

 However I recently discovered that it will create new socket
 connections for each object. i.e.:

#!/usr/bin/perl -w
use Cache::Memcached;
use strict;

my ( $class, $iter ) = @ARGV or die "usage: $0 class iter";
eval "require $class";

my @memd;
for my $i ( 0 .. $iter ) {
    $memd[$i] = $class->new( { servers => ["localhost:11211"] } );
    $memd[$i]->get('foo');
}

my $memd = new Cache::Memcached { servers => ["localhost:11211"] };
my $stats = $memd->stats;
print "curr_connections: " . $stats->{total}->{curr_connections} . "\n";

 swartz ./sockets.pl Cache::Memcached 50
 curr_connections: 10
 swartz ./sockets.pl Cache::Memcached::Fast 50
 curr_connections: 61
 swartz ./sockets.pl Cache::Memcached::libmemcached 50
 curr_connections: 61

 I don't know why curr_connections starts at 10. In any case,
 curr_connections will not grow with the number of Cache::Memcached
 objects created, but will grow with the number of
 Cache::Memcached::Fast or Cache::Memcached::libmemcached objects
 created.

 I was a little surprised that libmemcached, at least, didn't have this
 feature. Just wondering if I'm doing something wrong.

 If I want to keep using libmemcached, I guess I will have to create
 just one option and override its namespace each time I use it.

 Jon



Re: Memcache as session server with high cache miss?

2010-03-12 Thread dormando
 Here is our current setup:
 webserver1 (also runs session memcache server)
 webserver2 (also runs session memcache server)
 database (specialized memcache storage for data caching)

 We are not really a high loaded site, at peak time only about 1500
 users online together. Network is not really saturated, as not much
 data being transferred I believe.

 Here is the phpinfo() part for sessions:
 session.auto_startOff Off
 session.bug_compat_42 On  On
 session.bug_compat_warn   On  On
 session.cache_expire  180 180
 session.cache_limiter nocache nocache
 session.cookie_domain no value no value
 session.cookie_httponly   Off Off
 session.cookie_lifetime   0   0
 session.cookie_path   /   /
 session.cookie_secure Off Off
 session.entropy_file  no value no value
 session.entropy_length0   0
 session.gc_divisor100 100
 session.gc_maxlifetime 1440 1440
 session.gc_probability0   0
 session.hash_bits_per_character   4   4
 session.hash_function 0   0
 session.name  PHPSESSID   PHPSESSID
 session.referer_check no valueno value
 session.save_handler  memcachememcache
 session.save_path tcp://172.23.111.11:11211,tcp://172.23.111.12:11211
 tcp://172.23.111.11:11211,tcp://172.23.111.12:11211
 session.serialize_handler php php
 session.use_cookies   On  On
 session.use_only_cookies  Off Off
 session.use_trans_sid 0   0

 And yes, we use PECL memcache extension.

Can you paste the output of 'stats' against both of your memcached
servers? Is your configuration identical on both servers? How have you
been calculating the miss rate?


Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-12 Thread dormando
 Some details here:
 http://dev.mysql.com/doc/refman/5.0/en/ha-memcached-using-memory.html

 thanx for this link. Some details in the text are confusing to me. It says:

   When you start to store data into the cache, memcached does not 
 allocate the memory for the data on an item by item basis. Instead, a slab 
 allocation is used to optimize memory usage and
   prevent memory fragmentation when information expires from the cache.


   With slab allocation, memory is reserved in blocks of 1MB. The slab is 
 divided up into a number of blocks of equal size.

 Ok, blocks of equal size, all blocks have 1 MB (as said in the sentence 
 before).

   When you try to store a value into the cache, memcached checks the size 
 of the value that you are adding to the cache and determines which slab 
 contains the right size allocation for the item.
   If a slab with the item size already exists, the item is written to the 
 block within the slab.

 "written to the block within the slab" sounds as if there's one block for
 one slab?
  


   If the new item is bigger than the size of any existing blocks,

 I thought all blocks are 1MB in their size? Should this be of any existing 
 slab?
  
   then a new slab is created, divided up into blocks of a suitable size.

 Again, I thought blocks are 1MB, then what is a suitable size here?
  
   If an existing slab with the right block size already exists,

 Confusing again.
  
   but there are no free blocks, a new slab is created.


 In the second part of this documentation, the terms "page" and "chunk" are
 used, but not related to "block". "Block" is not used at all in the second
 part. Can you clarify the meaning of "block" in this
 context and create a link to slab, page and chunk?

 Btw, I found "Slabs, Pages, Chunks and Memcached" ([1]) really well written
 and easy to understand. Would you say this explanation is complete?

The memory allocation is a bit more subtle... but it's hard to explain and
doesn't really affect anyone.

Urr... I'll give it a shot.

./memcached -m 128
^ means memcached can use up to 128 megabytes of memory for item storage.

Now lets say you store items that will fit in a slab class of 128, which
means 128 bytes for the key + flags + CAS + value.

The maximum item size is (by default) 1 megabyte. This ends up being the
ceiling for how big a slab page can be.

8192 128 byte items will fit inside the slab page limit of 1mb.

So now a slab page of 1mb is allocated, and split up into 8192 chunks.
Each chunk can store a single item.

Slabs grow at a default factor of 1.20 or whatever -f is. So the next slab
class after 128 bytes will be 154 bytes (rounding up). (note I don't
actually recall offhand if it rounds up or down :P)

154 bytes is not evenly divisible into 1048576. You end up with
6808.93 chunks. So instead memcached allocates a page of 1048432 bytes,
providing 6,808 chunks.

This is slightly smaller than 1mb! So as your chunks grow, they
don't allocate exactly a megabyte from the main pool of 128 megabytes,
then split that into chunks. Memcached attempts to leave the little scraps
of memory in the main pool in hopes that they'll add up to an extra page
down the line, rather than be thrown out as overhead if a slab class
were to allocate a full 1mb page.

So in memcached, a slab page holds however many chunks of a given size
will fit into 1mb, and a chunk is the slot a single item is stored in. The
slab growth factor determines how many slab classes exist.
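
The arithmetic is easy to replay; a little sketch of my own (the real
logic lives in slabs.c and differs in details like alignment and the
minimum chunk size):

  <?php
  $factor = 1.20;     // -f, the slab growth factor
  $page   = 1048576;  // 1MB page size / max item size
  $size   = 128;      // starting chunk size, just for this illustration

  while ($size <= $page / 2) {
      $chunks = (int) ($page / $size);     // chunks that fit in one page
      printf("chunk %6d bytes -> %5d chunks, page %7d bytes\n",
             $size, $chunks, $size * $chunks);
      $size = (int) ceil($size * $factor); // next slab class
  }
  ?>

which prints 128/8192/1048576 and then 154/6808/1048432, matching the
numbers above.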

I'm gonna turn this into a wiki entry in a few days... been slowly
whittling away at revising the whole thing.

-Dormando


Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-12 Thread dormando
 The memory allocation is a bit more subtle... but it's hard to explain and
 doesn't really affect anyone.

 Urr... I'll give it a shot.

 ./memcached -m 128
 ^ means memcached can use up to 128 megabytes of memory for item storage.

 Now lets say you store items that will fit in a slab class of 128, which
 means 128 bytes for the key + flags + CAS + value.

 The maximum item size is (by default) 1 megabyte. This ends up being the
 ceiling for how big a slab page can be.

 8192 128 byte items will fit inside the slab page limit of 1mb.

 So now a slab page of 1mb is allocated, and split up into 8192 chunks.
 Each chunk can store a single item.

 Slabs grow at a default factor of 1.20 or whatever -f is. So the next slab
 class after 128 bytes will be 154 bytes (rounding up). (note I don't
 actually recall offhand if it rounds up or down :P)

 154 bytes is not evenly divisible into 1048576. You end up with
 6808.93 chunks. So instead memcached allocates a page of 1048432 bytes,
 providing 6,808 chunks.

 This is slightly smaller than 1mb! So as your chunks grow, they
 don't allocate exactly a megabyte from the main pool of 128 megabytes,
 then split that into chunks. Memcached attempts to leave the little scraps
 of memory in the main pool in hopes that they'll add up to an extra page
 down the line, rather than be thrown out as overhead if a slab class
 were to allocate a full 1mb page.

 So in memcached, a slab page holds however many chunks of a given size
 will fit into 1mb, and a chunk is the slot a single item is stored in. The
 slab growth factor determines how many slab classes exist.

 I'm gonna turn this into a wiki entry in a few days... been slowly
 whittling away at revising the whole thing.

To be most accurate, it is how many chunks will fit into the max item
size, which by default is 1mb. The page size being == to the max item
size is just due to how the slabbing algorithm works. It creates slab
classes between a minimum and a maximum. So the maximum ends up being the
item size limit.

I can see this changing in the future, where we have a max item size of
whatever, and a page size of 1mb, then larger items are made up of
concatenated smaller pages or individual malloc's.


Re: Memcache as session server with high cache miss?

2010-03-12 Thread dormando
Can you telnet to the instances, type stats, stats items, and stats
slabs, then copy/paste all that into pastebin?

echo stats | nc host 11211 > stats.txt works too

Your version is very old... It's missing many statistical counters that
could help us diagnose a problem. The extendedstats isn't printing an
evictions counter, but I can't remember if that version even had one.

Can you describe your problem in more detail? If I recall:

- User logs in.
- Clicks somewhere. now they're logged out?
- They click somewhere else, and they're logged in again? Does this mean
they found their original session again, or did you app log them in again?

-Dormando

On Fri, 12 Mar 2010, TheOnly92 wrote:

 I'm retrieving statistics via Memcache::extendedStats function, here
 are the basics:

 Session Server 1
 Version   1.2.2
 Uptime398,954 sec
 Cache Hits2,065,061
 Cache Misses  987,726 (47.83%)
 Current Items 381,928
 Data Read 4,318,055.02 KB
 Data Written  2,011,004.09 KB
 Current Storage   100,688.96 KB
 Maximum Storage   256.00 MB
 Current Connections   9
 Total Connections 5,278,414
 Session Server 2
 Version   1.2.2
 Uptime398,943 sec
 Cache Hits2,225,697
 Cache Misses  987,733 (44.38%)
 Current Items 381,919
 Data Read 4,323,893.05 KB
 Data Written  2,159,309.95 KB
 Current Storage   100,685.52 KB
 Maximum Storage   256.00 MB
 Current Connections   11
 Total Connections 5,278,282

 We are absolutely sure that both webservers are able to access the
 memcache server instances, we selected memcache because it was an easy
 configuration and setup without any changes of source code required,
 not to think that it is absolutely reliable. We just need to make
 sure that it works most of the time, but current situation is just
 unacceptable.



Re: does memcached fasten web server?

2010-03-08 Thread dormando
 Hi,
 I did a simple benchmark on my web server by issuing 50 queries. I sent
 10,000 requests and it shows that using memcached is slightly slower than
 (or almost the same as) without memcached. The result shows 216.29
 requests per second with memcached enabled and 216.51 with memcached
 disabled. However, if I run memcached on the same machine as the web
 server, the result is faster, at 231.55.

 i'm using memcached version 1.4.4 in my dedicated server. i used php
 as my memcached client written by abhinav at his site
 http://abhinavsingh.com/blog/2009/01/memcached-and-n-things-you-can-do-with-it.

 Hope someone can help me.Thanks :D

What is your app and what exactly is it doing? What are the components, db
queries, etc? Have you verified memcached is being accessed instead of the
database?
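
One quick way to answer that last question is to diff the server's
counters around a single request; a sketch with pecl/memcache (the counter
names are straight from the stats protocol):

  <?php
  $mc = new Memcache();
  $mc->connect('127.0.0.1', 11211);

  $before = $mc->getStats();
  // ... exercise one request against the app here ...
  $after = $mc->getStats();

  // If these deltas stay at zero, the app never touched memcached.
  printf("gets: +%d  hits: +%d  sets: +%d\n",
         $after['cmd_get'] - $before['cmd_get'],
         $after['get_hits'] - $before['get_hits'],
         $after['cmd_set'] - $before['cmd_set']);
  ?>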


Re: Compiling memcached for Linux Power pc64

2010-03-08 Thread dormando

 Hi Guys,


 I'm trying to compile memcached-1.4.4 in my IBM P550 machine running
 Linux RHE for Power PC64, but I'm getting the error:

 memcached-1.4.4]# ./configure
 checking build system type... Invalid configuration `powerpc64-unknown-
 linux-': machine `powerpc64-unknown-linux' not recognized
 configure: error: /bin/sh ./config.sub powerpc64-unknown-linux- failed


 Anybody know if i can compile memcached for this environment  ?

 Regards.

Uhm, my brain might be misfiring, but I'm pretty sure that means you need
to install GCC. Dunno why the error is weird offhand.


Re: MTU

2010-03-03 Thread dormando
 Hi there

 I am experimenting with using memcache for storing PHP sessions in a
 www cluster environment. But due to latency in network, I have noticed
 slight performance drop in application as opposed to just direct
 access.

 Has anyone played with changing MTU on their network card to minimize
 latency? Hence get faster response?

 http://www.speedguide.net/read_articles.php?id=111

What kind of latency are you getting? what client are you using? how are
you verifying it?


Re: Memcached set is too slow

2010-03-01 Thread dormando
What about memcached? Is it running on localhost or over the network?

Any chance you could get an strace from the memcached side? Have your test
app talk to a separate test instance or something.

Is it exactly or close to 0.10s each time? That's suspicious.

On Mon, 1 Mar 2010, Chaosty wrote:

 sorry my bad full strace is:
  0.85 [28b4f1ab] gettimeofday({1667458660, 1681273911}, NULL)
 = 0 0.13
  0.000202 [28b5f2a3] write(9, set
 b5e03d478da6b7b8e61095fec4eb0..., 8196) = 8196 0.38
  0.91 [28b5f2a3] write(9, pr.parent_id = p.parent_id
 (1)\;s..., 1360) = 1360 0.23
  0.78 [28b5f2c3] read(9, 0x2aef6054, 8196) = -1 EAGAIN
 (Resource temporarily unavailable) 0.14
  0.61 [28b09d6f] poll([{fd=574240058, events=POLLIN|POLLPRI|
 POLLRDNORM|POLLWRBAND|POLLERR|POLLNVAL|0x6000}], 1, INFTIM) = 1
 ([{fd=1933255265, revents=POLLIN|POLLPRI|POLLOUT|POLLHUP|POLLNVAL|
 0x3a00}]) 0.098976
  0.099064 [28b5f2c3] read(9, STORED\r\n..., 8196) = 8 0.18
  0.90 [28b4f1ab] gettimeofday({875772260, 912536929}, NULL) =
 0 0.13


 On Mar 1, 6:24 pm, Chaosty chaostyz...@gmail.com wrote:
  And here is strace
  ...
       0.85 [28b4f1ab] gettimeofday({1667458660, 1681273911}, NULL)
  = 0 0.13
       0.000202 [28b5f2a3] write(9, set
  b5e03d478da6b7b8e61095fec4eb0..., 8196) = 8196 0.38
       0.91 [28b5f2a3] write(9, pr.parent_id = p.parent_id
  (1)\;s..., 1360) = 1360 0.23
       0.78 [28b5f2c3] read(9, 0x2aef6054, 8196) = -1 EAGAIN
  (Resource temporarily unavailable) 0.14
       0.61 [28b09d6f] poll([{fd=574240058, events=POLLIN|POLLPRI|
  POLLRDNORM|POLLWRBAND|POLLERR|POLLNVAL|0x6000}], 1, INFTIM) = 1
  ([{fd=1933255265, revent$
       0.099064 [28b5f2c3] read(9, STORED\r\n..., 8196) = 8 0.18
       0.90 [28b4f1ab] gettimeofday({875772260, 912536929}, NULL) =
  0 0.13
   ...
 
  by strace read took almost 0.10 seconds..
 
  On Mar 1, 6:13 pm, Chaosty chaostyz...@gmail.com wrote:
 
   The tests were performed by modifying the set method like this:
 
           public static function set($key, $item, $exp = 60) {
                   $benchmark = Profiler::start ('Memory', 'Set: '. $key);
                    $return = self::$instance->set ($key, $item, $exp);
                   Profiler::stop ($benchmark);
                   return $return;
           }
 
    Then I figured out that almost each set costs us ~0.79
   seconds
   but the big key costs ~0.099359 seconds
 
   this is trace info with: truss -faedD -o /phplog
   ...
   76982: 2.431621337 0.14247 gettimeofday({1267456195.261569 },0x0)
   = 0 (0x0)
   76982: 2.431709896 0.14806 gettimeofday({1267456195.261658 },0x0)
   = 0 (0x0)
   76982: 2.431964677 0.45257 write(9,set
   b5e03d478da6b7b8e61095fec4eb...,8196) = 8196 (0x2004)
   76982: 2.432037871 0.17600 write(9,pr.parent_id = p.parent_id
   (1);...,1360) = 1360 (0x550)
   76982: 2.432097655 0.15644 read(9,0x2aedb054,8196) ERR#35
   'Resource temporarily unavailable'
   76982: 2.532011750 0.099864927 poll({9/POLLIN},1,-1) = 1 (0x1)
   76982: 2.532070696 0.16482 read(9,STORED\r\n,8196) = 8 (0x8)
   76982: 2.532135509 0.14248 gettimeofday({1267456195.362084 },0x0)
   = 0 (0x0)
   ...
   seems here's the problem: 76982: 2.532011750 0.099864927 poll({9/
   POLLIN},1,-1) = 1 (0x1)
 
   On Feb 28, 3:41 am, dormando dorma...@rydia.net wrote:
 
How are you performing the test? Is memcached over localhost or over the
network?
 
If you can reproduce this in isolation I'd be curious to see what
memcached and/or php are waiting on that takes so long (via strace or
similar).
 
On Wed, 24 Feb 2010, me from wrote:
 Adam we are on freebsd 7.2, as i said early we use PECL memcached 
 1.0.0 with libmemcached 0.35, the memcached version is 1.4.4.
 
 We use our framework this is an init method
 
   public static function instance() {
     if (self::$instance === NULL) {
       // Create a new instance
       self::$instance = new Memcached ();
       self::$instance->setOption (Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);
       self::$instance->addServers (array (array ('127.0.0.1', 11211, 100)));
     }
   }

 and this is a set method that works fine with small data and stops
 with data > 100KB

   public static function set($key, $item, $exp = 60) {
     return self::$instance->set ($key, $item, $exp);
   }
 
 Marc, no we don't use persistent you can see it in our init method.
 
 On Wed, Feb 24, 2010 at 8:47 PM, Marc Bollinger 
 mbollin...@gmail.com wrote:
       And are you using persistent connections? There have been a 
 handful of
       threads recently, discussing setting up

Re: Memcached set is too slow

2010-02-27 Thread dormando
How are you performing the test? Is memcached over localhost or over the
network?

If you can reproduce this in isolation I'd be curious to see what
memcached and/or php are waiting on that takes so long (via strace or
similar).

On Wed, 24 Feb 2010, me from wrote:

 Adam, we are on FreeBSD 7.2; as I said earlier, we use PECL memcached
 1.0.0 with libmemcached 0.35, and the memcached version is 1.4.4.

 We use our framework this is an init method

     public static function instance() {
         if (self::$instance === NULL) {
             // Create a new instance
             self::$instance = new Memcached ();
             self::$instance->setOption (Memcached::OPT_DISTRIBUTION, 
 Memcached::DISTRIBUTION_CONSISTENT);
             self::$instance-addServers (array (array ('127.0.0.1', 11211, 
 100)));
         }
     }

 and this is a set method that works fine with small data and stops with data 
  > 100KB

     public static function set($key, $item, $exp = 60) {
         return self::$instance->set ($key, $item, $exp);
     }

 Marc, no, we don't use persistent connections; you can see it in our init
 method.


 On Wed, Feb 24, 2010 at 8:47 PM, Marc Bollinger mbollin...@gmail.com wrote:
   And are you using persistent connections? There have been a handful of
   threads recently, discussing setting up persistent connections with
   PECL::memcached.

   - Marc

 On Wed, Feb 24, 2010 at 9:41 AM, Adam Lee a...@fotolog.biz wrote:
  What kind of hardware and software configurations are you using on the
  client and server sides?
  We have servers doing like 5M/s in and 10M/s out without even breaking a
  sweat...
 
  On Wed, Feb 24, 2010 at 7:35 AM, me from chaostyz...@gmail.com wrote:
 
  We use memcached php extension, (http://pecl.php.net/package/memcached)
 
  On Wed, Feb 24, 2010 at 12:02 PM, Juri Bracchi yak...@gmail.com wrote:
 
  the latest memcache php extension version is 2.2.5
 
  http://pecl.php.net/package/memcache
 
 
 
 
 
  On Wed, 24 Feb 2010 05:09:36 +0300, me from wrote:
   No. Sorry for misunderstanding, its my bad. Its php extension (PECL)
   of version 1.0.0.
  
   Memcached is 1.4.4
  
   On Wed, Feb 24, 2010 at 4:42 AM, Eric Lambert
   eric.d.lamb...@gmail.com wrote:
  
   PHP5.3, libmemcached 0.35, memcached 1.0.0
  
   Is this really the version of the memcached server you are using
   (1.0.0) If so, that is certainly out-of-date. Latest version is
   1.4.*.
  
   Eric
  
  
   Chaosty wrote:
   We have found that Memcahed::set stores items around 100-200kbs for
   0.10-0.11 seconds, its too slow. Compression is turned off. Any
   suggestions?
  
   PHP5.3, libmemcached 0.35, memcached 1.0.0
  
  
  
 
 
 
 
  --
  awl
 






Re: Memcached used so little memory and CPU

2010-02-27 Thread dormando
 Hi,

 I have a huge vBulletin forum and I am now trying memcached, as I was
 using XCache before.

 But I wonder why memcached is using so little memory and CPU, and does
 not reduce the load much compared to XCache.

 This is my screenshot showing memcached stats:
 http://i45.tinypic.com/2eqcqck.png

 Is there any setting I should change so that memcached saves me more
 resources (lower load)?

 Thanks

The only memcaching stuff I've seen vbulletin even have code for is their
datastore crap. Which is a tiny subset of data related to languages,
preferences, templates. I sorely wish VB would cache user objects/post
data/whatever, but I don't think it's possible without patching.


Re: Fix for Cache::Memcached breakage w/ unix domain support caused by IPv6 changes.

2010-02-20 Thread dormando
Er, I'm heading out the door atm, I'll get it when I get back in a few
hours. Thanks for the patch!

On Sat, 20 Feb 2010, Nathan Neulinger wrote:

 Haven't heard anything back on this... any chance of getting this
 patch applied to the CPAN perl module for memcached? Brad indicates
 that he no longer maintains it and to ask here about getting the patch
 applied. As it currently stands, the module breaks web applications
 that use it due to the spurious warnings it generates.

 -- Nathan

 On Feb 10, 8:51 am, Nathan Neulinger nathan.neulin...@gmail.com
 wrote:
  I just happened to bump up one of my boxes to latest Cache::Memcached
  version from CPAN and noticed that it breaks unix domain socket support - or
  rather, dumps a warning. Here's a fix - it just moves some udp/tcp specific
  code into the non-unixdomain code block.
 
  Technically, to resolve this warning, only the IPv6 line needs to be moved,
  but there's no point in having the pattern match and proto lookup run
  outside the block since they aren't used, and putting inside the block
  doesn't change code path for non unixdomain sockets.
 
  Thanks!
 
  -- Nathan
 
   mc-ipv6-unixdomain.diff (2K attachment)



Re: Memcached and EC2

2010-02-12 Thread dormando
 Thanks, but how does that work exactly?

 I've been trying to find information about it, and what I've pieced together 
 so far is that if you are running PHP as an Apache module, and if you are 
 making your own extension in C, then you get access to
 methods that allow you to store stuff in some sort of shared memory in the 
 Apache process, that you can retrieve between pageviews, such as socket 
 handles or initialized arrays or similar. But there's
 nothing in the PHP language itself to do something like that, you have to 
 make your own C module. Correct?

Yeah. special PHP mumbo jumbo.

 And that then, in turn, means that the answer to the original question is 
 that adding more servers doesn't slow down the app, if they're using 
 persistent connections, because client initialization will
 only happen once? (Given the prerequsities above, and given that they 
 actually make sure they only init once like a comment on that memcached 
 manual page describes)

 And is this true for both PHP clients?

I recall a thread a while back that the ketama/consistent hashing stuff
with the old library redid all of the calculations on every request
regardless of persistent settings.

Don't recall if that was ever fixed or properly tested, but the
complaining user had his performance problems go away when he stopped
using that.

I assume pecl/memcached isn't quite that stupid, but someone might want to
verify for PHP's sake.
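
For pecl/memcached the usual shape (a sketch, not verified against every
version) is a persistent_id plus a guard so the server list and any
consistent-hashing setup are only built once per worker:

  <?php
  $m = new Memcached('mypool'); // persistent_id: pool reused across requests
  if (!count($m->getServerList())) {
      // Only runs the first time this worker creates the pool.
      $m->setOption(Memcached::OPT_DISTRIBUTION,
                    Memcached::DISTRIBUTION_CONSISTENT);
      $m->addServers(array(array('10.0.0.1', 11211),
                           array('10.0.0.2', 11211)));
  }
  ?>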

-Dormando


Re: Preferred client list?

2010-02-06 Thread dormando
Worth noting that 'cmemcache' for python has been deprecated. Lots of
django users have been bitten by that.

Also probably worth repeating that 'libmemcache' is deprecated;
'libmemcached' is what you want. The names being so close kinda sucks.

Perl is a little harder. Cache::Memcached for pure perl.
Memcached::libmemcached is the actual libmemcached-based library.
Cache::Memcached::libmemcached is a wrapper around the former to be more
compatible with the former-former. Ease migration or whatever.

-Dormando

On Sat, 6 Feb 2010, Brian Moon wrote:

 Yeah, I plan to warn them about pecl/memcache. I am in the process of moving
 off of it right now.

 I was just going include a line like this at the end of my client list.

  * Plus MySQL UDF, .NET, C#, Erlang, Lua, and more

 So, I just wanted to hit the languages most likely in use by attendees at the
 MySQL conference.

 Brian.
 
 http://brian.moonspot.net/

  On 2/6/10 3:46 AM, Henrik Schröder wrote:
  I would suggest Dustin's SpyMemcached for Java.
 
  For C#/.Net I'm going to shamelessly plug my own: BeITMemcached. Google
  Analytics says it's very popular in China! :-D
 
  Honestly though, I have no idea which one is more popular, my client or
   the Enyim one. I definitely hope no one uses the old java port though. On
  the other hand, the total amount of C# users are probably much, much
  smaller than the amount of PHP or Perl users so I don't know how
  meaningful it is to pick one. Also, the PHP situation is more like that
  you want to warn people about using PECL/Memcache because it's not
  feature-complete, right? For the other languages, I'm pretty sure there
  are several feature-complete clients that have good performance, so the
  need to pick a favourite shouldn't be as strong.
 
 
  TL;DR - Pick me! Pick me!
 
 
  /Henrik
 
   On Sat, Feb 6, 2010 at 06:59, Brian Moon br...@moonspot.net wrote:
 
  I am working on modifying my memcached presentation for MySQL
  conference. It was previously slanted toward PHP for PHP
  conferences.  I am hoping to make it more general and wanted to know
  which clients are preferred for the different languages out there.
I have:
 
  C/C++ - libmemcached
  PHP - PECL/memcached
   Perl - Cache::Memcached
 
  Need:
 
  Python ?
  Java ?
  Ruby ?
 
  Thanks!
 
  --
 
  Brian.
  
  http://brian.moonspot.net/
 
 



Re: get bytes?

2010-01-19 Thread dormando
How're you serializing the list?

In cases where I've had to work with that, we ensure the value is a flat
list of packed numerics, and run no serialization on them. Then you have
the overhead of a network fetch, but testing values within the list is
nearly free.
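
As a concrete illustration (in PHP rather than the poster's Java;
$newActivityId and the key name are made up, and compression must be off
since prepend can't splice into a compressed value):

  <?php
  $m = new Memcached();
  $m->setOption(Memcached::OPT_COMPRESSION, false); // raw bytes only
  $m->addServer('127.0.0.1', 11211);

  // Newest id goes on the front as 8 raw bytes ('J' = 64-bit, PHP 5.6+).
  $m->prepend('actList:42', pack('J', $newActivityId));

  // Peek at just the first 10 longs without deserializing anything
  // (assumes the list already holds at least 10 entries).
  $raw = $m->get('actList:42');
  $first10 = array_values(unpack('J10', substr($raw, 0, 80)));
  ?>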

-Dormando

On Tue, 19 Jan 2010, jim wrote:

 Hi:

 I've recently started on a project that is utilizing memcached to
 buffer lists of 'long's that represent key values for larger more
 detailed objects to load if needed.  This list will have a maximum of
 10,000 items.

 ex:

 List of longs:
 key : actList value : maximum 10,000 longs sorted chronologically
 based on their 'detailed' object timestamp.

 Detailed entry:
 key : act_id  value : text serialized object representing detailed
 activity object.  id is contained on the 'List of longs' above.

 My question is there are times where we would like to load only the
 first block of entries from the list of longs, say 10 records.  We
 would then look at those 10 records and see if they are new based on
 what a currently displayed list shows and if so grab these new entries
 without pulling all 10,000 across from cache only to utilize the
 relatively small number of new entries on the top of the list.  This
 kind of goes hand in hand with the prepend (or append) operations
 where when new activities arrive we push these new activities into the
 front of the 'List of longs' and it's these new entries that clients
 may be interested in if they do not yet have them.

 My question is, is there a way to do this?  Is there a way to grab
 only X bytes from a value in memcache?  I read over the protocol
 document and it doesn't appear there is.  Is there any interest or
 valid use case that this seems to fill for other users?

 An alternate solution I can see if to store the 'list of longs' as a
 flat list of keys with a naming convention, such as actList_1,
 actList_2, etc.  However, this will obviously lead to an extremely
 long key name in the multi-key 'get' as well as lots of churn in
 managing these objects .. so it appears far less than ideal.  However,
 we have many (200,000) lists of these 10,000 item 'List of longs'
 which leads to pulling loads of data over the wire when a large update
 occurs that needs to be communicated with cache ... it would be much
 more efficient to only go after a certain number of bytes in cache vs.
 the entire cached value.

 Any other thoughts?  Ideas?

 Thank you.
 Jim




Re: get bytes?

2010-01-19 Thread dormando
That's pretty much exactly what we do in some cases.

prepend/append packed bytes so the list is in order. Then we pull the data
and read as much as we need without touching serialization.

If the overhead is a big deal you could fiddle with it a bit. Store 100 or
1,000 items per list and use multiple keys (if you can divide the numbers
by something to be able to do that easily). Or have a main 10k list and
then a separate recent key that you expire once it gets past 100 items
or so. Then you're still occasionally doing the full fetch, but not nearly
as often. Updating both keys ends up being the same prepend operation.
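
Roughly, in the same packed-bytes style as above (the key names and the
100-item cutoff are made up):

  <?php
  $packed = pack('J', $id);
  $m->prepend('actList:42', $packed);        // the full 10k history
  if (!$m->prepend('actList:42:recent', $packed)) {
      $m->set('actList:42:recent', $packed); // recreate after expiry
  }
  // Retire the recent list once it grows past ~100 entries (8 bytes each).
  if (strlen($m->get('actList:42:recent')) > 800) {
      $m->delete('actList:42:recent');
  }
  ?>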

-Dormando

On Tue, 19 Jan 2010, jim wrote:

 At least initially we would use normal java serialization.  Since we
 are talking about using prepend to put the most recent entries in our
 list, we cannot use gzip anymore since it would not gel with
 prepending bytes.

 Did you use a custom serialization scheme?  Or do you mean the binary
 serializer?  Because at some point you HAVE to serialize something to
 a byte[], right?  Did you prepend data at all?  I don't see many
 people talking about utilizing the prepend/append methods within the
 protocol, I'm also trying to figure out why this is.

 Even though it appears the network fetch of even 10,000 longs isn't
 much data, since we have so many of these lists to process it ends up
 becoming a large network hit once you've done it 200k+ times.

 Jim

 On Jan 19, 3:05 pm, dormando dorma...@rydia.net wrote:
  How're you serializing the list?
 
  In cases where I've had to work with that, we ensure the value is a flat
  list of packed numerics, and run no serialization on them. Then you have
  the overhead of a network fetch, but testing values within the list is
  nearly free.
 
  -Dormando
 
  On Tue, 19 Jan 2010, jim wrote:
   Hi:
 
   I've recently started on a project that is utilizing memcached to
   buffer lists of 'long's that represent key values for larger more
   detailed objects to load if needed.  This list will have a maximum of
   10,000 items.
 
   ex:
 
   List of longs:
   key : actList     value : maximum 10,000 longs sorted chronologically
   based on their 'detailed' object timestamp.
 
   Detailed entry:
   key : act_id  value : text serialized object representing detailed
   activity object.  id is contained on the 'List of longs' above.
 
   My question is there are times where we would like to load only the
   first block of entries from the list of longs, say 10 records.  We
   would then look at those 10 records and see if they are new based on
   what a currently displayed list shows and if so grab these new entries
   without pulling all 10,000 across from cache only to utilize the
   relatively small number of new entries on the top of the list.  This
   kind of goes hand in hand with the prepend (or append) operations
   where when new activities arrive we push these new activities into the
   front of the 'List of longs' and it's these new entries that clients
   may be interested in if they do not yet have them.
 
   My question is, is there a way to do this?  Is there a way to grab
   only X bytes from a value in memcache?  I read over the protocol
   document and it doesn't appear there is.  Is there any interest or
   valid use case that this seems to fill for other users?
 
   An alternate solution I can see if to store the 'list of longs' as a
   flat list of keys with a naming convention, such as actList_1,
   actList_2, etc.  However, this will obviously lead to an extremely
   long key name in the multi-key 'get' as well as lots of churn in
   managing these objects .. so it appears far less than ideal.  However,
   we have many (200,000) lists of these 10,000 item 'List of longs'
   which leads to pulling loads of data over the wire when a large update
   occurs that needs to be communicated with cache ... it would be much
   more efficient to only go after a certain number of bytes in cache vs.
   the entire cached value.
 
   Any other thoughts?  Ideas?
 
   Thank you.
   Jim



Re: memcached and access control

2010-01-07 Thread dormando
 Are you suggesting that applications has to handle the scramble buffer
 correctly for each accesses? It seems to me we can obtain credential of
 the client using SASL authentication, without any additional hints.

 If the security map means something like access control list, what we
 are talking about is not fundamentally different.
 The issue is the way to store the properties within the item.

 BTW, Is the storage engine stackable? If not so, it seems to me we will
 face a tradeoff between persistent storage and access controls.

 Am I missing something?

I think you should just grab the latest engine branch and go for it. It's
tracked under trondn's github.com account. Hit up the list for
feedback/etc, but just be forewarned that very little will go into the
core to slow down or increase memory requirements.

However the engine API can be adjusted to better allow the tradeoffs if
necessary. If you really want to blow tons of CPU/RAM on having granular
access controls, you should be able to do it without having to patch
memcached... Unless you need to make significant changes to the protocol,
which again will be very hard since that absolutely has to stay simple.

So yeah. Most of the core devs are speed freaks, and the intent of
memcached is to supply minimal, if any, access control (or authentication)
for speed. That shouldn't stop you from using it as a proper framework if
you absolutely must. It's become enough of a standard that we can accept
this.

-Dormando


Re: memcached and access control

2010-01-07 Thread dormando
 http://github.com/memcached/memcached/tree/engine-pu

 Is it correct branch for the discussion base?

http://github.com/trondn/memcached/tree/engine is the tip. engine-pu is
... not quite master yet.

-Dormando


Re: memcached and access control

2010-01-06 Thread dormando
 Getting down to the item level would be tough to accept due to the overhead
 involved, one would think though.  There may be some ways of getting closer to
 access control though without going down to the item level.

This seems like it'll be solved by an engine dealio. Mapping a user to an
internal instance of the storage system. Sort of like running multiple
instances, *cough*. Getting super granular access levels for a web cache
(more than a few dozen users) would be insane, but if someone really wants
to they could implement a storage engine for it.

It'd be incredibly inefficient on memory, compared to keeping the number
of users down or even running multiple instances.

-Dormando


Re: memcached and access control

2010-01-06 Thread dormando
  It'd be incredibly inefficient on memory, compared to keeping the number
  of users down or even running multiple instances.

 Only if you were trying to go all the way down to the item level.  It's
 possible to have groups of slabs that are dedicated to one label/auth or
 something like that, right?

A few dozen users, yeah. Giving each registered user on a website his own
private cache is just silly. Maybe in ten years when terabytes of ram are
cheap.


Re: Cache::Memcached::GetParserXS

2010-01-05 Thread dormando
On Mon, 4 Jan 2010, Jim Spath wrote:

 The Perl module Cache::Memcached::GetParserXS is automatically used by
 Cache::Memcached when it's available.

 But Cache::Memcached::GetParserXS is version 0.01 and was created in 2007 ...
 it's either perfect or has been abandoned.

 Is it still a good idea to install Cache::Memcached::GetParserXS?

You probably want one of the libmemcached wrappers instead. It's not
entirely unused but it's essentially abandonware.

-Dormando




Re: Dump Cache

2009-12-30 Thread dormando
 Dumping Cache is a feature that is discussed in details. I agree that
 dumping cache to keep it warm may not be the best idea. However, I
 recently got into the situation where we lost the database and we lost
 the backup of the database. So, the only thing we had was running
 memcached (please stop laughing, it's only a little funny)... So I had
 to dump the content of memcached to attempt to recreate the data

 I used the tool described here: 
 http://lists.danga.com/pipermail/memcached/2007-April/003905.html

 But the dump was painful as I had to dump chunks and then delete them
 from memcached so that I can dump other chunks.

 Is the feature to dump cache still being considered?

Future storage engines might support it... but people should try really,
really, really hard to never need it.


Re: Libevent 2.0

2009-12-29 Thread dormando
On Tue, 29 Dec 2009, MGMega wrote:

 Libevent 2.0 seems to be 100-200% faster than 1.4 based on
 http://www.provos.org/index.php?/archives/61-Small-Libevent-2.0-Performance-Test.html
 .  Is it recommended to use it?  I realize it's in alpha but I didn't
 see any errors so far on a test box and does memcache take advantage
 of evbuffer_add_reference()?


No one's trying it on the developer end so far as I know...

The speedup here is about libevent's built in httpd. Using a new buffer
interface/etc. Memcached wouldn't be using any of that, as it's already
doing optimized IO internally.

Unless you're bigger than facebook, odds are you'll be fine with libevent
1.4 and the default threadcount with memcached.

-Dormando


Re: measuring memcache efficiency

2009-12-04 Thread dormando
 Thanks Matt.  I did look a bit through the 'stats slabs' command, but
 perhaps I'm not interpreting it correctly.  In the most basic test,
 when I put in a 100 byte object, I'm seeing a slab being created with
 a chunk size of 176.  However, 'stats sizes' shows me one item of
 '192'.  So there's part of my confusion...am I using 176 bytes for the
 object or 192?

 The second part of my confusion is the ability to actually see that
 100 byte object.  If instead of 100 bytes, I use 150, I'm not seeing
 any difference in the output of 'stats slabs' or 'stats sizes'.
 Obviously I can do these contrived tests and know what it is I'm
 putting into the cache, but I'm concerned that when it moves into a
 production setting I won't know the exact size of all the objects in
 the cache.  I'm using server version 1.2.8 at the moment.

 Am I reading these stats incorrectly?

 Any detailed help would be really appreciated.

 Thanks so much.

An item size is the value length + the key length + pointers + bytes to
store the length + CAS header + a couple terminators/misc things. I don't
have the exact item overhead offhand but will look it up and put it in the
wiki.

You can easily calculate your memory overhead on a slab:

STAT 3:chunk_size 152
STAT 3:chunks_per_page 6898
STAT 3:total_pages 291
STAT 3:total_chunks 2007318
STAT 3:used_chunks 2007310
STAT 3:free_chunks 8
STAT 3:free_chunks_end 0
STAT 3:mem_requested 271013713

chunk_size * chunks_per_page is the amount of bytes in a page for this
slab class, which is 1048496 here.

* 291 pages == 305112336 bytes allocated in the slab.

mem_requested is a shorthand that states the amount of memory actual items
(the total length, value + key + misc) take up within a slab.

271013713 / 305112336
~0.89 rounded, i.e. about 89% of the allocated bytes hold actual item data.

So I've lost 11% memory overhead in this slab on top of the ~30 bytes per
item. used_chunks * the standard overhead will give you most of the rest
of the memory overhead. So it's probably closer to 60 megabytes, total?
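
That calculation is easy to script against a live server; a sketch that
parses a raw `stats slabs` dump (assumes a local server new enough to
report mem_requested):

  <?php
  $fp = fsockopen('127.0.0.1', 11211);
  fwrite($fp, "stats slabs\r\n");
  $slabs = array();
  while (($raw = fgets($fp)) !== false) {
      $line = trim($raw);
      if ($line === 'END') break;
      // lines look like: STAT 3:chunk_size 152
      if (preg_match('/^STAT (\d+):(\w+) (\d+)$/', $line, $hit)) {
          $slabs[$hit[1]][$hit[2]] = (int) $hit[3];
      }
  }
  fclose($fp);

  foreach ($slabs as $id => $s) {
      if (empty($s['mem_requested'])) continue;
      $alloc = $s['chunk_size'] * $s['chunks_per_page'] * $s['total_pages'];
      printf("slab %2d: %.1f%% overhead\n",
             $id, 100 * (1 - $s['mem_requested'] / $alloc));
  }
  ?>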

'stats sizes' will throw items into the nearest bucket, as if everything
were cut into 64 byte slabs. The rounding is probably putting it into the
192 byte bucket for you. If your item goes into the 176 byte slab, you're
definitely using 176 bytes or less.

The idea is that we trade off some memory overhead for consistent speed in
O(1) operations. We know ways to improve the efficiency and will be doing
so over the next few months, but I wouldn't say this is horrific at all.
Remove some headers, switch back to malloc or jemalloc/etc and you lose
consistent performance.

The overhead is most pronounced for small keys as well. Consider reducing
your key size, disabling CAS (-C) if you never use it (8 bytes per item),
or reducing the slab growth factor to close down the overhead.

As soon as I get a chance I'm adding some more modes to damemtop so folks
can more easily see slab overhead... The mem_requested stat trond added
and the pointer_size stat lets us trivially calculate overhead given that
you already understand that a stored value is actually key + value for
length, not just the value you're storing.

I'll throw out a side pointer here actually; This sort of knowledge is why
it's nice that memcached can store 0 byte values. If your client allows
you, you can store bits in the flags section, otherwise the existence of
the key itself may be enough data for some things you store.

-Dormando


Re: Slow connection and slow response time from memcached server

2009-12-02 Thread dormando
Just to be clear: when you hit max connections, new connections to
memcached can/will lag until connections are accepted again. Existing
connections won't be slow.

On Wed, 2 Dec 2009, head wrote:

 ok guys, so I am still having this problem

 we are using the php client with persistent connections (this is pooled
 connections in other words) using $memcache_obj->pconnect, which means
 that each php thread has its own connection to memcached
 there are 20 http servers, each with minimum 20 php threads, so this
 is total of 4000 clients (and that is minimum)

 however from about 3700 clients we are seeing a performance decrease;
 the answer from memcached for a select can take up to 1 second, which is
 a lot, am I right?

 the memcached server, from my calculations, is getting about 5000
 requests per second; is this a lot? this is a high performance machine,
 there is absolutely no swapping *and never was*!!! the load average is
 0.17!!

 Maybe I just need to install more memcached servers? but this one
 seems to be doing nothing anyway

 below are the stats and stats settings, the problem is visible right
 now


 stats settings
 STAT maxbytes 3221225472
 STAT maxconns 1
 STAT tcpport 11211
 STAT udpport 11211
 STAT inter NULL
 STAT verbosity 1
 STAT oldest 0
 STAT evictions on
 STAT domain_socket NULL
 STAT umask 700
 STAT growth_factor 1.25
 STAT chunk_size 48
 STAT num_threads 3
 STAT stat_key_prefix :
 STAT detail_enabled no
 STAT reqs_per_event 20
 STAT cas_enabled yes
 STAT tcp_backlog 1024
 STAT binding_protocol auto-negotiate
 END

 stats
 STAT pid 2709
 STAT uptime 1727263
 STAT time 1259772057
 STAT version 1.4.1
 STAT pointer_size 64
 STAT rusage_user 29672.975022
 STAT rusage_system 51701.090239
 STAT curr_connections 3188
 STAT total_connections 20452452
 STAT connection_structures 3623
 STAT cmd_get 5977832958
 STAT cmd_set 665620469
 STAT cmd_flush 0
 STAT get_hits 5355401281
 STAT get_misses 622431677
 STAT delete_misses 0
 STAT delete_hits 0
 STAT incr_misses 0
 STAT incr_hits 24302362
 STAT decr_misses 0
 STAT decr_hits 0
 STAT cas_misses 0
 STAT cas_hits 0
 STAT cas_badval 0
 STAT bytes_read 403401321019
 STAT bytes_written 971168343526
 STAT limit_maxbytes 7516192768
 STAT accepting_conns 1
 STAT listen_disabled_num 8554
 STAT threads 3
 STAT conn_yields 0
 STAT bytes 198980367
 STAT curr_items 1077436
 STAT total_items 665620605
 STAT evictions 0
 END



Re: Slow connection and slow response time from memcached server

2009-12-02 Thread dormando
It's forking and you're not following children.

Just attach to a runing process via strace -p, run it for 5 seconds, ^C,
gzip, put it somewhere

On Wed, 2 Dec 2009, head wrote:

 hmm, I started memcached with strace, but the output file is only 6k
 in size and is not increasing; I mean, no data is written to the file.
 Maybe I did something wrong. I started strace this way:

 strace -t -o /strace.log /usr/local/bin/memcached -d -u nobody -m 7168
 -t 2 -P /var/run/memcached.pid -c 1 -v

 any ideas?



Stable release 1.4.4

2009-11-26 Thread dormando
New stable release 1.4.4 is up at http://memcached.org/

Release notes are available at
http://code.google.com/p/memcached/wiki/ReleaseNotes144

This release is a minor bugfix release to restore backwards compatibility
to a number of clients. Clients should be updated to the latest protocol
as soon as possible, but we want to not surprise people in the middle of
the 1.4 series as much as possible.

Thanks to dustin/matt for the work on this release.

-Dormando


Memcached advocacy

2009-11-11 Thread dormando

Yo,

http://code.google.com/p/memcached/wiki/Advocacy

mostly stolen from:

http://wiki.postgresql.org/wiki/BoothCheckList
http://wiki.postgresql.org/wiki/AdvocacyGuides

Back in July-ish Brian attempted to start this conversation. Let's finish
it.

I'll be putting in all of the notes I feel are fair to the project, and
were useful to me as someone who has organized booths.

If you are a commercial entity and wish to perform advocacy with us or for
us, *please* respond to this thread with your thoughts. *Please* be
constructive. We've been waiting to hear from you folks for a long time.
The last few conventions were turbulent, to say the least. With the
addition of these pages this should be completely avoidable in the future.

If you're *not* commercial folk and would be interested in helping
advocate memcached in conferences far and wide, please help us flesh out
these rules and tips.

Thanks,
-Dormando


Re: New memcached homepage!

2009-11-07 Thread dormando
Rabbits :) (fast, `scalable`, stupid).

On Sat, 7 Nov 2009, Clint Webb wrote:

 I like it.   I really like the logo, and I like the banner image of the pack 
 of animals (what are those things?).


 On Sat, Nov 7, 2009 at 3:39 PM, dormando dorma...@rydia.net wrote:

   Ah, I forgot to note.

   The users page is open for requests. If you're a bigish/notable site 
 and
   would wish to be linked there, let me know by tomorrow! Otherwise I'm
   picking a bunch out of a hat.

   Thanks,
   -Dormando

 On Fri, 6 Nov 2009, dormando wrote:

 
 
  Coinciding with the release of 1.4.3 tomorrow, we will be retiring the old
  and tired http://danga.com/memcached homepage (archiving it, I guess). It
  will redirect to http://memcached.org, which will have this new site:
 
  http://memcached.dormando.com/
 
  ... We've been passing the layout/ideas around, and it's time to open it
  up to the rest of you folks. We'll take
  comments/requests/updates/complaints for a day and, unless something
  serious comes up, the site will go live tomorrow.
 
  Let us know especially if you love or hate the logo/banner. They were
  redesigned to be more unique and ... interesting :) Beat out that old
  green M thing and give us a mascot to work with.
 
  This is the final layout, except for two changes... Adding a sorted list
  of committers to the bottom of the About page, and changing the urls to be
  '/about/' instead of '/about'
 
  have fun,
  -Dormando
 




 --
 Be excellent to each other



Stable release 1.4.3

2009-11-07 Thread dormando

Hey Hey,

As mildly promised, 1.4.3-final is out as of a few moments ago. Alas, a few
bug fixes did not make it into the release, but we managed to get almost
all of them in :)

http://code.google.com/p/memcached/wiki/ReleaseNotes143

... and at the same time, the new community homepage is up at:
http://www.memcached.org/

Later tonight or tomorrow, assuming no major issues turn up, the old
site at www.danga.com/memcached/ will redirect to www.memcached.org

The next release is due in 5-6 weeks, depending on how much progress we
make around the holidays. The wiki will be getting a refresh over November
as well.

If you have feedback, want to help with documentation, have bugs, etc, you
know what to do :)

have fun,
-Dormando


Re: New memcached homepage!

2009-11-06 Thread dormando

Ah, I forgot to note.

The users page is open for requests. If you're a bigish/notable site and
would wish to be linked there, let me know by tomorrow! Otherwise I'm
picking a bunch out of a hat.

Thanks,
-Dormando

On Fri, 6 Nov 2009, dormando wrote:



 Coinciding with the release of 1.4.3 tomorrow, we will be retiring the old
 and tired http://danga.com/memcached homepage (archiving it, I guess). It
 will redirect to http://memcached.org, which will have this new site:

 http://memcached.dormando.com/

 ... We've been passing the layout/ideas around, and it's time to open it
 up to the rest of you folks. We'll take
 comments/requests/updates/complaints for a day and, unless something
 serious comes up, the site will go live tomorrow.

 Let us know especially if you love or hate the logo/banner. They were
 redesigned to be more unique and ... interesting :) Beat out that old
 green M thing and give us a mascot to work with.

 This is the final layout, except for two changes... Adding a sorted list
 of committers to the bottom of the About page, and changing the urls to be
 '/about/' instead of '/about'

 have fun,
 -Dormando



Memcached 1.4.3-rc2

2009-11-02 Thread dormando

http://code.google.com/p/memcached/wiki/ReleaseNotes143rc2

Bug reported by Tomash, fixed by trond, reviewed by dustin, and now we
have a new tarball. Thanks, and please continue testing :)

-Dormando


Re: Using memcached as a distributed file cache

2009-11-02 Thread dormando

You could put something like varnish in between that final step and your
client.

So the key is pulled in, the file is looked up, then the file is fetched
*through* varnish. Of course, I don't know offhand how much work it would
be to make your app deal with that fetch-through scenario.
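
For what it's worth, the fetch-through might look roughly like this (just
a sketch; the port is varnish's default listener, and the paths are
hypothetical):

  client  -> app:               GET /stream?key=ONE-TIME-KEY
  app:                          validate key, map it to /files/abc123.mp3
  app     -> varnish (:6081):   GET /files/abc123.mp3
  varnish:                      hit  -> serve from memory
                                miss -> fetch from the file backend, cache, serve
  app     -> client:            relay the response body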

Since these files are large memcached probably isn't the best bet for
this.

On Mon, 2 Nov 2009, Jay Paroline wrote:


 I'm not sure how well a reverse proxy would fit our needs, having
 never used one before. The way we do streaming is a client sends a one-
 time-use key to the stream server. The key is used to determine which
 file should be streamed, and then the file is returned. The effect is
 that no two requests are identical, and that code must be run for
 every single request to verify the request and lookup the appropriate
 file. Is it possible or practical to use a reverse proxy in that way?

 Jay

 Adam Lee wrote:
  I'm guessing you might get better mileage out of using something written
  more for this purpose, e.g. squid set up as a reverse proxy.
 
  On Mon, Nov 2, 2009 at 4:35 PM, Jay Paroline boxmon...@gmail.com wrote:
 
  
   I'm running this by you guys to make sure we're not trying something
   completely insane. ;)
  
   We already rely on memcached quite heavily to minimize load on our DB
   with stunning success, but as a music streaming service, we also serve
   up lots and lots of 5-6MB files, and right now we don't have a
   distributed cache of any kind, just lots and lots of really fast
   disks. Due to the nature of our content, we have some files that are
   insanely popular, and a lot of long tail content that gets played
   infrequently. I don't remember the exact numbers, but I'd guesstimate
   that the top 50GB of our many TB of files accounts for 40-60% of our
   streams on any given day.
  
   What I'd love to do is get those popular files served from memory,
   which should alleviate load on the disks considerably. Obviously the
   file system cache does some of this already, but since it's not
   distributed it uses the space a lot less efficiently than a
   distributed cache would (say one popular file lives on 3 stream nodes,
   it's going to be cached in memory 3 separate times instead of just
   once).  We have multiple stream servers, obviously, and between them
   we could probably scrounge up 50GB or more for memcached,
   theoretically removing the disk load for all of the most popular
   content.
  
   My favorite memory cache is of course memcache, so I'm wondering if
   this would be an appropriate use (with the slab size turned way up,
   obviously). We're going to start doing some experiments with it, but
   I'm wondering what the community thinks.
  
   Thanks,
  
   Jay
  
 
 
 
  --
  awl



Re: Using memcached as a distributed file cache

2009-11-02 Thread dormando

 You could also redirect the client to the proxy/cache after computing the
 filename, but that exposes the name in a way that might be reusable.

perlbal is great for this... I think nginx can do it too: internal
reproxy. The server returns a header telling the load balancer which URL
to re-run the request against. Mostly it's used for looking up mogilefs
addresses, but it could also be used to redirect files through caches and
such.
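
As I remember it, the backend response in perlbal's case looks roughly
like this (a sketch; the header name is from memory, and the cache host
and path are hypothetical):

  HTTP/1.0 200 OK
  X-REPROXY-URL: http://cache1.internal/files/abc123.mp3

The balancer discards that response body, fetches the given URL itself,
and streams the result back to the client, so the internal URL is never
exposed.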


dormando's awesome memcached top v0.1

2009-10-29 Thread dormando

Yo,

I couldn't sleep, so:
http://github.com/dormando/damemtop
(or: http://consoleninja.net/code/memcached/damemtop-0.1.tar.gz)

Early release of a utility I've been working on in the last few days. Yes,
sorry, I'm aware this makes /four/ memcached top programs. So, I had to
make mine awesome.

In order to be truly awesome, I need to spend another day working on it to
add a few things, but it's in a state now where it can be useful to
people. So, up it goes, and I'll take feedback/ideas/patches.

In short, it's a top utility which lets you take any stat memcached spits
out from 'stats', 'stats items', or 'stats slabs', and display it in a
'top'-like interface, with totals, averages, etc. It also supports
computed columns (hit_rate, fill_rate, and soon many more). Finally, you
can choose an arbitrary column to sort the output by. I have more
memcacheds than will fit on a stretched-out terminal, so it's nice to be
able to sort :)
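
The computed columns are presumably derived from the standard 'stats'
counters, something along these lines (my guess at the definitions, not
necessarily what ships in the tool):

  hit_rate  = get_hits / (get_hits + get_misses)
  fill_rate = bytes / limit_maxbytes

All four counters come straight out of the 'stats' output.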

In order to change the display around you'll need to edit the
damemtop.yaml file (an example is included). You'll also need the AnyEvent
and YAML CPAN modules installed to run it at all. I'm well aware of the
pain of installing extra modules, but these two are in very common use,
and AnyEvent allows the utility to scale to hundreds of instances. It
takes 0.2 seconds to poll every single stat and render the display against
TypePad's entire cluster.

Upcoming ideas/features:
- a '--helpme' mode that makes a big YAML dump folks can share with the
  mailing list to expedite assistance.
- many more computed columns.
- a drill down mode for exploring a single or custom set of instances.
- a slabs mode for easy analysis and aggregation of the individual slab
  stats.
- online config editing.
- more formatting: shorteners for large numbers (bytes -> K -> M -> G,
  etc.).
- better docs, more fleshed out config loader.
- scrolling output modes.
- multi-cluster support (switch views between groups of servers)
- rolling averages for some views.
- latency monitor (testing a bunch of commands)
- YAML output/input modes for logging, output into monitoring/graphing
  systems, input into multiple 'damemtop' listeners.
- pretty colors.
- reorganize code a little. It got messier than I like :/

Dunno... stuff? Maybe a quickie mode that can give you warnings or notes
about your configuration based on current stats? I'll work on this for a
few hours each week and kick out a new version for a month or so. I don't
expect (nor want) it to reach the complexity of something like innotop.

The intent is for this utility to replace the 'scripts/memcached-tool'
program and be distributed with memcached itself.

have fun,
-Dormando


Last call for 1.4.3 (and apologies for the spam)

2009-10-29 Thread dormando

Hey,

We're doing a bunch of bug fixin' and issue killin' in advance of an
early release of 1.4.3-rc1, which is scheduled for this Saturday, October
31st (ooh, spooky!). 1.4.3-final will be out a week after that, possibly
earlier.

Apologies for the issue spam, please bear with us for two more days while
we empty the queue :)

If you have any bugs, requests, etc, now's the time to speak up!

-Dormando


