Re: noreply error

2009-10-23 Thread dormando
Try it without the '0' in there. The delete expiration was (the only?)
incompatible change from 1.2.8 to 1.4.0. It's gone now.
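For clients still emitting the 1.2-style command, the difference is just the dropped expiration argument. A minimal sketch of the two wire formats (the helper function is ours, not part of any client library):

```python
def delete_cmd(key, legacy_server=False, noreply=False):
    # memcached 1.4.x dropped delete's expiration-time argument;
    # "delete <key> 0 noreply" now draws an ERROR, as in the log below.
    parts = ["delete", key]
    if legacy_server:
        parts.append("0")  # the "time" argument 1.2.x still accepted
    if noreply:
        parts.append("noreply")
    return " ".join(parts) + "\r\n"
```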

On Fri, 23 Oct 2009, Brian Hawkins wrote:

 This is what the log shows

 28 delete key 0 noreply
 28 ERROR

 Not sure I can expand any further on that.

 Brian

 On Fri, Oct 23, 2009 at 11:06 AM, Dustin dsalli...@gmail.com wrote:


   On Oct 23, 9:43 am, brianhks brianh...@gmail.com wrote:
downloaded and compiled 1.4.2 and it didn't work.
1.2.8 however works just fine.

  Please expand on "didn't work."  We'd like to fix whatever is not
 working.





Re: Memcached Questions

2009-10-22 Thread dormando

  You can figure based on your sysctl settings, how much memory a tcp socket
  will use by default or with stuff being written to it. There're lots of
  sites that explain those in more depth. An idle connection can use around
  4-12k per.

 Would using UDP connections instead of TCP be a good way to eliminate
 that overhead?

It can help or hurt. Depends a lot on the size/type of data you're
fetching. Usually it's not a big enough deal to be worth worrying about,
and idle connections can use a lot less than that too.


Re: Memcached Questions

2009-10-21 Thread dormando

 1. How can we get memcached to use more of the allocated memory? On
 machines where I'm setting it to use 6886 megs it always loses about 1g
 that it never uses.

What do you mean by this? Are you comparing the -m limit to the 'bytes'
stat? limit_maxbytes - bytes will be close to what your slab overhead is.
If this value is high you might need to restart memcached or tune slab
sizes. See the FAQ and list archives for more information.
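One way to eyeball that overhead is to parse the raw `stats` output yourself. A sketch (the stat names are memcached's real keys; the function is a hypothetical helper, not part of any client):

```python
def slab_overhead(stats_text):
    # Parse "STAT <name> <value>" lines from a raw `stats` response and
    # return limit_maxbytes - bytes: roughly the slab overhead described
    # above (memory reserved for pages but not holding item data).
    stats = {}
    for line in stats_text.splitlines():
        fields = line.split()
        if len(fields) == 3 and fields[0] == "STAT":
            stats[fields[1]] = fields[2]
    return int(stats["limit_maxbytes"]) - int(stats["bytes"])
```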

 2. What is a good number to allocate on a machine based on its total
 ram? I want to use as much memory as possible since the machine is
 dedicated only to memcached. On a machine with a total of 7872040
 kbytes I'm starting it with 6836 megs, which comes out to a total of
 7168065536 bytes, leaving in theory 872040K to deal with the
 OS, but I've still experienced times where the kernel kills the process
 because it runs out of memory.

You need to leave enough memory left over for the OS to handle TCP
sockets, filesystem buffers, shared memory, thread memory, etc.

 3. how much should i allocate of memory per connection?

 I'm using Amazon EC2 so im trying to get as much performance out of
 each machine as possible. My instances at this point are using about
 6800 connections at peaks.

You can figure based on your sysctl settings, how much memory a tcp socket
will use by default or with stuff being written to it. There're lots of
sites that explain those in more depth. An idle connection can use around
4-12k per.
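As a rough sketch of that arithmetic: the middle value of the tcp_rmem/tcp_wmem sysctl triples is the default buffer size an active socket can grow toward. The numbers in the test are illustrative defaults, not recommendations, and the function is a made-up helper:

```python
def per_socket_budget(tcp_rmem, tcp_wmem):
    # tcp_rmem / tcp_wmem hold "min default max" triples, e.g. the
    # contents of /proc/sys/net/ipv4/tcp_rmem.  Summing the default
    # read and write buffer sizes gives a rough per-connection budget
    # for a socket with data in flight; idle connections use far less.
    r_default = int(tcp_rmem.split()[1])
    w_default = int(tcp_wmem.split()[1])
    return r_default + w_default
```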

Looks like you have a good number of evictions and some amount of slab
overhead. You should look closer at `stats items` and `stats slabs` to get
a better picture of what size items are causing the biggest issue.

-Dormando


Re: memcached failover solution?

2009-10-19 Thread dormando

 On Mon, Oct 19, 2009 at 03:34, dormando dorma...@rydia.net wrote:

 http://blogs.sun.com/trond/date/20090625
 ^ client side replication.

 I like this and feel it's more powerful, since it scales past two servers
 implicitly, and you can enable/disable it per key or key type. So instead
 of halving your effective cache, you can make a decision that some bulk of
 data is easier to recache than others.


 Ah ok, I see. It's an improvement over failover with consistent hashing, 
 because when you fail over, your data already exists on the failover server, 
 on a per-key basis, at the cost of storing the item
 multiple times, every time. Hm, that would work extremely well together with 
 parallel requests, which is also something I'd like to add to my client. And 
 as Trond writes, he uses it for the binary
 protocol only so he can do quiet sets that work properly.

 I'm still partially against failover because of the synchronization issues 
 with automatic recovery from failover, but adding replication really only 
 affects automatic failover and makes that case
 slightly nicer, so it's definitely something to consider if I start adding 
 failover support. I'll add it to my todo-list in any case, there's a bunch of 
 stuff I'd like to implement if I could find the
 time and the urge to do it. :-)

Heh... There're a few use cases that get me. One is new users who're
bolting memcached on to deal with horrific backend response times/etc. If
they're not too huge they can reduce cache efficiency by half and not
bankrupt themselves. Then over time remove keys from the replicated
scenario.

For larger folks it'd hopefully be used sparingly.

-Dormando


Re: memcached failover solution?

2009-10-18 Thread dormando



On Sun, 18 Oct 2009, Henrik Schröder wrote:

 On Sun, Oct 18, 2009 at 00:47, dormando dorma...@rydia.net wrote:

   Anyone interested in getting one of the windows clients to support
   libmemcached, or at least the same replication method that the windows
   client uses?


 What do you mean? What would be required for this?

http://blogs.sun.com/trond/date/20090625
^ client side replication.

I like this and feel it's more powerful, since it scales past two servers
implicitly, and you can enable/disable it per key or key type. So instead
of halving your effective cache, you can make a decision that some bulk of
data is easier to recache than others.

-Dormando


Re: Memory/Item sizes in 1.4.2

2009-10-14 Thread dormando

Hey,

 1) You can set the item size to be larger than the configured memory
 allocation:

 ./memcached -m 16 -I 128m

 We should at least warn that the two values are somewhat incongruous. Maybe
 add this to the recent warnings discussion for cmdline arguments?

I tossed this into that long e-mail I sent a few days back; I was working
on adding warnings over memory allocation, when I found that they were
impossible to write without being confusing, as the memory limit is
ignored in a few ways. Once I pull the trigger on that change, it should
be able to warn in more useful ways when you're obliterating the memory
limit.

 On a related note, should we mention that this will also increase the number
 of slabs up to the size of the item size you specify?

Yes. If you run memcached with -v after editing, it's obvious. I was going
to mention doing that in the wiki docs, and just added it to the manpage.

 2) Could we add into the output of the help that the limit for max item size
 is 128mb; you get a warning when you try to set something larger than this,
 but something in the output from -h would be nice.

Good idea. This made me realize that I forgot to update the manpage as
well. I just pushed a commit to my 'for_143' branch which adds this + the
manpage update.

 3) I've updated the relevant docs in
 http://dev.mysql.com/doc/refman/5.1/en/ha-memcached.html and the included FAQ
 on the configurable item size, but the FAQ on the project wiki still shows the
 1MB limits and not the recent changes in 1.4.2. I don't seem to have rights to
 update the pages, or I would do it myself ;)

I haven't circled back to do that yet, no :) It's not a feature we'd
generally encourage using, so I wasn't too excited about documenting it.

On a more serious note, if you want to be able to update the official
wiki, just e-mail me privately with your google account. At any moment
I'll be doing some significant damage to the wiki, so just let me know
what you're working on first so we don't collide :)

Thanks,
-Dormando


Re: Memcached user warnings

2009-10-14 Thread dormando

 This would help weed out the basic configuration blunders.

 Could these also show up in the stats display?  At the end?

 Warnings:

 100% Misses,  keys may be wrong.
 Memory waste over 50%.
 Threads greater than CPUs.
 Connections at or near the limit.
 Process swapping,  size set too large for system memory.

Man, I wish! These specific errors are more likely usable from a perl
script you'd run against the server, since those issues are plausibly
gleaned from the instance. Except for swap I guess.

$ ./whys_my_shit_acting_weird 127.0.0.1:11211


Memcached 1.4.2 released

2009-10-12 Thread dormando

Yo,

The memcached developers are proud to announce the latest stable release
of the most popular Free enterprise memcached distribution on the planet.

http://code.google.com/p/memcached/wiki/ReleaseNotes142

See further notes from the 1.4.2-rc1 announcement:
http://groups.google.com/group/memcached/browse_thread/thread/40722d268af05f2f

Enjoy! As usual, report bugs to us via the mailing list or issue tracker
on google code. You crash it, we... make it not do that anymore. As usual,
a big thanks to all the folks who work on the software, and support our
users on the mailing list, IRC, and meatspace.

-Dormando


Memcached Release Candidate 1.4.2-rc1

2009-10-08 Thread dormando

Yo,

PLEASE MAKE LOTS OF TESTING!

http://code.google.com/p/memcached/wiki/ReleaseNotes142rc1

This release comes again with many bugfixes, and a small number of useful
features, described briefly in the release notes above. 1.4.2 final will
be up in 2-3 days, based on response to testing. We feel that this release
is of high quality.

Linux hugetlbfs support is cool (eke a few % more speed out of memory
accesses). The new evictions_nonzero per-slab counter is also cool, as you
now have a separate counter for evictions that had an explicit expiration
time.

However, this release comes with a loaded handgun. Please do not point it
at your feet. Or put it in your mouth. Or point it at your friends.

The [in]famous 1MB item size limit is now a start time tunable (-I)

As described in the release notes, it accepts a few formats:
-I 2048
^ bytes

-I 128k
^ kilobytes

-I 2m
^ megabytes

This doesn't *just* raise the item limit. What this actually does is raise
the size of the slab page. Previously, the slabber worked like:

Hello! I need 50 bytes of memory for this fancy value I have here.

Okay, have a chunk from slab 1. Oh, but there's no RAM here. Hold on I'll
grab a page and butcher it.
... Then it fetches a 1MB page, then cuts it up into 10,000 little pieces,
and hands back the caller a small chunk.

If you set -I 64m, however:

Hello! I need 50 bytes of memory for this fancy value I have here.

Hold on let me get the truck. Also the chainsaw.

So before you abuse this value in the *upwards* direction, you should give
it some serious thought. On that note, it is useful to limit the maximum
item size to be smaller than a megabyte.
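The effect on up-front memory commitment is simple arithmetic. A plain illustration (not memcached's actual allocator code):

```python
PAGE_1M = 1024 * 1024
PAGE_64M = 64 * PAGE_1M

def chunks_per_page(page_size, chunk_size):
    # Each slab page is carved into fixed-size chunks for one size
    # class.  Raising -I raises the page size, so the first tiny item
    # of a size class commits a much larger page to that class alone.
    return page_size // chunk_size

# With ~100-byte chunks, a default 1MB page commits ~10k chunks;
# a 64MB page commits ~670k chunks to that one size class.
```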

I'll be beating up the FAQ over the next few days for the next round of
usual clarifications. Hopefully we can keep this all clear ;)

-Dormando


Re: Download Links are getting redirected to http://www.mybikelane.com/

2009-10-08 Thread dormando

Hi.

What links? from where?

On Thu, 8 Oct 2009, Samy Rengasamy wrote:

 The links to download binaries are getting redirected to
 http://www.mybikelane.com/.
 Is there an alternative link ?

 Thanks,

 Sam.



Re: DNS vs IP

2009-10-07 Thread dormando

 DRBD really doesn't make sense for memcached, since memcached doesn't hit the 
 disk.

 What you really need is a way to keep the memcached memory arena (mostly) in 
 sync between the pair.

 There is an open source patch, but it's old and has problems.

 There is also a commercial solution that does it.

There's an open source one that works just fine, and is supported in a
main library:

http://blogs.sun.com/trond/date/20090625

-Dormando


Re: DNS vs IP

2009-10-07 Thread dormando

If code push is your issue, make code push easier...

In that I mean you don't have to push full code for twiddling something
like the list of memcached's. Split the list of servers off into a
seperate source file that's included, then give ops the ability to push
those subsets of files quickly and easily. Don't have your app code pushes
overwrite them either.

-Dormando

On Wed, 7 Oct 2009, Edward M. Goldberg wrote:


 Yes,  I use DNS Made Easy for this use:

 1) Create an A Record for each Memcached server with the SAME name
 memcached.domain.com
 2) Create an A Record for each Memcached server with a UNIQUE name
 memcache.00.domain.com

 Then do:

 $ dig +short memcache.domain.com  # This returns --
 10.0.0.1
 10.0.0.2
 10.0.0.3
 ...

 This gets the list of Memcache servers by IP.  You can also use the
 unique names for DNS access,
 or just use the IP values as you like.

 As I add more servers to scale up, the dig command just returns more
 addresses.  So the idea scales.
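The same list can be pulled programmatically. A sketch using Python's resolver (`memcached_servers` is a made-up helper; it expects a round-robin name like the `memcache.domain.com` record above):

```python
import socket

def memcached_servers(hostname, port=11211):
    # Resolve every A record behind one round-robin name into a
    # sorted "host:port" list, mirroring the `dig +short` trick above.
    _, _, addresses = socket.gethostbyname_ex(hostname)
    return sorted("%s:%d" % (ip, port) for ip in addresses)
```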

 The A records at DNSMadeEasy can be updated with DDNS API calls.

 I like the performance, but have had issues with DNS performance.
 I just wanted feedback on how other system-level users do this scale-up
 operation in production.
 Pushing new code works for some users, but lacks a dynamic update for
 ops' needs.

 Edward M. Goldberg

 http://myCloudWatcher.com/



Memcached user warnings

2009-10-06 Thread dormando

Yo,

I'm debating a small patch for 1.4.2 that would add (more) commandline
warnings for memcached. You'd of course only see most of them if you fire
it up without the -d option, but hopefully in the course of testing
someone tries that.

For example, if someone sets `-m 12` they'd get a warning saying the
amount of memory is too small, and they're likely to get out of memory
errors.

And if they set `-t 50` it would warn that having threads > num_cpus is
wasteful. I think a few of our frequent issues can be solved by having
memcached audibly warn when configured weird. I also want to add more
twiddles that can be used for foot-shooting.

Thoughts?

I'll do up the patch tomorrow anyway but want to see if there's any
discussion to this.

-Dormando


Re: memcached hangs suddenly and slows down

2009-10-03 Thread dormando
Please upgrade; version 1.2.5 has a number of spin/crash bugs that have
since been fixed.

We highly recommend 1.4.1, but 1.2.8 is still available.

-Dormando

On Sat, 3 Oct 2009, Ravi Chittari wrote:


 Version is 1.2.5

 memcached -d -m 256
 it is not going into swap or anything like that.  It has an 8GB RAM,  the 
 total RAM used up is 5GB for other apps. 

 But when it hangs, I see that cpu is 100% and memcached is using up the cpu.

 Thanks,
 Ravi.

 On Fri, Oct 2, 2009 at 6:46 PM, dormando dorma...@rydia.net wrote:

   Hey,

   What version are you on?

   Is the machine it's on swapping? how much memory is free? cpu free? 
 what's
   the general load on the box?

   What's the commandline you're using to start memcached?

   -Dormando

 On Fri, 2 Oct 2009, rch wrote:

 
  I am using memcached in my production environment.
  It works fine 90% of the time.
  But suddenly, it slows down. when I try to connect via telnet the
  connect part is slow.
  How can I debug this?
 
  Also one more thing that is not clear is,  stats shows the current
  connections more than 1024. I am starting memcached with default
  settings. so, I thought max connections that should be allowed are
  1024. But I see below.. which is strange.
 
  STAT curr_connections 1063
  STAT total_connections 79538
  STAT connection_structures 1092
 
  At this time the server is responding fine.  Why am I seeing more than
  1024 in current connections?
 
 
 
 
 





Re: memcached hangs suddenly and slows down

2009-10-03 Thread dormando
1.2.2 is even worse than 1.2.5. Please upgrade to 1.4.1 if possible.

-Dormando

On Sat, 3 Oct 2009, Ravi Chittari wrote:

 Thanks Edward..  One last question.
 Actually the version we are using is 1.2.2.  You think 1.2.2 has similar 
 issues as 1.2.5?

 Is 1.2.8 a stable version, but with fewer features than 1.4.1?

 Thanks,
 Ravi.




 On Sat, Oct 3, 2009 at 5:23 PM, dormando dorma...@rydia.net wrote:
   Please upgrade; version 1.2.5 has a number of spin/crash bugs that have
   since been fixed.

   We highly recommend 1.4.1, but 1.2.8 is still available.

   -Dormando

 On Sat, 3 Oct 2009, Ravi Chittari wrote:

 
  Version is 1.2.5
 
  memcached -d -m 256
  it is not going into swap or anything like that.  It has an 8GB RAM,  the 
  total RAM used up is 5GB for other apps. 
 
  But when it hangs, I see that cpu is 100% and memcached is using up the cpu.
 
  Thanks,
  Ravi.
 
  On Fri, Oct 2, 2009 at 6:46 PM, dormando dorma...@rydia.net wrote:
 
        Hey,
 
        What version are you on?
 
        Is the machine it's on swapping? how much memory is free? cpu free? 
  what's
        the general load on the box?
 
        What's the commandline you're using to start memcached?
 
        -Dormando
 
  On Fri, 2 Oct 2009, rch wrote:
 
  
   I am using memcached in my production environment.
   It works fine 90% of the time.
   But suddenly, it slows down. when I try to connect via telnet the
   connect part is slow.
   How can I debug this?
  
   Also one more thing that is not clear is,  stats shows the current
   connections more than 1024. I am starting memcached with default
   settings. so, I thought max connections that should be allowed are
   1024. But I see below.. which is strange.
  
   STAT curr_connections 1063
   STAT total_connections 79538
   STAT connection_structures 1092
  
   At this time the server is responding fine.  Why am I seeing more than
   1024 in current connections?
  
  
  
  
  
 
 
 
 





Re: GET command returning strange data

2009-10-03 Thread dormando

Hey,

Do you have a short script or any way of reproducing this? I can't
reproduce it offhand, or see how it's possible :)

What were you using to get that output?

-Dormando

On Tue, 22 Sep 2009, rwong wrote:


 Hello,

 I am using memcached 1.2.8 and on occasion I notice a GET command is
 not returning expected data.  The data I am keeping track of in
 memcached is an incrementing field.  The increment command seems to
 work fine i.e., when the increment command is issued for a given key,
 the expected incremented reply is returned, however when I try to
 retrieve the value of the key again, it returns garbage:

 get h_67cbbf0f5b6e30f4b93ef145773e12dad71017dc_73_20090921_191500
 VALUE h_67cbbf0f5b6e30f4b93ef145773e12dad71017dc_73_20090921_191500 0
 1
 ??KEND

 incr h_67cbbf0f5b6e30f4b93ef145773e12dad71017dc_73_20090921_191500 1
 1

 get h_67cbbf0f5b6e30f4b93ef145773e12dad71017dc_73_20090921_191500
 VALUE h_67cbbf0f5b6e30f4b93ef145773e12dad71017dc_73_20090921_191500 0
 1
 1?KEND

 incr h_67cbbf0f5b6e30f4b93ef145773e12dad71017dc_73_20090921_191500 1
 2

 get ncr h_67cbbf0f5b6e30f4b93ef145773e12dad71017dc_73_20090921_191500
 1
 VALUE h_67cbbf0f5b6e30f4b93ef145773e12dad71017dc_73_20090921_191500 0
 1
 2?KEND

 If anyone has experienced this and has any insights, please let me
 know.

 Thanks,
 -Robert



Re: memcached hangs suddenly and slows down

2009-10-02 Thread dormando

Hey,

What version are you on?

Is the machine it's on swapping? how much memory is free? cpu free? what's
the general load on the box?

What's the commandline you're using to start memcached?

-Dormando

On Fri, 2 Oct 2009, rch wrote:


 I am using memcached in my production environment.
 It works fine 90% of the time.
 But suddenly, it slows down. when I try to connect via telnet the
 connect part is slow.
 How can I debug this?

 Also one more thing that is not clear is,  stats shows the current
 connections more than 1024. I am starting memcached with default
 settings. so, I thought max connections that should be allowed are
 1024. But I see below.. which is strange.

 STAT curr_connections 1063
 STAT total_connections 79538
 STAT connection_structures 1092

 At this time the server is responding fine.  Why am I seeing more than
 1024 in current connections?







Re: Does memcached supports replicated caching?

2009-10-01 Thread dormando

Okay,

I talked with Mark about this in IRC too, but just so the rest of the
vendors don't get an idea; Please don't use this list for blatant
advertising.

I can assure you we're all aware of the wonderful extra features you've
hacked in to deal with the common complaints, but this is a mailing list
about an open source project.

I can't do anything about you guys privately contacting posters and
promising them wonderful features in exchange for cash, but please keep it
to that.

And to everyone else on the list, we don't need people to chime in for
support on either side of this. I'm stating this here, once, and we can
get along so long as everyone's constructive. Lets just cut this here.

-Dormando

On Wed, 30 Sep 2009, Mark Atwood wrote:


 On Sep 30, 2009, at 2:04 AM, Adi wrote:
 
  Hi,
  Does memcached supports replicated caching between distributed caching
  servers? if 'Yes' than how to configure it in a windows based clients?


 Replicated caching is not built into the memcached servers.

 There are ways of doing something similar to replicated caching in the client
 drivers.

 The Gear6 memcached appliance does low level hot-server-to-warm-standby
 replication, but that is only for high availability in the event of server
 failure.


 (I work for Gear6, but I am not a salesman.)

 --
 Mark Atwood http://mark.atwood.name





Re: pylibmc vs python-libmemcached vs cmemcache

2009-09-23 Thread dormando

I've not seen pylibmc yet, dunno which one is better...
You can discount 'cmemcache' outright, since it's based off of the
deprecated 'libmemcache' C library. Both were buggy.

On Wed, 23 Sep 2009, Jehiah Czebotar wrote:


 It seems there are 3 memcached libraries for python now that wrap the
 c libmemcached for use in python.

 pylibmc - http://lericson.blogg.se/code/category/pylibmc.html
 python-libmemcached - http://code.google.com/p/python-libmemcached/
 cmemcache - http://gijsbert.org/cmemcache/index.html

 Is anyone using these in a heavy production environment, are any more
 reliable than any others? I haven't seen much discussion about any of
 these in the list archives.

 cmemcache lists some known problems (even important things like
 crashing on disconnects)
 pylibmc seems newer and appears to have the most active development

 thoughts?

 --
 Jehiah



Re: Memcached Connection Failure

2009-09-22 Thread dormando
Wrong;

for omg in `seq 1 30` ; do yes > /dev/null & done

observe load hit 30.

-Dormando

On Tue, 22 Sep 2009, Vladimir Vuksan wrote:

 I don't think running CPU hard would explain. You could have 100% CPU 
 utilization and load of one. Load of 35-40 is usually related to some type of 
 IO. In most cases disk IO, though network IO is not out 
 of the question. I would suggest installing something like Ganglia to get some 
 actionable metrics. My money is on Apache consuming ever increasing amounts 
 of memory.

 dormando wrote:

 Can you troubleshoot it more carefully without thinking it's specific to
 memcached? How'd you track it down to memcached in the first place?

 When your load is spiking, what requests are hitting your server? Can you
 look at an apache server-status page to see what's in flight, or
 re-assemble such a view from the logs?

 It smells like you're getting a short flood of traffic. If you can see
 what type of traffic you're getting at the time of the load spike you can
 reproduce it yourself... Load the page yourself, time how long it takes to
 render, then break it down and see what it's doing.

 If it's related to memcached, it's still likely to be a bug in how you're
 using it internally (looping wrong, or something) - since your load is
 related to the number of apache procs, and you claim it's not swapping,
 it's either doing disk io or running CPU hard.

 -Dormando

 On Tue, 22 Sep 2009, nsheth wrote:



 Hmm, just saw the same issue occur again.  Load spiked to 35-40.
 (I've set MaxClients to 40 in apache, and looking at the status page,
 I see it basically using every thread, so that may explain that load
 level).

 Going back on the connections, it looks like we've got about 1.2k
 connections in various states, so nowhere near any of these limits.

 Any other thoughts?

 Thanks!

 On Sep 18, 3:30 pm, nsheth nsh...@gmail.com wrote:


 We weren't experiencing any abnormal connection levels.

 I did upgrade to the latest client and server version 1.4.1.  So far
 so good . . .

 On Sep 15, 10:36 pm, nsheth nsh...@gmail.com wrote:



 The machine isn't swapping, actually.  I'll try to catch it
 happening next time and see if I can get more information about the
 connections used . . . and also look into upgrading to 1.4.1,
 hopefully that helps.


 On Sep 15, 6:19 pm, Vladimir vli...@veus.hr wrote:


 I do question whether those would actually cause load to spike up.
 Perhaps connection refused but I suspect those two ie. load spike and
 connection refused are linked. Please correct if I am wrong. I just
 checked my tcp_time_wait metrics and they peak around 600 even during
 these load spikes.


 Eric Day wrote:


 If you discover this is a TIME_WAIT issue (too many TCP sockets
 waiting around in kernel), you can tweak this in the kernel:


 # cat /proc/sys/net/ipv4/tcp_fin_timeout
 60


 # cat /proc/sys/net/ipv4/ip_local_port_range
 32768   61000


 61000-32768= 28232


 (these are the defaults on Debian Linux).


 So you only have a pool of 28232 sockets to work with, and each will
 linger around for 60 seconds in a TIME_WAIT state even after being
 close()d on both ends. You can increase your port range and lower
 your TIME_WAIT value to buy you a larger window. Something to keep
 in mind though for any clients/servers that have a high connect rate.
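The window Eric describes is simple division. A back-of-envelope sketch (an illustration of the arithmetic, not a tuning recommendation):

```python
def sustained_connect_rate(port_range_lo, port_range_hi, fin_timeout):
    # Each close()d client socket parks its ephemeral port in TIME_WAIT
    # for roughly fin_timeout seconds, so the sustainable connect rate
    # from one client box is about the port pool divided by the timeout.
    pool = port_range_hi - port_range_lo
    return pool / float(fin_timeout)

# With the Debian defaults quoted above:
# (61000 - 32768) / 60 ~= 470 connects/sec before ports run dry.
```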


 -Eric


 On Tue, Sep 15, 2009 at 08:48:39PM -0400, Vladimir wrote:


    Too many connections in CLOSE_WAIT state ?


    Anyways I would highly recommend installing something like Ganglia to get
    some types of metrics.


    Also at 35-50 the machine is not doing much other than swapping.


    Stephen Johnston wrote:


      This is a total long shot, but we spent a lot of time figuring out a
      similar issue that ended up being ephemeral port exhaustion.


      Stephen Johnston


      On Tue, Sep 15, 2009 at 8:27 PM, Vladimir vli...@veus.hr wrote:


        nsheth wrote:


          About once a day, usually during peak traffic times, I hit some
          major
          load issues.  I'm running memached on the same boxes as my
          webservers.  Load usually spikes to 35-50, and I see the apache
          error
          log flooded with messages like the following:


          [Sun Sep 13 14:54:34 2009] [error] [client 10.0.0.2] PHP Warning:
          memcache_pconnect() [a href='function.memcache-pconnect'function.
          memcache-pconnect/a]: Can't connect to 10.0.0.5:11211, Unknown
          error
          (0) in /var/www/html/memcache.php on line 174, referer: 


          Any thoughts?  Restart apache, and everything clears up.


        It's PHP. I have seen something but in last couple weeks it has
        cleared itself. It could be coincidental with using memcached 1.4.1,
        code changes etc. I actually have some Ganglia snapshots of the
        behavior you are describing here


        http://2tu.us/pgr


        Reason why load goes to 35-50 is that Apache starts consuming greater

Re: memcache on linux

2009-09-22 Thread dormando
libevent will try to use the best one available at runtime. If you're
installing a linux OS that was released in the last 5ish years, it'll have
epoll. Just build libevent as normal or install it via package.

You don't need to do anything to ensure it's using it.

-Dormando

On Fri, 18 Sep 2009, jim wrote:


 Matt,

 What configures libevent to use one event mechanism over another? e.g.
 epoll vs poll vs kqueue.
 How is this (the event mechanism to be used) configured in libevent, and
 when (build time or run time)?

 Any relevant docs about this will be helpful.

 Thanks



 On Sep 18, 1:44 am, Matt Ingenthron ingen...@cep.net wrote:
  jim wrote:
   I want to compile memcache on linux kernel 2.6.30 with epoll.
 
   Is it already included in kernel? How do i check it?
   If not then, What do I need for epoll?
 
  I believe it is and has been standard for some time.  Well, man epoll
  will probably give you an idea.  It's probably in a header somewhere as
  well.
 
   Which is better - epoll or libevent?
 
  libevent uses epoll on Linux.  In any event (pardon the pun), libevent
  is a requirement.
 
   Is there compiled version available for linux?
 
  You'd have to be more specific on both what it is you're looking for and
  what you mean by linux.  Libevent is on most Linux distros.  It may
  come under the name libevent1 mapping to 1.x.   I'd recommend using
  whatever tools come with your distro to search.
 
  Good luck,
 
  - Matt


Re: Range Spec Posted to Wiki

2009-09-16 Thread dormando


On Wed, 16 Sep 2009, Dustin wrote:



 On Sep 16, 3:11 pm, Brian Aker br...@tangent.org wrote:

     a) Should we move other stuff out of doc/ and into the wiki?
 
  Yes, or better... put it in Wikipedia.

   That's a neat idea.  Any objections to having the well-defined
 protocols live there?


Isn't that asking for vandalism? Who else does this?

Re: Range Spec Posted to Wiki

2009-09-16 Thread dormando

That's *kind* of what I thought. I'm unaware of anyone having real
authoritative specifications on Wikipedia?

Is there something to show otherwise?

On Wed, 16 Sep 2009, Adam Lee wrote:


 Am I confused or is it actually being proposed that documentation
 exist on Wikipedia?

 I see no problem with the current wiki and there's no way that
 Wikipedia would allow that...

 http://en.wikipedia.org/wiki/Wikipedia:NOT#Wikipedia_is_not_a_manual.2C_guidebook.2C_textbook.2C_or_scientific_journal

 --
 awl




Re: Fwd: [Bug 519375] New: rpmbuild of memcached-1.4.0-1 fails on RHEL-5

2009-09-04 Thread dormando

Ah, I misread, never mind.

On Fri, 4 Sep 2009, Paul Lindner wrote:

 FYI - same issue recently reported also reported in fedora bugzilla.
 -- Forwarded message --
 From: bugzi...@redhat.com
 Date: Wed, Aug 26, 2009 at 4:55 AM
 Subject: [Bug 519375] New: rpmbuild of memcached-1.4.0-1 fails on RHEL-5
 To: lind...@inuus.com


 Please do not reply directly to this email. All additional
 comments should be made in the comments box of this bug.

 Summary: rpmbuild of memcached-1.4.0-1 fails on RHEL-5

 https://bugzilla.redhat.com/show_bug.cgi?id=519375

   Summary: rpmbuild of memcached-1.4.0-1 fails on RHEL-5
   Product: Fedora
   Version: rawhide
  Platform: i386
OS/Version: Linux
Status: NEW
  Severity: medium
  Priority: low
 Component: memcached
AssignedTo: lind...@inuus.com
ReportedBy: ninyis...@gmail.com
 QAContact: extras...@fedoraproject.org
CC: matth...@rpmforge.net, lind...@inuus.com,
ru...@rubenkerkhof.com
   Estimated Hours: 0.0
Classification: Fedora


 Description of problem:
 rpmbuild of memcached-1.4.0-1.fc12.src.rpm fails on RHEL-5.3 with following
 errors:-
 ---
 memcached.c:3764:1: error: embedding a directive within macro arguments is not portable
 memcached.c:3770:1: error: embedding a directive within macro arguments is not portable
 ---

 Version-Release number of selected component (if applicable):
 1.4.0-1

 How reproducible:
 Download
 http://download.fedora.redhat.com/pub/fedora/linux/development/source/SRPMS/memcached-1.4.0-1.fc12.src.rpm
 and build on RHEL-5.3

 Steps to Reproduce:
 1.
 2.
 3.

 Actual results:
 failed with error: embedding a directive within macro arguments is not
 portable while compiling memcached.c

 Expected results:
 rpms of memcached, memcached-devel, memcached-debuginfo and
 memcached-selinux

 Additional info:
 If we delete lines 3764 and 3770 from memcached.c, the compilation works on
 RHEL-5.3

 --
 Configure bugmail: https://bugzilla.redhat.com/userprefs.cgi?tab=email
 --- You are receiving this mail because: ---
 You are on the CC list for the bug.
 You are the assignee for the bug.



Re: Confused about slab/class allocation

2009-09-03 Thread dormando

What version are you using?

Memcached won't start evicting until all pages are allocated up to the
limit. It's possible memcached-tool is wrong; telnet to memcached and type
'stats slabs' and 'stats items' and add everything up.

Fill ratio and such is difficult to accurately predict since expirations
are lazy.

I also forget what the side effects of setting -n so low are, but I doubt
it's related.

-Dormando

On Thu, 3 Sep 2009, Vladimir wrote:


 That makes sense however it doesn't explain what I have been seeing :-).

 I have a scenario where, no matter how much memory I throw in, the maximum number of
 current items always tops off at about 180k. When that happens I start seeing
 on average 200 evictions per minute. Also the fill ratio never goes above e.g.
 94%.

 Does that make sense :-) ?

 Vladimir

 On Thu, 3 Sep 2009, dormando wrote:

  They haven't been used yet. Keep throwing data in and it should allocate
  more pages. The full is deceptive... should probably fix that. That just
  means there's nothing known to be free on the tail... But set'ing a new
  variable may not cause an eviction or allocate more memory due to finding
  an expired item on the tail.
 
  If that makes any sense :)
 
  -Dormando
 
  On Thu, 3 Sep 2009, Vladimir wrote:
 
  
   I am starting up memcached with following options (giving it 1 GB of RAM)
  
   memcached -d -m 1024 -c 1024 -P -f 1.05 -n 3
  
   If I run the memcached-tool command I get
  
   # /usr/local/bin/memcached-tool  127.0.0.1:11211 display
   #  Item_Size   Max_age  1MB_pages   Count  Full?
   6       80 B   90275 s          1       1  no
   7       88 B   89898 s          1       8  no
   8       96 B   89732 s          1     142  no
   9      104 B   88719 s          1       3  no
  10      112 B   69930 s          1      63  no
  11      120 B   22886 s          1    3328  no
  12      128 B   70335 s          1       3  no
  13      136 B   86964 s          1      15  no
  15      152 B   85043 s          1      10  no
  17      168 B   71797 s          1       2  no
  18      176 B   87579 s          1      13  no
  19      184 B     292 s          1    1913  no
  20      200 B   57631 s          4   20967  yes
  21      216 B   57653 s          1    4854  yes
  22      232 B     308 s          1     274  no
  23      248 B   59621 s          8   33824  yes
  24      264 B   87823 s          1       9  no
  25      280 B   87228 s          1       5  no
  26      296 B   86816 s          1       6  no
  27      312 B   85862 s          1       5  no
  28      328 B   87199 s          1       4  no
  29      344 B   86892 s          1       1  no
  30      368 B   87367 s          1       6  no
  31      392 B   82457 s          1       4  no
  32      416 B   88179 s          1       3  no
  33      440 B   86969 s          1       3  no
  34      464 B   86087 s          1       2  no
  35      488 B   87537 s          1       1  no
  36      512 B   86104 s          1       4  no
  37      544 B   87799 s          1       5  no
  38      576 B   66362 s          1       2  no
  39      608 B   86138 s          1       3  no
  40      640 B   88066 s          1       4  no
  
  
  Adding up all the 1MB_pages would indicate that 43 1MB pages are used up.
  Why, then, is e.g. class 23 full? Running stats slabs shows 148
  active slabs. Any clues why the other 1MB pages are not allocated?
  
   Thanks,
  
   Vladimir
  
  
 

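The growth-factor tuning discussed in this thread is easier to reason about with the size progression in hand. The following Python sketch approximates how memcached derives its slab class sizes from -n and -f; the 96-byte base, 8-byte alignment, and 1MB page size are simplifications standing in for the real slabs.c math (which also folds in item-header overhead), so treat them as assumptions:

```python
def slab_class_sizes(min_size=96, factor=1.25, max_size=1024 * 1024):
    """Approximate memcached's slab class size progression.

    Starts at min_size (roughly -n plus item overhead), multiplies by
    the growth factor (-f), and 8-byte aligns each chunk size, stopping
    at half the 1MB page size.  A sketch, not the exact slabs.c logic.
    """
    sizes = []
    size = min_size
    while size < max_size / 2:
        aligned = (size + 7) & ~7  # align chunk sizes to 8 bytes
        if not sizes or aligned != sizes[-1]:
            sizes.append(aligned)
        size = int(size * factor)
    return sizes

# A small factor like 1.05 yields many finely spaced classes;
# the 1.25 default yields far fewer, coarser ones.
print(len(slab_class_sizes(factor=1.05)), len(slab_class_sizes(factor=1.25)))
```

This is why a tiny factor such as 1.05 produces slab classes numbered above 100: the sizes grow so slowly that many more classes fit below the page size.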


Re: Confused about slab/class allocation

2009-09-03 Thread dormando

 I did this

 echo stats slabs | nc localhost 11211 | grep total_pages

 Then added all the pages. It came to 1036 (OK not 1024 but close)

  I also forget what the side effects of setting -n so low are, but I doubt
  it's related.

 Apparently it is not doing much for me. Looks like most of my objects end up
 in classes 100-110. Should I assume I should actually be bumping up the factor
 instead of making it smaller, i.e. 1.5 instead of 1.05 or 1.25?

 Vladimir


1.4.1 has fewer bugs ;) but not related to this.

1036 close enough... might be a bug there (should be 1023 or 1024 exactly)
in the way 1.4.0's reserving pages.

but, that does mean all your memory's allocated.

'stats items' will show you which slab classes are having evictions. Then
you can adjust your growth factor to better cover it. Also, match 'stats
slabs' up with 'stats items' to see if the evictions in a particular class
map to a slab with most of your pages. If there aren't enough pages
assigned to a heavily used class, that usually means the access pattern
changed over time and you need to restart memcached to have it re-dish
them out.

-Dormando
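The page-count sum the poster did with nc/grep can also be done programmatically. A small sketch that parses a raw `stats slabs` response and sums the total_pages fields (the sample response below is made up for illustration; field names follow the stats slabs output format):

```python
def sum_total_pages(stats_text):
    """Sum the total_pages fields from a raw 'stats slabs' response.

    Lines look like 'STAT <class>:total_pages <n>'; the response
    ends with 'END'.
    """
    total = 0
    for line in stats_text.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT" \
                and parts[1].endswith(":total_pages"):
            total += int(parts[2])
    return total

# Fabricated sample response for illustration.
sample = """STAT 1:chunk_size 96
STAT 1:total_pages 4
STAT 2:total_pages 39
STAT active_slabs 2
STAT total_malloced 45088768
END"""
print(sum_total_pages(sample))  # -> 43
```

Comparing that sum against the -m limit (in megabytes) tells you whether all memory has been handed out to slab classes yet.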


Re: Problem with memcache(d)

2009-09-02 Thread dormando

I don't see the cmd_flush stat, can you telnet to the port and run the raw
command? (pretty sure it's in 1.2.8...)

no, that's not too much for memcached. nobody in the world can overwhelm
the thing to the point where it does that.

Can you post a real code snippet that doesn't work for you? Not pseudo
code.

On Wed, 2 Sep 2009, jm.malat...@schaefer-shop.de wrote:


 Hi

 flags are set correctly (if we flip the args memcache gives an error).
 Here are the stats from a successful GET

 [192.168.252.51:11211] -> Array
 -- [pid] -> 2136
 -- [uptime] -> 1810936
 -- [threads] -> 5
 -- [time] -> 1251876469
 -- [pointer_size] -> 32
 -- [rusage_user_seconds] -> 1041
 -- [rusage_user_microseconds] -> 305999
 -- [rusage_system_seconds] -> 2469
 -- [rusage_system_microseconds] -> 628551
 -- [curr_items] -> 323654
 -- [total_items] -> 838086
 -- [limit_maxbytes] -> 0
 -- [curr_connections] -> 18
 -- [total_connections] -> 3465613
 -- [connection_structures] -> 35983
 -- [bytes] -> -1345182764
 -- [cmd_get] -> 5995198
 -- [cmd_set] -> 838086
 -- [get_hits] -> 1897535
 -- [get_misses] -> 4097663
 -- [evictions] -> 0
 -- [bytes_read] -> 431872734
 -- [bytes_written] -> 1974116548
 -- [version] -> 1.2.8

 [192.168.252.51:11212] -> Array
 -- [pid] -> 1710
 -- [uptime] -> 1811496
 -- [threads] -> 5
 -- [time] -> 1251876469
 -- [pointer_size] -> 32
 -- [rusage_user_seconds] -> 943
 -- [rusage_user_microseconds] -> 885641
 -- [rusage_system_seconds] -> 2242
 -- [rusage_system_microseconds] -> 336896
 -- [curr_items] -> 337817
 -- [total_items] -> 906750
 -- [limit_maxbytes] -> 0
 -- [curr_connections] -> 19
 -- [total_connections] -> 3312161
 -- [connection_structures] -> 35202
 -- [bytes] -> -1213428976
 -- [cmd_get] -> 5740385
 -- [cmd_set] -> 906750
 -- [get_hits] -> 1532799
 -- [get_misses] -> 4207586
 -- [evictions] -> 0
 -- [bytes_read] -> 1245733297
 -- [bytes_written] -> 877017620
 -- [version] -> 1.2.8

 No evictions, no flush, server not restarted... a few minutes later
 the same site does not come back via GET.
 Requests per minute: about 100 - too much for memcache?

 Jean Michel



Re: Problem with memcache(d)

2009-09-02 Thread dormando

Is this the pecl/memcache client?

If so:

$o_memcache->set('key', $data, $flags, 60); # expires in a minute.

Something is backwards/wrong/weird, you can narrow this down yourself by:

- boiling down manual code snippets as far as you can until it works.
Write a small php script that just uses the pecl/memcache interface and
try it. If you can get it working, work your way back up.
- restart memcached in the foreground with -vv and watch to see that the
sets are happening correctly
- or avoid the above and use tcpflow/tcpdump to watch for the raw 'set'
commands. they should look like:
set foo 0 60 13
for no flags, 60 second timeout, 13 bytes, etc.

Explaining the script won't help, since this is probably a code bug,
you'll need to either share the whole program or just iterate your way
down through the code until you make it work :)

-Dormando

 Well, a snippet :)
 Let me explain the script:
 We are using Smarty and get the HTML code (whole site) in a variable
 ($s_html).
 By using the URL we create a unique key (less than 200 characters).
 Now we check, if the key is already saved

 # $sCacheFile = unique key!
 $s_html_cache = $o_memcache->get($sCacheFile);
 if($s_html_cache !== false)
   echo $s_html_cache;
 else
   $o_memcache->set($sCacheFile,$s_html,60*60*6);

 -- 60*60*6 = 6h (also tried 21600, time()+21600)

 After the GET and SET (tried ADD too) we check the result code by
 using
 $o_memcache->getResultCode();

 The result code returns 0 (success) or 7 (?) or 1 (?) or 16 (by using
 GET).

 Need more infos?
 Thanks for helping!

 Jean Michel
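Dormando's tcpdump suggestion above is easier to use if you know what the raw 'set' line should look like on the wire. A small illustrative sketch (Python here purely for demonstration) that builds the ASCII-protocol set command bytes you would expect to see in a capture:

```python
def build_set_command(key, value, flags=0, exptime=60):
    """Build the raw bytes of an ASCII-protocol 'set' command.

    Wire format: 'set <key> <flags> <exptime> <bytes>\r\n<data>\r\n'
    """
    data = value.encode() if isinstance(value, str) else value
    header = f"set {key} {flags} {exptime} {len(data)}\r\n".encode()
    return header + data + b"\r\n"

# Matches dormando's example: no flags, 60s timeout, 13 bytes of data.
print(build_set_command("foo", "Hello, world!"))
```

If the flags and exptime fields in a capture are swapped relative to this layout, the client call's argument order is the bug.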



Re: Setting far distant expiry weirdness on new install

2009-09-02 Thread dormando

Willing to bet your clock was off? And/or it adjusted while you were
setting some values.

On Wed, 2 Sep 2009, roger.moffatt wrote:


 I have been completely stumped by this one.

 I've had a windows install of memcached (1.2.6) running for months
 without incident in Amazon's compute cloud. Yesterday the server went
 AWOL and I had to instantiate a new server with Amazon. The image was
 fresh and everything came back as expected. Except for memcache which
 started refusing to store objects with a far distant expiry time (eg
 12 hours in the future). I would write an object in, get no failure on
 the write, but the read immediately afterwards was returning null.

 The only difference between the old server and the new one was that
 the new one was a slower cpu than the original (in Amazon terms, a
 m1.small instance rather than a c1.medium).

 This affected every script that was using the cache to store objects
 with expiry times set. I'm using the CPAN memcached modules under
 mod_perl FYI. The scripts that were affected run on a different
 physical server.

 As soon as I shortened the expiry time from 12 hours to 4 hours, it
 all started working again as before. This was repeatable and the
 system was in this state for an hour or more and I then gave up and
 went to bed.

 Now that was some 9 hours ago. Presently I can raise the expiry time
 to 12 hours and the system works as expected.

 I appreciate there are a lot of variables here, but can anyone think
 of anything obvious that could cause this? For example, the fact that
 the new server was physically different from the original one and was
 just running the same virtual image? OK that's beyond the scope of
 this group, but from a memcached perspective, is there anything I
 should be aware when using expiry?

 Many Thanks
 Roger
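The clock theory fits how memcached interprets expiration times: values up to 30 days (2592000 seconds) are treated as offsets relative to the *server's* clock, larger values as absolute unix timestamps, so a clock that is wrong or adjusting mid-run makes items expire at surprising moments. A simplified sketch of that normalization (loosely modeled on the server's realtime() handling; the function name and structure here are illustrative, not the actual source):

```python
import time

THIRTY_DAYS = 60 * 60 * 24 * 30  # 2592000 seconds

def absolute_expiry(exptime, now=None):
    """Convert a client-supplied exptime to an absolute unix time,
    roughly the way memcached does.

    0: never expires (returned as 0 here).
    <= 30 days: a relative offset from the server clock.
    >  30 days: already an absolute unix timestamp.
    """
    now = time.time() if now is None else now
    if exptime == 0:
        return 0
    if exptime <= THIRTY_DAYS:
        return now + exptime  # a skewed server clock skews this too
    return exptime

# A 12-hour TTL anchored to the server clock at set time.
server_now = 1_000_000_000
print(absolute_expiry(12 * 3600, now=server_now))
```

Either branch depends on the server's notion of "now", which is why a clock jump on a freshly instantiated VM can make far-future expiries appear broken.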



Stable release 1.4.1

2009-08-29 Thread dormando

Hey all,

We've moved from -rc1 to final with one additional set of commits for
updating the rpm specfile. No code changes.

http://code.google.com/p/memcached/wiki/ReleaseNotes141

... release notes are very similar to 1.4.1-rc1. If you haven't checked it
out yet, have at it :)

1.4.2-rc1 is now tentatively scheduled for September 19th, in three weeks.

Send bug reports to our issue tracker:
http://code.google.com/p/memcached/issues/list

... or to the mailing list if you're not sure if it's a bug or not :)

have fun,
-Dormando


Re: Fwd: [Bug 516489] CVE-2009-2415 memcached: heap-based buffer overflow

2009-08-12 Thread dormando

Ehh fine. I guess I'll cut a 1.2.9.

It'll contain this single patch and there won't be a lot of fanfare to it.
I'll get this out ASAP.

This bug is definitely not serious, and anyone claiming it as a root hole
should be strangled. Please don't run this thing as root in a place where
people can put whatever random trash they want into the system.

The only reason why we consider this a notable bug is due to the potential
for deliberate memory corruption. There are still many ways to DoS
memcached if you have full and unfettered access to it.

-Dormando

On Tue, 11 Aug 2009, Paul Lindner wrote:

 I haven't seen this mentioned on the mailing list...  Is there a 1.2.9 in
 the works or should I just patch up my builds with the attached patch.

 -- Forwarded message --
 From: bugzi...@redhat.com
 Date: Mon, Aug 10, 2009 at 12:54 AM
 Subject: [Bug 516489] CVE-2009-2415 memcached: heap-based buffer overflow
 To: lind...@inuus.com




 https://bugzilla.redhat.com/show_bug.cgi?id=516489





 --- Comment #1 from Tomas Hoger tho...@redhat.com  2009-08-10 03:54:22 EDT
 ---
 Created an attachment (id=356858)
  --> (https://bugzilla.redhat.com/attachment.cgi?id=356858)
 Debian patch

 Patch extracted from Debian update for 1.2.2.

 Upstream fix for 1.2.8 should be this:

 http://consoleninja.net/code/memcached/memcached-1.2.8_proper_vlen_fix.patch




Re: Update / Populate cache in the background

2009-08-05 Thread dormando

What happens when we release 1.4.1?

Your site really should be able to tolerate at least the brief loss of a
single memcached instance.

It's definitely good to update the cache asynchronously where possible, and
all of this stuff helps (a lot is detailed in the FAQ too). I'd just
really like to push the point that these techniques help reduce common
load. They can even help a lot in emergencies when you find a page taking
50 seconds to load for no good reason.

However, relying on it will lead to disaster. It allows you to create
sites which cannot recover on their own after blips in cache, upgrades,
hardware failure, etc.

If you have objects that really do take umpteen billions of milliseconds
to load, you should really consider rendering data (or just the really
slow components of it) into summary tables. I like using memcached to take
queries that run in a few tens of milliseconds or less, and make them
typically run in 0.1ms. If they take 20+ seconds something is horribly
wrong. Even 1 full second is something you should be taking very
seriously.

-Dormando

On Wed, 5 Aug 2009, Edward Goldberg wrote:


 I use the term Cache Warmer for task like this,  where I keep a key
 (or keys) warm all of the time.

 The best way to keep the load on the DB constant is to Warm the Cache
 at a well defined rate.

 If you wait for a request to update the cache,  then you may lose
 control over the exact moment of the load.

 The idea is to make a list of the items that need to be warmed,  and
 then do that list at a safe rate and time.

 Very long TTL values can be unsafe.  Old values can lead to code
 errors that are not expected days later.

 Edward M. Goldberg
 http://myCloudWatcher.com/


 On Wed, Aug 5, 2009 at 9:44 AM, dormandodorma...@rydia.net wrote:
 
  Also consider optimizing the page so it doesn't take 20 seconds to render
  - memcached should help you under load, and magnify capacity, but
  shouldn't be used as a crutch for poor design.
 
  -Dormando
 
  On Wed, 5 Aug 2009, Adam Lee wrote:
 
  Run a cron job that executes the query and updates the cache at an interval
  shorter than the expiration time for the cached item.
 
  On Wed, Aug 5, 2009 at 11:38 AM, Haes haes...@gmail.com wrote:
 
  
   Hi,
  
   I'm using memcached together with Django to speed up the database
   queries. One of my Django views (page) uses a query that takes over 20
   sec. Normally this query hits the memcached cache and the data is
   served almost instantly. The problem now is that if the cache expires,
   the next person accessing this page will have to wait about 20 seconds
   for it to load which is not really acceptable for me.
  
   Is there a way to update the memcached data before it times out, in a
   way that this query always hits the cache?
  
   Thanks for any hints.
  
   Cheers.
  
 
 
 
  --
  awl
 
 



Re: Update / Populate cache in the background

2009-08-05 Thread dormando

Yes, well... For most big sites I'd actually insist that it's okay if you
disappear from the internet if too many of your memcached instances go
away. Losing 10-50% might be enough to kill you and that's okay. Memcached
has been around long enough and integrated tight enough that it *is* a
critical service. Losing the typical number of common failures, or issuing
a slow rolling restart, shouldn't cause you to go offline. Losing a huge
sack all at once might make you limp severely or fail temporarily.

For very small sites with just 1-2 instances, I'm not too sure what to
recommend. Consider making sure you at least have 2-4 instances and that
you can survive the loss of one, even if you end up running multiple on
one box.

-Dormando

On Wed, 5 Aug 2009, Josef Finsel wrote:

 Or, to put it another way, memcached is *not a persistent data store nor
 should it be treated as one. If your application will fail if memcached is
 not running, odds are you are using memcached incorrectly.*

 On Wed, Aug 5, 2009 at 1:36 PM, dormando dorma...@rydia.net wrote:

 
  What happens when we release 1.4.1?
 
  Your site really should be able to tolerate at least the brief loss of a
  single memcached instance.
 
  It's definitely good to update the cache asynchronously where possible, and
  all of this stuff helps (a lot is detailed in the FAQ too). I'd just
  really like to push the point that these techniques help reduce common
  load. They can even help a lot in emergencies when you find a page taking
  50 seconds to load for no good reason.
 
  However, relying on it will lead to disaster. It allows you to create
  sites which cannot recover on their own after blips in cache, upgrades,
  hardware failure, etc.
 
  If you have objects that really do take umpteen billions of milliseconds
  to load, you should really consider rendering data (or just the really
  slow components of it) into summary tables. I like using memcached to take
  queries that run in a few tens of milliseconds or less, and make them
  typically run in 0.1ms. If they take 20+ seconds something is horribly
  wrong. Even 1 full second is something you should be taking very
  seriously.
 
  -Dormando
 
  On Wed, 5 Aug 2009, Edward Goldberg wrote:
 
  
   I use the term Cache Warmer for task like this,  where I keep a key
   (or keys) warm all of the time.
  
   The best way to keep the load on the DB constant is to Warm the Cache
   at a well defined rate.
  
    If you wait for a request to update the cache,  then you may lose
    control over the exact moment of the load.
  
   The idea is to make a list of the items that need to be warmed,  and
   then do that list at a safe rate and time.
  
    Very long TTL values can be unsafe.  Old values can lead to code
    errors that are not expected days later.
  
   Edward M. Goldberg
   http://myCloudWatcher.com/
  
  
   On Wed, Aug 5, 2009 at 9:44 AM, dormandodorma...@rydia.net wrote:
   
Also consider optimizing the page so it doesn't take 20 seconds to
  render
- memcached should help you under load, and magnify capacity, but
shouldn't be used as a crutch for poor design.
   
-Dormando
   
On Wed, 5 Aug 2009, Adam Lee wrote:
   
Run a cron job that executes the query and updates the cache at an
  interval
shorter than the expiration time for the cached item.
   
On Wed, Aug 5, 2009 at 11:38 AM, Haes haes...@gmail.com wrote:
   

 Hi,

 I'm using memcached together with Django to speed up the database
 queries. One of my Django views (page) uses a query that takes over
  20
 sec. Normally this query hits the memcached cache and the data is
 served almost instantly. The problem now is that if the cache
  expires,
 the next person accessing this page will have to wait about 20
  seconds
 for it to load which is not really acceptable for me.

 Is there a way to update the memcached data before it times out, in
  a
 way that this query always hits the cache?

 Thanks for any hints.

 Cheers.

   
   
   
--
awl
   
   
  
 



 --
 If you see a whole thing - it seems that it's always beautiful. Planets,
 lives... But up close a world's all dirt and rocks. And day to day, life's a
 hard job, you get tired, you lose the pattern.
 Ursula K. Le Guin

 What's different about data in the cloud? http://www.azuredba.com

 http://www.finsel.com/words,-words,-words.aspx (My blog) -
 http://www.finsel.com/photo-gallery.aspx (My Photogallery)  -
 http://www.reluctantdba.com/dbas-and-programmers/blog.aspx (My Professional
 Blog)
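The cron-warmer and "don't rely on the cache" advice in this thread reduce to one pattern: keep a soft expiry inside the stored value so a single caller refreshes it early while the hard TTL still protects everyone else. A minimal in-memory sketch (the class and method names are hypothetical; a real version would sit over a memcached client and do the refresh in the background):

```python
import time

class SoftTTLCache:
    """Toy cache storing (value, soft_deadline) pairs so the value can
    be recomputed before any hard TTL ever expires."""

    def __init__(self):
        self._store = {}  # stand-in for a real memcached client

    def get_or_compute(self, key, compute, soft_ttl=60, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(key)
        if entry is not None:
            value, soft_deadline = entry
            if now < soft_deadline:
                return value  # fresh enough: serve from cache
            # stale but present: refresh (a real implementation would
            # refresh in the background and serve the stale copy now)
        value = compute()
        self._store[key] = (value, now + soft_ttl)
        return value

cache = SoftTTLCache()
calls = []
slow_render = lambda: calls.append(1) or "page-html"
cache.get_or_compute("page", slow_render, soft_ttl=60, now=0)
cache.get_or_compute("page", slow_render, soft_ttl=60, now=30)   # cached
cache.get_or_compute("page", slow_render, soft_ttl=60, now=120)  # refreshed
print(len(calls))  # -> 2
```

The key property: even at the refresh moment only one expensive recomputation runs, so the database never sees the stampede that a plain hard expiry produces.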



Re: Is clearing out the cache necessary?

2009-08-03 Thread dormando

On Wed, 29 Jul 2009, blazah wrote:

 Hi,
 What do folks do with the objects stored in memcached when a new
 version of the software is deployed?  There is the potential that the
 data could be stale depending on the code changes so do people
 typically just flush the cache?

Generally you know what's being changed when you roll out new code. If the
format of some keys/values are being changed, you should intuitively know
exactly how to handle it. The best method is to have code be intuitive
about getting back values it doesn't understand anymore, and being wary of
changing the format of popular or high value keys.

-Dormando
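One common way to make deploys handle format changes automatically, in the spirit of the advice above, is to namespace keys with a format version: bumping the version on rollout orphans the old-format entries, which simply age out of the LRU with no flush needed. A hypothetical sketch (the constant and function names are illustrative, not from any particular codebase):

```python
CACHE_FORMAT_VERSION = 7  # bump when the stored value format changes

def versioned_key(key, version=None):
    """Prefix cache keys with the code's format version so a deploy
    that changes value formats naturally misses old entries."""
    v = CACHE_FORMAT_VERSION if version is None else version
    return f"v{v}:{key}"

print(versioned_key("user:42"))             # -> v7:user:42
print(versioned_key("user:42", version=8))  # -> v8:user:42
```

The cost is a brief cold-cache period for the changed keys after each bump, which is usually far cheaper than a full flush.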


Re: Memcache Question

2009-08-03 Thread dormando

Hey,

You might want to consider using 'add' with a zero byte value (or just a
single byte value, whatever). Then every time you just run the single add
command. If it fails you're fetching too fast. If it works then the key
didn't exist already.

There're a bunch of ways of doing more proper rate limiting if you ever
need it - google around and check the FAQ.

-Dormando

On Tue, 28 Jul 2009, Beyza wrote:


 Hi,

 This is not an important question, but I wonder the answer.

 When I develop websites by using memcache, I also use it for spam
 checking.

 For example; When someone post a comment, or report something, or
 search something, I create a memcache object and set an expiration
 time, i.e. 5 seconds. It goes something like that

 memcache_set($memcache, comment-check-userid, '1', 0, 5);

 I check every time whether this value is set or not. If it's set, I
 consider it as a spam.

 My question is that, is it expensive to create this if you compare
 with other options for this purpose? I could not find any information
 about this on the internet. I am just curious :)

 Thanks from now on,
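The add-based check dormando suggests works because add() fails when the key already exists and hasn't expired. A self-contained sketch of that logic (the ToyCache class below is an illustrative in-memory stand-in for a real memcached client; only the add semantics matter):

```python
import time

class ToyCache:
    """Minimal stand-in for a memcached client's add(): succeeds only
    if the key is absent or its TTL has lapsed."""

    def __init__(self):
        self._store = {}

    def add(self, key, value, ttl, now=None):
        now = time.time() if now is None else now
        expiry = self._store.get(key)
        if expiry is not None and now < expiry:
            return False  # key still live: add is refused
        self._store[key] = now + ttl
        return True

def allow_action(cache, user_id, window=5, now=None):
    """One action per `window` seconds per user, via a single add()."""
    return cache.add(f"comment-check-{user_id}", "1", ttl=window, now=now)

cache = ToyCache()
print(allow_action(cache, 42, now=0))  # -> True  (first post allowed)
print(allow_action(cache, 42, now=2))  # -> False (within the window)
print(allow_action(cache, 42, now=6))  # -> True  (window elapsed)
```

Compared with a get-then-set check, the single add() is both cheaper (one round trip) and atomic, so two simultaneous posts cannot both slip through.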



Re: memcached - expire (again)

2009-08-03 Thread dormando

I wouldn't recommend anyone use the ALLOW_SLABS_REASSIGN code as it is -
it is unproven and we probably should have removed it a long time ago.
It'll come back as a proper implementation soon enough.

-Dormando

On Tue, 28 Jul 2009, Mike Lambert wrote:


 Werner you never mentioned which version of memcache you're using. For
 what it's worth, it seems things differ between 1.4.0 and 1.2.6.
 (Haven't peeked at 1.3.x series.)

 1.2.6 will allocate a new slab if memory is available, before grabbing
 the last item off the free list (which may or may not have expired.)

 1.4.0 will look for an expired object off the free list (last 50
 items), before performing the above allocate-then-evict logic.

 The behavior of 1.2.6 results in the full slab allocation very
 quickly, which is the problem you are talking about. I suspect things
 are much better in 1.4.0, though of course over time you may still run
 out of slabs and need to restart servers in order to get a more
 optimal balance.

 Your other option is to compile with ALLOW_SLABS_REASSIGN and then
 write a cron/daemon which uses the memcache evicts and maxage stats to
 attempt to dynamically rebalance the pool, automating the
 one-at-a-time slabs reassign behavior to make it more effective.

 Mike

 On Thu, Jul 23, 2009 at 06:42, dl4nerdl4...@gmail.com wrote:
 
  Hi,
 
  The thing is that we use a linked list to store all of the items in.
  Every time a user tries to use an item (get / set / incr etc), the
  item will be moved to the beginning of the linked list. Whenever you
  try to insert an item in the cache, the first thing we will do is to
  look in the tail of the linked list to see if we have any expired
  items (we will only look at 50 items). If we don't find any expired
  items there, we will try to allocate memory. If the allocation fails,
  we will return to the linked list and evict an object from the tail.
 
  thanks for explanation.
 
  Cheers,
 
  Werner.
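The tail-search allocation the quoted explanation describes (check the last ~50 LRU entries for an expired item before evicting) can be sketched with an ordered dict standing in for the LRU list. The 50-item search depth comes from the quoted description, not from reading the source, and the class is purely illustrative:

```python
from collections import OrderedDict

class TailSearchLRU:
    """Sketch of the described 1.4.0 behavior: before evicting, scan
    the last ~50 LRU entries for one that has already expired."""

    SEARCH_DEPTH = 50

    def __init__(self, capacity):
        self.capacity = capacity
        # key -> expiry time; insertion order doubles as the LRU list,
        # with the least-recently-used entry at the front
        self._items = OrderedDict()

    def set(self, key, expiry, now):
        if key in self._items:
            self._items.move_to_end(key)  # touched: move to MRU end
            self._items[key] = expiry
            return
        if len(self._items) >= self.capacity:
            self._reuse_or_evict(now)
        self._items[key] = expiry

    def _reuse_or_evict(self, now):
        # look at up to SEARCH_DEPTH items from the LRU tail for an
        # expired entry to reuse before forcing a real eviction
        for key in list(self._items)[: self.SEARCH_DEPTH]:
            if self._items[key] <= now:
                del self._items[key]
                return
        self._items.popitem(last=False)  # nothing expired: evict LRU

cache = TailSearchLRU(capacity=2)
cache.set("a", expiry=10, now=0)
cache.set("b", expiry=100, now=1)
cache.set("c", expiry=100, now=50)  # "a" expired at t=10 and is reused
print("a" in cache._items, "b" in cache._items)  # -> False True
```

This is why 1.4.0 evicts less aggressively than 1.2.6 under the same load: expired items near the tail are recycled before any live item is thrown out.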



Issue tracker now set to spam the mailing list

2009-08-03 Thread dormando

Yo,

Updates to the issue tracker should now all hit the mailing list. The
traffic over there is relatively low, but only a small handful of us
actively check it - I'd prefer all information go to the same place. A few
threads which should have been mailing list discussions have instead been
held over there. This solves that problem :)

If it gets too spammy I'll separate it out into a -bugs mailing list.
Since we used to do all bug discussions on the ML anyway, this should be
no big deal.

-Dormando


Re: Patch to allow -F option to disable flush_all

2009-08-03 Thread dormando

Hey,

I read the whole thread and thought about it for a bit... I'm not sure we
should do this. Especially not as an explicit solution to a security
problem with a shared hosting cluster. We can roadmap out a method of
multitenancy (dustin's apparently done proof of concept before, and I can
imagine it being pretty easy)... but more long term.

If you disable flush, there're still a hundred ways where one customer can
screw another customer, or if there's a site injection, they could gather
information and destroy things you'd really rather they not otherwise.

They can either inject large faulty SET's to push things out of cache.. or
run 'stats cachedump' and fetch/delete random values from other
customers... or run 'stats sizes' in a loop and hang all your servers.

The -o discussion is good, but a separate discussion, and not related
towards security fixes in a multitenancy situation. We should revisit that
as the engine work comes in, since we'd need a set of extended options we
could pass into the engines?

-Dormando

On Fri, 24 Jul 2009, Adrian Otto wrote:


 Dormando,

 Thanks for your reply. The use case is for using memcached from a hosting
 environment where multiple subscribers share the same source IP address
 because they run application code together on the same cluster of web servers.
 The clusters are large, typically in the hundreds of nodes range. In this
 arrangement it's possible for one subscriber to dump the cache belonging to
 another, even when they have their own memcached instance running.

 We are also aware of horror stories where app developers don't properly
 sanitize user input that gets sent to memcached, potentially resulting in the
 equivalent of an SQL injection. It's possible to dump the cache using an
 exploit of such code to send a flush_all command and lead to rather serious
 database performance problems for busy sites when the cache is cold. Because
 we can't control the code that is run on our platform to protect us from this,
 we'd like a simple way to nip it in the bud right in memcached.

 We recognize that we could implement a more elaborate method of partitioning
 access to memcached on a per-subscriber basis, but we just wanted something
 simple to let them use an individual memcached instance if they want to,
 accepting the security implications of the shared environment.

 The feature is optional, defaults to off, and it only adds a simple check of a
 boolean to bypass the code in normal configuration. Furthermore, flush_all
 should be infrequently used anyway, so the performance implication of the
 additional data comparison should be moot. I appreciate the consideration.

 Thanks,

 Adrian

 On Jul 24, 2009, at 12:54 PM, dormando wrote:

 
  Hey,
 
  We've rejected a few similar patches in the past. Usually if folks need
  this they have bigger problems... What is your particular use case?
 
  I could see this going in though. It squicks me out but I'm open to
  opinions from the others :)
 
  -Dormando
 
  On Fri, 24 Jul 2009, Adrian Otto wrote:
 
   Hi,
  
    I've attached a patch for an option flag -F to disable the
    flush_all command in memcached. It also includes:
  
   1) A tiny tweak to testapp.c that allowed make test to pass
   2) Fixed a minor bug in t/binary.t with a variable scope.
   3) Fixed the memcached.spec file to include the protocol_binary.h and
   use the current version
  
   Please consider this for inclusion into future releases.
  
   It works like this:
  
   $ telnet localhost 11211
   Trying 127.0.0.1...
   Connected to localhost.localdomain (127.0.0.1).
   Escape character is '^]'.
   flush_all
    SERVER_ERROR flush_all command disabled
   quit
   Connection closed by foreign host.
   $
  
   I've attached a SPEC file that I adapted from DAG that works with
   RHEL5 for 1.4.0. Please consider adding that as an additional file in
   the dist.
  
   Cheers,
  
   Adrian Otto
  
  



Re: Connection timed out

2009-08-02 Thread dormando

Use 1.2.8 or 1.4.0... If your 'listen_disabled_num' stat is increasing,
you need to increase the connection limit.

Otherwise, you should post more information about how you specifically ran
the test. Code examples/etc.

-Dormando

On Sat, 1 Aug 2009, fredrik wrote:


 I have a question about network and memcached. All servers are in the
 same hosting location. Are 13 failed requests out of 1,560,000 requests
 per minute an acceptable failure ratio, or should we expect to have 100%
 success rate with the tcp protocol?

 Memcached 1.2.6, but we have also tried memcached 1.4.0 with same
 result.
 Pecl/Memcache 3.0.4
 PHP Version 5.2.9

 Error:
 Callback says it's connection timed out. But increasing the timeout
 does not help and pecl/memcached says system error.

 Running a stress test with client and server (localhost) on the same
 machine does not reproduce the error.



Re: Patch to allow -F option to disable flush_all

2009-07-24 Thread dormando

Hey,

We've rejected a few similar patches in the past. Usually if folks need
this they have bigger problems... What is your particular use case?

I could see this going in though. It squicks me out but I'm open to
opinions from the others :)

-Dormando

On Fri, 24 Jul 2009, Adrian Otto wrote:

 Hi,

 I've attached a patch for an option flag -F to disable the
 flush_all command in memcached. It also includes:

 1) A tiny tweak to testapp.c that allowed make test to pass
 2) Fixed a minor bug in t/binary.t with a variable scope.
 3) Fixed the memcached.spec file to include the protocol_binary.h and
 use the current version

 Please consider this for inclusion into future releases.

 It works like this:

 $ telnet localhost 11211
 Trying 127.0.0.1...
 Connected to localhost.localdomain (127.0.0.1).
 Escape character is '^]'.
 flush_all
 SERVER_ERROR flush_all command disabled
 quit
 Connection closed by foreign host.
 $

 I've attached a SPEC file that I adapted from DAG that works with
 RHEL5 for 1.4.0. Please consider adding that as an additional file in
 the dist.

 Cheers,

 Adrian Otto




Re: 1.3.3 marked stable in portage

2009-07-23 Thread dormando

Hey (fauli?),

I'd like to back this up... 1.4.0 is the latest stable, but 1.3.x
should've never been unmasked in gentoo. We know it passes tests, since we
like to release high quality release candidates and developer builds, but
we do not intend to support those binaries. Given the nature of memcached,
we want people to be extra educated when we put out something that is
experimental, since the downside of any bug can be catastrophic.

Thanks,
-Dormando

On Wed, 22 Jul 2009, Brian Moon wrote:


 Hi guys,

 I was not sure who was on the memcached mailing list, so I picked some of the
 folks from the memcached changelog who seem to be active on the package.

 1.3.3 is marked as stable in portage.  This tree was never stabilized by
 upstream (aka memcached developers).  Talking in #memcached IRC, we would like
 to request that 1.3.3 not be marked as stable.  If that is not possible, we
 would like to see 1.4.0 in portage ASAP to avoid any support issues with 1.3.3
 in IRC or on the mailing list.

 In addition, 1.2.8 is stable from upstream, but is not marked stable in
 portage.  Is this just because 1.3.3 > 1.2.8?  We would prefer that 1.2.8 is
 stable and 1.3.3 is not.

 Thanks for the great support for memcached on Gentoo.


 Brian.
 
 http://brian.moonspot.net/




Re: What does the options -b means

2009-07-20 Thread dormando

Unfinished feature - it was removed in later versions.

You'd do yourself a favor in upgrading to something much newer than that,
as well :)

-Dormando

On Sun, 19 Jul 2009, jacky wrote:


 Hi all:
I'm running the memcached 1.2.4 version, and it has one option -b,
    -b        run a managed instanced (mnemonic: buckets)

 I'm not sure what it means or the purpose of using this option. Can
 anybody explain it or tell me whether there is a document explaining it?
 Thanks a lot.



Re: Vendors, the project

2009-07-16 Thread dormando
Hey,

First; you are a Gear6 employee, correct? I would also like to point out
this is the first public note of the drama of the last few days. So if
you folks come back later, you can't blame me for bringing it public.

The reason it's come to this is highly obvious to me, but will not be easy
to communicate to everyone, and I really doubt you guys want me to list
every reason why that happened.

In short, you folks have gone out of your way to keep communications
private, keep to phone conversations, and otherwise refuse to talk unless
it's face to face. I don't need a guilt trip - I told you folks in the
beginning the same things I told Sun, and other companies. Send patches to
the mailing list. Send ideas to the mailing list. Communicate and bridge
issues on the mailing list, in irc, etc. When you refuse to maintain
public contact with us, you put us both in bad positions.

I do not have complaints about you as people, or what you are trying to
do. I do sincerely hope you realize that when the project maintainers send
an e-mail to the list saying "Hey, we have a dotorg booth available.
However, if you're a commercial vendor, we would highly prefer you do not
represent the community," and then you respond *privately* and *directly*
to the booth supplier, negotiate the booth, and then forget to say a
damn thing to the /mailing list/ that you've done so... Then doing that
*again* for a *second* conference... You're going to piss somebody off.

Please accept that, and accept an olive branch from us in starting over.
This as an issue does not have to extend past this message. The documents
brian listed are a good starting point, and I know Matt's had ideas on
this in the past as well. We should bounce around ideas on the list,
perhaps finalize this in person at OSCON next week, and stuff them onto
the wiki.

Then there shouldn't be any weird drama without someone clearly being at
fault, which is the even ground we thoroughly enjoy as an open source
project.

-Dormando

On Thu, 16 Jul 2009, luciano11 wrote:


 Good comments Brian.

 I don't know what sort of personal phobias or other irrational fears
 were behind the drama of the last few days but it really needs to
 stop.

 There is significant vendor interest in providing the community
 resources for OSCON (and other conferences) that could be used to make
 our presence professional and worthy of the effort put into the
 project.  I have heard discussion of the fancy signage that we had
 last time, documentation to hand out, as well as (good) machines and
 other hardware we can use for demos or performance races.

 I really can't wrap my head around what sort of argument can be made
 against accepting these offerings!


 On Jul 15, 5:11 pm, Brian Aker br...@tangent.org wrote:
  Hi!
 
  There has been a lot recently brought up about vendor interaction with  
  the project, and I wanted to add a few thoughts.
 
  My personal take is that seeing vendors show up and offer support/
  hardware/services is a good thing. It is a sign of both the growth and  
  health of the project.
 
  The thing about growth is that it is not always comfortable and there  
  can be more than a few sore points that happen along the way.  
  Personally? I'd like to find a way to have as much of this smoothed  
  over as possible.
 
  No one should be penalized for their efforts. There are a lot of hours  
  spent on memcached per week, hundreds of when you consider bug  
  testing, code, promotion, etc... all of this has value. There is no  
  one entity for this project, it is pretty multi-company/person (which  
  I personally think adds to the value of it).
 
  All of the growth in the project should be to the benefit of everyone.  
  This really is a case of "all boats rise with the water."
 
  So how do we get everyone participating in a manner that achieves the  
  end goal, which is the promotion, adoption, spread of Memcached?
 
  Let me throw out some material to read:
 
  http://wiki.postgresql.org/wiki/AdvocacyGuideshttp://wiki.postgresql.org/wiki/BoothCheckList
 
  Postgres has a long history of being many-vendor, and when I look  
  around I see them as one community we can learn from.  I suspect there  
  are others as well but having a common license and a common  
  distributed identity I am wondering whether we could follow their  
  model (or better improve on it).
 
  So what should be the plan? How do we encourage people and at the same  
  time set a level of what is appropriate for the community at large?
 
  On the same token, we really need to realize and accept that people  
  feed their families from the use of memcached, and we shouldn't be  
  creating barriers which harms this.
 
  Cheers,
          -Brian


OSCON booth reboot.

2009-07-16 Thread dormando

Hey,

We're rebooting the OSCON Memcached booth effort. This is next week, in
San Jose:
http://en.oreilly.com/oscon2009/public/content/expo-hall

wednesday/thursday. If you're interested in participating on the booth
planning and scheduling, please contact me privately and we'll be
discussing the final details off list (so there's some surprise for the
folks who show up at the booth;).

I have 3 dealer/expo/thingie passes left for those of you who want to
volunteer to help man the booth but do not have passes. Please let me know ASAP
since we're doing this a little last minute now. If you already have a
pass but want to help with the booth, that's fine, just let me know.

For those of you attending OSCON, please drop by to see us in the exhibit
hall. Ask goofy questions, blame us for all your earthly problems, or just
say hello :)

Thanks,
-Dormando


Re: Opinions on default UDP binding?

2009-07-15 Thread dormando

   It seems that the UDP binding that's on by default is causing more
 confusion than it's worth (difficulty to bring up a second instance on
 a different port, for example).

   I propose we do one of two things:

   1) Assume almost nobody uses it and just disable it by default, allowing
 people who actually use it to burn the resources and do the extra
 work.

   2) Create some kind of complicated, but intuitive port-follow rules
 so that when someone specifies a TCP binding port parameter, but not a
 UDP port binding parameter, that the UDP port binding is on the same
 number (and vice versa).


#2. We keep disabling/re-enabling the UDP stuff since folks want to write
clients that assume it's there sometimes.

Think the follow rules just need to be: if only one setting has been
overridden, the other one follows? Or is there need for something weirder?

-Dormando
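
[The follow rule proposed above can be sketched as below. This is an
illustrative sketch only, not memcached's actual option parsing; the function
name resolve_ports is made up for the example.]

```python
DEFAULT_PORT = 11211  # memcached's default port for both -p (TCP) and -U (UDP)

def resolve_ports(tcp=None, udp=None):
    """Sketch of the proposed follow rule: if only one of the two port
    settings is overridden, the other one follows it."""
    if tcp is None and udp is None:
        return DEFAULT_PORT, DEFAULT_PORT
    if tcp is None:
        return udp, udp    # only -U overridden: TCP follows UDP
    if udp is None:
        return tcp, tcp    # only -p overridden: UDP follows TCP
    return tcp, udp        # both overridden: use them as given

print(resolve_ports(tcp=11222))     # (11222, 11222)
print(resolve_ports(11300, 11400))  # (11300, 11400)
```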


Re: Cant make another instance of memcached to run on another port

2009-07-15 Thread dormando
 On Jul 14, 7:21 am, gunyhake gunyh...@gmail.com wrote:
  @tom sorry, i got the wrong switch.
 
  @dustin I have run again without -d switch, here is the result
 
  [root]# memcached -u nobody -m 512 -l 127.0.0.1 -U 11222
  failed to listen
 
  [root]# memcached -u root -m 512 -l 127.0.0.1 -U 11222
  failed to listen

   This is a particularly bad error message.  It's trying to tell you
 that it's failing to listen on TCP port 11211 (since you already have
 another instance doing so).  Add -p with an unused port (e.g. -p 11222).

   I'll file a bug for this.


What version of memcached are you running? Newer versions print the exact
error already...

-Dormando

Re: What happened to --enable-threads

2009-07-13 Thread dormando

Yup. There's no longer a single-threaded mode. If you want to emulate the
old behavior you can start it with a single thread via -t 1

-Dormando

On Mon, 13 Jul 2009, NICK VERBECK wrote:


 Was working on upgrading some servers from 1.2 to 1.4 and noticed that
 the --enable-threads config option no longer exists. Are threads
 enabled by default with no --disable-threads option now?

 --
 Nick Verbeck - NerdyNick
 
 NerdyNick.com
 SkeletalDesign.com
 VivaLaOpenSource.com
 Coloco.ubuntu-rocks.org



Memcached 1.4.0 (stable) Released!

2009-07-10 Thread dormando

Hey,

I'm pleased to announce the latest stable milestone for memcached, 1.4.0!

notes: http://code.google.com/p/memcached/wiki/ReleaseNotes140

download: http://memcached.googlecode.com/files/memcached-1.4.0.tar.gz

The newest version, 1.4.0, represents a lot of work from Trond, Dustin, and
others, bringing us the binary protocol, many new statistics, and
significant performance improvements. See the release notes for full
information, but in short, this release kicks ass. It opens the door for
us in future memcached development.

This is a dot-oh product, and we give no guarantees, but it is very well
tested. Please deploy this software and let us know how it goes for you.
The release is fully backwards compatible, so please feel free to replace
one or two servers and test it first - in fact we always recommend you
exercise similar caution when upgrading.

We are presently running 1.4.0 in production at Six Apart. We'll be
completing the full rollout over the next few business days.

Report bugs, give us feedback, let us know if you've tried it. It'll be a
big help.

Thanks,
-Dormando


Release schedule for 1.4.1 and beyond

2009-07-10 Thread dormando

Yo,

It's obvious I really screwed up on the delay for the 1.4.0 release. All I
can really do at this point is to make it up to the community by not ever
sucking like that again, or by hanging my hat and passing on.

It should be easy to stick to a schedule from now on. If I can't keep on
it we'll sort out a rotation.

I'd like to propose something mildly similar to the linux kernel schedule,
but inverted a little:

- 3 weeks after each new stable release, we release -rc1 for the next
release.
- New -rc's will be kicked out daily or bidaily if there are fixes.
- After 1 week in RC, unless there are still bug reports coming in, stable
is released.

So we should have a good stable release roughly once per month. Exceptions
can be made, as usual. Major bug finds, emergencies, of course warrant
earlier releases. Cycles with large code changes all at once might warrant
an earlier cut to -rc1 and a 2-3 week -rc cycle (still trying to round it
out at a month overall).

Development should stay the way it is. All of us making code changes
should be cc'ing as much as possible to the mailing list. I've noticed
some of this has dropped off a little in recent days, but it's easy to
pick that back up again. I do realize it's easier for people to just
follow us on github, but lets stay open for the sake of discussion.

Since OSCON is coming up, I'd like to give an extra week for the first
cycle. Tentative date for 1.4.1-rc1 will be july 30th, with the stable
release of 1.4.1 being on august 6th.

Thoughts?
-Dormando


List pseudo moderation.

2009-06-15 Thread dormando

Yo,

I hear people hate spam. Well, I do too. When we moved the lists, I wasn't
really expecting to see spam under the almighty google groups. However, it
happens.

Just ticked the box that says new members get their posts moderated. This
means the moderators have a *little* more work to do when people first
sign up, but the whole dang list won't be getting spammed.

On that note, I would like to bless a few more folks to 'manager' status,
if you're willing to help with list moderation and I know you or would
recognize you as a reliable list member. Reply to me privately with your
preferred google account/e-mail/whatever if you're up to it.

(I hear people like work too, we're still doing that;) stay tuned)

Thanks,
-Dormando


Re: segfaults

2009-05-26 Thread dormando

32-bit systems have a few different options of memory address splitting
based on what distro/kernel/patches you're using.

The ancient approach was a 2G userspace / 2G kernelspace split per process.
Ingo Molnar added a 3G/1G split at some point, so userspace applications
can use 3G of space while reserving 1G for stacks and other kernel crap.

64-bit servers are highly recommended :) -m 3000 might not be low enough
as well, since memcached memory floats up and down a bit depending on how
busy it is and what types of requests it's servicing. Large multigets can
cause some bloat, for instance.

-Dormando

On Tue, 26 May 2009, Gavin Hamill wrote:


 This seems to have been memory related after all.

 When I reduce the memory usage with -m 3000 rather than -m 3200 then the
 machines are stable :/

 These are running on 32-bit Debian lenny without any 'large mem'
 patches, so is the maximum RAM per process simply 3072MB (3 x 1024) ? I
 thought each 32-bit app had a full 4GB memory space, just not permitted
 any more than that..)

 Cheers,
 Gavin.
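
[For a rough feel of the arithmetic in this thread: with a 3G/1G split, the
-m limit plus memcached's floating overhead must fit within ~3072 MB of
address space. The overhead figures below are assumptions for the example,
not measured values.]

```python
# Back-of-the-envelope check for the 32-bit case discussed above.
USERSPACE_LIMIT_MB = 3 * 1024  # 3G/1G split: ~3072 MB usable per process

def fits(m_flag_mb, overhead_mb):
    """True if the -m limit plus floating overhead (connection buffers,
    multiget bloat, slab bookkeeping) stays inside the address space."""
    return m_flag_mb + overhead_mb <= USERSPACE_LIMIT_MB

# With ~100 MB of assumed overhead, -m 3200 cannot fit at all,
# while -m 3000 leaves a little headroom.
print(fits(3200, 100))  # False
print(fits(3000, 70))   # True
```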





Re: memcache-top

2009-04-22 Thread dormando

Yo,

Normally you'd say "hey, thanks for the contribution" and then something
along the lines of "hmm, actually I think it'd be awesome if this were part
of the libmemcached distro and had a few more features. I'll go hack this
up and have it done in a few days!". Saying "gack, dirty! I can do
better!" is actually kind of dickish.

Please don't contribute more to *this* side of the thread though. I've
written this memcached-top like utility several times before, but never
managed to OSS it. I'd been wanting to create a memtop for the
libmemcached repo as a final replacement, so having both of these out
there would be fantastic. I really don't want to discourage either poster
from doing work, I just like to encourage that this list tend to hold up
its reputation of being cordial and helpful.

-Dormando

On Wed, 22 Apr 2009, Jozef Sevcik wrote:

 Hi,

 I would usually stay out of things like this but what's offensive about saying
 "really good idea, but your code is not good, IMHO."? (note the IMHO)

 I really haven't checked out the code and I'm not going to rate it, but I
 don't think such statement is offensive.

 How would you feel if I (or the community) torched the new
 code that you're working on?
 I would ask for them arguments/opinions and then think about them.

 Jozef

 2009/4/22 Toru Maesaka tmaes...@gmail.com

 
  Hi!
 
  I would usually stay out of things like this but could you
   please be less offensive? Saying things like this to someone
   who shared something nice with the public is beyond rude.
  This is clearly bad community spirit.
 
  How would you feel if I (or the community) torched the new
  code that you're working on?
 
  Cheers,
  Toru
 
  On Wed, Apr 22, 2009 at 2:34 PM, gf kak.serpom.po.yait...@gmail.com
  wrote:
  
    Hi. Just it's dirty. So I'm writing a good solution using libmemcached,
    with daemon-mode that can report abnormal stats, with formatted
    output. Just wait :) 2-3 days.
  
    On 23 Apr, 01:16, Mat Williams williams@gmail.com wrote:
   hi,
  
   i like the idea for this project, nice work nicholas. i have looked at
   the output and it seems like it will be a useful tool for me too.
  
   gf, can you please tell me why this code is not good - should i be
   worried about running it against production servers?
  
   thanks,
   m...@.
  
   On Wed, 2009-04-22 at 13:00 -0700, gf wrote:
Hi. It's really good idea, but your code is not good, IMHO.
I've started the same project now..
It will be released soon.
  
 On 22 Apr, 23:34, ntang nicholast...@gmail.com wrote:
 Hey all.  First post...
  
 We've been using memcached for a while, but we've never really done
 much to monitor it past making sure the servers were up and running.
 Anyways, we recently had some issues that looked like they might
  have
 been related to memcached performance/ usage, and I figured it was
 about time that we started taking a look at it.  We've added graphs
 for various stats so we can track them over time, and added nagios
 checks for the stats as well, but I also wanted a quick way to see
  the
 immediate state of the cluster.
  
 So I wrote a little tool.  Hopefully people will find it useful.
   It's
 mostly configured by editing a config block up top, sue me.  It's
 cheesy but works.  The first time you run it, you'll need (at a
 minimum) to populate @servers.
  
 It's here:http://code.google.com/p/memcache-top/
  
 (In retrospect I should've named it memcached-top, but such is life.
 I think people will be able to figure it out, and maybe if I put out
 another 'release' (*cough*) I'll rename it.  ;)  )
  
 Thanks,
 Nicholas
 



 --
 Jozef



Re: Max length incr integer

2009-04-19 Thread dormando

incr values are 64-bit.

-Dormando

On Sun, 19 Apr 2009, Abdul-Rahman Advany wrote:


 Hi guys,

 I am abusing memcached as a message queue. But I realized that there
 must be an maximum length for the integer stored when using incr.

 How can I determine what the maximum value is? I use memcache both on
 a 32bit system and a 64bit...

 Regards,

 Abdul



Re: Max length incr integer

2009-04-19 Thread dormando
Yes.

On Sun, 19 Apr 2009, Abdul-Rahman Advany wrote:


 Even on a 32bit machine?

 On Apr 20, 12:38 am, dormando dorma...@rydia.net wrote:
  incr values are 64-bit.
 
  -Dormando
 
  On Sun, 19 Apr 2009, Abdul-Rahman Advany wrote:
 
   Hi guys,
 
   I am abusing memcached as a message queue. But I realized that there
   must be an maximum length for the integer stored when using incr.
 
   How can I determine what the maximum value is? I use memcache both on
   a 32bit system and a 64bit...
 
   Regards,
 
   Abdul


pecl memcache/memcached

2009-04-16 Thread dormando

Anyone want to volunteer to throw up a wiki page with a (dated) comparison
of the two? You can either submit the page to the list for someone to post
or request wiki access if we know you.

It's coming up a lot and I sorta just want to throw a tinyurl pointing to
an FAQ entry at people.

Thanks,
-Dormando


Re: segmentation fault from stats slabs

2009-04-13 Thread dormando
Shouldn't be ... the allocation is the same, but no new stats have been
added into stats slabs under 1.2.8, so the allocation is still correct.

-Dormando

On Mon, 13 Apr 2009, Evan Weaver wrote:


 Is this bug present in 1.2.8?

 Evan

 On Thu, Apr 2, 2009 at 12:56 PM, Dustin dsalli...@gmail.com wrote:
 
 
  On Apr 2, 1:03 am, Toru Maesaka tmaes...@gmail.com wrote:
 
  So we've been testing memcached-1.3.2 beta at mixi.jp for
  a week and we found an issue in the stats code.
 
  The problem is that the server segfaults if we issue the
  stats slabs command to a daemon that has lots of slab
  classes (approximately 40).
 
  The cause is simply not allocating enough memory for the
  result buffer in do_slabs_stats() in slabs.c:
 
    char *buf = (char *)malloc(power_largest * 200 + 100);
 
   That's a good find, though really kind of unsurprising since there
  were a lot of places where we were sort of guessing about how much
  memory we needed to build out stats.
 
   Trond did some awesome work on making *all* stat buffers dynamic.
  There was one small bug in it that we found early, but this should
  categorically clean up these problems.
 
 



 --
 Evan Weaver


Re: For review: Stats slabs should include the number of requested bytes

2009-04-08 Thread dormando

I second the idea of making the value a little more clear/useful. My only
reservation is to not convert it to a lossy calculated value... It's
useful to be able to write a monitor to watch the more exact ratios, or
graph exact sizes. Which you lose if you calculate down the information
too far.

That said ... I don't have any great alternative ideas offhand.

-Dormando

On Wed, 8 Apr 2009, Eric Lambert wrote:


 Looks fine to me. Although perhaps we could use a better description. The text
 'mem_requested' is a little unclear to me. Maybe we call it
 'bytes_used_in_slab', but that's a bit wordy.

 Also, I wonder if we should represent this value a little differently. So the
 point of this is to expose wasted space in the slab, right? If we just
 represent this as a raw number of bytes, it's not immediately clear that there
 is an issue ... I have to compare this value against the chunk size and
 number of chunks used. Couldn't we instead represent this as a ratio of bytes
 used to bytes available (bytes_used/(used_chunks * chunk_size)), thereby making
 it a bit clearer, at least to me :-), what the significance of this number is.

 For example, in your case, you have one entry that uses 51 bytes of an 80 byte
 chunk. If we represent this value as a ratio, the stats would show that this
 slab config results in wasting ~36% of the available slab space.

 my $.02

 Eric

 Trond Norbye wrote:
 
  Issue: http://code.google.com/p/memcached/issues/detail?id=42
 
  It should be possible to detect the number of bytes actually allocated in a
  given slab class to make it easier to detect if one is using the wrong
  growth factor.
 
 
  Patch: http://github.com/trondn/memcached/tree/issue_42
 
  The stats slabs output looks like:
 
  set a 1 0 1
  a
  STORED
  stats slabs
  STAT 1:chunk_size 80
  STAT 1:chunks_per_page 13107
  STAT 1:total_pages 1
  STAT 1:total_chunks 13107
  STAT 1:used_chunks 1
  STAT 1:free_chunks 0
  STAT 1:free_chunks_end 13106
  STAT 1:mem_requested 51
  STAT 1:get_hits 0
  STAT 1:cmd_set 1
  STAT 1:delete_hits 0
  STAT 1:incr_hits 0
  STAT 1:decr_hits 0
  STAT 1:cas_hits 0
  STAT 1:cas_badval 0
  STAT active_slabs 1
  STAT total_malloced 1048560
  END
  quit
 
  Cheers,
 
  Trond
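
[The ratio Eric describes can be computed from the stats slabs fields in
Trond's example output. A hypothetical helper sketching the calculation;
slab_waste is not a real memcached stat, just an illustration.]

```python
def slab_waste(mem_requested, used_chunks, chunk_size):
    """Fraction of allocated chunk space not covered by actual item
    bytes, computed from three 'stats slabs' fields."""
    allocated = used_chunks * chunk_size
    return (allocated - mem_requested) / allocated

# Numbers from the example: one 51-byte item in an 80-byte chunk.
ratio = slab_waste(mem_requested=51, used_chunks=1, chunk_size=80)
print(round(ratio * 100, 2))  # 36.25
```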
 




Re: 1.2.7-rc tree for final review.

2009-04-06 Thread dormando

Hey,

I see some warnings but no compilation errors. (1.2.7 was build-tested on
openbsd 4.3). Does make test fail for you?

-Dormando

On Tue, 7 Apr 2009, Artur wrote:


 Hi,
    I have a problem compiling the latest release, 1.2.7, under OpenBSD
 4.3.
 Messages from ./configure:
 # ./configure
 checking build system type... i386-unknown-openbsd4.3
 checking host system type... i386-unknown-openbsd4.3
 checking target system type... i386-unknown-openbsd4.3
 checking for a BSD-compatible install... /usr/bin/install -c
 checking whether build environment is sane... yes
 checking for a thread-safe mkdir -p... ./install-sh -c -d
 checking for gawk... no
 checking for mawk... no
 checking for nawk... nawk
 checking whether make sets $(MAKE)... yes
 checking for gcc... gcc
 checking for C compiler default output file name... a.out
 checking whether the C compiler works... yes
 checking whether we are cross compiling... no
 checking for suffix of executables...
 checking for suffix of object files... o
 checking whether we are using the GNU C compiler... yes
 checking whether gcc accepts -g... yes
 checking for gcc option to accept ISO C89... none needed
 checking for style of include used by make... GNU
 checking dependency style of gcc... gcc3
 checking whether gcc and cc understand -c and -o together... yes
 checking for a BSD-compatible install... /usr/bin/install -c
 checking for libevent directory... (system)
 checking for library containing socket... none required
 checking for library containing gethostbyname... none required
 checking for library containing mallinfo... no
 checking for daemon... yes
 checking how to run the C preprocessor... gcc -E
 checking for grep that handles long lines and -e... /usr/bin/grep
 checking for egrep... /usr/bin/grep -E
 checking for ANSI C header files... yes
 checking for sys/types.h... yes
 checking for sys/stat.h... yes
 checking for stdlib.h... yes
 checking for string.h... yes
 checking for memory.h... yes
 checking for strings.h... yes
 checking for inttypes.h... yes
 checking for stdint.h... yes
 checking for unistd.h... yes
 checking for stdbool.h that conforms to C99... yes
 checking for _Bool... yes
 checking for an ANSI C-conforming const... yes
 checking malloc.h usability... yes
 checking malloc.h presence... yes
 checking for malloc.h... yes
 checking for struct mallinfo.arena... no
 checking for socklen_t... yes
 checking for endianness... little
 checking for mlockall... yes
 checking for getpagesizes... no
 checking for memcntl... no
 configure: creating ./config.status
 config.status: creating Makefile
 config.status: creating doc/Makefile
 config.status: creating config.h
 config.status: config.h is unchanged
 config.status: executing depfiles commands

 And the error during compilation process:

 # make
 make  all-recursive
 Making all in doc
 gcc -DHAVE_CONFIG_H -I.  -DNDEBUG   -g -O2 -MT memcached-memcached.o -MD -MP
 -MF .deps/memcached-memcached.Tpo -c -o memcached-memcached.o `test -f
 'memcached.c' || echo './'`memcached.c
 mv -f .deps/memcached-memcached.Tpo .deps/memcached-memcached.Po
 gcc -DHAVE_CONFIG_H -I.  -DNDEBUG   -g -O2 -MT memcached-slabs.o -MD -MP -MF
 .deps/memcached-slabs.Tpo -c -o memcached-slabs.o `test -f 'slabs.c' || echo
 './'`slabs.c
 mv -f .deps/memcached-slabs.Tpo .deps/memcached-slabs.Po
 gcc -DHAVE_CONFIG_H -I.  -DNDEBUG   -g -O2 -MT memcached-items.o -MD -MP -MF
 .deps/memcached-items.Tpo -c -o memcached-items.o `test -f 'items.c' || echo
 './'`items.c
 mv -f .deps/memcached-items.Tpo .deps/memcached-items.Po
 gcc -DHAVE_CONFIG_H -I.  -DNDEBUG   -g -O2 -MT memcached-assoc.o -MD -MP -MF
 .deps/memcached-assoc.Tpo -c -o memcached-assoc.o `test -f 'assoc.c' || echo
 './'`assoc.c
 mv -f .deps/memcached-assoc.Tpo .deps/memcached-assoc.Po
 gcc -DHAVE_CONFIG_H -I.  -DNDEBUG   -g -O2 -MT memcached-thread.o -MD -MP -MF
 .deps/memcached-thread.Tpo -c -o memcached-thread.o `test -f 'thread.c' ||
 echo './'`thread.c
 In file included from thread.c:15:
 /usr/include/malloc.h:4:2: warning: #warning malloc.h is obsolete, use
 stdlib.h
 mv -f .deps/memcached-thread.Tpo .deps/memcached-thread.Po
 gcc -DHAVE_CONFIG_H -I.  -DNDEBUG   -g -O2 -MT memcached-stats.o -MD -MP -MF
 .deps/memcached-stats.Tpo -c -o memcached-stats.o `test -f 'stats.c' || echo
 './'`stats.c
 mv -f .deps/memcached-stats.Tpo .deps/memcached-stats.Po
 gcc  -g -O2   -o memcached memcached-memcached.o  memcached-slabs.o
 memcached-items.o  memcached-assoc.o memcached-thread.o  memcached-stats.o
 -levent
 memcached-memcached.o(.text+0x3601): In function `server_socket_unix':
 /usr/src/memcached-1.2.7/memcached.c:2545: warning: strcpy() is almost always
 misused, please use strlcpy()
 memcached-memcached.o(.text+0x188c): In function `process_stat':
 /usr/src/memcached-1.2.7/memcached.c:1076: warning: sprintf() is often
 misused, please use snprintf()
 gcc -DHAVE_CONFIG_H -I.  -g -O2 -MT memcached.o -MD -MP -MF
 .deps/memcached.Tpo -c -o memcached.o

Re: New Memcached Releases Today: 1.2.7 and 1.3.3

2009-04-03 Thread dormando

Yay to everyone involved!

-Dormando

On Fri, 3 Apr 2009, Dustin wrote:


 Two new memcached releases are available today.

 Stable 1.2.7

 The new stable release which is a maintenance release of the 1.2
 series containing several bugfixes and a few features.

 This version is recommended for any production memcached instances.

   Release Notes:
   http://code.google.com/p/memcached/wiki/ReleaseNotes127

   Download:
   http://memcached.googlecode.com/files/memcached-1.2.7.tar.gz


 Beta 1.3.3

 The new 1.3 beta brings lots of new features, performance, protocol
 support and more to memcached.

 Everyone is encouraged to get this into their labs and abuse it as
 much as possible.  This will be the stable tree.  We've been testing
 it quite thoroughly in the memcached community already and find it to
 be quite stable, but we're always looking for more complaints.

   Release notes:
   http://code.google.com/p/memcached/wiki/ReleaseNotes133

   Download:
   http://memcached.googlecode.com/files/memcached-1.3.3.tar.gz



memcached 1.2.7-rc1

2009-04-01 Thread dormando

Yo,

We have a release candidate up for 1.2.7 (stable tree):
http://memcached.googlecode.com/files/memcached-1.2.7-rc1.tar.gz

... give it a shot. Check out the git history for the changelog:
http://consoleninja.net/gitweb/gitweb.cgi?p=memcached.git;a=shortlog;h=1.2.7-rc

I'm running it on a production server on typepad for a day or two to be
absolutely sure, but I have a high confidence level in this release.
Please try it out and compile/test/beat on it.

I expect that sometime tomorrow we'll convert this to a final release,
along with formal release notes. Unless someone finds a glaring bug in the
next few hours ;) All of the patches in here have been up for review for a
few days now.

have fun,
-Dormando


1.2.7-rc tree for final review.

2009-03-28 Thread dormando

Yo,

http://consoleninja.net/gitweb/gitweb.cgi?p=memcached.git;a=shortlog;h=1.2.7-rc

In addition to the changes dustin posted, I'm proposing the following
changes.

Most of my own changes/can/should/will be pulled into the dev branch too.

The changes in here are merges from dev that I believe are minor enough
and useful enough to go into the stable tree, fixes, and some minor new
features from myself. Other changes, while tempting, will instead serve
as motivation to get all ya'll lazy asses onto the development series.

The change I want the most scrutiny on is
7208606e817dc401bcd3fde17038a3dfad604cab
... which adds a really goofy hack to mitigate refcount leaks in the
system. I've not been able to nail down the last pesky bug or two which
causes those to happen, and this should prevent severe issues going
forward, and should stick around even if we find/fix all bugs.

I'll let this simmer through tomorrow, maybe monday, then if there're no
major complaints it'll become 1.2.7-rc. We'll try to keep the 1.2.7-rc
process short since the tree's been settled for so long.

Fanks,
-Dormando

commit c607401efd030d019a47bb100fc4d397801143ce
Author: dormando dorma...@rydia.net
Date:   Fri Mar 27 16:12:46 2009 -0700

two new troubleshooting stats

accepting_conns for completeness, and listen_disabled_num to see how many
times you've hit maxconns and disabled incoming connections. probably a good
stat to monitor and flip out on.

commit 8620ef062f0cdacffcdcd6d40c70fcdc9cdb9c02
Author: dormando dorma...@rydia.net
Date:   Wed Mar 25 00:55:19 2009 -0700

add a cmd_flush stat

shouldn't add much lock contention for just this.
I want to add this one stat (maybe a few more?) since it's happened more than
once that folks think memcached is broken when a cron or something is calling
'flush_all' once a minute.

commit 758f6548acb88d9f446e7369a7564d5382d1e2d2
Author: dormando dorma...@rydia.net
Date:   Tue Mar 24 23:55:09 2009 -0700

print why a key was not found when extra verbose

simple logs for simple people. Patch inspired by a bug hunting session with
evan weaver. It's been useful a few times since.

commit 7208606e817dc401bcd3fde17038a3dfad604cab
Author: dormando dorma...@rydia.net
Date:   Sat Mar 28 00:16:38 2009 -0700

dumb hack to self-repair stuck slabs

since 1.2.6, most of the refcount leaks have been quashed.
I still get them in production, extremely rarely.
It's possible we'll have refcount leaks on and off even in the future.

This hack acknowledges this and exists since we want to guarantee, as much as
possible, that memcached is a stable service. Having to monitor for and
restart the service on account of rare bugs isn't acceptable.

commit bc3b2e295ef770b682e90a12e2b5e9acfdf85971
Author: dormando dorma...@rydia.net
Date:   Thu Mar 26 00:19:00 2009 -0700

fix a handful of socket listen bugs.

AF_UNSPEC is still necessary for UDP sometimes.
We guarantee that at least one address returned from getaddrinfo binds
successfully, and in cases of lacking network or ipv6 addresses some of those
socket() calls might fail. That's normal. We were bailing on them.
This change also removes the need to pass AI_ADDRCONFIG on machines with ipv6
stacks disabled.

commit cd9f5e17a5db997af16a8fccb7a59efb0c8e52c0
Author: Dustin Sallings dus...@spy.net
Date:   Tue Mar 17 15:02:07 2009 -0700

stats slab's used_chunks should show chunks put to use

It was a bit unclear what it was doing before, but it started out with
a value equal to total_chunks, which was surely wrong.

This change and its accompanying test ensure the value makes a bit
more sense.

commit bf832f3ae339ce8dbcd72dc4d38eb2985a122a32
Author: Dustin Sallings dus...@spy.net
Date:   Fri Mar 13 20:24:14 2009 -0700

A bit more space for server_stats, and an assertion (bug 27).

I couldn't figure out how to get the stats output big enough to exceed
1024, but I accept it might.

I've given it a bit more space here and added an assertion to detect
when we fail in case we can figure out how to actually have this occur.

Maxing out pretty much everything got me up to 828 bytes:

STAT pid 35893
STAT uptime 152
STAT time 1237000260
STAT version 1.3.2
STAT pointer_size 32
STAT rusage_user 0.002487
STAT rusage_system 0.005412
STAT curr_connections 4
STAT total_connections 5
STAT connection_structures 5
STAT cmd_get 18446744073709551610
STAT cmd_set 18446744073709551610
STAT get_hits 18446744073709551610
STAT get_misses 18446744073709551610
STAT delete_misses 18446744073709551610
STAT delete_hits 18446744073709551610
STAT incr_misses 18446744073709551610
STAT incr_hits 18446744073709551610
STAT decr_misses 18446744073709551610
STAT decr_hits 18446744073709551610
STAT cas_misses 18446744073709551610
STAT cas_hits 18446744073709551610
STAT

Re: 1.2.7-rc tree for final review.

2009-03-28 Thread dormando



On Sat, 28 Mar 2009, dormando wrote:


 Yo,

 http://consoleninja.net/gitweb/gitweb.cgi?p=memcached.git;a=shortlog;h=1.2.7-rc

 In addition to the changes dustin posted, I'm proposing the following
 changes.


... I guess it's worth noting I haven't updated the changelog or bumped
the version yet. Might also need to add some docs for some of those stats
I added?

Anyway. Thanks all, review away. 1.2.7 is shaped up to be a good, stable,
farewell to the old memcached we've grown to hate^Wlove.

-Dormando


Re: For review: using git describe for version numbers

2009-03-26 Thread dormando

Initially I thought hey, that'll probably break shipit, then I remembered
shipit's just used for every other project I ship.

Seems fine to me. Maybe add the git steps into the HACKING or README or
whatever file somewhere?
-Dormando

On Thu, 26 Mar 2009, Dustin wrote:



 Oh, patch might be good...

 http://github.com/dustin/memcached/commit/923a335bf8613696d658448cd9c48a963924d436

 commit 923a335bf8613696d658448cd9c48a963924d436
 Author: Dustin Sallings dus...@spy.net
 Date:   Mon Mar 9 11:52:25 2009 -0700

 Use git's version number for releases.

 This will allow more specific version numbers, while simplifying a
 proper release down to a tag and make dist.

 During development, ./version.sh needs to run periodically to
 update
 the version number.  I'd recommend just adding a call to
 version.sh as
 a git post commit hook:

 % cat .git/hooks/post-commit

 echo Updating version.
 ./version.sh

 (and make sure the file is executable)

 diff --git a/.gitignore b/.gitignore
 index 195b246..f3d6757 100644
 --- a/.gitignore
 +++ b/.gitignore
 @@ -33,4 +33,5 @@ memcached-*.tar.gz
  doc/protocol-binary-range.txt
  doc/protocol-binary.txt
  /sizes
 -/internal_tests
 \ No newline at end of file
 +/internal_tests
 +/version.m4
 diff --git a/Makefile.am b/Makefile.am
 index 4cf00b4..af41fe4 100644
 --- a/Makefile.am
 +++ b/Makefile.am
 @@ -58,7 +58,7 @@ memcached_debug_dtrace.o: $(memcached_debug_OBJECTS)

  SUBDIRS = doc
  DIST_DIRS = scripts
 -EXTRA_DIST = doc scripts TODO t memcached.spec memcached_dtrace.d
 +EXTRA_DIST = doc scripts TODO t memcached.spec memcached_dtrace.d
 version.m4

  MOSTLYCLEANFILES = *.gcov *.gcno *.gcda *.tcov

 diff --git a/autogen.sh b/autogen.sh
 index 873f0a4..3db9801 100755
 --- a/autogen.sh
 +++ b/autogen.sh
 @@ -7,6 +7,9 @@
  #apt-get install automake1.7 autoconf
  #

 +# Get the initial version.
 +sh version.sh
 +
  echo aclocal...
  ACLOCAL=`which aclocal-1.10 || which aclocal-1.9 || which aclocal19
 || which aclocal-1.7 || which aclocal17 || which aclocal-1.5 || which
 aclocal15 || which aclocal || exit 1`
  $ACLOCAL || exit 1
 diff --git a/configure.ac b/configure.ac
 index 182b105..f3fa8b7 100644
 --- a/configure.ac
 +++ b/configure.ac
 @@ -1,5 +1,6 @@
  AC_PREREQ(2.52)
 -AC_INIT(memcached, 1.3.2, b...@danga.com)
 +m4_include([version.m4])
 +AC_INIT(memcached, VERSION_NUMBER, b...@danga.com)
  AC_CANONICAL_SYSTEM
  AC_CONFIG_SRCDIR(memcached.c)
  AM_INIT_AUTOMAKE(AC_PACKAGE_NAME, AC_PACKAGE_VERSION)
 diff --git a/version.sh b/version.sh
 new file mode 100755
 index 000..8a58aef
 --- /dev/null
 +++ b/version.sh
 @@ -0,0 +1,8 @@
 +#!/bin/sh
 +
 +if git describe > version.tmp
 +then
 +echo m4_define([VERSION_NUMBER], [`tr -d '\n' < version.tmp`]) \
 + > version.m4
 +fi
 +rm version.tmp



1.2.7 call for fixes

2009-03-25 Thread dormando

Yo,

In a few hours (or sometime tomorrow, at this point), I'm posting the
final patch series for 1.2.7-rc for public review. They'll be up for about
a day or two before we kick off the -rc.

I've been combing my inbox for days and collecting the little odds and
ends I might've missed, along with my old notes (it *has* been a while,
holy crap I'm sorry). If there's something on your wish list let me know
directly or on the list.

The 1.2 series is going into pseudo-maintenance mode. The 1.3 series will
get any significant work, and 1.2 will be restricted to bugfixes and
usability improvements. Very minor feature enhancements might still be
considered, but are better candidates for the development series.

This *does* mean we're buckling down for the eventual next stable series
(1.4 and beyond), so I strongly encourage folks to go grab the beta and
try it out:
http://code.google.com/p/memcached/downloads/list

have fun,
-Dormando


Broken commit: 5a44468 Use AI_ADDRCONFIG more selectively.

2009-03-25 Thread dormando

Yo,

'stable' tree from dustin has a commit (5a44468) which disables the
AI_ADDRCONFIG flag unless -l is specified. This causes memcached to not
start on machines without ipv6 configured unless you specify the -l
option.

I ... don't have a testbed on hand to figure out the original intent of the
patch (make memcached start with no networking configured?).

Before I back out the change, anyone else know how to fix it properly?

Thanks,
-Dormando


Re: For review: Issue 37

2009-03-23 Thread dormando

maybe copy/paste the bug subject into the e-mail subject? :P Sometimes I
wake up to a list of numbers to review. hard to keep in order until I use
the actual issue viewer :P

-Dormando

On Mon, 23 Mar 2009, Trond Norbye wrote:


 Issue: http://code.google.com/p/memcached/issues/detail?id=37
 Patch: http://github.com/trondn/memcached/tree/issue_37

 Cheers

 Trond




Re: memcache memory limit

2009-03-20 Thread dormando
It's at least a couple megs. It depends on how many parallel connections
you have, and the typical size of your read buffers.

If you do a lot of large multi-gets, you'll use more ram than otherwise.
Future releases will probably track this memory more closely? It's not
hard to do.

-Dormando

On Fri, 20 Mar 2009, JC wrote:


 By the way, has anyone already tried to quantify in a *more
 scientific* manner the actual extra memory we should let on a box to
 be sure memcached won't swap ?

 I guess memcached related overhead should be proportional to the
 expected number of connection and their traffic. I guess this is about
 the same for the OS with its internal buffers. But I must confess that
 so far, I don't know the exact multiplying factor(s) ...

 Jean-Charles

 On Mar 20, 8:36 am, Trond Norbye trond.nor...@sun.com wrote:
  On Mar 20, 2009, at 8:10 AM, Sudipta Banerjee wrote:
 
   it has 8 gigs of ram and its 64 bit
 
  Compile memcached as a 64bit binary and you should be able to use as  
  much memory as you want (I have tested with up to 30Gb)... Please note  
  that memcached use more memory than the memory specified with -m (for  
  internal buffers etc), and you should leave memory for the os and  
  other processes. The last thing you want is that your server is paging  
  in and out memory pages..
 
  Cheers,
 
  Trond


Re: UNSUBSCRIBE

2009-02-01 Thread dormando

Guys, please don't respond to these e-mails anymore. One of the list
maintainers will unsubscribe them eventually.

I know I know, the "Everyone shut up" e-mail, but the list is getting a
little chatty recently. It's a big list, please stay on topic if you can.

Thanks,
-Dormando

On Fri, 30 Jan 2009, Ray Krueger wrote:


  UNSUBSCRIBE
 
  This message is private and confidential. If you have received it in error,
  please notify the sender and remove it from your system.
 

 Fail



Re: stats subcommands in memcached...

2009-02-01 Thread dormando

I think some clients have been developed around the cachedump command...
Okay maybe just the FUSE client, but I caught someone trying to use that
in prod at *least* once...

It's more user friendly as a command that pulls a subset of the data. If
it were to be modified and brought in as an official command, it would at
least need an argument to specify a limit. Otherwise those quick
debugging routines would require a massive datadump you'd then have to
trawl through.

I'm still convinced a full dump command will be abused more than it's
worth, but I forget where we left off last on that debate.

-Dormando

On Wed, 28 Jan 2009, Toru Maesaka wrote:


 Hi,

 'stats maps' imho, isn't so useful since we can get the same information
 with commands like ldd(1). This doesn't need to be in the server since
 low layer folks that are interested in this information would know what
 to do to obtain this information.

 'stats malloc', would be useful for debugging the slabber but you do
 have a point, it doesn't need to be in memcached. The actual useful
 information is provided with 'stats slabs'.

 'stats cachedump', AFAIK, is not an officially supported subcommand.
 As in, it is not stated in the protocol document. I think we can say that
 it is safe to remove an undocumented command.

 My two cents :)

 Toru

 On Wed, Jan 28, 2009 at 5:39 PM, Trond Norbye trond.nor...@sun.com wrote:
 
  Hi,
 
  I have been looking at some of the stats subcommands in memcached, and
  personally I would like to kill some of them (I believe that they have
  nothing to do in memcached):
 
  stats malloc - What would you use the output of this command for?? the
  biggest malloc user is the slab allocator, but that always allocates 1MB
  chunks.. (and stats slabs will give detailed info for the slab allocation).
  I believe that getting detailed information about the malloc implementation
  doesn't belong in the memcached protocol, but you should be able to use
  other tools on your system to monitor this.
 
  stats maps - I don't think this belongs in the protocol (and I have no idea
  what you would use the output to in a deployed scenario). There are plenty
  of tools available on different platforms that would give you more info..
 
  So.. Are people using these features and would you be extremely sad if we
  removed them from the next version of memcached.
 
  My next question is stats cachedump. Are people using that feature??
  Personally I would like to kill that as a stats subcommand, and try to think
  of a better way we may help developers to try to debug their application.
 
  So what do people think?
 
  Trond
 
 



Re: binary noreply commands

2009-02-01 Thread dormando

Yeah, my main complaint about the original 'noreply' code is that the
error handling was completely FUBAR. I'm sure someone somewhere will want
a noreply command that never gives them errors, but that's really really
broken.

+1 to having it crap back errors.

-Dormando

On Mon, 26 Jan 2009, Toru Maesaka wrote:


   In the binary protocol, we can get both the efficiency gains of
  sending commands that are likely to be successful, and still be able
  to handle specific failure cases due to having opaques that can map
  back to the requests.

 True. It definitely sounds wasteful to not take advantage of this.

   Separate in what way?  setq vs. setqq?

 Heh, I was trying to think of ways to provide the behavior as we know
 from the ascii protocol and a case that can return an error but you've
 made it clear that there is no need to do this in your reply, thanks.

 So, time to update the binary protocol? where's dormando ;)

 Cheers,
 Toru


 On Mon, Jan 26, 2009 at 3:52 PM, Dustin dsalli...@gmail.com wrote:
 
 
  On Jan 25, 10:13 pm, Toru Maesaka tmaes...@gmail.com wrote:
 
  I personally think that the behavior of not responding at all is okay
  since this is for complementing the no-reply
 
   I never particularly liked the no-reply mode for roughly the same
  reason.  I think specifically supressing any status, even the rare
  failure will lead to hard-to-debug situations.
 
   It was unavoidable in the text protocol because there was no way to
  indicate which of several pipelined no-reply commands failed.
 
   In the binary protocol, we can get both the efficiency gains of
  sending commands that are likely to be successful, and still be able
  to handle specific failure cases due to having opaques that can map
  back to the requests.
 
  Here's a thought, could we separate the set commands?
 
   Separate in what way?  setq vs. setqq?
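The opaque-based error handling Dustin describes can be sketched with a toy model (this is an illustration of the idea, not memcached's actual binary wire format; names are made up):

```python
# Toy model of binary-protocol "quiet" commands: the server stays silent on
# success and responds only on failure, and each response carries the
# request's opaque so the client can map the error back to the specific
# pipelined command that failed -- something the text-protocol 'noreply'
# mode cannot do.

def server_apply(store, requests):
    """Apply pipelined quiet 'add' requests; emit a response only on error."""
    responses = []
    for opaque, key, value in requests:
        if key in store:                     # 'add' fails if key exists
            responses.append((opaque, "EXISTS"))
        else:
            store[key] = value               # success: no response at all
    return responses

store = {"a": 1}
pipeline = [(1, "a", 9), (2, "b", 2), (3, "c", 3)]
errors = server_apply(store, pipeline)

# Only the failed command produced a response, and opaque 1 maps it back
# to the request that tried to overwrite "a".
assert errors == [(1, "EXISTS")]
assert store == {"a": 1, "b": 2, "c": 3}
```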



Forwarded mail....

2009-01-31 Thread dormando

Hey folks,

It's almost that time of year again. The MySQL conference in santa clara
is coming up. We have a couple good talks scheduled (tutorial, session,
etc) for memcached and its related subjects. Should be a good show.

If possible it would be nice to have some community representation for the
invite below. I will not be able to participate this year due to time
constraints, but could provide assistance to someone else who wants to
volunteer to head it up.

I will add, emphasize, and enforce, that this is an invitation for the
*dotorg* pavilion. This means if you're one of those half dozen companies
who're trying to bootstrap around memcached and run the community, you're
probably not the right folks to represent the community in this setting.
We need to get community feedback on what folks want to do, and answer
questions honestly. I would prefer to not do this at all if there's a
chance of upselling or hindering community feedback and open
participation.

We appreciate your contributions to memcached, but some of us still prefer
neutrality where possible.

-Dormando

-- Forwarded message --
Date: Mon, 19 Jan 2009 13:04:51 +0100
From: Lenz Grimmer l...@sun.com
To: Dormando dorma...@rydia.net, Steven Grimm sgr...@facebook.com,
Paul Lindner lind...@inuus.com

Hi Alan, Steven and Paul,

As you are probably aware of, the MySQL Conference & Expo 2009 will take place
on April 20th-23rd in Santa Clara, California:

 http://www.mysqlconf.com/

As for the last conferences, we are arranging a DotOrg-Pavilion where we
would like to give Open Source Projects related to or based on the MySQL
Server (or other MySQL Products) an opportunity to showcase their work.

I was wondering if you or somebody else from the memcached Community would be
interested in representing and demonstrating your project there.

Sun will provide the booth space (incl. electrical power and Internet access)
as well as free attendance to the conference and exhibitor hall for up to 4
people per project. In addition, we will provide one full, (shareable)
conference pass per project, that permits access to the tutorials as well as
all other sessions of the conference.

What you will need to bring/prepare:

 - Your own computers/demo equipment
 - Banners, flyers, other marketing material (e.g. Demo-CDs, Merchandise)

If you are interested and would like to learn more about this, please contact
me directly or the MySQL Community Relations team at commun...@mysql.com.

Please feel free to spread this invitation by publishing it on your blog or
your project's forums/mailing list or by forwarding it to other people inside
your community!

Thank you for supporting MySQL! Keep up the good work.

Bye,
LenZ
-- 
Lenz Grimmer - MySQL Community Relations Manager -  http://de.sun.com/
Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten,  DE
Geschaeftsfuehrer: Thomas Schroeder, Wolfang Engels, Dr. Roland Boemer
Vorsitz d. Aufsichtsrates: Martin Haering   AG Muenchen: HRB161028



Re: Memcache with UNIX socket.

2009-01-31 Thread dormando

Hey folks,

Yes, please use persistent connections. Be wary of potential bugs causing
connections to stack up (creating too many objects, creating a new
persistent connection per pageview, etc). The PHP clients make this
relatively difficult to do but it's still possible.

Also, for note, any recent benchmark I've done has shown identical
performance for sockets against 127.0.0.1 compared to UNIX domain sockets.
The benefit of unix domain sockets is more to avoid ... I dunno. Allow a
daemon to be restricted to a particular user via file ownership
permissions.
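That file-ownership point can be illustrated with a short sketch (paths and modes below are arbitrary examples, not anything memcached itself does):

```python
# A UNIX domain socket is just a filesystem entry, so access can be
# restricted with ordinary ownership/mode bits -- unlike a TCP port on
# 127.0.0.1, which any local user can connect to. Illustrative sketch only.

import os
import socket
import stat
import tempfile

path = os.path.join(tempfile.mkdtemp(), "memcached.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
os.chmod(path, 0o700)        # only the owning user may connect
srv.listen(1)

mode = stat.S_IMODE(os.stat(path).st_mode)
assert mode == 0o700
assert stat.S_ISSOCK(os.stat(path).st_mode)

srv.close()
os.unlink(path)
```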

have fun,
-Dormando

On Fri, 30 Jan 2009, Abhinav Gupta wrote:

 I am now thinking of using persistent connection for connecting to memcache
 server.
 so that we don't have to create connection again and again, i think this
 will also lower some over head of TCP/IP for local machine.

 Regards,

 --
 

 The future belongs to those who believe in the beauty of their dreams
 =
 Abhinav Gupta
 Software Engineer @99acres.com



Re: Wiki overhauled and updated

2009-01-02 Thread dormando

cute ;) Noted.

Anyone want to catalogue what distros have memcached at what version? I
intend to get on the PPA bandwagon for ubuntu, gentoo seems to be up to
date, but who else needs a kick in the pants after the next release?

-Dormando

On Mon, 29 Dec 2008, Boris Partensky wrote:

 Thanks a lot Dormando. Regarding Mac OS X install: one can just use Darwin
 ports to install 1.2.6:

 sudo port install memcached
 --->  Fetching libevent
 --->  Attempting to fetch libevent-1.4.9-stable.tar.gz from
 http://monkey.org/~provos/
 --->  Verifying checksum(s) for libevent
 --->  Extracting libevent
 --->  Configuring libevent
 --->  Building libevent
 --->  Staging libevent into destroot
 --->  Installing libevent @1.4.9_0
 --->  Activating libevent @1.4.9_0
 --->  Cleaning libevent
 --->  Fetching memcached
 --->  Attempting to fetch memcached-1.2.6.tar.gz from
 http://distfiles.macports.org/memcached
 --->  Verifying checksum(s) for memcached
 --->  Extracting memcached
 --->  Configuring memcached
 --->  Building memcached
 --->  Staging memcached into destroot
 --->  Creating launchd control script
 ###
 # A startup item has been generated that will aid in
 # starting memcached with launchd. It is disabled
 # by default. Execute the following command to start it,
 # and to cause it to launch at startup:
 #
 # sudo launchctl load -w /Library/LaunchDaemons/org.macports.memcached.plist
 ###
 --->  Installing memcached @1.2.6_0
 --->  Activating memcached @1.2.6_0
 --->  Cleaning memcached

 Cheers
 Boris


 On Mon, Dec 29, 2008 at 12:55 AM, dormando dorma...@rydia.net wrote:

 
  Yo,
 
  I went over the wiki with some steel wool and a blowtorch:
  http://code.google.com/p/memcached/
 
  ... and cleaned out the old one.
 
  Now has a fancy sidebar, fewer pages, easier to access content,
  reorganized content. Looks pretty darn good now.
 
  The FAQ has a TOC (omfg!) again, edited content, and a bunch of new QA's.
  Not quite done adding to it yet, but it's about 95%.
 
  Please send corrections/ideas/requests for access/etc to the mailing list,
  and one of us will catch them.
 
  I know I know, why are you working on the wiki when we want CODE?!?!? CODE
  MAN! Well, I don't have a good answer for that. Wiki looked unprofessional
  and I felt like fixing it now instead of later :) Whenever we do releases
  a bunch of new users come in, and I would rather they get nice clean
  content.
 
  have fun,
  -Dormando
 



 --
 --Boris



Further wiki work / bug reports.

2008-12-31 Thread dormando

Yo,

First, I'm not *quite* done updating the wiki content. I'm taking the
feedback into account (thanks!) and still have some hand-scribbled notes
of further FAQ entries to add.

However, I'd love to have more sections on the wiki I can't write myself:
- War stories
- DTrace examples
- Links / examples to implemented memcached design patterns.

Like if someone wrote that session library I wish someone would write, it
would be linked there. If not, pseudo code examples are useful. I'm adding
pseudo code to more of the FAQ examples, and help with that would be
appreciated.

... the memcached code is peppered with DTrace probes. I saw some old
examples on the mailing list history, but would any sun folks like to
draft up wiki pages showing people how to use DTrace with memcached, along
with links to proper DTrace references?

And, for anyone out there with a memcached war/success/failure/etc story
and wants to write it up and share, I think these are useful for people to
read. At the very least entertaining.

---

Also, note to community members (and soon to be noted on the wiki as
well). The memcached developers all actually care about the issue tracker
at google code: http://code.google.com/p/memcached/issues/list

... so if you have a bug report, you may report it there, then link it on
the list. Or report to the list and file a bug later, whatever. Easier for
us to track if it's cross-referenced, as well as having the most eyeballs.
The issue tracker will be useful as easy historical reference for us devs.

happy new year,
-Dormando


Re: Wiki overhauled and updated

2008-12-29 Thread dormando


 On Mon, Dec 29, 2008 at 6:09 AM, Toru Maesaka tmaes...@gmail.com wrote:
 
  Hi!
 
  This is awesome! you've put a lot of useful content up there :) ++
 
  Toru


 ditto that, it looks great... can we get
 http://www.danga.com/memcached/ to redirect there as that is where
 many new users will land when starting to look/learn about memcached.?

I switched that sometime yesterday already. The FAQ / Wiki link at the
top goes to the new wiki.

-Dormando


Re: Memcache set and get command. Are they atomic?

2008-12-29 Thread dormando

  Yeah, other search results are probably about get+set sequence, so
  seems that i have somewhat exaggerated confusion about memcache
  atomicity, sorry.
 
  Still, i think this simple question should be in memcached faq.

   Agreed -- it needs to be clear.


Is now:
http://code.google.com/p/memcached/wiki/FAQ#Is_memcached_atomic?


Re: why track free items in an array

2008-12-26 Thread dormando

Hey,

'memslap' in the libmemcached library can be used to help load test.
There're no standard set of test though.

-Dormando

On Fri, 26 Dec 2008, Mike Lambert wrote:


 Thanks for the answer, the argument about trying to cleanly separate
 slabs and items makes sense. (Though yes, I would have brought up
 exactly those asserts you mentioned, thanks for clarifying. :P)

 In fact, if you want to treat the items as raw chunks of memory (with
 no introspection), you can just assume they are larger than the size
 of a pointer, you then cast them to void**'s and use them as
 forwarding pointers. Just need to be sure that you're the only one
 messing with their memory as long as they're on the free list.

 But no, I have no idea of the pain/cost associated with the dynamic
 allocation. I have no evidence to support it might be a problem with
 anything (fragmentation, memory bloat, cost of growing it, etc). It
 was more something I stumbled on while trying to grok the codebase,
 and wanted to make sure I wasn't missing anything. Does the codebase
 have any loadtest frameworks that would make such a change easier to
 evaluate?

 Thanks,
 Mike

 On Fri, Dec 26, 2008 at 06:30, Anatoly Vorobey avoro...@gmail.com wrote:
  On Fri, Dec 26, 2008 at 11:02 AM, Mike Lambert mlamb...@gmail.com wrote:
 
  Hey all,
 
  We use memcached in a big way here, so I started digging into the
  memcached codebase to poke around and see how things work. One
  question I had, is why free items are stored in an array of pointers
  instead of using the linked list items inherent in the items
  themselves.
 
  When items are alloc'd, the next and prev pointers are cleared. When
  they are free'd, only the -slabs_clsid is cleared as an indicator
  before passing it to slabs_free. slabs_free then stores it in p-
  slots, an array of pointers to free items. Why aren't these instead
  using the -next pointers of the items to create a singly linked list.
  This would save on the need for dynamic memory allocation in the p-
  slots array.
 
  The only downside I can see is that computing slab stats (used_chunks
  and free_chunks) suddenly becomes an expensive operation. Is the stats
  computation efficiency the sole reason for this use of a p-slots
  array, or am I missing something? I understand the value of these
  useful memcache stats, and can appreciate making this tradeoff, I'm
  more curious if these stats were the *sole* reason for using an array,
  or if there's something more subtle (or obvious) that I'm missing.
 
  Hey Mike,
  I don't think anyone cared much about the efficiency of slab stats at the
  time.
  The original reason was probably that we didn't want the slabs allocator to
  require
  anything particular of the bits it allocates. Even though we ended up using
  it only for
  item structs, it was planned to at least try using it for malloc'd things
  like connection
  structs and other bits and ends.
  You'll say, but slabs.c checks slabs_clsid==0 in the item struct all the
  time in asserts!
  And the poorly known slabs_reassign modifies it, too! And you'll be
  completely right,
  but neither of these were in the first few versions, and neither is crucial
  to how
  the allocator works.
  It seems quite possible to change the allocator to reuse the pointers inside
  the item struct.
  It might be a good idea to estimate the headache due to the currently
  dynamic allocation
  of p-slots first, however. Do you know that it ends up costing a lot of
  memory in your usage?
 
  --
  Anatoly Vorobey, avoro...@gmail.com
  http://avva.livejournal.com (Russian)
  http://www.lovestwell.org (English)
 
 

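Mike's alternative, threading freed items into an intrusive singly linked list through their own next pointers instead of a separately grown p->slots array, can be sketched like this (names mirror the discussion; the model is illustrative, not the slabs.c implementation):

```python
# Intrusive free list: each freed item's own `next` field links it into the
# class's free list, so no dynamic pointer array is needed. The trade-off
# the thread mentions is visible in free_chunks(): counting free items now
# requires an O(n) walk instead of reading the array's length.

class Item:
    def __init__(self, clsid):
        self.slabs_clsid = clsid
        self.next = None

class SlabClass:
    def __init__(self):
        self.free_head = None            # head of intrusive free list

    def free(self, item):
        item.slabs_clsid = 0             # mark as free
        item.next = self.free_head
        self.free_head = item

    def alloc(self):
        item = self.free_head
        if item is not None:
            self.free_head = item.next
            item.next = None
        return item

    def free_chunks(self):               # the stats cost: walk the list
        n, cur = 0, self.free_head
        while cur is not None:
            n, cur = n + 1, cur.next
        return n

sc = SlabClass()
a, b = Item(1), Item(1)
sc.free(a)
sc.free(b)
assert sc.free_chunks() == 2
assert sc.alloc() is b                   # LIFO, like popping off p->slots
assert sc.free_chunks() == 1
```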


retiring socialtext wiki

2008-12-26 Thread dormando

Yo all,

It's time. Steven yen's done a good job at porting most of it over. I just
eyeballed the whole thing and will patch up the holes.

Two options, both I like but one is more mean:

1) just add huge headings to each page redirecting people to the new wiki
2) add the headings /and/ wipe out existing content, to prevent it from
being indexed as stale information. This also means preserving all of the
pages so inbound links to existing pages still work, they just lack
content and redirect to the new wiki.

Thoughts?
-Dormando


Re: Does memcached implement any connection scheduling policy?

2008-12-15 Thread dormando


Yo,

Current stable head adds a little bit of scheduling in... if a connection 
has queued up many requests memcached will flip to another connection and 
idle the bulk worker a bit. Otherwise since all operations take about the 
same amount of time, no other scheduling is necessary.


That change was a facebook patch, and only happens if you do a lot of 
bulk loading/fetching over tcp. Most folks don't do this.


-Dormando

On Mon, 15 Dec 2008, mcuser wrote:



Hi,

Referring to Connection Scheduling in Web Servers
http://www.cs.bu.edu/faculty/crovella/paper-archive/usits99.pdf

Traditional scheduling theory for simple, single device systems shows
that if task sizes are known,
policies that favor short tasks provide better mean response time than
policies that do not make use
of task size. In a single device system, if running jobs can be
preempted, then the optimal work conserving policy with respect to
mean response time is shortest remaining processing time First
(SRPT).

I am curious to know if memcached server adopts any connection
scheduling policy?

Thanks.
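The fairness tweak described above, flipping to another connection once one has queued up many requests, can be sketched as a bounded round-robin (the batch size and queue model are illustrative assumptions, not memcached's actual internals):

```python
# Instead of draining one connection's entire pipeline, the worker handles
# at most `batch` requests per connection before moving on, so a bulk
# loader can't starve connections issuing single requests.

from collections import deque

def serve(connections, batch=2):
    """Round-robin over connections, yielding after `batch` requests each."""
    order = []                               # which request got served when
    queue = deque(connections.items())
    while queue:
        name, reqs = queue.popleft()
        for _ in range(min(batch, len(reqs))):
            order.append((name, reqs.pop(0)))
        if reqs:                             # still has work: back of the line
            queue.append((name, reqs))
    return order

conns = {"bulk": [1, 2, 3, 4, 5, 6], "small": [1]}
served = serve(conns)

# The small connection is served after only two of the bulk requests,
# instead of waiting behind all six.
assert served.index(("small", 1)) == 2
assert len(served) == 7
```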



Re: facebook memcached on github

2008-12-13 Thread dormando
 rewritten elsewhere...

- The FB tree appears to have written very few, if any, tests around its
changes. Do your tests exist in a different tree, or do they not exist?

- Being nitpicky, the changelog wasn't even updated :)

---

There are certainly interesting things in the branch that we'd like. Maybe
the flat allocator? I don't know. Certainly bugfixes, the stats lock
changes, and the per-thread memory pools.

During the hackathons, in IRC, and elsewhere, we've been discussing
changes around the client memory buffers and the stats locks. I've been
personally nitpicky about the connection buffers. In my MySQL Proxy I'd
prototyped a shared connection buffer, so idle connections would use a
minimal amount of memory. This seemed like a good idea, but hadn't come up
on the 'super important' radar for memcached - the binary protocol seemed
more important, as it increases flexibility for users. In the grand scheme
of things they need that more than extra speed. That's dangerous for me to
say, I know, but true.

The other thing are the stats locks. It'll be fun to see how you folks did
it, since I couldn't figure out how it was done without memory barriers on
the thread-local level. Or if that just worked okay. The correct way is
to use atomic CAS around a many-writer single-reader or single-writer
single-reader pattern. Which suck a bit for 8 byte values, and aren't
perfectly fast since they are still memory barriers.
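The thread-local counter idea can be roughly modeled as follows (purely a sketch: Python's GIL hides the memory-ordering problems the paragraph above is about, which the C version would have to handle with barriers or atomic CAS):

```python
# Lock-light stats: each worker thread increments only its own counter slot
# (no shared lock on the hot path), and a "stats" reader aggregates the
# slots on demand.

import threading

NUM_THREADS, PER_THREAD = 4, 10000
slots = [0] * NUM_THREADS                # one counter per worker thread

def worker(idx):
    for _ in range(PER_THREAD):
        slots[idx] += 1                  # writer touches only its own slot

threads = [threading.Thread(target=worker, args=(i,))
           for i in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The reader just sums the per-thread slots.
assert sum(slots) == NUM_THREADS * PER_THREAD
```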

What else is merge worthy? I'm not sure. The facebook article seems to
mention a lot of linux kernel hacks were required in order to achieve that
speed. How does the FB branch benchmark without those kernel
modifications? Are those changes also public?

If you folks are truly uninterested in maintaining a fork of the
software, you'll have to decide what distinct changes actually go
upstream. We have a binary protocol, we have a storage engine interface on
the way, and many of the other changes are unmergable. Will you work with
us to get the most beneficial performance changes upstream and adopt the
main tree, or continue to use your internal tree? We have to decide what
work will be merged at all before flocking to a merge-a-thon.

have fun,
-Dormando


Re: facebook memcached

2008-12-12 Thread dormando


+1.

I'm not up for a lecture on how to post patches to a public project right 
now, but that isn't the way to go, nor what we've discussed in the past.


-Dormando


 I think the results speak for themselves, but I don't know that a
merge can actually occur.

 The tree they published is entirely unrelated to the trees the rest
of us are working on.  There's no common ancestry or even similar
directory layout.  As published, it sort of puts us in a position to
either reimplement everyone else's work, or reimplement the facebook
work.

 If anyone at facebook is listening, is it possible at all to add
this work onto the codebase where everyone else has been working?
We've got a lot of bug fixes and features we'd really like to not
throw away here:

 http://github.com/dustin/memcached/tree/rewritten-bin



Re: facebook memcached

2008-12-12 Thread dormando




On Fri, 12 Dec 2008, mike wrote:



On Fri, Dec 12, 2008 at 1:46 PM, Dustin dsalli...@gmail.com wrote:


 If anyone at facebook is listening, is it possible at all to add
this work onto the codebase where everyone else has been working?


+1

not to mention couldn't that technically be required in the
GPL/whatever license memcached is under?



memcached is BSD licensed... no goodwill is technically required. it's 
just considered anti-goodwill to code dump and stalk off.


-Dormando


Re: Generic protocol for message queues

2008-11-08 Thread dormando

It's never too late! Just ... depends on who'd do it :) The gearman
protocol's pretty terse compared to the memcached one...

So, out of my hands at least.

On Sat, 8 Nov 2008, Aaron Stone wrote:


 Is it too late to pick a middle course: using memcached binary
 protocol with commands for queues, but implemented as an independent
 project?

 On Fri, Nov 7, 2008 at 7:02 PM, dormando [EMAIL PROTECTED] wrote:
 
  It's always been a binary-ish protocol. Brian and a few others are writing
  a C implementation of the client/server.
 
  Initially I was weakly expecting to add gearman's commands to the binary
  protocol and having it exist as a storage engine for memcached, but I
  concede to brian's intent to keep it a separate project :)
 
  -Dormando
 
  On Fri, 7 Nov 2008, Aaron Stone wrote:
 
 
  I didn't realize that gearman is a binary protocol -- are you just now
  defining one?
 
  Aaron
 
 
  On Fri, Nov 7, 2008 at 5:47 PM, Brian Aker [EMAIL PROTECTED] wrote:
  
   Hi!
  
   Just something else to throw into the equation:
   http://gearmanproject.org/doku.php?id=protocol
  
   This is the protocol as outlined by Gearman. A few of us are reworking 
   it at
   the moment, but keeping backwards compatibility with the current version.
  
   Cheers,
  -Brian
  
   On Nov 8, 2008, at 12:38 PM, Aaron Stone wrote:
  
  
   Heh, nice blog post :-)
  
   I agree that the queues that are out there and using hacked up
   memcache protocols are doing us a great disservice -- I think if we
   build a smart way to do a message queue into a binary protocol
   extension, we can drive minds in the right direction, and provide
   ourselves with something useful in the process.
  
   Also, it would help us to slow down the rate of splinter projects.
   Pluggable storage engines and message queues are I think the only
   areas where the memcached-alikes play.
  
   Aaron
  
  
   On Thu, Nov 6, 2008 at 1:21 PM, Dustin [EMAIL PROTECTED] wrote:
  
  
   On Nov 6, 12:48 pm, Aaron Stone [EMAIL PROTECTED] wrote:
  
   Have a mailing list link? It'd be good to continue with where you left
   off / review what you were thinking at the time.
  
   I wrote about it on my embarrassingly tongue-in-cheek titled blog:
  
  
   http://www.rockstarprogrammer.org/post/2008/oct/04/what-matters-asynchronous-job-queue/
  
   --
   ___
   Brian Krow Aker, brian at tangent.org
   Seattle, Washington
   http://krow.net/ -- Me
    http://tangent.org/ -- Software
   ___
   You can't grep a dead tree.
  
  
  
  
 
 



Re: Generic protocol for message queues

2008-11-07 Thread dormando

It's always been a binary-ish protocol. Brian and a few others are writing
a C implementation of the client/server.

Initially I was weakly expecting to add gearman's commands to the binary
protocol and having it exist as a storage engine for memcached, but I
concede to brian's intent to keep it a separate project :)

-Dormando

On Fri, 7 Nov 2008, Aaron Stone wrote:


 I didn't realize that gearman is a binary protocol -- are you just now
 defining one?

 Aaron


 On Fri, Nov 7, 2008 at 5:47 PM, Brian Aker [EMAIL PROTECTED] wrote:
 
  Hi!
 
  Just something else to throw into the equation:
  http://gearmanproject.org/doku.php?id=protocol
 
  This is the protocol as outlined by Gearman. A few of us are reworking it at
  the moment, but keeping backwards compatibility with the current version.
 
  Cheers,
 -Brian
 
  On Nov 8, 2008, at 12:38 PM, Aaron Stone wrote:
 
 
  Heh, nice blog post :-)
 
  I agree that the queues that are out there and using hacked up
  memcache protocols are doing us a great disservice -- I think if we
  build a smart way to do a message queue into a binary protocol
  extension, we can drive minds in the right direction, and provide
  ourselves with something useful in the process.
 
  Also, it would help us to slow down the rate of splinter projects.
  Pluggable storage engines and message queues are I think the only
  areas where the memcached-alikes play.
 
  Aaron
 
 
  On Thu, Nov 6, 2008 at 1:21 PM, Dustin [EMAIL PROTECTED] wrote:
 
 
  On Nov 6, 12:48 pm, Aaron Stone [EMAIL PROTECTED] wrote:
 
  Have a mailing list link? It'd be good to continue with where you left
  off / review what you were thinking at the time.
 
  I wrote about it on my embarrassingly tongue-in-cheek titled blog:
 
 
  http://www.rockstarprogrammer.org/post/2008/oct/04/what-matters-asynchronous-job-queue/
 
  --
  ___
  Brian Krow Aker, brian at tangent.org
  Seattle, Washington
  http://krow.net/ -- Me
   http://tangent.org/ -- Software
  ___
  You can't grep a dead tree.
 
 
 
 



Re: Release QA process

2008-11-06 Thread dormando

Hey,

The QA process is a little iffy, but I do try awful hard :)

The process, in short:

- Mailing list is layer 1 of QA process. Patches for release are first
vetted here.
- I am the final layer of QA process for the stable tree... Patches are
tested (almost always...), reviewed, then included. All patches are
individually build-tested locally.
- In a happier world we'd have buildbot run at this point, but I kinda
broke that and need to fix it, sorry.
- When enough patches are in, we ready for a release candidate (-rc). I
usually fire up VM's for 4-5 different OS's and run a build test, review
the patches queued again, then send out a release candidate for public
testing.
- We wait a few days to a week between -rc's until no new bug reports come
in.
- We tag, stamp, and release the next stable.

I don't usually, but for 1.2.7 and beyond, will be dogfooding the -rc in
Six Apart's memcached cluster before release... Swap in one server and see
how it does... After giving it to the developers/etc., I'd encourage others
to do so as well.

I'm excessively paranoid about the quality of memcached's releases, and
can say with certainty that we've been removing more bugs than we've been
adding for the last three. If I can get my head out of my ass we'll even
be able to move faster and release more often :)

have fun,
-Dormando

On Tue, 4 Nov 2008, Victor wrote:


 Hi,
 I'm wondering if anyone can tell me which QA activities are
 involved every time we release a new memcached version. I had planned
 to attend the Hackathon and talk to people about this there but I was
 not able to go. If the answer to my question is very simple that's
 fine - I just want to know what the situation is right now ... and I'm
 not going to suggest that we change anything right now either.
 However, if anyone wants to make suggestions that's fine too.

 Thanks,
 Victor



Re: Memcache serving 24gig a node?

2008-11-03 Thread dormando

Use threaded mode... don't use more threads than you have CPU's.

... otherwise it should *work* fine. There're benefits to splitting the 
instances up if you were logically splitting the types of items stored 
into different sub-pools (optimized by size, data access, etc). If you'd 
be addressing it as one large pool anyway, use larger instances in order 
to make multiget more efficient.
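[Editor's sketch] The multiget point can be illustrated with a toy key-distribution model. The modulo hashing and server lists below are assumptions for illustration, not the behaviour of any particular client:

```python
import hashlib

def server_for(key, servers):
    # Modulo hashing over the server list -- a stand-in for whatever
    # distribution scheme your client actually uses (an assumption here).
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[h % len(servers)]

def round_trips(keys, servers):
    # A multiget fans out to every server owning at least one key, so the
    # count of distinct servers touched approximates network round trips.
    return len({server_for(k, servers) for k in keys})

keys = ["user:%d" % i for i in range(100)]
few_big = ["10.0.0.%d:11211" % i for i in range(1, 7)]      # 6 large nodes
many_small = ["10.0.0.%d:11211" % i for i in range(1, 25)]  # 24 small nodes
print(round_trips(keys, few_big), round_trips(keys, many_small))
```

With the same batch of keys, the pool of six large instances can be touched at most six times per multiget, while the larger pool fans out to many more servers.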


-Dormando

On Mon, 3 Nov 2008, Andy Hawkins wrote:



Currently they run across 6 machines already.

My real question is should I run multiple memcached's with smaller
memory caches on seperate ports instead of running one memcached with
a 24gig pool?

I will be doing a scheduled maintenance to upgrade the nodes and
clients so we won't see down time do to this.

~@

On Oct 31, 7:39 am, David Stanek [EMAIL PROTECTED] wrote:

Spreading across more boxes also makes you more fault tolerant. If one
or two go down your database (or other expensive resource) would still
be OK.

On Fri, Oct 31, 2008 at 8:47 AM, Stephen Johnston



[EMAIL PROTECTED] wrote:

I think the major point of consideration is that if memcached had a must
have upgrade tomorrow. What would the impact of taking down one of those
24g instances to upgrade be? If that makes you cringe, then you should
probably reduce the size of each instance even if you are running just them
on the same machine.



-Stephen
On Thu, Oct 30, 2008 at 11:40 PM, Andy Hawkins [EMAIL PROTECTED] wrote:



I've got around 200 gigs of ram I'm running 6 nodes all set around
24gigs each.



Is this appropriate or should I cluster them out?



~@


--
David
http://www.traceback.org


Re: Session state in memcached without database backup

2008-11-02 Thread dormando

I wrote a rant on this a while ago:
http://dormando.livejournal.com/495593.html

If you really must do it, keep a super careful eye on your eviction
rate... but I believe whatever session handler you're using should use the
design pattern I describe in this post, if it's not already.

-Dormando

On Sat, 1 Nov 2008, TheJonathan wrote:


 I'm using memcached (1.2.6) combined with a .NET session provider
 (http://www.codeplex.com/memcachedproviders/Release/
 ProjectReleases.aspx?ReleaseId=10468) to store sessions on my site.  I
 currently have the database backup feature turned off, so the sessions
 exist only in memcached.

 Sessions are used on my site to hold login credentials across load-
 balanced servers.  In times of peak traffic, there's obviously a lot
 of sessions being created constantly (~1% for logged-in users, 99%
 anonymous) so I'm trying to cut down on database traffic by getting
 those completely out of the database.  From the docs, it looks like
 when memcached fills up, it starts dumping key-values based on usage,
 with high-use items having a higher chance of staying in memory
 longer.  Given that that's the case, can I be reasonably sure that the
 1% of my sessions/cache that are being used by active users (POSTing
 back often) will last in memcached if it reaches capacity?

 More generally, is it safe for me to keep those sessions in
 memcached only?  I don't want my users to start getting logged out
 constantly because the site is getting hammered.  Does anyone else do
 this?  How has it worked out?

 Thanks for everyone's help so far in this group!

 --Jonathan



Re: Memcache 1.3.0 ETA?

2008-10-25 Thread dormando

1.3.0 binary preview happened a while ago ;)

1.3.1 is real soon now. It's been reviewed.

-Dormando

On Wed, 1 Oct 2008, Nick Le Mouton wrote:


 I notice that 1.2.7 is being prepared for release, but is there a time
 frame for a 1.3.0 release with the binary protocol?



Re: Memcached Hackathon Report(s)

2008-10-25 Thread dormando

   I would actually like to do a rewrite of this branch to fix the
 authors at some point.  That will require coordination, but I think
 it'd be valuable to make sure everyone is credited for the work that
 they put into the project.

Well... I'm trying to think of the best way to do the merge right now.

It's time to pull the binary tree into the canonical repo and start
pushing changes/tags/releases there, but do we fix up the tree first?

Dustin, if you have a preferred list of author contacts and wouldn't mind
me poking you for some hints, could you please send it over? I'll give a
shot at rewriting the tree.

Another option is ignoring it for now... I make a 'binary' branch based
off of everyone's latest changes, push that, tag it as 1.3.1 and ship.
Then rewrite the tree for 1.3.2? :/ Not sure how I feel about that, since
it could make it harder for people to hop on and help fix bugs in the
tree.

Thoughts?
-Dormando


Re: save/restore memcache db?

2008-09-11 Thread dormando
Almost everybody who initially asks for this feature later figures out
that restarting a memcached with stale data doesn't work for their
application.


So at least with the dozens of people I've talked with about this subject, 
the demand drops quickly. No big push, no follow up. I guess one or two 
people have implemented it, but not gotten the code back to us. 
Personally, I still don't see a use for it so I won't be writing that code 
myself.


On Wed, 10 Sep 2008, PlumbersStock.com wrote:



It sounds as if this feature has been added by others already and
hasn't found its way into the main tree. Maybe it just wasn't coded
well enough to make it in. It seems it should be a really simple
feature to add, that shouldn't interfere with the running cache in any
way, and those whose use of the cache would make using save/restore a
bad thing could just choose not to use it. I can understand not adding
features that would have a negative impact on the project but stuff
that is essentially painless I can't understand leaving out. Project
management is never fun. At least with open source projects if I have
to have the feature I don't have to pay $300+ an hour and wait months
to get it done.

On Sep 10, 8:23 pm, Stephen Johnston
[EMAIL PROTECTED] wrote:

One would hope that any important contribution, written by a competent
developer, would find its way into the main trunk of code instead of
forking. This is the catch-22 of open source. It really becomes a project
management exercise.

On Wed, Sep 10, 2008 at 10:10 PM, PlumbersStock.com 

[EMAIL PROTECTED] wrote:


The problem with any software though is that if you can't convince the
owner of the software to add your feature into the main tree then
you're forever playing catchup.


Re: Test code coverage

2008-09-09 Thread dormando


The test coverage is exceptionally low.

Is this only accounting for the coverage of the last executed test?

On Tue, 9 Sep 2008, Victor Kirkebo wrote:


Hi,
I've attached a new version of this patch that excludes printing of coverage 
on non-source-tree files.

The printout looks like this:

with gcov:
File `assoc.c' Lines executed:1.66% of 181
File `items.c' Lines executed:2.82% of 248
File `memcached.c' Lines executed:9.40% of 1554
File `slabs.c' Lines executed:14.84% of 128
File `stats.c' Lines executed:2.60% of 77

with tcov:
assoc.c : 1.02 Percent of the file executed
items.c : 2.39 Percent of the file executed
memcached.c : 8.15 Percent of the file executed
slabs.c : 15.65 Percent of the file executed
stats.c : 1.72 Percent of the file executed

-Victor

dormando wrote:

Hey,

Think this could be modified to not spit coverage on files not in the
source tree?

File '/usr/include/gentoo-multilib/amd64/sys/stat.h'
Lines executed:0.00% of 1
/usr/include/gentoo-multilib/amd64/sys/stat.h:creating 'stat.h.gcov'

File '/usr/include/gentoo-multilib/amd64/stdlib.h'
Lines executed:27.27% of 11
/usr/include/gentoo-multilib/amd64/stdlib.h:creating 'stdlib.h.gcov'

File 'slabs.c'
Lines executed:16.39% of 122
slabs.c:creating 'slabs.c.gcov'

File '/usr/include/gentoo-multilib/amd64/stdlib.h'
Lines executed:0.00% of 3
/usr/include/gentoo-multilib/amd64/stdlib.h:creating 'stdlib.h.gcov'

Anyone else have comments?
-Dormando

On Thu, 4 Sep 2008, Victor Kirkebo wrote:



This patch adds test coverage to the 'make test' target.
gcov is used for gcc and tcov is used for Sun studio compiler.

Victor





--
Victor Kirkebo
Database Technology Group
TRO01, x43408/(+47)73842108




Re: what is a connection refused(111) error?

2008-09-05 Thread dormando

Hey,

When you're getting started with memcached, it might be a good idea to 
start it without the -d option the first time. We daemonize really early 
(maybe this is fixable...) so a startup error isn't always printed when 
you use -d.


By removing the daemonize option it'll run in the foreground. So you can 
see why it doesn't start, but you'll have to remember to restart it with 
-d once you're sure it works.


-Dormando

On Fri, 5 Sep 2008, pedalpete wrote:



Thanks Steve,

I am kinda new to admining my own server, so no, I hadn't tried
telnet.
And you are correct, I can't telnet to 11211.
But at the same time, I can't be sure that memcached is running
either. I'm not sure how to test for that.
I have started memcached with
[code]
memcached -d -m 1024 -l 10.0.0.40 -p 11211 -u nobody
[/code]

and I get returned to the command line. It doesn't say if it is
actually running or not, but I assumed without errors that it was
running.

when I look through ps aux | less, I don't see an entry for Memcache,
so I'm guessing that maybe memcache isn't running? or would it not
show up in running processes?

Sorry I'm at a bit of a loss here, I'm surprised memcached doesn't
return an error if it can't start or something.



On Sep 5, 1:01 pm, Steve Yen [EMAIL PROTECTED] wrote:

Could be any number of things...

111 is just the ECONNREFUSED error code value from underlying socket
connect() syscall.

Apologies if you've tried or thought of these top-of-head ideas already...
a - is memcached really running?  on port 11211?
b - can you telnet to it?  can you telnet as the same user as your
websvr/php, on the same box?
c - if a is yes and b is no, you might have weird firewall rules?
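[Editor's sketch] Check (b) can be scripted without telnet — a minimal Python probe; the host/port defaults are assumptions and should match whatever you started memcached with:

```python
import socket

def check_memcached(host="127.0.0.1", port=11211, timeout=2.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except ConnectionRefusedError:
        # errno 111 (ECONNREFUSED): nothing is listening on that port,
        # or a firewall is actively rejecting the connection.
        return False
    except OSError:
        # Timeouts, unreachable hosts, etc. -- also "not connectable".
        return False

print(check_memcached())
```

Running this as the same user as the web server, on the same box, separates "memcached isn't running" from "my PHP config points at the wrong address".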

On Fri, Sep 5, 2008 at 11:47 AM, pedalpete [EMAIL PROTECTED] wrote:


I've tried reinstalling memcache a few times, and I can connect to it
from root, but when attempting to connect via my php page, I get a
[code]
Memcache::connect() [memcache.connect]: Server 127.0.0.1 (tcp 11211)
failed with: Connection refused (111)
[/code]



I have tried using 'localhost' as the server which works on my dev
machine, but not on testing or prod.



I can't seem to find what this error means.
Anybody know?





Re: memcache stats

2008-09-05 Thread dormando

The -u option is for specifying the user to drop privileges to.

If you see mentions of managed instances - that's unfinished code, which 
we should silence. You can safely ignore that.


-Dormando

On Fri, 5 Sep 2008, TK wrote:



thanks, steve.

Was looking at the help section of the memcached server and there is a
-u option
for managed instance. Can somebody explain when and where will someone
use
this option.

Thanks in advance.

TK

On Sep 5, 11:32 am, Steve Yen [EMAIL PROTECTED] wrote:

On Fri, Sep 5, 2008 at 11:29 AM, Steve Yen [EMAIL PROTECTED] wrote:

You're running an older memcached version.  They've been there since
at least 1.2.5 and maybe earlier.



On Fri, Sep 5, 2008 at 9:50 AM, TK [EMAIL PROTECTED] wrote:



thanks steve,



I am not able to get anything for:
1. stats items
2. stats sizes



even though I have bunch of items stored in the memcache.



Also for the stats cachedump what is the limit parameter?



stats cachedump [id] [limit]



The id param is an integer slab id, starting at 0 for the smallest slab.



The limit param is an integer number of max items in that slab to output.


Also, the limit can be just 0 to output all items in that slab.

Should probably throw stuff like this onto the new wiki.
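[Editor's sketch] A hedged example of consuming a cachedump response. The "ITEM key [size b; time s]" line format is assumed from 1.2.x-era servers and may differ in other versions; the sample keys are made up:

```python
import re

# "stats cachedump <slab_id> <limit>" returns lines like
#   ITEM <key> [<size> b; <time> s]
# terminated by END (format assumed from 1.2.x-era servers).
ITEM_RE = re.compile(r"^ITEM (\S+) \[(\d+) b; (\d+) s\]$")

def parse_cachedump(text):
    items = []
    for line in text.splitlines():
        m = ITEM_RE.match(line.strip())
        if m:
            items.append({"key": m.group(1),
                          "bytes": int(m.group(2)),
                          "time": int(m.group(3))})
    return items

sample = ("ITEM user:42 [696 b; 1220625172 s]\r\n"
          "ITEM user:43 [117 b; 1220625180 s]\r\n"
          "END\r\n")
for item in parse_cachedump(sample):
    print(item["key"], item["bytes"])
```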




best,
steve



TK



On Sep 5, 8:48 am, Steve Yen [EMAIL PROTECTED] wrote:

On Fri, Sep 5, 2008 at 7:35 AM, TK [EMAIL PROTECTED] wrote:



Hi! All



I am new to memcache. I wanted to find out what are the statistics
which we care about. I just came across the following:
#tackable-image:11211 Field       Value
                  bytes    17962932
             bytes_read    17450985
          bytes_written       90519
                cmd_get           0
                cmd_set       10047
  connection_structures           3
       curr_connections           2
             curr_items       10047
              evictions           0
               get_hits           0
             get_misses           0
         limit_maxbytes    67108864
                    pid       25215
           pointer_size          64
          rusage_system    0.627904
            rusage_user    0.172973
                threads           1
                   time  1220625172
      total_connections          30
            total_items       10047
                 uptime       57071
                version       1.2.6



And



  #   Item_Size   Max_age   1MB_pages   Count   Full?
  9      696 B    56043 s        7        1      no
 32   117.5 kB    57031 s        1        1      no
 36   286.9 kB    56991 s       10       30     yes
 37   358.6 kB    56950 s        5       10     yes
 38   448.2 kB    57052 s        3        6     yes



Are there any other statistics which we can look at? Also what is the
1MB_pages? And if someone can explain in detail the -m option? By
default the memory allocated is 64MB, is it for each slab or for the
whole memcache?



If these questions are answered some other place, please point me the
location.



thanks in advance.



TK



The stats commands include...
 stats
 stats slabs
 stats items
 // dumps out a list of objects of each size, with granularity of 32 bytes
 stats sizes
 // turn on/off details stats collections, at some performance cost
 stats detail [on|off|dump]
 stats cachedump [id] [limit]



The -m option is max amount of memory for the whole of memcached.



steve
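[Editor's sketch] To use such a dump programmatically, here is a small parser for a two-column field/value listing like the one quoted above, plus a fill-ratio calculation against the -m limit. The field names come from the stats output shown; everything else is illustrative:

```python
def _coerce(value):
    # Try int, then float, else keep the raw string (e.g. "1.2.6").
    for cast in (int, float):
        try:
            return cast(value)
        except ValueError:
            pass
    return value

def parse_stats(text):
    """Parse a two-column field/value stats listing into a dict."""
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 2:
            stats[parts[0]] = _coerce(parts[1])
    return stats

sample = """\
bytes 17962932
limit_maxbytes 67108864
curr_items 10047
version 1.2.6
"""
s = parse_stats(sample)
# Fraction of the -m limit currently holding item data; the gap is
# roughly slab overhead plus pages not yet allocated.
fill = s["bytes"] / s["limit_maxbytes"]
print(round(fill, 3))
```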

