Re: Real-world mget fan-out and cluster size?

2011-10-06 Thread dormando
 We are planning to deploy a memcached cluster of 40-odd machines and
 were wondering about mgets and the size of the cluster.

 We could either:
 - have all the machines in one cluster, which would mean that an mget
 on a set of keys could potentially span the entire cluster
 - partition our cluster into smaller sizes and assign cache lines to
 certain clusters, thereby reducing the mget fan-out for that cache line

 Does anyone have any recommendation/benchmarks regarding such a
 network latency/available bandwidth trade-off?

How much traffic are you intending to send? How many mgets in one go?

Unless you do tons of traffic it doesn't tend to matter much either way.
It's good to spread out a little since clients can send requests to each
server in parallel, but then you lose out a bit by contacting too many
machines per page view.

If you're planning to deploy, this sounds like a perfect opportunity to go
test it. Run your cluster with the traffic you expect to hit and try it
both ways with a simple bench script. Keep the one you're happiest with,
or pick the simplest if it doesn't make a difference.

-Dormando


Re: Real-world mget fan-out and cluster size?

2011-10-06 Thread dormando
 Right. It does sound like we'll have to conduct some experiments.
 Would've been nice to get some other inputs though.

 We might make 2000-3000 mgets/sec. 100-500 keys each.


It's straight up math.

if (keylength * keycount > tcp_buffer) - latency benefit from having
shorter mgets (or just splitting them up and sending them to the same
server in parallel).

Otherwise the cost is a syscall count for the # of servers you hit. It's actually
really hard to determine at a glance which will work better. Clients
implement this part differently, so it may just be up to which they
implement better.
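
To put rough numbers on that (key sizes here are my assumption, not from
the thread): at ~40 bytes per key, a 500-key mget is ~20KB of request,
which overflows a 16KB socket buffer and costs extra round trips before
the server even sees the whole command, while a 100-key mget is ~4KB and
goes out in a single write.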


1.4.9-beta1 and mc-crusher

2011-10-05 Thread dormando
Hey,

Enjoying 1.4.8? Thought I'd share some rough things that you guys may
enjoy:

https://github.com/dormando/mc-crusher
^ I've thrown my hat into the ring of benchmark utilities. This is
probably on par with some work Dustin's been doing, but I went in a
slightly different direction with features.

Now, 1.4.9-beta1:

http://memcached.googlecode.com/files/memcached-1.4.9_beta1.tar.gz

Which is the result of the 14perf tree, now up:

https://github.com/memcached/memcached/commits/14perf

This beta will be up for at least two weeks before going final. The
changes need more tuning, and some normal bugfixes/feature fixes need to
go in as well. I'm giving it to you folks early so it has a good long
soak.

Major changes:

- The Big cache lock is held for much shorter periods. Partly influenced
by the patches Ripduman Sohan sent, as well as me trying 3 different
approaches and falling back to this one.

- A per-item hash table of mutex locks is used to widen the amount of
locks available. There are many instances where we don't want two threads
to progress on the same item in parallel, but many fewer places where it's
paramount for the hash table and LRU to be accessed by a single thread.

- cache_lock uses a pseudo spinlock. In my bench testing, preventing
threads from going to sleep when hitting the short cache_lock helped with
thread scalability quite a bit.

- item_alloc no longer does a depth search for items to expire or evict.
I gave it a lot of thought and am dubious it ever helped. If you don't
have any expired items at the tail, it will always iterate 50 items, which
is slow. This was one of the larger performance improvements from the
changes I made.

- Hash calculations are now mostly done outside of the big lock. This was
a change in 1.6 already.
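
As a rough sketch of the lock striping idea above (illustrative C, not the
actual memcached code; the beta indexes with a modulo, and a power-of-two
mask as shown here is listed under future work below):

#include <pthread.h>
#include <stdint.h>

#define ITEM_LOCK_COUNT 1024  /* power of two, so a mask can replace modulo */
static pthread_mutex_t item_locks[ITEM_LOCK_COUNT];

void item_locks_init(void) {
    for (int i = 0; i < ITEM_LOCK_COUNT; i++)
        pthread_mutex_init(&item_locks[i], NULL);
}

/* Two threads contend only when their items hash into the same stripe,
 * instead of every operation serializing on one global cache lock. */
void item_lock(uint32_t hv) {
    pthread_mutex_lock(&item_locks[hv & (ITEM_LOCK_COUNT - 1)]);
}

void item_unlock(uint32_t hv) {
    pthread_mutex_unlock(&item_locks[hv & (ITEM_LOCK_COUNT - 1)]);
}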

Reasoning:

- Most was reasoned above. I looked through Ripduman's patches and decided
to go a slightly different route. I studied all of the locks carefully to
audit what changes to make. In addition, I made no change which
significantly increases the memory usage. While we can still release a
specialized engine which inflates datastructures in a tradeoff for speed,
I have a strong feeling it's not even necessary.

- I only kept patches that had a measurable benefit. I threw away a lot of
code!

Results:

- On my desktop, I was able to increase the number of set commands per
second from 300,000 to 930,000.

- With one get per request, I saw 500k to 600k per second. This was
largely limited by the localhost driver; it may be faster with real
hardware.

- With multigets, I was able to drive up to 4.5 million keys per second.
(4.5 million get_hits per second). Reality will be a bit lower than this.

- Saturate 10gbps of localhost traffic with 256-512 byte objects.

- Saturate 35gbps of localhost traffic with 4k objects.

- Saturate 45gbps of localhost traffic with 8k objects.

- Patches increase the thread scalability. Under high load, performance
dropoffs now happen around 5 or 6 threads, whereas previously as many as 4
(the default!) could cause slowdown.

Future work:

I have some ideas to play with, some might go into 1.4.9, some later. I
don't believe any further performance enhancement is really necessary, as
it's trivial to saturate 10gbps of traffic now.

- Need to hammer out more of the bench tool and make a formal blog post
with pretty pictures. That's more interesting.

- Item hash needs tuning. It's using a modulo instead of a hashmask. Needs
a way to initialize the size of the table, etc.

- I played with using the Intel hardware crc32c instruction, but that
lowered performance as it slammed the locks together too early. This needs
more work before I push the branch up, as well as verification as to the
hash distribution.

- It may be safe to split the cache_lock into cache_lock and lru_locks,
but I haven't verified the safety of this personally yet and the
performance is already too high for my box to verify the change.

Notes:

- NUMA is kind of a bitch. If you want to reproduce my results on a big
box, you'll need to bind memcached to a single numa node:

numactl --cpunodebind=0 ./memcached -m 4000 -t 4

You can also try twiddling --interleave and seeing how the performance
changes. There isn't a hell of a lot we can do here, but we can move many
connection buffers to be numa-local and get what we can out of it.
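
e.g. (same flags as above, just interleaving allocations across nodes):

numactl --interleave=all ./memcached -m 4000 -t 4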

The performance, even with memcached interleaved, isn't too bad at all,
and the patches do improve things (for me).

- I have not done any verification on latency yet. Given the low number of
connections I've been using in testing, it's not really possible for
requests to have taken longer than 0.1ms. Still, over the weeks I will
build the necessary functionality into mc-crusher and more formally test
how latency is affected by a mix of set/get commands.

have fun, and everyone who makes presentations about "memcached scales to
a limit" can bite me. If you honestly need it to run faster than this,
just send us a fucking e-mail.

If you like what I do

Re: Patch for start-memcached writing wrong PID to $pidfile

2011-10-04 Thread dormando
Looks correct to me... Apologies to everyone if it isn't :P I applied it
and it's going into 1.4.8.

thanks!

On Thu, 29 Sep 2011, Nate wrote:

 At some point it looks like start-memcached was changed to fork an
 extra time, as the comments put it, now that the tty is closed.  I'm
 not sure why, but what I noticed is that the wrong PID was being
 written to the PID file.  It was writing the PID of the intermediate
 fork, not the one of the child that was getting replaced by the exec.
 So I just moved the code that writes the PID to the child of the first
 fork and it's working for me again.  With this fix the init script can
 stop the server that it had started.  Here's a patch for that very
 simple fix.

 Thanks,
 Nathan Shafer

 --- start-memcached.orig  2011-09-29 11:19:42.0 -0700
 +++ start-memcached   2011-09-29 11:23:15.0 -0700
 @@ -112,18 +112,17 @@
  # must fork again now that tty is closed
  $pid = fork();
  if ($pid) {
 +  if(open PIDHANDLE,">$pidfile")
 +  {
 +  print PIDHANDLE $pid;
 +  close PIDHANDLE;
 +  }else{
 +
 +  print STDERR "Can't write pidfile to $pidfile.\n";
 +  }
exit(0);
  }
  exec $memcached $params;
  exit(0);

 -}else{
 -if(open PIDHANDLE,">$pidfile")
 -{
 -print PIDHANDLE $pid;
 -close PIDHANDLE;
 -}else{
 -
 -print STDERR "Can't write pidfile to $pidfile.\n";
 -}
  }



Memcached 1.4.8

2011-10-04 Thread dormando
Memcached 1.4.8 is up: http://memcached.org - Cache long and prosper! New
features, important fixes, interesting counters.

I caught one important bug in between the -rc and now, where binprot gets
weren't updating the LRU. If you rely on the binary protocol, you probably
want this update.

Also the rest of the stuff we added/fixed are totally fawesome.

-Dormando


Re: Multi-get getting stuck

2011-09-29 Thread dormando
 I am trying to use mget() to get multiple data pieces using the C++
 API. When it works the performance difference is great.
 However, I seem to hit some sort of a communication hang when I have
 about 5000 or higher key requests together.

What client is that, exactly? The C++ API isn't specific enough.

 Has anyone seen this issue earlier? Is there a size limit to how much
 combined data can be requested using multiget? (I read it's 32 MB
 somewhere but this happens for even smaller data sizes).
 The hanging is during the call to mget() (even before calling fetch)
 and it seems the server gets stuck trying to handle too many requests.
 Any idea if I might be overrunning the server buffer/ TCPIP buffer?

If you're using the binary protocol, the server starts sending back
responses as soon as you send in requests. Eventually your read buffer
fills and you won't be able to send any packets until you first read some.

iirc libmemcached has a workaround for this: it starts calling the read
handler callbacks while still sending requests.
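
A minimal sketch of that pattern (illustrative C, not libmemcached's
actual code): drain responses while the request stream is still being
written, so neither side's buffers fill up and stall the connection.

#include <poll.h>
#include <unistd.h>

ssize_t send_all_interleaved(int fd, const char *buf, size_t len,
                             void (*on_response)(const char *, ssize_t)) {
    size_t sent = 0;
    char rbuf[8192];
    while (sent < len) {
        struct pollfd pfd = { .fd = fd, .events = POLLIN | POLLOUT };
        if (poll(&pfd, 1, -1) < 0) return -1;
        if (pfd.revents & POLLIN) {          /* read first to free the pipe */
            ssize_t r = read(fd, rbuf, sizeof(rbuf));
            if (r <= 0) return -1;
            on_response(rbuf, r);
        }
        if (pfd.revents & POLLOUT) {
            ssize_t w = write(fd, buf + sent, len - sent);
            if (w < 0) return -1;
            sent += (size_t)w;
        }
    }
    return (ssize_t)sent;
}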

-Dormando


1.4.8-rc1

2011-09-28 Thread dormando
http://code.google.com/p/memcached/wiki/ReleaseNotes148rc1

Please test if you can! Since we're a few weeks behind on the schedule, I
intend to run the -rc for 24, maybe 48 hours at the most instead of the
usual week. Doesn't seem like anyone fetches the -rc after the first 24
hours anyway.

This time you get some fancypants FEATURES like COMMANDS and SWITCHES and
FIDDLYDINKS and some awesome COUNTERS for judging how well you do things.

enjoy,
-Dormando


Re: Value size distribution change and slab allocation issue.

2011-09-27 Thread dormando
 This is an issue described on the memcached documentation:
 ...Unfortunately, slabs allocated early in memcached's process life
 might over time be effectively in the wrong slabclass. Imagine, for
 example, that you store session data in memcached, and memcached has
 been up and running for months. Finally, you deploy a new version of
 your application code, which stores more interesting information in
 your sessions -- so your session sizes have grown. Suddenly, memcached
 starts thrashing with huge amounts of evictions. What might have
 happened is that since the session size grew, the slab allocator needs
 to use a different slabclass. Most of the slabs are now sitting idle
 and unused in the old slabclass. The usual solution is to just restart
 memcached, unless you've turned on ALLOW_SLABS_REASSIGN...

 We were having that same issue in many of our servers, and since
 ALLOW_SLABS_REASSIGN is no longer supported the only thing we could do
 was to restart the servers, which led to a storm of cache misses and
 other operational issues for us.
 That's why we developed an experimental command named drop_slab
 which, when run, just deletes all values in a slab class and deallocates
 that memory, returning it to the OS.

 My questions are:
 a) Have any of you run into this issue, and if so how did you handle it?
 b) Do you think this command is something you would use? If so I can
 submit a patch. I'm planning to port it to version 1.6 (currently it's
 for version 1.4)

Yes, we're aware of this. Feel free to post your patch somewhere and talk
about it.

However what we end up using for mainline is taking more time to develop
as it's difficult to do this automatically, correctly, for most users.

It's coming up pretty soon in my TODO list though; we've been catching up
on the backlog with 1.4.

-Dormando


Re: Value size distribution change and slab allocation issue.

2011-09-27 Thread dormando


On Tue, 27 Sep 2011, Gonzalo de Pedro wrote:

  It's coming up pretty soon in my TODO list though; we've been catching up
  on the backlog with 1.4.

 Are you planning to implement this for version 1.6?

I can't/won't predict what version number that change will be in.


Re: Memcached Analysis

2011-09-25 Thread dormando
  How many do you want it to run? After a point you have to start tuning
  your OS kernel to reserve less RAM per TCP connection, but it'll scale to

 Which parameter is that?

It's a lot of parameters. Google for "linux TCP tuning".
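
For example, some of the knobs involved (inspection only here; what to set
them to is entirely workload-dependent):

sysctl net.ipv4.tcp_rmem    # min/default/max receive buffer bytes per socket
sysctl net.ipv4.tcp_wmem    # min/default/max send buffer bytes per socket
sysctl net.core.somaxconn   # listen backlog cap
sysctl fs.file-max          # system-wide file descriptor limit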


Re: Hardware Donations

2011-09-24 Thread dormando
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 23/09/2011 12:24, Paul Lindner wrote:
  Contact Josh Berkus he may be able to get you access to the Postgres
  performance testing systems (not sure how high-powered they are...)
 
   I wonder if Intel/AMD or a 10g card vendor could get you access to hardware
  too..
 
 

 Could also try the folks at Oregon State University. They support a
 number of FOSS projects in that way and are generally a very cool bunch
 of folks to work with. Can do an intro if you like.

If they can help, sure I'd love an intro!

-Dormando


Re: Memcached Analysis

2011-09-24 Thread dormando
 Hi,
 I am trying to do some scalability analysis (scaling the number of clients) 
 for Memcached.
 Is there any available benchmark for such experiments ?

 Also, I am curious to know about the number of clients that a memcached 
 server can service in a typical deployment.
 By default, the number of connections is 1K. I was wondering how this will be 
 in a real world deployment scenario.

 Any insights/suggestions will be of great help.

How many do you want it to run? After a point you have to start tuning
your OS kernel to reserve less RAM per TCP connection, but it'll scale to
a great many (30,000+) pretty well. The default is 1k because it's based
off of the default file descriptor limit in linux-land, which is 1024.
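
For example (illustrative numbers; -c sets memcached's connection limit
and has to fit under the process's file descriptor limit):

ulimit -n 65536
memcached -m 4096 -c 30000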


Re: Several instances vs single instance

2011-09-24 Thread dormando
 I know that probably it would depend on my specific scenario, but in
 general, do you think it would be better to have a single instance of
 60MB or 6 instances of 10MB on the same server?
 I wonder if having multiple instances would highly improve the
 concurrency capacity and if having multiple instances would have a
 considerable negative impact on performance.
 Thanks!

Well, ignoring numbers since those are really small instances; no, you can
probably just run one. It's multithreaded and will be fast enough for your
needs. I have a hard time getting my hands on hardware which memcached
can't saturate.


Re: Hardware Donations

2011-09-23 Thread dormando
 +dormando (who seems to have been dropped from the cc list)

 On Fri, Sep 23, 2011 at 9:46 AM, Mark Wong mark...@gmail.com wrote:
 On Fri, Sep 23, 2011 at 9:30 AM, Josh Berkus j...@postgresql.org wrote:
  On 9/23/11 3:24 AM, Paul Lindner wrote:
  Contact Josh Berkus he may be able to get you access to the Postgres
  performance testing systems (not sure how high-powered they are...)
 
  Hey, Mark Wong is in charge of the performance test machines.  What did
  you need?

 Hi everyone,

 This web page is mostly correct, but this should give you an idea of
 what is currently available:

 http://wiki.postgresql.org/wiki/QA_Platform_hosted_at_Command_Prompt

Hmm... (heh, I have a T2000 as well). Unfortunately I really need more
than 8 total cores to stress the existing code. 24+ being ideal.

Thanks for looking!


Hardware Donations

2011-09-21 Thread dormando
Hey,

http://memcached.org/feedme :)

Preemptive thanks to anyone who decides to stand up and help!
-Dormando


Re: Scaling Memcache to 10G

2011-09-15 Thread dormando
 I've been trying to optimise the performance of memcached (1.6) on 10G
 Ethernet and in doing so have created a series of patches that enable
 it to scale to link speed.

 Presently, memcache is unable to provide more than 450K
 transactions-per-second (TPS) (as measured with Membase's memslap
 benchmark on a set of Solarflare SFC 9020 NICs) with the kernel TCP/IP
 stack and about 600K TPS with Solarflare's OpenOnload TCP/IP stack.
 With the patches it scales to about 850K TPS with the kernel TCP/IP
 stack and 1100K TPS with Solarflare's OpenOnload TCP/IP stack, as
 illustrated in the graph at http://www.cl.cam.ac.uk/~rss39/mm_comp.pdf

 I have tried to keep the changes as self-contained and small as
 possible and have tested them as extensively as I can, but I look
 forward to your feedback and comments on the set.

Thanks for open sourcing this! And thanks for attempting to keep the
changes small and documenting each patch. That's a big help.

It'll probably be a while before any of us can verify or adopt these
patches, but it's good to have them out there. I can give you some quick
feedback which will also help the process;

Most of your changes are in the default_engine/; a large part of the point
of 1.6 is that we can fork this engine and modify it. At a glance, I see
that you've added the 32bit hash into the item structure. I'm sad to say
that almost all users of memcached care about its memory efficiency more
than the vertical scalability, and 4 bytes per item can be horrendous to
some workloads.

That can probably be worked on, but to start with I would recommend you
actually fork the default_engine and port your changes into that.

ie - copy default_engine tree to lockscale_engine (or whatever)
- port your patches onto that
- isolate the patches which touch the main tree

... then we can decide on if we want to distribute both engines and give
users a choice, or keep one in the repo and slowly adopt the scaling
changes from one to the other (if possible).

Thanks,
-Dormando


Re: Scaling Memcache to 10G

2011-09-15 Thread dormando
 Hi,
 Thanks for your feedback.  I'll start to port the changes to a new engine as 
 you've described.  It is probable that a significant portion of the gains are 
 realized even without storing the hash value of
 the item.  If this is the case it would make it simpler and easier to 
 integrate the changes into default_engine.  Would you like me to experiment 
 with this approach?

Yes, if you could test that and show the results of either approach that'd
be great! You should still put it into its own engine for now, though.

-Dormando


Re: wiki in Portuguese

2011-09-06 Thread dormando
 Hi there,

  I would like to contribute to the memcached brazilian community. Can
 I translate all the content?

Sure? :)


Re: Several clients

2011-08-26 Thread dormando
 Hi everyone,
 I'm new to memcached. Here's my question:
 When we are configuring clients with memcached, we enter server addresses 
 running memcached in our client application.
 Our servers; S1, S2, S3, S4
 Clients: C1, C2

 Now suppose for C1, we give memcached servers as S1, S2, S3
 for C2: S2, S3, S4

 Now we give key1 to C1 to store.
 And query for same key1 with C2. 

 How would it behave?

C1 would store key1 on Sx, and C2 would look for it on Sy, which will
likely not be the same host.

All clients must have the same exact server list, in order to store keys
in the same places.


Re: Several clients

2011-08-26 Thread dormando
 Shouldn't we make a single application server (and clone of it using a common 
 config server for load balancing)? As done with sharded/distributed
 databases. 
 Just one application server. I guess it can be easily done with the
 present model, but for new people / movers from the conventional model,
 it would be more convenient.

 Also, if using many clients it would be wise to have a single entry point to 
 the backend cache. 

 Changing the server list on all clients could be a pain.

I'm not really sure what you're talking about anymore.

Why do you think there's a problem with giving each client the same server
list? What's so hard about that?

Clients issue requests *directly against servers*. The entire point of
this system is to create a large, fast, scalable, shared cache between
*all of your application servers*.

The *only* thing you have to do is to use the same server list among all
your clients. The rest is magic, and it's awesome magic. Don't resist it.

-Dormando


Re: Several clients

2011-08-26 Thread dormando
 Yep I got that point, but what I'm saying is: suppose you have many many
 clients & servers. Now if you want to scale your system, you'd add more
 servers. Then you'll need to update the server list on each client.

 Basically creating a single end-point (which can be replicated too using a 
 same config server).

You probably want to look into something like Puppet or Chef sooner than
later. If you're scaling a system, but don't understand or haven't even
heard of configuration management, you're going to be fucked in a *lot* of
ways.

Keeping configuration files in sync across many servers is *part* of
running a large system. It's not optional, and it's not that hard.
There're many ways to do it.


Re: Slab Allocator question

2011-08-25 Thread dormando
 - If given mem limit satisfies each slab class then preallocate
 everything (all slabs, chunks, etc.).
 - If given mem limit does not satisfy all slab classes then fallback
 to system malloc.

 Is this correct?

sorta. I wouldn't try to understand the proposals too hard as we may end
up with something completely different after playing with it more. The
bulk of that bug entry explains the current situation though.

-Dormando


Re: Memcached network usage

2011-08-25 Thread dormando
Wrong mailing list. Go talk to the couchbase people.

On Thu, 25 Aug 2011, Sergio Garcia wrote:

 Hi,
 We are using memcached to cache .NET serialized items, as an ASP.NET
 session provider, a database cache, and to store real time data.

 Now, we are using 4 server, with 40GB of data to memcached, in the same 
 server that we run the IIS.

 All of these server have 48GB of RAM, and 2 processors Xeon Quad HT.

 These server have 2 1Gbps network interfaces, one for local (memcached) and 
 other for internet (IIS).

 These servers are showing a strange network usage in the local interface.

 In one server we have around 60% of network usage and in the 3 others we have 
 around 25%.

 All server have around 4k operations per second.

 Coincidence or not, the server with high load is the last server added to 
 cluster.

 So the question, has memcached some kind of master server who need to 
 coordinate the other servers or are something that could explain the high
 network usage in this interface?

 We are using the couchbase version of memcached (membase version 1.7.0,
 with memcached buckets) with the Enyim client in .NET.


 We appreciate any kind of information that could help.

 Best regards,

 Sergio Garcia






Re: Slab Allocator question

2011-08-24 Thread dormando
 I am currently fiddling with the slab allocator and one thing messes
 my understanding of the concept. Think about the following scenario:

 1) start memcached with ./memcached -m 1 (with 1 Mb of memory limit)
 2) set mykey 0 0 11 -- allocates a 1 Mb slab and a chunk is returned
 for about the smallest possible chunk
 3) set mykey2 0 0 200 -- ?

 Now I cannot understand what happens in the 3rd step? There is no
 memory left for another slab error returned or something else? I have
 tried to test this on my machine but somehow cannot interpret the
 results correctly. 2nd step does allocate 1 MB of slab with 96 bytes
 of chunk size? I am right?

http://code.google.com/p/memcached/issues/detail?id=95

will be fixed at some point...


Re: --with-libevent argument to configure script

2011-08-24 Thread dormando

 I am trying to build memcached 1.4.7 with a specific libevent version.
 To do this, I ran configure as follows:

 ./configure --prefix=/home/radhesh/memcached-dist --with-libevent=/usr/lib/libevent.so.1


--with-libevent=/usr/lib (path to the lib install dir, not the full path
to the file)

not sure about the rest. what platform/os/versions/etc is this?


Re: Seeking approval to use Memcached Logo for presentation.

2011-08-24 Thread dormando

 At next week’s VMworld Conference, we are presenting a study that we have 
 just completed on virtualizing Memcached on an AMD Opteron based server
 and would like permission to use the Memcached logo on one of our slides.  
 Can someone point me in the right direction to get this approved?


Hello! Apologies, as I don't yet have clearer copies of the logo at the
moment (I should be getting them soon, actually). If you'd like you can
use the whole top banner from memcached.org for now.

This e-mail serves as permission for this instance. However I retain my
right to be perilously grumpy if your presentation is disparaging or
claims ownership over the project at all? :)

-Dormando


Re: Apache + Drupal 6 + Memcache Segfault

2011-08-24 Thread dormando
 hi all

 I have memcache 1.4.5, Apache 2.2.3 and libmemcache.so all on 64bit

 Since upgrading to php 5.2.17 we have had two outages where apache
 segfaults.

Close. You give all the versions except for the PHP memcached client. What
exact software client are you using and what exact version is it?

showing some of your initialization code may be helpful as well.


Re: Issue 218 in memcached: Mac OS/X install fails

2011-08-23 Thread dormando
 Also having the same issue on Mac OS X Lion:

 checking for library containing pthread_create... no
 configure: error: Can't enable threads without the POSIX thread
 library.

 1.4.7

I wish you guys would try the -rc's that I leave up for a week :P

 sh ./configure --with-libevent=../libevent --enable-threads

 -
 I removed the failure test from configure, and there also seems to be
 a configuration issue with SIGIGNORE:

http://code.google.com/p/memcached/issues/detail?id=218
^ have you tried the exact patch that I posted in the issue? Or what
exactly did you do to remove the failed test?

-Dormando


Re: Is `get key` the only way to check if a key exists?

2011-08-20 Thread dormando
 The main reason I ask is, I was wondering if there is any smarter way
 to check if a key exists, because, I'm just thinking that it would be
 advantageous to be able to differentiate a get_miss for a reason
 different than the key wasn't cached yet, like application logic
 error, versus everything working correctly but my key isn't cached yet
 or it expired. In my mind, it seems like it would be cleaner from
 client-side code, to check if a key exists, then issue a get only if
 the key exists, and be able to see 100% get_hits, but this may not be
 possible.

Not sure how you'd expect memcached to know the difference. If your
application logic fails, you wouldn't have stored anything in memcached,
so there's no way to tell the difference.

The only way you *could* tell the difference is if you issue a get and the
key existed but was expired for some reason, but it doesn't return that
info.

If you're worried about the application never setting the keys, you could
set a canary key at the very top of your app logic.

ie:

if ($cache = $mc->get($page_blah)) {
    # use $cache
} else {
    $canary = $mc->get($page_canary);
    if ($canary) {
        die("Something *might* have gone wrong");
    }
    $mc->set($page_canary, "chirp");
    # Do real logic, fetch from database, render page.
    $mc->set($page_blah, $page_data);
    $mc->delete($page_canary);
}

So if your page successfully works, the canary shouldn't exist, but the
cache entry will. I don't see that as being necessary though... if your
app errors, those should go to a log somewhere, which you should really be
watching anyway.

-Dormando


Re: Memcached running very slow

2011-08-18 Thread dormando
 85 seconds was because of the network latency (I was using EC2 from my
 computer; the ping time was 350 ms itself..)

 Perhaps for x items, it was taking x*350 ms to make the calls, while
 mongo was sending all the data in one go.

 So I ran the script on the server itself:
 storing in db  0.108298063278  for  1487  items
 storing in memcached  0.208426952362
 reading from db  0.0738799571991
 reading from memcached  0.145488023758

 what am I doing wrong here?

Can you attach your benchmark program?

You should be using multiget to fetch back items quicker. Try using
pylibmc, which is based off of libmemcached. That may be able to do
multisets and get you better speed. The other library is pure Python, I
think, and would be slower than a native DB driver.

-Dormando


Re: Curiosity killed the `stats cachedump`

2011-08-18 Thread dormando
 
  On a positive note,  it does seem like there is some consensus on the
  value of random-transaction-sampling here.   But do we have agreement
  that this feed should be made available for external consumption (i.e.
  the whole cluster sends to one place that is not itself a memcached
  node),  and  that UDP should be used as the transport?   I'd like to
  understand if we are on the same page when it comes to these broader
  architectural questions.
 
  I think I do agree with that.  The question is whether we do that by
  making an sFlow interface or a sample interface?

 Do you mean a hook that can be used by a plugin to receive randomly
 sampled transactions?  That would allow you to inline the
 random-sampling and eliminate most of the overhead.  An sFlow plugin
 would then just have to register for the feed;  possibly sub-sample if
 the internal 1-in-N rate was more aggressive than the requested sFlow
 sampling-rate;  marshall the samples into UDP datagrams, and send them
 to the configured destinations.  I like this solution because it means
 the performance-critical part would be baked in by the experts and fully
 tested with every new release.

 But if you've already done the hard work, and everyone is going to want
 the UDP feed, then why not offer that too?  I probably made it look hard
 with my bad coding, but all you have to do is XDR-encode it and call
 sendto().

We can ship plugins with the core codebase, so sflow would still work out
of the box, it just wouldn't be what the system was based off of.

On that note, how critical is it for sflow packets to contain timing data?
Benchmarking will show for sure, but history tells me that this should be
optional.

What would be pretty awesome is sflow-ish from libmemcached, since the
only place it *really* matters how long something took is from the
perspective of a client. Profiling the server is only going to tell me if
the box is swapping, as it's extremely uncommon to nail the locks.

 Finally, I accept that the engine-pu branch is the focus of future
 development, but... any thoughts on what to do for the 1.4.* versions?

I'm kicking out one release of 1.4.* monthly until 1.6 supersedes it. That
said I have a backlog of bugs and higher priority changes that will likely
keep me busy for a few months. Unless of course someone sponsors me to
spend more time on it :)

-Dormando


Re: Curiosity killed the `stats cachedump`

2011-08-18 Thread dormando

 Although there are already 30+ companies and open-source projects with
 sFlow collectors I fully expect most memcached users will write their
 own collection-and-analysis tools once they can get this data!   Don't
 you agree?   So it's not about any one collector,   it's about
 defining a useful, scalable measurement that everyone can feel
 comfortable using,  even in production,  even on the largest clusters.

 On a positive note,  it does seem like there is some consensus on the
 value of random-transaction-sampling here.   But do we have agreement
 that this feed should be made available for external consumption (i.e.
 the whole cluster sends to one place that is not itself a memcached
 node),  and  that UDP should be used as the transport?   I'd like to
 understand if we are on the same page when it comes to these broader
 architectural questions.

Don't forget the original thread as well. I'm trying to solve two issues:

1) Sampling useful data out of a cluster.

2) Providing something useful for application developers

The second case is an OS X user who fires up memcached locally, writes
some rails code, then wonders what's going on under the hood. 1-in-1000
sampling there is counterproductive. Headers only is often useless.

stats cachedump is most often used for the latter, and everyone needs to
remember that users never get to 1) if they can't figure out 2). Maybe I
should flip those priorities around?

-Dormando


Re: Curiosity killed the `stats cachedump`

2011-08-18 Thread dormando
 Not critical at all.  The duration_uS field can be set to -1 in the XDR
 output to indicate that it is not implemented.  I added this measurement
 when porting to the 1.6 branch, where it makes more sense.  I left it in
 when I updated the 1.4 branch because, well, the overhead seemed
 negligible and the numbers still seemed like they might be revealing
 something (though I wasn't sure what exactly).  The start-time field is
 currently used as the "we're going to sample this one" flag.  However
 that could easily be changed to just set a bit instead.  Two system
 calls per sample would be saved.  The practice of marking a transaction
 to be sampled at the beginning and then actually taking the sample at
 the end when the status is known could also be replaced by the old
 scheme from last year where we do both steps at the same time.  However
 it was actually easier to implement with the two-step approach because
 of the way that there are only two or three ways that a transaction can
 start and a whole myriad of ways that it can end.  So the first step
 (the coin-tossing) only has to happen in those two or three places and
 it's easier to know that you have counted everything once.  Breaking it
 up like this also gives you the choice of accumulating details
 incrementally (the key, the status-code etc.) in whatever is the easiest
 place.

Not totally sure I follow. The system calls would be nice to avoid, since
we can't guarantee the system will use a vsyscall for the clock...

 Yes, a client might well offer sFlow-MEMCACHE transaction samples (as
 well as enclosing sFlow-HTTP transaction samples, if applicable).
 However you would probably still want to instrument at the server end to
 ensure that you were getting the full picture.  There might be a whole
 menagerie of different C, Python, Perl and Java clients in use.

Many/most are based off of libmemcached. You can catch quite a bit with
that one.

  I'm kicking out one release of 1.4.* monthly until 1.6 supersedes it. That
  said I have a backlog of bugs and higher priority changes that will likely
  keep me busy for a few months. Unless of course someone sponsors me to
  spend more time on it :)

 In the mean time I could strip down the current patch and reduce its
 code footprint considerably - but would that help?

It could help to serve as a reference, but I'm not sure we can merge it.
Every time we merge something we make some sort of idiotic pinkie swear to
the planet to never touch that feature ever again forever. ASCII noreply
haunts me to this day.

Depending on how long 1.4 goes on though, it might end up having its own
internal sampler, and thus sflow could be slapped on top of it.

thanks,
-Dormando


Re: Curiosity killed the `stats cachedump`

2011-08-18 Thread dormando
  1) Sampling useful data out of a cluster.
 
  2) Providing something useful for application developers
 
  The second case is an OS X user who fires up memcached locally, writes
  some rails code, then wonders what's going on under the hood. 1-in-1000
  sampling there is counterproductive. Headers only is often useless.
 
  stats cachedump is most often used for the latter, and everyone needs to
  remember that users never get to 1) if they can't figure out 2). Maybe I
  should flip those priorities around?
 

 I certainly agree that you want both of these features.  However they
 are wildly different.  (1) is for monitoring in production, and (2) is
 for testing and troubleshooting.  The requirements are so divergent that
 there may not be any overlap at all in the implementation of each.  In
 fact the more separate they are the better because there is a lot of
 pressure on (1) to be ultra-stable and never change, while you are
 likely to think of new ideas for (2) all the time.

 So there's no need to hesitate if you can already do (1) today.  Let's
 face it, you have been very successful and there are rather a lot of
 users who have already gotten past (2) :)

Okay, I'm kinda tired of that argument. Just because you say something
isn't possible, doesn't mean we can't make it work anyway. If you believe
they're divergent, stop saying that they're divergent and prove it with
examples. However I'd rather spend my time writing features than
pretending to know if a theoretical patch will work or not.

We want to work towards a system that can encompass a replacement for
stats cachedump. If we can design something which generates sflow as a
subset, that'll be totally amazing! We can even use your patches as
reference for creating a core shipped plugin.

If people want to use sflow today, they can apply your patches and use it.
As is such with open source.

-Dormando


1.4.7 is up

2011-08-16 Thread dormando
No changes since -rc1 as nobody reported any bugs, and I haven't found any
myself.

Fetch and enjoy: http://memcached.org/

In the roughly "every three weeks" schedule, expect 1.4.8 around September
6th.

-Dormando


Re: Fwd: [rt.cpan.org #28095] [PATCH] Suggest: UTF8 flag support

2011-08-10 Thread dormando
I deal with it when I have time...

...but I don't usually see that bug tracker, so it's easily forgotten.

Won't have time for a week or two, but I can kill some of those bugs,
though someone else is welcome to try as well. If you do though, please
actually test the fixes and ensure they won't do anything awful like
corrupt or expire all of someone's cache.

On Wed, 10 Aug 2011, Brad Fitzpatrick wrote:

 Who's the owner of the Perl client library these days?
 Should the rt.cpan.org tickets go somewhere else (somehow)?


 -- Forwarded message --
 From: Jan 'Yenya' Kasprzak via RT bug-cache-memcac...@rt.cpan.org
 Date: Wed, Aug 10, 2011 at 6:47 AM
 Subject: [rt.cpan.org #28095] [PATCH] Suggest: UTF8 flag support
 To:


       Queue: Cache-Memcached
  Ticket URL: https://rt.cpan.org/Ticket/Display.html?id=28095 

 Hello, is the package being maintained? It has been several years and
 several releases since the report, and utf-8 is still broken in 1.29 as
 of today.

 The above patch still applies with some fuzz, and fixes the problem.






1.4.7-rc1, please test!

2011-08-10 Thread dormando
Hey folks,

I'm a few days late, but here goes 1.4.7-rc1:
http://code.google.com/p/memcached/wiki/ReleaseNotes147rc1

I'm probably going to leave this up for a full week before cutting -final.
*please* test it with all of your might.

Now that we've cleared most of the open bug reports, perhaps 1.4.8 will
have more feature updates ;)

Thanks!
-Dormando


RE: Curiosity killed the `stats cachedump`

2011-08-07 Thread dormando
  From: memcached@googlegroups.com [mailto:memcached@googlegroups.com] On
  Behalf Of Peter Portante
  Sent: Monday, 8 August 2011 10:49 AM
 
  How 'bout random sample request profiling?

 Profiling for monitoring and activity estimation purposes - isn't that the 
 point of the sFlow set of patches mentioned a few times on list?

The sFlow patches bother me as I'd prefer to be able to generate sFlow
events from a proper internal system, as opposed to the inverse. You
shouldn't have to be an sFlow consumer, and it's much more difficult to
vary the type of data you'd be ingesting (full headers, vs partial, vs
item bodies, etc).

The internal statistical sampling would be the start, then come methods of
shipping it. You could send to listeners connected over a socket, or have
a plugin listen as an internal consumer to the samplings. The internal
consumer could provide builtin statistical summaries the same as an
external daemon could. Which could make everyone happy in this case.

I like the sFlow stuff, I'm just at a loss for why it's so important that
everything be generated on top of sFlow. So far nobody's addressed my
specific arguments as listed above.

-Dormando


Re: Fwd: [Fedora Update] [comment] memcached-1.4.6-1.fc15

2011-08-03 Thread dormando
Thanks for doing this!

On Wed, 3 Aug 2011, Paul Lindner wrote:

 FYI -- please give this a test and upvote this build if it works for you.
 Thanks!


 -- Forwarded message --
 From: upda...@fedoraproject.org
 Date: Wed, Aug 3, 2011 at 3:54 AM
 Subject: [Fedora Update] [comment] memcached-1.4.6-1.fc15
 To: plindner


 The following comment has been added to the memcached-1.4.6-1.fc15 update:

 bodhi - 2011-08-03 10:54:28 (karma: 0)
 This update has been submitted for testing by plindner.

 To reply to this comment, please visit the URL at the bottom of this mail

 
     memcached-1.4.6-1.fc15
 
    Release: Fedora 15
     Status: pending
       Type: enhancement
      Karma: 0
    Request: testing
      Notes: Upgrades memcached to 1.4.6, release notes available here:
           : http://code.google.com/p/memcached/wiki/ReleaseNotes14
           : 6
  Submitter: plindner
  Submitted: 2011-08-03 10:54:25
   Comments: bodhi - 2011-08-03 10:54:28 (karma 0)
             This update has been submitted for testing by
             plindner.

  https://admin.fedoraproject.org/updates/memcached-1.4.6-1.fc15




 --
 Paul Lindner -- lind...@inuus.com -- linkedin.com/in/plindner




Curiosity killed the `stats cachedump`

2011-07-31 Thread dormando
Yo,

We've threatened to kill the `stats cachedump` command for probably five
years. I've daydreamed about randomizing the command name on every minor
release, every git push, ensuring that it stays around as a last ditch
debugging tool.

A lot of you continue to build programs which rely on stats cachedump.
This both confuses and enrages us. Removing it outright sounds like a
failure, though. Your malevolent overlords have decided that this thing
you want and occasionally use should be taken away.

So instead I'd like to start a discussion which I'll seed with some
ideas; we want to shitcan this feature, but it should be a fair trade. If
we shitcan it, we first need to make you not want it anymore.

Here are some ideas I have for making you not want this feature anymore:

- Better documentation.

95% of the time when users want to use cachedump, they want to verify that
their application is working right. There're better ways to do this, but
it's clearly too hard to figure out.

- Better toolage.

That 95% of users overlaps with users who want to know better about what's
going on inside memcached. Our usual response is "restart in screen with
-vvv" or "point to a logfile" or blah blah blah. This is unacceptable.
mk-query-digest helps, and I will hopefully be releasing a tool to do the
same for the binary protocol. This should allow you to watch or summarize
the flow of data, which is much more useful anyway.

- Streaming commands.

Instead of (or as well as) running tcpdump tools, we could add commands
(or simply use TAP? I'm not sure if it overlaps fully for this) which lets
you either telnet in and start streaming some subset of information, or
run tools which act like varnishlog. Tools that can show the command,
the return value, and also the hidden headers.

An off the cuff example:

Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
watch every=5000,request=full,response=headers

The above would stream back one out of every 5000 requests, with the full
request, and the headers of the response, but not the full binary data.
I'm not promising to implement this as-is, but I could see it helping to
solve the issue.

Astute readers will notice that this is my biased push on the TOPKEYS
feature; 1.6 already has a way to discover the most accessed keys, but I
feel strongly that its approach is too limited.

- Commands to poll the Head or Tail of the LRU

Probably the most controversial. It is much more efficient to pretend that
the head or the tail are nebulous, nefarious, malicious things. As
instances grow into the tens of millions of items, polling at the head or
the tail doesn't give you a consistent view of very much. I imagine this
would be immediately abused by people implementing queues (or perhaps
that's a good thing?)

It also weighs heavy in my mind as we reserve the right to make the LRU
more loose or more strict as we evolve. It may not exist at all at some
point.

- Commands to stream the keys of evictions, or also reclaims or expired
items

People want cachedump so they can see what's still in there. This would be
an extension (or instead of) the previous streaming commands. You would
register for events with a set of flags, and when items expire or are
evicted or whatever you decided to watch, it would copy a result to the
stream.

It is much, much more efficient to read out of the statistical counters to
get the information. But as people want to see what's in there, often
they're really wondering about what's no longer in there.
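
For reference, those counters are already visible per slab class (real
command, illustrative output):

stats items
STAT items:5:number 102034
STAT items:5:evicted 5417
STAT items:5:evicted_nonzero 211
STAT items:5:reclaimed 9938
END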

---

I'm not really sold on any of these. These are not all the ideas we should
even consider, if you have better ones. Please help distribute this ML
post around as much as possible so we can have a better chance of having
an intelligent discussion about it.

Thanks,
-Dormando


Re: Curiosity killed the `stats cachedump`

2011-07-31 Thread dormando
   I owe all of you better tap documentation (the last couple of weeks
 have really killed me).  It does some pretty great stuff in this area
 and has many practical uses.

Now would be a great time to sell us on it, then :)



Re: Memcached not fast enough?

2011-07-25 Thread dormando
 I'm using both APC and Memcached. I had to set up a two-layer
 caching system. First I'm caching into Memcached and then caching into
 APC. I guess it's pretty dirty but that's the only solution I found to
 significantly reduce the amount of GETs to Memcache.

 APC is local to each web server obviously and Memcache goes over the
 network, so I assumed it was normal that APC would be much faster. But
 somehow I feel something's wrong because Memcache sometimes is really
 slow, and eventually even hits the 1s timeout.

 I have done a small and stupid benchmark :

 ---
 Testing 1000 GETs.

 Value size : 149780 Bytes

 Memcache Testing...
 Value size : 149780 Bytes
 Time: 13.78 seconds.

 APC Testing...
 Value size : 149780 Bytes
 Time: 0.31 seconds.
 ---

Not entirely sure what you're bottlenecking on? Is your issue that APC is
faster than memcached, or that you hit some limit? This benchmark isn't that
enlightening.

That value size is pretty large. ~146k/req, 1000 reqs. 146 megabytes.
13.78 seconds is 10.6 megabytes/sec, which is about 85 megabits? If you're
going over the network, is your webserver capped at 100 mbits?

Otherwise, what *is* the value? Provide your whole bench script?

When talking to memcached you have to fully serialize/deserialize the
data. APC doesn't necessarily have to do that. For large values this can
take a long time.

The other issue here is that you're not testing what memcached actually
does. The limit isn't in how many requests you can fetch in a loop. If
you're doing that you should hit yourself. Memcached is there so your 40
webservers can all access it at the same time and still get useful speed
from it. In my experience the network has always been the bottleneck,
unless you're running it on an atom box or something.

 Would you say this is normal response time? And normal to see APC that
 much faster?

Yes. APC will be faster.

 I'm not even sure how to debug this.

Once again, what's the *actual issue* you're seeing?

 ./mc_conn_tester.pl 10.0.0.23:11211 1000 1
 Averages: (conn: 0.00050306) (set: 0.00057981) (get: 0.00046550)

That seems fine... half a millisecond-ish? Normal for an over-the-network
ping.

-Dormando


Re: Please clarify me on memcached efficiency

2011-07-25 Thread dormando
http://code.google.com/p/memcached/wiki/NewPerformance

On Mon, 25 Jul 2011, Chetan Gadgilwar wrote:

  I have one question: if multiple users access a DB, the speed
 decreases. Is it the same with memcached if a number of users access
 the same cached data? Or would it work smoothly, with no effect as the
 number of users increases?

 --
  Best wishes,
 Chetan Gadgilwar




Re: Memcached and Perl

2011-07-25 Thread dormando
 Hello,

 I have been playing with Memcached between two servers using Perl and 
 Memcached::Client which is working great. I would like to take it one step 
 further and use AnyEvent to trigger when a new key and value has been written 
 to Memcached. I am struggling to understand how to do it though as the code I 
 have so far is:

 use AnyEvent;
 my $cv = AnyEvent->condvar;
 $client->get('iplist', $cv);
 my $iplist = $cv->recv;

 while (($ip, $count) = each(%$iplist)) {
     print $ip . "\n";
 }

 my $loop = AnyEvent->condvar;
 $loop->recv;

 If somebody has this working would be very grateful for an example please.

You can't get a trigger/notification when a new key has been added to
memcached. Windmills do not work that way.

You could poll for a value, but that's not really how you're supposed to
use the thing... The pattern is just:

if (my $var = $memc->get('key')) {
   # Use the cached var
} else {
   my $var = slowdatastore_fetch('key');
   $memc->set('key', $var, 60); # cache it for a minute
}

... and then slowly improving that code as you get more familiar with
memc.


Re: Overriding the size of each slab page

2011-07-25 Thread dormando
 On memcached version 1.4.5-1ubuntu1, there are two entries for the ‘-
 I’ parameter in the memcached(1) man page.

 -I Override the size of each slab page in bytes.  In mundane
 words, it adjusts the maximum item size that memcached will accept.
 You can use the suffixes K and M to  specify  the size as well, so use
 200 or 2000K or 2M if you want a maximum size of 2 MB per object.
 It is not recommended to raise this limit above 1MB due just to
 performance reasons.  The default value is 1 MB.

I have no idea who wrote this. It's not in the original source tree.

 -I size  Override the default size of each slab page. Default is
 1mb. Default is 1m, minimum is 1k, max is 128m. Adjusting this value
 changes  the  item  size  limit.  Beware that this also increases the
 number of slabs (use -v to view), and the overall memory usage of
 memcached.

 It seems to me that the first entry is misleading.  The parameter does
 not adjust the maximum item size; rather, the parameter adjusts the
 slab page size, and the number of items stored in each slab page.
 These two entries should be combined into one entry.

No, that second part is a side effect, and doesn't affect performance. It
primarily increases the maximum item size, by way of increasing the max
page size.

 The second entry could be further clarified by saying that reducing
 the page size below the 1 megabyte default page size will result in an
 increased number of slabs.

Uhhh. Can you post the output of `memcached -vvv` with your -I
adjustments? If you reduce the max page size it most certainly reduces the
number of slab classes. It will increase the number of slab *pages*
available. Which doesn't affect anything.

 By the way, '-I 10M' does not work.  Neither does '-I 10m'.  I
 discovered that you have to specify the byte size, i.e., '-I
 10485760'.

Can you try building from source via http://memcached.org/ (you don't have
to do a make install, just ./memcached blah blah)? That most certainly
WFM, and given the above magic manpage entry I'm assuming some overzealous
package maintainer broke this. In fact there're tests in the test tree
which verify that syntax works...

 Please correct my understanding, if I am missing something.

I tried, did I help?

 Also, I do not understand the warning, It is not recommended to raise
 this limit above 1MB due just to performance reasons.  What exactly
 are the performance issues?

 If my default chunk size is 480 bytes and if I am storing items in 416
 byte chunks and 448 byte chunks, then, I can store more chunks in 10
 megabytes pages than I can in 10 kilobyte pages.  So, why wouldn't I
 opt to store my chunks in 10 megabyte pages (rather than 10 kilobyte
 pages or even 1 megabyte pages)?  The vast majority of my chunks are
 448 byte chunks.  So, it seems to me that I can use my memory more
 efficiently by opting for 10 megabyte slab pages.  What, if anything,
 is behind the performance warning?

IF ANYTHING. So ominous.

Using a non-brokeassed build of memcached, start it with the following
examples:

memcached -vvv -I 128k
memcached -vvv -I 512k
memcached -vvv -I 1M
memcached -vvv -I 5M
memcached -vvv -I 10M
memcached -vvv -I 20M

You'll see that the slab sizes get further apart. You're misunderstanding
what everything is inside the slabber.

- slab page is set (1M by default)
- slab classes are created by starting at a minimum, multiplying by a
growth factor (1.25 by default) and repeating until the slab class size
is equal to the page size (1M by default)
- when you want to store an item:
  - finds slab class (item size 416 bytes would go into class 8, so long
as the key isn't too long)
  - try to find memory in class 8. No memory? Pull in one *page* (1M)
  - divide that page into chunks of size 480 bytes (from class 8)
  - hand back one page chunk for the item
  - repeat
- memory between your actual item size, and the slab chunk size, is wasted
overhead
- the further apart slab classes are, the more memory you waste (perf
issue #1)

If you give memcached a memory limit of 60 megs, and a max item size of
10 megs, it only has enough pages to give one page to 6 slab classes.
Others will starve (tho it's actually a little more complicated than
that, but that's the idea).
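
A quick sketch of that class-size progression (illustrative C; real
memcached also rounds chunk sizes up to an alignment boundary):

#include <stdio.h>

int main(void) {
    double size = 80.0;            /* illustrative minimum chunk size */
    const double factor = 1.25;    /* the -f growth factor default */
    const double page = 1024.0 * 1024.0;
    for (int cls = 1; size <= page; cls++, size *= factor)
        printf("class %2d: chunk size %.0f bytes\n", cls, size);
    return 0;
}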

-Dormando


Re: Tuning memcached on ubuntu x86_64 (2.6.35)

2011-07-21 Thread dormando
 Is it normal to have a 16 percent virtual memory overhead in memcached
 on x86_64 linux?  memcached STAT bytes is  reporting 3219 megabytes of
 data, but virtual memory is 16 percent higher at 3834. Resident memory
 is 14 percent higher at 3763 megabytes.

 Is there a way to tune linux/memcached to get memcached to consume
 less virtual memory?


Are you using some bizarre VM system where virtual memory actually
matters? I can start up apps with terabytes of VM allocated just fine.

The overhead in RSS is normal. You lose some memory to buffers, pointers,
the hash table structure, etc.
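
As a rough breakdown (approximate numbers): the gap between the bytes stat
and RSS comes from rounding each item up to its slab class's chunk size
(up to ~20% waste at the default growth factor), the hash table (~8 bytes
per bucket on 64-bit), per-connection read/write buffers, and general
allocator overhead. A 14-16% gap is well within that.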


Memcached release 1.4.6

2011-07-15 Thread dormando
http://code.google.com/p/memcached/wiki/ReleaseNotes146

Enjoy!

We just closed an assload of bugs, and continue to work on a few more
(thanks dustin, trond!). If you have a favorite bug, feature request,
whatever, just keep nagging the issue or the list. I'll be doing my best
to make regular releases.

Unless it's "please add tags" or something, in that case please chew
rocks.

-Dormando


Reviving the 1.4 development tree

2011-07-13 Thread dormando
Hey,

http://code.google.com/p/memcached/wiki/NewDevelopmentProcess
^ I've updated this page to hopefully reflect reality now.

Since it's taking so long to stabilize the 1.6 tree, I've decided to
revive the 1.4 tree.

We will begin again to make regular releases of 1.4 and 1.6 will track on
its own until such time as it is no longer necessary to work on 1.4.

So if you have bug reports, small feature requests, etc, we will take them
for 1.4. I will do my best to cut a new -rc of 1.4 at a maximum of 3 weeks
apart from the previous release, but if interesting things sit in the tree
we should be trying to release even sooner than that.

The 1.6 tree will continue to see beta releases whenever something
interesting happens. The engine-pu branch receives random upstream
merges, but I am not able to personally follow them very closely.

-Dormando


Memcached 1.4.6-rc1

2011-07-12 Thread dormando
http://code.google.com/p/memcached/wiki/ReleaseNotes146rc1

If no new major bugs are reported in the next day or two, I'll cut this as
-final.

We're looking for someone with GCC 4.6.0+ to do a build test. If you have
a new enough GCC could you give this a shot and let us know how it looks?

I know there're a few other patches/fixes/reports/etc people have for 1.4;
I'd like to reiterate again that you should be working off of 1.6, the
posted beta or the memcached/engine-pu branch on github.

Thanks!
-Dormando


Re: Failed reading line from stream (0): over 1'000'000 pages on google and no fix?

2011-06-28 Thread dormando
 In the logs:
 Notice: Memcache::get(): Server localhost (tcp 11211) failed with:
 Failed reading line from stream (0) in
 /var/www/html/inc/chat/libraries/server/NodeJS.php on line 37
 {r:registered}

 At line 37:
 $session = json_decode($this->memcache->get('session/' . $cookie->sid));

I hate that client...

That's either a timeout, or a corrupt key?


Re: Failed reading line from stream (0): over 1'000'000 pages on google and no fix?

2011-06-28 Thread dormando
http://code.google.com/p/memcached/wiki/Timeouts -- run through this.

You can also try to get the key in question from your error, then fetch
the key manually and inspect it to see if the data is corrupt.
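
For example, from a shell (key and value here are made up):

  $ telnet memcached-host 11211
  get session/somesid
  VALUE session/somesid 0 24
  {"uid":42,"name":"test"}
  END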

On Tue, 28 Jun 2011, kfa1983 wrote:

 Yea, I think the server is timing out but WHY? xD the port is open.
 But is the problem actually in memcache or in the code? Thanks!

 On 28 June, 19:26, dormando dorma...@rydia.net wrote:
   In the logs:
   Notice: Memcache::get(): Server localhost (tcp 11211) failed with:
   Failed reading line from stream (0) in
   /var/www/html/inc/chat/libraries/server/NodeJS.php on line 37
   {r:registered}
 
   At line 37:
   $session = json_decode($this->memcache->get('session/' . $cookie->sid));
 
  I hate that client...
 
  That's either a timeout, or a corrupt key?



Re: Issue 202 in memcached: TOP_KEYS feature fixes

2011-06-06 Thread dormando

 I just noticed that I didn't answer this question from Dormando:  Is
 the only reason to keep it exactly the way it is because it's already
 done and you have customers who rely on it?

I was actually asking the couchbase folks why they were so insistent on
pushing the feature the way it was, which runs all that code in the hot
path every time.

 Nothing is cast in stone at this stage.  If there is something else
 that should be included that is important for operational monitoring,
 then please suggest(!)

I've been procrastinating like a bastard and still don't have any method
of actually verifying any of the work we've been doing.

Had a couple false starts on a bench setup, then hauled off and rewrote
the guts of another project I work on. Now I feel refreshed and more
willing to tackle this issue again :)

The problem is still, same as it was before, that we can discuss this to
death but we need to actually test it to see where it's worth putting
effort.

In my experience of trying to break memcached, it's always been a little
surprising where you end up bottlenecking performance-wise.

-Dormando


Re: How does memcache determine what non-expired items to evict?

2011-06-04 Thread dormando

 I had a look at: 
 https://code.google.com/p/memcached/wiki/NewUserInternals#How_the_LRU_Decides_What_to_Evict
 which says memcached will evict one that isn't expired if it cannot
 find an expired item. Am I correct to assume that it's evicting by LRU
 logic? Is this an ordered stack eviction, or are there any random bits
 in there?

Heh. Not sure where the fuzziness is coming from here. As it says in the
URL, in the title of the section, and twice in the text:

'If there are no free chunks, and no free pages in the appropriate slab
class, memcached will look at the end of the LRU for an item to reclaim.
It will search the last few items in the tail for one which has already
been expired, and is thus free for reuse.'

So no, not random, but it tries to walk up the tail a little to find
something expired if the very bottom isn't expired.
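
In code terms, roughly (names and the fixed search depth here are
illustrative, not the actual implementation):

    #include <stddef.h>
    #include <time.h>

    typedef struct item item;
    struct item {
        item  *prev;        /* toward the head of the LRU */
        time_t exptime;     /* 0 = never expires */
    };

    /* Walk a few items up from the LRU tail; reuse the first expired
     * one, otherwise fall back to evicting the tail itself. */
    item *find_reclaimable(item *tail, time_t now, int depth) {
        for (item *it = tail; it != NULL && depth-- > 0; it = it->prev) {
            if (it->exptime != 0 && it->exptime <= now)
                return it;  /* expired: free for reuse */
        }
        return tail;        /* nothing expired: evict the LRU tail */
    }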

-Dormando


Re: How to determine if memcache is full

2011-05-30 Thread dormando
 Hello everyone,

  every weeks my memcache server stop accepting more connections. Today
 before restart daemon, i've check stats.

 stats
 STAT pid 30026
 STAT uptime 938964
 STAT time 1306667508
 STAT version 1.2.2

Please upgrade to a newer version :) That one has grown a lot of hair,
memcached does not stop accepting connections when it gets full. It's
supposed to do more useful things instead.

-Dormando


Re: How to determine if memcache is full

2011-05-30 Thread dormando
 Hello Dormando,

  thanks for your feedback. In fact I'm using the last stable version in the
 debian repositories (http://packages.debian.org/lenny/memcached). Why will it
 no longer accept new connections? Can I determine the cause based on stats?

 I'm collecting data from memcache with cacti templates, and can't find any 
 reason for this situation.

Lenny is too old to support; the cause is because you're 10+ revisions
behind and we fixed too many crashes to list. We're on 1.4.5 (soon to be
1.4.6 or 1.6.0). You can try to fetch it from backports or build yourself.


Re: a c memcached client

2011-05-29 Thread dormando
 Hi Matt, I think it is my fault that I did not explain my motivation
 clearly. I am not the one who has the power to tear Spymemcached up.
 Spymemcached helps me a lot, I love Spymemcached. I just want to share
 something which is valuable.
 Thank you for your reply. You must be a loyal user of Spymemcached. I
 understood you completely. Since it is an open-source project, I have my
 right to suggest and improve it.
 One thing is true, I use my client to store 10 keys in memcached, and it
 runs well. For spymemcached it failed.
 I can say with a bit of experience, dealing with all of the possible
 connection issues takes some effort. For god's sake, as a will-be member
 of IT, I have to say we were born to solve the problems. I solved a
 problem, I wanted to share with people. I thought it could help someone
 out of trouble.
 Now you are saying a great number of people use Spymemcached quite
 successfully. People have the right to choose what they love, you can not
 stop them. You never can.
 Thanks Matt, you gave me an idea about my client's future. I am looking
 forward to your reply.

I don't think he meant that you should go fix spymemcached, I think he
meant you should've sent a bug report to their mailing list to see if they
can fix it before hauling off and writing your own client.

He's not a loyal user, he's one of the authors.

-Dormando


Re: Learning memcached

2011-05-21 Thread dormando

   timethese($count, {
       'query_mem' => sub {
           my $sth = $dbh->prepare($sql);
           my @res = ();
           for (@ids) {
               my $str = $memd->get($_);
               unless ($str) {
                   $sth->execute($_);
                   ($str) = $sth->fetchrow_array;
                   $memd->set($_, $str);
               }

               push @res, [$_, $str];
           }

           out(@res);
       }

The confusion here is probably more about you comparing the fastest
nearly-fully-C-based DB path vs the slowest possible memcached path.
Asking DBI for a simple query and then calling fetchrow_array (its fastest
deserializer) is very hard to beat.

If you're looking to see if an actual loaded DB would improve or not, you
need to make the DB queries as slow as you see in production. You might as
well reduce them to timed sleeps that roughly represent what you'd see in
prod... If your dataset really is this fast, small, single-threaded, and
in memory, there is no purpose to memcached.

 Benchmark: timing 1 iterations of query_dbh, query_mem...
  query_dbh:  6 wallclock secs ( 3.29 usr +  1.03 sys =  4.32 CPU) @
 2314.81/s (n=1)
  query_mem: 54 wallclock secs (30.02 usr +  8.24 sys = 38.26 CPU) @
 261.37/s (n=1)

Cache::Memcached is a slowish pure-perl implementation. It helps speed
things up with very slow DB queries, or very large hot data set that won't
fit in your DB (ie; even fastish DB queries start to slow down as they hit
disk more often than not). It's also great to relieve concurrency off of
your DB.

If you're trying to compare raw speed vs speed you probably want to start
with the Memcached::libmemcached library. That's the faster C based guy.
There's also a wrapper to make it look more like Cache::Memcached's
interface...

-Dormando


Re: Learning memcached

2011-05-21 Thread dormando
 The funny thing is, while in real production, the queries are not this
 simple, in most web apps I make, the queries are really not all that
 complicated. They do retrieve data from large data stores, but the SQL
 itself is relatively straightforward. Besides, none of the web sites I
 make are hosting thousands of hits a day. These are scientific web
 sites, so even if the query is complicated, it being performed only a
 few 10s, at the most 100s of times a day.

 After all this, it may well be that memcache may not be my immediate
 step toward better speeds, nevertheless, it is a great concept worth
 learning for me.

It's a great tool to have around when shit hits the fan. It's also great
for random other things; like caching the results of multiple queries, of
rendered page templates, of full pages renderings, etc.

So if you compare 5 DB fetches (over the network! or at least not
in-process) vs one memcached fetch, it might start looking better.

But that's all moot, if your site is fast enough as is there's no point in
working on it. If it's slow, profile why it's slow and cache or fix those
queries.


Re: Learning memcached

2011-05-21 Thread dormando

 Ha ha! I installed Cache::Memcached::Fast, which seems to be a C based
 drop in replacement for C::M, and now I get the following results
 (this includes the get_multi method to get many keys in one shot)

  query_dbh:  5 wallclock secs ( 3.31 usr +  1.03 sys =  4.34 CPU) @
 2304.15/s (n=1)
  query_mem:  6 wallclock secs ( 1.87 usr +  1.46 sys =  3.33 CPU) @
 3003.00/s (n=1)

 much, much nicer. In fact, the memcache option is now slightly faster
 than pure SQLite.

*cough*. I hope nobody jumps up my ass for this; but C::M::F is based off
of some faulty mutation optimizations. I would highly recommend
Memcached::libmemcached in its favor. If you want a drop-in interface try
this out:
http://search.cpan.org/~timb/Cache-Memcached-libmemcached-0.02011/

libmemcached is a much more robust library, if you end up stuck with
something it's probably best off to be stuck over there.


Re: Cache::Memcached select_timeout

2011-05-19 Thread dormando
 Hi all,

 I'm encountered an issue with the Cache::Memcached client where I 
 occasionally experience slowness due to timeout on the select() call.  By 
 default,
 Cache::Memcached has an undocumented select_timeout parameter, which 
 specifies the timeout duration, which I'm currently hitting sporadically.

 I saw a post a long time ago involving a similar issue, but there were no 
 subsequent updates:

 http://lists.danga.com/pipermail/memcached/2006-March/002066.html

 Any help would be greatly appreciated!

So you're hitting the default timeout occasionally, and it's throwing an
error?

http://code.google.com/p/memcached/wiki/Timeouts -- walk through this
wiki to narrow down why your memcached server is occasionally taking too
long to respond. It covers most of the bases, and provides a tool to help
narrow down trouble.

-Dormando


Re: Cache::Memcached select_timeout

2011-05-19 Thread dormando
 nice dormando, could you check if this line is ok in your link and if
 not correct it?


  ./mc_conn_tester.pl memcached-host:11211 5000 4 > log_three_seconds

 5000 4 > log_three?
 4 != three

 hehehe, maybe should be 3 > log_three or
 4 > log_four

 right?
 thanks nice guide

That's correct but confusing. I'll reword it if I come up with something
better... The log is looking for a 3 second timeout (a dropped SYN packet
is retransmitted after 3 seconds), so the tool's timeout is set one second
higher, at four.


Re: Cache::Memcached select_timeout

2011-05-19 Thread dormando
 hum, if this line is ok, why the next
 8, shouldn't it be 9? or log_seven?


  ./mc_conn_tester.pl memcached-host:11211 5000 4 > log_three_seconds
  ./mc_conn_tester.pl memcached-host:11211 5000 8 > log_eight_seconds

That should be seven, but odd numbers bother my OCD.


Re: Memcache::get failed with null

2011-05-19 Thread dormando
 We're getting this error sometimes on a memcache call in php on
 memcache-get( some key );

 PHP Notice: Memcache::get(): Server 192.168.100.53 (tcp 11211) failed
 with: (null) (0)

 And I can't find anything online about this error. Is this a time out
 or what's going on here?

I vaguely recall this being timeout related? I think the memcache client
has slightly different errors on timeout on connect vs timeout on fetch,
one of which was stupid like this one you present.

http://code.google.com/p/memcached/wiki/Timeouts -- you can try fiddling
with the timeouts and walking through this to see if anything's relevant.


Re: maximum size of memcached instance

2011-05-05 Thread dormando
 Hi
 Can I define a memcached instance of 32 GB / 64 GB or 96 GB? A typical rack
 server has 16 cores and 96 GB. Can I utilize this 96 GB with a memcached
 cache? I have large objects to cache. The value of a key is 1 MB - 3 MB.
 This object is XML data with binary data in it.

 Reason for  doing this is that this data is accessed multiple times
 during processing. This data is discarded from cache once processing
 is over for this data.

 Is this the right usage of memcached, and will memcached scale to meet
 this requirement?

 The application that accesses the cache is Java based. Which is the
 right protocol for a Java client to communicate with the memcached
 server?

You can use the -I option to increase the max object size (it defaults to
1mb), but that will reduce the overall memory efficiency. So you still
shouldn't set it too high.

If your server has 96G of ram, you still need to leave some left over for
the OS, memcached's hash table, connection management, buffers, and TCP
sockets. So I'd put that closer to 92G or 90G for memcached.
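
For example, an invocation along those lines might look like (values
illustrative; -m takes megabytes):

  memcached -m 92160 -I 4m -c 4096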

It shouldn't be hard to prototype and see if it'd help? It's not clear if
you have multiple servers accessing this shared data, or if it's just one
process accessing the same information multiple times, etc.

-Dormando


Re: Ynt: Re: Ynt: Re:Ynt: Re: Can not delete an existing value

2011-05-01 Thread dormando
On Fri, 29 Apr 2011, ilkinulas wrote:

 Hi, i am using memcached 1.4.3

http://code.google.com/p/memcached/issues/detail?id=74 -- might be
related to this if you're on solaris?

Either way, can you reproduce the issue under the latest version?




Re: What's new in memcached (part 2)

2011-04-11 Thread dormando
ps. folks please look this over and evaluate. Do you understand
everything? Does anything suck? Need more clarification? Whatever?

http://code.google.com/p/memcached/downloads/detail?name=memcached-1.6.0_beta1.tar.gz
^ easy-bake oven form beta release. Passes tests on a bunch of platforms,
but possibly not OpenBSD.

Make evaluating! Give major feedback.

-Dormando

On Mon, 11 Apr 2011, Trond Norbye wrote:

What's new in memcached
===

 (part two - new feature proposals)

 Table of Contents
 =
 1 Protocol
 1.1 Virtual buckets!
 1.2 TAP
 1.3 New commands
 1.3.1 VERBOSITY
 1.3.2 TOUCH, GAT and GATQ
 1.3.3 SET_VBUCKET, GET_VBUCKET, DEL_VBUCKET
 1.3.4 TAP_CONNECT
 1.3.5 TAP_MUTATION, TAP_DELETE, TAP_FLUSH
 1.3.6 TAP_OPAQUE
 1.3.7 TAP_VBUCKET_SET
 1.3.8 TAP_CHECKPOINT_START and TAP_CHECKPOINT_END
 2 Modularity
 2.1 Engines
 2.2 Extensions
 2.2.1 Logger
 2.2.2 Daemon
 2.2.3 ASCII commands
 3 New stats
 3.1 Stats returned by the default stats command
 3.1.1 libevent
 3.1.2 rejected_conns
 3.1.3 stats related to TAP
 3.2 topkeys
 3.3 aggregate
 3.4 settings
 3.4.1 extension
 3.4.2 topkeys


 1 Protocol
 ~~~

 Intentionally, there is no significant difference in protocol over
 1.4.x.  There is one minor change, but it should be transparent to
 most users.

 1.1 Virtual buckets!
 =

 We don't know who originally came up with the idea, but we've heard
 rumors that it might be Anatoly Vorobey or Brad Fitzpatrick.  In lieu
 of a full explanation on this, the concept is that instead of mapping
 each key to a server we map it to a virtual bucket.  These virtual
 buckets are then distributed across all of the servers.  To ease the
 introduction of this we've assigned the two reserved bytes in the
 binary protocol for specifying the vbucket id, which allowed us to
 avoid protocol extensions.
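
 A minimal sketch of that mapping in C (hash choice, bucket count, and
 names are illustrative, not the actual implementation):

     #include <stdint.h>

     #define NUM_VBUCKETS 1024

     /* vbucket_map[v] holds the index of the server owning vbucket v;
      * rebalancing the cluster means rewriting entries in this table. */
     extern int vbucket_map[NUM_VBUCKETS];

     static uint32_t hash_key(const char *key) {
         uint32_t h = 2166136261u;            /* FNV-1a, for illustration */
         while (*key) { h ^= (uint8_t)*key++; h *= 16777619u; }
         return h;
     }

     int server_for_key(const char *key) {
         uint32_t vb = hash_key(key) % NUM_VBUCKETS;
         return vbucket_map[vb];
     }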

 Note that this change should allow for complete compatibility if the
 clients and the server are not aware of vbuckets.  These should have
 been set to 0 according to the original binary protocol specification,
 which means that they will always use vbucket 0.

 The idea is that we can move these vbuckets between servers such that
 you can grow or shrink your cluster without losing data in your
 cache. The classic memcached caching engine does _not_ implement
 support for multiple vbuckets right now, but it is on the roadmap to
 create a version of the engine in memcached to support this (it is a
 question of memory efficiency, and there are currently not many
 clients that support them).

 Defining this now will allow us to start moving down the path to
 vbuckets in the default_engine and allow other engine implementors to
 consider vbuckets in their design.

 You can read more about the mechanics of it here:
 [http://dustin.github.com/2010/06/29/memcached-vbuckets.html]

 However, you _cannot_ use a mix of clients that are vbucket aware and
 clients who don't use vbuckets, but then again it doesn't make sense
 to use a vbucket aware backend if your clients don't know how to
 access them.  This is why we believe a protocol change isn't
 warranted.

 1.2 TAP
 

 In order to facilitate vbucket transfers, among other use cases where
 people want to see what's inside the server, we added to the binary
 protocol a set of commands collectively called TAP.  The intention is
 to allow clients to receive a stream of notifications whenever data
 change in the server.  It is solely up to the backing store to
 implement this, so it can make decisions about what resources are used
 to implement TAP.  This functionality is commonly needed enough though
 that the core is aware of it, leaving specific implementation to
 engines.

 1.3 New commands
 =

 There are a few new commands available.  The following sections
 provides a brief description of them.  Please check protocol_binary.h
 for the implementation details.

 1.3.1 VERBOSITY
 

 The binary protocol did not have an equivalent of the verbosity command
 in the textual protocol.  This command allows the user to change the
 verbosity level on a running server by using the binary protocol.  Why
 do we need this? There is a command line option you may use to disable
 the ascii protocol, so we need this command in order to change the
 logging level in those configurations.

 1.3.2 TOUCH, GAT and GATQ
 --

 One of the problems with the existing commands in memcached is that
 you couldn't tell the memcached server that the object is still valid
 and we just want a longer expiration.  Normally you want

Re: What's new in memcached (part 2)

2011-04-11 Thread dormando


On Mon, 11 Apr 2011, Adam Lee wrote:


 is there somewhere i can copy edit this document?

 a bit nitpicky, i know, but i found a few mistakes just while browsing it... 
 section 2.1 both "suites" should be "suits", section 3.4 "it's" should
 be "its", etc.

 awl


Does anything besides the english not sit well with you???


Re: Memcachd with httpd to cache user stickyness to a datacenter

2011-04-06 Thread dormando


On Wed, 6 Apr 2011, Mohit Anchlia wrote:

 Thanks! These points are on my list but none of them are useful. The
 reason is I think I mentioned before that most of these servers that
 are sending requests to us are hosted inside the co. but by different
 groups. So geo-replication will not work in this case since 70% of
 requests come from one region, in fact the same data center.

 Point #1 mentioned by you is the best option but I am having some
 challenges there. The problem, like I mentioned, is that User A connects
 to one of the servers in the pool and that server sends http to our
 server. Now user A can sign out and connect to another server in the
 pool and then we get the request. The only way we can solve this is by
 changing the server code, which would be best. However, we are having a
 hard time and I am trying to see if there are other solutions, like say
 a nosql distributed db that keeps track of user sessions.

I could write that redirector as an apache plugin, or perlbal plugin, or
varnish plugin. Which seems like the only place you have access to.

You really sure geodns won't work? Even though your
servers are 70% from one datacenter and 30% from another, are they all
coming from the same exact IP address? You *could* use by-ip granularity
for the load balancing, which I was sort of hinting at there.

NoSQL isn't magic problem solving, you still have that race condition
unless your app only makes one request every hour, or you replicate
synchronously.

Anyway that's the last I'll say on this, I just wanted to be clear :P It
sorta seems like you just want something prebuilt.


Re: Memcachd with httpd to cache user stickyness to a datacenter

2011-04-05 Thread dormando
 shitty reason for
not doing it, I would push harder unless you have a good way to dodge when
the hack put in place ultimately fails.

4) Assuming that the client hits a *random datacenter* *every single
time*, ALL OTHER OPTIONS, which use asynchronous replication, will have a
race condition failure. If you want to implement this, you *must* have the
source datacenter block the client's response until it has written the
session note to the remote datacenter. Perhaps you only need to do this
once per hour.

5) None of this will work if the client can make multiple requests at a
time, or if your service makes decisions based on *any* data that isn't
uniquely paired to that original client (like a feed list or twitter
timeline)

6) I can think of more variations of #1 while using backhauls, but tbh
they're all super gross.

n' stuff.
-Dormando


Re: cant install memcahed into cpanel showing error

2011-04-05 Thread dormando
What the ... urgh.

I have no idea where you're getting that RPM of memcached, but it looks
like the packager didn't remove the deps for the damemtop script I
shoved in the scripts/ directory. Yum is being helpful and trying to
install a ton of useless perl dependencies.

If you just tell it to force the install it'll work fine (--skip-broken or
whatever). I'll make sure the RPM specs we supply will not try to pull in
deps for damemtop.

On Mon, 4 Apr 2011, onel0ve wrote:

 root@srv [~]# yum install memcached
 Loaded plugins: fastestmirror
 Loading mirror speeds from cached hostfile
  * addons: mirror.denit.net
  * base: mirror.denit.net
  * epel: mirrors.nl.eu.kernel.org
  * extras: mirror.denit.net
  * rpmforge: ftp-stud.fht-esslingen.de
  * updates: mirror.denit.net
 Excluding Packages in global exclude list
 Finished
 Setting up Install Process
 Resolving Dependencies
 --> Running transaction check
 ---> Package memcached.i386 0:1.4.5-1.el5.rf set to be updated
 --> Processing Dependency: perl(AnyEvent) for package: memcached
 --> Processing Dependency: perl(AnyEvent::Socket) for package:
 memcached
 --> Processing Dependency: perl(AnyEvent::Handle) for package:
 memcached
 --> Processing Dependency: libevent-1.1a.so.1 for package: memcached
 --> Processing Dependency: perl(YAML) for package: memcached
 --> Processing Dependency: perl(Term::ReadKey) for package: memcached
 --> Running transaction check
 ---> Package compat-libevent-11a.i386 0:3.2.1-1.el5.rf set to be
 updated
 ---> Package memcached.i386 0:1.4.5-1.el5.rf set to be updated
 --> Processing Dependency: perl(AnyEvent) for package: memcached
 --> Processing Dependency: perl(AnyEvent::Socket) for package:
 memcached
 --> Processing Dependency: perl(AnyEvent::Handle) for package:
 memcached
 --> Processing Dependency: perl(YAML) for package: memcached
 --> Processing Dependency: perl(Term::ReadKey) for package: memcached
 --> Finished Dependency Resolution
 memcached-1.4.5-1.el5.rf.i386 from rpmforge has depsolving problems
   --> Missing Dependency: perl(YAML) is needed by package
 memcached-1.4.5-1.el5.rf.i386 (rpmforge)
 memcached-1.4.5-1.el5.rf.i386 from rpmforge has depsolving problems
   --> Missing Dependency: perl(Term::ReadKey) is needed by package
 memcached-1.4.5-1.el5.rf.i386 (rpmforge)
 memcached-1.4.5-1.el5.rf.i386 from rpmforge has depsolving problems
   --> Missing Dependency: perl(AnyEvent::Socket) is needed by package
 memcached-1.4.5-1.el5.rf.i386 (rpmforge)
 memcached-1.4.5-1.el5.rf.i386 from rpmforge has depsolving problems
   --> Missing Dependency: perl(AnyEvent::Handle) is needed by package
 memcached-1.4.5-1.el5.rf.i386 (rpmforge)
 memcached-1.4.5-1.el5.rf.i386 from rpmforge has depsolving problems
   --> Missing Dependency: perl(AnyEvent) is needed by package
 memcached-1.4.5-1.el5.rf.i386 (rpmforge)
 Error: Missing Dependency: perl(Term::ReadKey) is needed by package
 memcached-1.4.5-1.el5.rf.i386 (rpmforge)
 Error: Missing Dependency: perl(AnyEvent) is needed by package
 memcached-1.4.5-1.el5.rf.i386 (rpmforge)
 Error: Missing Dependency: perl(YAML) is needed by package
 memcached-1.4.5-1.el5.rf.i386 (rpmforge)
 Error: Missing Dependency: perl(AnyEvent::Handle) is needed by package
 memcached-1.4.5-1.el5.rf.i386 (rpmforge)
 Error: Missing Dependency: perl(AnyEvent::Socket) is needed by package
 memcached-1.4.5-1.el5.rf.i386 (rpmforge)
  You could try using --skip-broken to work around the problem
  You could try running: package-cleanup --problems
 package-cleanup --dupes
 rpm -Va --nofiles --nodigest


 how to fix this



Re: cant install memcahed into cpanel showing error

2011-04-05 Thread dormando
sigh.

yum install --skip-broken memcached

or whatever combo actually works.

On Wed, 6 Apr 2011, smiling dream wrote:

 can you give me the rpm for installing memcached

 On 4/6/11, dormando dorma...@rydia.net wrote:
  What the ... urgh.
 
  I have no idea where you're getting that RPM of memcached, but it looks
  like the packager didn't remove the deps for the damemtop script I
  shoved in the scripts/ directory. Yum is being helpful and trying to
  install a ton of useless perl depedencies.
 
  If you just tell it to force the install it'll work fine (--skip-broken or
  whatever). I'll make sure the RPM specs we supply will not try to pull in
  deps for damemtop.
 
  On Mon, 4 Apr 2011, onel0ve wrote:
 
  [yum output snipped; identical to the log quoted above]
 
 
  how to fix this
 
 


 --
 Ultimate Download Center

 www.smilng-dream.info



Re: small hackathon, March 17

2011-03-17 Thread dormando
what time does it start? ;)

On Wed, 16 Mar 2011, Matt Ingenthron wrote:

 Hi all,

 Since a few of the core folks are in the same place at the same time,
 we're going to hold a small hackathon... this time really hacking... to
 work on some merging of a couple of branches together to push to the
 next release.

 If you are interested in participating and already have some experience
 here, we'd love to have you.  If you're just looking to learn how to use
 memcached or work on a client, this is probably not the place since we
 aim to be pretty focused on the server innards.

 This is in the bay area.  Specifically:
 Couchbase, Inc. (may still say Membase on the signs!)
 200 W. Evelyn Ave. Suite 110
 Mountain View, CA 94041

 Please let me know if you'd like to come in so we can be ready to
 accommodate you.

 I'm sure you're all interested in what will come out of it, so I'll aim
 to send out some notes afterwards!

 Matt

 p.s.: sorry for the last minute notice



Re: It's a long story... (TL;DR?)

2011-03-09 Thread dormando
 #2 led to a lot of talk of how to handle 'network fault tolerance'.
 I've realized that *most other* people's fault-tolerance is achieved
 by being on an intranet and the clients' ability to handle going from
 N -> (N-1) physical nodes fairly gracefully. This doesn't really apply
 for us, because due to going across the wire via stunnels, our caches
 go from N -> 0 nodes when there are network hiccups, and in these
 cases the clients (spymemcached for us) don't handle that nearly as
 gracefully. [NTS: Hrm... Maybe that is a concrete suggestion I can
 make on the spymemcached project page...?]

All of your caches are remote? Or is this part referring to the spy client
not being able to properly SET on remote servers when that tunnel is down?

 Today I found http://code.google.com/p/memagent ... It seems to have
 (at least a 2-logical) broadcast for non-GET operations, which is
 cool. But the docs are shy on details, and it looks kinda barren.

 TL;DR

 I guess I'm asking:
 Has anyone used Memagent, and what do they think of it?
 If you took the time to read my long-winded explanation, do you have
 other suggestions for addressing the issues?

No offense to the memagent guy, but it seems like an odd solution.

If you're trying to synchronize the cache for roughly when the local
database gets the update, is there a reason why you're trying to do that
within memcached at all?

There're plenty of people I've talked to who do this by modifying the
database or using a UDF (UDF highly preferred :P).

So when you CRUD against the database, you send another command with the
UDF in it to DELETE or SET against the memcached cluster local to that
database. You can pair it inside a dummy INSERT or UPDATE or trigger or
sproc or whatever floats your boat. (unless I missed something and you
don't use mysql at all).

So in short, you install the UDF's and configure each mysql server to talk
to its local set of memcacheds, then you ensure the application is using
the same hash method the UDF's are. Once the replicated changes land local
to that copy, the cache is updated after the data has been safely applied.

No race or anything, I guess the one downside would be if you were
intending to have the caches updated ahead of the database being updated.

-Dormando


Re: It's a long story... (TL;DR?)

2011-03-09 Thread dormando
 On 3/9/11 8:36 PM, dormando wrote:
 
  So when you CRUD against the database, you send another command with the
  UDF in it to DELETE or SET against the memcached cluster local to that
  database. You can pair it inside a dummy INSERT or UPDATE or trigger or
  sproc or whatever floats your boat. (unless I missed something and you
  don't use mysql at all).

 I thought the scenario was that he was updating a single master database which
 then replicates to remote read-only copies, and the client that just did an
 update may immediately read back a stale cache version from a different
 location instead of what it just changed.   The real problem is the speed the
 replication propagates, so having the db push the cache update at each
 location probably can't fix it.

The problem as I understand it is that the replication lag causes the
remote ends to recache stale data. correct me if I'm wrong, but a recap:

- guy has multiple caches at different DC's
- update runs on master DC
- caches now out of date!
- issue delete from master DC to slave DC's
- delete appears before new data appears! recaches bad data
- changed to issue SET's from master DC to slave DC's
- works okay
- adds even more slave DC's, now issuing gobs of SETs from main DC
- kinda sucks. async queue used but better idea needed
- dormando posts, perhaps, better idea

-Dormando


Re: Replication ?

2011-03-04 Thread dormando
 guys, the creators of this much loved tool -- vis-a-vis memcache -- designed
 it with one goal in mind: CACHING!!

 using sessions with memcache would only make sense from a CACHING standpoint,
 i.e. cache the session values in your memcache server and if the
 caching fails for some reason or another, hit your permanent storage system:
 RDBMS or No-SQL... obviously, your caching server specs (and supporting
 environment like interconnect fabrics, Mbps download capacity, server
 durability, etc..) should reflect your user load + data importance for
 efficiency, among other factors.. i generally use Memcache (+ PHP) out of
 the box with this in mind and never found any earth-moving issues... For
 sessions particularly, i never found any issues.

 I think it's vitally important to keep in mind what Memcache is for ... a 
 CACHING TOOL.. and not a permanent storage system (also it's a Friday
 evening here in England so please excuse the language.. and any typos ;) )

 Moses.

As I pointed out in that blog post, it's also handy for achieving write
amplifications of less than 1.0 for more lossy data.

soo. it's more about matching the tool
vs your actual needs. most of the problem here has always been separating
perceived requirements from actual requirements.


Re: Replication ?

2011-03-03 Thread dormando
 Hi all,
 I know I'll get blasted for not googling enough, but I have a quick question.

 I was under the impression memcached servers replicated data, such that if i 
 have 2 servers and one machine goes down the data would all still be
 available on the other machine.  this with the understanding that some data 
 may not yet have been replicated as replication isn't instantaneous.

 Can you clarify for me?

 thx,

 -nathan

I sound like a broken record about this, but I like restating things
nobody cares about;

- memcached doesn't do replication by default
- because not replicating your cache gives you 2x cache space
- and when you have 10 memcached servers and one fails...
- ... you get some 10% miss rate.
- and may cache 2x more crap in the meantime.

if your workload really requires cache data never disappear, you're
looking more for a database (mysql, NoSQL, or otherwise).

the original point (and something I still see as a feature) is the ability
to elastically add/remove cache space in front of things which don't scale
as well or take too much time to process.

For everything else there's
mastercard^Wredis^Wmembase^Wcassandra^Wsomeotherproduct

-Dormando


Re: Memcached performance issues

2011-02-21 Thread dormando
Have you walked through those links I gave you? You haven't mentioned
exactly what you're seeing and those links walk you through narrowing it
down a lot as well as listing a lot of things to look for.

On Mon, 21 Feb 2011, Patrick Santora wrote:

 Hrmm. Still having issues. Here is the latest stats dump. I also talked with 
 my IT person and he mentioned the following setup, which does
 not look like an issue?
 NIC SETTINGS
 the servers should all be autonegotiating to 100/Full and we apply these 
 additional kernel tuning parameters
 net.core.rmem_max = 16777216
 net.core.wmem_max = 16777216
 net.ipv4.tcp_rmem = 4096 87380 16777216
 net.ipv4.tcp_wmem = 4096 65536 16777216

 LATEST STATS
 STAT pid 1788
 STAT uptime 44811
 STAT time 1298311271
 STAT version 1.4.5
 STAT pointer_size 64
 STAT rusage_user 178.875806
 STAT rusage_system 763.939863
 STAT curr_connections 811
 STAT total_connections 2012
 STAT connection_structures 813
 STAT cmd_get 876886
 STAT cmd_set 74747
 STAT cmd_flush 0
 STAT get_hits 858907
 STAT get_misses 17979
 STAT delete_misses 0
 STAT delete_hits 2
 STAT incr_misses 0
 STAT incr_hits 0
 STAT decr_misses 0
 STAT decr_hits 0
 STAT cas_misses 0
 STAT cas_hits 0
 STAT cas_badval 0
 STAT auth_cmds 0
 STAT auth_errors 0
 STAT bytes_read 17426408671
 STAT bytes_written 180479901035
 STAT limit_maxbytes 536870912
 STAT accepting_conns 1
 STAT listen_disabled_num 0
 STAT threads 4
 STAT conn_yields 0
 STAT bytes 3501518
 STAT curr_items 3230
 STAT total_items 74747
 STAT evictions 0
 STAT reclaimed 20950
 END


 On Mon, Feb 21, 2011 at 8:12 AM, Patrick Santora patwe...@gmail.com wrote:
   @Dustin
   Thanks, I will be disabling them to see if that helps.

   -Pat


 On Mon, Feb 21, 2011 at 5:59 AM, Dustin dsalli...@gmail.com wrote:

   On Feb 21, 12:31 am, Patrick Santora patwe...@gmail.com wrote:
Heh. I had a funny feeling that was going to be the answer. I was 
 curious
mostly because the Binary mode seemed to do quite a deal of good for
Facebook when it was used. I'm imagining that they cached images so 
 binary
was a good idea, but for simple structures like json, it might not 
 make much
sense. So thought I would get some opinions :).

  binary protocol doesn't make much of a difference wrt what you're
 caching, but can help you optimize some access patterns with a
 sufficiently smart client.  If you're concerned that it may be making
 things worse (it probably doesn't have a huge effect from what I'm
 hearing here), you can just try disabling it.







Re: Memcached performance issues

2011-02-21 Thread dormando
Have you been running the connection tester tool while observing the
client slowdown?

The tool is there so you can rule if your client is an issue or not, ie;
if the tool never sees a blip but all/most/some of your clients are seeing
blips, it's the client's fault. If the tool sees a blip, you can see
exactly where it's getting hung up and further narrow it down.

On Mon, 21 Feb 2011, Patrick Santora wrote:


 It's just strange. Memcached with verbose logging looks ok but the client
 machines just take forever to get data. Like in the stats I don't
 see anything out of the ordinary. The nic settings look ok too. Quite
 frustrating...

 On Feb 21, 2011 11:51 AM, Patrick Santora patwe...@gmail.com wrote:
  I will need to look at those further today. This weekend went a little
  haywire for me. :)
  On Feb 21, 2011 11:42 AM, dormando dorma...@rydia.net wrote:
  Have you walked through those links I gave you? You haven't mentioned
  exactly what you're seeing and those links walk you through narrowing it
  down a lot as well as listing a lot of things to look for.
 
  On Mon, 21 Feb 2011, Patrick Santora wrote:
 
  Hrmm. Still having issues. Here is the latest stats dump. I also talked
  with my IT person and he mentioned the following setup, which does
  not look like an issue?
  NIC SETTINGS
  the servers should all be autonegotiating to 100/Full and we apply these
  additional kernel tuning parameters
  net.core.rmem_max = 16777216
  net.core.wmem_max = 16777216
  net.ipv4.tcp_rmem = 4096 87380 16777216
  net.ipv4.tcp_wmem = 4096 65536 16777216
 
  LATEST STATS
  STAT pid 1788
  STAT uptime 44811
  STAT time 1298311271
  STAT version 1.4.5
  STAT pointer_size 64
  STAT rusage_user 178.875806
  STAT rusage_system 763.939863
  STAT curr_connections 811
  STAT total_connections 2012
  STAT connection_structures 813
  STAT cmd_get 876886
  STAT cmd_set 74747
  STAT cmd_flush 0
  STAT get_hits 858907
  STAT get_misses 17979
  STAT delete_misses 0
  STAT delete_hits 2
  STAT incr_misses 0
  STAT incr_hits 0
  STAT decr_misses 0
  STAT decr_hits 0
  STAT cas_misses 0
  STAT cas_hits 0
  STAT cas_badval 0
  STAT auth_cmds 0
  STAT auth_errors 0
  STAT bytes_read 17426408671
  STAT bytes_written 180479901035
  STAT limit_maxbytes 536870912
  STAT accepting_conns 1
  STAT listen_disabled_num 0
  STAT threads 4
  STAT conn_yields 0
  STAT bytes 3501518
  STAT curr_items 3230
  STAT total_items 74747
  STAT evictions 0
  STAT reclaimed 20950
  END
 
 
  On Mon, Feb 21, 2011 at 8:12 AM, Patrick Santora patwe...@gmail.com
  wrote:
  @Dustin
  Thanks, I will be disabling them to see if that helps.
 
  -Pat
 
 
  On Mon, Feb 21, 2011 at 5:59 AM, Dustin dsalli...@gmail.com wrote:
 
  On Feb 21, 12:31 am, Patrick Santora patwe...@gmail.com wrote:
   Heh. I had a funny feeling that was going to be the answer. I was
  curious
   mostly because the Binary mode seemed to do quite a deal of good for
   Facebook when it was used. I'm imagining that they cached images so
  binary
   was a good idea, but for simple structures like json, it might not make
  much
   sense. So thought I would get some opinions :).
 
  binary protocol doesn't make much of a difference wrt what you're
  caching, but can help you optimize some access patterns with a
  sufficiently smart client. If you're concerned that it may be making
  things worse (it probably doesn't have a huge effect from what I'm
  hearing here), you can just try disabling it.
 
 
 
 
 




Re: Memcached performance issues

2011-02-21 Thread dormando
Run two, and keep them running all the time, so you see logs from
before/after. You can also enable the debug switch and have it log
everything.

So yeah. run one on the client and one on an idle machine elsewhere.

On Mon, 21 Feb 2011, Patrick Santora wrote:


 Yeah. I will run it the next time the issue comes up. Does it matter if I run
 the tester on the same box the client's on? It should not matter but
 thought I would ask.

 Thanks!

 On Feb 21, 2011 6:25 PM, dormando dorma...@rydia.net wrote:
  Have you been running the connection tester tool while observing the
  client slowdown?
 
  The tool is there so you can rule if your client is an issue or not, ie;
  if the tool never sees a blip but all/most/some of your clients are seeing
  blips, it's the client's fault. If the tool sees a blip, you can see
  exactly where it's getting hung up and further narrow it down.
 
  On Mon, 21 Feb 2011, Patrick Santora wrote:
 
 
  It's just strange. Memcached with verbose logging looks ok but the client
  machines just take forever to get data. Like in the stats I don't
  see anything out of the ordinary. The nic settings look ok too. Quite
  frustrating...
 
  On Feb 21, 2011 11:51 AM, Patrick Santora patwe...@gmail.com wrote:
   I will need to look at those further today. This weekend went a little
   haywire for me. :)
   On Feb 21, 2011 11:42 AM, dormando dorma...@rydia.net wrote:
   Have you walked through those links I gave you? You haven't mentioned
   exactly what you're seeing and those links walk you through narrowing it
   down a lot as well as listing a lot of things to look for.
  
   On Mon, 21 Feb 2011, Patrick Santora wrote:
  
   Hrmm. Still having issues. Here is the latest stats dump. I also talked
   with my IT person and he mentioned the following setup, which does
   not look like an issue?
    [NIC settings and stats dump snipped; quoted in full above]
  
  
   On Mon, Feb 21, 2011 at 8:12 AM, Patrick Santora patwe...@gmail.com
   wrote:
   @Dustin
   Thanks, I will be disabling them to see if that helps.
  
   -Pat
  
  
   On Mon, Feb 21, 2011 at 5:59 AM, Dustin dsalli...@gmail.com wrote:
  
   On Feb 21, 12:31 am, Patrick Santora patwe...@gmail.com wrote:
Heh. I had a funny feeling that was going to be the answer. I was
   curious
mostly because the Binary mode seemed to do quite a deal of good for
Facebook when it was used. I'm imagining that they cached images so
   binary
was a good idea, but for simple structures like json, it might not 
make
   much
sense. So thought I would get some opinions :).
  
   binary protocol doesn't make much of a difference wrt what you're
   caching, but can help you optimize some access patterns with a
   sufficiently smart client. If you're concerned that it may be making
   things worse (it probably doesn't have a huge effect from what I'm
   hearing here), you can just try disabling it.
  
  
  
  
  
 
 




Re: Memcached performance issues

2011-02-20 Thread dormando
 What I am seeing is that when my memcached container hits around 10MB
 of written traffic it starts to bottleneck, causing my front end
 systems to slow WAY down. I've turned on verbose debugging and see no
 issues and there are no complaints on the front end stating that the
 connection clients are not able to hit memcached.

 Has anyone seen anything like this before?

 I would appreciate any feedback that could help out with this.

Lower the threads down to 4 or 8 or so. It's rare that it needs adjusting.

Things we'd like to know:

- your version of memcached
- how many queries/second you run
- stats output usually helps

have you gone through this page yet?
http://code.google.com/p/memcached/wiki/Timeouts
then there's this:
http://code.google.com/p/memcached/wiki/NewServerMaint




Re: Where do I find the invalid characters for a memcached key?

2011-02-10 Thread dormando
   Or wiki -> protocol/commands (two clicks!) though the key stuff should be
   repeated at the top there (just fixed that). Probably should be repeated
   somewhere else too, which we'll improve next time.


 The wiki says that only space and newlines are disallowed, have you changed 
 the code to allow return, tab, vertical tab and form feed?

nitpicky! though we seem to get plenty of people who would see no spaces
and think space-like things such as tabs would be perfectly good.

guess I'll fix that harder soon. or anyone else is welcome to edit the
wiki too :P doesn't always have to be me.


Re: Issue 122 in memcached: failed to write, and not due blocking: No error

2011-02-10 Thread dormando
I wish I knew. Perhaps we'll get a better release out for windows soon
enough...

On Tue, 8 Feb 2011, Sean wrote:

 I am getting it from membase:
 http://blog.elijaa.org/index.php?post/2010/08/25/Memcached-1.4.5-for-Windows

 I also tried build it myself using mingw. It turns out with the same
 error.



 On Feb 8, 2:46 pm, dormando dorma...@rydia.net wrote:
  I don't think anyone knows why that particular port of memcached has
  trouble; I'm assuming it's a buggy libevent, or buggy interaction with
  libevent. Given that it's an unhandled socket error :P
 
  Where did you get the windows binary from?
 
 
 
  On Tue, 8 Feb 2011, Roberto Spadim wrote:
   =] can you use 1.2.1 without problems? use it ehheeheh
   i don't know if you will get more speed with newer version
   if you don't know, please continue trying in this mail list. i can't
   help with windows :(
 
   2011/2/8 Sean sean.y@gmail.com:
I am using only one telnet to test the memcache. So only one client
and 0 keys. On the same machine, if I run memcached.exe version 1.2.1,
it works fine.
 
On Feb 8, 2:02 pm, Roberto Spadim robe...@spadim.com.br wrote:
i don't know, but can memcache use windows pipes?
shared memory protocol?
for loop back (127.0.0.1) arp table have always 1 register (internal
on arp engine, or inside arp table), i think that's not a arp problem
too...
maybe windows virtual memory problem?
what's your today app size?
how many clients at same time?
how many keys?
what's the average key length?
 
2011/2/8 memcac...@googlecode.com:
 
 Comment #9 on issue 122 by sean.y@gmail.com: failed to write, 
 and not
 due blocking: No error
http://code.google.com/p/memcached/issues/detail?id=122
 
 I don't think it's related to ARP either, since I am using the 
 loopback
 interface 127.0.0.1. The arp table is not full. I can stably repro 
 this
 issue on a few machines right after I start the memcache.exe. On 
 some other
 machines, it works fine though. I can't figure out the differences 
 between
 these machines. They are of the same Windows version with all last 
 patches.
 
--
Roberto Spadim
Spadim Technology / SPAEmpresarial
 
   --
   Roberto Spadim
   Spadim Technology / SPAEmpresarial



Re: Where do I find the invalid characters for a memcached key?

2011-02-08 Thread dormando
 Where do I find the invalid characters for a memcached key?

Buried in the wiki somewhere + protocol.txt.

in short, for ascii; no spaces or newlines.
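
In C terms, a conservative check might look like this (the 250-byte limit
is from protocol.txt; rejecting all control characters is an assumption on
the safe side):

    #include <stdbool.h>
    #include <string.h>

    bool key_is_valid(const char *key) {
        size_t len = strlen(key);
        if (len == 0 || len > 250) return false;    /* 250-byte key limit */
        for (size_t i = 0; i < len; i++) {
            unsigned char c = (unsigned char)key[i];
            if (c <= ' ' || c == 127) return false; /* space/control chars */
        }
        return true;
    }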


Re: Where do I find the invalid characters for a memcached key?

2011-02-08 Thread dormando
protocol.txt also comes with the software... so technically it's
memcached.org -> tarball -> spend 1.75 seconds doing `ls`

Or typing "memcached key" into google gets you enough results.

Or wiki -> protocol/commands (two clicks!) though the key stuff should be
repeated at the top there (just fixed that). Probably should be repeated
somewhere else too, which we'll improve next time.

On Tue, 8 Feb 2011, Marc Bollinger wrote:

 I understand why the 'official' protocol.txt is where it is, but to get there
 from memcached.org, you go from
 memcached.org -> code.google.com -> github

 which just seems kind of janky.

 - Marc

 On Tue, Feb 8, 2011 at 9:55 AM, dormando dorma...@rydia.net wrote:
Where do I find the invalid characters for a memcached key?

 Buried in the wiki somewhere + protocol.txt.

 in short, for ascii; no spaces or newlines.







Re: Issue 122 in memcached: failed to write, and not due blocking: No error

2011-02-08 Thread dormando
I don't think anyone knows why that particular port of memcached has
trouble; I'm assuming it's a buggy libevent, or buggy interaction with
libevent. Given that it's an unhandled socket error :P

Where did you get the windows binary from?

On Tue, 8 Feb 2011, Roberto Spadim wrote:

 =] can you use 1.2.1 without problems? use it ehheeheh
 i don't know if you will get more speed with newer version
 if you don't know, please continue trying in this mail list. i can't
 help with windows :(

 2011/2/8 Sean sean.y@gmail.com:
  I am using only one telnet to test the memcache. So only one client
  and 0 keys. On the same machine, if I run memcached.exe version 1.2.1,
  it works fine.
 
  On Feb 8, 2:02 pm, Roberto Spadim robe...@spadim.com.br wrote:
  i don't know, but can memcache use windows pipes?
  shared memory protocol?
  for loop back (127.0.0.1) arp table have always 1 register (internal
  on arp engine, or inside arp table), i think that's not a arp problem
  too...
  maybe windows virtual memory problem?
  what's your today app size?
  how many clients at same time?
  how many keys?
  what's the average key length?
 
  2011/2/8  memcac...@googlecode.com:
 
 
 
   Comment #9 on issue 122 by sean.y@gmail.com: failed to write, and not
   due blocking: No error
  http://code.google.com/p/memcached/issues/detail?id=122
 
   I don't think it's related to ARP either, since I am using the loopback
   interface 127.0.0.1. The arp table is not full. I can stably repro this
   issue on a few machines right after I start the memcache.exe. On some 
   other
   machines, it works fine though. I can't figure out the differences 
   between
   these machines. They are of the same Windows version with all last 
   patches.
 
  --
  Roberto Spadim
  Spadim Technology / SPAEmpresarial



 --
 Roberto Spadim
 Spadim Technology / SPAEmpresarial



Re: Memcached PHP Configuration

2011-02-07 Thread dormando
Hi,

I don't want to be rude but can you perhaps stop advocating using UDP?
It's not actually faster if using persistent connections and is full of
bugs and limitations (like a max packet size of 1.4k).

Uhm. Actually in general your information is a little off from how we
usually go about things; perhaps you could read some of the history or pad
through the wiki a bit? I much enjoy your enthusiasm but there're good
reasons why we recommend a list of other things for people to try first.

ie; striping/replicating data halves your effective cache size and can
introduce bugs. Generally you get more benefit, performance-wise, from
using more RAM.

"UDP is faster than TCP" is ... a mild failure as general knowledge. It's
more complicated than that :( I'm glad more online games are moving away
from UDP and toward TCP connections, as NAT'ing UDP is buggy and
slaughtering slow connections with extra traffic wastes bandwidth for
everyone involved.

On Tue, 8 Feb 2011, Roberto Spadim wrote:

 1) some libraries implement hashing to stripe information (like raid0 does
 with disks); you should use a deterministic hash function (always sets
 the key to the same server)
 2) failover should be a mirror flag (like raid1 with disks); it should
 write that variable to all servers (write on all servers = write and
 wait for all servers to answer: that's ok). in case of a server problem,
 all servers have the same information (you can use repcached, a
 memcached-like server with the same memcache protocol and based on
 memcached, but with a replication feature; in this case replication is
 done in the server, not in the client. check if its sync time is good for
 you, and whether it's a network problem or not)
 3) no, you can use UDP in a good network, it's faster (doesn't need a
 connection) and doesn't have a lot of latency (TCP can have latency, but
 some options can reduce it). persistent connections remove the
 connection time, but they make another problem... the TCP list gets
 bigger, maybe your TCP list can overflow the operating system's TCP
 list, and some connections must be closed... UDP doesn't have this
 problem, it's connectionless =)

 2011/2/8 y1rm3y4hu y1rm3y...@gmail.com:
  Hi,
 
  I've been trying to find resources online to address a few questions i
  had regarding the various configuration options available with
  Memcached client/server without much success.
 
  Heres how my setup would look like
  i'd have two web servers [amazon EC2 instances] load balancing
  incoming requests in a round robin fashion - each of these web servers
  would have memcached[client and server] installed in it
 
  Now it would be great if somebody could give me pointers on the below
  questions.
 
  #1) Should i use consistent hashing.
  I am not expecting instances to go down randomly. But whenever one
  machine has to be taken out for maintenance etc, would like to
  minimize the impact. i read about a reduced performance when switched
  to consistent hashing. Not sure whether it is still valid.
 
  #2 ) If we are using standard vs consistent hashing how would failover
  work?
  I see that pecl/memcache has a failover flag but can't find anything
  similar to it in pecl/memcached. What are the implications.
 
  #3) Should i always go with persistent connections?
 
 
  Any help/links/pointers would be highly appreciated :)
 
 
  Have a good day
  y1rm3y4hu



 --
 Roberto Spadim
 Spadim Technology / SPAEmpresarial



Re: Memcached PHP Configuration

2011-02-07 Thread dormando
 I've been trying to find resources online to address a few questions i
 had regarding the various configuration options available with
 Memcached client/server without much success.

The wiki at http://memcached.org/wiki has most of this, though perhaps not
in the clearest way. The old FAQ has a good chunk of it as well. Perhaps
it's a bit "read between the lines", but most things you don't want to
do have warnings.

 Here's how my setup would look:
 i'd have two web servers [amazon EC2 instances] load balancing
 incoming requests in a round robin fashion - each of these web servers
 would have memcached[client and server] installed in it

 Now it would be great if somebody could give me pointers on the below
 questions.

 #1) Should i use consistent hashing.
 I am not expecting instances to go down randomly. But whenever one
 machine has to be taken out for maintenance etc, would like to
 minimize the impact. i read about a reduced performance when switched
 to consistent hashing. Not sure whether it is still valid.

Yes.

 #2 ) If we are using standard vs consistent hashing how would failover
 work?
 I see that pecl/memcache has a failover flag but can't find anything
 similar to it in pecl/memcached. What are the implications.

Hashing doesn't change how failover works. It's generally a better idea to
let a dead server fail, and have enough servers to cope. Otherwise, if
servers flap you'll end up with an inconsistent cache or even more cache
misses.

So a failed server == extra cache misses, in proportion to how many servers
you had. With consistent hashing, adding or removing servers causes the same
limited disruption; with standard hashing it's far worse.
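
For pecl/memcached, consistent hashing is a client-side option. A minimal
sketch, with illustrative host names:

 <?php
 // Enable consistent (ketama-style) hashing in pecl/memcached.
 $m = new Memcached();
 $m->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);
 $m->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);
 $m->addServers(array(
     array('cache1.example.com', 11211),  // illustrative hosts
     array('cache2.example.com', 11211),
 ));
 // With this, removing one server remaps roughly 1/N of the keys,
 // instead of rehashing nearly everything.
 ?>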

 #3) Should i always go with persistent connections?

If you can, yes. Be careful, as some clients don't present a reusable
connection very well and you can end up leaking connections. You should
try to use persistent conns, then fall back to non-persistent if they
don't work.
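
With pecl/memcached, persistent connections come from passing a
persistent_id to the constructor. A minimal sketch (the pool id is
illustrative):

 <?php
 // The persistent_id keeps the connection pool alive across requests
 // within the same PHP process.
 $m = new Memcached('my_pool');  // 'my_pool' is an illustrative id
 if (!count($m->getServerList())) {
     // Only add servers once per pool; otherwise the server list
     // grows on every request, which is one way connections leak.
     $m->addServer('127.0.0.1', 11211);
 }
 ?>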

enjoy.
-Dormando


Re: Memcached PHP Configuration

2011-02-07 Thread dormando

 #1) Should i use consistent hashing.
 I am not expecting instances to go down randomly. But whenever one
 machine has to be taken out for maintenance etc, would like to
 minimize the impact. i read about a reduced performance when switched
 to consistent hashing. Not sure whether it is still valid.

Er, yes you should use it. pecl/memcache 3.0 would degrade if you used
it, but you should never use that software anyway.


Re: Memcached PHP Configuration

2011-02-07 Thread dormando
Yes, that's the point. TCP loses some packets, then scales back. Beats
having the machine drop offline.

On Tue, 8 Feb 2011, Roberto Spadim wrote:

 check this discussion for more practical (not theoretical) information
 about UDP:
 http://stackoverflow.com/questions/1098897/what-is-the-largest-safe-udp-packet-size-on-the-internet
 http://www.29west.com/docs/THPM/packet-loss-myths.html

 and note that TCP can lose packets too (any protocol without RTS can):
 "Reality--The normal operation of TCP congestion control may cause loss
 due to queue overflow. See this report for more information. Loss
 rates of several percent were common under heavy congestion."



 2011/2/8 Roberto Spadim robe...@spadim.com.br:
  just an observation:
  the hash is for server selection; a fast hash function = no performance
  problems (low latency)
  in some libraries you can supply your own hash function; read the source
  code and documentation of your client library (for better help on the
  memcached mailing list, use memcached-based libraries; try not to use
  independent libraries, since you could end up with different hash
  algorithms that put data on one server and read it from another = bad
  cache hit rate)
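
A minimal sketch of that point, using plain modulo hashing for
illustration only; the function and server list here are hypothetical:

 <?php
 // Naive deterministic server selection: same key -> same server,
 // as long as every client uses this exact function and server list.
 function pick_server(array $servers, $key) {
     // Mask keeps the crc32 result non-negative on 32-bit builds.
     return $servers[(crc32($key) & 0x7fffffff) % count($servers)];
 }

 $servers = array('10.0.0.1:11211', '10.0.0.2:11211');  // illustrative
 echo pick_server($servers, 'user:42'), "\n";
 ?>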
 



 --
 Roberto Spadim
 Spadim Technology / SPAEmpresarial



Re: Memcached PHP Configuration

2011-02-07 Thread dormando
Well you were saying speed is the point, but RAM is there as well.

If you properly tune the TCP stack the memory usage isn't bad at all. I've
run a number of hosts with 100,000+ TCP connections on them at once, and
while RAM gets sorta heavy it doesn't implode or anything. I say these
things based on practical experience running huge shit; memcached was
designed and has been further tuned to run with 10,000+ connections just
fine.

UDP still uses buffers, but the requests disappear when you overflow,
which leaks to the memcache client needing to wait on a timeout to ensure
its response is really gone. In TCP land (ignoring the first SYN/ACK
sequence) it can drop and retry packets with a relatively short timeout.
Meaning you get the answer back and things work fine.

There *are* some cases where UDP can be useful, but we would absolutely
never recommend anyone do that unless they have to. The point is sort of
moot since most people use data too large for memcached's UDP mode to
handle. Those who've wanted UDP to handle larger packets end up
reimplementing TCP on top of UDP to make it work.

So, we've opted to not do that.

On Tue, 8 Feb 2011, Roberto Spadim wrote:

 it's not about faster or not; the point of UDP is the RAM used to keep
 TCP connections alive. UDP doesn't need RAM to hold connections open
 (just on the server side, when sending/receiving packets); it's
 connectionless... (you know, I know, everyone that uses it knows)
 with many clients (more than 1), UDP in my benchmarks works better
 than TCP (read more above)
 I know about the limitations of the UDP protocol and the lower layers
 (packet fragmentation and other problems); it's good for some types of
 values (data size) and network layouts (internet / intranet)
 for example on a local network it's very good; for the internet with
 many routers a TCP connection is better. The point is: what type of
 value is being stored, and what's the network layout? A big value
 (more than 1 kbyte) or a small one? If all values are small, UDP works
 very well on a local network; I use it without bugs... with jumbo
 frames you can get more than 1 kbyte without data loss over UDP

 speed isn't the point; the data size, the network layout and the cache
 hit rate are the point. With a broken packet we have no communication =]


Re: Memcached PHP Configuration

2011-02-07 Thread dormando
s/leaks/leads to

On Mon, 7 Feb 2011, dormando wrote:

 UDP still uses buffers, but the requests disappear when you overflow,
 which leaks to the memcache client needing to wait on a timeout to ensure
 its response is really gone. In TCP land (ignoring the first SYN/ACK
 sequence) it can drop and retry packets with a relatively short timeout.
 Meaning you get the answer back and things work fine.


Re: how to get the memcache statistics?

2011-02-06 Thread dormando
 Hi
  Currently my application's response time is too slow, and I can see a
 lot of lock failures and memcache timeouts in the log. We are using the
 spymemcached client. We are assuming it's time to increase the # of
 memcached servers. Before coming to that conclusion we need to identify
 the memcached statistics.

 In a simple sentence: what is the best parameter for deciding whether my
 memcached farm needs more servers?

http://code.google.com/p/memcached/wiki/Timeouts -- do this to verify
what your timeouts are.
http://code.google.com/p/memcached/wiki/NewServerMaint -- for general
server health
http://code.google.com/p/memcached/wiki/NewPerformance -- how fast it
should be
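
The counters that usually answer this question are evictions and the hit
rate. A minimal sketch of pulling them from PHP (server address is
illustrative):

 <?php
 $m = new Memcached();
 $m->addServer('127.0.0.1', 11211);  // illustrative address

 foreach ($m->getStats() as $host => $stats) {
     $gets = $stats['get_hits'] + $stats['get_misses'];
     $hitrate = $gets ? $stats['get_hits'] / $gets : 0;
     // Rising evictions with a full cache usually means more memory or
     // servers; timeouts with low evictions point at the network/client.
     printf("%s: hit rate %.1f%%, evictions %d, connections %d\n",
         $host, $hitrate * 100, $stats['evictions'],
         $stats['curr_connections']);
 }
 ?>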

Also make sure that your server is running the latest software available
(1.4.5). 1.2.x releases are not supported anymore.

-Dormando


Re: Reducing virtual memory usage of memcached

2011-02-06 Thread dormando
 I've found (part of) the answer to my own question - the virtual
 memory comes from the thread stacks.  Setting -t 1 reduces the virtual
 memory to around 20MB.

 I've read from other posts that it is no longer possible to have a non-
 threaded memcached version.  It appears on my system that the stack
 size being used is 10MB.  This is almost certainly way too large.  Is
 there any way to conveniently (i.e. without major edits to the source
 code) set the thread stack size for the threads that memcached uses,
 e.g. through a macro setting?

There's some amount of overhead with pre-allocating the hash table and
this and that... that'll show up as virtual memory until data's written
into it. Also note that memcached will lazily allocate one slab per slab
class, so even if you set -m 12 you'll end up using 50+ megs of ram if you
put one item in each slab class.

You could also use -I to lower the max item size and reduce some overhead.

I don't think memcached explicitly sets the thread stack size, and I forget
how to tweak that offhand; I think google will tell you :P


Re: features - interesting!!

2011-02-04 Thread dormando
 problem:
 today I need 4 packets to do a 'lock'/'unlock' using memcache
 I need to use less ROM/CPU/RAM/network on the client side

 solution:
 1) a server-side lock function using 2 packets (with atomic operations)
 2) a proxy: if function = lock/unlock, the proxy makes these packets to
 the server (no atomicity here, but I could implement it with a lot of
 work on the protocol, maybe a new memcached hehe)

 I will try the second one without atomicity (I'm using it today; I have
 some problems, but the client side works around them with more
 packets... (retry))
 the first is better (no proxy needed: fewer packets, less CPU, RAM, ROM,
 and time)

 packets:
 LOCK
 add value (1 packet sent by client, 1 packet sent by server)
 if no error (end here)
 if value exists:
 read value (1 packet sent by client, 1 packet sent by server)
 if value == my value (my lock); if not (not my lock)

 UNLOCK
 read value (1 packet sent by client, 1 packet sent by server)
 if value != my value
 (end here)
 delete value (1 packet sent by client, 1 packet sent by server)
 (end here)
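
For reference, the add-based lock sequence quoted above maps onto
pecl/memcached roughly like this; a sketch only, with illustrative key,
token and expiry:

 <?php
 // 'add' only succeeds if the key does not already exist, which is
 // what makes it usable as a lock primitive.
 function mc_lock(Memcached $m, $key, $token, $ttl = 30) {
     if ($m->add($key, $token, $ttl)) {
         return true;                  // got the lock: 2 packets
     }
     return $m->get($key) === $token;  // already ours? 2 more packets
 }

 function mc_unlock(Memcached $m, $key, $token) {
     if ($m->get($key) !== $token) {
         return false;                 // not our lock
     }
     return $m->delete($key);          // note: read+delete is not atomic
 }
 ?>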

I think he means to step back a bit, and describe what problem it is that
led you to needing to access memcached from a PIC.

ie; what is the specific work that the PIC is doing which needs
synchronization and perhaps repeated accesses to memcached?


Re: features - interesting!!

2011-02-04 Thread dormando
 the PIC uses memcache as RAM
 first I lock memcache (with my current lock function)
 set a key to 0
 I read data from the PIC and put it into memcache
 after 128 writes I put a checksum (and save this sum in the PIC's
 internal memory)
 the total is about 1MB (1048576 bytes) (the PIC doesn't have this much
 memory; it's an ADC reading 2 bytes/read)
 after all 1MB is sent, I check all the sums against my internal memory,
 to make sure no data was lost
 if data was lost, I start a new lock
 set the key to 1
 after that, I unlock memcache (my current unlock function)
 wait some minutes
 if the value I get isn't 2, return to the first lock
 if it's 2, continue with another function inside the PIC program

 after key=1 my server (another app) processes the data and sets the key
 to 2 if ok
 I'm using memcache as a key-value store and as a server-client protocol
 ok, it's not fast, but it works
 the other implementation uses a server side and an ARM client side;
 the server doesn't use memcache any more, and neither does the ARM; I'm
 using mysql on the ARM side and a tcp/ip socket to notify my app about
 new data (it's faster than a table update in mysql)

Ah okay, this is actually perfect for gearmand, but I don't think you'll
have much fun writing your own client for it.

If you would go for gearmand, you would:

- send open packet to gearmand
- write/sum your data
- bail the connection if you get an error and try again
- if all data sent, all is good
- you can either use async job, or sync job and wait for a worker process
on the server to pick up the job and read it
^ but as I said writing a gearmand client and figuring out how to write a
worker is more work.
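
For comparison, the client half of that flow from PHP's pecl/gearman looks
roughly like this (a sketch; the PIC would have to speak the same protocol
itself, and the function name and payload are illustrative):

 <?php
 $client = new GearmanClient();
 $client->addServer('127.0.0.1', 4730);   // gearmand's default port

 $payload = '...1MB of ADC readings...';  // illustrative
 // Async job: fire and forget; a worker picks it up from the server.
 $client->doBackground('store_adc_data', $payload);
 ?>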

in your case, perhaps you could consider using a key prefix for your PIC
army?

The dumbest/quickest thing I can think of:

- assign each PIC a unique client id number
- make sure the server knows the list of ids
- each PIC will write to keys named: key_IDNUMBER (ie if the PIC's ID is
5, it writes to key_5)
- the server issues a large multiget to memcached asking for all of the
PIC ids, and processes any that it gets back (get key_1 key_2 key_3 etc)

Then there's no locking, and multiple PICs can use memcached in parallel?
That sounds like a win/win to me, if gross :)
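
A sketch of the server's side of that (the ID range and key prefix are
illustrative):

 <?php
 $m = new Memcached();
 $m->addServer('127.0.0.1', 11211);  // illustrative address

 $pic_ids = range(1, 50);            // the known list of PIC ids
 $keys = array_map(function ($id) { return "key_$id"; }, $pic_ids);

 // One round trip per memcached server in the pool, not one per key.
 $found = $m->getMulti($keys);
 foreach ($found as $key => $payload) {
     // process $payload, then clear the slot for that PIC's next write
     $m->delete($key);
 }
 ?>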

-Dormando


Re: features - interesting!!

2011-02-04 Thread dormando
 for data I use this:
 key = PIC IP number_fragment
 lock key = PIC IP number_lock

Shouldn't need the lock_key, then?

 That sounds like a win/win to me, if gross :)
 what's win/win ? hehehe

It's when there's no lose.


Re: Memcached delete command consistency

2011-01-26 Thread dormando
 Hi to all,

 First of all please accept my apologies if this question answered
 before.
 I've searched for a straight answer but I couldn't find one. Maybe I'm
 not looking in the right place.

 So, I've started using the memcached for user sessions and I back it
 up with a database storage (in case of a miss or a memcached failure
 or other events). So far so good, I'm happy with the results, it
 really improves the performance.

 But, having two or more memcached servers and two or more web servers
 behind a load balancer, when a user logs out (which basically issues a
 delete in memcached), can the memcached client (I'm using the PHP client)
 or the memcached server guarantee that the session (or any key for that
 matter) will be deleted from the whole memcached pool? Or do I have to
 do this myself?

 Thank you,
 Adrian

While not specifically talking about deletes, this talks about how queries
address a pool of memcached servers generically:

http://code.google.com/p/memcached/wiki/TutorialCachingStory

The short answer is that any given key exists on only one server, so the
client directly adds or deletes the single copy of that key every time.
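
So a logout is a single delete, routed by the client to whichever server
owns the key. A sketch (pool and key naming are illustrative):

 <?php
 // Every web server must configure the same server list and hashing
 // options, or clients will disagree about which server owns a key.
 $m = new Memcached();
 $m->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);
 $m->addServers(array(
     array('cache1.example.com', 11211),  // illustrative pool
     array('cache2.example.com', 11211),
 ));

 $sid = session_id();                 // assumes an active session
 $m->delete('session:' . $sid);       // removes the only copy
 ?>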


Re: Expire problem

2011-01-21 Thread dormando
Think we worked this out on IRC; there was something running flush_all
over and over on his server.

On Fri, 21 Jan 2011, Ivan wrote:

 Hello,

 i have memcached 1.4.5 on CentOS 5.5; my sets seem to expire after a few
 seconds (no matter what I put as the expire time: infinite, 5 mins, 1
 hour, etc.) and my cache is not full.
 It's not a code problem, since I've retested it with the most basic
 example, like this:

 <?php
 $memcache_obj = new Memcache;
 $memcache_obj->connect('127.0.0.1', 11211);
 if ($memcache_obj->get('time') == '') {
     $date = date('H:i:s');
     $memcache_obj->set('time', $date, MEMCACHE_COMPRESSED, 0);
 }
 echo 'At ' . date('H:i:s') . ', your key is ' . $memcache_obj->get('time');
 ?>

 Also restarting memcached didn't help.



Re: Session store

2010-12-09 Thread dormando
 So, pretty general question:

 Seems against the recommendation of this list, memcached is often used as
 a session store. I'm working with a client now that uses two clusters of
 memcached servers; every write is saved on both clusters, and on failed
 reads the backup is read. Poor man's HA -- kind of.
 From what I hear the session data is updated very often -- that is, almost
 every read is followed by an update. And I believe the sessions are large,
 too. Memcached has been reliable and easy to scale.
 Still, it's a cache.

 I'm just curious what people here would recommend now for a durable and
 highly available, yet fast and scalable session store for a high-traffic
 site.

 Combine the two clusters into a single cluster and use MySQL and
 replication? Memcachedb?

http://dormando.livejournal.com/495593.html - I wrote this old post on the
topic. Need to rewrite that post to be a lot more succinct :/
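
For reference, the dual-cluster pattern described in the question looks
roughly like this on the client side; a sketch with illustrative pool
addresses, and note the clusters can still diverge on partial failures:

 <?php
 $primary = new Memcached();
 $primary->addServer('cache-a.example.com', 11211);  // illustrative
 $backup = new Memcached();
 $backup->addServer('cache-b.example.com', 11211);   // illustrative

 function session_write(Memcached $a, Memcached $b, $key, $value) {
     $a->set($key, $value);
     $b->set($key, $value);  // best effort: no rollback if one write fails
 }

 function session_read(Memcached $a, Memcached $b, $key) {
     $v = $a->get($key);
     return ($v !== false) ? $v : $b->get($key);  // fall back to the backup
 }
 ?>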


Re: MemCached Evictions

2010-11-30 Thread dormando
Not yet, no :/

On Tue, 30 Nov 2010, Artur Ejsmont wrote:

 there was supposed to be rebalancing functionality, but I am not sure
 whether it went into stable already? Lately I was busy doing other things.

 Can anyone confirm the status of reclaiming slabs, please? I am also
 interested in whether it is available.

 thanks

 art

 On 30 November 2010 14:49, Kaiwang Chen kaiwang.c...@gmail.com wrote:
  Is there any way to shrink a slab class so that space allocated at peak
  time can be reused by other slab classes later?
 
  Thanks,
  kc
 
  2010/11/5 Artur Ejsmont ejsmont.ar...@gmail.com:
  Mikael can be right.

  Each slab page is 1MB, and it's designated to hold items of a particular
  size. So if you had 5000 items over 500KB inserted into the cache at
  some point (at the beginning), they would consume 5GB (they would take
  1MB each, even if they were 501KB).

  Then after some time the caching pattern changes and you don't insert
  big items any more. That 5GB is wasted if you are no longer storing such
  big items. The distribution has changed and big items aren't needed any
  more, but more small items can't fit into the cache, because memcached
  can't put small items into slabs designated for big items.

  Sorry if my explanation is not super good :)
 
  checkout this tool out though
  http://artur.ejsmont.org/blog/content/first-version-of-memcache-stats-script-based-on-memcachephp
 
  i took a open source stats scripts and added detailed view of slabs.
  It should tell you how many slabs you have allocated per size and how
  many items are there. I guess this should give you good idea what is
  really happening.
 
  :- )
 
  art
 
 
 
  On 5 November 2010 05:59, vishnu vishnudee...@gmail.com wrote:
  What is slab distribution?
 
  How can i resolve this issue?
 
  Thanks.
 
  On Thu, Nov 4, 2010 at 10:40 PM, dormando dorma...@rydia.net wrote:
 
  reclaims are good, evictions are bad
 
  On Thu, 4 Nov 2010, Kate Wang wrote:
 
   We are experiencing high reclaims instead of evictions. Could a slab
   distribution shift cause that as well?
  
   If a shifted slab distribution can cause a high eviction rate, what's
   the best way to fix it or avoid it?
  
   Thanks!
  
   On Thu, Nov 4, 2010 at 5:52 PM, Mikael Fridh fri...@gmail.com wrote:
         On Nov 4, 6:32 pm, rahul_kcle vishnudee...@gmail.com wrote:
         
          From last 2 weeks i am seeing evictions happening on our
   memcached
          boxes even though there is lot of memory left . Here are the
   stats
          from memcached
         
          STAT bytes_read 434627188758
          STAT bytes_written 357821569260
          STAT limit_maxbytes 23622320128
          STAT accepting_conns 1
          STAT listen_disabled_num 0
          STAT threads 5
          STAT conn_yields 0
          STAT bytes 14573115225
          STAT curr_items 1853350
          STAT total_items 66439158
          STAT evictions 7000591
         
          Bytes is much less than limit_maxbytes.
  
    See stats slabs; possibly your slab distribution profile has shifted
    over time.
  
   Mikael
  
  
  
  
 
 
 
 
  --
  Visit me at:
  http://artur.ejsmont.org/blog/
 
 



 --
 Visit me at:
 http://artur.ejsmont.org/blog/


