Re: Issue 125 in memcached: Compile error with gcc

2010-03-12 Thread memcached


Comment #2 on issue 125 by i...@quintura.com: Compile error with gcc
http://code.google.com/p/memcached/issues/detail?id=125

Thanks for the answer.
Memcached compiled correctly with ./configure CC=gcc-4.3.
You can close this ticket.
I get these errors when I try ./configure CC=gcc-4.4:
make  all-recursive
make[1]: Entering directory `/root/memcached-1.4.4'
Making all in doc
make[2]: Entering directory `/root/memcached-1.4.4/doc'
make  all-am
make[3]: Entering directory `/root/memcached-1.4.4/doc'
make[3]: Nothing to be done for `all-am'.
make[3]: Leaving directory `/root/memcached-1.4.4/doc'
make[2]: Leaving directory `/root/memcached-1.4.4/doc'
make[2]: Entering directory `/root/memcached-1.4.4'
gcc-4.4 -std=gnu99 -DHAVE_CONFIG_H -I. -DNDEBUG -g -O2 -pthread -Wall -Werror
-pedantic -Wmissing-prototypes -Wmissing-declarations -Wredundant-decls -MT
memcached-memcached.o -MD -MP -MF .deps/memcached-memcached.Tpo -c -o
memcached-memcached.o `test -f 'memcached.c' || echo './'`memcached.c
cc1: warnings being treated as errors
memcached.c: In function 'complete_incr_bin':
memcached.c:1022: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c:1043: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c:1060: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c: In function 'process_bin_get':
memcached.c:1192: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c: In function 'process_bin_update':
memcached.c:1888: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c:1904: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c: In function 'process_bin_append_prepend':
memcached.c:1948: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c: In function 'process_bin_delete':
memcached.c:2013: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c: In function 'do_store_item':
memcached.c:2126: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c:2126: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c:2143: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c:2144: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c:2157: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c:2159: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c:2159: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c:2201: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c:2213: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c: In function 'process_get_command':
memcached.c:2591: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c: In function 'process_update_command':
memcached.c:2750: error: dereferencing type-punned pointer will break strict-aliasing rules
memcached.c: In function 'do_add_delta':
memcached.c:2869: error: dereferencing type-punned pointer will break strict-aliasing rules
make[2]: *** [memcached-memcached.o] Error 1
make[2]: Leaving directory `/root/memcached-1.4.4'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/memcached-1.4.4'
make: *** [all] Error 2





--
You received this message because you are listed in the owner
or CC fields of this issue, or because you starred this issue.
You may adjust your issue notification preferences at:
http://code.google.com/hosting/settings


Re: Memcache as session server with high cache miss?

2010-03-12 Thread TheOnly92
Here is our current setup:
webserver1 (also runs session memcache server)
webserver2 (also runs session memcache server)
database (specialized memcache storage for data caching)

We are not a heavily loaded site; at peak time only about 1500 users
are online at once. The network is not really saturated, as I believe
not much data is being transferred.

Here is the phpinfo() part for sessions:
session.auto_start              Off       Off
session.bug_compat_42           On        On
session.bug_compat_warn         On        On
session.cache_expire            180       180
session.cache_limiter           nocache   nocache
session.cookie_domain           no value  no value
session.cookie_httponly         Off       Off
session.cookie_lifetime         0         0
session.cookie_path             /         /
session.cookie_secure           Off       Off
session.entropy_file            no value  no value
session.entropy_length          0         0
session.gc_divisor              100       100
session.gc_maxlifetime          1440      1440
session.gc_probability          0         0
session.hash_bits_per_character 4         4
session.hash_function           0         0
session.name                    PHPSESSID PHPSESSID
session.referer_check           no value  no value
session.save_handler            memcache  memcache
session.save_path               tcp://172.23.111.11:11211,tcp://172.23.111.12:11211
                                tcp://172.23.111.11:11211,tcp://172.23.111.12:11211
session.serialize_handler       php       php
session.use_cookies             On        On
session.use_only_cookies        Off       Off
session.use_trans_sid           0         0

And yes, we use PECL memcache extension.

On Mar 11, 11:15 pm, Jérôme Patt jerome.p...@googlemail.com wrote:
 TheOnly92 schrieb:

  We used memcache as a session storage server because we needed to
  balance load across servers and share session data. Persistent file
  storage is not usable, and since memcache session storage is easy to
  configure in PHP, so we have decided to use it.

  Before this, memcache maximum storage was set to 64 MB, the data went
  up to 54 MB and so we thought it was the cause of cache misses. But
  after we increased to 256 MB, the problem is still occurring. Users
  report that after they logged in, they click on another page and they
  get logged out. But after a refresh they appear logged in again.

  session.gc_maxlifetime     1440    1440

 As far as i can see, nobody has asked this questions before: if you want to 
 use memcached
 as a session store, how is your setup designed? specialised memcached's 
 (good, good, good)
 or one instance per webserver (bad, bad, bad)? how does your php.ini look 
 like (all
 entries starting with session)? Do you have a any or even a large number of 
 evictions on
 your memcached's? Is your network saturated (using memcached as a session 
 store produces a
 large amount of network traffic, which can lead to connection problems, which 
 could also
 explain your issue).

 Kind regards,

 Jérôme


reusing sockets between Perl memcached objects

2010-03-12 Thread Jonathan Swartz
We make liberal use of the namespace option to
Cache::Memcached::libmemcached, creating one object for each of our
dozens of namespaces.

However I recently discovered that it will create new socket
connections for each object. i.e.:

   #!/usr/bin/perl -w
   use Cache::Memcached;
   use strict;

   my ( $class, $iter ) = @ARGV or die "usage: $0 class iter";
   eval "require $class";

   my @memd;
   for my $i ( 0 .. $iter ) {
       $memd[$i] = $class->new( { servers => ['localhost:11211'] } );
       $memd[$i]->get('foo');
   }

   my $memd = new Cache::Memcached { servers => ['localhost:11211'] };
   my $stats = $memd->stats;
   print "curr_connections: " . $stats->{total}->{curr_connections} . "\n";

swartz ./sockets.pl Cache::Memcached 50
curr_connections: 10
swartz ./sockets.pl Cache::Memcached::Fast 50
curr_connections: 61
swartz ./sockets.pl Cache::Memcached::libmemcached 50
curr_connections: 61

I don't know why curr_connections starts at 10. In any case,
curr_connections will not grow with the number of Cache::Memcached
objects created, but will grow with the number of
Cache::Memcached::Fast or Cache::Memcached::libmemcached objects
created.

I was a little surprised that libmemcached, at least, didn't have this
feature. Just wondering if I'm doing something wrong.

If I want to keep using libmemcached, I guess I will have to create
just one object and override its namespace each time I use it, as
sketched below.
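
A minimal sketch of that single-object idea (the package name and the
key-prefixing scheme are illustrative only, not an existing API):

    # Share one Cache::Memcached::libmemcached client and emulate
    # namespaces by prefixing keys, so only one set of sockets is opened.
    package My::NamespacedMemd;
    use strict;
    use warnings;
    use Cache::Memcached::libmemcached;

    my $shared = Cache::Memcached::libmemcached->new(
        { servers => ['localhost:11211'] }
    );

    sub new {
        my ( $class, %args ) = @_;
        return bless { prefix => ( $args{namespace} || '' ) }, $class;
    }

    sub get {
        my ( $self, $key ) = @_;
        return $shared->get( $self->{prefix} . $key );
    }

    sub set {
        my ( $self, $key, @rest ) = @_;
        return $shared->set( $self->{prefix} . $key, @rest );
    }

    1;

Every My::NamespacedMemd instance then reuses the sockets of the single
shared client, so curr_connections should stay flat.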

Jon


Re: reusing sockets between Perl memcached objects

2010-03-12 Thread dormando
We discovered this as well a few months ago... I don't think we found a
workaround :(

Maybe someone else has?

On Fri, 12 Mar 2010, Jonathan Swartz wrote:

 We make liberal use of the namespace option to
 Cache::Memcached::libmemcached, creating one object for each of our
 dozens of namespaces.

 However I recently discovered that it will create new socket
 connections for each object. i.e.:

#!/usr/bin/perl -w
use Cache::Memcached;
use strict;

my ( $class, $iter ) = @ARGV or die "usage: $0 class iter";
eval "require $class";

my @memd;
for my $i ( 0 .. $iter ) {
    $memd[$i] = $class->new( { servers => ['localhost:11211'] } );
    $memd[$i]->get('foo');
}

my $memd = new Cache::Memcached { servers => ['localhost:11211'] };
my $stats = $memd->stats;
print "curr_connections: " . $stats->{total}->{curr_connections} . "\n";

 swartz ./sockets.pl Cache::Memcached 50
 curr_connections: 10
 swartz ./sockets.pl Cache::Memcached::Fast 50
 curr_connections: 61
 swartz ./sockets.pl Cache::Memcached::libmemcached 50
 curr_connections: 61

 I don't know why curr_connections starts at 10. In any case,
 curr_connections will not grow with the number of Cache::Memcached
 objects created, but will grow with the number of
 Cache::Memcached::Fast or Cache::Memcached::libmemcached objects
 created.

 I was a little surprised that libmemcached, at least, didn't have this
 feature. Just wondering if I'm doing something wrong.

 If I want to keep using libmemcached, I guess I will have to create
 just one option and override its namespace each time I use it.

 Jon



Re: Memcache as session server with high cache miss?

2010-03-12 Thread elSchrom
OK, that looks correct to me. The issue you describe still sounds to me
like a connection problem.
On which interface is your memcached bound (memcached.conf?)? Does a

telnet 172.23.111.11 11211

work from every webserver instance? I could imagine the following
scenario:

User1 opens a session on server1, and it is stored in memd3. The next
page impression balances him to server2, which is not able to connect
to memd3 - so his session does not exist for the duration of that request.
The next request is again balanced to server1 and the session is
there again.
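
A quick way to check that from each webserver is a small probe like the
following (the host list is taken from the save_path in this thread;
everything else is just an illustration):

    #!/usr/bin/perl
    # Try to connect to each session memcached and ask for its version.
    use strict;
    use warnings;
    use IO::Socket::INET;

    for my $server ( '172.23.111.11:11211', '172.23.111.12:11211' ) {
        my $sock = IO::Socket::INET->new( PeerAddr => $server, Timeout => 2 );
        if ( !$sock ) {
            print "$server: connect FAILED ($!)\n";
            next;
        }
        print {$sock} "version\r\n";
        my $reply = <$sock>;
        print "$server: ", defined $reply ? $reply : "no reply\n";
        close $sock;
    }

Run it from both webservers; if either host cannot reach either instance,
that would explain sessions intermittently disappearing.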

On 12 Mrz., 11:25, TheOnly92 05049...@gmail.com wrote:
 Here is our current setup:
 webserver1 (also runs session memcache server)
 webserver2 (also runs session memcache server)
 database (specialized memcache storage for data caching)

 We are not really a high loaded site, at peak time only about 1500
 users online together. Network is not really saturated, as not much
 data being transferred I believe.

 Here is the phpinfo() part for sessions:
 session.auto_start      Off     Off
 session.bug_compat_42   On      On
 session.bug_compat_warn On      On
 session.cache_expire    180     180
 session.cache_limiter   nocache nocache
 session.cookie_domain   no value        no value
 session.cookie_httponly Off     Off
 session.cookie_lifetime 0       0
 session.cookie_path     /       /
 session.cookie_secure   Off     Off
 session.entropy_file    no value        no value
 session.entropy_length  0       0
 session.gc_divisor      100     100
 session.gc_maxlifetime  1440    1440
 session.gc_probability  0       0
 session.hash_bits_per_character 4       4
 session.hash_function   0       0
 session.name    PHPSESSID       PHPSESSID
 session.referer_check   no value        no value
 session.save_handler    memcache        memcache
 session.save_path       tcp://172.23.111.11:11211,tcp://172.23.111.12:11211
 tcp://172.23.111.11:11211,tcp://172.23.111.12:11211
 session.serialize_handler       php     php
 session.use_cookies     On      On
 session.use_only_cookies        Off     Off
 session.use_trans_sid   0       0

 And yes, we use PECL memcache extension.

 On Mar 11, 11:15 pm, Jérôme Patt jerome.p...@googlemail.com wrote:

  TheOnly92 schrieb:

   We used memcache as a session storage server because we needed to
   balance load across servers and share session data. Persistent file
   storage is not usable, and since memcache session storage is easy to
   configure in PHP, so we have decided to use it.

   Before this, memcache maximum storage was set to 64 MB, the data went
   up to 54 MB and so we thought it was the cause of cache misses. But
   after we increased to 256 MB, the problem is still occurring. Users
   report that after they logged in, they click on another page and they
   get logged out. But after a refresh they appear logged in again.

   session.gc_maxlifetime     1440    1440

  As far as i can see, nobody has asked this questions before: if you want to 
  use memcached
  as a session store, how is your setup designed? specialised memcached's 
  (good, good, good)
  or one instance per webserver (bad, bad, bad)? how does your php.ini look 
  like (all
  entries starting with session)? Do you have a any or even a large number 
  of evictions on
  your memcached's? Is your network saturated (using memcached as a session 
  store produces a
  large amount of network traffic, which can lead to connection problems, 
  which could also
  explain your issue).

  Kind regards,

  Jérôme


Re: Memcache as session server with high cache miss?

2010-03-12 Thread martin.grotzke
On Mar 11, 9:01 pm, Les Mikesell lesmikes...@gmail.com wrote:
 On 3/11/2010 1:01 PM, Martin Grotzke wrote:

  Hi,

  I'm trying to follow this thread on my mobile, i hope i didn't miss
  anything. But AFAICS it was not yet explained, when memcached might
  drop cached data as long as there's enough memory and expiration is
  not reached. Or is this not deterministic at all? Perhaps you can
  point me to resources providing more details on this?

 'Enough' memory may not be what you expect unless you understand how
 your data fits in the allocated slabs.  And I'm not sure what happens if
 the keys have hash collisions.
What about setting the minimum slab size to 1 mb (-n 1048576) so that
there's only one slab and one can calculate with this?

Cheers,
Martin


 --
    Les Mikesell
     lesmikes...@gmail.com


Re: Memcache as session server with high cache miss?

2010-03-12 Thread Les Mikesell

On 3/12/2010 11:21 AM, martin.grotzke wrote:

 Hi,

 I'm trying to follow this thread on my mobile, i hope i didn't miss
 anything. But AFAICS it was not yet explained, when memcached might
 drop cached data as long as there's enough memory and expiration is
 not reached. Or is this not deterministic at all? Perhaps you can
 point me to resources providing more details on this?

 'Enough' memory may not be what you expect unless you understand how
 your data fits in the allocated slabs.  And I'm not sure what happens if
 the keys have hash collisions.

 What about setting the minimum slab size to 1 mb (-n 1048576) so that
 there's only one slab and one can calculate with this?

After seeing more of this thread, I'm inclined to think that the problem
that started it is really a misconfiguration or network issue.  While
you shouldn't expect memcache to be a reliable store, the miss
percentage should not double when you add another server.


--
  Les Mikesell
   lesmikes...@gmail.com


objects deleted from cache after 20min

2010-03-12 Thread Alexandre Ladeira
I'm using memcached to store the following data:
- user object: a Java object consisting of username, password,
city, ...;
- user status info;
- counter.

So for every user login I make three memcached put operations.
After making 10,000 logins, I have these stats via memcached stats:
bytes: ~1 MB
counter: 9,980 (as this is a stress test, the server doesn't reply to
every request).

The problem is that after 20 minutes memcached deleted all the login info -
the counter decreased to 0, even when I set the expire time (for example
3600, 60*60*24*29, or 0). Have you ever faced a problem like that?


How to get more predictable caching behavior - how to store sessions in memcached

2010-03-12 Thread martin.grotzke
Hi,

I know that this topic is rather loaded, as it has been said often enough
that memcached was never created to be used as a reliable datastore.
Still, there are users interested in some kind of reliability, users
who want to store items in memcached and be sure that these items
can be pulled from memcached as long as they are not expired.

I read the following on memcached's memory management:
  Memcached has two separate memory management strategies:
- On read, if a key is past its expiry time, return NOT FOUND.
- On write, choose an appropriate slab class for the value; if it's
full, replace the oldest-used (read or written) key with the new one.
Note that the second strategy, LRU eviction, does not check the expiry
time at all. (from peeping into memcached, [1])

I also found Slabs, Pages, Chunks and Memcached ([2]) a really good
explanation of memcached's memory model.

Having this as background, I wonder how it would be possible to get
more predictability regarding the availability of cached items.

Assume that I want to store sessions in memcached. How could I run
memcached so that I can be sure that my sessions are available in
memcached when I try to get them? Additionally, assume that I expect
to have at most 1000 sessions at a time in one memcached node (and that
I can control/limit this in my application). Another assumption is
that sessions are between 50 kB and 200 kB.

The question now is how do I have to run memcached to store these
sessions in memcached?

Would it be an option to run memcached with a minimum slab size of
200 kB? Then I would know that a 200 kB chunk is used for each session.
With 1000 sessions between 50 kB and 200 kB this should take 200 MB
in total. If I run memcached with more than 200 MB of memory, could I be
sure that the sessions stay alive as long as they are not expired?

What do you think about this?

Cheers,
Martin


[1] http://blog.evanweaver.com/articles/2009/04/20/peeping-into-memcached/
[2] http://www.mikeperham.com/2009/06/22/slabs-pages-chunks-and-memcached/


Re: Memcache as session server with high cache miss?

2010-03-12 Thread martin.grotzke
I just started a new thread, "How to get more predictable caching
behavior - how to store sessions in memcached", to discuss my more
general question of how to achieve more predictability/reliability (I
might have hijacked this thread a little bit, sorry for that).

Cheers,
Martin


On Mar 12, 6:43 pm, Les Mikesell lesmikes...@gmail.com wrote:
 On 3/12/2010 11:21 AM, martin.grotzke wrote:



  Hi,

  I'm trying to follow this thread on my mobile, i hope i didn't miss
  anything. But AFAICS it was not yet explained, when memcached might
  drop cached data as long as there's enough memory and expiration is
  not reached. Or is this not deterministic at all? Perhaps you can
  point me to resources providing more details on this?

  'Enough' memory may not be what you expect unless you understand how
  your data fits in the allocated slabs.  And I'm not sure what happens if
  the keys have hash collisions.
  What about setting the minimum slab size to 1 mb (-n 1048576) so that
  there's only one slab and one can calculate with this?

 After seeing more of this thread, I'm inclined to think that the problem
 that started it is really a misconfiguration or network issue.  While
 you shouldn't expect memcache to be a reliable store, the miss
 percentage should not double when you add another server.

 --
    Les Mikesell
     lesmikes...@gmail.com


Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-12 Thread Ren
I believe most of this is covered in

http://dormando.livejournal.com/495593.html

Jared

On Mar 12, 9:02 pm, martin.grotzke martin.grot...@googlemail.com
wrote:
 Hi,

 I know that this topic is rather burdened, as it was said often enough
 that memcached never was created to be used like a reliable datastore.
 Still, there are users interested in some kind of reliability, users
 that want to store items in memcached and be sure that these items
 can be pulled from memcached as long as they are not expired.

 I read the following on memcached's memory management:
   Memcached has two separate memory management strategies:
 - On read, if a key is past its expiry time, return NOT FOUND.
 - On write, choose an appropriate slab class for the value; if it's
 full, replace the oldest-used (read or written) key with the new one.
 Note that the second strategy, LRU eviction, does not check the expiry
 time at all. (from peeping into memcached, [1])

 I also found Slabs, Pages, Chunks and Memcached ([2]) a really good
 explanation of memcached's memory model.

 Having this as background, I wonder how it would be possible to get
 more predictability regarding the availability of cached items.

 Asume that I want to store sessions in memcached. How could I run
 memcached so that I can be sure that my sessions are available in
 memcached when I try to get them? Additionally asume, that I expect
 to have 1000 sessions at a time in max in one memcached node (and that
 I can control/limit this in my application). Another asumption is,
 that sessions are between 50kb and 200 kb.

 The question now is how do I have to run memcached to store these
 sessions in memcached?

 Would it be an option to run memcached with a minimum slab size of
 200kb. Then I would know that for each session a 200kb chunk is used.
 When I have 1000 session between 50kb and 200kb this should take 200mb
 in total. When I run memcached with more than 200mb memory, could I be
 sure, that the sessions are alive as long as they are not expired?

 What do you think about this?

 Cheers,
 Martin

 [1]http://blog.evanweaver.com/articles/2009/04/20/peeping-into-memcached/
 [2]http://www.mikeperham.com/2009/06/22/slabs-pages-chunks-and-memcached/


Re: reusing sockets between Perl memcached objects

2010-03-12 Thread Jonathan Swartz
The way I would solve this would be to change  
Cache::Memcached::libmemcached to *contain* a Memcached::libmemcached  
object (go from ISA to HAS-A), and forward all methods appropriately.  
Then multiple C::M::l objects could share the same M::l object.


Other than the ref() of the object, everything else in the API should  
remain the same.


I've written to dmaki asking if he'd be willing to accept such a patch.

Of course it would be even nicer if the C libmemcached handled this,  
but I've got a lot less ability to affect that. :)
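
For illustration, a bare-bones version of that HAS-A delegation might
look like this (the class name and constructor are hypothetical; only
the forwarding idea matters):

    package My::Memd::Wrapper;
    use strict;
    use warnings;

    sub new {
        my ( $class, $inner ) = @_;   # $inner: one shared Memcached::libmemcached object
        return bless { inner => $inner }, $class;
    }

    # Forward every method we don't define ourselves to the contained object.
    our $AUTOLOAD;
    sub AUTOLOAD {
        my $self = shift;
        ( my $method = $AUTOLOAD ) =~ s/.*:://;
        return if $method eq 'DESTROY';
        return $self->{inner}->$method(@_);
    }

    1;

Several wrappers can then hold the same $inner handle, so the underlying
sockets are opened only once.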


Jon

On Mar 12, 2010, at 2:56 AM, dormando wrote:

We discovered this as well a few months ago... I don't think we found a
workaround :(

Maybe someone else has?

On Fri, 12 Mar 2010, Jonathan Swartz wrote:


We make liberal use of the namespace option to
Cache::Memcached::libmemcached, creating one object for each of our
dozens of namespaces.

However I recently discovered that it will create new socket
connections for each object. i.e.:

   #!/usr/bin/perl -w
   use Cache::Memcached;
   use strict;

   my ( $class, $iter ) = @ARGV or die "usage: $0 class iter";
   eval "require $class";

   my @memd;
   for my $i ( 0 .. $iter ) {
       $memd[$i] = $class->new( { servers => ['localhost:11211'] } );
       $memd[$i]->get('foo');
   }

   my $memd = new Cache::Memcached { servers => ['localhost:11211'] };
   my $stats = $memd->stats;
   print "curr_connections: " . $stats->{total}->{curr_connections} . "\n";

swartz ./sockets.pl Cache::Memcached 50
curr_connections: 10
swartz ./sockets.pl Cache::Memcached::Fast 50
curr_connections: 61
swartz ./sockets.pl Cache::Memcached::libmemcached 50
curr_connections: 61

I don't know why curr_connections starts at 10. In any case,
curr_connections will not grow with the number of Cache::Memcached
objects created, but will grow with the number of
Cache::Memcached::Fast or Cache::Memcached::libmemcached objects
created.

I was a little surprised that libmemcached, at least, didn't have this
feature. Just wondering if I'm doing something wrong.

If I want to keep using libmemcached, I guess I will have to create
just one option and override its namespace each time I use it.

Jon





Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-12 Thread Martin Grotzke
With my question "how do I have to run memcached to store these sessions
in memcached" I was not referring to a general approach; I was referring
to the concrete memcached options to use (e.g. -n 204800 for 200 kB slabs).

The post you mentioned is very high level and does not answer my question.
For that one has to go into a little more depth.

Thanx & cheers,
Martin



On Fri, Mar 12, 2010 at 10:27 PM, Ren jared.willi...@ntlworld.com wrote:

 I believe most of this is covered in

 http://dormando.livejournal.com/495593.html

 Jared

 On Mar 12, 9:02 pm, martin.grotzke martin.grot...@googlemail.com
 wrote:
  Hi,
 
  I know that this topic is rather burdened, as it was said often enough
  that memcached never was created to be used like a reliable datastore.
  Still, there are users interested in some kind of reliability, users
  that want to store items in memcached and be sure that these items
  can be pulled from memcached as long as they are not expired.
 
  I read the following on memcached's memory management:
Memcached has two separate memory management strategies:
  - On read, if a key is past its expiry time, return NOT FOUND.
  - On write, choose an appropriate slab class for the value; if it's
  full, replace the oldest-used (read or written) key with the new one.
  Note that the second strategy, LRU eviction, does not check the expiry
  time at all. (from peeping into memcached, [1])
 
  I also found Slabs, Pages, Chunks and Memcached ([2]) a really good
  explanation of memcached's memory model.
 
  Having this as background, I wonder how it would be possible to get
  more predictability regarding the availability of cached items.
 
  Asume that I want to store sessions in memcached. How could I run
  memcached so that I can be sure that my sessions are available in
  memcached when I try to get them? Additionally asume, that I expect
  to have 1000 sessions at a time in max in one memcached node (and that
  I can control/limit this in my application). Another asumption is,
  that sessions are between 50kb and 200 kb.
 
  The question now is how do I have to run memcached to store these
  sessions in memcached?
 
  Would it be an option to run memcached with a minimum slab size of
  200kb. Then I would know that for each session a 200kb chunk is used.
  When I have 1000 session between 50kb and 200kb this should take 200mb
  in total. When I run memcached with more than 200mb memory, could I be
  sure, that the sessions are alive as long as they are not expired?
 
  What do you think about this?
 
  Cheers,
  Martin
 
  [1]
 http://blog.evanweaver.com/articles/2009/04/20/peeping-into-memcached/
  [2]
 http://www.mikeperham.com/2009/06/22/slabs-pages-chunks-and-memcached/




-- 
Martin Grotzke
http://www.javakaffee.de/blog/


Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-12 Thread Brian Moon
The resounding answer you will get from this list is: You don't, can't 
and won't with memcached. That is not its job. It will never be its job. 
Perhaps when storage engines are done, maybe. But then you won't get the 
performance that you get with memcached. There is a trade off for 
performance.


Brian.

http://brian.moonspot.net/

On 3/12/10 3:02 PM, martin.grotzke wrote:

Hi,

I know that this topic is rather burdened, as it was said often enough
that memcached never was created to be used like a reliable datastore.
Still, there are users interested in some kind of reliability, users
that want to store items in memcached and be sure that these items
can be pulled from memcached as long as they are not expired.

I read the following on memcached's memory management:
   Memcached has two separate memory management strategies:
- On read, if a key is past its expiry time, return NOT FOUND.
- On write, choose an appropriate slab class for the value; if it's
full, replace the oldest-used (read or written) key with the new one.
Note that the second strategy, LRU eviction, does not check the expiry
time at all. (from peeping into memcached, [1])

I also found Slabs, Pages, Chunks and Memcached ([2]) a really good
explanation of memcached's memory model.

Having this as background, I wonder how it would be possible to get
more predictability regarding the availability of cached items.

Asume that I want to store sessions in memcached. How could I run
memcached so that I can be sure that my sessions are available in
memcached when I try to get them? Additionally asume, that I expect
to have 1000 sessions at a time in max in one memcached node (and that
I can control/limit this in my application). Another asumption is,
that sessions are between 50kb and 200 kb.

The question now is how do I have to run memcached to store these
sessions in memcached?

Would it be an option to run memcached with a minimum slab size of
200kb. Then I would know that for each session a 200kb chunk is used.
When I have 1000 session between 50kb and 200kb this should take 200mb
in total. When I run memcached with more than 200mb memory, could I be
sure, that the sessions are alive as long as they are not expired?

What do you think about this?

Cheers,
Martin


[1] http://blog.evanweaver.com/articles/2009/04/20/peeping-into-memcached/
[2] http://www.mikeperham.com/2009/06/22/slabs-pages-chunks-and-memcached/


Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-12 Thread Les Mikesell

On 3/12/2010 5:10 PM, Martin Grotzke wrote:

With my question how do I have to run memcached to 'store' these
sessions in memcached I was not referring to a general approach, but I
was referring to the concrete memcached options (e.g. -n 204800 for
200kb slabs) to use.

The post you mentioned is very high level and does not answer my
question. For this you should go into a little more depth.


Some details here:
http://dev.mysql.com/doc/refman/5.0/en/ha-memcached-using-memory.html
But it's still not very clear how much extra you need to make sure that 
the hash to a server/slab will always find free space instead of 
evicting something even though space is available elsewhere.


--
  Les Mikesell
   lesmikes...@gmail.com


Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-12 Thread Martin Grotzke
Hi Brian,

you're making a very clear point. However it would be nice if you'd provide
concrete answers to concrete questions. I want to get a better understanding
of memcached's memory model and I'm thankful for any help I'm getting here
on this list. If my intro was not supporting this please forgive me...

Cheers,
Martin


On Sat, Mar 13, 2010 at 12:27 AM, Brian Moon br...@moonspot.net wrote:

 The resounding answer you will get from this list is: You don't, can't and
 won't with memcached. That is not its job. It will never be its job. Perhaps
 when storage engines are done, maybe. But then you won't get the performance
 that you get with memcached. There is a trade off for performance.

 Brian.
 
 http://brian.moonspot.net/


 On 3/12/10 3:02 PM, martin.grotzke wrote:

 Hi,

 I know that this topic is rather burdened, as it was said often enough
 that memcached never was created to be used like a reliable datastore.
 Still, there are users interested in some kind of reliability, users
 that want to store items in memcached and be sure that these items
 can be pulled from memcached as long as they are not expired.

 I read the following on memcached's memory management:
   Memcached has two separate memory management strategies:
 - On read, if a key is past its expiry time, return NOT FOUND.
 - On write, choose an appropriate slab class for the value; if it's
 full, replace the oldest-used (read or written) key with the new one.
 Note that the second strategy, LRU eviction, does not check the expiry
 time at all. (from peeping into memcached, [1])

 I also found Slabs, Pages, Chunks and Memcached ([2]) a really good
 explanation of memcached's memory model.

 Having this as background, I wonder how it would be possible to get
 more predictability regarding the availability of cached items.

 Asume that I want to store sessions in memcached. How could I run
 memcached so that I can be sure that my sessions are available in
 memcached when I try to get them? Additionally asume, that I expect
 to have 1000 sessions at a time in max in one memcached node (and that
 I can control/limit this in my application). Another asumption is,
 that sessions are between 50kb and 200 kb.

 The question now is how do I have to run memcached to store these
 sessions in memcached?

 Would it be an option to run memcached with a minimum slab size of
 200kb. Then I would know that for each session a 200kb chunk is used.
 When I have 1000 session between 50kb and 200kb this should take 200mb
 in total. When I run memcached with more than 200mb memory, could I be
 sure, that the sessions are alive as long as they are not expired?

 What do you think about this?

 Cheers,
 Martin


 [1]
 http://blog.evanweaver.com/articles/2009/04/20/peeping-into-memcached/
 [2]
 http://www.mikeperham.com/2009/06/22/slabs-pages-chunks-and-memcached/




-- 
Martin Grotzke
http://www.javakaffee.de/blog/


Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-12 Thread Marc Bollinger
Part of the disconnect is that "how do I have to run memcached to
'store' these sessions in memcached" is not a concrete question. It's
wibbly wobbly at best to try to achieve this behavior, and "you
can't" _is_ concrete in that there is no way to do this in a
mathematically provable way. The best you're going to get is something
that anecdotally works, contingent on roughly homogeneous object sizes
and on allocating enough memory to memcached. The scheme you described
above (maybe tweak the growth factor downward to taste?) will probably
work, but it assumes a limit of 1000 users, which is unrealistic for
the scale that memcached was really designed for in the first place.

- Marc

On Fri, Mar 12, 2010 at 3:56 PM, Martin Grotzke
martin.grot...@googlemail.com wrote:
 Hi Brian,
 you're making a very clear point. However it would be nice if you'd provide
 concrete answers to concrete questions. I want to get a better understanding
 of memcached's memory model and I'm thankful for any help I'm getting here
 on this list. If my intro was not supporting this please forgive me...
 Cheers,
 Martin

 On Sat, Mar 13, 2010 at 12:27 AM, Brian Moon br...@moonspot.net wrote:

 The resounding answer you will get from this list is: You don't, can't and
 won't with memcached. That is not its job. It will never be its job. Perhaps
 when storage engines are done, maybe. But then you won't get the performance
 that you get with memcached. There is a trade off for performance.

 Brian.
 
 http://brian.moonspot.net/

 On 3/12/10 3:02 PM, martin.grotzke wrote:

 Hi,

 I know that this topic is rather burdened, as it was said often enough
 that memcached never was created to be used like a reliable datastore.
 Still, there are users interested in some kind of reliability, users
 that want to store items in memcached and be sure that these items
 can be pulled from memcached as long as they are not expired.

 I read the following on memcached's memory management:
   Memcached has two separate memory management strategies:
 - On read, if a key is past its expiry time, return NOT FOUND.
 - On write, choose an appropriate slab class for the value; if it's
 full, replace the oldest-used (read or written) key with the new one.
 Note that the second strategy, LRU eviction, does not check the expiry
 time at all. (from peeping into memcached, [1])

 I also found Slabs, Pages, Chunks and Memcached ([2]) a really good
 explanation of memcached's memory model.

 Having this as background, I wonder how it would be possible to get
 more predictability regarding the availability of cached items.

 Asume that I want to store sessions in memcached. How could I run
 memcached so that I can be sure that my sessions are available in
 memcached when I try to get them? Additionally asume, that I expect
 to have 1000 sessions at a time in max in one memcached node (and that
 I can control/limit this in my application). Another asumption is,
 that sessions are between 50kb and 200 kb.

 The question now is how do I have to run memcached to store these
 sessions in memcached?

 Would it be an option to run memcached with a minimum slab size of
 200kb. Then I would know that for each session a 200kb chunk is used.
 When I have 1000 session between 50kb and 200kb this should take 200mb
 in total. When I run memcached with more than 200mb memory, could I be
 sure, that the sessions are alive as long as they are not expired?

 What do you think about this?

 Cheers,
 Martin


 [1]
 http://blog.evanweaver.com/articles/2009/04/20/peeping-into-memcached/
 [2]
 http://www.mikeperham.com/2009/06/22/slabs-pages-chunks-and-memcached/



 --
 Martin Grotzke
 http://www.javakaffee.de/blog/



Re: objects deleted from cache after 20min

2010-03-12 Thread Matt Ingenthron

Hi,

Alexandre Ladeira wrote:

I'm using memcached to store these data:
- user object - an java object consisting of username, password,
city...;
- user status info;
- counter,

so for every user login, I will make three memcached put operation.
After making 10.000 logins, I have these stats via memcached stats:
bytes - ~1Mb
counter - 9980, as this is an stress test, the server doesn't reply to
every request.
  
The problem is that after 20 min, memcached deleted all login info -

counter decresed to 0, even if I set the expire time (example: 3600 or
60*60*24*29 or 0). Have you ever faced a problem like that?
  


I don't think we've had any reports of things disappearing without any 
load going on.  What are you referring to with counter decreasing to 
0?  Are you referring to the curr_items or total_items statistics?


- Matt



Re: Memcache as session server with high cache miss?

2010-03-12 Thread dormando
 Here is our current setup:
 webserver1 (also runs session memcache server)
 webserver2 (also runs session memcache server)
 database (specialized memcache storage for data caching)

 We are not really a high loaded site, at peak time only about 1500
 users online together. Network is not really saturated, as not much
 data being transferred I believe.

 Here is the phpinfo() part for sessions:
 session.auto_start              Off       Off
 session.bug_compat_42           On        On
 session.bug_compat_warn         On        On
 session.cache_expire            180       180
 session.cache_limiter           nocache   nocache
 session.cookie_domain           no value  no value
 session.cookie_httponly         Off       Off
 session.cookie_lifetime         0         0
 session.cookie_path             /         /
 session.cookie_secure           Off       Off
 session.entropy_file            no value  no value
 session.entropy_length          0         0
 session.gc_divisor              100       100
 session.gc_maxlifetime          1440      1440
 session.gc_probability          0         0
 session.hash_bits_per_character 4         4
 session.hash_function           0         0
 session.name                    PHPSESSID PHPSESSID
 session.referer_check           no value  no value
 session.save_handler            memcache  memcache
 session.save_path               tcp://172.23.111.11:11211,tcp://172.23.111.12:11211
                                 tcp://172.23.111.11:11211,tcp://172.23.111.12:11211
 session.serialize_handler       php       php
 session.use_cookies             On        On
 session.use_only_cookies        Off       Off
 session.use_trans_sid           0         0

 And yes, we use PECL memcache extension.

Can you paste the output of 'stats' against both of your memcached
servers? Is your configuration identical on both servers? How have you
been calculating the miss rate?


Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-12 Thread Martin Grotzke
Hi Les,

On Sat, Mar 13, 2010 at 12:29 AM, Les Mikesell lesmikes...@gmail.comwrote:

 On 3/12/2010 5:10 PM, Martin Grotzke wrote:

 With my question how do I have to run memcached to 'store' these
 sessions in memcached I was not referring to a general approach, but I
 was referring to the concrete memcached options (e.g. -n 204800 for
 200kb slabs) to use.

 The post you mentioned is very high level and does not answer my
 question. For this you should go into a little more depth.


 Some details here:
 http://dev.mysql.com/doc/refman/5.0/en/ha-memcached-using-memory.html

Thanks for this link. Some details in the text are confusing to me. It says:

When you start to store data into the cache, memcached does not allocate the
 memory for the data on an item by item basis. Instead, a slab allocation is
 used to optimize memory usage and prevent memory fragmentation when
 information expires from the cache.


 With slab allocation, memory is reserved in blocks of 1MB. The slab is
 divided up into a number of blocks of equal size.

Ok, blocks of equal size, all blocks have 1 MB (as said in the sentence
before).

When you try to store a value into the cache, memcached checks the size of
 the value that you are adding to the cache and determines which slab
 contains the right size allocation for the item. If a slab with the item
 size already exists, the item is written to the block within the slab.

"written to the block within the slab" sounds as if there's one block per
slab?



 If the new item is bigger than the size of any existing blocks,

I thought all blocks are 1 MB in size? Should this say "of any existing
slab"?


 then a new slab is created, divided up into blocks of a suitable size.

Again, I thought blocks are 1 MB, so what is "a suitable size" here?


 If an existing slab with the right block size already exists,

Confusing again.


 but there are no free blocks, a new slab is created.


In the second part of this documentation, the terms "page" and "chunk" are
used, but not related to "block"; "block" is not used at all in the second
part. Can you clarify the meaning of "block" in this context and relate it
to slab, page and chunk?

Btw, I found "Slabs, Pages, Chunks and Memcached" ([1]) really well written
and easy to understand. Would you say this explanation is complete?




 But it's still not very clear how much extra you need to make sure that the
 hash to a server/slab will always find free space instead of evicting
 something even though space is available elsewhere.

What do you mean by "hash to a server/slab"?
I'm selecting the memcached node an item goes to manually, btw, without
hashing.
The selected memcached node is stored in a cookie, so that I know where to
get my session from.
Btw: my sessions are stored only for backup in memcached, they're still kept
in local memory for normal operations (see [2] for more details).

Regarding the extra space: assume we have a minimum space allocated for
key+value+flags of 1 MB.
Then I have only a single slab class and each chunk is going to take 1 MB. So
I'd know that I could store total memory / 1 MB items (an item here being
key+value+flags) in memcached (e.g. with 1 GB, I could store 1000 items).
Is there something missing?

Thanx & cheers,
Martin


[1] http://www.mikeperham.com/2009/06/22/slabs-pages-chunks-and-memcached/
[2] http://code.google.com/p/memcached-session-manager/


 --
  Les Mikesell
   lesmikes...@gmail.com




-- 
Martin Grotzke
http://www.javakaffee.de/blog/


Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-12 Thread Martin Grotzke
I'm trying to get a better understanding of how memcached works. I'm
starting with a simple example here to see how this could be handled. The
numbers are just examples, if I didn't mention that already. Once this
simple example is understood, more complexity can be added and the numbers
changed. Still, I want to start simple and bottom-up instead of with a
full-blown you-can't-handle-this-at-all example. And I don't expect
mathematically exact numbers; I just want to get an idea from this.

Back to the topic: I created the memcached-session-manager (a Tomcat session
manager keeping sessions in memcached just for backup / session failover;
they're still stored in memory for normal operations; see [1]) and I want to
find out how memcached should be tuned to provide the best results. Of
course this will be application specific and require some knowledge of
memcached. Still, I want to be able to give advice. And in the end it would
be totally OK for me if the result were something like "use the
max possible session size as the minimum slab size, multiply this by the
max number of sessions, and this gives you 10% of the memory you need to give
memcached".

Thanks for your help,
cheers,
Martin


[1] http://code.google.com/p/memcached-session-manager/



On Sat, Mar 13, 2010 at 1:07 AM, Marc Bollinger mbollin...@gmail.comwrote:

 Part of the disconnect is that, how do I have to run memcached to
 'store' these sessions in memcached, is not a concrete question. It's
 wibbly wobbly at best to try and achieve this behavior, and, You
 can't, _is_ concrete in that there is no way to do this in a
 mathematically provable way. The best you're going to get is something
 that anecdotally works, contingent on the existence of roughly
 homogeneous object sizes, and that you're allocating enough memory to
 memcached. The scheme you described above (maybe tweak the growth
 factor downward to taste?) will probably work, but that assumes a
 limit of 1000 users, which is unrealistic for the scale that memcached
 was really designed for in the first place.

 - Marc

 On Fri, Mar 12, 2010 at 3:56 PM, Martin Grotzke
 martin.grot...@googlemail.com wrote:
  Hi Brian,
  you're making a very clear point. However it would be nice if you'd
 provide
  concrete answers to concrete questions. I want to get a better
 understanding
  of memcached's memory model and I'm thankful for any help I'm getting
 here
  on this list. If my intro was not supporting this please forgive me...
  Cheers,
  Martin
 
  On Sat, Mar 13, 2010 at 12:27 AM, Brian Moon br...@moonspot.net wrote:
 
  The resounding answer you will get from this list is: You don't, can't
 and
  won't with memcached. That is not its job. It will never be its job.
 Perhaps
  when storage engines are done, maybe. But then you won't get the
 performance
  that you get with memcached. There is a trade off for performance.
 
  Brian.
  
  http://brian.moonspot.net/
 
  On 3/12/10 3:02 PM, martin.grotzke wrote:
 
  Hi,
 
  I know that this topic is rather burdened, as it was said often enough
  that memcached never was created to be used like a reliable datastore.
  Still, there are users interested in some kind of reliability, users
  that want to store items in memcached and be sure that these items
  can be pulled from memcached as long as they are not expired.
 
  I read the following on memcached's memory management:
Memcached has two separate memory management strategies:
  - On read, if a key is past its expiry time, return NOT FOUND.
  - On write, choose an appropriate slab class for the value; if it's
  full, replace the oldest-used (read or written) key with the new one.
  Note that the second strategy, LRU eviction, does not check the expiry
  time at all. (from peeping into memcached, [1])
 
  I also found Slabs, Pages, Chunks and Memcached ([2]) a really good
  explanation of memcached's memory model.
 
  Having this as background, I wonder how it would be possible to get
  more predictability regarding the availability of cached items.
 
  Asume that I want to store sessions in memcached. How could I run
  memcached so that I can be sure that my sessions are available in
  memcached when I try to get them? Additionally asume, that I expect
  to have 1000 sessions at a time in max in one memcached node (and that
  I can control/limit this in my application). Another asumption is,
  that sessions are between 50kb and 200 kb.
 
  The question now is how do I have to run memcached to store these
  sessions in memcached?
 
  Would it be an option to run memcached with a minimum slab size of
  200kb. Then I would know that for each session a 200kb chunk is used.
  When I have 1000 session between 50kb and 200kb this should take 200mb
  in total. When I run memcached with more than 200mb memory, could I be
  sure, that the sessions are alive as long as they are not expired?
 
  What do you think about this?
 
  Cheers,
  Martin
 
 
  [1]
  

Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-12 Thread dormando
 Some details here:
 http://dev.mysql.com/doc/refman/5.0/en/ha-memcached-using-memory.html

 thanx for this link. Some details in the text are confusing to me. It says:

   When you start to store data into the cache, memcached does not 
 allocate the memory for the data on an item by item basis. Instead, a slab 
 allocation is used to optimize memory usage and
   prevent memory fragmentation when information expires from the cache.


   With slab allocation, memory is reserved in blocks of 1MB. The slab is 
 divided up into a number of blocks of equal size.

 Ok, blocks of equal size, all blocks have 1 MB (as said in the sentence 
 before).

   When you try to store a value into the cache, memcached checks the size 
 of the value that you are adding to the cache and determines which slab 
 contains the right size allocation for the item.
   If a slab with the item size already exists, the item is written to the 
 block within the slab.

 written to the block within the slab sounds, as if there's one block for 
 one slab?
  


   If the new item is bigger than the size of any existing blocks,

 I thought all blocks are 1MB in their size? Should this be of any existing 
 slab?
  
   then a new slab is created, divided up into blocks of a suitable size.

 Again, I thought blocks are 1MB, then what is a suitable size here?
  
   If an existing slab with the right block size already exists,

 Confusing again.
  
   but there are no free blocks, a new slab is created.


 In the second part of this documentation, the terms page and chunk are 
 used, but not related to block. block is not used at all in the second 
 part. Can you clarify the meaning of block in this
 context and create a link to slab, page and chunk?

 Btw, I found Slabs, Pages, Chunks and Memcached ([1]) really well written 
 and easy to understand. Would say this explanation is complete?

The memory allocation is a bit more subtle... but it's hard to explain and
doesn't really affect anyone.

Urr... I'll give it a shot.

./memcached -m 128
^ means memcached can use up to 128 megabytes of memory for item storage.

Now let's say you store items that will fit in a slab class of 128, which
means 128 bytes for the key + flags + CAS + value.

The maximum item size is (by default) 1 megabyte. This ends up being the
ceiling for how big a slab page can be.

8192 128 byte items will fit inside the slab page limit of 1mb.

So now a slab page of 1mb is allocated, and split up into 8192 chunks.
Each chunk can store a single item.

Slabs grow at a default factor of 1.20 or whatever -f is. So the next slab
class after 128 bytes will be 154 bytes (rounding up). (note I don't
actually recall offhand if it rounds up or down :P)

154 bytes is not evenly divisible into 1048576. You end up with
6808.93 chunks. So instead memcached allocates a page of 1048432 bytes,
providing 6,808 chunks.

This is slightly smaller than 1mb! So as your chunks grow, they
don't allocate exactly a megabyte from the main pool of 128 megabytes
and then split that into chunks. Memcached attempts to leave the little
scraps of memory in the main pool in hopes that they'll add up to an extra
page down the line, rather than be thrown out as overhead if a slab class
were to allocate a full 1mb page.

So in memcached, a slab page is however many chunks of a given size will
fit into 1mb, and a chunk is a single-item slot within that page. The
slab growth factor determines how many slab classes exist.
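
To make the arithmetic above concrete, here is a rough throwaway script
that walks the growth factor the same way (the starting chunk size and the
round-up rule are simplifying assumptions, not memcached's exact slabs.c
logic):

    #!/usr/bin/perl
    # Print assumed slab class sizes, chunks per page, and page size,
    # using a 1 MB ceiling and the default 1.20 growth factor.
    use strict;
    use warnings;
    use POSIX qw(ceil);

    my $factor   = 1.20;            # -f, default growth factor
    my $page_max = 1024 * 1024;     # 1 MB page / item size ceiling
    my $chunk    = 128;             # example starting chunk size

    while ( $chunk <= $page_max ) {
        my $chunks_per_page = int( $page_max / $chunk );
        my $page_bytes      = $chunks_per_page * $chunk;  # e.g. 6808 * 154 = 1048432
        printf "chunk %7d bytes -> %5d chunks, page %7d bytes\n",
            $chunk, $chunks_per_page, $page_bytes;
        $chunk = ceil( $chunk * $factor );
    }

For the 154-byte class this prints 6808 chunks and a 1048432-byte page,
matching the numbers above.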

I'm gonna turn this into a wiki entry in a few days... been slowly
whittling away at revising the whole thing.

-Dormando


Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-12 Thread dormando
 The memory allocation is a bit more subtle... but it's hard to explain and
 doesn't really affect anyone.

 Urr... I'll give it a shot.

 ./memcached -m 128
 ^ means memcached can use up to 128 megabytes of memory for item storage.

 Now lets say you store items that will fit in a slab class of 128, which
 means 128 bytes for the key + flags + CAS + value.

 The maximum item size is (by default) 1 megabyte. This ends up being the
 ceiling for how big a slab page can be.

 8192 128 byte items will fit inside the slab page limit of 1mb.

 So now a slab page of 1mb is allocated, and split up into 8192 chunks.
 Each chunk can store a single item.

 Slabs grow at a default factor of 1.20 or whatever -f is. So the next slab
 class after 128 bytes will be 154 bytes (rounding up). (note I don't
 actually recall offhand if it rounds up or down :P)

 154 bytes is not evenly divisible into 1048576. You end up with
 6808.93 chunks. So instead memcached allocates a page of 1048432 bytes,
 providing 6,808 chunks.

 This is slightly smaller than 1mb! So as your chunks grow, they
 don't allocate exactly a megabyte from the main pool of 128 megabytes,
 then split that into chunks. Memcached attempts to leave the little scraps
 of memory in the main pool in hopes that they'll add up to an extra page
 down the line, rather than be thrown out as overhead when if a slab class
 were to allocate a 1mb page.

 So in memcached, a slab page is however many chunks of this size will
 fit into 1mb, a chunk is how many chunks will fit into that page. The
 slab growth factor determines how many slab classes exist.

 I'm gonna turn this into a wiki entry in a few days... been slowly
 whittling away at revising the whole thing.

To be most accurate, it is how many chunks will fit into the max item
size, which by default is 1mb. The page size being == to the max item
size is just due to how the slabbing algorithm works. It creates slab
classes between a minimum and a maximum. So the maximum ends up being the
item size limit.

I can see this changing in the future, where we have a max item size of
whatever, and a page size of 1mb, then larger items are made up of
concatenated smaller pages or individual malloc's.


Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-12 Thread Les Mikesell

dormando wrote:



To be most accurate, it is how many chunks will fit into the max item
size, which by default is 1mb. The page size being == to the max item
size is just due to how the slabbing algorithm works. It creates slab
classes between a minimum and a maximum. So the maximum ends up being the
item size limit.

I can see this changing in the future, where we have a max item size of
whatever, and a page size of 1mb, then larger items are made up of
concatenated smaller pages or individual malloc's.


So what happens when a key is repeatedly written and it grows a bit each time? 
I had trouble with that long ago with a berkeleydb version that I think was 
eventually fixed.  As things work now, if the new storage has to move to a 
larger block, is the old space immediately freed?


--
  Les Mikesell
   lesmikes...@gmail.com





Re: How to get more predictable caching behavior - how to store sessions in memcached

2010-03-12 Thread Dustin

On Mar 12, 6:10 pm, Les Mikesell lesmikes...@gmail.com wrote:
 So what happens when a key is repeatedly written and it grows a bit each time?
 I had trouble with that long ago with a berkeleydb version that I think was
 eventually fixed.  As things work now, if the new storage has to move to a
 larger block, is the old space immediately freed?

  Every write is to a new block, then the previous is made available
for the next write.  Size isn't relevant for this case.
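
A small way to watch that happen (assumes a local memcached on 11211; the
key name is arbitrary):

    #!/usr/bin/perl
    # Rewrite one key with a value that grows each time. Each set stores
    # a fresh item in whichever slab class now fits it, and the old copy's
    # chunk goes back on that class's free list.
    use strict;
    use warnings;
    use Cache::Memcached;

    my $memd = Cache::Memcached->new( { servers => ['localhost:11211'] } );

    for my $kb ( 1, 2, 4, 8, 16 ) {
        $memd->set( 'growing_key', 'x' x ( $kb * 1024 ) );
        my $back = $memd->get('growing_key');
        printf "stored %2d KB, read back %d bytes\n",
            $kb, defined $back ? length $back : 0;
    }

Watching "stats slabs" between iterations should show the item moving
between slab classes as it grows.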


Re: Memcache as session server with high cache miss?

2010-03-12 Thread TheOnly92
I'm retrieving statistics via the Memcache::extendedStats function; here
are the basics:

Session Server 1
Version               1.2.2
Uptime                398,954 sec
Cache Hits            2,065,061
Cache Misses          987,726 (47.83%)
Current Items         381,928
Data Read             4,318,055.02 KB
Data Written          2,011,004.09 KB
Current Storage       100,688.96 KB
Maximum Storage       256.00 MB
Current Connections   9
Total Connections     5,278,414

Session Server 2
Version               1.2.2
Uptime                398,943 sec
Cache Hits            2,225,697
Cache Misses          987,733 (44.38%)
Current Items         381,919
Data Read             4,323,893.05 KB
Data Written          2,159,309.95 KB
Current Storage       100,685.52 KB
Maximum Storage       256.00 MB
Current Connections   11
Total Connections     5,278,282

We are absolutely sure that both webservers are able to access the
memcache server instances. We selected memcache because it was easy to
configure and set up without any source code changes required, not
because we think it is absolutely reliable. We just need it to work
most of the time, but the current situation is just unacceptable.

On Mar 13, 8:33 am, dormando dorma...@rydia.net wrote:
  Here is our current setup:
  webserver1 (also runs session memcache server)
  webserver2 (also runs session memcache server)
  database (specialized memcache storage for data caching)

  We are not really a high loaded site, at peak time only about 1500
  users online together. Network is not really saturated, as not much
  data being transferred I believe.

  Here is the phpinfo() part for sessions:
  session.auto_start Off     Off
  session.bug_compat_42      On      On
  session.bug_compat_warn    On      On
  session.cache_expire       180     180
  session.cache_limiter      nocache nocache
  session.cookie_domain      no value        no value
  session.cookie_httponly    Off     Off
  session.cookie_lifetime    0       0
  session.cookie_path        /       /
  session.cookie_secure      Off     Off
  session.entropy_file       no value        no value
  session.entropy_length     0       0
  session.gc_divisor 100     100
  session.gc_maxlifetime     1440    1440
  session.gc_probability     0       0
  session.hash_bits_per_character    4       4
  session.hash_function      0       0
  session.name       PHPSESSID       PHPSESSID
  session.referer_check      no value        no value
  session.save_handler       memcache        memcache
  session.save_path  tcp://172.23.111.11:11211,tcp://172.23.111.12:11211
  tcp://172.23.111.11:11211,tcp://172.23.111.12:11211
  session.serialize_handler  php     php
  session.use_cookies        On      On
  session.use_only_cookies   Off     Off
  session.use_trans_sid      0       0

  And yes, we use PECL memcache extension.

 Can you paste the output of 'stats' against both of your memcached
 servers? Is your configuration identical on both servers? How have you
 been calculating the miss rate?


Re: Memcache as session server with high cache miss?

2010-03-12 Thread dormando
Can you telnet to the instances, type 'stats', 'stats items', and 'stats
slabs', then copy/paste all of that into pastebin?

'echo stats | nc host 11211 > stats.txt' works too

Your version is very old... It's missing many statistical counters that
could help us diagnose the problem. The extendedStats output isn't printing
an evictions counter, but I can't remember if that version even had one.

Can you describe your problem in more detail? If I recall:

- User logs in.
- Clicks somewhere. Now they're logged out?
- They click somewhere else, and they're logged in again? Does this mean
they found their original session again, or did your app log them in again?

-Dormando

On Fri, 12 Mar 2010, TheOnly92 wrote:

 I'm retrieving statistics via Memcache::extendedStats function, here
 are the basics:

 Session Server 1
 Version   1.2.2
 Uptime        398,954 sec
 Cache Hits    2,065,061
 Cache Misses  987,726 (47.83%)
 Current Items 381,928
 Data Read 4,318,055.02 KB
 Data Written  2,011,004.09 KB
 Current Storage   100,688.96 KB
 Maximum Storage   256.00 MB
 Current Connections   9
 Total Connections 5,278,414
 Session Server 2
 Version   1.2.2
 Uptime        398,943 sec
 Cache Hits    2,225,697
 Cache Misses  987,733 (44.38%)
 Current Items 381,919
 Data Read 4,323,893.05 KB
 Data Written  2,159,309.95 KB
 Current Storage   100,685.52 KB
 Maximum Storage   256.00 MB
 Current Connections   11
 Total Connections 5,278,282

 We are absolutely sure that both webservers are able to access the
 memcache server instances, we selected memcache because it was an easy
 configuration and setup without any changes of source code required,
 not to think that it is absolutely reliable. We just need to make
 sure that it works most of the time, but current situation is just
 unacceptable.