Re: Tuning memcached on ubuntu x86_64 (2.6.35)

2011-07-21 Thread dormando
> Well, the problem is that memcached will use swap when it runs out of
> resident memory.  When swap space fills up, memcached will crash under
> load.
>
> Yesterday, I had the -m option set to 3700 (or 3700 megabytes), since
> I have a 4GB system.  But, I started getting evictions when the
> dataset reached the size of 3219.9 megabytes.  As I mentioned above, I
> started getting evictions at 3763 megabytes of RSS and 3834 megabytes
> of VSZ.  Is the -m option the size of the dataset or is it the size of
> resident memory?
>
> Today, I increased the -m option to 8000 (or 8000 megabytes) to see
> what would happen.  I only have 3954 megabytes total memory in the
> system.  Now, memcached is filling up the swap space.  I assume that I
> will start getting evictions when the virtual memory is full.
>
> It seems to me that I should avoid touching the swap space, since
> memcached can become unstable when using swap space.  But, last week,
> I got into trouble because I set the -m option close to the total
> available memory on the system, and I guess that I had the value set
> too high, since the swap space filled up and memcached crashed.
> Today, I am trying to duplicate the issue that I saw last week.

-m is the limit of the internal slabber. Your stored data will be some
amount smaller than that due to overhead. As is stated in a lot of places,
there are also a few hundred megs of extra things going on outside of it;
-m isn't a global limit on the process.

You seem to have confused virtual memory and resident memory initially; the
virtual memory doesn't matter at all. Just keep reducing -m until your
system is happy.
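
A quick way to check this on a running instance is to compare the "bytes"
and "limit_maxbytes" stats against the RSS that ps reports. Below is a
minimal sketch over memcached's plain text protocol; the host, port and MB
formatting are illustrative, not taken from this thread.

import socket

def memcached_stats(host='127.0.0.1', port=11211):
    """Send 'stats' over memcached's text protocol and return the values as a dict."""
    sock = socket.create_connection((host, port))
    sock.sendall(b'stats\r\n')
    data = b''
    while not data.endswith(b'END\r\n'):
        data += sock.recv(4096)
    sock.close()
    stats = {}
    for line in data.decode().splitlines():
        if line.startswith('STAT '):
            _, key, value = line.split(' ', 2)
            stats[key] = value
    return stats

stats = memcached_stats()
stored = int(stats['bytes'])           # item data currently held by the slabber
limit = int(stats['limit_maxbytes'])   # the -m limit, in bytes
print('slab limit (-m):  %6.0f MB' % (limit / 2.0 ** 20))
print('stored item data: %6.0f MB' % (stored / 2.0 ** 20))
# The RSS that ps reports will sit above both of these numbers, because the
# hash table, connection buffers and other allocations live outside the slabber.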


Re: Tuning memcached on ubuntu x86_64 (2.6.35)

2011-07-21 Thread David Mitchell
On Jul 21, 1:41 pm, Trond Norbye  wrote:
> Are you running out of virtual memory? ;)
>

Well, the problem is that memcached will use swap when it runs out of
resident memory.  When swap space fills up, memcached will crash under
load.

Yesterday, I had the -m option set to 3700 (or 3700 megabytes), since
I have a 4GB system.  But, I started getting evictions when the
dataset reached the size of 3219.9 megabytes.  As I mentioned above, I
started getting evictions at 3763 megabytes of RSS and 3834 megabytes
of VSZ.  Is the -m option the size of the dataset or is it the size of
resident memory?

Today, I increased the -m option to 8000 (or 8000 megabytes) to see
what would happen.  I only have 3954 megabytes total memory in the
system.  Now, memcached is filling up the swap space.  I assume that I
will start getting evictions when the virtual memory is full.

It seems to me that I should avoid touching the swap space, since
memcached can become unstable when using swap space.  But, last week,
I got into trouble because I set the -m option close to the total
available memory on the system, and I guess that I had the value set
too high, since the swap space filled up and memcached crashed.
Today, I am trying to duplicate the issue that I saw last week.

David


Re: Understanding MemCached

2011-07-21 Thread Les Mikesell

On 7/21/2011 3:58 PM, Organic Spider wrote:

> Hi Brian,
>
> Yes, I have had a good read and still feel a little lost and stupid. What
> triggers each node to have the cache populated?

The client has to populate the data after a get attempt for a key fails.

> How does the data move between each node?

The other clients' gets will succeed until the time-to-live expires.
When that happens, the client making the failing request should pull a
copy of the data from your database and refresh it in memcache.

> Or, is what I am requiring the repcached mod to replicate the data?

As long as your clients are configured with the same list of servers, in
the same order, they will all find the single copy.
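
To make that concrete, here is a minimal cache-aside sketch using the
python-memcached client. The server addresses, key, TTL and load_from_db()
are hypothetical placeholders; every web server runs the same code against
the same ordered server list, so a given key always hashes to the same node.

import memcache

# All clients must be configured with the same server list, in the same
# order, so that a given key always maps to the same memcached node.
SERVERS = ['10.0.0.1:11211', '10.0.0.2:11211', '10.0.0.3:11211']
mc = memcache.Client(SERVERS)

def load_from_db(key):
    """Placeholder for the central MySQL/Perl query described in this thread."""
    raise NotImplementedError

def get_dataset(key, ttl=300):
    """Cache-aside: try memcached first; on a miss, rebuild from the DB and re-set."""
    value = mc.get(key)
    if value is None:                 # miss: never populated, or the TTL expired
        value = load_from_db(key)
        mc.set(key, value, time=ttl)  # refresh so other clients' gets now succeed
    return value

If you genuinely need every node to hold its own local copy rather than one
shared copy across the pool, that is where something like repcached comes in;
plain memcached does not push data between servers.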


--
  Les Mikesell
   lesmikes...@gmail.com


Re: Understanding MemCached

2011-07-21 Thread Organic Spider
Hi Brian,

Yes, I have had a good read and still feel a little lost and stupid. What triggers 
each node to have the cache populated? How does the data move between each 
node?

Or, is what I am requiring the repcached mod to replicate the data?

Again, sorry for the silly questions; I just cannot get my head around it at the 
moment!
-- 
Thanks, Organic Spider | Weaving Open Source Technology
- Original Message - 

From: "Brian Moon"  
To: memcached@googlegroups.com 
Cc: "Organic Spider"  
Sent: Thursday, 21 July, 2011 9:43:22 PM 
Subject: Re: Understanding MemCached 

Required reading: 
http://code.google.com/p/memcached/wiki/TutorialCachingStory 

Brian. 
http://brian.moonspot.net 

On 7/21/11 1:26 PM, Organic Spider wrote: 
> Hello all, 
> 
> Forgive my ignorance as I am trying to get my head around memcached and how 
> it could help with an idea I have. My idea is to bring OSSEC and OpenVAS 
> together and share results between multiple nodes, with the aim of being able 
> to re-write iptables or WAF rules. 
> 
> I would be holding all data in a central MySQL database, with a cron/daemon 
> Perl script that would query it and build datasets. This data would be 
> written into memcached if it does not already exist. Each web/application 
> server would have an instance of memcached running as well, so would the data 
> automatically be shared with the other servers? 
> 
> When memcached data is distributed to other nodes (servers), is there any sort 
> of trigger to show that new data exists, or would one need to use a serial 
> number akin to DNS? 
> 
> Sorry if these are all silly questions! 


Re: Understanding MemCached

2011-07-21 Thread Brian Moon
Required reading: 
http://code.google.com/p/memcached/wiki/TutorialCachingStory


Brian.
http://brian.moonspot.net

On 7/21/11 1:26 PM, Organic Spider wrote:

> Hello all,
>
> Forgive my ignorance as I am trying to get my head around memcached and how it 
> could help with an idea I have. My idea is to bring OSSEC and OpenVAS together 
> and share results between multiple nodes, with the aim of being able to 
> re-write iptables or WAF rules.
>
> I would be holding all data in a central MySQL database, with a cron/daemon Perl 
> script that would query it and build datasets. This data would be written into 
> memcached if it does not already exist. Each web/application server would have 
> an instance of memcached running as well, so would the data automatically be 
> shared with the other servers?
>
> When memcached data is distributed to other nodes (servers), is there any sort 
> of trigger to show that new data exists, or would one need to use a serial 
> number akin to DNS?
>
> Sorry if these are all silly questions!


Understanding MemCached

2011-07-21 Thread Organic Spider
Hello all,

Forgive my ignorance as I am trying to get my head around memcached and how it 
could help with an idea I have. My idea is to bring OSSEC and OpenVAS together and 
share results between multiple nodes, with the aim of being able to re-write 
iptables or WAF rules.

I would be holding all data in a central MySQL database, with a cron/daemon Perl 
script that would query it and build datasets. This data would be written into 
memcached if it does not already exist. Each web/application server would have an 
instance of memcached running as well, so would the data automatically be shared 
with the other servers?

When memcached data is distributed to other nodes (servers), is there any sort of 
trigger to show that new data exists, or would one need to use a serial number 
akin to DNS?

Sorry if these are all silly questions!
-- 
Thanks, Organic Spider | Weaving Open Source Technology


Re: Tuning memcached on ubuntu x86_64 (2.6.35)

2011-07-21 Thread Trond Norbye

On 21. juli 2011, at 19.16, dormando wrote:

>> Is it normal to have a 16 percent virtual memory overhead in memcached
>> on x86_64 Linux? memcached STAT bytes is reporting 3219 megabytes of
>> data, but virtual memory is 16 percent higher at 3834 megabytes. Resident
>> memory is 14 percent higher at 3763 megabytes.
>> 
>> Is there a way to tune linux/memcached to get memcached to consume
>> less virtual memory?
>> 
> 
> Are you using some bizarre VM system where virtual memory actually
> matters? I can start up apps with terabytes of VM "allocated" just fine.
> 
> The overhead in RSS is normal. You lose some memory to buffers, pointers,
> the hash table structure, etc.

Are you running out of virtual memory? ;)

Please note that you may also tune the slab classes if your object sizes don't 
match the default slab classes, causing poor memory utilization...
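
One way to see whether the defaults fit your objects is to look at
"stats slabs": if most items land in a class whose chunk size is much larger
than a typical object, the difference is wasted memory, and the -n (minimum
chunk size) and -f (growth factor) startup options can be adjusted to reduce
it. A rough sketch over the text protocol follows; the host and port are
placeholders.

import socket

def stats_slabs(host='127.0.0.1', port=11211):
    """Fetch 'stats slabs' over memcached's text protocol as (key, value) pairs."""
    sock = socket.create_connection((host, port))
    sock.sendall(b'stats slabs\r\n')
    data = b''
    while not data.endswith(b'END\r\n'):
        data += sock.recv(4096)
    sock.close()
    return [line.split(' ', 2)[1:] for line in data.decode().splitlines()
            if line.startswith('STAT ')]

# Group the per-class counters (keys look like "1:chunk_size") so you can see
# where your items actually land; global counters like active_slabs are skipped.
classes = {}
for key, value in stats_slabs():
    if ':' not in key:
        continue
    clsid, stat = key.split(':', 1)
    classes.setdefault(clsid, {})[stat] = value

for clsid in sorted(classes, key=int):
    c = classes[clsid]
    print('class %3s: chunk_size=%7s used_chunks=%9s' % (
        clsid, c.get('chunk_size', '?'), c.get('used_chunks', '?')))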

Trond



Re: Tuning memcached on ubuntu x86_64 (2.6.35)

2011-07-21 Thread dormando
> Is it normal to have a 16 percent virtual memory overhead in memcached
> on x86_64 Linux? memcached STAT bytes is reporting 3219 megabytes of
> data, but virtual memory is 16 percent higher at 3834 megabytes. Resident
> memory is 14 percent higher at 3763 megabytes.
>
> Is there a way to tune linux/memcached to get memcached to consume
> less virtual memory?
>

Are you using some bizarre VM system where virtual memory actually
matters? I can start up apps with terabytes of VM "allocated" just fine.

The overhead in RSS is normal. You lose some memory to buffers, pointers,
the hash table structure, etc.


Tuning memcached on ubuntu x86_64 (2.6.35)

2011-07-21 Thread David Mitchell
Is it normal to have a 16 percent virtual memory overhead in memcached
on x86_64 Linux? memcached STAT bytes is reporting 3219 megabytes of
data, but virtual memory is 16 percent higher at 3834 megabytes. Resident
memory is 14 percent higher at 3763 megabytes.

Is there a way to tune linux/memcached to get memcached to consume
less virtual memory?

At the moment, my 4GB system is full with 3219 megabytes of data
loaded in memcached.  I am seeing lots of evictions when I try to load
more data.

Below is my configuration and stats.

Ubuntu SMP x86_64 2.6.35-22-server
memcached version 1.4.5-1ubuntu1
libevent-1.4-2 version 1.4.13-stable-1

I am using the default settings for memcached, namely, chunk_size of
48 and 4 threads.

Stack size on my linux system is set to 8192 kilobytes.  Should I
reduce the stack size?

I am using the default slab page size of 1MB.  Should I reduce this
amount?

system: using free
3954 megabytes total memory
3928 megabytes used
26 megabytes free

memcached process: using ps
3763 megabytes RSS
3834 megabytes VSZ
3219 megabytes reported by STAT bytes

settings:
STAT maxbytes 3879731200
STAT maxconns 1024
STAT tcpport 11211
STAT udpport 11211
STAT inter NULL
STAT verbosity 0
STAT oldest 0
STAT evictions on
STAT domain_socket NULL
STAT umask 700
STAT growth_factor 1.25
STAT chunk_size 48
STAT num_threads 4
STAT stat_key_prefix :
STAT detail_enabled no
STAT reqs_per_event 20
STAT cas_enabled yes
STAT tcp_backlog 1024
STAT binding_protocol auto-negotiate
STAT auth_enabled_sasl no
STAT item_size_max 1048576

STAT pid 716
STAT uptime 10271
STAT time 1311199316
STAT version 1.4.5
STAT pointer_size 64
STAT rusage_user 175.14
STAT rusage_system 325.20
STAT curr_connections 11
STAT total_connections 73
STAT connection_structures 30
STAT cmd_get 9137902
STAT cmd_set 9067050
STAT cmd_flush 0
STAT get_hits 337524
STAT get_misses 8800378
STAT delete_misses 0
STAT delete_hits 266674
STAT incr_misses 0
STAT incr_hits 0
STAT decr_misses 0
STAT decr_hits 0
STAT cas_misses 0
STAT cas_hits 0
STAT cas_badval 0
STAT auth_cmds 0
STAT auth_errors 0
STAT bytes_read 3468460589
STAT bytes_written 244250132
STAT limit_maxbytes 3879731200
STAT accepting_conns 1
STAT listen_disabled_num 0
STAT threads 4
STAT conn_yields 0
STAT bytes 3376314121
STAT curr_items 8080800
STAT total_items 9067050
STAT evictions 719576
STAT reclaimed 0
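
For reference, a few ratios worked out directly from the stats above (a quick
sketch; the literal values below are simply copied out of this post):

# Values copied from the stats output above.
bytes_stored   = 3376314121   # STAT bytes
limit_maxbytes = 3879731200   # STAT limit_maxbytes, i.e. -m 3700 in bytes
get_hits       = 337524       # STAT get_hits
get_misses     = 8800378      # STAT get_misses
evictions      = 719576       # STAT evictions

print('slab limit:      %.0f MB' % (limit_maxbytes / 2.0 ** 20))   # ~3700 MB
print('stored data:     %.1f MB' % (bytes_stored / 2.0 ** 20))     # ~3219.9 MB
print('slab limit used: %.0f%%' % (100.0 * bytes_stored / limit_maxbytes))       # ~87%
print('get hit rate:    %.1f%%' % (100.0 * get_hits / (get_hits + get_misses)))  # ~3.7%
print('evictions:       %d' % evictions)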