Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-03 Thread Shweta Agrawal


On Saturday, July 4, 2020 at 9:41:49 AM UTC+5:30, Dormando wrote:
>
> No attachment 
>
> On Fri, 3 Jul 2020, Shweta Agrawal wrote: 
>
> > 
> > Wooo...so quick. :):) 
> > > Correct, close. It actually uses more like 3 512k chunks and then one  
> > > smaller chunk from a different class to fit exactly 1.6MB.  
> > I see. Got it. 
> > 
> > >Can you share snapshots from "stats items" and "stats slabs" for one 
> of  
> > these instances?  
> > 
> > Currently I have a summary of it, sharing the same below. I can get a 
> snapshot by Tuesday, as I need to request it. 
> > 
> > pages takes its value from total_pages in "stats slabs" for each slab class 
> > item_size takes its value from chunk_size in "stats slabs" for each slab class 
> > Used memory is calculated as pages*page size ---> this has to be corrected 
> now. 
> > 
> > 
> > prod_stats.png 
> > 
> > 
> > > 90%+ are perfectly doable. You probably need to look a bit more 
> closely 
> > > into why you're not getting the efficiency you expect. The detailed 
> stats 
> > > output should point to why. I can help with that if it's confusing. 
> > 
> > Great. Will surely ask for your input whenever I have a question. It is 
> really kind of you to offer help.  
> > 
> > > Either the slab rebalancer isn't keeping up or you actually do have 
> 39GB 
> > > of data and your expectations are a bit off. This will also depend 
> on 
> > > the TTLs you're setting and how often/quickly your items change size. 
> > > Also things like your serialization method / compression / key length 
> vs 
> > > data length / etc. 
> > 
> > We have much less data than 39 GB. After facing evictions, the memory has 
> always been kept higher than the expected data size. 
> > TTL is two days or more.  
> > From my observation, item size (data length) is in the range of 300 bytes 
> to 500K after compression. 
> > Key length is in the range of 40-80 bytes. 
> > 
> > Thank you, 
> > Shweta 
> >   
> > On Saturday, July 4, 2020 at 8:38:31 AM UTC+5:30, Dormando wrote: 
> >   Hey, 
> > 
> >   > Putting my understanding to re-confirm: 
> >   > 1) Page size will always be 1MB and we cannot change 
> it. Moreover, it's not required to be changed. 
> > 
> >   Correct. 
> > 
> >   > 2) We can store items larger than 1MB and it is done by 
> combining chunks together. (example: let's say item size: ~1.6MB --> 4 slab 
> >   chunks(512k slab) from 
> >   > 2 pages will be used) 
> > 
> >   Correct, close. It actually uses more like 3 512k chunks and then 
> one 
> >   smaller chunk from a different class to fit exactly 1.6MB. 
> > 
> >   > We use memcache in production and in the past we saw evictions even 
> when free memory was present. Also currently we use cluster with 39GB RAM 
> in 
> >   total to 
> >   > cache data even when data size we expect is ~15GB to avoid 
> eviction of active items. 
> > 
> >   Can you share snapshots from "stats items" and "stats slabs" for 
> one of 
> >   these instances? 
> > 
> >   > But as our data varies in size, it is possible to avoid 
> evictions by tuning parameters: chunk_size, growth_factor, slab_automove. 
> Also I 
> >   believe memcache 
> >   > is efficient and we can reduce cost by reducing memory size for 
> cluster.  
> >   > So I am trying to find the best possible memory size and 
> parameters we can have. So I want to be clear with my understanding and 
> calculations. 
> >   > 
> >   > So while trying different parameters and putting all 
> calculations, I observed that total_pages * item_size_max > physical memory 
> for a 
> >   machine. And from 
> >   > all blogs and docs, it did not match my understanding. But it's 
> clear now. Thanks to you. 
> >   > 
> >   > One last question: From my trials I find that we can achieve 
> ~90% storage efficiency with memcache. (i.e we need 10MB of physical memory 
> to 
> >   store 9MB of 
> >   > data.) Do you recommend any idle memory size in terms of 
> percentage of expected data size? 
> > 
> >   90%+ are perfectly doable. You probably need to look a bit more 
> closely 
> >   into why you're not getting the efficiency you expect. The 
> detailed stats 
> >   output should point to why. I can help with that if it's 
> confusing. 
> > 
> >   Either the slab rebalancer isn't keeping up or you actually do 
> have 39GB 
> >   of data and your expectations are a bit off. This will also 
> depend on 
> >   the TTLs you're setting and how often/quickly your items change 
> size. 
> >   Also things like your serialization method / compression / key 
> length vs 
> >   data length / etc. 
> > 
> >   -Dormando 
> > 
> >   > On Saturday, July 4, 2020 at 12:23:09 AM UTC+5:30, Dormando 
> wrote: 
> >   >   Hey, 
> >   > 
> >   >   Looks like I never updated the manpage. In the past the 
> item size max was 
> >   >   achieved by changing the slab page size, but that hasn't 
> been true 

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-03 Thread dormando
No attachment

On Fri, 3 Jul 2020, Shweta Agrawal wrote:

>
> Wooo...so quick. :):)
> > Correct, close. It actually uses more like 3 512k chunks and then one 
> > smaller chunk from a different class to fit exactly 1.6MB. 
> I see. Got it.
>
> >Can you share snapshots from "stats items" and "stats slabs" for one of 
> these instances? 
>
> Currently I have a summary of it, sharing the same below. I can get a snapshot by 
> Tuesday, as I need to request it.
>
> pages takes its value from total_pages in "stats slabs" for each slab class
> item_size takes its value from chunk_size in "stats slabs" for each slab class
> Used memory is calculated as pages*page size ---> this has to be corrected now.
>
>
> prod_stats.png
>
>
> > 90%+ are perfectly doable. You probably need to look a bit more closely
> > into why you're not getting the efficiency you expect. The detailed stats
> > output should point to why. I can help with that if it's confusing.
>
> Great. Will surely ask for your input whenever I have a question. It is really 
> kind of you to offer help. 
>
> > Either the slab rebalancer isn't keeping up or you actually do have 39GB
> > of data and your expectations are a bit off. This will also depend on
> > the TTLs you're setting and how often/quickly your items change size.
> > Also things like your serialization method / compression / key length vs
> > data length / etc.
>
> We have much less data than 39 GB. After facing evictions, the memory has 
> always been kept higher than the expected data size.
> TTL is two days or more. 
> From my observation, item size (data length) is in the range of 300 bytes to 
> 500K after compression.
> Key length is in the range of 40-80 bytes.
>
> Thank you,
> Shweta
>  
> On Saturday, July 4, 2020 at 8:38:31 AM UTC+5:30, Dormando wrote:
>   Hey,
>
>   > Putting my understanding to re-confirm:
>   > 1) Page size will always be 1MB and we cannot change it. Moreover, 
> it's not required to be changed.
>
>   Correct.
>
>   > 2) We can store items larger than 1MB and it is done by combining 
> chunks together. (example: let's say item size: ~1.6MB --> 4 slab
>   chunks(512k slab) from
>   > 2 pages will be used)
>
>   Correct, close. It actually uses more like 3 512k chunks and then one
>   smaller chunk from a different class to fit exactly 1.6MB.
>
>   > We use memcache in production and in the past we saw evictions even when 
> free memory was present. Also currently we use cluster with 39GB RAM in
>   total to
>   > cache data even when data size we expect is ~15GB to avoid eviction 
> of active items.
>
>   Can you share snapshots from "stats items" and "stats slabs" for one of
>   these instances?
>
>   > But as our data varies in size, it is possible to avoid evictions by 
> tuning parameters: chunk_size, growth_factor, slab_automove. Also I
>   believe memcache
>   > is efficient and we can reduce cost by reducing memory size for 
> cluster. 
>   > So I am trying to find the best possible memory size and parameters 
> we can have. So I want to be clear with my understanding and calculations.
>   >
>   > So while trying different parameters and putting all calculations, I 
> observed that total_pages * item_size_max > physical memory for a
>   machine. And from
> all blogs and docs, it did not match my understanding. But it's clear 
> now. Thanks to you.
>   >
>   > One last question: From my trials I find that we can achieve ~90% 
> storage efficiency with memcache. (i.e we need 10MB of physical memory to
>   store 9MB of
>   > data.) Do you recommend any idle memory size in terms of percentage of 
> expected data size? 
>
>   90%+ are perfectly doable. You probably need to look a bit more closely
>   into why you're not getting the efficiency you expect. The detailed 
> stats
>   output should point to why. I can help with that if it's confusing.
>
>   Either the slab rebalancer isn't keeping up or you actually do have 39GB
>   of data and your expectations are a bit off. This will also depend on
>   the TTLs you're setting and how often/quickly your items change size.
>   Also things like your serialization method / compression / key length vs
>   data length / etc.
>
>   -Dormando
>
>   > On Saturday, July 4, 2020 at 12:23:09 AM UTC+5:30, Dormando wrote:
>   >       Hey,
>   >
>   >       Looks like I never updated the manpage. In the past the item 
> size max was
>   >       achieved by changing the slab page size, but that hasn't been 
> true for a
>   >       long time.
>   >
>   >       From ./memcached -h:
>   >       -m, --memory-limit=  item memory in megabytes (default: 64)
>   >
>   >       ... -m just means the memory limit in megabytes, abstract from 
> the page
>   >       size. I think that was always true.
>   >
>   >       In any recentish version, any item larger than half a page size 
> (512k) is
>

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-03 Thread Shweta Agrawal
Sorry, forgot to mention: the summary is from one instance. The instance has 13 GB 
of RAM.

On Saturday, July 4, 2020 at 9:22:13 AM UTC+5:30, Shweta Agrawal wrote:
>
>
> Wooo...so quick. :):)
>
> > Correct, close. It actually uses more like 3 512k chunks and then one 
> > smaller chunk from a different class to fit exactly 1.6MB. 
>
> I see. Got it.
>
> >Can you share snapshots from "stats items" and "stats slabs" for one of 
> these instances? 
>
> Currently I have a summary of it, sharing the same below. I can get a snapshot 
> by Tuesday, as I need to request it.
>
> pages takes its value from total_pages in "stats slabs" for each slab class
> item_size takes its value from chunk_size in "stats slabs" for each slab class
> Used memory is calculated as pages*page size ---> this has to be corrected 
> now.
>
>
> [image: prod_stats.png]
>
> > 90%+ are perfectly doable. You probably need to look a bit more closely
> > into why you're not getting the efficiency you expect. The detailed stats
> > output should point to why. I can help with that if it's confusing.
>
> Great. Will surely ask for your input whenever I have a question. It is 
> really kind of you to offer help. 
>
> > Either the slab rebalancer isn't keeping up or you actually do have 39GB
> > of data and your expectations are a bit off. This will also depend on
> > the TTLs you're setting and how often/quickly your items change size.
> > Also things like your serialization method / compression / key length vs
> > data length / etc.
>
> We have much less data than 39 GB. After facing evictions, the memory has 
> always been kept higher than the expected data size.
> TTL is two days or more. 
> From my observation, item size (data length) is in the range of 300 bytes to 
> 500K after compression.
> Key length is in the range of 40-80 bytes.
>
> Thank you,
> Shweta
>  
> On Saturday, July 4, 2020 at 8:38:31 AM UTC+5:30, Dormando wrote:
>>
>> Hey, 
>>
>> > Putting my understanding to re-confirm: 
>> > 1) Page size will always be 1MB and we cannot change it. Moreover, it's 
>> not required to be changed. 
>>
>> Correct. 
>>
>> > 2) We can store items larger than 1MB and it is done by combining 
>> chunks together. (example: let's say item size: ~1.6MB --> 4 slab 
>> chunks(512k slab) from 
>> > 2 pages will be used) 
>>
>> Correct, close. It actually uses more like 3 512k chunks and then one 
>> smaller chunk from a different class to fit exactly 1.6MB. 
>>
>> > We use memcache in production and in the past we saw evictions even when 
>> free memory was present. Also currently we use cluster with 39GB RAM in 
>> total to 
>> > cache data even when data size we expect is ~15GB to avoid eviction of 
>> active items. 
>>
>> Can you share snapshots from "stats items" and "stats slabs" for one of 
>> these instances? 
>>
>> > But as our data varies in size, it is possible to avoid evictions by 
>> tuning parameters: chunk_size, growth_factor, slab_automove. Also I believe 
>> memcache 
>> > is efficient and we can reduce cost by reducing memory size for 
>> cluster.  
>> > So I am trying to find the best possible memory size and parameters we 
>> can have. So I want to be clear with my understanding and calculations. 
>> > 
>> > So while trying different parameters and putting all calculations, I 
>> observed that total_pages * item_size_max > physical memory for a machine. 
>> And from 
>> > all blogs and docs, it did not match my understanding. But it's clear 
>> now. Thanks to you. 
>> > 
>> > One last question: From my trials I find that we can achieve ~90% 
>> storage efficiency with memcache. (i.e we need 10MB of physical memory to 
>> store 9MB of 
>> > data.) Do you recommend any idle memory size in terms of percentage of 
>> expected data size? 
>>
>> 90%+ are perfectly doable. You probably need to look a bit more closely 
>> into why you're not getting the efficiency you expect. The detailed stats 
>> output should point to why. I can help with that if it's confusing. 
>>
>> Either the slab rebalancer isn't keeping up or you actually do have 39GB 
>> of data and your expectations are a bit off. This will also depend on 
>> the TTLs you're setting and how often/quickly your items change size. 
>> Also things like your serialization method / compression / key length vs 
>> data length / etc. 
>>
>> -Dormando 
>>
>> > On Saturday, July 4, 2020 at 12:23:09 AM UTC+5:30, Dormando wrote: 
>> >   Hey, 
>> > 
>> >   Looks like I never updated the manpage. In the past the item size 
>> max was 
>> >   achieved by changing the slab page size, but that hasn't been 
>> true for a 
>> >   long time. 
>> > 
>> >   From ./memcached -h: 
>> >   -m, --memory-limit=  item memory in megabytes (default: 64) 
>> > 
>> >   ... -m just means the memory limit in megabytes, abstract from 
>> the page 
>> >   size. I think that was always true. 
>> > 
>> >   In any recentish version, any item larger than half a page size 
>> (512k) is 
>> >   created by stitching page chunks together. This

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-03 Thread dormando
Hey,

> Putting my understanding to re-confirm:
> 1) Page size will always be 1MB and we cannot change it. Moreover, it's not 
> required to be changed.

Correct.

> 2) We can store items larger than 1MB and it is done by combining chunks 
> together. (example: let's say item size: ~1.6MB --> 4 slab chunks(512k slab) 
> from
> 2 pages will be used)

Correct, close. It actually uses more like 3 512k chunks and then one
smaller chunk from a different class to fit exactly 1.6MB.
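
To make that arithmetic concrete, here's a rough sketch (illustrative only; the
exact split depends on your slab class sizes and the per-item overhead):

    # Assumes the default 1MB page and ~512k as the largest chunk class.
    PAGE = 1024 * 1024             # 1MB slab page
    BIG_CHUNK = PAGE // 2          # ~512k chunk, half a page

    item_bytes = int(1.6 * PAGE)   # a ~1.6MB value
    full_chunks, remainder = divmod(item_bytes, BIG_CHUNK)
    print(full_chunks, "x 512k chunks +", remainder, "bytes from a smaller class")
    # -> 3 x 512k chunks + ~100KB served from a smaller slab class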

> We use memcache in production and in the past we saw evictions even when free 
> memory was present. Also, we currently use a cluster with 39GB RAM in total to
> cache data, even though the data size we expect is ~15GB, to avoid eviction of active 
> items.

Can you share snapshots from "stats items" and "stats slabs" for one of
these instances?
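
If it helps, the raw output can be captured over the plain text protocol with a
few lines of Python (a sketch; it assumes the instance speaks the text protocol
on the default port 11211):

    import socket

    def dump_stats(cmd, host="127.0.0.1", port=11211):
        # Send one stats command and read everything up to the trailing END line.
        with socket.create_connection((host, port), timeout=5) as s:
            s.sendall(cmd.encode() + b"\r\n")
            buf = b""
            while not buf.endswith(b"END\r\n"):
                data = s.recv(4096)
                if not data:
                    break
                buf += data
        return buf.decode()

    for cmd in ("stats items", "stats slabs"):
        with open(cmd.replace(" ", "_") + ".txt", "w") as out:
            out.write(dump_stats(cmd))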

> But as our data varies in size, it should be possible to avoid evictions by tuning 
> parameters: chunk_size, growth_factor, slab_automove. Also, I believe memcache
> is efficient and we can reduce cost by reducing the memory size of the cluster. 
> So I am trying to find the best possible memory size and parameters we can 
> have. So I want to be clear with my understanding and calculations.
>
> So while trying different parameters and putting all calculations, I observed 
> that total_pages * item_size_max > physical memory for a machine. And from
> all blogs and docs, it did not match my understanding. But it's clear now. 
> Thanks to you.
>
> One last question: From my trials I find that we can achieve ~90% storage 
> efficiency with memcache (i.e. we need 10MB of physical memory to store 9MB of
> data). Do you recommend any idle memory size in terms of percentage of expected 
> data size? 

90%+ are perfectly doable. You probably need to look a bit more closely
into why you're not getting the efficiency you expect. The detailed stats
output should point to why. I can help with that if it's confusing.
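
As a starting point, here is the kind of per-class summary that usually points
at the problem. It's a sketch that reads a saved "stats slabs" dump (e.g. the
stats_slabs.txt written by the snippet above) and assumes the default 1MB page:

    PAGE = 1024 * 1024   # default slab page size

    classes = {}
    with open("stats_slabs.txt") as f:
        for line in f:
            parts = line.split()
            # Per-class lines look like: STAT <classid>:<field> <value>
            if len(parts) == 3 and parts[0] == "STAT" and ":" in parts[1]:
                sid, field = parts[1].split(":", 1)
                classes.setdefault(int(sid), {})[field] = int(parts[2])

    for sid, c in sorted(classes.items()):
        if not c.get("total_pages"):
            continue
        assigned = c["total_pages"] * PAGE            # memory parked in this class
        in_use = c["used_chunks"] * c["chunk_size"]   # chunks actually holding items
        print(f"class {sid:>2}: chunk_size {c['chunk_size']:>7}  "
              f"pages {c['total_pages']:>5}  in-use {in_use / assigned:>4.0%}")

Classes holding many pages with a low in-use percentage are where memory is
parked without holding items, typically because item sizes shifted and the
automover hasn't (or can't) reclaimed those pages yet.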

Either the slab rebalancer isn't keeping up or you actually do have 39GB
of data and your expectations are a bit off. This will also depend on
the TTLs you're setting and how often/quickly your items change size.
Also things like your serialization method / compression / key length vs
data length / etc.

-Dormando

> On Saturday, July 4, 2020 at 12:23:09 AM UTC+5:30, Dormando wrote:
>   Hey,
>
>   Looks like I never updated the manpage. In the past the item size max 
> was
>   achieved by changing the slab page size, but that hasn't been true for a
>   long time.
>
>   From ./memcached -h:
>   -m, --memory-limit=  item memory in megabytes (default: 64)
>
>   ... -m just means the memory limit in megabytes, abstract from the page
>   size. I think that was always true.
>
>   In any recentish version, any item larger than half a page size (512k) 
> is
>   created by stitching page chunks together. This prevents waste when an
>   item would be more than half a page size.
>
>   Is there a problem you're trying to track down?
>
>   I'll update the manpage.
>
>   On Fri, 3 Jul 2020, Shweta Agrawal wrote:
>
>   > Hi,
>   > Sorry if I am repeating the question, I searched the list but could 
> not find a definite answer. So posting it. 
>   >
>   > Memcache version: 1.5.10 
>   > I have started memcached with option -I 4m (setting maximum item size 
> to 4MB). Verified it is set with the command "stats settings"; I can see STAT
>   item_size_max
>   > 4194304.
>   >
>   > Documentation from the git repository here states that:
>   >
>   > -I, --max-item-size=
>   > Override the default size of each slab page. The default size is 1mb. 
> Default
>   > value for this parameter is 1m, minimum is 1k, max is 1G (1024 * 1024 
> * 1024).
>   > Adjusting this value changes the item size limit.
>   > My understanding from the documentation is this option will allow saving 
> items with size up to 4MB and the page size for each slab will be 4MB
>   (as I set it as
>   > -I 4m).
>   >
>   > I am able to save items up to 4MB but the page-size is still 1MB.
>   >
>   > -m memory size is default 64MB.
>   >
>   > Calculation:
>   > -> Calculated total pages used from stats slabs output parameter 
> total_pages = 64 (If page size is 4MB then total pages should not be more
>   than 16. Also
>   > when I store 8 items of ~3MB it uses 25 pages but if page size is 
> 4MB, it should use 8 pages right.)
>   >
>   > Can you please help me in understanding the behaviour?
>   >
>   > Attached files with details for output of command stats settings and 
> stats slabs.
>   > Below is the summarized view of the distribution. 
>   > First added items with variable sizes, then added items with 3MB 
> and above.
>   >
>   > data_distribution.png
>   >
>   >
>   >
>   > Please let me know in case more details are required or question is 
> not clear.
>   >  
>  

Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-03 Thread Shweta Agrawal

Hi Dormando,

Thanks a lot for the quick and prompt reply.

*Putting my understanding to re-confirm:*
1) Page size will always be 1MB and we cannot change it. Moreover, it's not 
required to be changed.
2) We can store items larger than 1MB and it is done by combining chunks 
together. (example: let's say item size: ~1.6MB --> 4 slab chunks(512k 
slab) from 2 pages will be used)

*>Is there a problem you're trying to track down?*
We use memcache in production and in the past we saw evictions even when free 
memory was present. Also, we currently use a cluster with 39GB RAM in total to 
cache data, even though the data size we expect is ~15GB, to avoid eviction of 
active items.
But as our data varies in size, it should be possible to avoid evictions by tuning 
parameters: chunk_size, growth_factor, slab_automove. Also, I believe 
memcache is efficient and we can reduce cost by reducing the memory size of the 
cluster. 
So I am trying to find the best possible memory size and parameters we can 
have. So I want to be clear with my understanding and calculations.

So while trying different parameters and putting all the calculations together, I 
observed that *total_pages * item_size_max > physical memory for a machine.* 
And from all blogs and docs, it did not match my understanding. But it's 
clear now. *Thanks to you.*

*One last question:* From my trials I find that we can achieve ~90% storage 
efficiency with memcache (i.e. we need 10MB of physical memory to store 9MB 
of data). Do you recommend any idle memory size in terms of percentage of 
expected data size? 
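
(My rough arithmetic behind the question: at ~90% efficiency, the ~15GB of data 
we expect would need about 15 / 0.9 ≈ 16.7GB of item memory plus some headroom 
for churn, which is why the current 39GB feels oversized.)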

Very grateful for the reply. Thanks a lot.
Shweta

On Saturday, July 4, 2020 at 12:23:09 AM UTC+5:30, Dormando wrote:
>
> Hey, 
>
> Looks like I never updated the manpage. In the past the item size max was 
> achieved by changing the slab page size, but that hasn't been true for a 
> long time. 
>
> From ./memcached -h: 
> -m, --memory-limit=  item memory in megabytes (default: 64) 
>
> ... -m just means the memory limit in megabytes, abstract from the page 
> size. I think that was always true. 
>
> In any recentish version, any item larger than half a page size (512k) is 
> created by stitching page chunks together. This prevents waste when an 
> item would be more than half a page size. 
>
> Is there a problem you're trying to track down? 
>
> I'll update the manpage. 
>
> On Fri, 3 Jul 2020, Shweta Agrawal wrote: 
>
> > Hi, 
> > Sorry if I am repeating the question, I searched the list but could not 
> find a definite answer. So posting it. 
> > 
> > Memcache version: 1.5.10  
> > I have started memcached with option -I 4m (setting maximum item size to 
> 4MB). Verified it is set with the command "stats settings"; I can see STAT 
> item_size_max 
> > 4194304. 
> > 
> > Documentation from the git repository here states that: 
> > 
> > -I, --max-item-size= 
> > Override the default size of each slab page. The default size is 1mb. 
> Default 
> > value for this parameter is 1m, minimum is 1k, max is 1G (1024 * 1024 * 
> 1024). 
> > Adjusting this value changes the item size limit. 
> > My understanding from the documentation is this option will allow saving 
> items with size up to 4MB and the page size for each slab will be 4MB (as I 
> set it as 
> > -I 4m). 
> > 
> > I am able to save items up to 4MB but the page-size is still 1MB. 
> > 
> > -m memory size is default 64MB. 
> > 
> > Calculation: 
> > -> Calculated total pages used from stats slabs output 
> parameter total_pages = 64 (If page size is 4MB then total pages should not 
> be more than 16. Also 
> > when I store 8 items of ~3MB it uses 25 pages but if page size is 4MB, 
> it should use 8 pages right.) 
> > 
> > Can you please help me in understanding the behaviour? 
> > 
> > Attached files with details for output of command stats settings and 
> stats slabs. 
> > Below is the summarized view of the distribution.  
> > First added items with variable sizes, then added items with 3MB 
> and above. 
> > 
> > data_distribution.png 
> > 
> > 
> > 
> > Please let me know in case more details are required or question is not 
> clear. 
> >   
> > Thank You, 
> >  Shweta 
> > 



Re: Total memory allocated with -m NOT EQUAL to (total pages * max_item_size). Request to provide clarification on how page size works.

2020-07-03 Thread dormando
Hey,

Looks like I never updated the manpage. In the past the item size max was
achieved by changing the slab page size, but that hasn't been true for a
long time.

From ./memcached -h:
-m, --memory-limit=  item memory in megabytes (default: 64)

... -m just means the memory limit in megabytes, abstract from the page
size. I think that was always true.

In any recentish version, any item larger than half a page size (512k) is
created by stitching page chunks together. This prevents waste when an
item would be more than half a page size.
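
As a rough cross-check against the numbers in your mail: with 1MB pages and
~512k chunks, a ~3MB item is stitched from about six 512k chunks (roughly three
pages' worth) plus one small chunk from another class, so 8 items of ~3MB
landing on ~25 pages is about what you'd expect. Only with a true 4MB page size
would it come out near 8 pages.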

Is there a problem you're trying to track down?

I'll update the manpage.

On Fri, 3 Jul 2020, Shweta Agrawal wrote:

> Hi,
> Sorry if I am repeating the question, I searched the list but could not find 
> a definite answer. So posting it.
>
> Memcache version: 1.5.10 
> I have started memcached with option -I 4m (setting maximum item size to 
> 4MB). Verified it is set with the command "stats settings"; I can see STAT 
> item_size_max
> 4194304.
>
> Documentation from the git repository here states that:
>
> -I, --max-item-size=
> Override the default size of each slab page. The default size is 1mb. Default
> value for this parameter is 1m, minimum is 1k, max is 1G (1024 * 1024 * 1024).
> Adjusting this value changes the item size limit.
> My understanding from the documentation is this option will allow saving items 
> with size up to 4MB and the page size for each slab will be 4MB (as I set it as
> -I 4m).
>
> I am able to save items up to 4MB but the page-size is still 1MB.
>
> -m memory size is default 64MB.
>
> Calculation:
> -> Calculated total pages used from stats slabs output parameter total_pages 
> = 64 (If page size is 4MB then total pages should not be more than 16. Also
> when I store 8 items of ~3MB it uses 25 pages but if page size is 4MB, it 
> should use 8 pages right.)
>
> Can you please help me in understanding the behaviour?
>
> Attached files with details for output of command stats settings and stats 
> slabs.
> Below is the summarized view of the distribution. 
> First added items with variable sizes, then added items with 3MB and 
> above.
>
> data_distribution.png
>
>
>
> Please let me know in case more details are required or question is not clear.
>  
> Thank You,
>  Shweta
>
