Thanks!
You can reallocate pages in newer versions. As I said before, `-o
modern` on the latest ones does that automatically, so long as you have
some free space for it to work with (the lru_crawler and using sane TTLs
help there). Otherwise you can use the manual controls as listed in
doc/protocol.txt.
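A rough sketch of the manual controls over telnet (they need a version
new enough to have them, and `slabs reassign` requires starting with
`-o slab_reassign`; the hostname and class ids here are made up):

    $ telnet cache-host 11211
    slabs automove 1
    OK
    slabs reassign 5 12
    OK
    lru_crawler enable
    OK

`slabs automove 1` lets the daemon move pages between classes on its
own, `slabs reassign <src> <dst>` forces a single page from one class to
another, and enabling the crawler reclaims expired items so there is
free space to work with.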
Hey guys, thank you so much for everything!
This community is awesome; the product is awesome; everyone here is
awesome; and most of all everyone on this thread is awesome!
On Sat, Aug 27, 2016 at 3:05 PM, Joseph Grasser wrote:
So, when I compare the total pages with the unfetched evictions I do notice
skew. We should probably reallocate the pages to better fit our usage
pattern.
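For anyone following along, the comparison is just two greps (hostname
and numbers made up, but shaped like what we're seeing):

    $ echo -e "stats slabs\nquit" | nc cache-host 11211 | grep total_pages
    STAT 3:total_pages 4023
    STAT 12:total_pages 71
    $ echo -e "stats items\nquit" | nc cache-host 11211 | grep evicted_unfetched
    STAT items:3:evicted_unfetched 1812
    STAT items:12:evicted_unfetched 910422811

A class that holds almost no pages while producing almost all of the
unfetched evictions is the one being starved.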
On Sat, Aug 27, 2016 at 2:46 PM, dormando wrote:
Probably.
Look through `stats slabs` and `stats items` to see if evictions skew
toward slab classes that don't have enough pages in them, that sort of
thing. That's all fixed (or improved) in more recent versions (with
enough feature flags enabled).
You can also telnet in and run `watch evictions` to get a live stream.
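Roughly what the stream looks like (illustrative output; the exact field
layout varies by version and the key here is made up):

    $ telnet cache-host 11211
    watch evictions
    OK
    ts=1472337906.123456 gid=5 type=eviction key=user:9124:profile fetch=no ttl=-1

Each eviction is logged with, among other things, whether the item was
ever fetched and its remaining TTL, which is exactly the skew being
discussed here.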
echo "stats" shows the following :
cmd_set 3,000,000,000
evicted_unfetched 2,800,000,000
evictions 2,900,000,000
This looks super abusive to me. What, is that 6% utilization of data in
cache?
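Spelling out where my estimate comes from:

    evictions / cmd_set           = 2.9e9 / 3.0e9 ≈ 97%  (sets that get evicted)
    evicted_unfetched / evictions = 2.8e9 / 2.9e9 ≈ 97%  (evicted without a single read)

So only the ~0.1B sets that survive plus the ~0.1B that were read at
least once before eviction, roughly 6-7% of everything written, ever do
any work.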
On Sat, Aug 27, 2016 at 1:35 PM, dormando wrote:
You could comb through stats looking for things like evicted_unfetched,
unbalanced slab classes, etc.
1.4.31 with `-o modern` can either make a huge improvement in memory
efficiency or a marginal one. I'm unaware of it being worse.
Just something to consider if cost is your concern.
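For example, something like this on 1.4.31 (a sketch; tune the numbers
for your 30G nodes):

    memcached -d -m 30720 -o modern

`-o modern` is just an alias for a bundle of the newer defaults (slab
rebalancing, the LRU crawler and maintainer thread, and so on).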
On Sat, 27 Aug 2016, Joseph Grasser wrote:
We are running 1.4.13 on wheezy.
In the environment I am looking at there is a positive correlation
between gets and puts. The ratio is something like 10 Gets : 15 Puts.
The eviction spikes are also occurring at peak put times (which kind of
makes sense with the mem pressure). I think the applic
What version are you on and what're your startup options, out of
curiosity?
A lot of the more recent features can help with memory efficiency, for
what it's worth.
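If digging up the init scripts is a pain, the running daemon will answer
both questions (hostname made up; which settings appear varies by
version):

    $ echo -e "version\nquit" | nc cache-host 11211
    VERSION 1.4.13
    $ echo -e "stats settings\nquit" | nc cache-host 11211 | grep -E "maxbytes|slab_reassign"
    STAT maxbytes 32212254720
    STAT slab_reassign no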
On 27 August 2016 at 10:05, Joseph Grasser wrote:
No problem, I'm trying to cut down on cost. We're currently using a
dedicated model which works for us on a technical level but is expensive
(within budget but still expensive).
We are experiencing weird spikes in evictions but I think that is the
result of developers abusing the service.
Tbh I don'
I don't really have high visibility into the average item size but I see
what you mean.
Stats should give us good info on that, right?
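I'm guessing something like this gets a ballpark (made-up numbers):

    $ echo -e "stats\nquit" | nc cache-host 11211 | grep -E "^STAT (bytes|curr_items) "
    STAT bytes 27917287424
    STAT curr_items 31000000

    27917287424 / 31000000 ≈ 900 bytes average per item

And `stats slabs` should give the distribution per size class
(chunk_size, used_chunks) rather than just the average.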
Thank you for the tips guys!
The limiting factor for us is actually memory utilization. We are using
the default configuration on sizable EC2 nodes and pulling only like 20k
qps per node, which is fine because we need to shard the key set over x
servers to handle the mem req (30G) per server.
I s