Thanks!

You can reallocate pages in newer versions. As I said before, `-o
modern` on the latest releases does that automatically, so long as you have
some free space for it to work with (the lru_crawler and sane TTLs help
there). Otherwise you can use the manual controls listed in
doc/protocol.txt, driven by your own external process.
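For concreteness, the manual page-moving controls from doc/protocol.txt look roughly like the session below. The class numbers are made up for illustration; pick source and destination classes from your own `stats slabs` output:

```
slabs automove 0       # stop the automatic page mover first
slabs reassign 5 12    # move one page from slab class 5 to slab class 12
slabs automove 1       # turn automatic moving back on when done
```

(On newer versions `-o modern` turns the automatic mover on for you, so the manual commands are mainly useful for an external control loop.)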

On Sat, 27 Aug 2016, Joseph Grasser wrote:

> Hey guys, thank you so much for everything!
> This community is awesome; the product is awesome; everyone here is awesome;
> and most of all, everyone on this thread is awesome!
>
> On Sat, Aug 27, 2016 at 3:05 PM, Joseph Grasser <jgrasser....@gmail.com> 
> wrote:
>       So, when I compare the total pages with the unfetched evictions, I do
> notice skew. We should probably reallocate the pages to better fit our usage
> pattern.
>
> On Sat, Aug 27, 2016 at 2:46 PM, dormando <dorma...@rydia.net> wrote:
>       Probably.
>
>       Look through `stats slabs` and `stats items` to see if evictions skew
>       toward slabs without enough pages in them, that sort of thing. That's all
>       fixed (or improved) in more recent versions (with enough feature flags
>       enabled).
>
>       You can also telnet in and run `watch evictions` to get a stream of what's
>       being evicted. Look for patterns, or bug developers about it.
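The skew check described above can be scripted. A minimal sketch in Python; the `stats items` / `stats slabs` text here is hypothetical sample output (on a live server you would capture it over the socket first):

```python
# Spot slab classes whose evictions are out of proportion to the pages
# they own. The raw text below is made-up sample output from
# `stats items` and `stats slabs`.

stats_items = """\
STAT items:3:number 5000
STAT items:3:evicted 900000
STAT items:9:number 12000
STAT items:9:evicted 1200
"""

stats_slabs = """\
STAT 3:total_pages 2
STAT 9:total_pages 60
"""

def parse(stat_text, field):
    """Extract {slab_class: value} for one field from stats output."""
    out = {}
    for line in stat_text.splitlines():
        parts = line.split()
        if len(parts) != 3 or parts[0] != "STAT":
            continue
        key = parts[1].split(":")
        # `stats items` keys look like items:<class>:<field>,
        # `stats slabs` keys look like <class>:<field>.
        if key[0] == "items":
            cls, name = int(key[1]), key[2]
        else:
            cls, name = int(key[0]), key[1]
        if name == field:
            out[cls] = int(parts[2])
    return out

evicted = parse(stats_items, "evicted")
pages = parse(stats_slabs, "total_pages")

# Evictions per page owned: a high value suggests the class is starved.
skew = {cls: evicted[cls] / pages.get(cls, 1) for cls in evicted}
for cls, rate in sorted(skew.items(), key=lambda kv: -kv[1]):
    print(f"class {cls}: {rate:.0f} evictions/page")
```

In this made-up sample, class 3 evicts vastly more per page than class 9, which is the kind of imbalance page reassignment can fix.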
>
>       On Sat, 27 Aug 2016, Joseph Grasser wrote:
>
>       > echo "stats" shows the following:
>       > cmd_set 3,000,000,000
>       > evicted_unfetched 2,800,000,000
>       > evictions 2,900,000,000
>       >
>       > This looks super abusive to me. What is that, 6% utilization of data
>       > in cache?
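Back-of-the-envelope, using only the counters quoted above (it ignores expirations and anything stored before the counters were last reset):

```python
# Of everything stored, what fraction was ever fetched before eviction?
cmd_set = 3_000_000_000
evicted_unfetched = 2_800_000_000

fetched_fraction = (cmd_set - evicted_unfetched) / cmd_set
print(f"{fetched_fraction:.1%}")  # prints 6.7%
```

So the "6%" guess is about right: only roughly 1 in 15 stored items is ever read.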
>       >
>       > On Sat, Aug 27, 2016 at 1:35 PM, dormando <dorma...@rydia.net> wrote:
>       >       You could comb through stats looking for things like
>       >       evicted_unfetched, unbalanced slab classes, etc.
>       >
>       >       1.4.31 with `-o modern` can either make a huge improvement in
>       >       memory efficiency or a marginal one. I'm unaware of it being worse.
>       >
>       >       Just something to consider if cost is your concern.
>       >
>       >       On Sat, 27 Aug 2016, Joseph Grasser wrote:
>       >
>       >       > We are running 1.4.13 on wheezy.
>       >       > In the environment I am looking at, there is a positive
>       >       > correlation between gets and puts. The ratio is something like
>       >       > 10 gets : 15 puts. The eviction spikes are also occurring at
>       >       > peak put times (which kind of makes sense with the memory
>       >       > pressure). I think the application is some kind of report
>       >       > generation tool - it's hard to say; my visibility into the
>       >       > team's work is pretty low right now, as I am a new hire.
>       >       >
>       >       > On Sat, Aug 27, 2016 at 12:34 PM, dormando 
> <dorma...@rydia.net> wrote:
>       >       >       What version are you on and what're your startup 
> options, out of
>       >       >       curiosity?
>       >       >
>       >       >       A lot of the more recent features can help with memory 
> efficiency, for
>       >       >       what it's worth.
>       >       >
>       >       >       On Sat, 27 Aug 2016, Joseph Grasser wrote:
>       >       >
>       >       >       >
>       >       >       > No problem, I'm trying to cut down on cost. We're
>       >       >       > currently using a dedicated model, which works for us on
>       >       >       > a technical level but is expensive (within budget, but
>       >       >       > still expensive).
>       >       >       >
>       >       >       > We are experiencing weird spikes in evictions, but I
>       >       >       > think that is the result of developers abusing the
>       >       >       > service.
>       >       >       >
>       >       >       > Tbh I don't know what to make of the evictions yet. I'm
>       >       >       > going to dig into it on Monday though.
>       >       >       >
>       >       >       >
>       >       >       > On Aug 27, 2016 1:55 AM, "Ripduman Sohan" 
> <ripduman.so...@gmail.com> wrote:
>       >       >       >
>       >       >       >             On Aug 27, 2016 1:46 AM, "dormando" 
> <dorma...@rydia.net> wrote:
>       >       >       >                   >
>       >       >       >                   > Thank you for the tips guys!
>       >       >       >                   >
>       >       >       >                   > The limiting factor for us is
>       >       >       >                   > actually memory utilization. We are
>       >       >       >                   > using the default configuration on
>       >       >       >                   > sizable EC2 nodes and pulling only
>       >       >       >                   > about 20k qps per node, which is
>       >       >       >                   > fine because we need to shard the
>       >       >       >                   > key set over x servers to handle
>       >       >       >                   > the mem req (30G) per server.
>       >       >       >                   >
>       >       >       >                   > I should have looked into that
>       >       >       >                   > before posting.
>       >       >       >                   >
>       >       >       >                   > I am really curious about network
>       >       >       >                   > saturation though. 200k gets at
>       >       >       >                   > 1 MB per get is a lot of traffic...
>       >       >       >                   > how can you hit that mark without
>       >       >       >                   > saturation?
>       >       >       >
>       >       >                   Most people's keys are a lot smaller. In
>       >       >                   multiget tests with 40-byte keys I can pull
>       >       >                   20 million+ keys/sec out of the server,
>       >       >                   probably at less than 10 Gbps too. It tends to
>       >       >                   cap between 600k and 800k/s if you need to do
>       >       >                   a full roundtrip per key fetch, limited by the
>       >       >                   NIC. Lots of tuning is required to get around
>       >       >                   that.
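Rough arithmetic behind the question and answer above (payload bytes only, ignoring protocol and TCP overhead):

```python
# Why 200k gets/sec of 1 MB values cannot fit on any single NIC,
# while tens of millions of 40-byte keys per second easily can.
GBIT = 1e9

large = 200_000 * 1_000_000 * 8 / GBIT   # 200k x 1 MB values, in Gbps
small = 20_000_000 * 40 * 8 / GBIT       # 20M x 40-byte keys, in Gbps

print(f"1 MB values: {large:,.0f} Gbps")
print(f"40 B keys:   {small:.1f} Gbps")
```

The large-value case works out to 1,600 Gbps, which is why 200k/sec is only reachable with small items; the small-key payload alone fits comfortably in a 10 GbE link.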
>       >       >       >
>       >       >       >
>       >       >       > I think (but may be wrong) the 200K TPS result is 
> based on 1K values.  Dormando should be able to correct me. 
>       >       >       >
>       >       >       > 20K TPS does seem a little low though. If you're bound
>       >       >       > by memory set size, have you thought about the
>       >       >       > cost/tradeoff benefits of using dedicated servers for
>       >       >       > your memcache?
>       >       >       > I'm quite interested to find out more about what you're
>       >       >       > trying to optimise. Is it minimising the number of
>       >       >       > servers, maximising query rate, both, neither, etc.?
>       >       >       >
>       >       >       > Feel free to reach out directly if you can't share 
> this publicly. 
>       >       >       >  
>       >       >       >
>       >       >       > --
>       >       >       >
>       >       >       > ---
>       >       >       > You received this message because you are subscribed 
> to a topic in the Google Groups "memcached" group.
>       >       >       > To unsubscribe from this topic, visit 
> https://groups.google.com/d/topic/memcached/la-0fH1UzyA/unsubscribe.
>       >       >       > To unsubscribe from this group and all its topics, 
> send an email to memcached+unsubscr...@googlegroups.com.
>       >       >       > For more options, visit 
> https://groups.google.com/d/optout.
