What version are you on and what're your startup options, out of
curiosity?

A lot of the more recent features can help with memory efficiency, for
what it's worth.
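
If it helps, here is a minimal sketch of checking what a running instance reports for its version and startup settings over memcached's plain-text protocol (Python, standard library only; the localhost host/port are assumptions, and any client's stats call will show the same information):

    import socket

    def memcached_text_command(cmd, host="127.0.0.1", port=11211):
        """Send one memcached text-protocol command and read the full reply."""
        terminator = b"\r\n" if cmd == "version" else b"END\r\n"
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(cmd.encode() + b"\r\n")
            buf = b""
            while not buf.endswith(terminator):
                chunk = sock.recv(4096)
                if not chunk:
                    break
                buf += chunk
        return buf.decode()

    # "version" reports the server build; "stats settings" dumps the startup
    # options actually in effect (memory limit, connection limit, slab growth
    # factor, and so on).
    print(memcached_text_command("version"))
    print(memcached_text_command("stats settings"))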

On Sat, 27 Aug 2016, Joseph Grasser wrote:

>
> No problem, I'm trying to cut down on cost. We're currently using a dedicated 
> model, which works for us on a technical level but is expensive (within budget 
> but still expensive).
>
> We are experiencing weird spikes in evictions but I think that is the result 
> of developers abusing the service.
>
> Tbh I don't know what to make of the evictions yet. I'm going to dig into it 
> on Monday though.
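
One way to start digging into those eviction spikes, sketched here as an illustrative aside (single node on the default host/port assumed; the 10-second interval is arbitrary), is to sample the global evictions counter from the server's stats output and watch how it moves over time:

    import socket
    import time

    def fetch_stat(name, host="127.0.0.1", port=11211):
        """Read one counter out of memcached's 'stats' output."""
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(b"stats\r\n")
            buf = b""
            while not buf.endswith(b"END\r\n"):
                chunk = sock.recv(4096)
                if not chunk:
                    break
                buf += chunk
        for line in buf.decode().splitlines():
            parts = line.split()  # lines look like: STAT evictions 1234
            if len(parts) == 3 and parts[1] == name:
                return int(parts[2])
        raise KeyError(name)

    # Print the per-interval delta; a sudden jump points at whatever traffic
    # started at that moment.
    prev = fetch_stat("evictions")
    while True:
        time.sleep(10)
        cur = fetch_stat("evictions")
        print(f"evictions in the last 10s: {cur - prev}")
        prev = cur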
>
>
> On Aug 27, 2016 1:55 AM, "Ripduman Sohan" <ripduman.so...@gmail.com> wrote:
>
>             On Aug 27, 2016 1:46 AM, "dormando" <dorma...@rydia.net> wrote:
>                   >
>                   > Thank you for the tips guys!
>                   >
>                   > The limiting factor for us is actually memory
>                   > utilization. We are using the default configuration on
>                   > sizable EC2 nodes and pulling only about 20k qps per
>                   > node, which is fine because we need to shard the key
>                   > set over x servers to handle the memory requirement
>                   > (30G) per server.
>                   >
>                   > I should have looked into that before posting.
>                   >
>                   > I am really curious about network saturation though.
>                   > 200k gets at 1MB per get is a lot of traffic... how can
>                   > you hit that mark without saturation?
>
>                   Most people's keys are a lot smaller. In multiget tests
>                   with 40-byte keys I can pull 20 million+ keys/sec out of
>                   the server, probably at less than 10Gbps even at that
>                   rate. It tends to cap between 600k and 800k/s if you need
>                   to do a full roundtrip per key fetch, limited by the NIC.
>                   Lots of tuning is required to get around that.
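
For a rough sense of the arithmetic: 200k gets/sec at 1MB per value would be about 200GB/s (roughly 1.6Tbps), far beyond any single NIC, whereas at ~1KB values the same rate is only around 1.6Gbps. The sketch below (hypothetical key names, default host/port assumed) illustrates the difference dormando describes between paying a full roundtrip per key and batching the same keys into a single multiget request over the text protocol:

    import socket

    HOST, PORT = "127.0.0.1", 11211                  # assumptions
    keys = [f"small_key_{i}" for i in range(100)]    # hypothetical key names

    def read_reply(sock):
        """Read a 'get' reply, which always ends with END."""
        buf = b""
        while not buf.endswith(b"END\r\n"):
            chunk = sock.recv(65536)
            if not chunk:
                break
            buf += chunk
        return buf

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        # Per-key fetches: one request, one reply, one network roundtrip per key.
        for key in keys:
            sock.sendall(f"get {key}\r\n".encode())
            read_reply(sock)

        # Multiget: the same keys in a single "get" request, one reply, one roundtrip.
        sock.sendall(("get " + " ".join(keys) + "\r\n").encode())
        read_reply(sock)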
>
>
> I think (but may be wrong) the 200K TPS result is based on 1K values.  
> Dormando should be able to correct me. 
>
> 20K TPS does seem a little low though. If you're bound by the size of the data 
> set in memory, have you thought about the cost/benefit tradeoff of using 
> dedicated servers for your memcache?
> I'm quite interested to find out more about what you're trying to optimise. 
> Is it minimising the number of servers, maximising query rate, both, neither, etc?
>
> Feel free to reach out directly if you can't share this publicly. 
>  
>
