Jumbo frames FTW

- Marc


On Tue, Aug 20, 2013 at 8:34 PM, Henrik Schröder <skro...@gmail.com> wrote:

> 1400 bytes (plus overhead) sounds suspiciously close to the very common
> 1500-byte MTU setting, so something weird probably happened when you went
> from one packet to two in that specific environment.
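>
> Back of the envelope (just a rough sketch; "eth0" and the 20 B IPv4 + 20 B TCP
> header sizes are typical assumptions, and memcached's own framing adds a bit
> more on top):
>
>     import java.net.NetworkInterface;
>
>     public class MtuCheck {
>         public static void main(String[] args) throws Exception {
>             int ipTcpOverhead = 20 + 20;  // IPv4 header + TCP header, no options
>             // "eth0" is a placeholder; substitute the NIC actually in use.
>             NetworkInterface nic = NetworkInterface.getByName("eth0");
>             int mtu = (nic != null) ? nic.getMTU() : -1;  // 1500 on plain Ethernet, ~9000 with jumbo frames
>             System.out.println("MTU: " + mtu + ", payload per segment: "
>                     + (mtu - ipTcpOverhead) + " bytes before memcached framing");
>         }
>     }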
>
>
> /Henrik
>
>
> On Tue, Aug 13, 2013 at 12:06 PM, Karlis Zigurs <homolu...@gmail.com> wrote:
>
>> Hello,
>>
>> Never mind - it must have been some interplay between the network and the VM.
>> Once memcached was deployed on a physical box (a MacBook Air, in fact) it works
>> a treat, keeping 3-4 ms response times when CAS'ing 10k records over the local
>> physical network.
>>
>> Regards,
>> Karlis
>>
>>
>> On Tue, Aug 13, 2013 at 2:49 PM, Karlis Zigurs <homolu...@gmail.com>
>> wrote:
>> > Hello,
>> >
>> > I am currently playing around with memcached and have noticed some rather
>> > worrying behaviour around CAS when the stored record starts to exceed
>> > circa 1400 bytes: when performing CAS operations from a single-threaded
>> > Java client (spymemcached 2.9.1), anything that exceeds that size threshold
>> > suddenly raises the response time from circa 2-3 ms to circa 300-400 ms
>> > (with no linear increase in between; in fact it appears that I get an extra
>> > 300 ms for every further 1400 bytes).
>> > I have seen a couple of references on the web to a possible UDP-related
>> > limit, but I would never expect such a drastic increase even if the
>> > protocol is doing full round trips.
>> >
>> > Version: 1.4.15 (built from source on CentOS 5 running in a VM)
>> > Command line: memcached -vv -u nobody -m 256 -U 0 -p 11211 -l 192.168.x.xxx
>> > Client: spymemcached (net.spy.memcached) Java library, 2.9.1 (single-threaded test harness)
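>> >
>> > The gist of the harness is roughly the following (a minimal sketch against
>> > spymemcached's MemcachedClient API; the host name, key and value sizes are
>> > placeholders):
>> >
>> >     import java.net.InetSocketAddress;
>> >     import net.spy.memcached.CASValue;
>> >     import net.spy.memcached.MemcachedClient;
>> >
>> >     public class CasLatencyTest {
>> >         public static void main(String[] args) throws Exception {
>> >             MemcachedClient client =
>> >                     new MemcachedClient(new InetSocketAddress("memcached-host", 11211));
>> >             // One value just under and one just over the ~1400 byte threshold.
>> >             for (int size : new int[] {1300, 1500}) {
>> >                 String value = new String(new char[size]).replace('\0', 'x');
>> >                 client.set("cas-test", 0, value).get();  // block until stored
>> >                 long start = System.nanoTime();
>> >                 CASValue<Object> current = client.gets("cas-test");
>> >                 client.cas("cas-test", current.getCas(), value);
>> >                 System.out.println(size + " bytes: "
>> >                         + (System.nanoTime() - start) / 1000000 + " ms");
>> >             }
>> >             client.shutdown();
>> >         }
>> >     }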
>> >
>> > Is this something inherent in the current implementation (has anybody else
>> > noticed similar behaviour), or should I fire up Wireshark and start
>> > investigating wire/environment issues? Are there any build flags I should
>> > be aware of?
>> >
>> > CAS itself is perfect for the use case (managing the occasional addition to
>> > or removal from a master list that in turn points to a large number of
>> > client-group-specific records - treating the core list as a low-contention
>> > lock with perhaps < 5 write operations per second expected, while the rest
>> > of the system handles 10k+ reads/writes distributed across the whole
>> > estate).
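>> >
>> > The intended update path is roughly this (again only a sketch assuming
>> > spymemcached; the key name, the comma-separated list encoding and the
>> > pre-existing master list are assumptions):
>> >
>> >     import net.spy.memcached.CASResponse;
>> >     import net.spy.memcached.CASValue;
>> >     import net.spy.memcached.MemcachedClient;
>> >
>> >     class MasterListUpdater {
>> >         // Append a group id to the comma-separated master list, retrying if
>> >         // another writer's CAS wins the race. At < 5 writes/sec retries are rare.
>> >         static void addGroup(MemcachedClient client, String groupId) {
>> >             while (true) {
>> >                 // Assumes the "master-list" key already exists.
>> >                 CASValue<Object> current = client.gets("master-list");
>> >                 String updated = current.getValue() + "," + groupId;
>> >                 if (client.cas("master-list", current.getCas(), updated)
>> >                         == CASResponse.OK) {
>> >                     return;
>> >                 }
>> >             }
>> >         }
>> >     }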
>> >
>> > Regards,
>> > Karlis
>> >
>>
