Branch friction aside, Paul Saab's description of the interesting
problems is really worth expanding upon and learning from, in the
context of a real-world implementation going after C10K scalability.

For example, Paul notes that with thousands of open TCP connections,
memcached can start using several extra gigabytes of memory for
buffers. Well, that's interesting! I want to know more about that.
What can we learn here?
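
To put rough numbers on it: one plausible mechanism is that each
connection carries read and write buffers that grow to fit the largest
request/response seen on it, and idle connections keep them. The
connection count and buffer sizes below are my own assumptions for
illustration, not figures from Paul's note:

    /* Back-of-envelope only: all sizes here are assumed, not measured. */
    #include <stdio.h>

    int main(void) {
        long conns = 100000;      /* assumed open TCP connections */
        long rbuf  = 16 * 1024;   /* assumed grown read buffer per conn */
        long wbuf  = 64 * 1024;   /* assumed grown write buffer per conn
                                     (large multiget replies) */
        double gb = conns * (double)(rbuf + wbuf) / (1024 * 1024 * 1024);
        printf("~%.1f GB pinned in idle connection buffers\n", gb);
        return 0;
    }

Under those assumptions that's ~7.6 GB sitting in buffers before a
single item is cached, which is the right order of magnitude for the
overhead Paul describes.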

Paul notes that they switched to using UDP for GETs with
application-level flow control. Only GETs? App flow control? That's
good to know -- IIRC, when memcached over UDP was last being discussed,
there were advocates for SET over UDP and for various approaches to
flow control. (For my part, I advocated for a GET variant with an
offset+length parameter.)
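
For anyone who hasn't looked at the wire format lately: memcached's UDP
transport prepends an 8-byte frame header (request id, sequence number,
total datagram count, reserved) to every datagram, and that header is
what any application-level flow control has to hang off of. Here's a
minimal single-datagram GET sketch; the host, port, and key are
placeholders, and error handling is omitted:

    /* Sketch of one GET over memcached's UDP transport. The 8-byte
       frame header layout is from the memcached protocol docs. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(11211);
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        unsigned char pkt[1500];
        /* 8-byte UDP frame header, big-endian 16-bit fields */
        pkt[0] = 0; pkt[1] = 1;   /* request id, echoed in replies     */
        pkt[2] = 0; pkt[3] = 0;   /* sequence number of this datagram  */
        pkt[4] = 0; pkt[5] = 1;   /* total datagrams in this message   */
        pkt[6] = 0; pkt[7] = 0;   /* reserved, must be zero            */

        const char *cmd = "get somekey\r\n";   /* placeholder key */
        memcpy(pkt + 8, cmd, strlen(cmd));
        sendto(s, pkt, 8 + strlen(cmd), 0,
               (struct sockaddr *)&addr, sizeof addr);

        /* Replies repeat the header; seq/total let a client reassemble
           multi-datagram responses and detect loss. */
        ssize_t n = recvfrom(s, pkt, sizeof pkt, 0, NULL, NULL);
        if (n > 8)
            fwrite(pkt + 8, 1, (size_t)(n - 8), stdout);
        close(s);
        return 0;
    }

The seq/total fields are presumably where Facebook's application-level
flow control plugs in -- but that's exactly the kind of detail I'd like
to hear more about.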

So while the devs work through the details and disagreements of merging
the code bases, let's also keep the communication channels open about the
engineering decisions and experiences in this work. At the end of the
day, if we haven't collectively advanced the state of the art and
updated the best practices in our field, we're doing something wrong.

Aaron


On Fri, Dec 12, 2008 at 1:34 PM, steve.yen <steve....@gmail.com> wrote:
>
> fyi, Paul Saab's note on facebook's memcached improvements and git
> repo (originally pointed out to me by Dustin)
>
> http://www.facebook.com/note.php?note_id=39391378919
>
>
