How're you serializing the list?

In cases where I've had to work with that, we ensure the value is a flat
list of packed numerics, and run no serialization on them. Then you have
the overhead of a network fetch, but testing values within the list is
nearly free.
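
Roughly, something like the sketch below (illustrative only: the
"actList" key comes from your mail; the 8-byte big-endian packing, the
helper names, and the pymemcache client are just assumptions for the
example):

import struct
from pymemcache.client.base import Client

client = Client(("127.0.0.1", 11211))

def store_act_list(ids):
    # Pack the longs into one flat byte string; no serializer runs.
    client.set("actList", struct.pack(">%dq" % len(ids), *ids))

def newest(n=10):
    # The whole value still crosses the wire, but slicing the first
    # n longs (8 bytes each) out of the blob is nearly free.
    blob = client.get("actList") or b""
    head = blob[: n * 8]
    return list(struct.unpack(">%dq" % (len(head) // 8), head))

A nice side effect is that memcached's native prepend then maps to
sticking 8 more raw bytes on the front of the value whenever a new
activity id arrives, since there's no serialized structure to break.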

-Dormando

On Tue, 19 Jan 2010, jim wrote:

> Hi:
>
> I've recently started on a project that is utilizing memcached to
> buffer lists of 'long's that represent key values for larger, more
> detailed objects to load if needed.  This list will have a maximum of
> 10,000 items.
>
> ex:
>
> List of longs:
> key : actList     value : a maximum of 10,000 longs, sorted
> chronologically based on their 'detailed' object timestamp.
>
> Detailed entry:
> key : act_<id>  value : a text-serialized object representing the
> detailed activity object.  <id> is contained in the 'List of longs'
> above.
>
> There are times where we would like to load only the first block of
> entries from the list of longs, say 10 records.  We would then look
> at those 10 records, check whether they are new relative to what a
> currently displayed list shows, and if so grab those new entries,
> without pulling all 10,000 across from cache only to use the
> relatively small number of new entries at the top of the list.  This
> goes hand in hand with the prepend (or append) operations: when new
> activities arrive we push them onto the front of the 'List of longs',
> and it's these new entries that clients may be interested in if they
> do not yet have them.
>
> My question is, is there a way to do this?  Is there a way to grab
> only X bytes from a value in memcached?  I read over the protocol
> document and it doesn't appear there is.  Is there any interest in
> this, or a valid use case it would fill for other users?
>
> An alternate solution I can see is to store the 'list of longs' as a
> flat list of keys with a naming convention, such as actList_1,
> actList_2, etc.  However, this will obviously lead to an extremely
> long list of key names in the multi-key 'get', as well as lots of
> churn in managing these objects ... so it appears far less than
> ideal.  At the same time, we have many (200,000) of these 10,000-item
> 'Lists of longs', which leads to pulling loads of data over the wire
> when a large update occurs that needs to be communicated to the
> cache ... it would be much more efficient to fetch only a certain
> number of bytes from cache vs. the entire cached value.
>
> Any other thoughts?  Ideas?
>
> Thank you.
> Jim
>
>
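
On the chunked actList_1, actList_2, ... alternative from the mail
above, a very rough sketch of how that could look (the chunk size, the
newest-first ordering of chunk 1, and the pymemcache client are
assumptions, not anything decided in this thread):

import struct
from pymemcache.client.base import Client

CHUNK = 1000  # longs per chunk; a 10,000-entry list spans 10 keys
client = Client(("127.0.0.1", 11211))

def store_chunked(ids):
    # ids assumed newest-first, so actList_1 always holds the head.
    for i in range(0, len(ids), CHUNK):
        part = ids[i:i + CHUNK]
        client.set("actList_%d" % (i // CHUNK + 1),
                   struct.pack(">%dq" % len(part), *part))

def newest_chunk():
    # Only one chunk crosses the wire instead of the whole list.
    blob = client.get("actList_1") or b""
    return list(struct.unpack(">%dq" % (len(blob) // 8), blob))

It still has the churn problem mentioned above (chunks get rewritten
as new entries push everything along), which is part of why a single
flat packed value can end up simpler.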
