Hello,

I'm trying to use memcached for a use case I don't *think* is outlandish, 
but it's not behaving the way I expect. I'd like a sanity check on what I'm 
doing: should this work and I've just got something wrong in my 
configuration, is my mental model of how memcached is supposed to work 
wrong, or is there a problem with memcached itself?

I'm using memcached as a temporary shared image store in a distributed 
video processing application. At the front of the pipeline is a process 
(all of these processes are actually pods in a kubernetes cluster, if it 
matters, and memcached is running in the cluster as well) that consumes a 
video stream over RTSP, saves each frame to memcached, and publishes an 
event to a message bus (kafka) with metadata about each frame. At the end 
of the pipeline is another process that consumes these metadata events, 
and when it sees an event it thinks is interesting it retrieves the 
corresponding frame from memcached and adds it to a web UI. The video is 
typically 30fps, so there are about 30 set() operations per second, and 
since each value is effectively an image the values are fairly large 
(around 1MB; I raised memcached's maximum item size to 2MB to make sure 
they'd fit, and I haven't had any writes rejected because of size).
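
For concreteness, the writing side boils down to something like this (a 
simplified sketch, not my actual code; read_frame(), the hostnames, and 
the topic name are stand-ins):

import json
import time

from kafka import KafkaProducer            # kafka-python
from pymemcache.client.base import Client  # pymemcache

mc = Client(("memcached.default.svc", 11211))
producer = KafkaProducer(
    bootstrap_servers="kafka:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)

frame_id = 0
while True:
    # read_frame() is a stand-in for grabbing and encoding one frame
    # (~1MB of bytes) from the RTSP stream
    jpeg_bytes = read_frame()
    key = f"frame-{frame_id}"
    mc.set(key, jpeg_bytes)  # one set() per frame, about 30/second
    producer.send("frame-metadata", {"key": key, "ts": time.time()})
    frame_id += 1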

The video stream is processed in real time and is effectively infinite, 
but the memory available to memcached obviously isn't (I've capped it at 
5GB, FWIW). That's OK, because the cache is only meant to be temporary 
storage. My expectation is that once the available memory fills up (which 
takes a few minutes), then, roughly speaking, for every new frame added to 
memcached another entry (presumably the oldest one) will be evicted. If 
the consuming process at the end of the pipeline doesn't get to a frame it 
wants before that frame is evicted, that's OK.
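
To watch whether that's happening, I've been polling the server counters, 
roughly like so (a quick sketch; the host is a stand-in):

import time
from pymemcache.client.base import Client

mc = Client(("memcached.default.svc", 11211))
while True:
    s = mc.stats()  # pymemcache returns the stats dict with bytes keys
    print("bytes:", s[b"bytes"],
          "evictions:", s[b"evictions"],
          "evicted_unfetched:", s[b"evicted_unfetched"])
    time.sleep(5)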

That's not what I'm seeing, though, or at least it's not all that I'm 
seeing. There are plenty of evictions happening, but the process writing 
to memcached also goes through periods where every set() operation is 
rejected with an "Out of memory during read" error. It happens in bursts: 
for several seconds every write hits the error, then for several seconds 
the set() calls work just fine (and presumably other keys are being 
evicted), and then the cycle repeats for as long as I let the process run.
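
On the client side the failures look like this (another sketch; mc, key, 
and jpeg_bytes are the same names as in the producer loop above, and 
pymemcache surfaces the server's SERVER_ERROR response as an exception):

from pymemcache.exceptions import MemcacheServerError

try:
    mc.set(key, jpeg_bytes)
except MemcacheServerError as e:
    # During a bad burst, every set() raises this for several seconds:
    #   SERVER_ERROR Out of memory during read
    print("set failed:", e)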

I'm using memcached v1.6.14, installed into my k8s cluster with the 
bitnami helm chart v6.0.5. Both the reading and writing applications use 
pymemcache v3.5.2.

Can anyone tell me whether what I'm doing should work the way I described, 
and where I should start investigating to see what's going wrong? Or, 
alternatively, why what I'm trying to do shouldn't work the way I 
expected, so I can figure out how to make my applications behave 
differently?

Thanks,
Hayden
