Hi Dormando,
I am still able to repeatedly reproduce those "Out of memory during read"
errors during large item sets with the latest v1.6.22.
Please see my bug report at
https://github.com/memcached/memcached/issues/1096
Thanks,
Jianjian
On Monday, November 28, 2022 at 12:25:12 AM UTC-8, Danny wrote:
To add another datapoint here, we at Grafana Labs use memcached extensively
in our cloud and this fix made a massive impact on our cache effectiveness:
https://user-images.githubusercontent.com/373762/204228886-7c5a759a-927c-46fb-ae55-3e0b4056ebae.png
Thank you very much to you both for the
Thanks for taking the time to evaluate! It helps my confidence level with
the fix.
You caught me at a good time :) Been really behind with fixes for quite a
while and only catching up this week. I've looked at this a few times and
didn't see the easy fix before...
I think earlier versions of the
I didn't see the docker files in the repo that could build the docker
image, and when I tried cloning the git repo and doing a docker build I
encountered errors that I think were related to the web proxy on my work
network. I was able to grab the release tarball and the bitnami docker
file, do
So I tested this a bit more and released it in 1.6.17; I think bitnami
should pick it up soonish. If not, I'll try to figure out docker this
weekend if you still need it.
I'm not 100% sure it'll fix your use case, but it does fix some things I
can test and it didn't seem like a regression. would be
You can't build docker images or compile binaries? There's a
docker-compose.yml in the repo already, if that helps.
If not I can try but I don't spend a lot of time with docker directly.
On Fri, 26 Aug 2022, Hayden wrote:
> I'd be happy to help validate the fix, but I can't do it until the
I'd be happy to help validate the fix, but I can't do it until the weekend,
and I don't have a ready way to build an updated image. Any chance you
could create a docker image with the fix that I could grab from somewhere?
On Friday, August 26, 2022 at 10:38:54 AM UTC-7 Dormando wrote:
Took another quick look...
Think there's an easy patch that might work:
https://github.com/memcached/memcached/pull/924
If you wouldn't mind helping validate? An external validator would help me
get it in time for the next release :)
Thanks,
-Dormando
On Wed, 24 Aug 2022, dormando wrote:
Hey,
Thanks for the info. Yes; this generally confirms the issue. I see some of
your higher slab classes with "free_chunks 0", so if you're setting data
that requires these chunks it could error out. The "stats items" output
confirms this, since there are no actual items in those slab classes.
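For anyone chasing the same symptom, one quick way to spot exhausted slab
classes is to scan the "stats slabs" output for free_chunks counters at
zero. This is a rough sketch, not an official tool; the sample text below
is illustrative (in practice you'd capture the real output with something
like `echo stats slabs | nc host 11211`):

```python
# Sketch: find slab classes that have run out of free chunks.
# The sample string mimics memcached's "stats slabs" line format.

def exhausted_classes(stats_text):
    """Return slab class ids whose free_chunks counter is 0."""
    exhausted = []
    for line in stats_text.splitlines():
        # Lines look like: "STAT <class>:free_chunks <n>"
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            class_id, _, field = parts[1].partition(":")
            if field == "free_chunks" and parts[2] == "0":
                exhausted.append(int(class_id))
    return exhausted

sample = """\
STAT 30:free_chunks 128
STAT 39:free_chunks 0
STAT 40:free_chunks 0
END"""

print(exhausted_classes(sample))  # → [39, 40]
```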
What you're saying makes sense, and I'm pretty sure it won't be too hard to
add some functionality to my writing code to break my large items up into
smaller parts that can each fit into a single chunk. That has the added
benefit that I won't have to bother increasing the max item size.
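A client-side split like that can be as simple as writing the value under
several sub-keys plus a small manifest recording the part count. A minimal
sketch follows; the helper names, the "<key>:<i>" naming scheme, and the
512 KB chunk size are assumptions for illustration, not part of any
memcached client API:

```python
CHUNK_SIZE = 512 * 1024  # assumed per-part limit, to fit a single chunk

def split_value(key, value):
    """Break a large bytes value into (sub_key, chunk) pairs plus a manifest.

    The manifest entry (under the original key) stores how many parts to
    fetch on read; the "<key>:<i>" naming is made up for this sketch.
    """
    chunks = [value[i:i + CHUNK_SIZE] for i in range(0, len(value), CHUNK_SIZE)]
    parts = [(f"{key}:{i}", c) for i, c in enumerate(chunks)]
    manifest = (key, str(len(parts)).encode())
    return [manifest] + parts

def join_value(fetched_chunks):
    """Reassemble the original value from its chunks, in order."""
    return b"".join(fetched_chunks)

# Example: a 1.2 MB value becomes a manifest plus three sub-keys.
pairs = split_value("big", b"x" * (1200 * 1024))
print([k for k, _ in pairs])  # → ['big', 'big:0', 'big:1', 'big:2']
```

Each part would then be stored with an ordinary set, so no single value ever
needs chunked-item handling on the server.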
In the
To put a little more internal detail on this:
- As a SET is being processed item chunks must be made available
- If it is chunked memory, it will be fetching these data chunks from
across different slab classes (ie: 512k + 512k + sized enough for
whatever's left over)
- That full chunked item
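The allocation pattern described above can be illustrated with a little
arithmetic. This is a simplified sketch that ignores item headers and slab
class rounding; only the 512k figure comes from the description above:

```python
CHUNK_LIMIT = 512 * 1024  # max data chunk size, per the description above

def chunk_breakdown(item_size):
    """Return a rough list of the chunk sizes a SET of item_size bytes
    needs: some number of full 512k chunks plus one smaller chunk sized
    for whatever's left over."""
    full, rest = divmod(item_size, CHUNK_LIMIT)
    sizes = [CHUNK_LIMIT] * full
    if rest:
        sizes.append(rest)
    return sizes

# A 1.2 MB item needs two full 512k chunks plus a ~176 KB leftover chunk,
# each drawn from the slab class that fits it.
print(chunk_breakdown(1200 * 1024))  # → [524288, 524288, 180224]
```

If any of those slab classes has no free chunks at that moment, the set
fails with an out-of-memory error even though other classes have space.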
Hey,
You're probably hitting an edge case in the "large item support".
Basically to store values > 512k memcached internally splits them up into
chunks. When storing items memcached first allocates the item storage,
then reads data from the client socket directly into the data storage.
For
Hello,
I'm trying to use memcached for a use case I don't *think* is outlandish,
but it's not behaving the way I expect. I wanted to sanity check what I'm
doing to see if it should be working but there's maybe something I've done
wrong with my configuration, or if my idea of how it's supposed