Hi there Eugene,
Please look at
http://geode.docs.pivotal.io/docs/reference/topics/cache_xml.html#lru-memory-size.
When configuring this eviction policy, you should be able to specify the
amount of memory the region may hold before it begins overflowing
values to disk.
I am at this stage uncertain whether this policy takes only the size of
the value into account, or whether it includes the key as well. If it
includes the keys, this setting might cause the region to keep fewer and
fewer values in memory as the number of entries in the region increases.
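For illustration, overflow on memory size is configured in cache.xml
roughly like this (the region name and the 400 MB limit are made-up
values; the action attribute is what makes entries overflow to disk
rather than be destroyed):

```xml
<region name="fileCache">
  <region-attributes>
    <eviction-attributes>
      <!-- Overflow least-recently-used entries to disk once the
           region's in-memory footprint reaches 400 MB (illustrative). -->
      <lru-memory-size maximum="400" action="overflow-to-disk"/>
    </eviction-attributes>
  </region-attributes>
</region>
```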
--Udo
On 20/04/2016 11:39 am, Eugene Strokin wrote:
Dan, thanks for the response. Yes, you're right, 512 MB of course. My
mistake.
The idea is to use as much disk space as possible. I understand the
downside of using high compaction threshold. I'll play with that, and
see how bad it could be.
But what about eviction? Would Geode remove objects from the overflow
automatically once it reaches a certain size?
Ideally, I'd like to set up Geode to start evicting LRU objects
once the free disk space drops to 1 GB. Is that possible? If so,
please point me in the right direction.
Thanks again,
Eugene
On Tue, Apr 19, 2016 at 8:25 PM, Dan Smith <[email protected]
<mailto:[email protected]>> wrote:
I'm guessing you mean 512MB of RAM, not KB? Otherwise, you are
definitely going to have problems :)
Regarding conserving disk space - I think only allowing for 1 GB
free space is probably going to run into issues. I think you would
be better off having fewer droplets with more space if that's
possible. And only leaving 5% disk space for compaction and as a
buffer to avoid running out of disk is probably not enough.
By default, Geode will compact oplogs when they get to be 50%
garbage, which means needing maybe 2X the amount of actual disk
space. You can configure the compaction-threshold to something
like 95%, but that means Geode will be doing a lot of extra work
cleaning up garbage on disk. Regardless, you'll probably want to
tune down the max-oplog-size to something much smaller than 1GB.
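As a rough sketch, both of those knobs live on the disk store in
cache.xml (the store name, directory, and values here are just
examples, not recommendations):

```xml
<disk-store name="overflowStore"
            compaction-threshold="95"
            max-oplog-size="64">
  <disk-dirs>
    <disk-dir>/var/geode/overflow</disk-dir>
  </disk-dirs>
</disk-store>
```

max-oplog-size is in megabytes, so 64 here keeps individual oplogs
well under the 1 GB default.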
-Dan
On Tue, Apr 19, 2016 at 4:26 PM, Eugene Strokin
<[email protected] <mailto:[email protected]>> wrote:
Hello, I'm seriously considering using Geode as the core of a
distributed file cache system. But I have a few questions.
But first, this is what needs to be done: a scalable file system
with an LRU eviction policy, utilizing the disk space as much as
possible. The idea is to have around 50 small Droplets from
DigitalOcean, which provide 512Kb RAM and 20Gb storage. The
client should call the cluster and get a byte array by a key.
If needed, the cluster should be expanded. The byte arrays
originate as files in AWS S3.
Looks like everything could be done using Geode, but:
- it looks like compaction requires a lot of free hard
drive space. All I can allow is about 1Gb. Would this work in
my case? How could it be done?
- Would objects be evicted automatically from overflow
storage using an LRU policy?
Thanks in advance for your answers, ideas, suggestions.
Eugene