I'm looking at the section of the Solr Reference Guide on running Solr on
HDFS, and it raises a few quick questions for me. I guess I got spoiled by
MMapDirectory and how magically it just worked!

1. What is the minimal set of configuration parameters that enables HDFS
block caching? It seems like I need to set -XX:MaxDirectMemorySize when
launching Solr, and then for every collection I want to use caching with, I
need to make sure the block cache settings are left at their defaults,
except that solr.hdfs.blockcache.write.enabled should be false. (I've
sketched out below the questions what I think that looks like.)

2. If I use solr.hdfs.blockcache.global, is the slab count still applied per
core, does it apply to the single global cache, or is it no longer relevant?

3. Is there a sneaky way of ensuring a given collection or core loads first
so no other cores accidentally override the global blockcache setting?

4. In terms of -XX:MaxDirectMemorySize and solr.hdfs.blockcache.slab.count,
is there some percentage of system RAM or some overall maximum beyond which
they stop providing any benefit, or can I just tune them to use nearly all
of the RAM left over after the JVM heap and the operating system's needs?
Or can the limit even be set higher than the total RAM just to be safe? (My
back-of-the-envelope math is below.)
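
To make question 1 concrete, here is the minimal setup I have in mind. The
solr.hdfs.* parameter names are straight out of the Reference Guide; the
namenode address, path, and sizes are placeholders I made up.

JVM option at startup (wherever the startup options are set):

    -XX:MaxDirectMemorySize=20g

solrconfig.xml for each collection that should use the cache:

    <directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
      <str name="solr.hdfs.home">hdfs://namenode:8020/solr</str>
      <bool name="solr.hdfs.blockcache.enabled">true</bool>
      <bool name="solr.hdfs.blockcache.global">true</bool>
      <bool name="solr.hdfs.blockcache.write.enabled">false</bool>
      <int name="solr.hdfs.blockcache.slab.count">128</int>
    </directoryFactory>

Is that everything, or am I missing something?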
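
And for question 4, here is the back-of-the-envelope math behind it,
assuming I've read the defaults correctly (16,384 blocks per slab x 8 KB
per block = 128 MB per slab):

    128 slabs x 128 MB/slab = 16 GB of block cache
    -XX:MaxDirectMemorySize=20g  ->  roughly 4 GB of headroom

So the real question is whether it's worth raising the slab count until the
cache covers most of the RAM that the heap and the OS don't need.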

Thanks,

Michael Della Bitta

Applications Developer

o: +1 646 532 3062

appinions inc.

“The Science of Influence Marketing”

18 East 41st Street

New York, NY 10017

t: @appinions <https://twitter.com/Appinions> | g+:
plus.google.com/appinions
<https://plus.google.com/u/0/b/112002776285509593336/112002776285509593336/posts>
w: appinions.com <http://www.appinions.com/>
