Can I get some recommendations on how best to tweak/set up memory for my Fuseki 
servers? Here is my setup:

- I’ve got a single TDB with at least several million triples (I don’t know the 
exact count yet, but probably tens of millions, possibly hundreds of millions; at 
the very least, I need it to scale to hundreds of millions).
- Everything is currently in the default graph (I want to change this, but can’t 
at this point in time).
- The “TransitiveReasoner” is being used on the dataset.
- Full text indexing over two different fields is enabled using Lucene.
- The servers are running embedded Fuseki via rdf-delta and sync through a central 
rdf-delta server (a simplified sketch of this setup follows after the list).
- Even the simplest queries won’t finish and run out of memory with at least 
6 GB of RAM allocated.
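
For context, this is roughly what the embedded setup does. This is a simplified 
sketch only: paths, field names, and predicates are placeholders, the rdf-delta 
wiring is omitted, and I'm showing TDB2 for illustration.

    import java.nio.file.Paths;

    import org.apache.jena.fuseki.main.FusekiServer;
    import org.apache.jena.query.Dataset;
    import org.apache.jena.query.DatasetFactory;
    import org.apache.jena.query.text.EntityDefinition;
    import org.apache.jena.query.text.TextDatasetFactory;
    import org.apache.jena.query.text.TextIndexConfig;
    import org.apache.jena.rdf.model.InfModel;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.reasoner.ReasonerRegistry;
    import org.apache.jena.tdb2.TDB2Factory;
    import org.apache.jena.vocabulary.RDFS;
    import org.apache.lucene.store.FSDirectory;

    public class EmbeddedFusekiSketch {
        public static void main(String[] args) throws Exception {
            // Persistent TDB store; everything lives in the default graph.
            Dataset base = TDB2Factory.connectDataset("/data/tdb");

            // Apply the TransitiveReasoner to the default graph.
            InfModel inf = ModelFactory.createInfModel(
                    ReasonerRegistry.getTransitiveReasoner(), base.getDefaultModel());
            Dataset inferred = DatasetFactory.create(inf);

            // Lucene full-text index over two fields (predicates are placeholders).
            EntityDefinition entDef =
                    new EntityDefinition("uri", "label", RDFS.label.asNode());
            entDef.set("comment", RDFS.comment.asNode());
            Dataset dataset = TextDatasetFactory.createLucene(
                    inferred,
                    FSDirectory.open(Paths.get("/data/lucene")),
                    new TextIndexConfig(entDef));

            // Embedded Fuseki endpoint (rdf-delta synchronisation not shown).
            FusekiServer.create()
                    .port(3030)
                    .add("/ds", dataset)
                    .build()
                    .start();
        }
    }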

Also, should I be increasing the non-heap memory for the Fuseki server?
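
To frame that question, here is a tiny snippet I can drop into the embedded 
server to see heap vs. non-heap usage (standard java.lang.management beans, 
nothing Fuseki-specific):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    public class MemoryReport {
        public static void main(String[] args) {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            // Heap size is set with -Xmx; the JVM's non-heap areas reported here
            // (metaspace, code cache) have their own limits such as
            // -XX:MaxMetaspaceSize, and direct/off-heap buffers are capped
            // separately by -XX:MaxDirectMemorySize.
            System.out.println("Heap:     " + mem.getHeapMemoryUsage());
            System.out.println("Non-heap: " + mem.getNonHeapMemoryUsage());
        }
    }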

Thanks.
