I have a large EC2 instance (7.5 GB RAM), and it dies every few hours with
out-of-heap-memory errors. I started upping the minimum heap size; currently I
use -Xms3072M.
I insert about 50k docs an hour and currently have about 65 million docs with
about 10 fields each. Is this already too much data for one box? How do I know
when I've reached this server's limit? I have no idea how to keep this issue
under control. Am I just supposed to keep upping the minimum heap for Solr?
How do I know what the right amount of RAM to allocate is? Must I keep adding
memory as the index grows? I'd rather queries be a little slower if I can use
constant memory and have the search read from disk.
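For what it's worth, a common starting point when debugging this kind of crash is to pin the heap (set -Xms and -Xmx equal), capture a heap dump on OOM, and watch GC pressure while indexing. This is a sketch, not a recommendation for your exact workload; the 3072m figure just mirrors the value above, and the paths/PID are placeholders:

```shell
# Pin min and max heap to the same size so the JVM doesn't resize under load,
# and dump the heap when an OutOfMemoryError occurs (standard HotSpot flags).
java -Xms3072m -Xmx3072m \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/tmp/solr-oom.hprof \
     -jar start.jar

# In another terminal, sample heap occupancy and GC activity every 5 seconds
# (replace <solr-pid> with the actual process id). If the old generation (O)
# stays pinned near 100% between full GCs, the heap really is too small for
# the working set; if it sawtooths back down, the sizing is probably fine.
jstat -gcutil <solr-pid> 5000
```

Leaving -Xmx well below physical RAM matters too: Lucene/Solr rely heavily on the OS page cache for reading the index off disk, so giving the whole 7.5 GB to the JVM heap can make things worse, not better.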