Hi,

I have tuned (or tried to tune) my settings so that a segment is only flushed
when it has reached its maximum size. At the moment I am running my
application with only a couple of threads (I have limited it to one indexing
thread for analyzing this scenario) and ramBufferSizeMB=20000 (i.e. ~20 GB).
With this, I assumed that my file sizes on disk would be on the order of
gigabytes, and that no segment would be flushed until its in-memory size
reached ~2 GB. In 7.0, I am finding that files are written to disk very early
on and are being updated every second or so. Has something changed in 7.0
that would cause this? I tried something similar with Solr 6.5 and was able
to get files of almost a GB on disk.
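
For reference, the relevant part of my solrconfig.xml looks roughly like the
sketch below. The ramBufferSizeMB=20000 value is the setting I mentioned
above; the maxBufferedDocs and autoCommit/autoSoftCommit values are just
illustrative placeholders to show the kind of setup I mean, not my exact
configuration:

    <indexConfig>
      <!-- Flush only when ~20 GB of documents are buffered in RAM -->
      <ramBufferSizeMB>20000</ramBufferSizeMB>
      <!-- Placeholder: disable doc-count-based flushing so only the RAM limit applies -->
      <maxBufferedDocs>-1</maxBufferedDocs>
    </indexConfig>

    <updateHandler class="solr.DirectUpdateHandler2">
      <!-- Placeholders: hard commit every 10 minutes without opening a searcher,
           and no time-based soft commits -->
      <autoCommit>
        <maxTime>600000</maxTime>
        <openSearcher>false</openSearcher>
      </autoCommit>
      <autoSoftCommit>
        <maxTime>-1</maxTime>
      </autoSoftCommit>
    </updateHandler>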

How can I control this so that nothing is written to disk until the segment
has reached its maximum permitted size (1945 MB?)? My write traffic is
'add only' (i.e., it never deletes any document), yet I also found the
following infostream log entries, which surprisingly say 'deletes=true':

Oct 16, 2017 10:18:29 PM INFO  (qtp761960786-887) [   x:filesearch]
o.a.s.c.S.Request [filesearch]  webapp=/solr path=/update
params={commit=false} status=0 QTime=21
Oct 16, 2017 10:18:29 PM INFO  (qtp761960786-889) [   x:filesearch]
o.a.s.u.LoggingInfoStream [DW][qtp761960786-889]: anyChanges?
numDocsInRam=4434 deletes=true hasTickets:false pendingChangesInFullFlush:
false
Oct 16, 2017 10:18:29 PM INFO  (qtp761960786-889) [   x:filesearch]
o.a.s.u.LoggingInfoStream [IW][qtp761960786-889]: nrtIsCurrent: infoVersion
matches: false; DW changes: true; BD changes: false
Oct 16, 2017 10:18:29 PM INFO  (qtp761960786-889) [   x:filesearch]
o.a.s.c.S.Request [filesearch]  webapp=/solr path=/admin/luke
params={show=index&numTerms=0&wt=json} status=0 QTime=0



Thanks
Nawab
