bq: Is there a way to not write to disk continuously and only write the file...
Not if we're talking about the transaction log. The design is that the
transaction log in particular continuously gets updates flushed to
it; otherwise you could not replay the transaction log upon restart
and have a
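For reference, the flush behaviour described above is governed by the update
handler settings in solrconfig.xml. Below is a minimal sketch (with
illustrative values, not taken from this thread) of the stock update log plus
an auto hard commit, which is what rolls the tlog over and flushes index
files:

  <updateHandler class="solr.DirectUpdateHandler2">
    <!-- Transaction log: every update is appended and synced so it can be
         replayed on restart -->
    <updateLog>
      <str name="dir">${solr.ulog.dir:}</str>
    </updateLog>
    <!-- Auto hard commit: closes the current tlog and flushes index files;
         openSearcher=false keeps it from opening a new searcher each time -->
    <autoCommit>
      <maxTime>15000</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>
  </updateHandler>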
I take back my comment from yesterday. I assumed that the file being written
was a segment; however, after letting Solr run overnight, I see that the
segment is flushed at the expected size: 1945MB (so the file which I
observed was still open for writing).
Now, I have two other questions:
1. Is th
Anyone can raise a JIRA and submit a patch; it's then up to one of the
committers to pick it up and commit it to the code lines. You have to
create an ID, of course.
See: https://issues.apache.org/jira/
On Tue, Oct 17, 2017 at 5:04 AM, Mike Sokolov wrote:
> Checkstyle has a onetoplevelclass rule that would enforce this
Checkstyle has a onetoplevelclass rule that would enforce this
On October 17, 2017 3:45:01 AM EDT, Uwe Schindler wrote:
>Hi,
>
>this has nothing to do with the Java version. I generally ignore this
>Eclipse-failure as I only develop in Eclipse, but run from command
>line. The reason for this beha
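For anyone wanting to try the OneTopLevelClass suggestion above, a minimal
checkstyle.xml fragment would look roughly like this (a sketch only; how the
Lucene/Solr build actually wires in Checkstyle may differ):

  <module name="Checker">
    <module name="TreeWalker">
      <!-- Flags any .java file that declares more than one top-level class -->
      <module name="OneTopLevelClass"/>
    </module>
  </module>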
>
> In 7.0, I am finding that the file is written to disk very early on
> and it is being updated every second or so. Has something changed in 7.0
> that is causing this? I tried something similar with Solr 6.5 and I was
> able to get files of almost a GB in size on disk.
Interesting observation, Nawab
17 October 2017, Apache Lucene™ 7.1.0 available
The Lucene PMC is pleased to announce the release of Apache Lucene 7.1.0.
Apache Lucene is a high-performance, full-featured text search engine
library written entirely in Java. It is a technology suitable for
nearly any application that requires full-text search, especially cross-platform.
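For Maven users, the core artifact for this release can be pulled in with the
usual coordinates (add the analyzer/queryparser modules you need separately):

  <dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-core</artifactId>
    <version>7.1.0</version>
  </dependency>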
I have recently been looking at the Solr and Lucene source. Could I try to
fix this error and submit a patch?
380382...@qq.com
From: Uwe Schindler
Date: 2017-10-17 15:45
To: java-user@lucene.apache.org
CC: d...@lucene.apache.org
Subject: RE: run in eclipse error
Hi,
this has nothing to do with the
Hi,
this has nothing to do with the Java version. I generally ignore this
Eclipse-failure as I only develop in Eclipse, but run from command line. The
reason for this behaviour is a problem with Eclipse's resource
management/compiler and the way some classes in Solr (especially the facet
comp
Hi,
I have tuned (or tried to tune) my settings to flush the segment only
when it has reached its maximum size. At the moment, I am using my
application with only a couple of threads (I have limited it to one thread for
analyzing this scenario), and my ramBufferSizeMB is set to ~20GB. With
this, I
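For context, the flush-by-size knobs live in the <indexConfig> section of
solrconfig.xml; the fragment below is only a sketch of a ~20GB buffer, not the
poster's actual settings. Note also that Lucene applies a per-indexing-thread
hard limit (IndexWriterConfig's DEFAULT_RAM_PER_THREAD_HARD_LIMIT_MB, 1945 MB
by default), which matches the 1945MB segment size reported earlier in the
thread, so a single indexing thread will flush at roughly that size even with
a much larger ramBufferSizeMB.

  <indexConfig>
    <!-- Flush a new segment once buffered documents reach this size;
         20480 MB ~= 20 GB (illustrative value only) -->
    <ramBufferSizeMB>20480</ramBufferSizeMB>
    <!-- maxBufferedDocs is left at its default (disabled), so only the RAM
         buffer size or an explicit commit triggers a flush -->
  </indexConfig>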