If you used the Collections API and two cores mysteriously failed to
reload, that would be a bug, assuming the replicas in question were up
and running at the time you reloaded.
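For anyone following along, a collection RELOAD through the Collections API is a single HTTP call; the host, port, and collection name below are placeholders for your own setup:

```shell
# Placeholders; adjust to your cluster. RELOAD asks every core of the
# collection, on every node, to reload its config and schema.
host=localhost; port=8983; collection=collection1
url="http://${host}:${port}/solr/admin/collections?action=RELOAD&name=${collection}"
echo "$url"
# then issue it, e.g.:  curl "$url"
```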
Thanks for letting us know what's going on.
Erick
On Tue, Mar 10, 2015 at 4:34 AM, Martin de Vries wrote:
ct, so this issue shouldn't be
related to that.
Sounds like it may just be the bugs Mark is referencing; sorry, I don't
have the JIRA numbers right off.
Best,
Erick
On Thu, Mar 5, 2015 at 4:46 PM, Shawn Heisey wrote:
On 3/5/2015 3:13 PM, Martin de Vries wrote:
I understand there
segment merges is in an incrementally built index.
The admin UI screen is rooted in the pre-cloud days; the Master/Slave
labels are entirely misleading. In SolrCloud, since all the raw data is
forwarded to all replicas, and any auto commits that happen may very
well be slightly out of sync, the index si
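One concrete reason the per-replica indexes drift: each replica runs its own autoCommit timers, so the commits that write segments fire at slightly different moments on each node. The knobs live in solrconfig.xml; the values below are illustrative, not a recommendation:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- hard commit: flushes segments to disk; each replica's timer runs independently -->
  <autoCommit>
    <maxTime>15000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- soft commit: opens a new searcher so changes become visible -->
  <autoSoftCommit>
    <maxTime>5000</maxTime>
  </autoSoftCommit>
</updateHandler>
```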
Hi Andrew,
Even our master index is corrupt, so I'm afraid this won't help in our
case.
Martin
Andrew Butkus wrote on 05.03.2015 16:45:
Force a fetchindex on slave from master command:
http://slave_host:port/solr/replication?command=fetchindex - from
http://wiki.apache.org/solr/SolrRepli
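Spelled out with placeholders (in a multi-core setup the core name goes in the path; the wiki's single-core form omits it):

```shell
host=slave_host; port=8983; core=core1   # placeholders for your setup
url="http://${host}:${port}/solr/${core}/replication?command=fetchindex"
echo "$url"
# curl "$url"   # triggers an immediate pull of the master's index
```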
Hi,
We have index corruption on some cores in our SolrCloud running version
4.8.1. The index is corrupt on several servers (for example: when we do
an fq search we get results on some servers but not on others, even
though the stored document contains the field on all servers).
A full re
Hi,
I have two questions about upgrading Solr:
- We upgrade Solr often, to match the latest version. We have a number
of servers in a SolrCloud and prefer to upgrade one or two servers first,
then upgrade the other servers a few weeks later when we are sure
everything is stable. Is this the reco
We have been running stable for a full day now, so the bug has been fixed.
Many thanks!
Martin
Martin, I’ve committed the SOLR-5875 fix, including to the
lucene_solr_4_7 branch.
Any chance you could test the fix?
Hi Steve,
I'm very happy you found the bug. We are running the version from SVN
on one server and it's already been running fine for 5 hours. If it's
still stable tomorrow then w
Hi,
When our server crashes the memory fills up fast, so I think it might
be a specific query that causes our servers to crash. I think the query
won't be logged because it doesn't finish. Is there anything we can do
to see the currently running queries in the Solr server (so we can see
them
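One low-tech way to see in-flight queries is a JVM thread dump: requests stuck in Solr's SearchHandler show up as running Jetty (qtp) threads. A sketch, using a fabricated dump for illustration; in practice you'd capture the real one with `jstack <solr-pid> > threads.txt`:

```shell
# Fabricated sample of what a jstack dump looks like (real dumps come
# from: jstack <solr-pid> > threads.txt)
cat > threads.txt <<'EOF'
"qtp12345-42" #42 prio=5 RUNNABLE
    at org.apache.solr.handler.component.SearchHandler.handleRequestBody
"qtp12345-43" #43 prio=5 WAITING
    at java.lang.Object.wait
EOF
# keep only the request threads that are inside SearchHandler
grep -B1 'SearchHandler' threads.txt
```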
The memory leak seems to be in:
org.apache.solr.handler.component.ShardFieldSortedHitQueue
I think our issue might be related to this one, because this change was
introduced in 4.7 and touches ShardFieldSortedHitQueue:
https://issues.apache.org/jira/browse/SOLR-5354
Is the memor
> IndexSchema is using 62% of the memory
That seems odd. Can you see what objects are taking all the RAM in
the
IndexSchema?
We investigated this and found out that a dictionary was loaded for
each core, taking loads of memory. We then set shareSchema=true in the
config. The memory usage decreas
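For reference, in the legacy solr.xml format (Solr 4.x) that setting is an attribute on the <cores> element; a sketch, with illustrative core names:

```xml
<solr persistent="true">
  <!-- shareSchema="true": cores whose schema.xml is identical share a single
       IndexSchema instance, so things like dictionaries load only once -->
  <cores adminPath="/admin/cores" shareSchema="true">
    <core name="core1" instanceDir="core1" />
    <!-- ... -->
  </cores>
</solr>
```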
We parsed the "Unreachable Objects" of the memory dump.
The memory leak seems to be in:
org.apache.solr.handler.component.ShardFieldSortedHitQueue
https://www.dropbox.com/s/hdv49xlb4g4wi03/Screenshot%202014-03-07%2016.51.56.png
Martin
Hi,
We have 5 Solr servers in a Cloud with about 70 cores and 12GB
indexes in total (every core has 2 shards, so it's 6 GB per server).
After upgrade to Solr 4.7 the Solr servers are crashing constantly
(each server about one time per hour). We currently don't have any clue
about the reason.
ied
both the G1 garbage collector and the regular one; the problem happens
with both of them.
We use Java 1.6 on some servers. Will Java 1.7 be
better?
Martin
Martin de Vries wrote on 12.11.2013 10:45:
>
Hi,
>
> We have:
>
> Solr 4.5.1 - 5 servers
> 36 cores, 2 shards
Hi,
We have:
Solr 4.5.1 - 5 servers
36 cores, 2 shards each, 2 servers per shard (every core is on 4
servers)
about 4.5 GB total data on disk per server
4GB JVM-Memory per server, 3GB average in use
Zookeeper 3.3.5 - 3 servers (one shared with Solr)
haproxy load balancing
Our SolrCloud is ver