I haven't had time to really take a look at this, but I read a couple of
articles regarding hard commits and it actually makes sense. We were
seeing tlogs of multiple GBs during ingest. I will have some time in a
couple of weeks to come back to testing indexing. Thanks for the help.
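For reference, the hard-commit interval being discussed lives in the updateHandler section of solrconfig.xml. A minimal sketch, with assumed example intervals (not anyone's actual settings from this thread); openSearcher=false lets the hard commit truncate the tlog without the cost of reopening a searcher:

```xml
<!-- solrconfig.xml (sketch; interval values are illustrative assumptions) -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>15000</maxTime>           <!-- hard commit every 15s: flushes and truncates the tlog -->
    <openSearcher>false</openSearcher> <!-- don't reopen a searcher on hard commit -->
  </autoCommit>
  <autoSoftCommit>
    <maxTime>60000</maxTime>           <!-- soft commit controls when new docs become visible -->
  </autoSoftCommit>
</updateHandler>
```

With frequent hard commits the tlog stays small, since it only has to cover documents received since the last hard commit.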
Vy
-
We ran into this during our indexing process running on 4.10.3. After
increasing ZooKeeper timeouts, client timeouts, and socket timeouts, and
implementing retry logic in our loading process, the thing that finally
worked was changing the hard-commit timing. We were performing a hard
commit every 5 minutes and aft
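The retry logic mentioned above might look something like the following minimal sketch. This is plain Java with a hypothetical withRetry helper; the Callable stands in for whatever SolrJ add/commit call the loading process makes, and the backoff values are illustrative assumptions:

```java
import java.util.concurrent.Callable;

public class RetryingLoader {
    // Run an operation up to maxAttempts times, with exponential backoff
    // between failures. Rethrows the last exception if all attempts fail.
    static <T> T withRetry(Callable<T> op, int maxAttempts, long baseDelayMs) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {        // e.g. java.net.SocketException: Broken pipe
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(baseDelayMs * (1L << (attempt - 1)));
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical stand-in for a SolrJ server.add(batch) call that
        // fails twice with a broken pipe before succeeding.
        final int[] calls = {0};
        String result = withRetry(() -> {
            if (++calls[0] < 3) throw new java.net.SocketException("Broken pipe");
            return "indexed";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
        // prints "indexed after 3 attempts"
    }
}
```

Note that retries only paper over the symptom; as the poster found, the broken pipes stopped once the commit settings were fixed.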
Right now index size is about 10GB on each shard (yes, I could use more RAM),
but I'm looking more for a step-up than a step-down approach. I will try
adding more RAM to these machines as my next step.
1. Zookeeper is external to these boxes in a three node cluster with more
than enough RAM to keep
On 4/13/2015 10:11 PM, vsilgalis wrote:
just a couple of notes:
this is a 2-shard setup with 2 nodes per shard.
Currently these are on VMs with 8 cores and 8GB of RAM each (Java max heap
is ~5588MB, but we usually never even get that high), backed by an NFS file
store where we keep the indexes (NetApp SAN with NFS exports on SAS
disk).
We recently upgraded from Solr 4.2.1 to 4.10.2 and have been seeing the
dreaded broken-pipe errors when reindexing all our content.
Specifically:
ERROR - 2015-04-13 17:09:12.310;
org.apache.solr.update.StreamingSolrServers$1; error
java.net.SocketException: Broken pipe
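For what it's worth, the socket and connection timeouts mentioned earlier in the thread are configured in solr.xml for distributed requests in Solr 4.x. A sketch with assumed example values (milliseconds), not a recommendation for this particular cluster:

```xml
<!-- solr.xml (sketch; timeout values are illustrative assumptions) -->
<shardHandlerFactory name="shardHandlerFactory" class="HttpShardHandlerFactory">
  <int name="socketTimeout">600000</int> <!-- read timeout on inter-node requests -->
  <int name="connTimeout">60000</int>    <!-- timeout establishing the connection -->
</shardHandlerFactory>
```

Raising these can mask slow commits or GC pauses, which is why the hard-commit change discussed above turned out to be the real fix.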