Erick,
I am using Python, so I think SolrJ is not an option. I wrote my own
libraries to connect to Solr and interpret the Solr responses.
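
Since the libraries are home-grown, here is a minimal sketch of the kind of
client-side failover they could add, assuming the "requests" package and
hypothetical node URLs:

    import requests

    # Hypothetical node list -- replace with the real eight nodes.
    NODES = [
        "http://solr-node1:8983/solr",
        "http://solr-node2:8983/solr",
    ]

    def solr_select(collection, params):
        """Try each node in turn so one dead node cannot stop all queries."""
        last_error = None
        for node in NODES:
            try:
                resp = requests.get("%s/%s/select" % (node, collection),
                                    params=params, timeout=5)
                resp.raise_for_status()
                return resp.json()
            except requests.RequestException as err:
                last_error = err  # node down or erroring; try the next one
        raise last_error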

I will try load balancing via the Apache server that is already in front of
Solr before I change my setup; I think that will be simpler. I was not aware
of the single point of failure in my SolrCloud setup when I built my
infrastructure.
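
For example, a minimal mod_proxy_balancer sketch of that idea, assuming
httpd has mod_proxy and mod_proxy_balancer loaded (the member addresses are
hypothetical):

    <Proxy "balancer://solrcluster">
        # Hypothetical members -- list all eight real nodes here.
        BalancerMember "http://solr-node1:8983/solr" retry=60
        BalancerMember "http://solr-node2:8983/solr" retry=60
        # Spread by request count; a failed member is retried after 60s.
        ProxySet lbmethod=byrequests
    </Proxy>
    ProxyPass        "/solr" "balancer://solrcluster"
    ProxyPassReverse "/solr" "balancer://solrcluster"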

Thank you so much for your help,
Koji

On Thu, Aug 15, 2019 at 14:11, Erick Erickson <erickerick...@gmail.com> wrote:

> OK, if you’re sending HTTP requests to a single node, that’s
> something of an anti-pattern unless it’s a load balancer that
> sends requests to random nodes in the cluster. Do note that
> even if you do send all HTTP requests to one node, the top-level
> request will be forwarded to the other nodes in the cluster.
>
> But if your single node dies, then indeed there’s no way for Solr
> to get the request to the other nodes.
>
> If you use SolrJ, in particular CloudSolrClient, it’s ZooKeeper-aware
> and will both avoid dead nodes _and_ distribute the top-level
> queries to all the Solr nodes. It’ll also be informed when a dead
> node comes back and will put it back into the rotation.
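>
> Since you’re in Python, pysolr ships an analogous ZooKeeper-aware
> client; a minimal sketch, assuming pysolr is installed with its kazoo
> extra and that the ZooKeeper addresses below are hypothetical:
>
>     import pysolr
>
>     # Hypothetical ZooKeeper ensemble -- use your real zkHost string.
>     zk = pysolr.ZooKeeper("zk1:2181,zk2:2181,zk3:2181")
>     # Watches cluster state, avoids dead nodes, and spreads queries
>     # across the live nodes, much like CloudSolrClient does.
>     solr = pysolr.SolrCloud(zk, "my_collection", timeout=10)
>     results = solr.search("*:*", rows=10)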
>
> Best,
> Erick
>
> > On Aug 15, 2019, at 10:14 AM, Kojo <rbsnk...@gmail.com> wrote:
> >
> > Erick,
> > I am starting to think that my setup has more than one problem.
> > As I said before, I am not balancing the load across my Solr nodes, and
> > I have eight of them. All of my web application requests go to one Solr
> > node, the only one that dies. If I distribute the load across the other
> > nodes, is it possible that these problems will go away?
> >
> > Even if I downsize the SolrCloud setup to two boxes with two nodes each,
> > with fewer shards than the 16 I have now, I would like to know your
> > opinion on the question above.
> >
> > Thank you,
> > Koji
> >
> >
> >
> >
> > On Wed, Aug 14, 2019 at 14:15, Erick Erickson <erickerick...@gmail.com>
> > wrote:
> >
> >> Kojo:
> >>
> >> On the surface, this is a reasonable configuration. Note that you may
> >> still want to decrease the Java heap, but only if you have enough “head
> >> room” for memory spikes.
> >>
> >> How do you know if you have “head room”? Unfortunately the only good
> >> answer is “you have to test”. You can look at the GC logs to see what
> >> your maximum heap requirements are, then add “some extra”.
> >>
> >> Note that there’s a balance here. Let’s say you can run successfully
> >> with X heap, so you allocate X + 0.1X to the heap. You can wind up
> >> spending a large amount of time in garbage collection. I.e. GC kicks in
> >> and recovers _just_ enough memory to continue for a very short while,
> >> then goes into another GC cycle. You don’t hit OOMs, but your system is
> >> slow.
> >>
> >> OTOH, let’s say you need X and allocate 3X. Garbage will accumulate and
> >> full GCs are rarer, but when they occur they take longer.
> >>
> >> And the G1GC collector is the current preference.
> >>
> >> As I said, testing is really the only way to determine what the magic
> >> number is.
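> >>
> >> For illustration, a minimal solr.in.sh sketch of the knobs involved;
> >> the values shown are assumptions to validate with exactly that testing:
> >>
> >>     # Heap ceiling -- raise it only if the GC logs show you need to.
> >>     SOLR_HEAP="16g"
> >>     # Override Solr's default collector flags to use G1GC.
> >>     GC_TUNE="-XX:+UseG1GC -XX:MaxGCPauseMillis=250"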
> >>
> >> Best,
> >> Erick
> >>
> >>> On Aug 14, 2019, at 9:20 AM, Kojo <rbsnk...@gmail.com> wrote:
> >>>
> >>> Shawn,
> >>>
> >>> Only my web application accesses this Solr. At a first look at the HTTP
> >>> server logs I didn't find anything unusual. Sometimes a very big
> >>> crawler hits my servers; that was my first bet.
> >>>
> >>> No scheduled cron jobs were running at that time either.
> >>>
> >>> I think that I will reconfigure my boxes with two Solr nodes each
> >>> instead of four and increase the heap to 16GB. These boxes only run
> >>> Solr and have 64GB of RAM each. Each Solr node will use 16GB, and the
> >>> box will still have 32GB left for the OS. What do you think?
> >>>
> >>> This is a production server, so I will plan the migration.
> >>>
> >>> Regards,
> >>> Koji
> >>>
> >>>
> >>> On Tue, Aug 13, 2019 at 12:58, Shawn Heisey <apa...@elyograg.org>
> >>> wrote:
> >>>
> >>>> On 8/13/2019 9:28 AM, Kojo wrote:
> >>>>> Here are the last two gc logs:
> >>>>>
> >>>>> https://send.firefox.com/download/6cc902670aa6f7dd/#Ee568G9vUtyK5zr-nAJoMQ
> >>>>
> >>>> Thank you for that.
> >>>>
> >>>> Analyzing the 20MB gc log, the system actually looks pretty healthy.
> >>>> That log covers 58 hours of runtime, and everything looks very good
> >>>> to me.
> >>>>
> >>>> https://www.dropbox.com/s/yu1pyve1bu9maun/gc-analysis-kojo.png?dl=0
> >>>>
> >>>> But the small log shows a different story.  That log only covers a
> >>>> little more than four minutes.
> >>>>
> >>>> https://www.dropbox.com/s/vkxfoihh12brbnr/gc-analysis-kojo2.png?dl=0
> >>>>
> >>>> What happened at approximately 10:55:15 PM on the day that the smaller
> >>>> log was produced?  Whatever happened caused Solr's heap usage to
> >>>> skyrocket and require more than 6GB.
> >>>>
> >>>> Thanks,
> >>>> Shawn
> >>>>
> >>
> >>
>
>
