I think it's important to be able to roll out changes to nodes in a way the
user controls (e.g. one node at a time), instead of only having an
all-at-once option. I really liked Tomas's explanation of the need. The
same need exists for collections, and Solr satisfies that today via
configSets.
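For collections, that kind of controlled rollout already works via configSets today; a sketch of the usual sequence (configset and collection names are hypothetical, host/port assume defaults):

```shell
# Upload a new version of the configset alongside the old one
bin/solr zk upconfig -n myconf_v2 -d path/to/conf -z localhost:2181

# Point one collection at it and reload; other collections keep the old config
curl "http://localhost:8983/solr/admin/collections?action=MODIFYCOLLECTION&collection=coll1&collection.configName=myconf_v2"
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=coll1"
```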
Thanks Ishan,
I still don't think it covers the cases very well. The ways in which that
handler could be screwing things up are infinite (it could be
corrupting local cores, causing OOMs, it could be spawning infinite loops,
you name it). If the new handler requires initialization that
On Fri, 4 Sep, 2020, 12:05 am Erick Erickson,
wrote:
> I wish everyone would just use Solr the way I think about it ;)
>
https://twitter.com/ichattopadhyaya/status/1210868171814473728
> > On Sep 3, 2020, at 2:11 PM, Tomás Fernández Löbbe
> wrote:
> >
> > I can see that some of these
Hmmm, interesting point about deliberately changing one solr.xml for testing
purposes. To emulate that on a per-node basis you’d have to have something like
a “node props” associated with each node, to which my instant reaction is
“y”.
As far as API only, I’d assumed changes to
Hi Tomas,
This type of problem can be solved using alternate strategies. Here's how
you can do so: register the updated version of the plugin in /healthcheck2
(while simultaneously the older version continues to work at /healthcheck).
Make sure it works. Once it does, update the /healthcheck
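As a sketch of that side-by-side swap with the Config API (the /healthcheck paths are from the thread; the core name and handler class are hypothetical):

```shell
# 1. Register the new version side by side at /healthcheck2
curl -X POST -H 'Content-Type: application/json' \
  "http://localhost:8983/solr/mycore/config" -d '{
    "create-requesthandler": {
      "name": "/healthcheck2",
      "class": "com.example.HealthCheckHandlerV2"
    }
  }'

# 2. Once /healthcheck2 is verified, swap the old path over to the new class
curl -X POST -H 'Content-Type: application/json' \
  "http://localhost:8983/solr/mycore/config" -d '{
    "update-requesthandler": {
      "name": "/healthcheck",
      "class": "com.example.HealthCheckHandlerV2"
    }
  }'
```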
I can see that some of these configurations should be moved to
clusterprops.json, but I don’t believe this is the case for all of them. Some
are configurations that target the local node (i.e. sharedLib path),
some are needed before connecting to ZooKeeper (zk config). Configuration
of global
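For reference, those node-local and pre-ZooKeeper settings are what solr.xml carries today; a fragment with illustrative values:

```xml
<solr>
  <!-- node-local: a filesystem path on this machine -->
  <str name="sharedLib">/opt/solr/lib</str>
  <!-- needed before the node can even connect to ZooKeeper -->
  <solrcloud>
    <str name="zkHost">${zkHost:}</str>
    <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
  </solrcloud>
</solr>
```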
bq. Isn’t solr.xml a way to hardcode config in a more flexible way than a
Java class?
Yes, and the problem word here is “flexible”. For a single-node system that
flexibility is desirable. Flexibility comes at the cost of complexity,
especially in the SolrCloud case. In this case, not so
Everything can of course be hard coded. Isn’t solr.xml a way to hardcode
config in a more flexible way than a Java class?
I hoped to have the config passed to the plugin be overridable via system
properties like solr.xml allows (free disk size thresholds, log vs. fail
when some constraints
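solr.xml gets that behavior from its `${property:default}` substitution; a sketch with hypothetical property names:

```xml
<!-- overridable at startup with -Dsolr.diskFreeThresholdGB=... -->
<int name="diskFreeThresholdGB">${solr.diskFreeThresholdGB:20}</int>
<!-- log violations rather than fail, unless overridden -->
<str name="constraintMode">${solr.constraintMode:log}</str>
```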
I’d also like solr.xml to go away if at all possible, at least in cloud mode.
Some important points (to me at least).
- having to hand-edit _anything_ is A Bad Thing in a large, distributed system
since you need:
— an infrastructure to automagically distribute whatever it is you’re editing
to
> to stick with solr.xml for current and ongoing feature development and not
> hold up development based on things that may or may not happen in the future.
Where did I say that? There are so many features built recently using
clusterprops.json. The least we can do is stop piling more into
Let’s not do another magic overlay json file on top of xml with all the
confusion it creates.
+1 to stick with solr.xml for current and ongoing feature development and not
hold up development based on things that may or may not happen in the future.
Discussing removal of solr.xml is a typical
Noble,
In order for *clusterprops.json* to replace what's currently done with
*solr.xml*, we'd need to introduce a mechanism to make configuration
available out of the box (when ZooKeeper is still empty). And if
*clusterprops.json* is to be used in standalone mode, it also must live
somewhere
Let's take a step back and take a look at the history of Solr.
Long ago there was only standalone Solr with a single core
there were 3 files
* solr.xml : everything required for CoreContainer went here
* solrconfig.xml : per-core configurations went here
* schema.xml: this is not relevant for
This is way above my head, but I wonder if we could dogfood any of
this with a future Solr cloud example? At the moment, it sets up 2-4
nodes, 1 collection, any number of shards/replicas. And it does it by
directory clone and some magic in bin/solr to ensure logs don't step
on each other's toes.
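For reference, that cloud example is the one driven by bin/solr today (a sketch of current behavior):

```shell
# interactive: prompts for 1-4 nodes, collection name, shards/replicas
bin/solr start -e cloud

# non-interactive: sets up two nodes on 8983/7574 with defaults
bin/solr start -e cloud -noprompt
```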
Sure, of course someone has to set up the first one; that should be an
initial collaboration with devops that one can never escape. Mount points
can be established in an automated fashion and named by convention. My
yearning is to make the devops side of it devops based (provide machines
that look
As for AMIs, you have to do it at least once, right? Or are you thinking of
someone using a pre-existing AMI? I see your point for the case of someone
using the official Solr image as-is without any volume mounts I guess. I'm
wondering if trying to put node configuration inside ZooKeeper is
Which means whoever wants to make changes to Solr needs to be
able/willing/competent to make AMIs/Docker images/etc ... and one has to manage
versions of those variants as opposed to managing versions of config files.
On Fri, Aug 28, 2020 at 1:55 PM Tomás Fernández Löbbe
wrote:
> I think if you are
As far as the new autoscaling you mention as an example Gus, its
configuration goes in solr.xml as per current work in progress. This means
that if you put this solr.xml in ZK, the autoscaling config will happen as
you describe (not the adding/removing-nodes part, not there yet, but
reading the
I think if you are using AMIs (or Docker), you could put the node
configuration inside the AMI (or Docker image), as Ilan said, together with
the binaries. Say you have a custom top-level handler (Collections, Cores,
Info, whatever), which takes some arguments and is configured in solr.xml
and
Putting solr.xml in zookeeper means you can add a node simply by starting
solr pointing to the zookeeper, and ensure a consistent solr.xml for the
new node if you've customized it. Since I rarely (never) hit use cases
where I need a different per-node solr.xml, I generally advocate putting it
in ZK,
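That workflow is available today; a sketch (ZK address and path are illustrative):

```shell
# put the customized solr.xml into ZooKeeper once
bin/solr zk cp file:/path/to/solr.xml zk:/solr.xml -z zk1:2181

# any new node then picks it up just by pointing at the same ensemble
bin/solr start -c -z zk1:2181
```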
I think of it exactly as Jan described it. solr.xml is the node
configuration; it should usually be the same across the cluster, but not
necessarily all the time (i.e. during a deployment they may differ).
Putting it in ZooKeeper is, I believe, a mistake, because then you see a
file up there, but
What I'm really looking for (and currently my understanding is that solr.xml
is the only option) is *a cluster config a Solr dev can set as a default* when
introducing a new feature for example, so that the config is picked out of
the box in SolrCloud, yet allowing the end user to override it if
I interpret solr.xml as the node-local configuration for a single node.
clusterprops.json is the cluster-wide configuration applying to all nodes.
solrconfig.xml is of course per core, etc.
solr.in.sh is the per-node ENV-VAR way of configuring a node, and many of those
are picked up in solr.xml
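A sketch of that handoff (the property name is hypothetical):

```shell
# solr.in.sh: per-node environment, passed to the JVM as a system property
SOLR_OPTS="$SOLR_OPTS -Dsolr.sharedLib=/opt/solr/lib"
```

```xml
<!-- solr.xml: picks up the env-provided property, with a default -->
<str name="sharedLib">${solr.sharedLib:}</str>
```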
I am not sure I understand the deeper point of the question, but just
in terms of files
*) Per core: solrconfig.xml can get overridden with config-api and the
overrides go into configoverlay.json
*) Per core: core.properties can store some configurations values that
are used in variable
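A sketch of those two per-core mechanisms (core name and property names are hypothetical): a Config API call like the following is persisted into that core's configoverlay.json:

```shell
# goes into this core's configoverlay.json
curl -X POST -H 'Content-Type: application/json' \
  "http://localhost:8983/solr/mycore/config" -d '{
    "set-user-property": {"update.autoCreateFields": "false"}
  }'
```

and a core.properties entry such as `my.custom.prop=42` can then be referenced from that core's solrconfig.xml as `${my.custom.prop:10}` (the `:10` being a default).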
solr.xml can also exist in ZooKeeper; it doesn’t _have_ to exist locally. You
do have to restart to have any changes take effect.
Long ago in a Solr far away solr.xml was where all the cores were defined. This
was before “core discovery” was put in. Since solr.xml had to be there anyway
and
I want to ramp-up/discuss/inventory configuration options in Solr. Here's
my understanding of what exists and what could/should be used depending on
the need. Please correct/complete as needed (or point to documentation I
might have missed).
*There are currently 3 sources of general