As far as configuration is concerned, everything Solr-specific goes into ZooKeeper. The solr.xml can also be put into ZooKeeper. JNDI is not supported -- can you explain why you need it? Can cluster properties solve the problem, or replica properties? Both of those live in ZooKeeper.
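For reference, both kinds of properties can be set at runtime through the Collections API, and Solr persists them in ZooKeeper for you. A rough sketch (the host/port, collection, shard, and replica names below are placeholders, not taken from this thread, and assume a running cluster):

```shell
# Set a cluster-wide property; Solr stores it in clusterprops.json in ZooKeeper.
# Assumes a Solr node listening on localhost:8983.
curl 'http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https'

# Set a property on a single replica of a collection instead
# (collection/shard/replica names are hypothetical).
curl 'http://localhost:8983/solr/admin/collections?action=ADDREPLICAPROP&collection=mycoll&shard=shard1&replica=core_node1&property=preferredLeader&property.value=true'
```

Either way there is no file to hand-edit on each node, which is part of what makes them a possible replacement for JNDI-style per-deployment settings.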
Integration testing is possible using MiniSolrCloudCluster, which is built to be used programmatically. It starts a real ZooKeeper instance along with real Solr instances running on Jetty.

Most other things you mention are by design. For example, a configuration change applies to the whole cluster at once; otherwise sneaky bugs, where one node misses a config change, become easy to introduce and very, very difficult to debug. But you can, for example, create a new collection with just one node, stage your changes there, run health checks, and then update the configuration of the main collection.

The old master/slave setup was very simple; the cloud is a whole new ball game. Having control of Jetty has given Solr a lot of flexibility. At this time I discourage anyone from changing anything inside Jetty's config. If there are certain things that are not possible without doing so, then please let us know so we can figure out how to build them into Solr itself. We want Jetty to be an implementation detail and no more. If you have suggestions on how we can fix some of these problems, please speak up.

On Sun, Oct 9, 2016 at 5:41 AM, Aristedes Maniatis <a...@maniatis.org> wrote:
> On 9/10/16 2:09am, Shawn Heisey wrote:
> > One of the historical challenges on this mailing list is that we were
> > rarely aware of what steps the user had taken to install or start Solr,
> > and we had to support pretty much any scenario. Since 5.0, the number
> > of supported ways to deploy and start Solr is greatly reduced, and those
> > ways were written by the project, so we tend to have a better
> > understanding of what is happening when a user starts Solr. We also
> > usually know the relative location of the logfiles and Solr's data.
> >
> This migration is causing a lot of grief for us as well, and we are still
> struggling to get all the bits in place.
> Before:
>
> * gradle build script
> * gradle project includes our own unit tests, run in jenkins
> * generates war file
> * relevant configuration is embedded into the build
> * deployment specific variables (db uris, passwords, ip addresses)
> conveniently contained in one context.xml file
>
>
> Now:
>
> * Solr version is no longer bound to our tests or configuration
>
> * configuration is now scattered in three places:
> - zookeeper
> - solr.xml in the data directory
> - jetty files as part of the solr install that you need to replace (for
> example to set JNDI properties)
>
> * deployment is also scattered:
> - Solr platform specific package manager (pkg in FreeBSD in my case,
> which I've had to write myself since it didn't exist)
> - updating config files above
> - writing custom scripts to push Zookeeper configuration into production
> - creating collections/cores using the API rather than in a config file
>
> * unit testing no longer possible since you can't run a mock zookeeper
> instance
>
> * zookeeper is very hard to integrate with deployment processes (salt,
> puppet, etc) since configuration is no longer a set of version controlled
> files
>
> * you can't change the configuration of one node as a 'soft deployment':
> the whole cluster needs to be changed at once
>
> If we didn't need a less broken replication solution, I'd stay on Solr4
> forever.
>
>
> I really liked the old war deployment. It bound the solr version and
> configuration management into our version controlled source repository
> except for one context.xml file that contained server specific deployment
> options. Nice.
>
> The new arrangement is a mess.
>
>
> Ari
>
>
>
> --
> -------------------------->
> Aristedes Maniatis
> GPG fingerprint CBFB 84B4 738D 4E87 5E5C 5EFA EF6A 7D2E 3E49 102A
>

--
Regards,
Shalin Shekhar Mangar.