Re: maxBooleanClauses change in solr.xml not reflecting in solr 8.4.1
Thanks Hoss. Yes, I was making the change to solr.xml in the wrong directory earlier.

Also, as you said:

: You need to update EVERY solrconfig.xml that the JVM is loading for this to
: actually work.

that has not been true for a while, see SOLR-13336 / SOLR-10921 ...

I validated this and it's working as expected. We don't need to update every
solrconfig.xml. The value in solr.xml is global, and if maxBooleanClauses in
any collection's solrconfig.xml exceeds the limit specified in solr.xml, we
get the exception.

Thanks for replying.

On Wed, Jan 6, 2021 at 10:57 PM dinesh naik wrote:
> Thanks Shawn,
>
> This entry ${solr.max.booleanClauses:2048} in solr.xml was introduced only
> in the Solr 8.x line and was not present in the 7.6 version.
>
> We have this in solrconfig.xml in the 8.4.1 version:
> <maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>
>
> I was updating the solr.xml in the installation directory and not the
> installed data directory, hence the change was not reflecting.
> After updating the correct solr.xml and restarting the Solr nodes, the new
> value is working as expected.
>
> On Wed, Jan 6, 2021 at 10:34 PM Chris Hostetter wrote:
>
>> : You need to update EVERY solrconfig.xml that the JVM is loading for
>> : this to actually work.
>>
>> that has not been true for a while, see SOLR-13336 / SOLR-10921 ...
>>
>> : > 2. updated solr.xml :
>> : > <maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>
>> :
>> : I don't think it's currently possible to set the value with solr.xml.
>>
>> Not only is it possible, it's necessary -- the value in solr.xml acts as
>> a hard upper limit (and affects all queries, even internally expanded
>> queries) on the "soft limit" in solrconfig.xml (that only affects
>> explicitly supplied boolean queries from users)
>>
>> As to the original question...
>>
>> > 2021-01-05 14:03:59.603 WARN (qtp1545077099-27) x:col1_shard1_replica_n3
>> > o.a.s.c.SolrConfig solrconfig.xml: <maxBooleanClauses> of 2048 is greater
>> > than global limit of 1024 and will have no effect
>>
>> I attempted to reproduce this with 8.4.1 and did not see the problem you
>> are describing.
>>
>> Are you 100% certain you are updating the correct solr.xml file? If you
>> add some non-XML gibberish to the solr.xml you are editing, does the Solr
>> node fail to start up?
>>
>> Remember that when using SolrCloud, Solr will try to load solr.xml from ZK
>> first, and only look on local disk if it can't be found in ZK ... look for
>> log messages like "solr.xml found in ZooKeeper. Loading..." vs "Loading
>> solr.xml from SolrHome (not found in ZooKeeper)"
>>
>> -Hoss
>> http://www.lucidworks.com/
>
> --
> Best Regards,
> Dinesh Naik

--
Best Regards,
Dinesh Naik
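The hard-limit/soft-limit relationship described above can be sketched as the
following pair of config fragments (element names follow the stock 8.x
configs; the values shown are hypothetical, for illustration only):

```xml
<!-- solr.xml: hard, node-wide cap; applies to all queries, including
     internally expanded ones (hypothetical value) -->
<solr>
  <int name="maxBooleanClauses">${solr.max.booleanClauses:4096}</int>
</solr>

<!-- solrconfig.xml: soft, per-collection limit; only affects explicitly
     supplied boolean queries, and must not exceed the cap above -->
<maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>
```

If the per-collection value exceeds the node-wide cap, Solr logs the
"greater than global limit ... will have no effect" warning quoted earlier.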
Re: maxBooleanClauses change in solr.xml not reflecting in solr 8.4.1
Thanks Shawn,

This entry ${solr.max.booleanClauses:2048} in solr.xml was introduced only in
the Solr 8.x line and was not present in the 7.6 version.

We have this in solrconfig.xml in the 8.4.1 version:
<maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>

I was updating the solr.xml in the installation directory and not the
installed data directory, hence the change was not reflecting. After updating
the correct solr.xml and restarting the Solr nodes, the new value is working
as expected.

On Wed, Jan 6, 2021 at 10:34 PM Chris Hostetter wrote:

> : You need to update EVERY solrconfig.xml that the JVM is loading for this
> : to actually work.
>
> that has not been true for a while, see SOLR-13336 / SOLR-10921 ...
>
> : > 2. updated solr.xml :
> : > <maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>
> :
> : I don't think it's currently possible to set the value with solr.xml.
>
> Not only is it possible, it's necessary -- the value in solr.xml acts as
> a hard upper limit (and affects all queries, even internally expanded
> queries) on the "soft limit" in solrconfig.xml (that only affects
> explicitly supplied boolean queries from users)
>
> As to the original question...
>
> > 2021-01-05 14:03:59.603 WARN (qtp1545077099-27) x:col1_shard1_replica_n3
> > o.a.s.c.SolrConfig solrconfig.xml: <maxBooleanClauses> of 2048 is greater
> > than global limit of 1024 and will have no effect
>
> I attempted to reproduce this with 8.4.1 and did not see the problem you
> are describing.
>
> Are you 100% certain you are updating the correct solr.xml file? If you
> add some non-XML gibberish to the solr.xml you are editing, does the Solr
> node fail to start up?
>
> Remember that when using SolrCloud, Solr will try to load solr.xml from ZK
> first, and only look on local disk if it can't be found in ZK ... look for
> log messages like "solr.xml found in ZooKeeper. Loading..." vs "Loading
> solr.xml from SolrHome (not found in ZooKeeper)"
>
> -Hoss
> http://www.lucidworks.com/

--
Best Regards,
Dinesh Naik
Re: maxBooleanClauses change in solr.xml not reflecting in solr 8.4.1
: You need to update EVERY solrconfig.xml that the JVM is loading for this to
: actually work.

that has not been true for a while, see SOLR-13336 / SOLR-10921 ...

: > 2. updated solr.xml :
: > <maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>
:
: I don't think it's currently possible to set the value with solr.xml.

Not only is it possible, it's necessary -- the value in solr.xml acts as a
hard upper limit (and affects all queries, even internally expanded queries)
on the "soft limit" in solrconfig.xml (that only affects explicitly supplied
boolean queries from users)

As to the original question...

> 2021-01-05 14:03:59.603 WARN (qtp1545077099-27) x:col1_shard1_replica_n3
> o.a.s.c.SolrConfig solrconfig.xml: <maxBooleanClauses> of 2048 is greater
> than global limit of 1024 and will have no effect

I attempted to reproduce this with 8.4.1 and did not see the problem you are
describing.

Are you 100% certain you are updating the correct solr.xml file? If you add
some non-XML gibberish to the solr.xml you are editing, does the Solr node
fail to start up?

Remember that when using SolrCloud, Solr will try to load solr.xml from ZK
first, and only look on local disk if it can't be found in ZK ... look for
log messages like "solr.xml found in ZooKeeper. Loading..." vs "Loading
solr.xml from SolrHome (not found in ZooKeeper)"

-Hoss
http://www.lucidworks.com/
Re: maxBooleanClauses change in solr.xml not reflecting in solr 8.4.1
On 1/5/2021 8:26 AM, dinesh naik wrote:
> Hi all,
> I want to update the maxBooleanClauses to 2048 (from default value 1024).
> Below are the steps tried:
> 1. updated solrconfig.xml :
> <maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>

You need to update EVERY solrconfig.xml that the JVM is loading for this to
actually work.

maxBooleanClauses is an odd duck. At the Lucene level, where this matters, it
is a global (JVM-wide) variable. So whenever Solr sets this value, it applies
to ALL of the Lucene indexes that are being accessed by that JVM. When you
have multiple Solr cores, the last core that was loaded will set the max
clauses value for ALL cores.

If any of your solrconfig.xml files don't have that config, then it will be
set to the default of 1024 when that core is loaded or reloaded. Leaving the
config out is not a solution. So if any of your configs don't have the
setting, or set it to something lower than you need, you run the risk of
having the max value incorrectly set across the board.

Here are the ways that I think this could be fixed:

1) Make the value per-index in Lucene (or maybe even per-query) instead of
global.
2) Have Solr only change the global Lucene value if the config is *higher*
than the current global value.
3) Eliminate the limit entirely. Remove the config option from Solr and have
Solr hard-set it to the maximum value.
4) Move the maxBooleanClauses config to solr.xml instead of solrconfig.xml.

I think that option 1 is the best way to do it, but this problem has been
around for many years, so it's probably not easy to do. I don't think it's
going to happen. There are a number of existing issues in the Solr bug
tracker for changing how the limit is configured.

> 2. updated solr.xml :
> <maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>

I don't think it's currently possible to set the value with solr.xml.

Thanks,
Shawn
Re: maxBooleanClauses change in solr.xml not reflecting in solr 8.4.1
I experienced the same thing in Solr 8.7; it worked for me using a system
property. Set the system property in the solr.in.sh file.

On Tue, Jan 5, 2021 at 8:58 PM dinesh naik wrote:
> Hi all,
> I want to update the maxBooleanClauses to 2048 (from the default value
> 1024). Below are the steps tried:
>
> 1. updated solrconfig.xml :
> <maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>
>
> 2. updated solr.xml :
> <maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>
>
> 3. Restarted the solr nodes.
>
> 4. Tried a query with more than 2000 OR clauses and got the below warning
> messages in the solr logs:
>
> 2021-01-05 14:03:59.603 WARN (qtp1545077099-27) x:col1_shard1_replica_n3
> o.a.s.c.SolrConfig solrconfig.xml: <maxBooleanClauses> of 2048 is greater
> than global limit of 1024 and will have no effect
>
> 2021-01-05 14:03:59.603 WARN (qtp1545077099-27) x:col1_shard1_replica_n3
> o.a.s.c.SolrConfig set 'maxBooleanClauses' in solr.xml to increase global
> limit
>
> Note: In the 7.6.1 version we just needed to change solrconfig.xml and it
> worked.
>
> Kindly let me know if I am missing something for making it work in the
> 8.4.1 version.
> --
> Best Regards,
> Dinesh Naik
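A sketch of what that looks like in solr.in.sh (the property name comes from
the thread; the value and the use of SOLR_OPTS are assumptions -- adjust for
your setup):

```shell
# solr.in.sh -- pass the limit as a Java system property so every
# ${solr.max.booleanClauses:...} reference picks it up (hypothetical value)
SOLR_OPTS="$SOLR_OPTS -Dsolr.max.booleanClauses=4096"
```

Restart the Solr nodes afterwards so the property takes effect.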
maxBooleanClauses change in solr.xml not reflecting in solr 8.4.1
Hi all,
I want to update the maxBooleanClauses to 2048 (from the default value 1024).
Below are the steps tried:

1. updated solrconfig.xml :
<maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>

2. updated solr.xml :
<maxBooleanClauses>${solr.max.booleanClauses:2048}</maxBooleanClauses>

3. Restarted the solr nodes.

4. Tried a query with more than 2000 OR clauses and got the below warning
messages in the solr logs:

2021-01-05 14:03:59.603 WARN (qtp1545077099-27) x:col1_shard1_replica_n3
o.a.s.c.SolrConfig solrconfig.xml: <maxBooleanClauses> of 2048 is greater
than global limit of 1024 and will have no effect

2021-01-05 14:03:59.603 WARN (qtp1545077099-27) x:col1_shard1_replica_n3
o.a.s.c.SolrConfig set 'maxBooleanClauses' in solr.xml to increase global
limit

Note: In the 7.6.1 version we just needed to change solrconfig.xml and it
worked.

Kindly let me know if I am missing something for making it work in the 8.4.1
version.
--
Best Regards,
Dinesh Naik
Are values in solr.xml 32-bit or 64-bit?
I'm trying to find out the maximum values for the parameters specified in the
solr.xml file. Mainly I am interested in distribUpdateConnTimeout and
distribUpdateSoTimeout. I have tried setting those values to 0 in the hope
that it would set the timeout to infinite, but I don't think that worked. I
want to set these values to the maximum possible.

Just as FYI, we have multiple very large collections (total combined index
size of all shards is over 2TB for each collection, sometimes much more than
that) sharded across a handful of Solr nodes. So obviously the queries take a
long time, which is expected. I want to make sure the queries don't time out
and eventually return results. What is the best way to achieve this? When I
set these values to 10 minutes, I was getting timeout errors in the Solr logs
(the timeouts were occurring in intra-cluster communication).

Thank you in advance!

--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
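For reference, the stock solr.xml declares these settings as <int> elements
holding milliseconds, which suggests (this is an inference from the example
config, not something stated in the quoted docs) that the effective ceiling
is Java's Integer.MAX_VALUE, 2147483647 ms, roughly 24.8 days:

```xml
<!-- solr.xml, inside <solrcloud>: millisecond values, declared as ints
     (defaults shown are the stock ones; the 32-bit ceiling is an inference) -->
<int name="distribUpdateConnTimeout">${distribUpdateConnTimeout:60000}</int>
<int name="distribUpdateSoTimeout">${distribUpdateSoTimeout:600000}</int>
```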
Re: SolrCloud location for solr.xml
Apologies all, I just realised I replied to the wrong thread. This is in
response to "Solr cloud on Docker?", not "SolrCloud location for solr.xml".
Apologies for the confusion.

Thanks,
Dwane

From: Dwane Hall
Sent: Monday, 2 March 2020 7:31 PM
To: Jan Høydahl ; solr-user@lucene.apache.org
Subject: Re: SolrCloud location for solr.xml

Hey Jan,

Thanks for the info re swap; there are some interesting observations you've
mentioned below, particularly containers swapping by default. There was a
note on the Docker forum describing a similar situation to the one you
mention
(https://success.docker.com/article/node-using-swap-memory-instead-of-host-memory);
did you attempt these settings with the same result? It mentions Docker EE
specifically, but it might be worth a try.

In our environment we also run a vm.swappiness setting of 1 (our OS default),
which is inherited by our containers, as we don't enforce any memory or
resource limits directly on them. I did not attempt to turn off container
swap during my testing, so I don't have any benchmarks to relay back, but if
I get some clear air I'll try to spin up some tests and see if I can
replicate your observations.

There's a great resource/tutorial from the Alibaba guys on testing and
tweaking container resource limits, and their observations are far more
comprehensive than my experience (refs below). If you haven't seen it, it may
prove a useful reference for testing, as I've noticed some of the Docker
settings are not quite what you expect them to be. For example, the Docker
documentation defines the --cpus value as follows:

"Specify how much of the available CPU resources a container can use. For
instance, if the host machine has two CPUs and you set --cpus="1.5", the
container is guaranteed at most one and a half of the CPUs. This is the
equivalent of setting --cpu-period="100000" and --cpu-quota="150000".
Available in Docker 1.13 and higher."

From the reference below: CPUs = threads per core x cores per socket x
sockets; these "CPUs" are not physical CPUs. We found this out the hard way
when attempting to CPU-resource-limit a Fusion instance and wondered why it
was struggling to start all its services on a single thread instead of a
physical CPU.

It's a three part series:
https://www.alibabacloud.com/blog/594573?spm=a2c5t.11065265.1996646101.searchclickresult.612d62b1TGCY58
https://www.alibabacloud.com/blog/docker-container-resource-management-cpu-ram-and-io-part-2_594575
https://www.alibabacloud.com/blog/594579?spm=a2c5t.11065265.1996646101.searchclickresult.612d62b1TGCY58

I'll keep a keen eye on any developments and relay back if I get some tests
together.

Cheers,
Dwane

PS: I'm assuming you're testing Solr 8.4.1 on Linux hosts?

From: Jan Høydahl
Sent: Monday, 2 March 2020 12:01 AM
To: solr-user@lucene.apache.org
Subject: Re: SolrCloud location for solr.xml

As long as solr.xml is a mix of settings that need to be separate per node
and cluster-wide settings, it makes no sense to enforce it in ZK. Perhaps we
instead should stop requiring solr.xml and allow nodes to start without it.
Solr can then use a hard-coded version as fallback. Most users just copy the
example solr.xml into SOLR_HOME, so if we can simplify the 80% case I don't
mind if more advanced users put it in ZK or in the local file system.

Jan Høydahl

> 1. mar. 2020 kl. 01:26 skrev Erick Erickson :
>
> Actually, I do this all the time. However, it's because I'm always blowing
> everything away and installing a different version of Solr or some such,
> mostly laziness.
>
> We should move away from allowing solr.xml to be in SOLR_HOME when running
> in cloud mode IMO, but that'll need to be done in phases.
>
> Best,
> Erick
>
>> On Feb 28, 2020, at 5:17 PM, Mike Drob wrote:
>>
>> Hi Searchers!
>>
>> I was recently looking at some of the start-up logic for Solr and was
>> interested in cleaning it up a little bit. However, I'm not sure how
>> common certain deployment scenarios are. Specifically, is anybody doing
>> the following combination:
>>
>> * Using SolrCloud (i.e. state stored in zookeeper)
>> * Loading solr.xml from a local solr home rather than zookeeper
>>
>> Much appreciated! Thanks,
>> Mike
Re: SolrCloud location for solr.xml
Hey Jan,

Thanks for the info re swap; there are some interesting observations you've
mentioned below, particularly containers swapping by default. There was a
note on the Docker forum describing a similar situation to the one you
mention
(https://success.docker.com/article/node-using-swap-memory-instead-of-host-memory);
did you attempt these settings with the same result? It mentions Docker EE
specifically, but it might be worth a try.

In our environment we also run a vm.swappiness setting of 1 (our OS default),
which is inherited by our containers, as we don't enforce any memory or
resource limits directly on them. I did not attempt to turn off container
swap during my testing, so I don't have any benchmarks to relay back, but if
I get some clear air I'll try to spin up some tests and see if I can
replicate your observations.

There's a great resource/tutorial from the Alibaba guys on testing and
tweaking container resource limits, and their observations are far more
comprehensive than my experience (refs below). If you haven't seen it, it may
prove a useful reference for testing, as I've noticed some of the Docker
settings are not quite what you expect them to be. For example, the Docker
documentation defines the --cpus value as follows:

"Specify how much of the available CPU resources a container can use. For
instance, if the host machine has two CPUs and you set --cpus="1.5", the
container is guaranteed at most one and a half of the CPUs. This is the
equivalent of setting --cpu-period="100000" and --cpu-quota="150000".
Available in Docker 1.13 and higher."

From the reference below: CPUs = threads per core x cores per socket x
sockets; these "CPUs" are not physical CPUs. We found this out the hard way
when attempting to CPU-resource-limit a Fusion instance and wondered why it
was struggling to start all its services on a single thread instead of a
physical CPU.

It's a three part series:
https://www.alibabacloud.com/blog/594573?spm=a2c5t.11065265.1996646101.searchclickresult.612d62b1TGCY58
https://www.alibabacloud.com/blog/docker-container-resource-management-cpu-ram-and-io-part-2_594575
https://www.alibabacloud.com/blog/594579?spm=a2c5t.11065265.1996646101.searchclickresult.612d62b1TGCY58

I'll keep a keen eye on any developments and relay back if I get some tests
together.

Cheers,
Dwane

PS: I'm assuming you're testing Solr 8.4.1 on Linux hosts?

From: Jan Høydahl
Sent: Monday, 2 March 2020 12:01 AM
To: solr-user@lucene.apache.org
Subject: Re: SolrCloud location for solr.xml

As long as solr.xml is a mix of settings that need to be separate per node
and cluster-wide settings, it makes no sense to enforce it in ZK. Perhaps we
instead should stop requiring solr.xml and allow nodes to start without it.
Solr can then use a hard-coded version as fallback. Most users just copy the
example solr.xml into SOLR_HOME, so if we can simplify the 80% case I don't
mind if more advanced users put it in ZK or in the local file system.

Jan Høydahl

> 1. mar. 2020 kl. 01:26 skrev Erick Erickson :
>
> Actually, I do this all the time. However, it's because I'm always blowing
> everything away and installing a different version of Solr or some such,
> mostly laziness.
>
> We should move away from allowing solr.xml to be in SOLR_HOME when running
> in cloud mode IMO, but that'll need to be done in phases.
>
> Best,
> Erick
>
>> On Feb 28, 2020, at 5:17 PM, Mike Drob wrote:
>>
>> Hi Searchers!
>>
>> I was recently looking at some of the start-up logic for Solr and was
>> interested in cleaning it up a little bit. However, I'm not sure how
>> common certain deployment scenarios are. Specifically, is anybody doing
>> the following combination:
>>
>> * Using SolrCloud (i.e. state stored in zookeeper)
>> * Loading solr.xml from a local solr home rather than zookeeper
>>
>> Much appreciated! Thanks,
>> Mike
Re: SolrCloud location for solr.xml
As long as solr.xml is a mix of settings that need to be separate per node
and cluster-wide settings, it makes no sense to enforce it in ZK. Perhaps we
instead should stop requiring solr.xml and allow nodes to start without it.
Solr can then use a hard-coded version as fallback. Most users just copy the
example solr.xml into SOLR_HOME, so if we can simplify the 80% case I don't
mind if more advanced users put it in ZK or in the local file system.

Jan Høydahl

> 1. mar. 2020 kl. 01:26 skrev Erick Erickson :
>
> Actually, I do this all the time. However, it's because I'm always blowing
> everything away and installing a different version of Solr or some such,
> mostly laziness.
>
> We should move away from allowing solr.xml to be in SOLR_HOME when running
> in cloud mode IMO, but that'll need to be done in phases.
>
> Best,
> Erick
>
>> On Feb 28, 2020, at 5:17 PM, Mike Drob wrote:
>>
>> Hi Searchers!
>>
>> I was recently looking at some of the start-up logic for Solr and was
>> interested in cleaning it up a little bit. However, I'm not sure how
>> common certain deployment scenarios are. Specifically, is anybody doing
>> the following combination:
>>
>> * Using SolrCloud (i.e. state stored in zookeeper)
>> * Loading solr.xml from a local solr home rather than zookeeper
>>
>> Much appreciated! Thanks,
>> Mike
Re: SolrCloud location for solr.xml
Actually, I do this all the time. However, it's because I'm always blowing
everything away and installing a different version of Solr or some such,
mostly laziness.

We should move away from allowing solr.xml to be in SOLR_HOME when running in
cloud mode IMO, but that'll need to be done in phases.

Best,
Erick

> On Feb 28, 2020, at 5:17 PM, Mike Drob wrote:
>
> Hi Searchers!
>
> I was recently looking at some of the start-up logic for Solr and was
> interested in cleaning it up a little bit. However, I'm not sure how common
> certain deployment scenarios are. Specifically, is anybody doing the
> following combination:
>
> * Using SolrCloud (i.e. state stored in zookeeper)
> * Loading solr.xml from a local solr home rather than zookeeper
>
> Much appreciated! Thanks,
> Mike
SolrCloud location for solr.xml
Hi Searchers! I was recently looking at some of the start-up logic for Solr and was interested in cleaning it up a little bit. However, I'm not sure how common certain deployment scenarios are. Specifically is anybody doing the following combination: * Using SolrCloud (i.e. state stored in zookeeper) * Loading solr.xml from a local solr home rather than zookeeper Much appreciated! Thanks, Mike
Re: uploading solr.xml to zk
In your command, you are missing the "zk" part of the command. Try:

bin/solr zk cp file:local/file/path/to/solr.xml zk:/solr.xml -z localhost:2181

I see this is wrong in the documentation; I will fix it for the next release
of the Ref Guide.

I'm not sure how to refer to it -- I don't think you have to do anything? I
could be very wrong on that, though.

On Fri, Jul 7, 2017 at 2:31 PM, <im...@elogic.pk> wrote:
> The documentation says:
>
> If you for example would like to keep your solr.xml in ZooKeeper to avoid
> having to copy it to every node's solr_home directory, you can push it to
> ZooKeeper with the bin/solr utility (Unix example):
> bin/solr cp file:local/file/path/to/solr.xml zk:/solr.xml -z localhost:2181
>
> So I'm trying to push the solr.xml to my local ZooKeeper:
>
> solr-6.4.1/bin/solr file:/home/user1/solr/nodes/day1/solr/solr.xml
> zk:/solr.xml -z localhost:9983
>
> ERROR: cp is not a valid command!
>
> Afterwards:
> When starting up a node, how do we refer to the solr.xml inside ZooKeeper?
> Any examples?
>
> Thanks,
> Imran
>
> Sent from Mail for Windows 10
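For copy-paste convenience, the documented (broken) form and the corrected
form side by side (paths, ZK address, and port are from the thread's own
example):

```shell
# As documented at the time (broken): missing the "zk" subcommand
# bin/solr cp file:local/file/path/to/solr.xml zk:/solr.xml -z localhost:2181

# Corrected: "zk" comes before "cp"
bin/solr zk cp file:local/file/path/to/solr.xml zk:/solr.xml -z localhost:2181
```

On startup, a SolrCloud node then finds solr.xml in ZooKeeper automatically,
as noted later in the thread.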
Re: uploading solr.xml to zk
Actually it is corrected in the latest docs...

On Fri, Jul 7, 2017 at 9:35 AM, Erick Erickson <erickerick...@gmail.com> wrote:
> Blast, you're right, that's a doc problem. I'll change the current docs,
> but I'm afraid that'll live on in older docs.
>
> It should be:
>
> bin/solr zk cp blah blah
>
> (note the "zk" bit)
>
> Sorry about that.
>
> On Fri, Jul 7, 2017 at 12:31 PM, <im...@elogic.pk> wrote:
>> The documentation says:
>>
>> If you for example would like to keep your solr.xml in ZooKeeper to avoid
>> having to copy it to every node's solr_home directory, you can push it to
>> ZooKeeper with the bin/solr utility (Unix example):
>> bin/solr cp file:local/file/path/to/solr.xml zk:/solr.xml -z localhost:2181
>>
>> So I'm trying to push the solr.xml to my local ZooKeeper:
>>
>> solr-6.4.1/bin/solr file:/home/user1/solr/nodes/day1/solr/solr.xml
>> zk:/solr.xml -z localhost:9983
>>
>> ERROR: cp is not a valid command!
>>
>> Afterwards:
>> When starting up a node, how do we refer to the solr.xml inside ZooKeeper?
>> Any examples?
>>
>> Thanks,
>> Imran
>>
>> Sent from Mail for Windows 10
Re: uploading solr.xml to zk
Blast, you're right, that's a doc problem. I'll change the current docs, but
I'm afraid that'll live on in older docs.

It should be:

bin/solr zk cp blah blah

(note the "zk" bit)

Sorry about that.

On Fri, Jul 7, 2017 at 12:31 PM, <im...@elogic.pk> wrote:
> The documentation says:
>
> If you for example would like to keep your solr.xml in ZooKeeper to avoid
> having to copy it to every node's solr_home directory, you can push it to
> ZooKeeper with the bin/solr utility (Unix example):
> bin/solr cp file:local/file/path/to/solr.xml zk:/solr.xml -z localhost:2181
>
> So I'm trying to push the solr.xml to my local ZooKeeper:
>
> solr-6.4.1/bin/solr file:/home/user1/solr/nodes/day1/solr/solr.xml
> zk:/solr.xml -z localhost:9983
>
> ERROR: cp is not a valid command!
>
> Afterwards:
> When starting up a node, how do we refer to the solr.xml inside ZooKeeper?
> Any examples?
>
> Thanks,
> Imran
>
> Sent from Mail for Windows 10
Re: uploading solr.xml to zk
Not quite right. That should be (note the "zk" in the command):

solr-6.4.1/bin/solr zk cp file:/home/user1/solr/nodes/day1/solr/solr.xml
zk:/solr.xml -z localhost:9983

I'm surprised that you didn't get an error message if that's the exact
command.

Second possibility is that you have a chroot in there somewhere. How do you
start Solr? Does the ZooKeeper ensemble end with just the port, or with
something like :9983/solr?

Finally, you should be able to verify that solr.xml is in ZK via the admin
UI >> Cloud >> tree view.

Best,
Erick

On Fri, Jul 7, 2017 at 3:03 PM, <im...@elogic.pk> wrote:
> Thanks for the reply.
> This is the exact command on a RHEL 6 machine:
>
> solr-6.4.1/bin/solr cp file:/home/user1/solr/nodes/day1/solr/solr.xml
> zk:/solr.xml -z localhost:9983
>
> I am following the documentation of 6.4.1.
>
> I am assuming that if the solr.xml is present in ZooKeeper, we can point to
> an empty directory to start a node?
>
> Regards,
> Imran
>
> Sent from Mail for Windows 10
>
> From: Jan Høydahl
> Sent: Friday, July 7, 2017 2:01 AM
> To: solr-user@lucene.apache.org
> Subject: Re: uploading solr.xml to zk
>
>> ERROR: cp is not a valid command!
>
> Can you write the exact command you typed again?
> Once solr.xml is in zookeeper, solr will find it automatically.
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
>> 7. jul. 2017 kl. 21.31 skrev im...@elogic.pk:
>>
>> The documentation says:
>>
>> If you for example would like to keep your solr.xml in ZooKeeper to avoid
>> having to copy it to every node's solr_home directory, you can push it to
>> ZooKeeper with the bin/solr utility (Unix example):
>> bin/solr cp file:local/file/path/to/solr.xml zk:/solr.xml -z localhost:2181
>>
>> So I'm trying to push the solr.xml to my local ZooKeeper:
>>
>> solr-6.4.1/bin/solr file:/home/user1/solr/nodes/day1/solr/solr.xml
>> zk:/solr.xml -z localhost:9983
>>
>> ERROR: cp is not a valid command!
>>
>> Afterwards:
>> When starting up a node, how do we refer to the solr.xml inside ZooKeeper?
>> Any examples?
>>
>> Thanks,
>> Imran
>>
>> Sent from Mail for Windows 10
RE: uploading solr.xml to zk
Thanks for the reply.
This is the exact command on a RHEL 6 machine:

solr-6.4.1/bin/solr cp file:/home/user1/solr/nodes/day1/solr/solr.xml
zk:/solr.xml -z localhost:9983

I am following the documentation of 6.4.1.

I am assuming that if the solr.xml is present in ZooKeeper, we can point to
an empty directory to start a node?

Regards,
Imran

Sent from Mail for Windows 10

From: Jan Høydahl
Sent: Friday, July 7, 2017 2:01 AM
To: solr-user@lucene.apache.org
Subject: Re: uploading solr.xml to zk

> ERROR: cp is not a valid command!

Can you write the exact command you typed again?
Once solr.xml is in zookeeper, solr will find it automatically.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 7. jul. 2017 kl. 21.31 skrev im...@elogic.pk:
>
> The documentation says:
>
> If you for example would like to keep your solr.xml in ZooKeeper to avoid
> having to copy it to every node's solr_home directory, you can push it to
> ZooKeeper with the bin/solr utility (Unix example):
> bin/solr cp file:local/file/path/to/solr.xml zk:/solr.xml -z localhost:2181
>
> So I'm trying to push the solr.xml to my local ZooKeeper:
>
> solr-6.4.1/bin/solr file:/home/user1/solr/nodes/day1/solr/solr.xml
> zk:/solr.xml -z localhost:9983
>
> ERROR: cp is not a valid command!
>
> Afterwards:
> When starting up a node, how do we refer to the solr.xml inside ZooKeeper?
> Any examples?
>
> Thanks,
> Imran
>
> Sent from Mail for Windows 10
Re: uploading solr.xml to zk
> ERROR: cp is not a valid command!

Can you write the exact command you typed again?
Once solr.xml is in zookeeper, solr will find it automatically.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 7. jul. 2017 kl. 21.31 skrev im...@elogic.pk:
>
> The documentation says:
>
> If you for example would like to keep your solr.xml in ZooKeeper to avoid
> having to copy it to every node's solr_home directory, you can push it to
> ZooKeeper with the bin/solr utility (Unix example):
> bin/solr cp file:local/file/path/to/solr.xml zk:/solr.xml -z localhost:2181
>
> So I'm trying to push the solr.xml to my local ZooKeeper:
>
> solr-6.4.1/bin/solr file:/home/user1/solr/nodes/day1/solr/solr.xml
> zk:/solr.xml -z localhost:9983
>
> ERROR: cp is not a valid command!
>
> Afterwards:
> When starting up a node, how do we refer to the solr.xml inside ZooKeeper?
> Any examples?
>
> Thanks,
> Imran
>
> Sent from Mail for Windows 10
uploading solr.xml to zk
The documentation says:

If you for example would like to keep your solr.xml in ZooKeeper to avoid
having to copy it to every node's solr_home directory, you can push it to
ZooKeeper with the bin/solr utility (Unix example):
bin/solr cp file:local/file/path/to/solr.xml zk:/solr.xml -z localhost:2181

So I'm trying to push the solr.xml to my local ZooKeeper:

solr-6.4.1/bin/solr file:/home/user1/solr/nodes/day1/solr/solr.xml
zk:/solr.xml -z localhost:9983

ERROR: cp is not a valid command!

Afterwards:
When starting up a node, how do we refer to the solr.xml inside ZooKeeper?
Any examples?

Thanks,
Imran

Sent from Mail for Windows 10
Re: Arguments for and against putting solr.xml into Zookeeper?
On 4/12/2016 2:20 PM, John Bickerstaff wrote:
> I'm wondering if anyone can comment on arguments for and against putting
> solr.xml into Zookeeper?
>
> I assume one argument for doing so is that I would then have all
> configuration in one place.
>
> I also assume that if it doesn't get included as part of the upconfig
> command, there is likely a reason?

If you want the *exact* same file to be used by all SolrCloud nodes,
especially if your cluster is fairly dynamic, then having solr.xml in
zookeeper makes this easier. Because SolrCloud does not function without
zookeeper, having a critical server-level configuration file stored there
doesn't require an extra dependency.

If each node has a different solr.xml file, then you wouldn't want it to be
stored in zookeeper.

The solr.xml file configures a Solr server at a global level -- the
'upconfig' command is for uploading configurations for collections, which
(with the notable exception of maxBooleanClauses) does not include any
global config.

Thanks,
Shawn
Re: Arguments for and against putting solr.xml into Zookeeper?
The relevant JIRA is SOLR-7735 and its references. Maybe that would be useful as the background. Regards, Alex. Newsletter and resources for Solr beginners and intermediates: http://www.solr-start.com/ On 13 April 2016 at 06:20, John Bickerstaff <j...@johnbickerstaff.com> wrote: > Hello all, > > I'm wondering if anyone can comment on arguments for and against putting > solr.xml into Zookeeper? > > I assume one argument for doing so is that I would then have all > configuration in one place. > > I also assume that if it doesn't get included as part of the upconfig > command, there is likely a reason? > > Thanks...
Re: Arguments for and against putting solr.xml into Zookeeper?
upconfig is for _configurations_. Each collection can use one of the configurations. Solr.xml is configuration for the entire Solr instance so it doesn't make sense for it to be part of upconfig. There's certainly room for something explicit to upload it separate from configsets though... Best, Erick On Tue, Apr 12, 2016 at 1:20 PM, John Bickerstaff <j...@johnbickerstaff.com> wrote: > Hello all, > > I'm wondering if anyone can comment on arguments for and against putting > solr.xml into Zookeeper? > > I assume one argument for doing so is that I would then have all > configuration in one place. > > I also assume that if it doesn't get included as part of the upconfig > command, there is likely a reason? > > Thanks...
Arguments for and against putting solr.xml into Zookeeper?
Hello all, I'm wondering if anyone can comment on arguments for and against putting solr.xml into Zookeeper? I assume one argument for doing so is that I would then have all configuration in one place. I also assume that if it doesn't get included as part of the upconfig command, there is likely a reason? Thanks...
Re: SOLR_HOST vs solr.xml (solrcloud config)
On 3/12/2016 10:47 AM, Shawn Heisey wrote: > If SOLR_HOST is not working, perhaps the start script in your install > has a bug. I was going to look for reported bugs on SOLR_HOST, but the > Apache bugtracker (Jira) is down. Thanks, Shawn Jira's back. If you're running a version before 5.2, there is indeed a bug in the start script with SOLR_HOST. https://issues.apache.org/jira/browse/SOLR-7545 Thanks, Shawn
Re: SOLR_HOST vs solr.xml (solrcloud config)
On 3/11/2016 10:21 PM, Brian Wright wrote:
> Use the SOLR_HOST variable in the include file to set the hostname of
> the Solr server.
> SOLR_HOST=solr1.example.com
>
> Setting the hostname of the Solr server is recommended, especially
> when running in SolrCloud mode, as this determines the address of the
> node when it registers with ZooKeeper.
>
> Yet, in the example solr.xml, the <solrcloud> stanza defines ...
>
> <solrcloud>
>   <str name="host">${host:}</str>
>   <int name="hostPort">${jetty.port:8983}</int>
>   <str name="hostContext">${hostContext:solr}</str>
>   <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>
>   <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
>   <int name="distribUpdateSoTimeout">${distribUpdateSoTimeout:600000}</int>
>   <int name="distribUpdateConnTimeout">${distribUpdateConnTimeout:60000}</int>
> </solrcloud>
>
> More specifically the ${host:} variable. When this variable is filled
> in from startup execution, Zookeeper seems to obtain the correct
> hostname in spite of SOLR_HOST having been set. Yet, the docs
> recommend setting (or additionally setting?) SOLR_HOST with an
> explicit hostname.

The SOLR_HOST environment variable is used by the startup shell script (I looked at 5.4.0; the Windows start script has a command that accomplishes the same thing) to set the Java system property named "host":

SOLR_HOST_ARG=("-Dhost=$SOLR_HOST")

The solr.xml file contains ${host:}, which tells it to use that Java property. The colon in the property reference introduces the default value used if the property is not set -- blank in this case. A blank value for the "host" setting in solr.xml means that Solr will use its default -- the first IP address on the machine.

If SOLR_HOST is not working, perhaps the start script in your install has a bug. I was going to look for reported bugs on SOLR_HOST, but the Apache bugtracker (Jira) is down.

Thanks,
Shawn
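Shawn's description of the ${property:default} rule can be sketched as a toy resolver -- this is an illustration only, not Solr's actual parser (Solr does this in Java during config loading), and the `resolve`/`props` names are made up for the example:

```python
import re

def resolve(template, props):
    """Substitute ${name:default} placeholders the way Shawn describes:
    use the system property if set, otherwise fall back to the default
    after the colon (which may be empty)."""
    def repl(match):
        # Split on the FIRST colon only, so names like jetty.port work
        # and the default may be empty.
        name, _, default = match.group(1).partition(":")
        return props.get(name, default)
    return re.sub(r"\$\{([^}]*)\}", repl, template)

# With -Dhost=solr1.example.com set, ${host:} resolves to the property:
print(resolve("${host:}", {"host": "solr1.example.com"}))  # solr1.example.com
# Without it, the blank default applies, and Solr then falls back to
# the machine's first IP address:
print(resolve("${host:}", {}))        # (empty string)
print(resolve("${jetty.port:8983}", {}))  # 8983
```

The same rule explains why every `${...}` in the stock solr.xml carries a `:default` suffix: the file must still parse on a node where none of the properties are set.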
Re: SOLR_HOST vs solr.xml (solrcloud config)
Correction... Too fast on the send button. The subject should have been SOLR_HOST, not SOLR_HOME. Sorry for any confusion. Though, the body is correct.

On 3/11/16 9:21 PM, Brian Wright wrote:

Hi, Another question regarding documentation of Solr and Zookeeper. The manual states:

Solr Hostname
Use the SOLR_HOST variable in the include file to set the hostname of the Solr server.
SOLR_HOST=solr1.example.com
Setting the hostname of the Solr server is recommended, especially when running in SolrCloud mode, as this determines the address of the node when it registers with ZooKeeper.

Yet, in the example solr.xml, the <solrcloud> stanza defines ...

<solrcloud>
  <str name="host">${host:}</str>
  <int name="hostPort">${jetty.port:8983}</int>
  <str name="hostContext">${hostContext:solr}</str>
  <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>
  <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
  <int name="distribUpdateSoTimeout">${distribUpdateSoTimeout:600000}</int>
  <int name="distribUpdateConnTimeout">${distribUpdateConnTimeout:60000}</int>
</solrcloud>

More specifically the ${host:} variable. When this variable is filled in from startup execution, Zookeeper seems to obtain the correct hostname in spite of SOLR_HOST having been set. Yet, the docs recommend setting (or additionally setting?) SOLR_HOST with an explicit hostname. If the <solrcloud> stanza in solr.xml is there to specifically define the setup of solrcloud, is it still recommended to define SOLR_HOST separately and what benefit does this provide over relying on ${host:} in solr.xml? This ${host:} variable at least works without making an explicit declaration of an FQDN. At the same time, ${host:} will auto populate if the hostname of the box changes. If you hardcode SOLR_HOST into a config, this won't dynamically update should the hostname of the box change (not that I'm going to run around changing hostnames, but I can see how explicitly defining SOLR_HOST could become a problem when someone doesn't know that it is defined). What is the best practice here? Thanks.

--
Brian Wright
Sr. Systems Engineer
901 Mariners Island Blvd Suite 200, San Mateo, CA 94404 USA
Email: bri...@marketo.com
Phone: +1.650.539.3530
www.marketo.com
SOLR_HOME vs solr.xml (solrcloud config)
Hi, Another question regarding documentation of Solr and Zookeeper. The manual states:

Solr Hostname
Use the SOLR_HOST variable in the include file to set the hostname of the Solr server.
SOLR_HOST=solr1.example.com
Setting the hostname of the Solr server is recommended, especially when running in SolrCloud mode, as this determines the address of the node when it registers with ZooKeeper.

Yet, in the example solr.xml, the <solrcloud> stanza defines ...

<solrcloud>
  <str name="host">${host:}</str>
  <int name="hostPort">${jetty.port:8983}</int>
  <str name="hostContext">${hostContext:solr}</str>
  <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>
  <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
  <int name="distribUpdateSoTimeout">${distribUpdateSoTimeout:600000}</int>
  <int name="distribUpdateConnTimeout">${distribUpdateConnTimeout:60000}</int>
</solrcloud>

More specifically the ${host:} variable. When this variable is filled in from startup execution, Zookeeper seems to obtain the correct hostname in spite of SOLR_HOST having been set. Yet, the docs recommend setting (or additionally setting?) SOLR_HOST with an explicit hostname. If the <solrcloud> stanza in solr.xml is there to specifically define the setup of solrcloud, is it still recommended to define SOLR_HOST separately and what benefit does this provide over relying on ${host:} in solr.xml? This ${host:} variable at least works without making an explicit declaration of an FQDN. At the same time, ${host:} will auto populate if the hostname of the box changes. If you hardcode SOLR_HOST into a config, this won't dynamically update should the hostname of the box change (not that I'm going to run around changing hostnames, but I can see how explicitly defining SOLR_HOST could become a problem when someone doesn't know that it is defined). What is the best practice here? Thanks.

--
Brian Wright
Sr. Systems Engineer
901 Mariners Island Blvd Suite 200, San Mateo, CA 94404 USA
Email: bri...@marketo.com
Phone: +1.650.539.3530
www.marketo.com
sharedLib node in solr.xml
I am working with the gettingstarted example from the Solr Quick Start guide. I developed a couple of Solr plugins and I want to tell Solr where to find them, and I'm following this <https://wiki.apache.org/solr/SolrPlugins> guide to do it. For a single core it was easy: I just put them in a lib directory under the instance directory. But now I am trying it with multiple cores. In /solr-5.4.0/server/solr, I updated the solr.xml file with sharedLib as follows:

<solr>
  <solrcloud>
    <str name="sharedLib">${sharedLib:}</str>
    <str name="host">${host:}</str>
    <int name="hostPort">${jetty.port:8983}</int>
    <str name="hostContext">${hostContext:solr}</str>
    <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>
    <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
    <int name="distribUpdateSoTimeout">${distribUpdateSoTimeout:600000}</int>
    <int name="distribUpdateConnTimeout">${distribUpdateConnTimeout:60000}</int>
  </solrcloud>
  <shardHandlerFactory name="shardHandlerFactory" class="HttpShardHandlerFactory">
    <int name="socketTimeout">${socketTimeout:600000}</int>
    <int name="connTimeout">${connTimeout:60000}</int>
  </shardHandlerFactory>
</solr>

But when I start up Solr, the log says: Unknown configuration parameter in <solrcloud> section of solr.xml: sharedLib

Then I read that I need a core.properties file to contain the sharedLib variable. If this is true, where should I place this file? Where should I place my shared library containing my plugins? One other question: in the gettingstarted example, what directory is the instance directory? Thanks for the help.
Re: sharedLib node in solr.xml
On 1/21/2016 11:13 AM, Bob Lawson wrote:
> I am working with the gettingstarted example from the Solr Quick Start
> guide. I developed a couple of Solr plugins and I want to tell Solr where
> to find them, and I'm following this
> <https://wiki.apache.org/solr/SolrPlugins> guide to do it. For a single
> core it was easy, I just put them in a lib directory under the instance
> directory. But now I am trying it with multiple cores. In
> /solr-5.4.0/server/solr, I updated the solr.xml file with sharedLib as
> follows:
>
> <str name="sharedLib">${sharedLib:}</str>
>
> But when I start up Solr, the log says: Unknown configuration parameter in
> <solrcloud> section of solr.xml: sharedLib

The sharedLib parameter is not for SolrCloud, it is for Solr in general. Put it in the <solr> section, not the <solrcloud> section.

If you are going to use the default location (which is $SOLRHOME/lib ... $SOLRHOME is where solr.xml lives), then you do not need to include the sharedLib configuration at all.

Thanks,
Shawn
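A sketch of the placement Shawn describes -- sharedLib directly under <solr>, not inside <solrcloud> (the surrounding values are the stock 5.4 defaults; treat this as an illustration, not a complete file):

```xml
<solr>
  <!-- sharedLib belongs here, at the <solr> level -->
  <str name="sharedLib">${sharedLib:}</str>
  <solrcloud>
    <!-- SolrCloud-specific parameters only -->
    <str name="host">${host:}</str>
    <int name="hostPort">${jetty.port:8983}</int>
  </solrcloud>
</solr>
```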
Re: sharedLib node in solr.xml
Thanks. I tried what you said and still have problems. I removed sharedLib from solr.xml, so solr.xml is back to its original state. I then placed a lib directory containing my jar into the solr home directory, which is /solr-5.4.0/server/solr. I then ran bin/solr start -e cloud -noprompt

Received error: Error CREATEing SolrCore 'gettingstarted_shard1_replica2': Unable to create core [gettingstarted_shard1_replica2] Caused by: com.oracle.querygen.plugin.DraQueryComponent

Log said: Error loading class 'com.rwl.querygen.plugin.MyQueryComponent'

Obviously, it can't find the jar containing MyQueryComponent. I know I'm doing something wrong, but I'm stuck again. Thanks in advance for any further advice.

On Thu, Jan 21, 2016 at 2:04 PM, Shawn Heisey <apa...@elyograg.org> wrote:
> On 1/21/2016 11:13 AM, Bob Lawson wrote:
> > I am working with the gettingstarted example from the Solr Quick Start
> > guide. I developed a couple of Solr plugins and I want to tell Solr where
> > to find them, and I'm following this
> > <https://wiki.apache.org/solr/SolrPlugins> guide to do it. For a single
> > core it was easy, I just put them in a lib directory under the instance
> > directory. But now I am trying it with multiple cores. In
> > /solr-5.4.0/server/solr, I updated the solr.xml file with sharedLib as
> > follows:
> >
> > <str name="sharedLib">${sharedLib:}</str>
> >
> > But when I start up Solr, the log says: Unknown configuration parameter in
> > <solrcloud> section of solr.xml: sharedLib
>
> The sharedLib parameter is not for SolrCloud, it is for Solr in
> general. Put it in the <solr> section, not the <solrcloud> section.
>
> If you are going to use the default location (which is $SOLRHOME/lib ...
> $SOLRHOME is where solr.xml lives), then you do not need to include the
> sharedLib configuration at all.
>
> Thanks,
> Shawn
Re: sharedLib node in solr.xml
I just wanted to give an update on this problem. I decided to reinstall Solr 5.4.0 and try again, following the simplest procedure I could. First I created a single core collection called 'test'. I modified solrconfig.xml to use my plugin. I created a lib directory under the test folder. The lib directory contains the jar with my plugin. I restarted Solr with bin/solr start, and everything ran perfectly. I then stopped Solr with bin/solr stop.

I then copied the exact same solrconfig.xml file to /solr-5.4.0/server/solr/configsets/data_driven_schema_configs/conf. I copied the exact same lib directory to /solr-5.4.0/server/solr. I tried to launch the cloud gettingstarted example by executing: bin/solr start -e cloud -noprompt. I received the following error:

ERROR: Failed to create collection 'gettingstarted' due to: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://10.0.2.15:7574/solr: Error CREATEing SolrCore 'gettingstarted_shard2_replica2': Unable to create core [gettingstarted_shard2_replica2] Caused by: com.rwl.querygen.plugin.MyQueryComponent

The log file error was:

ERROR - 2016-01-21 20:46:36.498; [c:gettingstarted s:shard2 r:core_node2 x:gettingstarted_shard2_replica1] org.apache.solr.core.CoreContainer; Error creating core [gettingstarted_shard2_replica1]: Error loading class 'com.rwl.querygen.plugin.MyQueryComponent'

It certainly looks like Solr is looking somewhere else for my jar file, but I followed the guide and advice on this thread to the letter. Clearly, there is nothing wrong with my solrconfig.xml or my lib directory, because they work fine with the simpler single core instantiation. Once again, I appreciate those who have taken time to help and appreciate any continued support I can get. Thanks!!!

On Thu, Jan 21, 2016 at 2:22 PM, Bob Lawson <bwlawson...@gmail.com> wrote:
> Correction:
>
> Thanks. I tried what you said and still have problems. I removed
> sharedLib from solr.xml, so solr.xml is back to its original state. I then
> placed a lib directory containing my jar into the solr home directory,
> which is /solr-5.4.0/server/solr. I then ran bin/solr start -e cloud
> -noprompt
>
> Received error: Error CREATEing SolrCore
> 'gettingstarted_shard1_replica2': Unable to create core
> [gettingstarted_shard1_replica2] Caused by:
> com.rwl.querygen.plugin.MyQueryComponent
>
> Log said: Error loading class 'com.rwl.querygen.plugin.MyQueryComponent'
>
> Obviously, it can't find the jar containing MyQueryComponent. I know I'm
> doing something wrong, but I'm stuck again. Thanks in advance for any
> further advice.
>
> On Thu, Jan 21, 2016 at 2:21 PM, Bob Lawson <bwlawson...@gmail.com> wrote:
>
>> Thanks. I tried what you said and still have problems. I removed
>> sharedLib from solr.xml, so solr.xml is back to its original state. I then
>> placed a lib directory containing my jar into the solr home directory,
>> which is /solr-5.4.0/server/solr. I then ran bin/solr start -e cloud
>> -noprompt
>>
>> Received error: Error CREATEing SolrCore
>> 'gettingstarted_shard1_replica2': Unable to create core
>> [gettingstarted_shard1_replica2] Caused by:
>> com.oracle.querygen.plugin.DraQueryComponent
>>
>> Log said: Error loading class 'com.rwl.querygen.plugin.MyQueryComponent'
>>
>> Obviously, it can't find the jar containing MyQueryComponent. I know I'm
>> doing something wrong, but I'm stuck again. Thanks in advance for any
>> further advice.
>>
>> On Thu, Jan 21, 2016 at 2:04 PM, Shawn Heisey <apa...@elyograg.org> wrote:
>>
>>> On 1/21/2016 11:13 AM, Bob Lawson wrote:
>>> > I am working with the gettingstarted example from the Solr Quick Start
>>> > guide. I developed a couple of Solr plugins and I want to tell Solr where
>>> > to find them, and I'm following this
>>> > <https://wiki.apache.org/solr/SolrPlugins> guide to do it. For a single
>>> > core it was easy, I just put them in a lib directory under the instance
>>> > directory. But now I am trying it with multiple cores. In
>>> > /solr-5.4.0/server/solr, I updated the solr.xml file with sharedLib as
>>> > follows:
>>> >
>>> > <str name="sharedLib">${sharedLib:}</str>
>>> >
>>> > But when I start up Solr, the log says: Unknown configuration parameter in
>>> > <solrcloud> section of solr.xml: sharedLib
>>>
>>> The sharedLib parameter is not for SolrCloud, it is for Solr in
>>> general. Put it in the <solr> section, not the <solrcloud> section.
>>>
>>> If you are going to use the default location (which is $SOLRHOME/lib ...
>>> $SOLRHOME is where solr.xml lives), then you do not need to include the
>>> sharedLib configuration at all.
>>>
>>> Thanks,
>>> Shawn
Re: sharedLib node in solr.xml
On 1/21/2016 12:21 PM, Bob Lawson wrote: Thanks. I tried what you said and still have problems. I removed sharedLib from solr.xml, so solr.xml is back to its original state. I then placed a lib directory containing my jar into the solr home directory, which is /solr-5.4.0/server/solr. I then ran bin/solr start -e cloud -noprompt Received error: Error CREATEing SolrCore 'gettingstarted_shard1_replica2': Unable to create core [gettingstarted_shard1_replica2] Caused by: com.oracle.querygen.plugin.DraQueryComponent Log said: Error loading class 'com.rwl.querygen.plugin.MyQueryComponent' Obviously, it can't find the jar containing MyQueryComponent. I know I'm doing something wrong, but I'm stuck again. Thanks in advance for any further advice.

Do you by chance have <lib> elements in your solrconfig.xml that load this jar? When you use $SOLRHOME/lib, you should *not* use <lib> in solrconfig.xml. Some jars seem to have problems when they are loaded more than once. There have been a number of bugs related to this. The most recent is SOLR-6188. One of the older bugs is SOLR-4852.

Can you delete the solr log, restart Solr, and put the new log somewhere (like gist, or a paste website) that we can view it? This will include a lot of information that can help pinpoint what's happening.

Thanks,
Shawn
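For reference, a <lib> directive in solrconfig.xml looks like the sketch below -- the dir and regex values here are placeholders, not the poster's actual config. Per Shawn's advice, such a directive should be removed when the same jar already sits in $SOLRHOME/lib:

```xml
<config>
  <!-- Hypothetical <lib> directive: loads jars matching the regex from
       the given directory. If myplugin.jar is already in $SOLRHOME/lib,
       delete this line so the class is not loaded twice. -->
  <lib dir="/path/to/plugins" regex="myplugin\.jar" />
</config>
```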
Re: sharedLib node in solr.xml
On 1/21/2016 1:56 PM, Bob Lawson wrote: I then copied the exact same solrconfig.xml file to /solr-5.4.0/server/solr/configsets/data_driven_schema_configs/conf. I copied the exact same lib directory to /solr-5.4.0/server/solr. I tried to launch the cloud gettingstarted example by executing: bin/solr start -e cloud -noprompt. I received the following error: When you start Solr with "bin/solr start" the solr home is not explicitly set, so it becomes "./solr" -- with "server" as the current working directory. When you start with -e cloud, you're potentially running more than one Solr instance. Each one has a specific solr home under the example directory. You would need to copy the lib directory into each of those solr home directories in order for this to work. Thanks, Shawn
Re: sharedLib node in solr.xml
Shawn, that was it! I copied my lib directory to both /solr-5.4.0/example/cloud/node1/solr and /solr-5.4.0/example/cloud/node2/solr. Everything ran perfectly. Maybe the documentation should be updated with this clarification, because it currently makes it sound like all you have to do is put lib into /solr-5.4.0/server/solr. Thank you so much for your help! You guys are great and provide a wonderful service to us novices.

On Thu, Jan 21, 2016 at 4:54 PM, Shawn Heisey wrote:
> On 1/21/2016 1:56 PM, Bob Lawson wrote:
>
>> I then copied the exact same solrconfig.xml file to
>> /solr-5.4.0/server/solr/configsets/data_driven_schema_configs/conf. I
>> copied the exact same lib directory to /solr-5.4.0/server/solr. I tried to
>> launch the cloud gettingstarted example by executing: bin/solr start -e
>> cloud -noprompt. I received the following error:
>
> When you start Solr with "bin/solr start" the solr home is not explicitly
> set, so it becomes "./solr" -- with "server" as the current working
> directory.
>
> When you start with -e cloud, you're potentially running more than one
> Solr instance. Each one has a specific solr home under the example
> directory. You would need to copy the lib directory into each of those
> solr home directories in order for this to work.
>
> Thanks,
> Shawn
Re: sharedLib node in solr.xml
Correction:

Thanks. I tried what you said and still have problems. I removed sharedLib from solr.xml, so solr.xml is back to its original state. I then placed a lib directory containing my jar into the solr home directory, which is /solr-5.4.0/server/solr. I then ran bin/solr start -e cloud -noprompt

Received error: Error CREATEing SolrCore 'gettingstarted_shard1_replica2': Unable to create core [gettingstarted_shard1_replica2] Caused by: com.rwl.querygen.plugin.MyQueryComponent

Log said: Error loading class 'com.rwl.querygen.plugin.MyQueryComponent'

Obviously, it can't find the jar containing MyQueryComponent. I know I'm doing something wrong, but I'm stuck again. Thanks in advance for any further advice.

On Thu, Jan 21, 2016 at 2:21 PM, Bob Lawson <bwlawson...@gmail.com> wrote:
> Thanks. I tried what you said and still have problems. I removed
> sharedLib from solr.xml, so solr.xml is back to its original state. I then
> placed a lib directory containing my jar into the solr home directory,
> which is /solr-5.4.0/server/solr. I then ran bin/solr start -e cloud
> -noprompt
>
> Received error: Error CREATEing SolrCore
> 'gettingstarted_shard1_replica2': Unable to create core
> [gettingstarted_shard1_replica2] Caused by:
> com.oracle.querygen.plugin.DraQueryComponent
>
> Log said: Error loading class 'com.rwl.querygen.plugin.MyQueryComponent'
>
> Obviously, it can't find the jar containing MyQueryComponent. I know I'm
> doing something wrong, but I'm stuck again. Thanks in advance for any
> further advice.
>
> On Thu, Jan 21, 2016 at 2:04 PM, Shawn Heisey <apa...@elyograg.org> wrote:
>> On 1/21/2016 11:13 AM, Bob Lawson wrote:
>> > I am working with the gettingstarted example from the Solr Quick Start
>> > guide. I developed a couple of Solr plugins and I want to tell Solr where
>> > to find them, and I'm following this
>> > <https://wiki.apache.org/solr/SolrPlugins> guide to do it. For a single
>> > core it was easy, I just put them in a lib directory under the instance
>> > directory. But now I am trying it with multiple cores. In
>> > /solr-5.4.0/server/solr, I updated the solr.xml file with sharedLib as
>> > follows:
>> >
>> > <str name="sharedLib">${sharedLib:}</str>
>> >
>> > But when I start up Solr, the log says: Unknown configuration parameter in
>> > <solrcloud> section of solr.xml: sharedLib
>>
>> The sharedLib parameter is not for SolrCloud, it is for Solr in
>> general. Put it in the <solr> section, not the <solrcloud> section.
>>
>> If you are going to use the default location (which is $SOLRHOME/lib ...
>> $SOLRHOME is where solr.xml lives), then you do not need to include the
>> sharedLib configuration at all.
>>
>> Thanks,
>> Shawn
Re: Solr 4.8 - Updating zkhost list in solr.xml without requiring a restart
Why don't you create DNS names, or such, so that you can replace a zookeeper instance at the same hostname:port rather than having to edit solr.xml across your whole Solr farm? The idea is that your list of zookeeper hostnames is a virtual one, not a real one. Upayavira On Wed, Sep 30, 2015, at 04:40 AM, pramodmm wrote: > > > Before we even think about upgrading the zookeeper functionality in > > Solr, we must wait for the official 3.5 release from the zookeeper > > project. Alpha (or Beta) software will not be included in Solr unless > > it is the only way to fix a very serious bug. This is a new feature, > > not a bug. > > In the meantime, please help me validate what we are doing is right. > Currently, our zookeeper instances are running on vmware machines and > when > one of them dies and we get a new machine as a replacement - we install > zookeeper and make it a part of the ensemble. Then we manually, go to > every > individual solr instance in the solr cloud - edit its solr.xml - remove > the > entry of the dead machine from zkhost and replace it with the new > hostname - > thus keeping the list up-to-date. Then, we restart solr box. > > Are these the right steps ? > > Thanks, > Pramod > > > > -- > View this message in context: > http://lucene.472066.n3.nabble.com/Solr-4-8-Updating-zkhost-list-in-solr-xml-without-requiring-a-restart-tp4231979p4231994.html > Sent from the Solr - User mailing list archive at Nabble.com.
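Upayavira's suggestion amounts to putting stable names in zkHost so the solr.xml on each node never has to change; a sketch with hypothetical CNAMEs (zk1-zk3.example.com are made-up names):

```xml
<solr>
  <solrcloud>
    <!-- zk1..zk3.example.com are CNAMEs you control. When a zookeeper
         machine dies, repoint its CNAME at the replacement (keeping the
         same port) instead of editing this list on every Solr node. -->
    <str name="zkHost">${zkHost:zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181}</str>
  </solrcloud>
</solr>
```

Each Solr node still needs a restart to pick up a changed ZK address resolution at connect time, but the file itself stays identical across the farm, which is what makes keeping it in sync trivial.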
Re: Solr 4.8 - Updating zkhost list in solr.xml without requiring a restart
On 9/29/2015 9:40 PM, pramodmm wrote: > In the meantime, please help me validate what we are doing is right. > Currently, our zookeeper instances are running on vmware machines and when > one of them dies and we get a new machine as a replacement - we install > zookeeper and make it a part of the ensemble. Then we manually, go to every > individual solr instance in the solr cloud - edit its solr.xml - remove the > entry of the dead machine from zkhost and replace it with the new hostname - > thus keeping the list up-to-date. Then, we restart solr box. That sounds correct to me. It will be very nice for large installs when we can upgrade to ZK 3.5 in Solr. In order to use the new functionality, the ZK servers must also be upgraded. Thanks, Shawn
Re: Solr 4.8 - Updating zkhost list in solr.xml without requiring a restart
> The idea is that your list of zookeeper hostnames is a virtual one, not > a real one. Thanks for the suggestion. Looks like I am not alone in thinking along the same lines. I am planning on doing that and was not sure if anyone else tried this approach and validated that it worked. -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-4-8-Updating-zkhost-list-in-solr-xml-without-requiring-a-restart-tp4231979p4232045.html Sent from the Solr - User mailing list archive at Nabble.com.
Solr 4.8 - Updating zkhost list in solr.xml without requiring a restart
Hi, Is there an example which I could use - to upload solr.xml in zookeeper and change zkhost entries on the fly and have solr instances be updated via zookeeper. This will prevent us from restarting each solr node every time a new zookeeper host is added or deleted. We are on Solr 4.8. Thanks, Pramod -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-4-8-Updating-zkhost-list-in-solr-xml-without-requiring-a-restart-tp4231979.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Solr 4.8 - Updating zkhost list in solr.xml without requiring a restart
On 9/29/2015 5:59 PM, pramodEbay wrote: > Is there an example which I could use - to upload solr.xml in zookeeper and > change zkhost entries on the fly and have solr instances be updated via > zookeeper. This will prevent us from restarting each solr node everytime, a > new zookeeper host is added or deleted. > > We are on Solr 4.8. Support in zookeeper for dynamically changing the cluster membership has been added to the 3.5 version, which is currently only available as an alpha release. https://issues.apache.org/jira/browse/ZOOKEEPER-107 This feature has been under development for a REALLY long time. The comments are a discussion that is very technical in nature and difficult to follow; it looks like it took a very long time to come up with a usable design. I don't know anything about how they have implemented the dynamic cluster support, so I do not know whether Solr requires code changes to use it. A quick scan of the first few comments suggests that they are trying to make this server-side, with all clients updating automatically, so Solr might not need any code changes. Let's hope that this is the case. Before we even think about upgrading the zookeeper functionality in Solr, we must wait for the official 3.5 release from the zookeeper project. Alpha (or Beta) software will not be included in Solr unless it is the only way to fix a very serious bug. This is a new feature, not a bug. Thanks, Shawn
Re: Solr 4.8 - Updating zkhost list in solr.xml without requiring a restart
> Before we even think about upgrading the zookeeper functionality in > Solr, we must wait for the official 3.5 release from the zookeeper > project. Alpha (or Beta) software will not be included in Solr unless > it is the only way to fix a very serious bug. This is a new feature, > not a bug. In the meantime, please help me validate what we are doing is right. Currently, our zookeeper instances are running on vmware machines and when one of them dies and we get a new machine as a replacement - we install zookeeper and make it a part of the ensemble. Then we manually, go to every individual solr instance in the solr cloud - edit its solr.xml - remove the entry of the dead machine from zkhost and replace it with the new hostname - thus keeping the list up-to-date. Then, we restart solr box. Are these the right steps ? Thanks, Pramod -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-4-8-Updating-zkhost-list-in-solr-xml-without-requiring-a-restart-tp4231979p4231994.html Sent from the Solr - User mailing list archive at Nabble.com.
shareSchema property unknown in new solr.xml format
Hi, I’m trying to move from the legacy solr.xml to the new solr.xml format (version 4.10.4). I’m getting this error on startup:

solrcloud section of solr.xml contains 1 unknown config parameter(s): [shareSchema]

The latest documentation has an entry for this property … I have the property configured as:

<str name="shareSchema">${shareSchema:true}</str>

The stack trace:

ERROR - localhost - 2015-07-20 12:32:05.132; org.apache.solr.common.SolrException; null:org.apache.solr.common.SolrException: solrcloud section of solr.xml contains 1 unknown config parameter(s): [shareSchema]
	at org.apache.solr.core.ConfigSolrXml.errorOnLeftOvers(ConfigSolrXml.java:242)
	at org.apache.solr.core.ConfigSolrXml.fillSolrCloudSection(ConfigSolrXml.java:167)
	at org.apache.solr.core.ConfigSolrXml.fillPropMap(ConfigSolrXml.java:120)
	at org.apache.solr.core.ConfigSolrXml.<init>(ConfigSolrXml.java:53)
	at org.apache.solr.core.ConfigSolr.fromConfig(ConfigSolr.java:108)
	at org.apache.solr.core.ConfigSolr.fromInputStream(ConfigSolr.java:93)
	at org.apache.solr.core.ConfigSolr.fromFile(ConfigSolr.java:70)
	at org.apache.solr.core.ConfigSolr.fromSolrHome(ConfigSolr.java:103)
	at org.apache.solr.servlet.SolrDispatchFilter.loadConfigSolr(SolrDispatchFilter.java:156)
	at org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:187)
	at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:136)
	at org.eclipse.jetty.servlet.FilterHolder.doStart(FilterHolder.java:119)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:719)
	at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:265)
	at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1252)
	at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:710)
	at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:494)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:39)
	at org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:186)
	at org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:494)
	at org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:141)
	at org.eclipse.jetty.deploy.providers.ScanningAppProvider.fileAdded(ScanningAppProvider.java:145)
	at org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:56)
	at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:609)
	at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:540)
	at org.eclipse.jetty.util.Scanner.scan(Scanner.java:403)
	at org.eclipse.jetty.util.Scanner.doStart(Scanner.java:337)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:121)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.eclipse.jetty.deploy.DeploymentManager.startAppProvider(DeploymentManager.java:555)
	at org.eclipse.jetty.deploy.DeploymentManager.doStart(DeploymentManager.java:230)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.eclipse.jetty.util.component.AggregateLifeCycle.doStart(AggregateLifeCycle.java:81)
	at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:58)
	at org.eclipse.jetty.server.handler.HandlerWrapper.doStart(HandlerWrapper.java:96)
	at org.eclipse.jetty.server.Server.doStart(Server.java:280)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
	at org.eclipse.jetty.xml.XmlConfiguration$1.run(XmlConfiguration.java:1259)
	at java.security.AccessController.doPrivileged(Native Method)
	at org.eclipse.jetty.xml.XmlConfiguration.main(XmlConfiguration.java:1182)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.eclipse.jetty.start.Main.invokeMain(Main.java:473)
	at org.eclipse.jetty.start.Main.start(Main.java:615)
	at org.eclipse.jetty.start.Main.main(Main.java:96)

—/Yago Riveiro
Re: shareSchema property unknown in new solr.xml format
On 7/20/2015 6:40 AM, Yago Riveiro wrote: I’m trying to move from legacy solr.xml to the new solr.xml format (version 4.10.4). I’m getting this error on startup: solrcloud section of solr.xml contains 1 unknown config parameter(s): [shareSchema]

There's somewhat confusing information in Jira on this. One issue seems to deprecate shareSchema in 4.4. Deprecation in one major release implies that the feature will be completely removed in the next major release, but should still work until then: https://issues.apache.org/jira/browse/SOLR-4779

Another issue talks about fixing str/bool/int detection on parameters like shareSchema in 4.10 and 5.x (trunk at the time): https://issues.apache.org/jira/browse/SOLR-5746

You said you're still running 4.x, specifically 4.10.4, which means that I would expect shareSchema to still work. Although this is technically a bug, I don't think it's a big enough problem to warrant fixing in the 4.10 branch.

The first thing I would advise trying is to change the parameter from str to bool. Hopefully that will fix it. There may have been a misstep in the patch for SOLR-5746.

Do you actually NEED this functionality? It is no longer there in 5.x, so it may be a good idea to prepare for its removal now. The amount of memory required for an IndexSchema object for each core should not be prohibitively large, unless you've got thousands of cores ... but in that case, you're already dealing with fairly significant scaling challenges, so this one may not matter much.

If you do need the functionality, then a jira is in order to work on it. I can't guarantee that there will be a 4.10.5 release, even if a fix is committed to the 4.10 branch.

I have not examined the code to see what's actually happening. Your response will determine whether I take a look.

Thanks, Shawn
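Shawn's str-to-bool suggestion, sketched against Yago's existing line (whether 4.10.4 actually accepts bool here is exactly the open question, so this is a guess to try rather than a known fix):

```xml
<!-- in solr.xml: try bool instead of str for shareSchema -->
<bool name="shareSchema">${shareSchema:true}</bool>
```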
Re: shareSchema property unknown in new solr.xml format
: I’m getting this error on startup:
:
: solrcloud section of solr.xml contains 1 unknown config parameter(s): [shareSchema]

Pretty sure that's because it was never a supported property of the solrcloud section -- even in the old format of solr.xml. it's just a top level property -- ie: create a child node for it directly under <solr>, outside of the <solrcloud> section.

Ah ... i see, this page is giving an incorrect example...

https://cwiki.apache.org/confluence/display/solr/Moving+to+the+New+solr.xml+Format

...I'll fix that.

-Hoss
http://www.lucidworks.com/
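A sketch of the placement Hoss describes; the contents of the solrcloud section below are placeholders, not taken from the thread:

```xml
<solr>
  <!-- top-level property, NOT inside <solrcloud> -->
  <str name="shareSchema">${shareSchema:true}</str>
  <solrcloud>
    <!-- zkHost, hostContext, etc. go here -->
  </solrcloud>
</solr>
```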
Re: shareSchema property unknown in new solr.xml format
Thanks Hoss, this was the example that I saw when configuring my solr.xml. I have more than 250 cores, and any resource that I can optimize is always welcome, but if in 5.x this feature is removed I will try to upgrade as soon as possible.
Re: Problem with new solr.xml format and core swaps
Well, at least it's _some_ progress ;). Agreed, the segments hanging around is still something of a mystery, although if I really stretch I could relate them, maybe. I believe there's clean-up logic when a core starts up to nuke cruft in the index directory. If the cruft was created after a core swap on the core where Solr couldn't write the core.properties file, then when the core started back up is it possible that it was looking in the wrong directory to clean stuff up? This is a total and complete guess, though, as I don't know that bit of code. If the undeleted segment files were in a directory related to the core whose core.properties file wasn't persisted, that would lend some credence to the idea, though. FWIW, Erick On Tue, Apr 7, 2015 at 12:18 PM, Shawn Heisey apa...@elyograg.org wrote: On 4/7/2015 10:54 AM, Erick Erickson wrote: I'm pretty clueless why you would be seeing this, and slammed with other stuff so I can't dig into this right now. What do the core.properties files look like when you see this? They should be re-written when you swap cores. Hmmm, I wonder if there's some condition where the files are already open and the persistence fails? If so we should be logging that error, I have no proof either way whether we are or not though. Guessing that your log files in the problem case weren't all that helpful, but let's have a look at them if this occurs again? I hadn't had a chance to review the logs, but when I did just now, I found this: ERROR - 2015-04-07 11:56:15.568; org.apache.solr.core.CorePropertiesLocator; Couldn't persist core properties to /index/solr4/cores/sparkinc_0/core.properties: java.io.FileNotFoundException: /index/solr4/cores/sparkinc_0/core.properties (Permission denied) That's fairly clear. I guess my permissions were wrong. My best guess as to why -- things owned by root from when I created the core.properties files. Solr does not run as root. 
I didn't think to actually look at the permissions before I ran a script that I maintain which fixes all the ownership on my various directories involved in my full search installation. I don't think this explains the not-deleted segment files problem. Those segment files were written by solr running as the regular user, so there couldn't have been a permission problem. Thanks, Shawn
Re: Problem with new solr.xml format and core swaps
Shawn: I'm pretty clueless why you would be seeing this, and slammed with other stuff so I can't dig into this right now. What do the core.properties files look like when you see this? They should be re-written when you swap cores. Hmmm, I wonder if there's some condition where the files are already open and the persistence fails? If so we should be logging that error, I have no proof either way whether we are or not though. Guessing that your log files in the problem case weren't all that helpful, but let's have a look at them if this occurs again? Sorry I can't be more help Erick On Mon, Apr 6, 2015 at 8:38 PM, Shawn Heisey apa...@elyograg.org wrote: On 4/6/2015 6:40 PM, Erick Erickson wrote: What version are you migrating _from_? 4.9.0? There were some persistence issues at one point, but AFAIK they were fixed by 4.9, I can check if you're on an earlier version... Effectively there is no previous version. Whenever I upgrade, I delete all the data directories and completely reindex. When I converted from the old solr.xml to core discovery, the server was already on 4.9.1. Thanks, Shawn
Re: Problem with new solr.xml format and core swaps
On 4/7/2015 10:54 AM, Erick Erickson wrote: I'm pretty clueless why you would be seeing this, and slammed with other stuff so I can't dig into this right now. What do the core.properties files look like when you see this? They should be re-written when you swap cores. Hmmm, I wonder if there's some condition where the files are already open and the persistence fails? If so we should be logging that error, I have no proof either way whether we are or not though. Guessing that your log files in the problem case weren't all that helpful, but let's have a look at them if this occurs again? I hadn't had a chance to review the logs, but when I did just now, I found this: ERROR - 2015-04-07 11:56:15.568; org.apache.solr.core.CorePropertiesLocator; Couldn't persist core properties to /index/solr4/cores/sparkinc_0/core.properties: java.io.FileNotFoundException: /index/solr4/cores/sparkinc_0/core.properties (Permission denied) That's fairly clear. I guess my permissions were wrong. My best guess as to why -- things owned by root from when I created the core.properties files. Solr does not run as root. I didn't think to actually look at the permissions before I ran a script that I maintain which fixes all the ownership on my various directories involved in my full search installation. I don't think this explains the not-deleted segment files problem. Those segment files were written by solr running as the regular user, so there couldn't have been a permission problem. Thanks, Shawn
Re: Problem with new solr.xml format and core swaps
Shawn: What version are you migrating _from_? 4.9.0? There were some persistence issues at one point, but AFAIK they were fixed by 4.9, I can check if you're on an earlier version... Erick

On Sun, Apr 5, 2015 at 2:05 PM, Shawn Heisey apa...@elyograg.org wrote:

I'm having two problems with Solr 4.9.1. I can't upgrade yet, because we are using a third-party plugin component that is not yet explicitly qualified for anything newer than 4.9.0. The point release upgrade seemed like a safe bet, because I know that we don't do API changes in point releases. These are transient problems, and do not seem to be affecting the index at this time. Some background info: Ubuntu 14, Java 8u40 from the webupd8 PPA, Solr 4.9.1. It is *NOT* SolrCloud. Full rebuilds on my index involve building a new index in cores that I have designated build cores, then swapping those cores with live cores. This always worked flawlessly before I updated to Solr 4.9.1 and migrated the config to use core discovery.

root@idxb4:~# cat /index/solr4/cores/sparkinc_0/core.properties
name=sparkinclive
dataDir=../../data/sparkinc_0
root@idxb4:~# cat /index/solr4/cores/sparkinc_1/core.properties
name=sparkincbuild
dataDir=../../data/sparkinc_1

The first problem: Sometimes, in a completely unpredictable manner, the new solr.xml format seems to behave like using the old format with persistent=false. When I restarted Solr yesterday, that action swapped the live cores with the build cores and I lost half my index because it swapped back to the previous build cores. Just now when I tried a restart, everything worked flawlessly and the cores did not swap.

The second problem: Sometimes old index segments do not get deleted, even though they are not part of the index. Another part of the full rebuild process involves clearing the build cores before beginning the full import. The code does a deleteByQuery with *:* and then optimizes the core. 
Sometimes this action fails to delete the old segment files, but when I checked the core Overview in the admin UI, numDocs only reflected the newly indexed docs and deletedDocs was 0. It was actually while trying to fix/debug this second problem that I discovered the first problem. Once the rebuild finished, I wanted to see what would happen if I restarted Solr while one of my cores had 32GB of segment files that were not part of the index ... but that's when the indexes swapped. At that point, I deleted all the dataDirs on both machines (it's a distributed index), restarted Solr again, and began a full rebuild. Everything seems to be fine now. Are either of these problems anything that anyone has seen? I don't recall seeing anything come across the list before. Are there existing issues in Jira? Is there any information that I can provide which would help in narrowing down the problem? Thanks, Shawn
Re: Problem with new solr.xml format and core swaps
On 4/6/2015 6:40 PM, Erick Erickson wrote: What version are you migrating _from_? 4.9.0? There were some persistence issues at one point, but AFAIK they were fixed by 4.9, I can check if you're on an earlier version... Effectively there is no previous version. Whenever I upgrade, I delete all the data directories and completely reindex. When I converted from the old solr.xml to core discovery, the server was already on 4.9.1. Thanks, Shawn
Problem with new solr.xml format and core swaps
I'm having two problems with Solr 4.9.1. I can't upgrade yet, because we are using a third-party plugin component that is not yet explicitly qualified for anything newer than 4.9.0. The point release upgrade seemed like a safe bet, because I know that we don't do API changes in point releases. These are transient problems, and do not seem to be affecting the index at this time.

Some background info: Ubuntu 14, Java 8u40 from the webupd8 PPA, Solr 4.9.1. It is *NOT* SolrCloud. Full rebuilds on my index involve building a new index in cores that I have designated build cores, then swapping those cores with live cores. This always worked flawlessly before I updated to Solr 4.9.1 and migrated the config to use core discovery.

root@idxb4:~# cat /index/solr4/cores/sparkinc_0/core.properties
name=sparkinclive
dataDir=../../data/sparkinc_0
root@idxb4:~# cat /index/solr4/cores/sparkinc_1/core.properties
name=sparkincbuild
dataDir=../../data/sparkinc_1

The first problem: Sometimes, in a completely unpredictable manner, the new solr.xml format seems to behave like using the old format with persistent=false. When I restarted Solr yesterday, that action swapped the live cores with the build cores and I lost half my index because it swapped back to the previous build cores. Just now when I tried a restart, everything worked flawlessly and the cores did not swap.

The second problem: Sometimes old index segments do not get deleted, even though they are not part of the index. Another part of the full rebuild process involves clearing the build cores before beginning the full import. The code does a deleteByQuery with *:* and then optimizes the core. Sometimes this action fails to delete the old segment files, but when I checked the core Overview in the admin UI, numDocs only reflected the newly indexed docs and deletedDocs was 0. It was actually while trying to fix/debug this second problem that I discovered the first problem. 
Once the rebuild finished, I wanted to see what would happen if I restarted Solr while one of my cores had 32GB of segment files that were not part of the index ... but that's when the indexes swapped. At that point, I deleted all the dataDirs on both machines (it's a distributed index), restarted Solr again, and began a full rebuild. Everything seems to be fine now. Are either of these problems anything that anyone has seen? I don't recall seeing anything come across the list before. Are there existing issues in Jira? Is there any information that I can provide which would help in narrowing down the problem? Thanks, Shawn
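For context, the swap step Shawn describes is normally driven through the CoreAdmin API. A minimal sketch, assuming a node at localhost:8983 (an assumption) and the core names from the core.properties listings above; the script only assembles the request URL, and the commented curl line is what you would actually run against a live node:

```shell
# Build the CoreAdmin SWAP request for the build/live core pair.
# Host and port are assumptions; core names come from the message above.
CORE_ADMIN="http://localhost:8983/solr/admin/cores"
SWAP_URL="${CORE_ADMIN}?action=SWAP&core=sparkincbuild&other=sparkinclive"
echo "$SWAP_URL"
# curl "$SWAP_URL"   # run this against a live node
```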
Solr cannot find solr.xml even though it's there
I'm getting the following stacktrace with Solr 4.5.0:

SEVERE: null:org.apache.solr.common.SolrException: Could not load SOLR configuration
	at org.apache.solr.core.ConfigSolr.fromFile(ConfigSolr.java:71)
	at org.apache.solr.core.ConfigSolr.fromSolrHome(ConfigSolr.java:98)
	at org.apache.solr.servlet.SolrDispatchFilter.loadConfigSolr(SolrDispatchFilter.java:144)
	at org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:175)
	at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:127)
	at org.apache.catalina.core.ApplicationFilterConfig.initFilter(ApplicationFilterConfig.java:279)
	at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:260)
	at org.apache.catalina.core.ApplicationFilterConfig.init(ApplicationFilterConfig.java:105)
	at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:4830)
	at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5510)
	at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
	at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
	at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
	at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:649)
	at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:1081)
	at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1877)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: solr.xml does not exist in /opt/solr/solr-shard1
	at org.apache.solr.core.ConfigSolr.fromFile(ConfigSolr.java:60)
	... 20 more

I went so far as to set the permissions on solr-shard1 to global read/write/execute (777) and yet it still cannot load the file. It doesn't say there's a parse error or anything constructive. Any ideas as to what is going on?
Re: Solr cannot find solr.xml even though it's there
On 12/20/2014 12:27 PM, Mike Thomsen wrote: at java.lang.Thread.run(Thread.java:745) /solr.xml cannot start Solrcommon.SolrException: solr.xml does not exist in /opt/solr/solr-shard1 at org.apache.solr.core.ConfigSolr.fromFile(ConfigSolr.java:60) ... 20 more I went so far as to set the permissions on solr-shard1 to global read/write/execute (777) and yet it still cannot load the file. It doesn't say there's a parse error or anything constructive. Any ideas as to what is going on?

If you log on to the system as the user that is attempting to start the container ... can you read that solr.xml file? These commands would check the permissions at each point in that path so you can see if the user in question has the appropriate rights at each level:

ls -ald /.
ls -ald /opt/.
ls -ald /opt/solr/.
ls -ald /opt/solr/solr-shard1/.
ls -ald /opt/solr/solr-shard1/solr.xml

It does seem odd that your solr home would be at a directory called solr-shard1 ... it makes more sense to me that it would be /opt/solr ... and that the solr.xml would live there. Only you know what your directory structure is, though.

Thanks, Shawn
Re: Solr cannot find solr.xml even though it's there
It's supposed to be a simple two shard configuration of SolrCloud with two copies of Solr running in different tomcat servers on the same box. I can read the solr.xml just fine as that user (vagrant) and checked out the permissions and there's nothing obviously wrong there. On Sat, Dec 20, 2014 at 3:40 PM, Shawn Heisey apa...@elyograg.org wrote: On 12/20/2014 12:27 PM, Mike Thomsen wrote: at java.lang.Thread.run(Thread.java:745) /solr.xml cannot start Solrcommon.SolrException: solr.xml does not exist in /opt/solr/solr-shard1 at org.apache.solr.core.ConfigSolr.fromFile(ConfigSolr.java:60) ... 20 more I went so far as to set the permissions on solr-shard1 to global read/write/execute (777) and yet it still cannot load the file. It doesn't say there's a parse error or anything constructive. Any ideas as to what is going on? If you log on to the system as the user that is attempting to start the container ... can you read that solr.xml file? These commands would check the permissions at each point in that path so you can see if the user in question has the appropriate rights at each level. ls -ald /. ls -ald /opt/. ls -ald /opt/solr/. ls -ald /opt/solr/solr-shard1/. ls -ald /opt/solr/solr-shard1/solr.xml It does seem odd that your solr home would be at a directory called solr-shard1 ... it makes more sense to me that it would be /opt/solr ... and that the solr.xml would live there. Only you know what your directory structure is, though. Thanks, Shawn
Re: solr.xml coreRootDirectory relative to solr home
: An oversight I think. If you create a patch, let me know and we can : get it committed. that definitely sounds bad ... we should certainly try to fix that before 5.0 comes out, since it does have back-compat implications... https://issues.apache.org/jira/browse/SOLR-6718 ...better to have a 5.0 release note about tweaking rel-paths when people expect more changes than a 5.x release note that might be easily overlooked. -Hoss http://www.lucidworks.com/
solr.xml coreRootDirectory relative to solr home
Hi, I'm trying to configure a different core discovery root directory in solr.xml with the coreRootDirectory setting as described in https://cwiki.apache.org/confluence/display/solr/Format+of+solr.xml

I'd like to just set it to a subdirectory of solr home (a "cores" directory, to avoid confusion with configsets and other directories). I tried

<str name="coreRootDirectory">cores</str>

but that's interpreted relative to the current working directory. Other paths such as sharedLib are interpreted relative to Solr Home and I had expected that here too. I do not set solr home via system property but via JNDI, so I don't think I can use ${solr.home}/cores or something like that? It would be nice if solr home were available for property substitution even when set via JNDI. Is there another way to set a path relative to solr home here?

Regards, Andreas
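Until the relative-path handling is changed, one workaround sketch is an absolute path; the path shown here is an assumption for illustration, since Andreas sets solr home via JNDI and his actual location isn't in the thread:

```xml
<solr>
  <!-- an absolute path sidesteps the CWD-relative resolution described above -->
  <str name="coreRootDirectory">/opt/solr/home/cores</str>
</solr>
```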
Re: solr.xml coreRootDirectory relative to solr home
An oversight I think. If you create a patch, let me know and we can get it committed. Hmmm, not sure though; this'll change the current behavior that people might be counting on. On Thu, Nov 6, 2014 at 1:02 AM, Andreas Hubold andreas.hub...@coremedia.com wrote: Hi, I'm trying to configure a different core discovery root directory in solr.xml with the coreRootDirectory setting as described in https://cwiki.apache.org/confluence/display/solr/Format+of+solr.xml I'd like to just set it to a subdirectory of solr home (a "cores" directory, to avoid confusion with configsets and other directories). I tried <str name="coreRootDirectory">cores</str> but that's interpreted relative to the current working directory. Other paths such as sharedLib are interpreted relative to Solr Home and I had expected this here too. I do not set solr home via system property but via JNDI so I don't think I can use a ${solr.home}/cores or something like that? It would be nice if solr home were available for property substitution even if set via JNDI. Is there another way to set a path relative to solr home here? Regards, Andreas
Re: solr.xml coreRootDirectory relative to solr home
On 11/6/2014 12:02 PM, Erick Erickson wrote: An oversight I think. If you create a patch, let me know and we can get it committed. Hmmm, not sure though, this'll change the current behavior that people might be counting on Relative to the solr home sounds like the best option to me. It's what I would expect, since most of the rest of Solr uses directories relative to other directories that may or may not be explicitly defined. I haven't researched in-depth, but I think that the solr home itself is the only thing in Solr that defaults to something relative to the current working directory ... and that seems like a very good policy to keep. Thanks, Shawn
Add core in solr.xml | Problem with starting SOLRcloud
Hello,

Our platform has 4 Solr instances and 3 zookeepers (Solr 4.1.0). I want to add a new core to my SolrCloud. I added the new core to the solr.xml file:

<core name="collection2" instanceDir="collection2" />

I put the config files in the directory collection2, uploaded the new config to zookeeper, and started Solr. Solr did not start up and gives the following error:

Oct 16, 2014 4:57:06 PM org.apache.solr.cloud.ZkController publish
INFO: publishing core=collection1 state=recovering
Oct 16, 2014 4:57:06 PM org.apache.solr.cloud.ZkController publish
INFO: numShards not found on descriptor - reading it from system property
Oct 16, 2014 4:57:06 PM org.apache.solr.client.solrj.impl.HttpClientUtil createClient
INFO: Creating new http client, config:maxConnections=128&maxConnectionsPerHost=32&followRedirects=false
Oct 16, 2014 4:59:06 PM org.apache.solr.common.SolrException log
SEVERE: Error while trying to recover. core=collection1:org.apache.solr.common.SolrException: I was asked to wait on state recovering for 31.114.2.237:8910_solr but I still do not see the requested state. I see state: active live:true
	at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:404)
	at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:181)
	at org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:202)
	at org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:346)
	at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:223)
Oct 16, 2014 4:59:06 PM org.apache.solr.cloud.RecoveryStrategy doRecovery
SEVERE: Recovery failed - trying again... (0) core=collection1
Oct 16, 2014 4:59:06 PM org.apache.solr.cloud.RecoveryStrategy doRecovery
INFO: Wait 2.0 seconds before trying to recover again (1)
Oct 16, 2014 4:59:08 PM org.apache.solr.cloud.ZkController publish
INFO: publishing core=collection1 state=recovering
Oct 16, 2014 4:59:08 PM org.apache.solr.cloud.ZkController publish
INFO: numShards not found on descriptor - reading it from system property
Oct 16, 2014 4:59:08 PM org.apache.solr.client.solrj.impl.HttpClientUtil createClient
INFO: Creating new http client, config:maxConnections=128&maxConnectionsPerHost=32&followRedirects=false

What's wrong with my setup? Any help would be appreciated!

Roy

-- View this message in context: http://lucene.472066.n3.nabble.com/Add-core-in-solr-xml-Problem-with-starting-SOLRcloud-tp4164524.html Sent from the Solr - User mailing list archive at Nabble.com.
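For reference, the "uploaded the new config to zookeeper" step is typically done with the zkcli script shipped under cloud-scripts. This is a sketch only; the zkhost string and the conf directory layout are assumptions, not taken from the thread, and the block merely assembles the command rather than running it:

```shell
# Assemble a zkcli upconfig invocation for the new collection2 config set.
# zkhost and directory layout are assumptions, not taken from the thread.
ZKHOST="zk1:2181,zk2:2181,zk3:2181"
CMD="zkcli.sh -cmd upconfig -zkhost $ZKHOST -confdir ./collection2/conf -confname collection2"
echo "$CMD"
```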
Re: Required local configuration with ZK solr.xml?
On 1/29/2014 12:48 PM, Jeff Wartes wrote: And that, I think, is my misunderstanding. I had assumed that the link between a node and the collections it belongs to would be the (possibly chroot¹ed) zookeeper reference *itself*, not the node¹s directory structure. Instead, it appears that ZK is simply a repository for the collection configuration, where nodes may look up what they need based on filesystem core references. Work is underway towards a new mode where zookeeper is the ultimate source of truth, and each node will behave accordingly to implement and maintain that truth. I can't seem to locate a Jira issue for it, unfortunately. It's possible that one doesn't exist yet, or that it has an obscure title. Mark Miller is the one who really understands the full details, as he's a primary author of SolrCloud code. Currently, what SolrCloud considers to be truth is dictated by both zookeeper and an amalgamation of which cores each server actually has present. The collections API modifies both. With an older config (all current and future 4.x versions), the latter is in solr.xml. If you're using the new solr.xml format (available 4.4 and later, will be mandatory in 5.0), it's done with Core Discovery. Zookeeper has a list of everything and coordinates the cluster state, but has no real control over the cores that actually exist on each server. When the two sources of truth disagree, nothing happens to fix the situation, manual intervention is required. Any errors in my understanding of SolrCloud are my own. I don't claim that what I just wrote is error-free, but I am pretty sure that it's essentially correct. Thanks, Shawn
Re: Required local configuration with ZK solr.xml?
Work is underway towards a new mode where zookeeper is the ultimate source of truth, and each node will behave accordingly to implement and maintain that truth. I can't seem to locate a Jira issue for it, unfortunately. It's possible that one doesn't exist yet, or that it has an obscure title. Mark Miller is the one who really understands the full details, as he's a primary author of SolrCloud code. Currently, what SolrCloud considers to be truth is dictated by both zookeeper and an amalgamation of which cores each server actually has present. The collections API modifies both. With an older config (all current and future 4.x versions), the latter is in solr.xml. If you're using the new solr.xml format (available 4.4 and later, will be mandatory in 5.0), it's done with Core Discovery. Zookeeper has a list of everything and coordinates the cluster state, but has no real control over the cores that actually exist on each server. When the two sources of truth disagree, nothing happens to fix the situation, manual intervention is required. Thanks Shawn, this was exactly the confirmation I was looking for. I think I have a much better understanding now. The takeaway I have is that SolrCloud's current automation assumes relatively static clusters, and that if I want anything like dynamic scaling, I'm going to have to write my own tooling to add nodes safely. Fortunately, it appears that the necessary CoreAdmin commands don't need much besides the collection name, so it smells like a simple thing to query zookeeper's /collections path (or clusterstate.json) and issue GET requests accordingly when I spin up a new node. If you (or anyone) does happen to recall a reference to the work you alluded to, I'd certainly be interested.
Re: Required local configuration with ZK solr.xml?
Found it. In case anyone else cares, this appears to be the root issue: https://issues.apache.org/jira/browse/SOLR-5128 Thanks again. On 1/30/14, 9:01 AM, Jeff Wartes jwar...@whitepages.com wrote: Work is underway towards a new mode where zookeeper is the ultimate source of truth, and each node will behave accordingly to implement and maintain that truth. I can't seem to locate a Jira issue for it, unfortunately. It's possible that one doesn't exist yet, or that it has an obscure title. Mark Miller is the one who really understands the full details, as he's a primary author of SolrCloud code. Currently, what SolrCloud considers to be truth is dictated by both zookeeper and an amalgamation of which cores each server actually has present. The collections API modifies both. With an older config (all current and future 4.x versions), the latter is in solr.xml. If you're using the new solr.xml format (available 4.4 and later, will be mandatory in 5.0), it's done with Core Discovery. Zookeeper has a list of everything and coordinates the cluster state, but has no real control over the cores that actually exist on each server. When the two sources of truth disagree, nothing happens to fix the situation, manual intervention is required. Thanks Shawn, this was exactly the confirmation I was looking for. I think I have a much better understanding now. The takeaway I have is that SolrCloud's current automation assumes relatively static clusters, and that if I want anything like dynamic scaling, I'm going to have to write my own tooling to add nodes safely. Fortunately, it appears that the necessary CoreAdmin commands don't need much besides the collection name, so it smells like a simple thing to query zookeeper's /collections path (or clusterstate.json) and issue GET requests accordingly when I spin up a new node. If you (or anyone) does happen to recall a reference to the work you alluded to, I'd certainly be interested. 
I googled around myself for a few minutes, but haven't found anything so far.
Re: Required local configuration with ZK solr.xml?
...the difference between that example and what you are doing here is that in that example, because both of the nodes already had collection1 instance dirs, they expected to be part of collection1 when they joined the cluster.

And that, I think, is my misunderstanding. I had assumed that the link between a node and the collections it belongs to would be the (possibly chroot'ed) zookeeper reference *itself*, not the node's directory structure. Instead, it appears that ZK is simply a repository for the collection configuration, where nodes may look up what they need based on filesystem core references.

So assume I have a ZK running separately, and it already has a config uploaded which all my collections will use. I then add two nodes with empty solr home dirs (no cores found) and use the collections API to create a new collection "collection1" with numShards=1 and replicationFactor=2. Please check my assertions:

WRONG: If I take the second node down, wipe the solr home dir on it, and add it back, the collection will properly replicate back onto the node.
RIGHT: If I replace the second node, it must have a $SOLR_HOME/collection1_shard1_replica2/core.properties file in order to properly act as the replacement.
RIGHT: If I replace the first node, it must have a $SOLR_HOME/collection1_shard1_replica1/core.properties file in order to properly act as the replacement.

If correct, this sounds kinda painful, because you have to know the exact list of directory names for the set of collections/shards/replicas the node was participating in if you want to replace a given node. Worse, if you're replacing a node whose disk failed, you need to manually reconstruct the contents of those core.properties files? This also leaves me a bit confused about how to increase the replicationFactor on an existing collection, but I guess that's tangential. Thanks.
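To make the RIGHT cases above concrete, here is a hypothetical reconstruction of such a file; every value is an assumption about what the node previously held (coreNodeName in particular), not something recoverable from the thread:

```properties
# $SOLR_HOME/collection1_shard1_replica2/core.properties (hypothetical)
name=collection1_shard1_replica2
collection=collection1
shard=shard1
coreNodeName=core_node2
```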
Required local configuration with ZK solr.xml?
It was my hope that storing solr.xml would mean I could spin up a Solr node pointing it to a properly configured zookeeper ensemble, and that no further local configuration or knowledge would be necessary. However, I'm beginning to wonder if that's sufficient. It's looking like I may also need each node to be preconfigured with at least a directory and a core.properties for each collection/core the node intends to participate in? Is that correct?

I figured I'd test this by starting a stand-alone ZK, configuring it by issuing a zkCli bootstrap against the solr example's solr dir, then manually putfile-ing the (new-style) solr.xml. I then attempted to connect two solr instances that referenced that zookeeper, but did NOT use the solr example dir as the base. I essentially used empty directories for the solr home. Although both connected and zk shows both in /live_nodes, both report "0 cores discovered" in the logs, and don't seem to find and participate in the collection as happens when you follow the SolrCloud example verbatim. (http://wiki.apache.org/solr/SolrCloud#Example_A:_Simple_two_shard_cluster)

I may have some other configuration issues at present, but I'm going to be disappointed if I need to have preknowledge of what collections/cores may have been dynamically created in a cluster in order to add a node that participates in that cluster. It feels like I might be missing something. Any clarifications would be appreciated.
Re: Required local configuration with ZK solr.xml?
Maybe I'm missing something, but everything you are describing sounds correct and working properly -- the disconnect between what I think is supposed to happen and what you seem to be expecting seems to be right around here:

: essentially used empty directories for the solr home. Although both
: connected and zk shows both in the /live_nodes, both report "0 cores
: discovered" in the logs, and don't seem to find and participate in the
: collection as happens when you follow the SolrCloud example verbatim.
: (http://wiki.apache.org/solr/SolrCloud#Example_A:_Simple_two_shard_cluster)

...the difference between that example and what you are doing here is that in that example, because both nodes already had collection1 instance dirs, they expected to be part of collection1 when they joined the cluster. In your situation however, it sounds like you have created a cluster w/o any collections (which is fine) and you have a config set ready to go in ZK. If you now send a CREATE command to the collections API, referring to the config set, it should automatically create the necessary cores on your nodes. If that's not the case, or if I've misunderstood what you have done and are trying to do, please elaborate. -Hoss http://www.lucidworks.com/
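The CREATE command Hoss refers to is a plain HTTP call against the Collections API. A minimal sketch of constructing that request for the scenario in this thread — the host address and the config-set name "myconf" are placeholder assumptions:

```python
from urllib.parse import urlencode

# Build the Collections API CREATE request described above.
# "localhost:8983" and the config set "myconf" are placeholders.
params = urlencode({
    "action": "CREATE",
    "name": "collection1",
    "numShards": 1,
    "replicationFactor": 2,
    "collection.configName": "myconf",
})
url = "http://localhost:8983/solr/admin/collections?" + params
print(url)
```

Issuing a GET on the resulting URL against any live node should create the cores on both empty nodes, which is what the verbatim SolrCloud example achieves via pre-existing instance dirs.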
Re: core.properties and solr.xml
For us we don't fully rely on cloud/collections api for creating and deploying instances/etc.. we control this via an external mechanism so this would allow me to have instances figure out what they should be based on an external system.. we do this now but have to drop core.properties files all over.. i'd like to not have to do that... its more of a desire for cleanliness of my filesystem than anything else because this is all automated at this point..
Re: core.properties and solr.xml
I will open up a JIRA... I'm more concerned over the core locator stuff vs the solr.xml.. Should the specification of the core locator go into the solr.xml or via some other method? steve
Re: core.properties and solr.xml
I think solr.xml is the correct place for it, and you can then set up substitution variables to allow it to be set by environment variables, etc. But let's discuss on the JIRA ticket. Alan Woodward www.flax.co.uk
Re: core.properties and solr.xml
I think these APIs are pretty new and deep to want to support them for users at this point. It constrains refactoring and can complicate things down the line, especially with SolrCloud. This same discussion has come up in JIRA issues before. At best, I think all the recent refactoring in this area needs to bake. - Mark
Re: core.properties and solr.xml
This is true. But if we slap big warning: experimental messages all over it, then users can't complain too much about backwards-compat breaks. My intention when pulling all this stuff into the CoresLocator interface was to allow other implementations to be tested out, and other suggestions have already come up from time to time on the list. It seems a shame to *not* allow this to be opened up for advanced users. Alan Woodward www.flax.co.uk
Re: core.properties and solr.xml
What's the benefit? So you can avoid having a simple core properties file? I'd rather see more value than that before exposing something like this to the user. It's a can of worms that I personally have not seen a lot of value in yet. Whether we mark it experimental or not, this adds a burden, and I'm still wondering if the gains are worth it. - Mark
core.properties and solr.xml
Are there any plans/tickets to allow for pluggable SolrConf and CoreLocator? In my use case my solr.xml is totally static, i have a separate dataDir and my core.properties are derived from a separate configuration (living in ZK) but totally outside of the SolrCloud.. I'd like to be able to not have any instance directories and/or no solr.xml or core.properties files laying around as right now I just regenerate them on startup each time in my start scripts.. Obviously I can just hack my stuff in and clearly this could break the write side of the collections API (which i don't care about for my case)... but having a way to plug these would be nice.. steve
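The pluggable-locator idea Steven describes — resolving core definitions from an external store instead of on-disk core.properties files — can be sketched abstractly. Solr's actual CoresLocator is a Java interface; the class and method names below are illustrative Python stand-ins for the pattern, not Solr's API.

```python
from abc import ABC, abstractmethod

class CoreDescriptor:
    """Minimal stand-in for the information a core definition carries."""
    def __init__(self, name, instance_dir, data_dir):
        self.name = name
        self.instance_dir = instance_dir
        self.data_dir = data_dir

class CoresLocator(ABC):
    @abstractmethod
    def discover(self, node_name):
        """Return the CoreDescriptors this node should load."""

class FilesystemLocator(CoresLocator):
    """Mimics core discovery: one core per core.properties file found."""
    def __init__(self, found_core_dirs):
        self.found = found_core_dirs
    def discover(self, node_name):
        return [CoreDescriptor(n, n, n + "/data") for n in self.found]

class ExternalConfigLocator(CoresLocator):
    """What the thread asks for: definitions from an external system."""
    def __init__(self, registry):
        self.registry = registry  # e.g. fetched from ZK, keyed by node
    def discover(self, node_name):
        return [CoreDescriptor(c["name"], c["name"], c["dataDir"])
                for c in self.registry.get(node_name, [])]

registry = {"node1": [{"name": "provider", "dataDir": "/data/provider"}]}
cores = ExternalConfigLocator(registry).discover("node1")
print([c.name for c in cores])
```

With a container coded against the abstract interface, swapping the filesystem locator for the external one is the whole change, which is the point Alan makes later in the thread.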
Re: core.properties and solr.xml
The work done as part of new style solr.xml, particularly by romsegeek should make this a lot easier. But no, there's no formal support for such a thing. There's also a desire to make ZK the one source of truth in Solr 5, although that effort is in early stages. Which is a long way of saying that I think this would be a good thing to add. Currently there's no formal way to specify one though. We'd have to give some thought as to what abstract methods are required. The current old style and new style classes . There's also the chicken-and-egg question; how does one specify the new class? This seems like something that would be in a (very small) solr.xml or specified as a sysprop. And knowing where to load the class from could be interesting. A pluggable SolrConfig I think is a stickier wicket, it hasn't been broken out into nice interfaces like coreslocator has been. And it's used all over the place, passed in and recorded in constructors etc, as well as being possibly unique for each core. There's been some talk of sharing a single config object, and there's also talk about using config sets that might address some of those concerns, but neither one has gotten very far in 4x land. FWIW, Erick
Re: core.properties and solr.xml
Hi Steve, I think this is a great idea. Currently the implementation of CoresLocator is picked depending on the type of solr.xml you have (new- vs old-style), but it should be easy enough to extend the new-style logic to optionally look up and instantiate a plugin implementation. Core loading and new core creation is all done through the CL now, so as long as the plugin implemented all methods, it shouldn't break the Collections API either. Do you want to open a JIRA? Alan Woodward www.flax.co.uk
Re: Minimal solr.xml since 4.4 and beyond
Well, thanks for the details :) I know that the old structure is still supported, but I want to be ready for the change in 5.0. Maybe I will no longer be working on it when that happens ^^ Best, Scatman -- View this message in context: http://lucene.472066.n3.nabble.com/Minimal-solr-xml-since-4-4-and-beyond-tp4109937p4110169.html Sent from the Solr - User mailing list archive at Nabble.com.
Minimal solr.xml since 4.4 and beyond
Hi everyone, I'm just back to working with Solr and saw the new solr.xml structure introduced in 4.4. Since core properties now live in the core.properties files, what is the new minimal solr.xml file? I saw in a topic that even if solr.xml isn't created, Solr will run without error (I didn't try it myself yet). I ask because the new structure will be mandatory in 5.0. Thanks in advance for any replies. Best, Scatman. -- View this message in context: http://lucene.472066.n3.nabble.com/Minimal-solr-xml-since-4-4-and-beyond-tp4109937.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Minimal solr.xml since 4.4 and beyond
On 1/7/2014 3:53 AM, Scatman wrote: I'm just back to work with Solr and saw the feature, since 4.4, with the new structure of solr.xml. I'm just wondering, since core's properties are in the core.properties files, what is the new minimal file for the solr.xml? I saw in a topic that even if solr.xml wasn't created, solr will run without error (I didn't try myself at the moment). I ask it because the new structure will be mandatory at 5.0.

I am pretty sure that the absolute minimum you would need for a solr.xml in the new format is: <solr/> This would set all the possible options to their defaults. The solr.xml file contains a number of global configuration options. With the old-style solr.xml, where core information is ALSO stored in that file, your solr.xml file ends up being different on multiple servers. The new format only stores configuration information, so it can (and probably should) be identical on each server in a cluster. http://wiki.apache.org/solr/Solr.xml%204.4%20and%20beyond If you use the new solr.xml format, a very recent Solr version, and SolrCloud, you can store solr.xml in zookeeper, which means that you can easily maintain an identical solr.xml for multiple servers. Change it once in zookeeper, do a rolling restart on all your Solr instances, and they're all using it. The old format will be supported in all future 4.x releases, so you do not need to switch right away. Thanks, Shawn
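Spelled out as XML, the minimal file Shawn describes is just an empty root element. A slightly fuller variant in the style of the 4.x example configs is shown for comparison; treat its element values as illustrative defaults rather than required settings.

```xml
<!-- Absolute minimum new-style solr.xml: every option takes its default -->
<solr/>
```

```xml
<!-- A fuller sketch in the style of the 4.x examples; values illustrative -->
<solr>
  <solrcloud>
    <str name="host">${host:}</str>
    <int name="hostPort">${jetty.port:8983}</int>
    <str name="hostContext">${hostContext:solr}</str>
    <int name="zkClientTimeout">${zkClientTimeout:15000}</int>
  </solrcloud>
</solr>
```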
Re: solr.xml
Sounds like a bug. If you are seeing this happen in 4.6, I'd file a JIRA issue. - Mark On Sun, Dec 8, 2013 at 3:49 PM, William Bell billnb...@gmail.com wrote: Any thoughts? Why are we getting duplicate items in solr.xml ? -- Forwarded message -- From: William Bell billnb...@gmail.com Date: Sat, Dec 7, 2013 at 1:48 PM Subject: solr.xml To: solr-user@lucene.apache.org We are having issues with SWAP in CoreAdmin in 4.5.1 and 4.6. Using legacy solr.xml we issue a SWAP, and we want it persisted. It had been running flawlessly since 4.5; now it creates duplicate lines in solr.xml. Even the example multi-core setup in 4.5.1 doesn't work with persistent=true - it creates duplicate lines in solr.xml. [full cores listing snipped; see the original "solr.xml" message below] -- Bill Bell billnb...@gmail.com cell 720-256-8076 -- - Mark
Re: solr.xml
Thanks Mark. https://issues.apache.org/jira/browse/SOLR-5543 On Mon, Dec 9, 2013 at 2:39 PM, Mark Miller markrmil...@gmail.com wrote: Sounds like a bug. If you are seeing this happen in 4.6, I'd file a JIRA issue. - Mark [quoted forwarded message and cores listing snipped; see the original "solr.xml" message below] -- Bill Bell billnb...@gmail.com cell 720-256-8076
Fwd: solr.xml
Any thoughts? Why are we getting duplicate items in solr.xml ? -- Forwarded message -- From: William Bell billnb...@gmail.com Date: Sat, Dec 7, 2013 at 1:48 PM Subject: solr.xml To: solr-user@lucene.apache.org We are having issues with SWAP in CoreAdmin in 4.5.1 and 4.6. Using legacy solr.xml we issue a SWAP, and we want it persisted. It had been running flawlessly since 4.5; now it creates duplicate lines in solr.xml. Even the example multi-core setup in 4.5.1 doesn't work with persistent=true - it creates duplicate lines in solr.xml. [full cores listing snipped; see the original "solr.xml" message below] -- Bill Bell billnb...@gmail.com cell 720-256-8076
solr.xml
We are having issues with SWAP in CoreAdmin in 4.5.1 and 4.6. Using legacy solr.xml we issue a SWAP, and we want it persisted. It had been running flawlessly since 4.5; now it creates duplicate lines in solr.xml. Even the example multi-core setup in 4.5.1 doesn't work with persistent=true - it creates duplicate lines in solr.xml:

<cores adminPath="/admin/cores">
  <core name="autosuggest" loadOnStartup="true" instanceDir="autosuggest" transient="false"/>
  <core name="citystateprovider" loadOnStartup="true" instanceDir="citystateprovider" transient="false"/>
  <core name="collection1" loadOnStartup="true" instanceDir="collection1" transient="false"/>
  <core name="facility" loadOnStartup="true" instanceDir="facility" transient="false"/>
  <core name="inactiveproviders" loadOnStartup="true" instanceDir="inactiveproviders" transient="false"/>
  <core name="linesvcgeo" instanceDir="linesvcgeo" loadOnStartup="true" transient="false"/>
  <core name="linesvcgeofull" instanceDir="linesvcgeofull" loadOnStartup="true" transient="false"/>
  <core name="locationgeo" loadOnStartup="true" instanceDir="locationgeo" transient="false"/>
  <core name="market" loadOnStartup="true" instanceDir="market" transient="false"/>
  <core name="portalprovider" loadOnStartup="true" instanceDir="portalprovider" transient="false"/>
  <core name="practice" loadOnStartup="true" instanceDir="practice" transient="false"/>
  <core name="provider" loadOnStartup="true" instanceDir="provider" transient="false"/>
  <core name="providersearch" loadOnStartup="true" instanceDir="providersearch" transient="false"/>
  <core name="tridioncomponents" loadOnStartup="true" instanceDir="tridioncomponents" transient="false"/>
  <core name="linesvcgeo" instanceDir="linesvcgeo" loadOnStartup="true" transient="false"/>
  <core name="linesvcgeofull" instanceDir="linesvcgeofull" loadOnStartup="true" transient="false"/>
</cores>

Note the duplicated linesvcgeo and linesvcgeofull entries at the end. -- Bill Bell billnb...@gmail.com cell 720-256-8076
Re: Migration from old solr.xml to the new solr.xml style
I don't know - what is a high number of cores? 10? 100? 1,000,000? In my initial tests I was getting around 1,000/second on a MacBook Pro with a spinning disk. Best, Erick

On Sat, Nov 30, 2013 at 3:02 PM, Yago Riveiro yago.rive...@gmail.com wrote: Erick, I have no custom stuff:

<solr persistent="true">
  <cores host="${host:}" adminPath="/admin/cores" zkClientTimeout="${zkClientTimeout:}" hostPort="${port:}" hostContext="${hostContext:}">
    <!-- all cores created with the COLLECTION API here -->
  </cores>
</solr>

Is this new method, which scans the folder where the cores are, fast on boxes with a high number of cores? -- Yago Riveiro Sent with Sparrow (http://www.sparrowmailapp.com/?sig)

On Saturday, November 30, 2013 at 7:52 PM, Erick Erickson wrote: Right, there is no auto-migration tool. The empty core.properties file trick _assumes_ that all the defaults are acceptable. If you've done any custom stuff in your core tags in old-style solr.xml, you'll have to create an equivalent entry in core.properties. Best, Erick [earlier quoted messages in this thread snipped]
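The throughput Erick mentions refers to new-style core discovery, which walks the core root looking for core.properties files. A rough sketch of that idea (a hypothetical helper for illustration, not Solr's actual implementation):

```python
import os

def discover_cores(core_root):
    """Return directories under core_root that contain a core.properties file.

    Sketch of new-style core discovery: any directory holding a
    core.properties file is treated as a core's instanceDir, and the walk
    does not descend further into it (cores are not nested).
    """
    found = []
    for dirpath, dirnames, filenames in os.walk(core_root):
        if "core.properties" in filenames:
            found.append(dirpath)
            dirnames[:] = []  # don't recurse into a discovered core
    return sorted(found)
```

Since this is a flat directory walk with an early cutoff at each core, the cost scales with the number of directories, which is consistent with discovery being fast even for boxes hosting many cores.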
Migration from old solr.xml to the new solr.xml style
Hi, Is there some way to automatically migrate from the old solr.xml style to the new one? /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Migration-from-old-solr-xml-to-the-new-solr-xml-style-tp4104154.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Migration from old solr.xml to the new solr.xml style
Did you read this: https://cwiki.apache.org/confluence/display/solr/Moving+to+the+New+solr.xml+Format 2013/11/30 yriveiro yago.rive...@gmail.com Hi, Is there some way to automatically migrate from the old solr.xml style to the new one? /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Migration-from-old-solr-xml-to-the-new-solr-xml-style-tp4104154.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Migration from old solr.xml to the new solr.xml style
Furkan, If I understand correctly, I must change solr.xml to the new style and add an empty file named core.properties in each core, right? -- Yago Riveiro Sent with Sparrow (http://www.sparrowmailapp.com/?sig) On Saturday, November 30, 2013 at 7:36 PM, Furkan KAMACI wrote: Did you read this: https://cwiki.apache.org/confluence/display/solr/Moving+to+the+New+solr.xml+Format 2013/11/30 yriveiro yago.rive...@gmail.com Hi, Is there some way to automatically migrate from the old solr.xml style to the new one? /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Migration-from-old-solr-xml-to-the-new-solr-xml-style-tp4104154.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Migration from old solr.xml to the new solr.xml style
Right, there is no auto-migration tool. The empty core.properties file trick _assumes_ that all the defaults are acceptable. If you've done any custom stuff in your core tags in old-style solr.xml, you'll have to create an equivalent entry in core.properties. Best, Erick On Sat, Nov 30, 2013 at 2:49 PM, Yago Riveiro yago.rive...@gmail.com wrote: Furkan, If I understand correctly, I must change solr.xml to the new style and add an empty file named core.properties in each core, right? -- Yago Riveiro Sent with Sparrow (http://www.sparrowmailapp.com/?sig) On Saturday, November 30, 2013 at 7:36 PM, Furkan KAMACI wrote: Did you read this: https://cwiki.apache.org/confluence/display/solr/Moving+to+the+New+solr.xml+Format 2013/11/30 yriveiro yago.rive...@gmail.com Hi, Is there some way to automatically migrate from the old solr.xml style to the new one? /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Migration-from-old-solr-xml-to-the-new-solr-xml-style-tp4104154.html Sent from the Solr - User mailing list archive at Nabble.com.
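To make the "empty core.properties trick" concrete: an empty file accepts all defaults, while non-default attributes of an old-style core tag become keys in that core's core.properties. A sketch using attribute names from the examples in this thread (the values are illustrative):

```properties
# core.properties in the core's instance directory.
# An empty file works if all defaults are acceptable;
# otherwise old <core> attributes map to keys like these:
name=collection1
loadOnStartup=true
transient=false
```

The directory containing this file becomes the core's instanceDir, which is why no per-core entry is needed in the new solr.xml.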
Re: Migration from old solr.xml to the new solr.xml style
Erick, I have no custom stuff:

<solr persistent="true">
  <cores host="${host:}" adminPath="/admin/cores" zkClientTimeout="${zkClientTimeout:}" hostPort="${port:}" hostContext="${hostContext:}">
    <!-- all cores created with the COLLECTION API here -->
  </cores>
</solr>

Is this new method, which scans the folder where the cores are, fast on boxes with a high number of cores? -- Yago Riveiro Sent with Sparrow (http://www.sparrowmailapp.com/?sig) On Saturday, November 30, 2013 at 7:52 PM, Erick Erickson wrote: Right, there is no auto-migration tool. The empty core.properties file trick _assumes_ that all the defaults are acceptable. If you've done any custom stuff in your core tags in old-style solr.xml, you'll have to create an equivalent entry in core.properties. Best, Erick On Sat, Nov 30, 2013 at 2:49 PM, Yago Riveiro yago.rive...@gmail.com wrote: Furkan, If I understand correctly, I must change solr.xml to the new style and add an empty file named core.properties in each core, right? -- Yago Riveiro Sent with Sparrow (http://www.sparrowmailapp.com/?sig) On Saturday, November 30, 2013 at 7:36 PM, Furkan KAMACI wrote: Did you read this: https://cwiki.apache.org/confluence/display/solr/Moving+to+the+New+solr.xml+Format 2013/11/30 yriveiro yago.rive...@gmail.com Hi, Is there some way to automatically migrate from the old solr.xml style to the new one? /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Migration-from-old-solr-xml-to-the-new-solr-xml-style-tp4104154.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Migration from old solr.xml to the new solr.xml style
There is no automatic tool to help you migrate. Beyond Erick's answer, if you have any problems you can ask them here. Thanks; Furkan KAMACI 2013/11/30 Erick Erickson erickerick...@gmail.com Right, there is no auto-migration tool. The empty core.properties file trick _assumes_ that all the defaults are acceptable. If you've done any custom stuff in your core tags in old-style solr.xml, you'll have to create an equivalent entry in core.properties. Best, Erick On Sat, Nov 30, 2013 at 2:49 PM, Yago Riveiro yago.rive...@gmail.com wrote: Furkan, If I understand correctly, I must change solr.xml to the new style and add an empty file named core.properties in each core, right? -- Yago Riveiro Sent with Sparrow (http://www.sparrowmailapp.com/?sig) On Saturday, November 30, 2013 at 7:36 PM, Furkan KAMACI wrote: Did you read this: https://cwiki.apache.org/confluence/display/solr/Moving+to+the+New+solr.xml+Format 2013/11/30 yriveiro yago.rive...@gmail.com Hi, Is there some way to automatically migrate from the old solr.xml style to the new one? /Yago - Best regards -- View this message in context: http://lucene.472066.n3.nabble.com/Migration-from-old-solr-xml-to-the-new-solr-xml-style-tp4104154.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: core swap duplicates core entries in solr.xml
Hi Jeremy, Could you open a JIRA ticket for this? Thanks, Alan Woodward www.flax.co.uk On 8 Nov 2013, at 21:16, Branham, Jeremy [HR] wrote: When performing a core swap in Solr 4.5.1 with persistence on, the two core entries that were swapped are duplicated. [solr.xml before and after the swap snipped; see the original "core swap duplicates core entries in solr.xml" message below]
core swap duplicates core entries in solr.xml
When performing a core swap in Solr 4.5.1 with persistence on, the two core entries that were swapped are duplicated.

solr.xml before the swap:

<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="true" sharedLib="lib">
  <cores adminPath="/admin/cores">
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="howtopolicies" dataDir="/data/v8p/solr/howtopolicies/data/"/>
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="wdsc" dataDir="/data/v8p/solr/wdsc/data/"/>
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="other" dataDir="/data/v8p/solr/other/data/"/>
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="psd" dataDir="/data/v8p/solr/psd/data/"/>
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="nat" dataDir="/data/v8p/solr/nat/data/"/>
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="wdsc2" dataDir="/data/v8p/solr/wdsc2/data/"/>
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="kms2" dataDir="/data/v8p/solr/kms/data/"/>
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="howtotools" dataDir="/data/v8p/solr/howtotools/data/"/>
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="ewts" dataDir="/data/v8p/solr/ewts/data/"/>
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="wdsr" dataDir="/data/v8p/solr/wdsr/data/"/>
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="wdsr2" dataDir="/data/v8p/solr/wdsr2/data/"/>
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="ce" dataDir="/data/v8p/solr/ce/data/"/>
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="sp2" dataDir="/data/v8p/solr/sp2/data/"/>
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="terms" dataDir="/data/v8p/solr/terms/data/"/>
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="tools" dataDir="/data/v8p/solr/tools/data/"/>
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="kms" dataDir="/data/v8p/solr/kms2/data/"/>
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="wdsp" dataDir="/data/v8p/solr/wdsp2/data/"/>
    <core schema="/data/v8p/solr/root/conf/schema.xml" instanceDir="/data/v8p/solr/root/" name="wdsp2" dataDir="/data/v8p/solr/wdsp/data/"/>
  </cores>
</solr>

After the swap was performed:

<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="true" sharedLib="lib">
  <cores adminPath="/admin/cores">
    <core name="ce" instanceDir="/data/v8p/solr/root/" schema="/data/v8p/solr/root/conf/schema.xml" dataDir="/data/v8p/solr/ce/data/"/>
    <core name="ewts" instanceDir="/data/v8p/solr/root/" schema="/data/v8p/solr/root/conf/schema.xml" dataDir="/data/v8p/solr/ewts/data/"/>
    <core name="howtopolicies" instanceDir="/data/v8p/solr/root/" schema="/data/v8p/solr/root/conf/schema.xml" dataDir="/data/v8p/solr/howtopolicies/data/"/>
    <core name="howtotools" instanceDir="/data/v8p/solr/root/" schema="/data/v8p/solr/root/conf/schema.xml" dataDir="/data/v8p/solr/howtotools/data/"/>
    <core name="kms" instanceDir="/data/v8p/solr/root/" schema="/data/v8p/solr/root/conf/schema.xml" dataDir="/data/v8p/solr/kms/data/"/>
    <core name="kms2" instanceDir="/data/v8p/solr/root/" schema="/data/v8p/solr/root/conf/schema.xml" dataDir="/data/v8p/solr/kms2/data/"/>
    <core name="nat" instanceDir="/data/v8p/solr/root/" schema="/data/v8p/solr/root/conf/schema.xml" dataDir="/data/v8p/solr/nat/data/"/>
    <core name="other" instanceDir="/data/v8p/solr/root/" schema="/data/v8p/solr/root/conf/schema.xml" dataDir="/data/v8p/solr/other/data/"/>
    <core name="psd" instanceDir="/data/v8p/solr/root/" schema="/data/v8p/solr/root/conf/schema.xml" dataDir="/data/v8p/solr/psd/data/"/>
    <core name="sp2" instanceDir="/data/v8p/solr/root/" schema="/data/v8p/solr/root/conf/schema.xml" dataDir="/data/v8p/solr/sp2/data/"/>
    <core name="terms" instanceDir="/data/v8p/solr/root/" schema="/data/v8p/solr/root/conf/schema.xml" dataDir="/data/v8p/solr/terms/data/"/>
    <core name="tools" instanceDir="/data/v8p/solr/root/" schema="/data/v8p/solr/root/conf/schema.xml" dataDir="/data/v8p/solr/tools/data/"/>
    <core name="wdsc" instanceDir="/data/v8p/solr/root/" schema="/data/v8p/solr/root/conf/schema.xml" dataDir="/data/v8p/solr/wdsc/data/"/>
    <core name="wdsc2" instanceDir="/data/v8p/solr/root/" schema="/data/v8p/solr/root/conf/schema.xml" dataDir="/data/v8p/solr/wdsc2/data/"/>
    <core name="wdsp" instanceDir="/data/v8p/solr/root/" schema="/data/v8p/solr/root/conf/schema.xml" dataDir="/data/v8p/solr/wdsp2/data/"/>
    <core name="wdsp2" instanceDir="/data/v8p/solr/root/" schema="/data/v8p/solr/root/conf/schema.xml" dataDir="/data/v8p/solr/wdsp/data/"/>
    <core name="wdsr" instanceDir="/data/v8p/solr/root/" schema="/data/v8p/solr/root/conf/schema.xml" dataDir="/data/v8p/solr/wdsr [message truncated in the archive]
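For reference, the intended effect of SWAP on persisted state is a pure exchange: two core names trade dataDirs, and the number of entries stays the same. A toy sketch (a dict-based illustration, not Solr's implementation) of the invariant the duplicate entries above violate:

```python
def swap_cores(registry, a, b):
    """Swap the dataDir bindings of core names a and b.

    registry maps core name -> dataDir. A correct SWAP exchanges the two
    values and leaves exactly one entry per core name; duplicated <core>
    lines in a persisted solr.xml would break this invariant.
    """
    swapped = dict(registry)
    swapped[a], swapped[b] = swapped[b], swapped[a]
    return swapped
```

After a swap the registry should still have one entry per name, which is what the persistence code failed to preserve in the report above.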
Re: Global User defined properties - solr.xml from Solr 4.4 to Solr 4.5
Done https://issues.apache.org/jira/browse/SOLR-5398 -- View this message in context: http://lucene.472066.n3.nabble.com/Global-User-defined-properties-solr-xml-from-Solr-4-4-to-Solr-4-5-tp4097740p4098143.html Sent from the Solr - User mailing list archive at Nabble.com.
Global User defined properties - solr.xml from Solr 4.4 to Solr 4.5
Hi, I am migrating from Solr 4.4 to Solr 4.5 and I have an issue with solr.xml. In my old solr.xml I had some properties I reuse across all my cores, and furthermore some properties related to each individual core:

<solr>
  <property name="lucene.version" value="LUCENE_40"/>
  <property name="store.fields" value="true"/>
  <cores adminPath="/admin/cores">
    <core name="person" instanceDir="person">
      <property name="solr.data.dir" value="/var/lib/solrdata/persondata/"/>
      <property name="poll.interval" value="00:00:05"/>
    </core>
    <core name="person" instanceDir="company">
      <property name="solr.data.dir" value="/var/lib/solrdata/companydata/"/>
      <property name="poll.interval" value="00:00:60"/>
    </core>
  </cores>
</solr>

With the new solr.xml format in Solr 4.5 I can put core-related properties in the core.properties file, but I can't find anywhere how to set up global properties like <property name="lucene.version" value="LUCENE_40"/>. Regards, Sergio -- View this message in context: http://lucene.472066.n3.nabble.com/Global-User-defined-properties-solr-xml-from-Solr-4-4-to-Solr-4-5-tp4097740.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Global User defined properties - solr.xml from Solr 4.4 to Solr 4.5
They _should_ be just the same. The new solr.xml format works by moving the individual core tags out, putting their settings in individual core.properties files, and removing the cores tag completely. So the global properties you refer to should be unaffected. But I haven't verified this personally. FWIW, Erick On Fri, Oct 25, 2013 at 7:30 AM, marotosg marot...@gmail.com wrote: [original question snipped; see the first message in this thread]
Re: Global User defined properties - solr.xml from Solr 4.4 to Solr 4.5
Hi Erick, thanks for your help. I tried with a solr.xml as follows:

<solr>
  <property name="lucene.version" value="LUCENE_40"/>
</solr>

It fails with this exception: Caused by: org.apache.solr.common.SolrException: No system property or default value specified for lucene.version value:${lucene.version} If I put this variable in core.properties as lucene.version=LUCENE_40, it works. Do you know if this is a bug? Thanks, Sergio -- View this message in context: http://lucene.472066.n3.nabble.com/Global-User-defined-properties-solr-xml-from-Solr-4-4-to-Solr-4-5-tp4097740p4097776.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Global User defined properties - solr.xml from Solr 4.4 to Solr 4.5
Property substitution is controlled by the conf/solrcore.properties file. The old solr.xml had a 'properties' attribute to control where that file is located, but that feature was removed from the new solr.xml (in Solr 4.4), so the substitution properties must go in the solrcore.properties file in the conf subdirectory of each core. -- Jack Krupansky -Original Message- From: marotosg Sent: Friday, October 25, 2013 11:26 AM To: solr-user@lucene.apache.org Subject: Re: Global User defined properties - solr.xml from Solr 4.4 to Solr 4.5 [quoted message snipped; see Sergio's previous message in this thread]
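Following Jack's description, the values that were global <property> elements in the old solr.xml would move into each core's conf/solrcore.properties. A sketch using the keys from Sergio's old configuration (per-core placement is the point; the file must be repeated in each core):

```properties
# conf/solrcore.properties inside a core's instance directory;
# these become available for ${...} substitution in that core's config.
lucene.version=LUCENE_40
store.fields=true
poll.interval=00:00:05
```

With this file in place, references like ${lucene.version} in solrconfig.xml resolve for that core only, which is exactly the redundancy Sergio objects to below.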
Re: Global User defined properties - solr.xml from Solr 4.4 to Solr 4.5
Right, but what if you have many properties shared across multiple cores? Then you have to copy the same properties into each individual core.properties file - isn't that redundant? My main problem is that I would like to keep several properties at the solr level, not the core level. Thanks a lot, Sergio -- View this message in context: http://lucene.472066.n3.nabble.com/Global-User-defined-properties-solr-xml-from-Solr-4-4-to-Solr-4-5-tp4097740p4097789.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Global User defined properties - solr.xml from Solr 4.4 to Solr 4.5
Yes, it is unfortunate that the 'properties' attribute was removed. -- Jack Krupansky -Original Message- From: marotosg Sent: Friday, October 25, 2013 12:52 PM To: solr-user@lucene.apache.org Subject: Re: Global User defined properties - solr.xml from Solr 4.4 to Solr 4.5 Right, but what if you have many properties shared across multiple cores? Then you have to copy the same properties into each individual core.properties file - isn't that redundant? My main problem is that I would like to keep several properties at the solr level, not the core level. Thanks a lot, Sergio -- View this message in context: http://lucene.472066.n3.nabble.com/Global-User-defined-properties-solr-xml-from-Solr-4-4-to-Solr-4-5-tp4097740p4097789.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Global User defined properties - solr.xml from Solr 4.4 to Solr 4.5
Can you file a JIRA issue?

- Mark

On Oct 25, 2013, at 12:52 PM, marotosg marot...@gmail.com wrote:

Right, but what if you have many properties being shared across multiple cores? That means you have to copy the same properties into each individual core.properties. Isn't this redundant data? My main problem is that I would like to keep several properties at the Solr level, not at the core level.

Thanks a lot
Sergio
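[Editor's note: one workaround for the redundancy Sergio describes, offered as a suggestion rather than as advice given in this thread: Solr's property substitution also resolves JVM system properties, so a value shared by every core on a node can be passed once at startup instead of being copied into each core.properties.]

```shell
# Solr 4.x example setup (start.jar in the example directory):
# -D system properties are visible to ${...} substitution in every core's config.
java -Dlucene.version=LUCENE_40 -jar start.jar
```

With that, ${lucene.version} resolves in every core's solrconfig.xml and schema.xml on the node, with no per-core properties file needed for that value.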
Split shard doesn't persist data correctly on solr.xml
I notice that when a SPLITSHARD operation finishes, solr.xml is not updated properly.

# Parent solr.xml:

  <core numShards="2" name="test_shard1_replica1" instanceDir="test_shard1_replica1"
        shard="shard1" collection="test"/>

# Children solr.xml:

  <core name="test_shard1_0_replica1" shardState="construction"
        instanceDir="test_shard1_0_replica1" shard="shard1_0" collection="test">
    <property name="shardRange" value="8000-bfff"/>
  </core>
  <core name="test_shard1_1_replica1" shardState="construction"
        instanceDir="test_shard1_1_replica1" shard="shard1_1" collection="test">
    <property name="shardRange" value="c000-"/>
  </core>

# Parent clusterstate:

  shard1:{
    range:"8000-",
    state:"inactive",
    replicas:{"192.168.2.18:8983_solr_test_shard1_replica1":{
      state:"active",
      base_url:"http://192.168.2.18:8983/solr",
      core:"test_shard1_replica1",
      node_name:"192.168.2.18:8983_solr",
      leader:"true"}}},

# Children clusterstate:

  shard1_0:{
    range:"8000-bfff",
    state:"active",
    replicas:{"192.168.2.18:8983_solr_test_shard1_0_replica1":{
      state:"active",
      base_url:"http://192.168.2.18:8983/solr",
      core:"statistics-11_shard1_0_replica1",
      node_name:"192.168.2.18:8983_solr",
      leader:"true"}}},
  shard1_1:{
    range:"c000-",
    state:"active",
    replicas:{"192.168.2.18:8983_solr_test_shard1_1_replica1":{
      state:"active",
      base_url:"http://192.168.2.18:8983/solr",
      core:"statistics-11_shard1_1_replica1",
      node_name:"192.168.2.18:8983_solr",
      leader:"true"}}},

I only noticed this because I did a restart and the nodes were shown as down on the cloud graph. The shards where I did a manual replication were written to the solr.xml file as expected, but not at the time that I executed the CREATE command.

Command:

  curl 'http://192.168.2.18:8983/solr/admin/cores?action=CREATE&name=test_shard2_0_replicaX&collection=test&shard=shard2_0'

Create replicaA: solr.xml writes nothing about replicaA. Create replicaB: solr.xml writes nothing about replicaB, but now registers data about replicaA. It's like I have a lag of one operation. Is this normal?
/Yago

Best regards

--
View this message in context: http://lucene.472066.n3.nabble.com/Split-shard-doesn-t-persist-data-correctly-on-solr-xml-tp4093996.html
Sent from the Solr - User mailing list archive at Nabble.com.
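[Editor's note: for context on the child shardRange values quoted above (8000...-bfff... and c000...-...): SPLITSHARD halves the parent shard's 32-bit hash range. A rough sketch of that arithmetic follows — illustrative only; Solr's real logic lives in its hash-based composite-id router, and the function name here is made up for the example.]

```python
def split_range(start, end, parts=2):
    """Split an inclusive hash range [start, end] into `parts` contiguous sub-ranges."""
    size = end - start + 1
    step = size // parts
    ranges = []
    for i in range(parts):
        lo = start + i * step
        # Last sub-range absorbs any remainder so the full range is covered.
        hi = end if i == parts - 1 else lo + step - 1
        ranges.append((lo, hi))
    return ranges

# shard1's range in the clusterstate above, as a 32-bit unsigned hex span:
parent = (0x80000000, 0xFFFFFFFF)
for lo, hi in split_range(*parent):
    print(f"{lo:08x}-{hi:08x}")
# 80000000-bfffffff   -> shard1_0
# c0000000-ffffffff   -> shard1_1
```

The printed spans match the truncated ranges shown in the children's clusterstate entries.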
Re: Do I need solr.xml?
Actually, you're getting a solr.xml file but you don't know it. When Solr doesn't find solr.xml, there's a default one hard-coded that is used. See ConfigSolrOld.java; at the end, DEF_SOLR_XML is defined. So, as Hoss says, it's much better to make one anyway so you know what you're getting. Consider setting it up for the discovery mode, see:
http://wiki.apache.org/solr/Solr.xml%204.4%20and%20beyond

Best
Erick

On Wed, Jul 24, 2013 at 9:21 PM, Chris Hostetter hossman_luc...@fucit.org wrote:

: I get what looks like the admin page, but it says that there are solr core
: initialization failures, and the links on the page just bring me back to the
: same page.

If you get an error on the admin UI, there should be specifics about *what* the initialization failure is -- at least one sentence -- and there should be a full stack trace in the logs. Having those details will help us understand the root of your first problem, which may explain your second problem.

It would also help to know what the CoreAdmin handler returns when you ask it for status about all the cores -- even if the *UI* is having problems in your browser, that should return useful info (like how many cores you have, if any, and which one had an init failure):

https://cwiki.apache.org/confluence/display/solr/CoreAdminHandler+Parameters+and+Usage#CoreAdminHandlerParametersandUsage-{{STATUS}}

: Second, when I try to put a doc in the index using the PHP Pecl Solr package
: from a page on my site, I get errors that indicate that Solr can't see my
: schema.xml file, since Solr doesn't recognize some of the fields that I've
: defined.
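[Editor's note: for reference, the hard-coded default Erick mentions behaves roughly like the minimal old-style solr.xml sketched below. This is paraphrased from memory of what DEF_SOLR_XML defines in Solr 4.x, not copied verbatim -- check ConfigSolrOld.java in your Solr version for the exact string.]

```xml
<!-- Approximate shape of the built-in fallback used when no solr.xml
     exists on disk: a single core named "collection1". -->
<solr persistent="false">
  <cores adminPath="/admin/cores" defaultCoreName="collection1">
    <core name="collection1" instanceDir="collection1"/>
  </cores>
</solr>
```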
: I have my updated schema.xml file in /etc/solr/collection1/conf/

That doesn't make sense -- if Solr couldn't see your schema.xml file at all, you wouldn't get an error about the fields you defined being missing; you'd get an error about the collection you are talking to not existing, because if your schema.xml file can't be found (or has a problem loading) the entire SolrCore won't load.

: <str name="msg">ERROR: [doc=334455] unknown field 'brand'</str>
: <int name="code">400</int> </lst> </response>
: ' in X:
: SolrClient->addDocument(Object(SolrInputDocument)) #1 {main} thrown in XX

That error indicates that your Solr client sent a document to some (valid and functioning) SolrCore which has a schema.xml that does not contain a field named "brand".

: And this is the relevant section of my schema.xml:
:
:   <field name="brand" type="int" indexed="false" stored="true"
:          required="true"/>

My best guess: you have multiple cores defined in your solr setup -- one of which is working, and is what your client is trying to talk to, but which doesn't have the schema.xml that you put your domain-specific fields in (maybe it's just the default example configs?) -- and you have another core defined, using your customized configs, which failed to load properly.

You mentioned that you did in fact put your configs in the collection1 dir, but w/o the specifics of what your solr home dir structure looks like, the specifics of your error message, details about the URLs your client tried to talk to when it got that error, etc., it's all just guesswork on our parts.

http://wiki.apache.org/solr/UsingMailingLists

: So my question is: do I actually need to create a solr.xml file, and all the
: accompanying files that go into specifying a core? (I'm not sure if there are,
: but from some of the documentation it seems like there may be.) Or am I
: pursuing an unnecessary solution to these problems, and there's a simpler fix?
The short answer to your specific question is no, you don't *have* to have a solr.xml (at least not in Solr 4.x), but it's a really good idea, even if you only want a single core, because it gives you a way to be explicit about what you want and be sure it's what you are getting.

-Hoss