lost in solr new core architecture
Hi, I am currently using Solr 4.2 with Tomcat, and I am stuck: I don't know how to upgrade to Solr 4.7. The problem is that I am familiar with the core architecture of Solr 4.2, in which we define every core name as well as its instanceDir, but not with the one in Solr 4.7. Any help will be appreciated, thanks.

With Regards
Aman Tandon
Re: lost in solr new core architecture
On 4/12/2014 12:27 AM, Aman Tandon wrote:
> Currently i am using solr 4.2 with tomcat ... i don't know how to upgrade to solr 4.7 ... i am familiar with the cores architecture of solr 4.2 in which we defined the every core name as well as instanceDir but not with solr 4.7 ...

Solr 4.7 will use the solr.xml from a 4.2 install with no problem. I just upgraded one copy of my index from 4.2.1 to 4.7.1 without changing my solr.xml. There is no need to switch to the new format until 5.0 gets released, which is NOT going to happen soon. If you use contrib or third-party jars, you may need to be aware of SOLR-4852.

If you do switch to the new solr.xml format, the instanceDir for each core is the directory where the core.properties file is found. It does not need to be specified anywhere -- it will be discovered.

Thanks,
Shawn
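To make the discovery mode concrete, here is a minimal sketch of the new-style layout (the core name and paths are hypothetical): any directory under the Solr home that contains a core.properties file is picked up as a core, and that directory becomes its instanceDir.

```
solr/                     # solr home
  solr.xml                # new-style solr.xml, no <cores>/<core> list
  mycore/
    core.properties       # may be as small as "name=mycore"; an empty
                          # file also works (name defaults to the directory)
    conf/solrconfig.xml
    conf/schema.xml
    data/
```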
Re: lost in solr new core architecture
Thanks Shawn, it works fine. But there is a problem showing in my Tomcat logs related to log4j: I have already put the log4j jar in the lib directory of Tomcat, as well as log4j.properties in WEB-INF\classes, but this error still appears in the logs. Also, no warm-up queries are showing; I am confused. [image: Inline image 1] Please help me; that's why I just don't like Windows PCs :(

On Sat, Apr 12, 2014 at 1:00 PM, Shawn Heisey s...@elyograg.org wrote:
> Solr 4.7 will use the solr.xml from a 4.2 install with no problem. I just upgraded one copy of my index from 4.2.1 to 4.7.1 without changing my solr.xml ...

--
With Regards
Aman Tandon
Re: lost in solr new core architecture
Aman:

Two things:

1) Images generally get stripped by the mail server or by receiving programs; all I see is a little box with "inline image 1" in it. People often post images somewhere else and provide a link to get around this problem.

2) No big deal, but it's better to start a new post when the topic changes rather than change the subject. From https://people.apache.org/~hossman/#threadhijack:

"When starting a new discussion on a mailing list, please do not reply to an existing message, instead start a fresh email. Even if you change the subject line of your email, other mail headers still track which thread you replied to and your question is hidden in that thread and gets less attention. It makes following discussions in the mailing list archives particularly difficult."

Best,
Erick

On Sat, Apr 12, 2014 at 4:51 AM, Aman Tandon amantandon...@gmail.com wrote:
> Thanks Shawn, it works fine. But there is a problem showing in my tomcat logs related to log4j ...
Re: Relevance/Rank
Looks to me like the original query was using the lucene query parser, whereas bq is a parameter of the edismax query parser. This means that the bq param is being ignored. Move the fq clause to the q param, and add ^2000 after the name:123-444 bit.

Upayavira

On Fri, Apr 11, 2014, at 06:03 PM, EXTERNAL Taminidi Ravi (ETI, Automotive-Service-Solutions) wrote:
> Hi, thanks Aman/Erick. I moved part of the query under q=*:* and there is a difference in the score and the order. It seems to work for me now; I will use this and move forward. Thanks, Ravi
>
> -----Original Message-----
> From: Aman Tandon [mailto:amantandon...@gmail.com]
> Sent: Friday, April 11, 2014 12:02 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Relevance/Rank
>
> It's fine, Erick. I am guessing that maybe fq=(SKU:204-161) ... this SKU with that value is present in all results; that's why Name products are not getting boosted. Ravi: check your results without filtering -- do all the results include SKU:204-161? I guess this may help.
>
> On Fri, Apr 11, 2014 at 9:22 AM, Erick Erickson erickerick...@gmail.com wrote:
> > Aman: Oops, looked at the wrong part of the query, didn't see the bq clause. You're right of course. Sorry for the misdirection. Erick
>
> --
> With Regards
> Aman Tandon
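A hedged sketch of the two fixes Upayavira describes, with the field names and values taken from the thread (the surrounding parameters are assumptions):

```
# Option 1 (what Upayavira suggests): stay with the lucene parser and
# move the filter into q, boosting the name clause directly
q=SKU:204-161 Name:123-444^2000

# Option 2: actually switch the parser so the bq parameter takes effect
defType=edismax&q=123-444&qf=Name SKU&bq=Name:123-444^2000
```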
stuck with log4j configuration
I have upgraded my Solr 4.2 to Solr 4.7.1, but in my logs there is an error for log4j: "log4j: Could not find resource". Please find a screenshot of the error console at https://drive.google.com/file/d/0B5GzwVkR3aDzdjE1b2tXazdxcGs/edit?usp=sharing

--
With Regards
Aman Tandon
Re: lost in solr new core architecture
Sorry, Erick. I will keep that in mind from next time.

On Sat, Apr 12, 2014 at 6:06 PM, Erick Erickson erickerick...@gmail.com wrote:
> Aman: Two things: 1) Images generally get stripped by the mail server or by receiving programs ... 2) No big deal, but it's better to start a new post when the topic changes rather than change the subject ...

--
With Regards
Aman Tandon
Re: stuck with log4j configuration
Have you seen https://wiki.apache.org/solr/SolrTomcat#Logging and http://tomcat.apache.org/tomcat-6.0-doc/logging.html ?

Best,
Erick

P.S. I don't know much about setting up Tomcat, so that's the best I can do...

On Sat, Apr 12, 2014 at 6:26 AM, Aman Tandon amantandon...@gmail.com wrote:
> I have upgraded my solr4.2 to solr 4.7.1 but in my logs there is an error for log4j ...
Re: stuck with log4j configuration
Haha... yeah, I think it's been about 17-18 hours of trying to integrate Solr with Tomcat when you don't know anything about it. Thanks for these resources; let's see what happens :D

On Sat, Apr 12, 2014 at 7:09 PM, Erick Erickson erickerick...@gmail.com wrote:
> Have you seen: https://wiki.apache.org/solr/SolrTomcat#Logging and http://tomcat.apache.org/tomcat-6.0-doc/logging.html ? ...

--
With Regards
Aman Tandon
Apache Solr SpellChecker Integration with the default select request handler
Hello fellow Solr users,

I am using the default select request handler to search a Solr core, and I also use the eDismax query parser.

1. I want to integrate this with the spellchecker search component, so that when a search request comes in, the spellchecker component also gets called and I get a suggestion back with the search results.
2. If the suggestion is above a certain threshold, then I want the search to be made on that suggestion; otherwise the suggestion should come back along with the search results for the original search term.

To accomplish this, it seems I need to modify the SearchHandler.java class to call the spellchecker internally and then make a search call if the spellchecker returns a suggestion above a certain threshold. I would really appreciate it if anyone could share examples of calling the SpellChecker component via the API in Solr, and also validate my approach. Thank you.
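For what it's worth, the stock spellcheck component can usually be attached to the /select handler with configuration alone, with no change to SearchHandler.java. A sketch, assuming a spellcheck component named "spellcheck" is already defined in solrconfig.xml (the dictionary name and defaults here are assumptions):

```
<!-- solrconfig.xml: hook the spellcheck component into /select -->
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <str name="spellcheck">true</str>
    <str name="spellcheck.dictionary">default</str>
    <str name="spellcheck.collate">true</str>
    <str name="spellcheck.maxCollations">1</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```

Note that the stock component does not re-run the search automatically when a suggestion crosses a threshold; spellcheck.collate returns a rewritten query alongside the original results, so the threshold logic in step 2 would still live in the caller, which re-issues the collated query when it decides to.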
Re: Apache Solr SpellChecker Integration with the default select request handler
Hi;

Do you use SolrJ in your application? Have you considered solving this with SolrJ?

Thanks;
Furkan KAMACI

2014-04-12 18:34 GMT+03:00 S.L simpleliving...@gmail.com:
> Hello fellow Solr users, I am using the default select request handler to search a Solr core, I also use the eDismax query parser ... I would really appreciate if there are any examples of calling the SpellChecker component via the API in Solr that someone can share with me, and also if you could validate my approach. Thank You.
Re: Apache Solr SpellChecker Integration with the default select request handler
Yes, I use SolrJ, but only to index the data; the querying of the data happens using the default select query handler from a non-Java client.

On Sat, Apr 12, 2014 at 12:12 PM, Furkan KAMACI furkankam...@gmail.com wrote:
> Hi; Do you use Solrj at your application? Why you did not consider to use to solve this with Solrj? ...
Re: stuck with log4j configuration
Consider the Heliosearch distribution of Solr (HDS) - it comes pre-configured for Tomcat:
http://heliosearch.com/heliosearch-distribution-for-solr/

-- Jack Krupansky

-----Original Message-----
From: Aman Tandon
Sent: Saturday, April 12, 2014 10:16 AM
To: solr-user@lucene.apache.org
Subject: Re: stucked with log4j configuration

Haha...Yeah i thought its about 17/18 hours of try to integrate solr with tomcat if you don't know about anything. Thanks for these resources let's see what can happen :D ...
Question regarding solrj
Hi Solr Gurus,

I have a doubt related to the SolrJ client. My scenario is like this:

- There is a proxy server (a Play app) which internally queries Solr.
- The proxy server is called from the client side, which uses the SolrJ library. The issue is that I can't change the client code; I can only change configurations to call different servers, hence I need to use SolrJ.
- Results are successfully returned from my Play app in javabin format without modifying them, but on the client side I am receiving this exception:

Caused by: java.lang.NullPointerException
    at org.apache.solr.common.util.JavaBinCodec.readExternString(JavaBinCodec.java:689)
    at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:188)
    at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:112)
    at org.apache.solr.client.solrj.impl.BinaryResponseParser.processResponse(BinaryResponseParser.java:41)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:385)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:180)
    at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:90)
    at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:310)
    at com.ibm.commerce.foundation.internal.server.services.search.util.SearchQueryHelper.query(SearchQueryHelper.java:125)
    at com.ibm.commerce.foundation.server.services.rest.search.processor.solr.SolrRESTSearchExpressionProcessor.performSearch(SolrRESTSearchExpressionProcessor.java:506)
    at com.ibm.commerce.foundation.server.services.search.SearchServiceFacade.performSearch(SearchServiceFacade.java:193)

I am not sure if this exception is related to some issue in the response format, or with querying a non-Solr server from SolrJ. Let me know your thoughts.

Thanks,
Prashant
Fetching document by comparing date to today date
Hello, I have come across many threads where people asked how to fetch docs based on a date comparison; my problem is pretty much along the same lines. Based on today's date, I want to fetch the documents which are live. For example, I have three docs:

doc1 liveDate=1-MAR-2014
doc2 liveDate=1-APR-2014
doc3 liveDate=1-MAY-2014

I want to select only one doc based on today's date. If today is 14-APR and I run the query liveDate:[* TO 14-APR-2014], it gets two docs; I want only the latest one, which is doc2. Is there an out-of-the-box method which can solve my issue?

To fix this, I proposed to have liveStartDate and liveEndDate on each doc:

doc1 liveStartDate=1-MAR-2014 liveEndDate=31-MAR-2014
doc2 liveStartDate=1-APR-2014 liveEndDate=31-APR-2014
doc3 liveStartDate=1-MAY-2014 liveEndDate=31-MAY-2014

Hence, if today is 14-APR-2014, can I run a query with a condition like currentDate > liveStartDate AND currentDate < liveEndDate? Can someone please let me know how to do this kind of date comparison.

thanks
darniz

--
View this message in context: http://lucene.472066.n3.nabble.com/Fetching-document-by-comparing-date-to-today-date-tp4130802.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: Fetching document by comparing date to today date
Date math lets you add and subtract offsets in various date and time units, and truncate to a specified unit as well. For example:

q=someDateField:[NOW/DAY TO NOW+1DAY/DAY}

Note the use of } to exclude the end point of the range. Also, be careful to URL-encode the +, otherwise URL parsing will treat it as a space.

See: https://cwiki.apache.org/confluence/display/solr/Working+with+Dates

-- Jack Krupansky

-----Original Message-----
From: Darniz
Sent: Saturday, April 12, 2014 4:33 PM
To: solr-user@lucene.apache.org
Subject: Fetching document by comparing date to today date

Hello i have come across many threads where people have asked how to fetch doc based on date comparison, my problem is pretty much on the same line ...
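Since the unescaped + is an easy trap, here is a small Python sketch of encoding the date-math range before putting it on the URL (the field name someDateField is illustrative, as above):

```python
from urllib.parse import quote

# Date-math range meaning "today": from midnight today up to (but not
# including) midnight tomorrow, per the exclusive '}' end bracket.
raw = "someDateField:[NOW/DAY TO NOW+1DAY/DAY}"

# Percent-encode the value; an unescaped '+' would be decoded as a
# space on the server side and break the date math.
encoded = quote(raw)
print(encoded)
```

The '+' comes out as %2B, which the servlet container decodes back to a literal '+' for Solr.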
Re: Fetching document by comparing date to today date
Thanks for the quick answer. I was able to solve my problem with the addition of two new fields; if today's date is April 14, my query is:

(liveStartDate:[* TO 2014-04-14T00:00:00Z] AND liveEndDate:[2014-04-14T00:00:00Z TO *])

and it fetches me the correct document. I guess my initial question was whether Solr provides out-of-the-box functionality: if I have the three documents below, how can I get only doc2, assuming today's date is 14-APRIL-2014? For simplicity I gave the liveDate values as the start of each month, but in real life these dates can be anything.

doc1 liveDate=1-MAR-2014
doc2 liveDate=1-APR-2014
doc3 liveDate=1-MAY-2014

--
View this message in context: http://lucene.472066.n3.nabble.com/Fetching-document-by-comparing-date-to-today-date-tp4130802p4130807.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: Strange double-logging with log4j
On 4/11/2014 3:21 PM, Shawn Heisey wrote:
> This is lucene_solr_4_7_2_r1586229, downloaded from the release manager's staging area. I configured the following in my log4j.properties file:
>
> log4j.rootLogger=WARN, file
> log4j.category.org.apache.solr.core.SolrCore=INFO, file
>
> Now EVERYTHING that SolrCore logs (which is all at INFO) is being logged twice. Should I have done this differently, or is there a bug? I am using a container setup that is almost exactly like the example. The slf4j jars have been upgraded to 1.7.6 and jetty's jars have been upgraded to 8.1.14.

I did figure out how to fix it:

log4j.additivity.org.apache.solr.core.SolrCore=false

I do not know what actually caused the additivity, though. I can't see anything in my logging config that would result in that class being asked to log twice. Full config below:

# Logging level
log4j.rootLogger=WARN, file
log4j.category.org.apache.solr.core.SolrCore=INFO, file
log4j.additivity.org.apache.solr.core.SolrCore=false

#- size rotation with log cleanup.
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.MaxFileSize=2GB
log4j.appender.file.MaxBackupIndex=9

#- File to log to and log format
log4j.appender.file.File=/index/solr4/logs/solr.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%-5p - %d{yyyy-MM-dd HH:mm:ss.SSS}; %C; %m\n

Thanks,
Shawn
Re: Apache Solr SpellChecker Integration with the default select request handler
Hi;

I do not want to change the direction of your question, but it is really good, secure, and flexible to do this kind of thing at your client (a Java client or not). On the other hand, if you let people access your Solr instance directly, it causes security issues.

Thanks;
Furkan KAMACI

2014-04-12 19:26 GMT+03:00 S.L simpleliving...@gmail.com:
> Yes, I use solrJ, but only to index the data; the querying of the data happens using the default select query handler from a non-Java client. ...
Re: deleting large amount data from solr cloud
Hi;

Do you get any problems when you index your data? On the other hand, deleting in bulks and reducing the number of documents per delete may help you avoid hitting OOM.

Thanks;
Furkan KAMACI

2014-04-12 8:22 GMT+03:00 Aman Tandon amantandon...@gmail.com:
> Vinay please share your experience after trying this solution.
>
> On Sat, Apr 12, 2014 at 4:12 AM, Vinay Pothnis poth...@gmail.com wrote:
> > The query is something like this:
> >
> > curl -H 'Content-Type: text/xml' --data '<delete><query>param1:(val1 OR val2) AND -param2:(val3 OR val4) AND date_param:[138395520 TO 138516480]</query></delete>' 'http://host:port/solr/coll-name1/update?commit=true'
> >
> > Trying to restrict the number of documents deleted via the date parameter. Had not tried the distrib=false option. I could give that a try. Thanks for the link! I will check on the cache sizes and autowarm values. Will try and disable the caches when I am deleting and give that a try. Thanks Erick and Shawn for your inputs!
> > -Vinay
> >
> > On 11 April 2014 15:28, Shawn Heisey s...@elyograg.org wrote:
> > > On 4/10/2014 7:25 PM, Vinay Pothnis wrote:
> > > > When we tried to delete the data through a query - say 1 day/month's worth of data - after deleting just 1 month's worth of data, the master node is going out of memory - heap space. Wondering if there is any way to incrementally delete the data without affecting the cluster adversely.
> > >
> > > I'm curious about the actual query being used here. Can you share it, or a redacted version of it? Perhaps there might be a clue there? Is this a fully distributed delete request? One thing you might try, assuming Solr even supports it, is sending the same delete request directly to each shard core with distrib=false. Here's a very incomplete list of ways you can reduce Solr heap requirements: http://wiki.apache.org/solr/SolrPerformanceProblems#Reducing_heap_requirements
> > >
> > > Thanks,
> > > Shawn
>
> --
> With Regards
> Aman Tandon
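One way to make the deletes incremental is to slice the date range into smaller windows and issue one delete-by-query per window, committing between them, so no single delete has to walk the whole range. A Python sketch of building those windowed queries; the field names mirror the redacted query from the thread, and the epoch-millisecond bounds and one-day window size are illustrative assumptions:

```python
# Split a large delete-by-query into smaller date windows so each
# delete touches only a bounded slice of documents.
def delete_windows(start_ms, end_ms, window_ms):
    """Yield (lo, hi) epoch-ms bounds covering [start_ms, end_ms)."""
    lo = start_ms
    while lo < end_ms:
        hi = min(lo + window_ms, end_ms)
        yield lo, hi
        lo = hi

def delete_queries(start_ms, end_ms, window_ms):
    # Each query mirrors the shape used in the thread, one window at a
    # time; the '}' upper bracket keeps windows from overlapping.
    return [
        "param1:(val1 OR val2) AND -param2:(val3 OR val4)"
        " AND date_param:[%d TO %d}" % (lo, hi)
        for lo, hi in delete_windows(start_ms, end_ms, window_ms)
    ]

DAY_MS = 24 * 60 * 60 * 1000
# Hypothetical 14-day range; the real bounds come from your data.
queries = delete_queries(1383955200000, 1385164800000, DAY_MS)
print(len(queries))
```

Each query in the list would then be posted to /update (with a commit between batches) instead of one giant delete.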
Re: Question regarding solrj
Hi;

If you have a chance to change the code on the client side, I would suggest trying this:
http://lucene.apache.org/solr/4_2_1/solr-solrj/org/apache/solr/client/solrj/impl/HttpSolrServer.html#setParser(org.apache.solr.client.solrj.ResponseParser)

There may be a problem with the character encoding of your Play app; here is the relevant information: javabin is a custom binary format used to write out Solr's response in a fast and efficient manner. As of Solr 3.1, the javabin format has changed to version 2. Version 2 serializes strings differently: instead of writing the number of UTF-16 characters followed by the bytes in modified UTF-8, it writes the number of UTF-8 bytes followed by the bytes in UTF-8.

Which versions of Solr and SolrJ do you use, respectively? On the other hand, if you give us more information I can help you further, because there may be another interesting issue at play, as here: https://issues.apache.org/jira/browse/SOLR-5744

Thanks;
Furkan KAMACI

2014-04-12 22:18 GMT+03:00 Prashant Golash prashant.gol...@gmail.com:
> Hi Solr Gurus, I have some doubt related to solrj client. My scenario is like this: There is a proxy server (Play App) which internally queries solr ... on client side, I am receiving this exception: Caused by: java.lang.NullPointerException at org.apache.solr.common.util.JavaBinCodec.readExternString(JavaBinCodec.java:689) ... I am not sure, if this exception is related to some issue in response format or with respect to querying non-solr server from solrj. Let me know your thoughts. Thanks, Prashant
Re: Regex For *|* at hl.regex.pattern
Hi;

I found a way to achieve it when I debugged the source code. Defining a delimiter and indexing it as an individual token is the first step. Writing a regex that matches the given delimiter is the next step. The last step is defining the slop size: when you have a big slop size you get the whole sentence.

Thanks;
Furkan KAMACI

2014-04-08 13:30 GMT+03:00 Furkan KAMACI furkankam...@gmail.com:
> Hi Jack; My sentence delimiter is not one character; it is *|* How do I write a regex for it?
>
> 2014-04-08 8:06 GMT+03:00 Jack Krupansky j...@basetechnology.com:
> > The regex pattern should match the text of the fragment. IOW, exclude whatever delimiters are not allowed in the fragment. The default is: [-\w ,\n']{20,200}
> >
> > -- Jack Krupansky
> >
> > -----Original Message-----
> > From: Furkan KAMACI
> > Sent: Monday, April 7, 2014 10:21 AM
> > To: solr-user@lucene.apache.org
> > Subject: Regex For *|* at hl.regex.pattern
> >
> > Hi; I try that but it does not work; do I miss anything:
> > q=portu&hl.regex.pattern=.*\*\|\*.*&hl.fragsize=120&hl.regex.slop=0.2
> > My aim is to check whether it includes *|* or not (that's why I've put .* at the beginning and end of the regex, to match whatever surrounds it). How to fix it? Thanks; Furkan KAMACI
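On the original question of writing a regex for the literal multi-character delimiter *|*: letting the regex library escape it avoids the backslash guesswork, since both * and | are regex metacharacters. A small Python sketch (the sample text is made up):

```python
import re

delimiter = "*|*"
# Escape the delimiter so * and | are matched literally,
# not as regex operators.
escaped = re.escape(delimiter)

text = "first fragment of the sentence*|*second fragment follows here"
# Split on the literal delimiter, the way a fragmenter boundary would.
fragments = re.split(escaped, text)
print(fragments)
```

The escaped form, \*\|\*, is the pattern you would embed in hl.regex.pattern (suitably URL-encoded).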
Re: highlighting displays to much
Hi;

Firstly, it is not usual for the highlighter to cut up words. When you change the slop size you will see that the highlight size may change. The slop size is how far the fragmenter can stray from the ideal fragment size: a slop of 0.2 means that the fragmenter can go over or under by 20%.

Thanks;
Furkan KAMACI

2014-04-11 13:16 GMT+03:00 ao...@swissonline.ch:
> I am using Solr 4.3.1 and want to highlight complete sentences if possible, or at least not cut up words. If it finds something, the whole field is displayed instead of only 180 chars. The field is:
>
> <fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
> <field name="plain_text" type="text_de" indexed="true" stored="true" default="" />
>
> solrconfig settings for highlighting:
>
> <str name="hl">true</str>
> <str name="hl.fl">plain_text title description</str>
> <str name="hl.simple.pre">&lt;b&gt;</str>
> <str name="hl.simple.post">&lt;/b&gt;</str>
> <str name="hl.snippets">5</str>
> <str name="hl.fragsize">180</str>
> <str name="hl.fragmenter">regex</str>
> <str name="hl.regex.slop">0.2</str>
> <str name="hl.regex.pattern">\w[^\.!\?]{20,160}</str>
Re: Search a list of words and returned order
Hi;

Documents with more words that match the query will be higher in the result list than those with fewer matching words. If you want documents that have all the query words in their fields at the top of the result list, you can try this: define a query with an mm of 1, and define a boost query with the help of parameter dereferencing, like:

{!edismax qf=$boostQueryQf mm=100% v=$mainQuery}^10

You can check the Apache Solr CookBook, page 296. If you have any problems I can help you.

Thanks;
Furkan KAMACI

2014-04-11 19:08 GMT+03:00 Jack Krupansky j...@basetechnology.com:
> Generally, the documents containing more of the terms should score higher and be returned first, but relevancy for some terms can skew that ordering, to some degree. What specific use cases are failing for you? You can always add an additional optional subquery which is the AND of all terms and has a significant boost:
>
> q=see spot run (+see +spot +run)^10
>
> -- Jack Krupansky
>
> -----Original Message-----
> From: Croci Francesco Luigi (ID SWS)
> Sent: Friday, April 11, 2014 9:47 AM
> To: 'solr-user@lucene.apache.org'
> Subject: Search a list of words and returned order
>
> When I search for a list of words, by default Solr uses the OR operator. In my case I index (pdf) files. What can I do so that when I search the index for a list of words, I get the list of documents ordered with the ones that have all the words in them first? Thank you, Francesco
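A hedged sketch of that dereferenced form written out as request parameters (the parameter names qq and qff and the field list are illustrative; the nested _query_ syntax belongs to the lucene parser):

```
q=_query_:"{!edismax qf=$qff mm=1 v=$qq}" OR _query_:"{!edismax qf=$qff mm=100% v=$qq}"^10
qq=see spot run
qff=title description
```

Documents matching all the terms pick up the ^10 boost from the second clause, while mm=1 in the first clause keeps partial matches in the result set.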
Re: svn vs GIT
Hi Amon; There has been a conversation about it at dev list: http://search-lucene.com/m/PrTmPXyDlv/The+Old+Git+Discussionsubj=Re+The+Old+Git+Discussion On the other hand you do not need to know SVN to use, develop and contribute to Apache Solr project. You can follow the project at GitHub: https://github.com/apache/lucene-solr Thanks; Furkan KAMACI 2014-04-11 5:12 GMT+03:00 Aman Tandon amantandon...@gmail.com: thanks sir, in that case i need to know about svn as well. Thanks Aman Tandon On Fri, Apr 11, 2014 at 7:26 AM, Alexandre Rafalovitch arafa...@gmail.comwrote: You can find the read-only Git's version of Lucene+Solr source code here: https://github.com/apache/lucene-solr . The SVN preference is Apache Foundation's choice and legacy. Most of the developers' workflows are also around SVN. Regards, Alex. Personal website: http://www.outerthoughts.com/ Current project: http://www.solr-start.com/ - Accelerating your Solr proficiency On Fri, Apr 11, 2014 at 7:48 AM, Aman Tandon amantandon...@gmail.com wrote: Hi, I am new here, i have question in mind that why we are preferring the svn more than git? -- With Regards Aman Tandon
Re: No route to host
Hi; Explanation of NoRouteToHostException: "Signals that an error occurred while attempting to connect a socket to a remote address and port. Typically, the remote host cannot be reached because of an intervening firewall, or if an intermediate router is down." Try to access that page via curl or a browser, or ping it from the machine that runs the code. Thanks; Furkan KAMACI 2014-04-10 8:34 GMT+03:00 Suresh Soundararajan suresh.soundarara...@aspiresys.com: Are you running Solr in the built-in Jetty server or in Tomcat? First check that http://host:8080/ is working. If that is working, then check http://host:8080/solr, which will display the Solr admin page. From this page you can check whether the collection1 core is available, and you can also view the log to check for any missing configuration. Thanks, SureshKumar.S From: Rallavagu rallav...@gmail.com Sent: Thursday, April 10, 2014 2:13 AM To: solr-user@lucene.apache.org Subject: Re: No route to host Sorry. I should have mentioned earlier. I have removed the original host name on purpose. Thanks. On 4/9/14, 1:42 PM, Siegfried Goeschl wrote: Hi folks, the URL looks wrong (misconfigured): http://host:8080/solr/collection1 Cheers, Siegfried Goeschl On 09 Apr 2014, at 14:28, Rallavagu rallav...@gmail.com wrote: All, I see the following error in the log file. The host that it is trying to find is itself. Wondering if anybody has experienced this before; any other info would be helpful. Thanks.
709703139 [http-bio-8080-exec-43] ERROR org.apache.solr.update.SolrCmdDistributor - org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://host:8080/solr/collection1
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:503)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:197)
        at org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer.request(ConcurrentUpdateSolrServer.java:293)
        at org.apache.solr.update.SolrCmdDistributor.submit(SolrCmdDistributor.java:212)
        at org.apache.solr.update.SolrCmdDistributor.distribCommit(SolrCmdDistributor.java:181)
        at org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1260)
        at org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:157)
        at org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
        at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1859)
        at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:710)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:413)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:197)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
        at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
        at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1023)
        at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
        at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:312)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.NoRouteToHostException: No route to host
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at
Re: Fetching document by comparing date to today date
Hi Darniz; Why don't you filter your documents that have a date up to TODAY, then sort them by date, and finally get only one document with rows=1? Thanks; Furkan KAMACI 2014-04-13 0:08 GMT+03:00 Darniz rnizamud...@edmunds.com: Thanks for the quick answer. i was able to solve my problem with the addition of two new fields; if today's date is april 14 my query is (liveStartDate:[* TO 2014-04-14T00:00:00Z] AND liveEndDate:[2014-04-14T00:00:00Z TO *]) and it fetches me the correct document. i guess my initial question was: does solr provide out-of-the-box functionality, given the three documents below, to get only doc2 if i assume today's date is 14-APRIL-2014? for simplicity i gave the liveDate as dates beginning at the start of each month, but in real life these dates can be anything. doc1 liveDate=1-MAR-2014 doc2 liveDate=1-APR-2014 doc3 liveDate=1-MAY-2014 -- View this message in context: http://lucene.472066.n3.nabble.com/Fetching-document-by-comparing-date-to-today-date-tp4130802p4130807.html Sent from the Solr - User mailing list archive at Nabble.com.
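Darniz's two-field window boils down to formatting "today" in Solr's ISO-8601 form. A sketch (the helper name is hypothetical; the field names are taken from the mail):

```python
from datetime import datetime

# Sketch: build the filter query from the thread for an arbitrary "today".
def live_window_fq(today):
    stamp = today.strftime("%Y-%m-%dT%H:%M:%SZ")
    return "(liveStartDate:[* TO %s] AND liveEndDate:[%s TO *])" % (stamp, stamp)

print(live_window_fq(datetime(2014, 4, 14)))
# (liveStartDate:[* TO 2014-04-14T00:00:00Z] AND liveEndDate:[2014-04-14T00:00:00Z TO *])
```

Note that Solr's own date math makes the client-side date unnecessary: fq=liveStartDate:[* TO NOW] AND liveEndDate:[NOW TO *] should behave the same without rebuilding the query each day.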
Re: waitForLeaderToSeeDownState when leader is down
Hi; There is an explanation as follows: This is meant to protect the case where you stop a shard or it fails and then the first node to get started back up has stale data - you don't want it to just become the leader. So we wait to see everyone we know about in the shard, up to 3 or 5 min by default. Then we know all the shards participate in the leader election and the leader will end up with all the updates it should have. You can check it from here: http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201306.mbox/%3ccajt9wng_yykcxggentgcxguhhcjhidear-jygpgrnkaedrz...@mail.gmail.com%3E Thanks; Furkan KAMACI 2014-04-08 23:51 GMT+03:00 Jessica Mallet mewmewb...@gmail.com: To clarify, when I said leader and follower I meant the old leader and follower before the zookeeper session expiration. When they're recovering there's no leader. On Tue, Apr 8, 2014 at 1:49 PM, Jessica Mallet mewmewb...@gmail.com wrote: I'm playing with dropping the cluster's connections to zookeeper and then reconnecting them, and during recovery, I always see this in the leader's logs:

ElectionContext.java (line 361) Waiting until we see more replicas up for shard shard1: total=2 found=1 timeoutin=139902

and then on the follower, I see:

SolrException.java (line 121) There was a problem finding the leader in zk: org.apache.solr.common.SolrException: Could not get leader props
        at org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:958)
        at org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:922)
        at org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1463)
        at org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:380)
        at org.apache.solr.cloud.ZkController.access$100(ZkController.java:84)
        at org.apache.solr.cloud.ZkController$1.command(ZkController.java:232)
        at org.apache.solr.common.cloud.ConnectionManager$2$1.run(ConnectionManager.java:179)
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /collections/lc4/leaders/shard1
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
        at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1151)
        at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:273)
        at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:270)
        at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:73)
        at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:270)
        at org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:936)
        ... 6 more

They block each other's progress until the leader decides to give up and not wait for more replicas to come up:

ElectionContext.java (line 368) Was waiting for replicas to come up, but they are taking too long - assuming they won't come back till later

and then recovery moves forward again. Should waitForLeaderToSeeDownState move on if there's no leader at the moment? Thanks, Jessica
Re: Delete by query with soft commit
Hi Jess; Could you check here first: http://search-lucene.com/m/QTPaSxpsW/Commit+Within+and+%252Fupdate%252Fextract+handlersubj=Re+Commit+Within+and+update+extract+handler and then here: http://searchhub.org/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/ Thanks; Furkan KAMACI 2014-04-09 0:24 GMT+03:00 youknow...@heroicefforts.net youknow...@heroicefforts.net: It appears that UpdateResponse.setCommitWithin is not honored when executing a delete query against SolrCloud (SolrJ 4.6). However, setting the hard commit parameter functions as expected. Is this a known bug? Thanks, -Jess
Re: OutOfMemoryError while merging large indexes
Hi; According to Sun, the error happens if too much time is being spent in garbage collection: "if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown." Specifying more memory should help. On the other hand you should check here: http://wiki.apache.org/solr/SolrPerformanceProblems and here: http://wiki.apache.org/solr/ShawnHeisey Thanks; Furkan KAMACI 2014-04-09 4:25 GMT+03:00 François Schiettecatte fschietteca...@gmail.com: Have you tried using: -XX:-UseGCOverheadLimit François On Apr 8, 2014, at 6:06 PM, Haiying Wang haiyingwa...@yahoo.com wrote: Hi, We were trying to merge a large index (9GB, 21 million docs) into the current index (only 13MB), using the mergeindexes command of CoreAdminHandler, but always run into an OOM error. We currently set the max heap size to 4GB for the Solr server. We are using 4.6.0, and did not change the original solrconfig.xml. Is there any setting/configuration that could help complete the mergeindexes process without running into an OOM error? I can increase the max JVM heap size, but am afraid that may not scale in case a larger index needs to be merged in the future, and am hoping the index merge can be performed with a limited memory footprint. Please help. Thanks!
The jvm heap setting: -Xmx4096M -Xms512M

Command used:
curl "http://dev101:8983/solr/admin/cores?action=mergeindexes&core=collection1&indexDir=/solr/tmp/data/snapshot.20140407194442777"

OOM error stack trace:
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:133)
        at java.lang.StringCoding.decode(StringCoding.java:179)
        at java.lang.String.<init>(String.java:483)
        at java.lang.String.<init>(String.java:539)
        at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.readField(CompressingStoredFieldsReader.java:187)
        at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.visitDocument(CompressingStoredFieldsReader.java:351)
        at org.apache.lucene.index.SegmentReader.document(SegmentReader.java:276)
        at org.apache.lucene.index.IndexReader.document(IndexReader.java:436)
        at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.merge(CompressingStoredFieldsWriter.java:345)
        at org.apache.lucene.index.SegmentMerger.mergeFields(SegmentMerger.java:316)
        at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:94)
        at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2555)
        at org.apache.solr.update.DirectUpdateHandler2.mergeIndexes(DirectUpdateHandler2.java:449)
        at org.apache.solr.update.processor.RunUpdateProcessor.processMergeIndexes(RunUpdateProcessorFactory.java:88)
        at org.apache.solr.update.processor.UpdateRequestProcessor.processMergeIndexes(UpdateRequestProcessor.java:59)
        at org.apache.solr.update.processor.LogUpdateProcessor.processMergeIndexes(LogUpdateProcessorFactory.java:149)
        at org.apache.solr.handler.admin.CoreAdminHandler.handleMergeAction(CoreAdminHandler.java:384)
        at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:188)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
        at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:662)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:248)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:197)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)

Regards, Haiying
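Putting François's flag together with a larger heap, the start command would look roughly like this (a sketch; the heap values are illustrative, not recommendations, and note that disabling the overhead limit only delays the OOM if the heap is genuinely too small for the merge):

```shell
java -Xms512M -Xmx6g -XX:-UseGCOverheadLimit -jar start.jar
```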
Re: Fetching document by comparing date to today date
You haven't told us under what criteria you would exclude the March document. Do you want only docs that are in the current month? If so, date:[NOW/MONTH TO NOW/MONTH+1MONTH] should do it. Best, Erick On Sat, Apr 12, 2014 at 4:08 PM, Furkan KAMACI furkankam...@gmail.com wrote: Hi Darniz; Why don't you filter your documents that has a date until TODAY and then sort them by date and finally get only 1 document with rows=1 ? Thanks; Furkan KAMACI 2014-04-13 0:08 GMT+03:00 Darniz rnizamud...@edmunds.com: Thanks for the quick answer i was able to solve my problem with the addition of two new fields and if todays date is april14 my query is (liveStartDate:[* TO 2014-04-14T00:00:00Z] AND liveEndDate:[2014-04-14T00:00:00Z TO *]) and its fetches me the correct document guess my initial question does solr provide out of the box functionality if i have the below three documents set, how can i get only doc2 if i assume todays date is 14-APRIL-2014 for simplicity i gave the liveDate as dates beginning at the start of each month but in real life these dates can be anything. doc1 liveDate=1-MAR-2014 doc2 liveDate=1-APR-2014 doc3 liveDate=1-MAY-2014 -- View this message in context: http://lucene.472066.n3.nabble.com/Fetching-document-by-comparing-date-to-today-date-tp4130802p4130807.html Sent from the Solr - User mailing list archive at Nabble.com.
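For reference, Erick's NOW/MONTH rounding can be mirrored client-side. A sketch of what those date-math expressions evaluate to (the helper is hypothetical; Solr computes this server-side):

```python
from datetime import datetime

# Sketch: what date:[NOW/MONTH TO NOW/MONTH+1MONTH] rounds to.
def month_window(now):
    start = now.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    if start.month == 12:                        # roll over into January
        end = start.replace(year=start.year + 1, month=1)
    else:
        end = start.replace(month=start.month + 1)
    return start, end

s, e = month_window(datetime(2014, 4, 14, 9, 30))
print("date:[%sZ TO %sZ]" % (s.isoformat(), e.isoformat()))
# date:[2014-04-01T00:00:00Z TO 2014-05-01T00:00:00Z]
```

Using the date-math form directly in the query keeps it cacheable per month, whereas baking in a literal timestamp changes the query every request.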
Re: svn vs GIT
Thanks Furkan...its aman not amon hahah...:D With Regards Aman Tandon On Sun, Apr 13, 2014 at 4:26 AM, Furkan KAMACI furkankam...@gmail.comwrote: Hi Amon; There has been a conversation about it at dev list: http://search-lucene.com/m/PrTmPXyDlv/The+Old+Git+Discussionsubj=Re+The+Old+Git+Discussion On the other hand you do not need to know SVN to use, develop and contribute to Apache Solr project. You can follow the project at GitHub: https://github.com/apache/lucene-solr Thanks; Furkan KAMACI 2014-04-11 5:12 GMT+03:00 Aman Tandon amantandon...@gmail.com: thanks sir, in that case i need to know about svn as well. Thanks Aman Tandon On Fri, Apr 11, 2014 at 7:26 AM, Alexandre Rafalovitch arafa...@gmail.comwrote: You can find the read-only Git's version of Lucene+Solr source code here: https://github.com/apache/lucene-solr . The SVN preference is Apache Foundation's choice and legacy. Most of the developers' workflows are also around SVN. Regards, Alex. Personal website: http://www.outerthoughts.com/ Current project: http://www.solr-start.com/ - Accelerating your Solr proficiency On Fri, Apr 11, 2014 at 7:48 AM, Aman Tandon amantandon...@gmail.com wrote: Hi, I am new here, i have question in mind that why we are preferring the svn more than git? -- With Regards Aman Tandon
Re: stucked with log4j configuration
Well I hope log4j2 is something Solr supports when GA Bill Bell Sent from mobile On Apr 12, 2014, at 7:26 AM, Aman Tandon amantandon...@gmail.com wrote: I have upgraded my solr4.2 to solr 4.7.1 but in my logs there is an error for log4j log4j: Could not find resource Please find the attachment of the screenshot of the error console https://drive.google.com/file/d/0B5GzwVkR3aDzdjE1b2tXazdxcGs/edit?usp=sharing -- With Regards Aman Tandon
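For what it's worth, the "Could not find resource" message usually means log4j.properties is not on the classpath the webapp actually uses. A minimal sketch, adapted from the log4j.properties that the Solr 4.x example ships with (the file location and log path are assumptions — adjust for your Tomcat layout):

```properties
# Minimal log4j configuration for Solr under Tomcat (sketch).
# Place on the webapp classpath, e.g. WEB-INF/classes/log4j.properties.
log4j.rootLogger=INFO, file, CONSOLE

log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n

log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=logs/solr.log
log4j.appender.file.MaxFileSize=4MB
log4j.appender.file.MaxBackupIndex=9
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%-5p - %d{yyyy-MM-dd HH:mm:ss.SSS}; %C; %m%n

log4j.logger.org.apache.zookeeper=WARN
```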
Re: Apache Solr SpellChecker Integration with the default select request handler
Furkan, I am not sure how this could be a security concern; what I am actually asking for is an approach to integrate the spellchecker search component within the default request handler. Thanks. On Sat, Apr 12, 2014 at 5:38 PM, Furkan KAMACI furkankam...@gmail.com wrote: Hi; I do not want to change the direction of your question, but it is really good, secure and flexible to do such kind of things at your client (a Java client or not). On the other hand, if you let people access your Solr instance directly, it causes some security issues. Thanks; Furkan KAMACI 2014-04-12 19:26 GMT+03:00 S.L simpleliving...@gmail.com: Yes, I use SolrJ, but only to index the data; the querying of the data happens using the default select query handler from a non-Java client. On Sat, Apr 12, 2014 at 12:12 PM, Furkan KAMACI furkankam...@gmail.com wrote: Hi; Do you use SolrJ in your application? Why did you not consider solving this with SolrJ? Thanks; Furkan KAMACI 2014-04-12 18:34 GMT+03:00 S.L simpleliving...@gmail.com: Hello fellow Solr users, I am using the default select request handler to search a Solr core; I also use the eDismax query parser. 1. I want to integrate this with the spellchecker search component so that when a search request comes in, the spellchecker component also gets called and I get a suggestion back with the search results. 2. If the suggestion is above a certain threshold then I want the search to be made on that suggestion; otherwise the suggestion should come back along with the search results for the original search term. To accomplish this it seems I need to extend the SearchHandler.java class to call the spellchecker internally and then make a search call if the suggestion from the spellchecker is above a certain threshold. I would really appreciate it if there are any examples of calling the SpellChecker component via the API in Solr that someone could share with me, and also if you could validate my approach. Thank You.
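For item 1, the spellcheck component can be attached to the stock handler in solrconfig.xml; a sketch (the dictionary name and the exact handler definition are assumptions — merge this with your existing /select config and spellcheck searchComponent):

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <!-- run the spellchecker on every request -->
    <str name="spellcheck">true</str>
    <str name="spellcheck.dictionary">default</str>
    <str name="spellcheck.count">5</str>
    <str name="spellcheck.collate">true</str>
  </lst>
  <!-- invoke the spellcheck searchComponent after the query component -->
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```

Item 2 — automatically re-running the search when a suggestion scores above a threshold — is not something the stock SearchHandler does; the usual pattern is to return the spellcheck.collate suggestion alongside the results and let the client decide whether to re-query, which avoids forking SearchHandler.java.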
Exception while unmarshalling response in SolrJ
Hi, I am using the SolrJ client to send requests to Solr. But instead of calling Solr directly, SolrJ communicates with my proxy server, which in turn calls Solr, gets the response in javabin format, and returns the response to the client in the same format. The proxy server is written using the Play framework and just forwards the request to Solr and returns the HTTP response to the client. Below is the exception I get in the SolrJ client library when it tries to unmarshal the javabin response. I'm using SolrJ 4.7.0. How can I fix this?

Exception stack trace:

Exception in thread "main" org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:477)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:199)
        at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:90)
        at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
        at com.br.solr.Main.main(Main.java:20)
Caused by: java.lang.NullPointerException
        at org.apache.solr.common.util.JavaBinCodec.readExternString(JavaBinCodec.java:769)
        at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:192)
        at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:116)
        at org.apache.solr.client.solrj.impl.BinaryResponseParser.processResponse(BinaryResponseParser.java:43)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:475)
        ... 4 more
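A common cause when a proxy sits in the middle is that the binary javabin body gets treated as text somewhere in the pipeline (decoded with a charset, templated, or re-encoded), which corrupts the bytes the client then unmarshals. A small illustration of that failure class (pure Python, not Solr code; the byte values are arbitrary):

```python
# javabin is a binary format; if a proxy decodes the response body as text
# and re-encodes it, the bytes are no longer what Solr sent.
javabin_like = bytes([0x02, 0xA0, 0x8E, 0xE0, 0x01])  # arbitrary non-UTF-8 bytes

# A faithful proxy passes raw bytes through untouched:
passthrough = bytes(javabin_like)
assert passthrough == javabin_like

# A lossy proxy that round-trips through a text decode corrupts the payload:
corrupted = javabin_like.decode("utf-8", errors="replace").encode("utf-8")
assert corrupted != javabin_like  # invalid sequences became U+FFFD
```

So the first thing to verify is that the Play action streams the upstream body as raw bytes with the original Content-Type, rather than going through any string conversion.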