Solr 1.4 Enterprise Search Server book examples
Hello, We've recently acquired the Solr 1.4 Enterprise Search Server book. I tried to download the example ZIP file from the publisher's website, but the file is corrupted and I cannot unzip it :( Could someone tell me if I can get these examples from another location? I sent a message to the publisher last week reporting the issue, but it is not yet fixed, and I'd really like to take a look at the example code and run some tests. Regards, -- Johan Cwiklinski
resolutions and chapters
Hi,

I am currently putting together a search for a DB where I have resolutions along with their metadata, as well as chapters with their text and metadata. Most of the searching will actually be done on the metadata. The plan at the moment is to support 2 search modes: (a) one where the results will be resolutions and (b) another where the results will be chapters.

(a) Here I will search both the document and chapter data, but the actual result entities I want are resolutions. In terms of ranking I obviously want resolutions with more relevant chapters to rank higher, so I sort of need to group the hits on the chapters when computing the score. For good measure I might also want to show the number of chapters that matched, potentially even with links to these chapters, so I would also need the ids of the matching chapters.

(b) Here I will just search across the chapters and rank each on its own. Seems straightforward.

Now how should I best structure my index for this?

number of cores: I guess I will have two cores, one for documents and one for chapters? Then again there is some minor overlap in fields between the two, and there is no real overhead in having unused fields, so I could just as well use one core.

grouping: how do I best group the scores for the (a) type search? Should I just do two searches and combine the results? Then again that will make paging tricky.

regards, Lukas Kahwe Smith m...@pooteeweet.org
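[Editor's note: for the single-core variant, a sketch of what a shared schema could look like. Field names here are assumptions, not from the thread: both entity types live in one index, a doc_type discriminator keeps the two search modes apart, and each chapter carries its parent's id so chapter hits can be rolled up per resolution on the client side.]

```xml
<!-- hypothetical schema.xml fragment for a single shared core -->
<fields>
  <field name="id"            type="string" indexed="true" stored="true" required="true"/>
  <!-- "resolution" or "chapter" -->
  <field name="doc_type"      type="string" indexed="true" stored="true"/>
  <!-- for chapters: the id of the parent resolution, used to group hits -->
  <field name="resolution_id" type="string" indexed="true" stored="true"/>
  <field name="title"         type="text"   indexed="true" stored="true"/>
  <field name="text"          type="text"   indexed="true" stored="true"/>
</fields>
```

Mode (b) is then a plain query filtered on doc_type:chapter; for mode (a) the client would query the chapters, bucket the hits by resolution_id and combine scores -- which still leaves the paging problem the mail mentions.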
Re: Howto build a function query using the 'query' function
Villemos, Gert wrote: "If the 'query' returned a count, yes. But my problem is exactly that, as far as I can see from the description of the 'query' function, it does NOT return the count but the score of the search." I assumed that myQueryReturningACountOfHowOftenThisDocumentIsReferenced is an int field that holds the reference count. If it is, just using the linear function I mentioned in my previous mail should solve your problem. If it is not, what is it? Koji -- http://www.rondhuit.com/en/
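[Editor's note: assuming the reference count really is stored in an int field -- call it ref_count; the name is an assumption -- Koji's linear suggestion could look like this as a dismax boost function, where linear(x,m,c) computes m*x+c:]

```text
bf=linear(ref_count,2,0)

or as a standalone function query:

q={!func}linear(ref_count,2,0)
```

The multiplier 2 and offset 0 are placeholders to be tuned against the rest of the score.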
Re: hybrid approach to using cloud servers for Solr/Lucene
Not sure if it's a good argument, but cost is the reason. By keeping the base implementation in a permanent, local instance, cost is supposed to be lower. Dennis Gearon Signature Warning EARTH has a Right To Life, otherwise we all die. Read 'Hot, Flat, and Crowded' Laugh at http://www.yert.com/film.php --- On Sun, 4/25/10, Otis Gospodnetic otis_gospodne...@yahoo.com wrote: From: Otis Gospodnetic otis_gospodne...@yahoo.com Subject: Re: hybrid approach to using cloud servers for Solr/Lucene To: solr-user@lucene.apache.org Date: Sunday, April 25, 2010, 9:30 PM Hi, Hm. Everything is doable, but this sounds a bit undefined and possibly messy. If flexibility is of such importance, why have the local part at all? Why not have everything in an elastic cloud environment? Otis Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch Lucene ecosystem search :: http://search-lucene.com/ - Original Message From: Dennis Gearon gear...@sbcglobal.net To: solr-user@lucene.apache.org Sent: Sun, April 25, 2010 10:17:11 PM Subject: hybrid approach to using cloud servers for Solr/Lucene I'm working on an app that could grow much faster and bigger than I could scale local resources for, at least on certain dates and for other reasons. So I'd like to run a local machine at a dedicated host, or even a virtual machine at a host. If the load goes up, then queries are sent to the cloud at a certain point. Is this practical? Does anyone have experience with this? This is obviously a search engine app based on Solr/Lucene, if anyone is wondering. Dennis Gearon Signature Warning EARTH has a Right To Life, otherwise we all die. Read 'Hot, Flat, and Crowded' Laugh at http://www.yert.com/film.php
Re: Howto build a function query using the 'query' function
I *think* that he means that myQueryReturningACountOfHowOftenThisDocumentIsReferenced is something like facet.count (or so). If not, I am sorry. However, it interests me very much how to get the facet count of a document in a query function. For example: if a document has catField:finance, how many documents that match query x also have a facet on catField:finance? Is it possible to retrieve that count and apply the count that belongs to the document (in this case the count for finance) within a query function? Maybe we can discuss this later, once Villemos's question is answered? Best Regards - Mitch -- View this message in context: http://lucene.472066.n3.nabble.com/Howto-build-a-function-query-using-the-query-function-tp729407p756421.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: json.nl=arrarr does not work with facet.date
Thanks Hoss, it would be a great feature if an array could be returned; parsing would be much easier. I am also encountering a similar issue with the highlighting results. Is there an option as below?

json.hl=arrarr

"highlighting": [
  {
    "name": ["Apple 60 GB iPod with <em>Video</em> Playback Black"],
    "features": ["Stores up to 15,000 songs, 25,000 photos, or 150 hours of <em>video</em>"]
  },
  {
    "features": ["Dual DVI connectors, HDTV out, <em>video</em> input"]
  }
]

Instead of the current --

"highlighting": {
  "MA147LL/A": {
    "name": ["Apple 60 GB iPod with <em>Video</em> Playback Black"],
    "features": ["Stores up to 15,000 songs, 25,000 photos, or 150 hours of <em>video</em>"]
  },
  "EN7800GTX/2DHTV/256M": {
    "features": ["Dual DVI connectors, HDTV out, <em>video</em> input"]
  },
  "100-435805": {
    "name": ["ATI Radeon X1900 XTX 512 MB PCIE <em>Video</em> Card"]
  }
}

Cheers, Will -- View this message in context: http://lucene.472066.n3.nabble.com/json-nl-arrarr-does-not-work-with-facet-date-tp708730p756450.html Sent from the Solr - User mailing list archive at Nabble.com.
RE: Problem with pdf, upgrading Cell
Okay, I've been digging a little bit through the Java code from the SVN, and it seems the load function inside the ExtractingDocumentLoader class does not receive the ContentStream (it is set to null...). Maybe I should send this to the developer mailing list? Marc From: dekay...@hotmail.com To: solr-user@lucene.apache.org Subject: RE: Problem with pdf, upgrading Cell Date: Fri, 23 Apr 2010 16:03:28 +0200 Seems like I'm not the only one with this no-extraction problem: http://www.mail-archive.com/solr-user@lucene.apache.org/msg33609.html Apparently he tried the same thing, building from the trunk and indexing a PDF, and no extraction occurred... Strange. Marc G.
Re: Solr 1.4 Enterprise Search Server book examples
I have sent you a private mail. Markus -Original Message- From: Johan Cwiklinski [mailto:johan.cwiklin...@ajlsm.com] Sent: Monday, 26 April 2010 10:58 To: solr-user@lucene.apache.org Subject: Solr 1.4 Enterprise Search Server book examples Hello, We've recently acquired the Solr 1.4 Enterprise Search Server book. I tried to download the example ZIP file from the publisher's website, but the file is corrupted and I cannot unzip it :( Could someone tell me if I can get these examples from another location? I sent a message to the publisher last week reporting the issue, but it is not yet fixed, and I'd really like to take a look at the example code and run some tests. Regards, -- Johan Cwiklinski
solrDynamicMbeans access
Hello, I need to access the Solr MBeans displayed in JConsole so I can read Solr's attributes from Java code:

JMXServiceURL address = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://host:port/jmxrmi");
JMXConnector conn = JMXConnectorFactory.connect(address);
MBeanServerConnection mbs = conn.getMBeanServerConnection();

Now how do I create a Solr MBean object to check its attributes? -- View this message in context: http://lucene.472066.n3.nabble.com/solrDynamicMbeans-access-tp756593p756593.html Sent from the Solr - User mailing list archive at Nabble.com.
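[Editor's note: as a sketch of an answer -- Solr's MBeans are plain DynamicMBeans, so there is no Solr class to instantiate on the client. You query the connection by ObjectName pattern and read attributes generically. The pattern below assumes Solr 1.4's default registration under a domain starting with "solr" (enabled via <jmx/> in solrconfig.xml); the example falls back to the local platform MBeanServer so it runs standalone, where against a remote Solr JVM you would pass the MBeanServerConnection obtained from the JMXConnector above.]

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanInfo;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class SolrMBeanReader {

    // Find MBeans whose domain starts with "solr" (Solr registers its
    // dynamic MBeans under "solr" or "solr/<corename>" when <jmx/> is
    // enabled in solrconfig.xml).
    public static Set<ObjectName> findSolrMBeans(MBeanServerConnection mbs) throws Exception {
        ObjectName pattern = new ObjectName("solr*:*");
        return mbs.queryNames(pattern, null);
    }

    public static void main(String[] args) throws Exception {
        // With a remote Solr JVM, replace this with the connection from
        // JMXConnectorFactory.connect(new JMXServiceURL(...)).
        MBeanServerConnection mbs = ManagementFactory.getPlatformMBeanServer();
        for (ObjectName name : findSolrMBeans(mbs)) {
            // MBeanInfo describes the attributes each dynamic MBean exposes
            MBeanInfo info = mbs.getMBeanInfo(name);
            for (MBeanAttributeInfo attr : info.getAttributes()) {
                System.out.println(name + " / " + attr.getName()
                        + " = " + mbs.getAttribute(name, attr.getName()));
            }
        }
    }
}
```

Run against a plain JVM this prints nothing (no Solr MBeans are registered); pointed at a Solr JVM it dumps every registered Solr MBean attribute.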
Multiple DataSources- 2 tables - 2 db's- Where ... ?!
Hello... I got a new problem. We put the items table into another database and now I need to use multiple datasources, but without success =( So.. here is my data-config.xml in short ;)

<dataSource name="shops" type="JdbcDataSource" ... />
<entity name="active" pk="id" dataSource="shops" query="select id FROM shops WHERE is_active=1" />
<dataSource name="items" type="JdbcDataSource" ... />
<entity name="item" pk="id" dataSource="items"
        processor="org.apache.solr.handler.dataimport.CachedSqlEntityProcessor"
        query="select i.id, i.shop_id, i.name from shop_items as i WHERE i.shop_id='${shops.active.id}'" />

some delta imports ... and category mappings. So, I want to index only those items where is_active=1 in the shops table. How can I perform this? thhhxxx =) -- View this message in context: http://lucene.472066.n3.nabble.com/Multiple-DataSources-2-tables-2-db-s-Where-tp756683p756683.html Sent from the Solr - User mailing list archive at Nabble.com.
indexer threading?
Hi, I was wondering how the multi-threading of the indexer works. I am using SolrJ to stream documents to a server. As I add more threads on the client side, I slowly see both speed and CPU usage go up on the indexer side. Once I hit about 4 threads, my indexer is at 100% CPU usage (of 1 CPU on a 4-way box) and will not do any more work. It is pretty fast, doing something like 75k lines of text per second... but I would really like to use all 4 CPUs on the indexer. Is this just a limitation of Solr, or is it a limitation of using SolrJ and document streaming? Thanks, Brian
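[Editor's note: the client-side fan-out Brian describes can be sketched like this; sendBatch is a hypothetical stand-in for a SolrJ server.add(...) call over HTTP, so the example runs without a Solr server. Whether the server then uses more than one CPU depends on how Solr parallelizes the update handler, which is exactly what the mail is asking.]

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelIndexer {

    private static final AtomicInteger sent = new AtomicInteger();

    // Stand-in for one HTTP add request (e.g. a SolrJ server.add(doc));
    // here it only counts so the sketch is self-contained.
    static void sendBatch(int batchId) {
        sent.incrementAndGet();
    }

    // Fan nBatches submissions out over nThreads client threads, wait for
    // completion, and return how many batches were sent.
    public static int indexWithThreads(int nThreads, int nBatches) throws InterruptedException {
        sent.set(0);
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        for (int i = 0; i < nBatches; i++) {
            final int id = i;
            pool.submit(() -> sendBatch(id));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return sent.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("batches sent: " + indexWithThreads(4, 100));
    }
}
```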
Re: Solr 1.4 Enterprise Search Server book examples
Hi, I'm also interested in getting those examples; would someone share them? On 4/26/10, markus.rietz...@rzf.fin-nrw.de markus.rietz...@rzf.fin-nrw.de wrote: I have sent you a private mail. Markus -Original Message- From: Johan Cwiklinski [mailto:johan.cwiklin...@ajlsm.com] Sent: Monday, 26 April 2010 10:58 To: solr-user@lucene.apache.org Subject: Solr 1.4 Enterprise Search Server book examples Hello, We've recently acquired the Solr 1.4 Enterprise Search Server book. I tried to download the example ZIP file from the publisher's website, but the file is corrupted and I cannot unzip it :( Could someone tell me if I can get these examples from another location? I sent a message to the publisher last week reporting the issue, but it is not yet fixed, and I'd really like to take a look at the example code and run some tests. Regards, -- Johan Cwiklinski -- Abdelhamid ABID Software Engineer - J2EE / WEB
Re: Multiple DataSources- 2 tables - 2 db's- Where ... ?!
I got a new problem. We put the items table into another database and now I need to use multiple datasources, but without success =( So.. here is my data-config.xml in short ;)

<dataSource name="shops" type="JdbcDataSource" ... />
<entity name="active" pk="id" dataSource="shops" query="select id FROM shops WHERE is_active=1" />
<dataSource name="items" type="JdbcDataSource" ... />
<entity name="item" pk="id" dataSource="items"
        processor="org.apache.solr.handler.dataimport.CachedSqlEntityProcessor"
        query="select i.id, i.shop_id, i.name from shop_items as i WHERE i.shop_id='${shops.active.id}'" />

some delta imports ... and category mappings. So, I want to index only those items where is_active=1 in the shops table. How can I perform this? thhhxxx =)

Datasource definitions should be at the beginning, right after <dataConfig>, and '${shops.active.id}' should be '${active.id}':

<dataSource name="shops" type="JdbcDataSource" ... />
<dataSource name="items" type="JdbcDataSource" ... />
<entity name="active" pk="id" dataSource="shops" query="select id FROM shops WHERE is_active=1" />
<entity name="item" pk="id" dataSource="items"
        processor="org.apache.solr.handler.dataimport.CachedSqlEntityProcessor"
        query="select i.id, i.shop_id, i.name from shop_items as i WHERE i.shop_id='${active.id}'" />

some delta imports ... and category mappings.
Re: Best way to prevent this search lockup (apparently caused during big segment merges)?
Otis, The index is currently 236GB. I don't know which particular segments were being merged when I reported this problem, but my largest segment now (_8nm) is taking up 133GB, and the largest single file in the index is _8nm.prx, 71GB. I'm using a custom C# indexing client, so no Solrj/StreamingUpdateSolrServer. I'm submitting only one document per HTTP POST, and posts are to ExtractingRequestHandler (aka Solr Cell). The indexing client is multi-threaded. (Multi-threading may help hit the 200-thread limit sooner, but it seems like reducing the # of threads would just postpone hitting that limit, rather than eliminate the problem.) I also have Solr set to auto-commit every 30 minutes or so, to try to keep the index adequately live. On Fri, Apr 23, 2010 at 1:38 PM, Otis Gospodnetic otis_gospodne...@yahoo.com wrote: Chris, It looks like Mike already offered several solutions though I don't know what Solr does without looking at the code. But I'm curious: * how big is your index? and do you know how large the segments being merged are? * do you batch docs or do you make use of Streaming SolrServer? I'm curious, because I've never encountered this problem before... Thanks, Otis Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch Lucene ecosystem search :: http://search-lucene.com/ - Original Message From: Chris Harris rygu...@gmail.com To: solr-user@lucene.apache.org Sent: Thu, April 22, 2010 6:28:29 PM Subject: Best way to prevent this search lockup (apparently caused during big segment merges)? I'm running Solr 1.4+ under Tomcat 6, with indexing and searching requests simultaneously hitting the same Solr machine. Sometimes Solr, Tomcat, and my (C#) indexing process conspire to render search inoperable. So far I've only noticed this while big segment merges (i.e. merges that take multiple minutes) are taking place. Let me explain the situation as best as I understand it. 
My indexer has a main loop that looks roughly like this:

while true:
    try:
        submit a new add or delete request to Solr via HTTP
    catch timeoutException:
        sleep a few seconds

When things are going wrong (i.e., when a large segment merge is happening), this loop is problematic:

* When the indexer's request hits Solr, the corresponding thread in Tomcat blocks. (It looks to me like the thread is destined to block until the entire merge is complete. I'll paste in what the Java stack traces look like at the end of the message if they can help diagnose things.)
* Because the Solr thread stays blocked for so long, eventually the indexer hits a timeoutException. (That is, it gives up on Solr.)
* Hitting the timeout exception doesn't cause the corresponding Tomcat thread to die or unblock. Therefore, each time through the loop, another Solr-handling thread inside Tomcat enters a blocked state.
* Eventually so many threads (maxThreads, whose Tomcat default is 200) are blocked that Tomcat starts rejecting all new Solr HTTP requests -- including those coming in from the web tier.
* Users are unable to search.

The problem might self-correct once the merge is complete, but that could be quite a while. What are my options for changing Solr settings or changing my indexing process to avoid this lockup scenario? Do you agree that the segment merge is helping cause the lockup? Do adds and deletes really need to block on segment merges?
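[Editor's note: one client-side mitigation, offered as a sketch rather than something from the thread: cap the number of retries and back off exponentially, so that during a long merge the indexer stops feeding Tomcat one new soon-to-block thread every few seconds. The Submitter interface is a hypothetical stand-in for the HTTP add/delete call.]

```java
import java.util.concurrent.TimeUnit;

public class RetryLoop {

    // Stand-in for the indexer's HTTP add/delete request.
    public interface Submitter { void submit() throws Exception; }

    // Retry with exponential backoff and a retry cap, instead of a fixed
    // short sleep: with a fixed sleep, each failed attempt leaves another
    // blocked request thread behind on the server during a long merge.
    public static int submitWithBackoff(Submitter s, int maxRetries, long baseDelayMs)
            throws Exception {
        int attempt = 0;
        while (true) {
            try {
                s.submit();
                return attempt; // number of retries that were needed
            } catch (Exception timeout) {
                attempt++;
                if (attempt > maxRetries) throw timeout; // give up, surface the error
                // delay doubles on every retry: base, 2*base, 4*base, ...
                TimeUnit.MILLISECONDS.sleep(baseDelayMs << (attempt - 1));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        final int[] failures = {2}; // simulate two timeouts before success
        int retries = submitWithBackoff(() -> {
            if (failures[0]-- > 0) throw new Exception("timeout");
        }, 5, 10);
        System.out.println("succeeded after " + retries + " retries");
    }
}
```

This only throttles the client; it does not unblock the server, but it keeps the indexer from exhausting Tomcat's maxThreads on its own.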
Re: Solr does not honor facet.mincount and field.facet.mincount
Hi Koji, Thanks, f.<field name>.facet.mincount works. Thanks to you as well, Chris. I was expecting that a facet field with no values would be omitted entirely, but that is not the case. For example: if I want to display facets on fields A, B, C and D, but a field, say C, does not have any data, then C should be excluded from the Solr response:

<lst name="facet_counts">
  <lst name="facet_queries"/>
  <lst name="facet_fields">
    <lst name="A"><int name="Alpha">2</int></lst>
    <lst name="B"><int name="Beta">20</int></lst>
    <lst name="D"><int name="Gamma">12</int></lst>
  </lst>
  <lst name="facet_dates"/>
</lst>

This way I need not do any hiding in the view-related code, because displaying a facet with no values does not make sense. But it looks like I need to handle empty facets in code, as Solr returns the empty facets as well in the response, as given below:

<lst name="facet_counts">
  <lst name="facet_queries"/>
  <lst name="facet_fields">
    <lst name="A"><int name="Alpha">2</int></lst>
    <lst name="B"><int name="Beta">20</int></lst>
    <lst name="C"/>
    <lst name="D"><int name="Gamma">12</int></lst>
  </lst>
  <lst name="facet_dates"/>
</lst>

~Umesh -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-does-not-honor-facet-mincount-and-field-facet-mincount-tp746499p757206.html Sent from the Solr - User mailing list archive at Nabble.com.
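[Editor's note: for reference, the request parameters the thread settles on could look like this, using the example field names above:]

```text
facet=true&facet.field=A&facet.field=B&facet.field=C&facet.field=D&facet.mincount=1

or, to override for a single field only:

f.C.facet.mincount=1
```

Note that facet.mincount suppresses only the zero-count *values* inside each field; the empty <lst name="C"/> wrapper for a data-less field still comes back, which is why the client-side filtering above is needed.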
solr.xml absolute path schema and config files
Does this work? When trying to display a file with a URL such as solr/sandbox/admin/file/?file=/mnt/solr/schema.xml from the Solr admin console, the following error occurs:

type: Status report
message: Can not find: schema.xml [/mnt/solr/sandbox/conf/mnt/solr/schema.xml]
description: The request sent by the client was syntactically incorrect (Can not find: schema.xml [/mnt/solr/sandbox/conf/mnt/solr/schema.xml]).
Re: Request Solr schema field definition vs dynamic creation performance help questions
: How does defining fields that may not be used affect an index? not much. there is a trivial amount of overhead in the IndexSchema object that contributes to field lookups when indexing, and obviously things like the schema browser are affected, but unused fields shouldn't influence search performance at all. honestly: i doubt you would be able to measure any distinction even if you declared 10,000 unused fields in your schema.xml : Do copyFields effect performance of an index? they make it bigger, and they add to indexing time. -Hoss
Re: luke responses of solr
: but does any one know how to get the same programatically.??? : I have used the piece of code below: I don't really understand your code .. a LukeResponse object is just a simple container for modeling the response from the LukeRequestHandler -- you have to actually execute a LukeRequest to get a valid LukeResponse back. -Hoss
Re: Using QueryElevationComponent without specifying top results?
On Apr 26, 2010, at 7:53 AM, Oliver Beattie wrote: Hi all, I'm currently writing an application that uses Solr, and we'd like to use something like the QueryElevationComponent without having to specify which results appear at the top. For example, what we really need is a way to say "for this search, include these results as part of the result set, and rank them as you normally would". We're using a filter to specify which results we want included (which is distance-based), but we really want to be able to explicitly include certain results in certain queries (i.e. we want to include a listing more than 5 miles away from a particular location for certain queries). Is this possible? Any help would be really appreciated :) I'm not following the "rank them as you normally would" part. If Solr were already finding them, then they would already be ranked and showing up in the results and you wouldn't need to hardcode them, right? So, that leaves a couple of cases: 1. Including results that don't match 2. Elevating results that do match In your case, it sounds like you mostly just want #1. And, based on the context (distance search), perhaps you want those results sorted by distance? Otherwise, how else would you know where to inject the results? The QueryElevationComponent can include the results, although, I must admit, I'm not 100% certain on what happens to injected results given sorting. -- Grant Ingersoll http://www.lucidimagination.com/ Search the Lucene ecosystem using Solr/Lucene: http://www.lucidimagination.com/search
Re: solr.xml absolute path schema and config files
: Does this work? When trying to display with a URL such as : solr/sandbox/admin/file/?file=/mnt/solr/schema.xml from the Solr : admin console, the following error occurs: : : type Status report : : message Can not find: schema.xml [/mnt/solr/sandbox/conf/mnt/solr/schema.xml] As written the ShowFileRequestHandler will only allow you to access files in the conf dir for the current SolrCore -- even if you specify an absolute path, it treats it as being relative to the conf dir (hence the path returned by the error message) it's a pseudo safety feature to prevent requests like /admin/file/?file=/etc/passwd -Hoss
Re: Using QueryElevationComponent without specifying top results?
Hi Grant, Thanks for getting back to me. Yes, indeed, #1 is exactly what I'm looking for. Results are already ranked by distance (among other things), but we need the ability to manually include a certain result in the set. They wouldn't usually match, because they fall outside the radius of the filter query we use. Most of the resulting score comes from function queries (we have a number of metrics that rank listings [price, feedback score, etc]), so the score from the text search doesn't have *that much* bearing on the outcome. So, yeah, basically, I'm looking for a way to include results that don't match, but have Solr calculate its score as it would if it did match the filter query. Sorry for being so unclear and rambling a bit, I'm struggling to articulate what we want in a clear manner! —Oliver On 26 April 2010 19:13, Grant Ingersoll gsing...@apache.org wrote: On Apr 26, 2010, at 7:53 AM, Oliver Beattie wrote: Hi all, I'm currently writing an application that uses Solr, and we'd like to use something like the QueryElevationComponent, without having to specify which results appear top. For example, what we really need is a way to say for this search, include these results as part of the result set, and rank them as you normally would. We're using a filter to specify which results we want included (which is distance-based), but we really want to be able to explicitly include certain results in certain queries (i.e. we want to include a listing more than 5 miles away from a particular location for certain queries). Is this possible? Any help would be really appreciated :) I'm not following the rank them as you normally would part. If Solr were already finding them, then they would already be ranked and showing up in the results and you wouldn't need to hardcode them, right? So, that leaves a couple of cases: 1. Including results that don't match 2. Elevating results that do match In your case, it sounds like you mostly just want #1. 
And, based on the context (distance search) perhaps you want those results sorted by distance? Otherwise, how else would you know where to inject the results? The QueryElevationComponent can include the results, although, I must admit, I'm not 100% certain on what happens to injected results given sorting. -- Grant Ingersoll http://www.lucidimagination.com/ Search the Lucene ecosystem using Solr/Lucene: http://www.lucidimagination.com/search
Re: Boost function on *:*
: Is it possible to use boost function across the whole index/empty search : term? function queries by definition match all docs -- so just query for the function you want and you'll get all documents scored according to the function. based on your wording, it sounds like you are using dismax, and want this function as your alt query... /select?defType=dismax&qf=...&q.alt={!func}yourFunc(...) -Hoss
Re: solr.xml absolute path schema and config files
Maybe we can add an error message to ShowFileRequestHandler that explains to the user that displaying an absolute path doesn't work? On Mon, Apr 26, 2010 at 1:13 PM, Chris Hostetter hossman_luc...@fucit.org wrote: : Does this work? When trying to display with a URL such as : solr/sandbox/admin/file/?file=/mnt/solr/schema.xml from the Solr : admin console, the following error occurs: : : type Status report : : message Can not find: schema.xml [/mnt/solr/sandbox/conf/mnt/solr/schema.xml] As written the ShowFileRequestHandler will only allow you to access files in the conf dir for the current SolrCore -- even if you specify an absolute path, it treats it as being relative to the conf dir (hence the path returned by the error message) it's a pseudo safety feature to prevent requests like /admin/file/?file=/etc/passwd -Hoss
Re: solr.xml absolute path schema and config files
: Maybe we can add an error message to ShowFileRequestHandler that : explains to the user that displaying an absolute path doesn't work? yeah ... that would certainly be better -- i think it already does that if you try ../../foo but for absolute paths there is no explicit test -- just the failure when it attempts to resolve that path relative to the conf dir. : : On Mon, Apr 26, 2010 at 1:13 PM, Chris Hostetter : hossman_luc...@fucit.org wrote: : : : Does this work? When trying to display with a URL such as : : solr/sandbox/admin/file/?file=/mnt/solr/schema.xml from the Solr : : admin console, the following error occurs: : : : : type Status report : : : : message Can not find: schema.xml [/mnt/solr/sandbox/conf/mnt/solr/schema.xml] : : As written the ShowFileRequestHandler will only allow you to access files : in the conf dir for the current SolrCore -- even if you specify an : absolute path, it treats it as being relative to the conf dir (hence the : path returned by the error message) : : it's a pseudo safety feature to prevent requests like : : /admin/file/?file=/etc/passwd : : : -Hoss : : : -Hoss
Re: Solr 1.4 Enterprise Search Server book examples
Hello, On 26/04/2010 20:53, findbestopensource wrote: I am able to successfully download the code. It is 360 MB and took a lot of time to download. I'm also able to download the file, but not to extract many of the files it contains after download (I can list them but not extract them; an error occurs). Are you able to extract the ZIP archive you've downloaded? https://www.packtpub.com/solr-1-4-enterprise-search-server/book Select the download the code link and provide your email id; the download link will be sent via email. Regards Aditya www.findbestopensource.com On Mon, Apr 26, 2010 at 8:34 PM, Abdelhamid ABID aeh.a...@gmail.com wrote: Hi, I'm also interested in getting those examples; would someone share them? On 4/26/10, markus.rietz...@rzf.fin-nrw.de markus.rietz...@rzf.fin-nrw.de wrote: I have sent you a private mail. Markus -Original Message- From: Johan Cwiklinski [mailto:johan.cwiklin...@ajlsm.com] Sent: Monday, 26 April 2010 10:58 To: solr-user@lucene.apache.org Subject: Solr 1.4 Enterprise Search Server book examples Hello, We've recently acquired the Solr 1.4 Enterprise Search Server book. I tried to download the example ZIP file from the publisher's website, but the file is corrupted and I cannot unzip it :( Could someone tell me if I can get these examples from another location? I sent a message to the publisher last week reporting the issue, but it is not yet fixed, and I'd really like to take a look at the example code and run some tests. Regards, -- Johan Cwiklinski -- Abdelhamid ABID Software Engineer - J2EE / WEB -- Johan Cwiklinski
Re: SEVERE: Could not start SOLR. Check solr/home property
Did you by any chance set up multicore? Try passing the path to the Solr home directory as -Dsolr.solr.home=/path/to/solr/home when you start Solr. On Mon, Apr 26, 2010 at 1:04 PM, Jon Drukman jdruk...@gmail.com wrote: What does this error mean? SEVERE: Could not start SOLR. Check solr/home property I've had this Solr installation working before, but I haven't looked at it in a few months. I checked it today and the web side is returning a 500 error; the log file shows this when starting up:

SEVERE: Could not start SOLR. Check solr/home property
java.lang.RuntimeException: java.io.IOException: read past EOF
        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1068)
        at org.apache.solr.core.SolrCore.init(SolrCore.java:579)
        at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:137)
        at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:83)
        at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:99)
        at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
        at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:594)
        at org.mortbay.jetty.servlet.Context.startContext(Context.java:139)
        at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1218)

For the record, I've never explicitly set solr/home. It always just worked. -jsd- -- - Siddhant
Re: Boost function on *:*
Correct, I am using dismax by default. I actually accomplished what I was looking for by creating a separate request handler with a defType of lucene, and then I used the _val_ hook. I tried using the {!func} approach as you describe but couldn't get it to work. Is there any difference between the two? -- View this message in context: http://lucene.472066.n3.nabble.com/Boost-function-on-tp747131p757817.html Sent from the Solr - User mailing list archive at Nabble.com.
AutoSuggest with custom sorting
Hi, I am supposed to implement auto-suggest where the prefix matches are sorted based on the following criteria. We have two fields (max characters ~100) that we need to search. Field1 has only one word (no spaces) whereas Field2 has multiple words separated by spaces.

Example:
Row1: Field1 - ROFL, Field2 - Rolls on the floor laughing
Row2: Field1 - IRLL, Field2 - Rolling
Row3: Field1 - IRLTR, Field2 - I Roll

1. Results matching Field1 should be ranked higher. Results matching the first word of Field2 should be ranked higher than any subsequent matches. If you search for RO* in the above example, the ranking should be Row1-Row2-Row3.
2. The next sort parameter is the length of the word. So, if you are searching for IR, Row2 (2 out of 4) matches higher than Row3 (2 out of 5).
3. The final sort parameter is an integer field that we already have as part of the schema.

Any help or pointers will be deeply appreciated. -Papiya
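[Editor's note: one common way to get the prefix matching in criterion 1 -- a sketch under assumed names; whether it composes cleanly with the length and integer sort criteria would need testing -- is an edge n-gram field type, combined with dismax field weights that favor Field1 over Field2:]

```xml
<!-- hypothetical schema.xml fragment: index-time edge n-grams make
     every prefix of a token searchable as an ordinary term -->
<fieldType name="autocomplete" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="25"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

A dismax qf such as qf=field1^10 field2^2 (field names and boosts are placeholders) would then express the Field1-over-Field2 preference, and length normalization already tends to score matches in shorter field values higher, which is in the direction of criterion 2.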
aaaah. No response, No Result
Hello. A few hours ago I changed my data-config.xml. Everything worked fine, but now one of my cores gets no results. I test with http://server/solr/select/?q=*:*&debugQuery=on but I get no result. In the schema browser, 800,000 items are in my index. When I search with my API I get some results, but not over a direct HTTP request!? What's going on here??? Thanks for fast help! =) -- View this message in context: http://lucene.472066.n3.nabble.com/h-No-response-No-Result-tp757948p757948.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: SEVERE: Could not start SOLR. Check solr/home property
On 4/26/10 1:18 PM, Siddhant Goel wrote: Did you by any chance set up multicore? Try passing in the path to the Solr home directory as -Dsolr.solr.home=/path/to/solr/home while you start Solr. Nope, no multicore. I destroyed the index and re-created it from scratch and now it works fine. No idea what was going on there. Luckily it takes 10 minutes to create and the box is not in production yet.
Installing Solr under Glassfish
I am having trouble getting Solr configured under GlassFish. I am following the setup instructions from the wiki (http://wiki.apache.org/solr/SolrInstall). I copied the example Solr home directory example/solr to /var/solr.

When I deploy solr.war and start it, I get the error that it cannot find solrconfig.xml; the places it was looking are the classpath and 'solr/conf'. First I tried to set the Java property solr.solr.home=/var/solr, in both the generic JVM configuration as well as a default startup property and the web container. That did not change the error message. The error message also indicates that the $CWD is set to some config directory under the default GlassFish domain. So to test that idea, I copied the whole 'solr/conf' directory to that location, and the admin console comes up. However, that doesn't appear to be the proper configuration either, since none of the XSLTs appear to get triggered and I get an unadorned XML doc as the return result. It also does not appear to be the way that GlassFish wants its webapps configured.

I also looked at the web.xml in WEB-INF of the deployed web application and found a segment stating that you can set the Solr home directory directly in web.xml. So I tried that, but it gives exactly the same solrconfig.xml-not-found error.

Thus, I have run out of options mentioned in either the wiki or the config files. Any suggestions on how to configure Solr to work under GlassFish?

Theo
Re: Installing Solr under Glassfish
Turns out that I was almost there, just stuck that system property in the wrong place. There is a System Properties form just for these types of things; sticking them in the JVM or Web Container properties wasn't the right thing to do. The only problem I still have is that the XML response is raw. How do I trigger the XSLT transformations for an easier-to-read XML rendering?

For those interested in the answer, here are the steps I went through to get Solr configured under GlassFish:

1. Deploy the application from the command line: bin/asadmin deploy solr.war
2. Copy the Solr home directory structure (example/solr in the documentation) to /var/solr.
3. There is a Java system property for Solr named solr.solr.home that directs the Solr web application to find its home directory at a particular place. In our case that system property needs to be set as follows: solr.solr.home=/var/solr
4. In the Enterprise Server form (Common Tasks/Enterprise Server in the left navigation menu of the Administration console) there is a tab called System Properties. Add the above-mentioned property in this form and save it.
5. Go back to the General tab. Just underneath the General Information label there is a button labeled Restart; hit that to restart the GlassFish server. (I guess that if you set this property before you deploy the Solr web application you don't need a restart, but I didn't confirm that behavior.)
6. Once restarted, go to the Applications tab and launch the Solr application. The Solr web application will come up in a separate browser window.

On 4/26/2010 9:08 PM, Theodore Omtzigt wrote:
> I am having trouble getting solr configured under glassfish. I am following the setup instructions from (http://wiki.apache.org/solr/SolrInstall). I copied the example solr home directory example/solr to /var/solr.
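The console steps above can also be sketched entirely from the asadmin command line; a rough sequence, assuming a GlassFish v3 install where create-system-properties and restart-domain are the subcommand names (not verified against v2):

```shell
# Deploy the war, put the Solr home in place, point solr.solr.home at it,
# and restart the domain so the new system property takes effect.
asadmin deploy solr.war
cp -r example/solr /var/solr
asadmin create-system-properties solr.solr.home=/var/solr
asadmin restart-domain
```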
TikaEntityProcessor in Solr1.4
Hi, I would like to use TikaEntityProcessor with Solr 1.4. https://issues.apache.org/jira/browse/SOLR-1358 shows that this was added in Solr 1.5. Can anyone please point me to the steps to patch Solr 1.4 with these changes (if this is possible/allowed)? Also, is there a timeframe for the Solr 1.5 release? Regards Monmohan -- View this message in context: http://lucene.472066.n3.nabble.com/TikaEntityProcessor-in-Solr1-4-tp758371p758371.html Sent from the Solr - User mailing list archive at Nabble.com.
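For back-porting, the usual JIRA workflow is to apply the issue's patch attachment to a Solr 1.4 source checkout and rebuild; a sketch only, where the checkout path and patch file name are illustrative, and the SOLR-1358 patch may not apply cleanly to the 1.4 code base:

```shell
cd solr-1.4-src               # a Solr 1.4 source checkout (path is illustrative)
patch -p0 < SOLR-1358.patch   # patch attachment downloaded from the JIRA issue
ant dist                      # rebuild the Solr war/jars with the patch applied
```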
Re: SEVERE: Could not start SOLR. Check solr/home property
Hi Jon,

Not sure who spits out that error message, but you can always use -Dsolr.solr.home=/path/to/solr/home as the JVM param (in your app server startup script, most likely).

Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/

- Original Message From: Jon Drukman jdruk...@gmail.com
To: solr-user@lucene.apache.org
Sent: Mon, April 26, 2010 4:04:39 PM
Subject: SEVERE: Could not start SOLR. Check solr/home property

What does this error mean? SEVERE: Could not start SOLR. Check solr/home property

I've had this Solr installation working before, but I haven't looked at it in a few months. I checked it today and the web side is returning a 500 error; the log file shows this when starting up:

SEVERE: Could not start SOLR. Check solr/home property
java.lang.RuntimeException: java.io.IOException: read past EOF
    at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1068)
    at org.apache.solr.core.SolrCore.init(SolrCore.java:579)
    at org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:137)
    at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:83)
    at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:99)
    at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:40)
    at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:594)
    at org.mortbay.jetty.servlet.Context.startContext(Context.java:139)
    at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1218)

For the record, I've never explicitly set solr/home. It always just worked.

-jsd-
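Since the stack trace is from Jetty (org.mortbay), the JVM param Otis mentions would go on the startup command for the example start.jar setup; a sketch where the path is a placeholder:

```shell
# Pass the Solr home explicitly when starting the example Jetty launcher.
java -Dsolr.solr.home=/path/to/solr/home -jar start.jar
```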