Hi all,
I came across this issue while exploring the Solr FunctionQuery. I hope
one of you can help me with this:
I need to combine the score of a normal keyword search with a numeric
field in the index to form a new score, so I am using the query() function
provided by FunctionQuery.
So
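A sketch of what such a combined score can look like with the query() function; the field name popularity, the weights, and the query text are illustrative assumptions, not details from the original mail:

```
q=_val_:"sum(query($qq),product(popularity,0.1))"&qq={!dismax qf=text}ipod
```

Here query($qq) yields the relevance score of the dereferenced keyword query, and the product term mixes the numeric field into the final score.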
I downloaded solr 1.4.0 but discovered when using solrj 1.4 that a required
slf4j jar was missing in the distribution (i.e. apache-solr-1.4.0/dist). I got
a java.lang.NoClassDefFoundError: org/slf4j/impl/StaticLoggerBinder when using
solrj
I solved the problem according to
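For the archives: the usual fix for that NoClassDefFoundError is to put exactly one slf4j binding jar (e.g. slf4j-jdk14 or slf4j-simple, matching the slf4j-api version) on the classpath next to the solrj jar. A Maven sketch; the version shown is illustrative:

```xml
<!-- exactly one slf4j binding must be on the classpath; version illustrative -->
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-jdk14</artifactId>
  <version>1.5.5</version>
</dependency>
```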
Hi Sascha,
Thanks for your reply.
Our approach is similar to what you have mentioned in the jira issue except
that we have all metadata in the xml and not in the database. I am therefore
using a custom XmlUpdateRequestHandler to parse the XML and then calling
Tika from within the XML Loader to
are you sure that the doc w/ the same id was not created after that?
On Mon, Nov 16, 2009 at 11:12 PM, Mark Ellul m...@catalystic.com wrote:
Hi,
I have added a deleted field in my database, and am using the
Dataimporthandler to add rows to the index...
I am using solr 1.4
I have added my
The doc already existed before the delta-import has been run.
And it exists afterwards... even though it says it's deleting it.
Any ideas of what I can try?
On 11/17/09, Noble Paul നോബിള് नोब्ळ् noble.p...@corp.aol.com wrote:
are you sure that the doc w/ the same id was not created after
Why don't you add a new timestamp field? You can use the
TemplateTransformer with the formatDate() function
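A sketch of how the delta and delete queries can be wired together in data-config.xml; the table and column names here are assumptions, not from the original mails:

```xml
<!-- illustrative DataImportHandler entity; table/column names are assumed -->
<entity name="item" pk="id"
        query="SELECT id, name FROM item WHERE deleted = 0"
        deltaQuery="SELECT id FROM item
                    WHERE last_modified &gt; '${dataimporter.last_index_time}'"
        deletedPkQuery="SELECT id FROM item WHERE deleted = 1
                    AND last_modified &gt; '${dataimporter.last_index_time}'"/>
```

With a setup like this, deletedPkQuery returns only the primary keys to remove, so deletes are handled separately from the rows being added.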
On Tue, Nov 17, 2009 at 5:49 PM, Mark Ellul m...@catalystic.com wrote:
Hi Noble,
Excellent Question... should the field that does the deleting be in a
different entity to the one that
Kerwin,
Kerwin wrote:
Our approach is similar to what you have mentioned in the jira issue except
that we have all metadata in the xml and not in the database. I am therefore
using a custom XmlUpdateRequestHandler to parse the XML and then calling
Tika from within the XML Loader to parse the
Hi Noble,
I have updated my entity specs by having a separate entity for
selecting rows which are not deleted and one for those that are deleted, so
I am sure now that the document is not getting added in the same
import.
I read in the tutorial that the deletes are not taken out until the
commit is
Mark,
http://localhost:8983/solr/update?stream.body=%3Ccommit/%3E
Otis
--
Sematext is hiring -- http://sematext.com/about/jobs.html?mls
Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR
- Original Message
From: Mark Ellul m...@catalystic.com
To:
Thanks Otis... I remember that one!
It still did not remove the document! So obviously it's something else that's
happening.
On Tue, Nov 17, 2009 at 10:47 AM, Otis Gospodnetic
otis_gospodne...@yahoo.com wrote:
Mark,
http://localhost:8983/solr/update?stream.body=%3Ccommit/%3E
Otis
--
Hi there!
I am trying to test distributed search on 2 servers. I've created a simple
application which adds sample documents to 2 different Solr servers (version
1.3.0).
While it is possible to search for a certain keyphrase on either of these
servers, I am getting a weird error when trying to
I apologize in advance for the simple question... we're running on Solr 1.3,
looking to upgrade to 1.4. I haven't been able to find instructions or
guidelines for upgrading. Can anyone point me in the right direction?
Thanks!
Adam
On Tue, Nov 17, 2009 at 06:09:56PM +0200, Eugene Dzhurinsky wrote:
java.lang.NullPointerException
at
org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:421)
I compared schema.xml from Solr installation package with the one I created,
and found out that my
Several things about your message don't make sense...
1) the field names listed in your qf don't match up to the field names
in the generated query.toString() ... suggesting that they come from
different examples
2) the query.toString() output from each of your queries is identical,
and yet
The new PECL package solr-0.9.7 (beta) has been released at
http://pecl.php.net/.
Release notes
-
- Fixed bug 16924 AC_MSG_NOTICE() is undefined in autoconf 2.13
- Added new method SolrClient::getDebug()
- Modified SolrClient::__construct() so that port numbers and other integer
: I am using Dismax request handler for queries:
:
: ...select?q=foo bar foo2 bar2&qt=dismax&mm=2...
...
: But now I want change this to the following:
:
: List all documents that have at least 2 of the optional clauses OR that
: have at least one of the query terms (e.g. foo) more than
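For reference, mm accepts conditional specs as well as a plain number; the values below are illustrative:

```
mm=2        at least 2 optional clauses must match
mm=2<-1     for up to 2 clauses, all must match; above 2, all but 1 must match
```

The OR-style condition described above (2 clauses, or one term matching more than once) is not expressible in mm alone.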
: Basically, search entries are keyed to other documents. We have finite
: storage,
: so we purge old documents. My understanding was that deleted documents
: still
: take space until an optimize is done. Therefore, if I don't optimize, the
: index
: size on disk will grow without bound.
:
:
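Space from deleted documents is reclaimed when their segments are merged; an explicit optimize can be issued the same way as a commit (URL illustrative):

```
http://localhost:8983/solr/update?stream.body=%3Coptimize/%3E
```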
Hi,
Sending this mail again after I joined the solr-user group. Kindly find time
to help.
Thanks and Rgds,
Anil
-- Forwarded message --
From: Anil Cherian cherian.anil2...@gmail.com
Date: Fri, Nov 13, 2009 at 3:48 PM
Subject: solr index-time boost... help required please
To:
: <str name="qf">PlantSearch^1 GeographySearch^1 RegionSearch^1
: CountrySearch^1 BusUnitSearch^1 BusinessFunctionSearch^1
: Businessprocesses^1 LifecycleStatus^1 ApplicationNature^1 UploadedDate^1
: </str>
: <str name="pf">PlantSearch^1 GeographySearch^1 RegionSearch^1
: CountrySearch^1
On Tue, Nov 17, 2009 at 2:24 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: Basically, search entries are keyed to other documents. We have finite
: storage,
: so we purge old documents. My understanding was that deleted documents
: still
: take space until an optimize is done.
: I apologize in advance for the simple question... we're running on Solr
: 1.3, looking to upgrade to 1.4. I haven't been able to find
: instructions or guidelines for upgrading. Can anyone point me in the
: right direction?
Official info for people upgrading can be found in the
: I downloaded solr 1.4.0 but discovered when using solrj 1.4 that a
: required slf4j jar was missing in the distribution (i.e.
: apache-solr-1.4.0/dist). I got a java.lang.NoClassDefFoundError:
: org/slf4j/impl/StaticLoggerBinder when using solrj
...
: Have I overlooked something or
: I'm a newbie using Solr and I'd like to run some tests against our data set. I
: have successfully tested Solr + Cell using the standard Http Solr server
: and now we need to test the Embedded solution and when a try to start the
: embedded server i get this exception:
:
: INFO: registering core:
: If documents are being added to and removed from an index (and commits
: are being issued) while a user is searching, then the experience of
: paging through search results using the obvious solr mechanism
: (start=100&rows=10) may be disorienting for the user. For one
: example, by the time the
CHANGES.txt contains information, but no instructions.
-Adam
- Original Message
From: Chris Hostetter hossman_luc...@fucit.org
To: solr-user@lucene.apache.org
Sent: Tue, November 17, 2009 1:43:14 PM
Subject: Re: Where is upgrading documentation?
: I apologize in advance for the
Thanks a lot Hoss!
[ ]'s
Leonardo da S. Souza
°v° Linux user #375225
/(_)\ http://counter.li.org/
^ ^
On Tue, Nov 17, 2009 at 6:12 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:
: I'm a newbie using Solr and I'd like to run some tests against our data
set. I
: have successfully tested
Hi users,
I wanted to know: is there a way we can initiate Solr re-indexing?
I mean, for example, I have a field which was of type string and I indexed 100
documents.
When I change the field to text I don't want to load the documents again; I
should be able to just run a command line and the documents
I am looking at executing a single solr query and having solr automatically
execute one (or more) additional solr queries (inside solr) as a way to save
some overhead/time. I am doing this by overriding the SearchComponent. My
code works and I was looking at ways to optimize the code.
the
On Tue, Nov 17, 2009 at 11:09:38AM -0800, Chris Hostetter said:
Several things about your message don't make sense...
Hmm, sorry - a byproduct of building up the mail over time I think.
The query
?q=Here there be dragons
&fl=id,title,score
&debugQuery=on
&qt=dismax
&qf=title
gets echoed as
I want to use the standard QueryComponent to run a query then sort a *limited
number of the results* by some function query. So if my query returns
10,000 results, I'd like to calculate the function over only the top, say
100 of them, and sort that for the ultimate results. Is this possible?
Permanent solution we found was to add:
1. flush() before closing the segment.gen file write (On Lucene).
2. Remove the slave's segment.gen before replication
Point 1 elaborated:
Lucene 2.4, org.apache.lucene.index.SegmentInfos.finishCommit(Directory dir)
method:
Writing of segment.gen file
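An illustrative sketch of point 1 above: flush the stream explicitly before closing when writing a small marker file such as segments.gen, so buffered bytes are pushed out before close. The file layout and method names below are assumptions for illustration, not Lucene's actual SegmentInfos code:

```java
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class GenFileSketch {
    // Write a small generation-marker file, flushing explicitly before
    // close. The layout (format marker + generation written twice) is
    // illustrative only.
    static void writeGenFile(File f, long generation) throws IOException {
        DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(f)));
        try {
            out.writeInt(-2);          // format marker (illustrative value)
            out.writeLong(generation); // generation, written twice
            out.writeLong(generation);
            out.flush();               // the added flush() before close
        } finally {
            out.close();
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("segments", ".gen");
        writeGenFile(f, 7L);
        System.out.println(f.length()); // 4 + 8 + 8 = 20 bytes
        f.delete();
    }
}
```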
While trying to make use of the StreamingUpdateSolrServer for updates with the
release code for Solr 1.4, I noticed some characters such as é did not show up
in the index correctly. The code should set the charset name via the
constructor of the OutputStreamWriter. I noticed that the
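A sketch of the fix described above: pass an explicit charset name to the OutputStreamWriter constructor instead of relying on the platform default encoding, so characters like é survive the round trip. The helper name is illustrative, not the actual StreamingUpdateSolrServer internals:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class CharsetWriterSketch {
    // Encode a string through an OutputStreamWriter constructed with an
    // explicit charset rather than the platform default.
    static byte[] encodeUtf8(String s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        Writer w = new OutputStreamWriter(bos, "UTF-8"); // explicit charset
        w.write(s);
        w.close(); // flushes and closes the underlying stream
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] b = encodeUtf8("\u00e9"); // 'é'
        // UTF-8 encodes 'é' as the two bytes 0xC3 0xA9
        System.out.println(b.length);
    }
}
```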
Maduranga Kannangara wrote:
Permanent solution we found was to add:
1. flush() before closing the segment.gen file write (On Lucene).
Hmm ... but close does flush?
2. Remove the slave's segment.gen before replication
Point 1 elaborated:
Lucene 2.4,
Been there done that.
Indexing into the smaller cores will be faster.
You will be able to spread the load across multiple machines.
There are other advantages:
You will not have a 1/2-terabyte set of files to worry about.
You will not need 1.1T in one partition to run an optimize.
You will not
Darniz,
The indexer is typically an external application you write. This application
gets documents from some data source and sends them to Solr for indexing. It
is this application that needs to be able to re-get the appropriate set of
documents from the data source and re-send them to Solr
hi,
how do I get the autocomplete/autosuggest feature in Solr 1.4? Please give me
the code also...
--
View this message in context:
http://old.nabble.com/how-to-get-the-autocomplete-feature-in-solr-1.4--tp26402992p26402992.html
Sent from the Solr - User mailing list archive at Nabble.com.
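For the archives: Solr 1.4's TermsComponent is one common building block for autosuggest; a sketch of a prefix request, with field name and prefix as illustrative assumptions:

```
http://localhost:8983/solr/terms?terms.fl=name&terms.prefix=ip&terms.limit=10
```

This returns indexed terms in the name field starting with ip, which a UI can present as suggestions.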
There seems to be some improvement. The write speeds are faster. Server
restarts are less frequent.
We changed the configuration to:
<maxDocs>50</maxDocs>
<maxTime>1</maxTime>
Before the Change:
- Server Restarts: 10 times in 12 hours
- CPU load: Average:50 and Peak:90
After the Change:
- Server