Thank you for the reference to the ${foo} format.
I am looking at trying to minimize the redundant data in my document feed since
I have lots of records with an overall small footprint per record. This simple
change can save me maybe 20% of my data set size. It also provides a mechanism
to
Hi all,
it seems that we just post too much too fast to Solr.
When we post 100 documents (separate calls) and perform a commit, everything
goes well, but as soon as we start sending thousands of documents and then use
autocommit or send the commit message we have the situation that there are a
Hi,
I want to use the result of a function query based on multiple field values for
ranking the results.
Can I also return the value of the computed function along with the other fields
of the documents returned?
Thanks in anticipation,
-umar
Chris Hostetter wrote:
The two exceptions you cited both indicate there was at least one date
instance with no millis included -- NOW can't do that. It always includes
millis (even though it shouldn't).
I've seen people suggest, for performance reasons, that they reduce the
granularity of
Welcome aboard, Koji.
Bill
On Tue, May 6, 2008 at 6:56 PM, Koji Sekiguchi [EMAIL PROTECTED] wrote:
Hi Erik and everyone!
I'm looking forward to working with you. :)
Cheers,
Koji
Erik Hatcher wrote:
A warm welcome to our newest Solr committer, Koji Sekiguchi! He's been
providing
The really simple way is to index the value none for fields that are empty, then
just search on color:none.
On Tue, May 6, 2008 at 9:06 PM, Brendan Grainger [EMAIL PROTECTED]
wrote:
Hi,
Not sure if this is what you want, but to search for 'empty' fields we use
something like this:
(*:* AND
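The sentinel-value suggestion above can be sketched at index time; a minimal illustration in Python (the `with_color_sentinel` helper is hypothetical, not from the thread):

```python
# Sketch of the "index none for empty fields" approach: before a document
# is sent to Solr, substitute the literal token "none" for a missing or
# blank color so it becomes searchable and facetable as color:none.
def with_color_sentinel(doc: dict) -> dict:
    """Return a copy of doc with an empty/missing color set to 'none'."""
    if not doc.get("color"):
        return {**doc, "color": "none"}
    return doc

print(with_color_sentinel({"id": "1"}))                   # color filled with "none"
print(with_color_sentinel({"id": "2", "color": "blue"}))  # left unchanged
```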
We have a few slave solr servers that are just hardlinked-rsynced
copies of a master server.
When we do the rsync the changes don't show up immediately. The snap*
scripts call a commit on the slave servers -- but since these are
readonly servers we've disabled /update in the solrconfig.xml
I currently have a java-based application that stores all objects on the file
system (text, blobs) and uses lucene to search the objects. If I can store
these objects in solr, I would greatly increase the scalability of my
application.
Would it be safe to replace the filesystem with solr in
: (*:* AND -color:[* TO *])
the *:* shouldn't be necessary in Solr; fq=-color:[* TO *] should work just
fine.
: One of the fields in my database is color. It can either contain a value
: (blue, red etc) or be blank. When I perform a search with facet counts on, I
: get a count for _empty_.
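As a sketch of sending that filter query (the host, port, and handler path are assumptions for illustration, not from the thread):

```python
# Build a Solr select URL that uses fq=-color:[* TO *] to restrict
# results to documents with no value in the "color" field.
from urllib.parse import urlencode

base_url = "http://localhost:8983/solr/select"  # hypothetical endpoint
params = {
    "q": "*:*",               # match all documents...
    "fq": "-color:[* TO *]",  # ...filtered to those lacking a color
}
query_url = base_url + "?" + urlencode(params)
print(query_url)
```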
To search against multiple Solrs, you can use
http://wiki.apache.org/solr/DistributedSearch in Solr 1.3. This is not tied
to the MultiCore feature.
-Original Message-
From: Shalin Shekhar Mangar [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 06, 2008 9:28 PM
To:
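The shards mechanism mentioned above can be sketched as follows; the shard addresses are hypothetical:

```python
# Build the "shards" parameter for a Solr 1.3 distributed search: the
# node that receives the request fans the query out to every shard
# listed and merges the results before responding.
from urllib.parse import urlencode

shards = [
    "shard1.example.com:8983/solr",  # hypothetical shard addresses
    "shard2.example.com:8983/solr",
]
params = {"q": "ipod", "shards": ",".join(shards)}
query_string = urlencode(params)
print(query_string)
```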
Hi Tim,
You definitely have something seriously wrong with your setup--some
people have reported thousands of documents indexed per second, and
I've personally indexed millions of documents sans commit/. We've
never had a report of documents not being in the index despite being
sent to
On 7-May-08, at 5:01 AM, Umar Shah wrote:
Hi,
I want to use the result of a function query based on multiple field
values for
ranking the results.
Can I also return the value of the computed function along with the
other fields
of the documents returned?
If you score documents based on a
I have the same requirement, and from what I understand the distributed
search feature will help implementing this, by having one shard per
language. Am I right?
Gereon
Mike Klaas wrote:
On 5-May-08, at 1:28 PM, Eli K wrote:
Wouldn't this impact both indexing and search performance and
Gereon,
I think that you must have the same schema on each shard but I am not
sure if it must also have the same analyzers.
These are shards of one index and not multiple indexes. There is
probably a way to get each shard to contain one language but then you
end up with x servers for x
Another option would be to use a multi-core configuration, one for each
language. If you're using the java client from 1.3 you could then just have a
base url that you append the language string to in order to pick what core
you're searching over, (http://searchserver:1234/solr/en,
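The per-language core idea above can be sketched by appending the language code to a base URL; the host, port, and core names here are hypothetical:

```python
# Pick which core to search by appending the language code to a base
# URL, so each language's documents live in their own core.
def core_url(base: str, lang: str) -> str:
    """Return the select URL for the core holding documents in `lang`."""
    return f"{base.rstrip('/')}/{lang}/select"

base = "http://searchserver:1234/solr"
print(core_url(base, "en"))  # -> http://searchserver:1234/solr/en/select
print(core_url(base, "de"))  # -> http://searchserver:1234/solr/de/select
```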
I'm a complete newbie to Solr and Java programming. I'm able to get Solr up
and running. I'd like to replace Porter stemming with KStem. I have the KStem
source, but I'm clueless in terms of how to compile and use it.
Thanks,
HH
There is nothing super special that you need to do to get KStem compiled.
However, you will need the Solr JAR file on your classpath when you compile
KStem.
You can do this from the command line, Ant, Eclipse, etc. This will produce the class
files. It will also be the easiest to use if you put this
I don't really see how that would help, no. All the benefits from
using separate indices would be gained by using one field per
language, ISTM.
By the way, there are tools available that make field-per-language
stuff much easier, especially if there are many fields. By using
dynamic
On 7-May-08, at 8:26 AM, Phillip Rhodes wrote:
I currently have a java-based application that stores all objects on
the file system (text, blobs) and uses lucene to search the
objects. If I can store these objects in solr, I would greatly
increase the scalability of my application.
On 7-May-08, at 5:04 AM, Daniel Papasian wrote:
Chris Hostetter wrote:
The two exceptions you cited both indicate there was at least one
date
instance with no millis included -- NOW can't do that. It always
includes
millis (even though it shouldn't).
I've seen people suggest, for