On 10/30/06, Kevin Lewandowski [EMAIL PROTECTED] wrote:
I'm no longer able to add new data or optimize my index. There are
currently 1600 files in the index directory and it's about 1.1gb. I've
tried changing solrconfig.xml to use the compound file format and that
didn't make a difference. My
This is what I just stuck in my Solr SVN checkout, so I can build + deploy the
app:
rm -rfv /usr/local/jetty/work/*
ant dist-war
cp dist/solr-1.0.war /usr/local/jetty/webapps/solr.war
pushd /usr/local/jetty/
java -classpath `/simpy/bin/addjars /usr/local/jetty/lib`:`/simpy/bin/addjars
On 10/30/06, Otis Gospodnetic [EMAIL PROTECTED] wrote:
This is what I just stuck in my Solr SVN checkout, so I can build + deploy the
app:
This is not ideal though, as every time I make a change I need to run this,
remove old files, build a war, deploy it, and wait a few seconds for Jetty
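Those steps can be folded into one function so a single command does the rebuild and redeploy. A minimal sketch, assuming the paths and war name from the snippet above (adjust JETTY_HOME and the war name for your checkout):

```shell
#!/bin/sh
# Hedged sketch wrapping the rebuild/redeploy steps in one function.
# JETTY_HOME and the war name are taken from the snippet and may differ.
JETTY_HOME=${JETTY_HOME:-/usr/local/jetty}

redeploy() {
  rm -rf "$JETTY_HOME"/work/* &&                       # clear Jetty's unpacked webapps
  ant dist-war &&                                      # build dist/solr-1.0.war
  cp dist/solr-1.0.war "$JETTY_HOME"/webapps/solr.war &&
  echo "deployed solr.war to $JETTY_HOME/webapps"
}
```

You still have to bounce Jetty (or wait for its redeploy scanner) after this runs.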
: Here's a problem I ran into. It says there's no match for snapshot.* found on the
: master box. This is wrong; one such file exists.
:
: - I then ran snappuller specifically on the snap file that's on the master:
: ./bin/snappuller -n snapshot.20061023172655
:
: This time it worked. and
Hello everyone,
Has anybody successfully implemented a Lucene spellchecker within Solr?
If so, could you give details on how one would achieve this?
If not, is it planned to make this standard within Solr? It's a feature
almost every Solr application would want to use, so I think it would be
: My setup is relatively primitive, since my Solr dev time is limited.
: All of my changes have been developed using unit tests and the example
: app. Make a change, 'ant -Dtestcase=... test' (with an occasional
: 'ant clean'). To test with the example, 'ant clean example', Ctrl-C
: the open
Hi, Hoss,
Thanks for the reply!
For #2, I think I just need to set up passwordless SSH with an empty
passphrase, right?
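Yes, that's the idea: a key generated with an empty passphrase, with the public half installed in the master's authorized_keys. A rough sketch, where the key filename and the "solr@master" account are placeholders for your own setup:

```shell
# Generate a key with an empty passphrase (-N "") for the snappuller user.
# The key path and "solr@master" below are placeholders, not fixed names.
KEY=${KEY:-$HOME/.ssh/id_rsa_snappuller}
mkdir -p "$(dirname "$KEY")"
ssh-keygen -q -t rsa -N "" -f "$KEY"
# Install the public half on the master, then verify there is no prompt:
#   ssh-copy-id -i "$KEY.pub" solr@master
#   ssh -i "$KEY" solr@master true
```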
For #1:
I'm using the following Enterprise version:
Linux version 2.4.21-37a6 (gcc version 3.2.3 20030502 (Red Hat Linux
3.2.3-47))
I tried to run the find command
find
I have not done one but have been planning to do it based on this article:
http://today.java.net/pub/a/today/2005/08/09/didyoumean.html
With Solr it would be much simpler than the java examples they give.
On 10/30/06, Michael Imbeault [EMAIL PROTECTED] wrote:
Hello everyone,
Has anybody
Hi -
I'd like to be able to limit the number of documents returned from
any particular group of documents, much as Google only shows a max of
two results from any one website.
The docs are all marked as to which group they belong to. There will
probably be multiple groups returned from any
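Since Solr doesn't do this out of the box, one workaround is to over-fetch and trim on the client side. A hedged sketch, assuming you've flattened each hit to a tab-separated "group<TAB>docid" line (that format is my assumption, not anything Solr emits):

```shell
# Keep at most 2 results per group, preserving result order.
# seen[$1] counts how many times each group key (column 1) has appeared.
printf 'siteA\tdoc1\nsiteA\tdoc2\nsiteA\tdoc3\nsiteB\tdoc4\n' |
awk -F'\t' '++seen[$1] <= 2'
```

This drops siteA's third hit and keeps everything else, which is the "max two per website" behavior described above.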
I just wrote this for Technorati the other day for our internal hackathon. It
was pretty simple. It doesn't use Solr, but it could. It uses Jetty's HTTP
handler, which I highly recommend. It acts as a web service that responds to
HTTP GET requests that contain a query, and it return
I had the very same article in mind - how would it be simpler in Solr
than in Lucene? A spellchecker is pretty much standard in every major
I meant it would be a simpler implementation in Solr because you don't
have to deal with Java or any Lucene APIs. You just create a document
for each
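To make the "document per dictionary word" idea concrete: in the article's scheme each word becomes one document whose gram field holds its n-grams, so a misspelling can match on shared grams. A small sketch of generating the 3-grams for one word (the function name and field layout are illustrative, not a shipped schema):

```shell
# Emit the space-separated 3-grams of a word, e.g. the value you'd put in
# a hypothetical "gram3" field of that word's Solr document.
ngrams3() {
  w=$1; i=0; out=""
  while [ $((i + 3)) -le ${#w} ]; do
    g=$(printf '%s' "$w" | cut -c$((i + 1))-$((i + 3)))   # next 3-char window
    out="$out${out:+ }$g"
    i=$((i + 1))
  done
  printf '%s\n' "$out"
}
ngrams3 lucene   # -> luc uce cen ene
```

Indexing that value for every dictionary word is the whole "spellchecker index"; querying it with the grams of a misspelled word returns candidate corrections ranked by overlap.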