So, you're using Jetty. That's indeed a place to store the file when using
Jetty.
Oh, I apparently figured out how to get the jar file to load, so the problem is
solved, I suppose.
The fix seems very odd to me, but I got it from a comment on the SSP 2 blog
page (
So those multiple documents overwrite each other? In that case, your data is
not suited for a lowercased docID. I'd recommend not doing any analysis on the
docID to prevent such headaches.
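For what it's worth, a schema fragment along those lines might look like the sketch below (the field name docID is taken from this thread; the point is simply that the uniqueKey field uses an unanalyzed string type, so the key is stored and matched verbatim):

```xml
<!-- an unanalyzed type: no tokenizing, no lowercasing -->
<fieldType name="string" class="solr.StrField" sortMissingLast="true"/>

<field name="docID" type="string" indexed="true" stored="true" required="true"/>

<uniqueKey>docID</uniqueKey>
```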
Hi,
My schema consists of a field of type lowercase (for applying the lowercase
filter factory) and
Hello,
The terms query for a date field seems to get populated with some weird dates;
many of these dates (1970, 2009, 2011-04-23) are not present in the indexed data.
Please see sample data below
I also notice that a delete and optimize does not remove the relevant terms for
date fields, the
I want multiple documents with the same unique key to overwrite each other, but
they are not overwriting because of the lowercase field type used as the unique key
On 4 May 2011 11:45, Markus Jelsma markus.jel...@openindex.io wrote:
So those multiple documents overwrite each other? In that case, your data is
I could see a required field missing exception for a few docs, but apart from that I
could not see any other exception.
--
View this message in context:
http://lucene.472066.n3.nabble.com/full-import-called-simultaneously-for-multiple-core-tp2894606p2897746.html
Sent from the Solr - User mailing list
Hello,
I'm trying to modify Solr and I think debugging will be very useful to
understand what's going on. Hence I'd like to use an IDE (NetBeans)
which automatically supports Maven projects. I see under src/maven
that there are templates but I'm not sure how to use them to mavenize
the
In the ant script there is a target to generate maven's artifacts.
After that, you will be able to open the project as a standard maven
project.
Ludovic.
2011/5/4 Gabriele Kahlout [via Lucene]
ml-node+2898068-621882422-383...@n3.nabble.com
Hello,
I'm trying to modify Solr and I think
Hello Barry,
the main AnalysisEngine descriptor defined inside the analysisEngine
element should be inside one of the jars imported with the lib elements.
At the moment it cannot be taken from expanded directories, but it should be
easy to do (and indeed useful) by modifying the
Hello,
I just upgraded to 3.1. After this the solr.log is showing deprecation
warnings (see below).
What can I do about this?
Regards,
Ward
-
WARNING: WhitespaceTokenizerFactory is using deprecated LUCENE_24 emulation.
You should at some point declare and reindex to at least 3.0,
Hi Robert,
Have you seen *any* growth?
We once added a copy field for supporting leading wildcards and got our
index doubled (or something close).
On Tue, May 3, 2011 at 9:24 PM, Robert Petersen rober...@buy.com wrote:
From what I have seen, adding a second field with the same terms as
Dear list,
is it possible to add a custom QueryParser stage to solr
or add a custom query filter?
My aim is to filter out reserved characters from query terms,
like : within a query term.
query=text:(:foo AND bar)
query=text:(foo AND b:ar)
Regards
Bernd
generate-maven-artifacts:
[mkdir] Created dir: /Users/simpatico/SOLR_HOME/build/maven
[mkdir] Created dir: /Users/simpatico/SOLR_HOME/dist/maven
[copy] Copying 1 file to
/Users/simpatico/SOLR_HOME/build/maven/src/maven
[artifact:install-provider] Installing provider:
oups,
sorry, this was not the target I used (this one should work too, but...),
the one I used is get-maven-poms. That will just create pom files and copy
them to their right target locations.
I'm using netbeans and I'm using the plugin Automatic Projects to do
everything inside the IDE.
Which
On Wed, May 4, 2011 at 1:11 PM, lboutros boutr...@gmail.com wrote:
oups,
sorry, this was not the target I used (this one should work too, but...),
the one I used is get-maven-poms. That will just create pom files and copy
them to their right target locations.
I don't have get-maven-poms
ok, this is part of my build.xml (from the svn repository) :
<property name="version" value="3.1-SNAPSHOT"/>
<target name="get-maven-poms"
        description="Copy Maven POMs from dev-tools/maven/ to their target locations">
  <copy todir="." overwrite="true">
    <fileset
It worked after checking out the dev-tools folder. Thank you!
On Wed, May 4, 2011 at 1:20 PM, lboutros boutr...@gmail.com wrote:
<property name="version" value="3.1-SNAPSHOT"/>
<target name="get-maven-poms"
        description="Copy Maven POMs from dev-tools/maven/ to their target locations">
Hello,
I want to patch my Solr installation (1.4.1) with
SOLR-2010 (https://issues.apache.org/jira/browse/SOLR-2010).
I need this feature:
Only return collations that are guaranteed to result in hits if re-queried
Now i try the following code:
wget
I'm not sure what you're asking here, can you clarify? A search
machine that replicates from an indexer is just a Solr server;
search requests are handled like on any other server.
If you're asking about how to configure replication, see:
http://wiki.apache.org/solr/SolrReplication#Slave
Best
Erick
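For reference, a minimal slave-side configuration looks roughly like this (a sketch; host, port and poll interval are placeholders, and the wiki page linked above covers the full set of options):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <!-- URL of the master core's replication handler (placeholder host/port) -->
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
    <!-- how often the slave polls the master (HH:mm:ss) -->
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```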
thanks for bringing closure here. Problems like this drive me crazy,
especially when the solution is really simple, but hard to figure out!
Erick
On Wed, May 4, 2011 at 1:14 AM, Jed Glazner jglaz...@beyondoblivion.com wrote:
So it turns out that it's the host names. According to the DNS RFC
I would like to get more details about the JSON capabilities provided by the Solr
3.1 version.
Does this really mean that, referring to the schema.xml, I could just hit a JSON
endpoint passing the field values corresponding to the fields of schema.xml and the
document gets indexed to Solr?
Any help would be highly
This is pretty fragile, the Jetty work directories come and go.
I predict it will keep disappearing and/or you'll go through this same hassle
next time you re-install or move to a new machine or...
You *should* be able to just remove that directory entirely and still start w/o
copying the jar.
Hmmm, this *looks* like you've changed your schema without
re-indexing all your data so you're getting old (string?) values in
that field, but that's just a guess. If this is really happening on a
clean index it's a problem.
I'm also going to guess that you're not really deleting the documents
OK, what is your proof that they're not overwriting? Because the
deleted documents are still in the index, and looking at, say,
terms will show them until an optimize is done.
The deleted copies won't be shown in search results etc, but
the underlying data is still in the index.
If that's not
Hmmm, Can you provide more details? I know of no reason this
isn't working...
Best
Erick
On Wed, May 4, 2011 at 3:27 AM, Kannan ramkannan2...@gmail.com wrote:
I could see a required field missing exception for a few docs, but apart from that I
could not see any other exception.
What this is saying is that your index was created with a 2.x format.
That format
is supported in 3.x, but will NOT be supported in 4.x.
So re-index your data with a 3.x Solr and this should go away...
Best
Erick
On Wed, May 4, 2011 at 6:20 AM, Ward Bekker w...@equanimity.nl wrote:
Hello,
I
Sure, all you have to do is derive from the right class. See:
http://wiki.apache.org/solr/SolrPlugins#QParserPlugin
But this'll be tricky since you have to get at the proper colons in your
example, and not remove the ones that delimit fields
Might it be easier to clean the search terms in
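Cleaning the terms client-side could look something like the sketch below. QuerySanitizer is a hypothetical helper, not a Solr class (SolrJ users get much the same from ClientUtils.escapeQueryChars); it backslash-escapes the query-syntax characters instead of deleting them, so a stray colon inside a term no longer reads as a field separator:

```java
public class QuerySanitizer {
    // A subset of characters the Lucene query syntax treats as special.
    private static final String RESERVED = "+-!(){}[]^\"~*?:\\";

    // Backslash-escape reserved characters inside a single user-entered term,
    // so e.g. ":foo" no longer looks like a field prefix.
    public static String sanitizeTerm(String term) {
        StringBuilder sb = new StringBuilder(term.length());
        for (char c : term.toCharArray()) {
            if (RESERVED.indexOf(c) >= 0) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }
}
```

Built this way, the user input from the example would be sent as text:(\:foo AND bar); the field-delimiting colon after "text" is added by the query-building code, never by the user, so it stays unescaped.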
Have you looked here?
http://wiki.apache.org/solr/SolJSON
If so, what parts are you having trouble with?
Best
Erick
On Wed, May 4, 2011 at 8:11 AM, pgaur themirror2...@gmail.com wrote:
I would like to get more details about the JSON capabilities provided by the Solr
3.1 version.
Does this really
Hi Erick,
I've removed the old indexes and rebuilt, and I'm still getting the deprecation
warnings.
Regards,
Ward
On May 4, 2011, at 2:23 PM, Erick Erickson wrote:
What this is saying is that your index was created with a 2.x format.
That format
is supported in 3.x, but will NOT be
did you update this part in your solrconfig.xml ?
<luceneMatchVersion>LUCENE_31</luceneMatchVersion>
Ludovic.
-
Jouve
France.
That attribute is not defined. Is it required?
Regards,
Ward
On May 4, 2011, at 3:11 PM, lboutros wrote:
did you update this part in your solrconfig.xml ?
<luceneMatchVersion>LUCENE_31</luceneMatchVersion>
Ludovic.
-
Jouve
France.
Hi Erik,
Am 04.05.2011 14:30, schrieb Erick Erickson:
Sure, all you have to do is derive from the right class. See:
http://wiki.apache.org/solr/SolrPlugins#QParserPlugin
But this'll be tricky since you have to get at the proper colons in your
example, and not remove the ones that delimit
I just did a clean check out on the 1.4.1 branch and then applied the latest
(10/22/2010) version of SOLR-2010_141.patch and it applied cleanly.
I noticed from the listing you sent that for any new files it removes trailing
CRs from the text. Maybe it's not doing this for you on files that need
I also should mention that solr-2010 is incorporated in Solr 3.1, so if you can
upgrade you won't need a patch. Note, however, that you will still want to
apply the fix in solr-2462 regardless of the version as this fix hasn't been
committed anywhere.
James Dyer
E-Commerce Systems
Ingram
This could work. Are there search/index performance drawbacks when using it?
On Mon, May 2, 2011 at 6:22 PM, Ahmet Arslan iori...@yahoo.com wrote:
Is there an efficient way to update multiple documents with common values
(e.g. color = white)? An example would be to mark all white-colored
I have a couple lib directives in my solrconfig.xml:
<lib dir="./lib" />
<lib dir="/home/user/apache-solr-1.4.1/example/solr/lib/" />
Both of those should work, as far as I know. Those are pointing to 2
different folders, and both have a copy of my jar file in them. Yet, for
some reason Solr doesn't
On 5/4/2011 8:50 AM, Dyer, James wrote:
I also should mention that solr-2010 is incorporated in Solr 3.1, so if you can
upgrade you won't need a patch. Note, however, that you will still want to
apply the fix in solr-2462 regardless of the version as this fix hasn't been
committed anywhere.
That won't work. External file fields are currently only usable within
function queries, according to the Javadocs
On Wed, May 4, 2011 at 12:16 PM, Rih tanrihae...@gmail.com wrote:
This could work. Are there search/index performance drawbacks when using
it?
On Mon, May 2, 2011 at 6:22 PM,
but it doesn't build.
Now, I've checked out solr4 from the trunk and tried to build the maven
project there, but it fails downloading Berkeley DB:
BUILD FAILURE
Total time: 1:07.367s
Finished at: Wed May 04 20:33:29 CEST 2011
Hmmm, I'll have to defer this to people who understand it better.. Siiiggghh...
Erick
On Wed, May 4, 2011 at 9:56 AM, Bernd Fehling
bernd.fehl...@uni-bielefeld.de wrote:
Hi Erik,
Am 04.05.2011 14:30, schrieb Erick Erickson:
Sure, all you have to do is derive from the right class. See:
How do you add multiple documents to Solr in JSON in a single request?
In XML, I can just send this:
<add>
  <doc><field name="id">1</field></doc>
  <doc><field name="id">2</field></doc>
</add>
There is an example on this page:
http://wiki.apache.org/solr/UpdateJSON
But it doesn't demonstrate how to send more than
I do not build this part, I don't need it.
The lib was present in the branch_3x branch, but is not there anymore.
You can download it here :
http://search.lucidimagination.com/search/out?u=http%3A%2F%2Fdownloads.osafoundation.org%2Fdb%2Fdb-4.7.25.jar
You have to install it locally.
Ludovic.
Neither do I, but I was doing mvn install. What do you do?
On Wed, May 4, 2011 at 9:11 PM, lboutros boutr...@gmail.com wrote:
I do not build this part, I don't need it.
The lib was present in the branch_3x branch, but is not there anymore.
You can download it here :
I opened and built my needed projects in Netbeans, i.e.: Solr Core, Solr
Search Server, Solrj, Lucene Core etc
But with the given library you should go to the next step.
Ludovic.
-
Jouve
France.
Hi,
When I add the JSON request handler for updates in solrconfig.xml as below
<requestHandler name="/update/json" class="solr.JsonUpdateRequestHandler"/>
I am getting the following error. Version: apache-solr-1.4.1. Could you please
help...
Error is shown below,
Check your log files for more
I'm trying to use MoreLikeThis handler and mlt.qf to boost certain fields:
/solr/mlt?q=id:1&mlt.fl=body_title,text&mlt.qf=body_title^20.0+text^1.0&mlt.mintf=1
Looks like this has been an outstanding issue:
This could work. Are there
search/index performance drawbacks when using it?
I am not using this feature in production, but it is the only way that I know
to update a field without re-indexing the whole document.
If you can give us more details about your use case, others can suggest different
That won't work. External file fields
are currently only usable within
function queries, according to the Javadocs
Yes you are right, only function queries.
However he can dump ids of white-colored items to a text file in the following
format:
12278=20.0
9984=20.0
issue a commit and filter
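Sketched out, that approach might look like the fragment below (the field name, file name and filter threshold are illustrative, not from this thread beyond the id=value lines above): the values live in a file named after the field in the index data directory, the schema declares an ExternalFileField, and an frange filter turns the function value into the actual filter.

```xml
<!-- schema.xml: keyed on the uniqueKey field; ids absent from the file get defVal -->
<fieldType name="extfile" class="solr.ExternalFileField"
           keyField="id" defVal="0" valType="float"/>
<field name="colorBoost" type="extfile"/>

<!-- the id=value lines above would go into <dataDir>/external_colorBoost,
     re-read on commit; the filter could then be  fq={!frange l=1}colorBoost  -->
```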
Hi folks. What you're supposed to do is run:
mvn -N -Pbootstrap install
as the very first one-time only step. It copies several custom jar files into
your local repository. From then on you can build like normally with maven.
~ David Smiley
Author:
I found out how to do it, but you have to have duplicate add keys in
a JSON object, which isn't easily serializable from a hash in most
languages.
I reported an issue here:
https://issues.apache.org/jira/browse/SOLR-2496
Please vote for it if you agree.
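Concretely, the repeated-key form described above looks like this; Solr's JSON update handler accepts it, but most JSON libraries refuse to emit duplicate keys from a map, which is exactly what SOLR-2496 is about:

```json
{
  "add": { "doc": { "id": "1" } },
  "add": { "doc": { "id": "2" } }
}
```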
On Wed, May 4, 2011 at 3:00 PM, Neil Hooey
Erik,
I suspected the same, and set up a test instance to reproduce this. The date
field I used is set up to capture indexing time; in other words, the schema has a
default value of NOW. However, I have reproduced this issue with fields which
do not have defaults too.
On the second one, I did a
Hi guys,
Can I have a field name with a period (.)?
Like in *file.size*
thanks!
[ ]'s
Leonardo da S. Souza
°v° Linux user #375225
/(_)\ http://counter.li.org/
^ ^
Hi,
In the Solr admin full interface query page, the following query with English
becomes a term query according to debug:
title_en_US: (blood red)
<lst name="debug">
  <str name="rawquerystring">title_en_US: (blood red)</str>
  <str name="querystring">title_en_US: (blood red)</str>
  <str name="parsedquery">title_en_US:blood
If it helps, this is the filelist output before and after restarting master on
a sample setup:
Before restarting master:
---
{indexSize=113.82 KB,
indexPath=C:\JavaStuff\Solr\replication\solrhome\master\data\index,
Hi,
I have been using the SolrSpellCheckComponent. One of my requirements is
that if a user types something like add, Solr would return adidas. To
get something like this, I used EdgeNGramFilterFactory and applied it to
the fields that I am indexing. So for adidas I will have something like a,
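An analysis chain along those lines might look like the sketch below (the type name and gram sizes are illustrative). The n-gram filter runs at index time only, so a typed prefix such as "ad" matches the stored grams of "adidas" without being n-grammed itself:

```xml
<fieldType name="text_prefix" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- indexes a, ad, adi, adid, ... for each token -->
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="25" side="front"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```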
Please see Robert's two solutions (autoGeneratePhraseQueries or PositionFilter)
http://search-lucene.com/m/imED32mqqyp1/
--- On Thu, 5/5/11, cyang2010 ysxsu...@hotmail.com wrote:
From: cyang2010 ysxsu...@hotmail.com
Subject: why query chinese character with bracket become phrase query by
Kapil Chhabra indicates on his blog that if you boost a value in a
multivalued field during index time, the boosts are consolidated for
every field, and the individual values are lost.
Here's the link:
http://blog.kapilchhabra.com/2008/01/solr-index-time-boost-facts-2
This post is from
: That sounds quite reasonable indeed. But i don't understand why Solr doesn't
: throw an exception when i actually index a string in a long fieldType while i
: do remember getting some number formatting exception when pushing strings to
: an integer fieldType.
:
: With the current set up i
Hi Neil,
I think payloads is the way to go. Index-time boosting is not per term.
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
- Original Message
From: Neil Hooey nho...@gmail.com
To:
If I have a document with:
{ "id": 1, "sentences": "hello world|5.0_goodbye|2.3_this is a sentence|2.8" }
How would I get those payloads to take effect on the tokens separated by
_?
How do you write a query to use those payloads?
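On the indexing side, an analyzer could split on _ and peel off the |-delimited weights as payloads, roughly like the sketch below (type name illustrative). Actually using the payloads in scoring is the hard part: it needs a custom query parser and similarity, which stock Solr of this vintage does not ship.

```xml
<fieldType name="payloads" class="solr.TextField">
  <analyzer>
    <!-- each _-separated chunk becomes one token, e.g. "hello world|5.0" -->
    <tokenizer class="solr.PatternTokenizerFactory" pattern="_"/>
    <!-- strips "|5.0" from the token text and stores 5.0 as a float payload -->
    <filter class="solr.DelimitedPayloadTokenFilterFactory" delimiter="|" encoder="float"/>
  </analyzer>
</fieldType>
```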
On Wed, May 4, 2011 at 22:26, Otis Gospodnetic
okay... let me make the situation more clear... I am trying to create a
universal field which includes information about users like firstname,
surname, gender, location etc. When I enter something e.g London, I would
like to match any users having 'London' in any field firstname, surname or
Hello,
I'm using solr version 1.4.0 with tomcat 6. I've 2 solr instances running as
2 different web apps with separate data folders. My application requires
frequent commits from multiple clients. I've noticed that when more than one
client tries to commit at the same time, these
Thank you so much for this gem, David!
I still don't manage to build though:
$ svn update
At revision 1099684.
$ mvn clean
$ mvn -N -Pbootstrap install
[INFO]
[INFO] BUILD FAILURE
[INFO]