I don't have time to verify this now, but the RichDocumentHandler does
not have a separate contrib directory and I don't think the
RichDocumentHandler patch makes a jar particular to the handler;
instead, the java files get dumped in the main solr tree
(java/org/apache/solr), and therefore they ge
The moment the current patch is tested it will be checked in.
On Thu, Dec 11, 2008 at 8:33 PM, Jeff Newburn wrote:
> Thank you for the quick response. I will keep an eye on that to see how it
> progresses.
>
>
> On 12/10/08 8:03 PM, "Noble Paul നോബിള് नोब्ळ्"
> wrote:
>
>> This is a known is
The error message is saying "undefined field color"
Is that field defined in your schema? If not, you need to define it,
or map the color field to another field during import.
-Yonik
On Thu, Dec 11, 2008 at 11:37 PM, phil cryer wrote:
> I can't import csv files into Solr - I've gone through th
I can't import csv files into Solr - I've gone through the wiki and
all the examples online, but I've hit the same error - what am I'm
doing wrong?
curl 'http://localhost:8080/solr/update/csv?commit=true' --data-binary
@green.csv -H 'Content-type:text/plain; charset=utf-8' -s -u
solrAdmin:solrAdmi
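For reference, one way to resolve the "undefined field color" error Yonik describes is to declare the field in schema.xml. This is only a sketch; the "string" type here is an assumption, so use whatever field type fits the data:

```xml
<!-- schema.xml: declare the "color" column the CSV supplies.
     The "string" type is an assumption; pick the type that fits. -->
<field name="color" type="string" indexed="true" stored="true"/>
```

Alternatively, the CSV handler's fieldnames parameter can override the header row so the column maps onto a field that already exists in the schema.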
How big is your index? There is a variant of the Lucene disk accessors in
the Lucene contrib area. It stores all of the index data directly in POJOs
(Java objects) and does not marshal them into a disk-saveable format. The
indexes are understandably larger, but all data added is automatically
commi
Have you tried just checking out (or exporting) the source from SVN
and applying the patch? Works fine for me that way.
$ svn co http://svn.apache.org/repos/asf/lucene/solr/tags/release-1.3.0 solr-1.3.0
$ cd solr-1.3.0 ; patch -p0 < ~/Downloads/collapsing-patch-to-1.3.0-ivan_2.patch
Doug
Oleg,
The reliable formula is situation-specific, I think. One sure way to decrease
the warm time is to minimize the number of items to copy from old caches to new
caches on warmup.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: oleg_gn
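A sketch of the cache tuning Otis describes, in solrconfig.xml terms (the sizes and counts below are illustrative, not recommendations): smaller autowarmCount values copy fewer entries from the old caches into the new searcher's caches, which shortens warm time at the cost of more cold cache misses.

```xml
<!-- solrconfig.xml: lower autowarmCount to shorten warmup.
     All numbers here are illustrative. -->
<filterCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="64"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="32"/>
```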
We are still having this problem. I am wondering if it can be fixed with
autowarm settings. Is there a reliable formula for determining the autowarm
settings?
--
View this message in context:
http://www.nabble.com/Query-Performance-while-updating-the-index-tp20452835p20968516.html
Sent from the Solr - User mailing list archive at Nabble.com.
I am also a Mac user. 10.5.5. I generally compile on OS X then
upload to Debian (Debian's java just isn't as friendly to me).
Perhaps try this old Mac adage - have you repaired permissions? :-)
If you want I can send you a tar ball of the patched code off-list so
you can move on.
--
Ste
Hello,
I am making a query to my Solr server in which I would like to have a number
of fields returned, with highlighting if available. I've noticed that in the
query response, I get back both the original field name and then in a
different section, the highlighted snippet. I am wondering if there
On Dec 10, 2008, at 10:21 PM, Jacob Singh wrote:
Hey folks,
I'm looking at implementing ExtractingRequestHandler in the
Apache_Solr_PHP
library, and I'm wondering what we can do about adding meta-data.
I saw the docs, which suggest you use different post headers to
pass field
values alo
http://wiki.apache.org/solr/SolrWebSphere
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: Alexander Ramos Jardim
> To: solr-user@lucene.apache.org
> Sent: Thursday, December 11, 2008 1:30:03 PM
> Subject: Re: Exception running Solr in Weblo
We are currently using Apache, SOLR and Java to populate the lucene/solr index.
We need someone to upgrade our SOLR to the newest version, review our schema
(solr) and solr config to performance tune and make suggestions. Also the
schema and solr config most likely contain information we don't n
It was a completely clean install. I downloaded it from one of
mirrors right before applying the patch to it.
Very troubling. Any other suggestions or ideas?
I am running it on Mac OS. Maybe I will try looking for some answers
around that.
-John
On Dec 11, 2008, at 3:05 PM, Stephen Weiss
You can set the home directory in your Tomcat context snippet/file.
http://wiki.apache.org/solr/SolrTomcat#head-7036378fa48b79c0797cc8230a8aa0965412fb2e
This controls where Solr looks for solrconfig.xml and schema.xml. The
solrconfig.xml in turn specifies where to find the data directory.
-
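A minimal sketch of the Tomcat context file this refers to; the paths are assumptions taken from the question in this thread, and the filename conf/Catalina/localhost/solr.xml is the usual convention. The solr/home entry is the directory holding conf/solrconfig.xml and conf/schema.xml:

```xml
<!-- conf/Catalina/localhost/solr.xml -- illustrative paths only. -->
<Context docBase="/opt/tomcat/webapps/solr.war" debug="0" crossContext="true">
  <Environment name="solr/home" type="java.lang.String"
               value="/usr/home/searchengine_files" override="true"/>
</Context>
```

The index location is then set separately inside solrconfig.xml, e.g. `<dataDir>/var/searchengine_data/index</dataDir>`.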
Hey there,
I would like to change the default directory where solr looks for the config
files and index.
Let's say I would like to put:
/opt/tomcat/bin/solr/data/index in /var/searchengine_data/index
and
/opt/tomcat/bin/solr/conf in /usr/home/searchengine_files/conf
Is there any way to do it via
Yes, only ivan patch 2 (and before, only ivan patch 1), my sense was
these patches were meant to be used in isolation (there were no notes
saying to apply any other patches first).
Are you using patches for any other purpose (non-SOLR-236)? Maybe you
need to apply this one first, then thos
thanks for the advice.
I just downloaded a completely clean version, haven't even tried to
build it yet.
Applied the same, and I received exactly the same results.
Do you only apply the ivan patch 2? What version of patch are you
running?
-John
On Dec 11, 2008, at 2:10 PM, Stephen Weis
Also, if you are using solr 1.3, solr 1.4 will reopen readers rather
than open them again. This means only changed segments have to be
reloaded. If you turn off all the caches and use a bit higher merge
factor, maybe a low max merge docs, you can probably get things a lot
quicker. There will still
It sounds like you need real-time search, where documents are
available in the next query. Solr doesn't do that.
That is a pretty rare feature and must be designed in at the start.
The usual workaround is to have a main index plus a small delta
index and search both. Deletes have to be handled se
Thanks Doug, removing "query" definitely helped. I just switched to
Ivan's new patch (which definitely helped a lot - no SEVERE errors now
- thanks Ivan!) but I'm still struggling with faceting myself.
Basically, I can tell that faceting is happening after the collapse -
because the facet
http://wiki.apache.org/solr/SolrWebSphere
On Fri, Dec 12, 2008 at 12:00 AM, Alexander Ramos Jardim <
alexander.ramos.jar...@gmail.com> wrote:
> Can't find it on the wiki. Could you put the url here?
>
> 2008/12/11 Otis Gospodnetic
>
> > I think somebody just put up a page about Solr and WebLogic
We commit immediately after each and every document submit. I think we have to
because we want to immediately retrieve a count on the number of documents of
that type, including the one that we just submitted. And my understanding is
that if we don't commit immediately, the new document will
Are you sure you have a clean copy of the source? Every time I've
applied his patch I grab a fresh copy of the tarball and run the exact
same command; it always works for me.
Now, whether the collapsing actually works is a different matter...
--
Steve
On Dec 11, 2008, at 1:29 PM, John Mart
chip correra wrote:
We’re using Solr as a backend indexer/search engine to support an AJAX
based consumer application. Basically, when users of our system create
“Documents” in our product, we commit right away, because we want to
immediately re-query and get counts back from Solr to
I had a similar problem and I solved it by making the directory a
multi-valued field in the index and giving each directory a unique id. So
for example, a document in directory 2 would contain in the index: "dir_id:A
dir_id:B dir_id:2". A search on any of those fields will then return
directory 2.
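A minimal sketch of the schema side of this workaround (field name and type are assumptions):

```xml
<!-- schema.xml: a multi-valued field holding the document's own
     directory id plus every ancestor id, so one term query matches
     a whole directory sub-tree. -->
<field name="dir_id" type="string" indexed="true" stored="true" multiValued="true"/>
```

A query like q=dir_id:B then matches every document anywhere under directory B, including the directory 2 example above.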
We’re using Solr as a backend indexer/search engine to support an AJAX
based consumer application. Basically, when users of our system create
“Documents” in our product, we commit right away, because we want to
immediately re-query and get counts back from Solr to update the user’s
i
I use this workaround all the time.
When I need to put the hierarchy to which a product belongs, I simply arrange
all the nodes as: "a ^ b ^ c ^ d"
2008/12/11 Otis Gospodnetic
> This is what Hoss was hinting at yesterday (or was that on the Lucene
> list?). You can do that if you encode the hiera
Hi,
I am trying to apply Ivan's field collapsing patch to solr 1.3 (not a
nightly), and it continuously fails. I am using the following command:
patch -p0 -i collapsing-patch-to-1.3.0-ivan_2.patch --dry-run
I am in the apache-solr directory, and have read write for all files
directories and
This is what Hoss was hinting at yesterday (or was that on the Lucene list?).
You can do that if you encode the hierarchy in a field properly, e.g. "/A /B
/1" may be one doc's field. "/A /B /2" may be another doc's field. Then you
just have to figure out how to query that to get a sub-tree.
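One way to make the sub-tree query straightforward is to index materialized ancestor paths rather than bare level tokens — a variation on the encoding Otis sketches. The field name category_path is an assumption:

```xml
<!-- Each document carries its own node plus every ancestor path as
     separate whitespace-separated tokens, so a single term query
     selects an entire sub-tree. -->
<doc>
  <field name="category_path">/A /A/B /A/B/1</field>
</doc>
```

With a whitespace-tokenized field, a query like q=category_path:/A/B matches this document and everything else filed under /A/B.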
Can't find it on the wiki. Could you put the url here?
2008/12/11 Otis Gospodnetic
> I think somebody just put a page about Solr and WebLogic up on the Solr
> Wiki...
>
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
> > From: Alex
I think somebody just put a page about Solr and WebLogic up on the Solr
Wiki...
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: Alexander Ramos Jardim
> To: solr-user@lucene.apache.org
> Sent: Thursday, December 11, 2008 1:01:16 PM
> S
Guys,
I keep getting this exception from time to time on my application when it
communicates with Solr. Does anyone know if Solr tries to write headers
after the response has been sent?
<[ACTIVE] ExecuteThread: '12' for queue: 'weblogic.kernel.Default
(self-tuning)'> <> <> <> <1229015137
I have discovered some weirdness with our Minimum Match functionality.
Essentially it comes up with absolutely no results on certain queries.
Basically, searches with 2 words and 1 being "the" don't have a return
result. From what we can gather the minimum match criteria is making it
such that if
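For reference, the dismax mm (minimum-should-match) parameter drives this behavior. A common source of zero-result queries is stopword removal happening in some of the qf fields but not others, so a term like "the" still counts toward mm yet can never match; aligning the stopword filters across the qf fields is the usual fix. An illustrative setting (the values are assumptions, not a recommendation):

```xml
<!-- solrconfig.xml, inside the dismax handler defaults: require all
     terms for queries of up to 2 words, 75% of terms for longer ones. -->
<str name="mm">2&lt;75%</str>
```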
Yonik
>Another thought I just had - do you have autocommit enabled?
>
No; not as far as I know!
The solrconfig.xml from the two versions are equivalent as best I can tell,
also they are exactly as provided in the download. The only changes were
made by the attached script and should not affect co
In an attempt to keep my current solr-ruby work manageable, I've created a
temp repo on my github account and put everything there:
http://github.com/mwmitchell/solr-ruby/tree/master
I'll be fleshing out the wiki in the next few days:
http://github.com/mwmitchell/solr-ruby/wikis with examples.
Mo
Thank you for the quick response. I will keep an eye on that to see how it
progresses.
On 12/10/08 8:03 PM, "Noble Paul നോബിള് नोब्ळ्" <[EMAIL PROTECTED]>
wrote:
> This is a known issue and I was planning to take it up soon.
> https://issues.apache.org/jira/browse/SOLR-821
>
>
> On Thu, Dec
My mistake, I saw the maven directories and did not see the build.xml
in the src directory, so I just assumed... My bad.
Anyway, it built successfully, thanks.
Now to apply the field collapsing patch.
-John
On Dec 11, 2008, at 8:46 AM, Noble Paul നോബിള്
नोब्ळ् wrote:
Solr uses ant for build
ins
I can help a bit with <2>...
First, keep in mind the difference between index and query
time boosting:
>From Hossman:
..Index time field boosts are a way to express things like
"this document's title is worth twice as much as the title of most documents"
query time boosts are a way to express "i ca
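A hedged example of the query-time side of this distinction; the field names and the boost factor are assumptions:

```shell
# Query-time boost: weight title matches 4x relative to body matches.
curl 'http://localhost:8983/solr/select?q=title:(solr+faceting)^4+body:(solr+faceting)'
```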
Hi,
Around 50 threads/sec the requests bring back "No read Solr server
available"; the gc seems to be quite full, but I didn't get an OOM error.
Would love some advice.
Thanks a lot
Details :
8G of memory
4 CPU: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz
Solr 1.3
# Arguments to pass to the J
On Thu, Dec 11, 2008 at 5:56 PM, sunnyfr <[EMAIL PROTECTED]> wrote:
>
> So according to you and everything explained in my post, I did my best to
> optimize it ?
> Yes it's unique queries. I will try it again and activate cache.
>
If you run unique queries then it is not a very realistic test. Tu
Take a look at FunctionQuery support in Solr:
http://wiki.apache.org/solr/FunctionQuery
http://wiki.apache.org/solr/SolrRelevancyFAQ#head-b1b1cdedcb9cd9bfd9c994709b4d7e540359b1fd
On Thu, Dec 11, 2008 at 7:01 PM, Pooja Verlani <[EMAIL PROTECTED]>wrote:
> Hi all,
>
> I have a specific requirement
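A sketch of the recency boost the FunctionQuery links describe, based on the example in the SolrRelevancyFAQ; the field name creationDate comes from the question, and the constants are illustrative:

```shell
# Boost recent documents with a function query on the date field.
# rord() (reverse ordinal) is the usual Solr 1.3 recency trick.
curl 'http://localhost:8983/solr/select?q=ipod&defType=dismax&bf=recip(rord(creationDate),1,1000,1000)'
```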
Solr uses ant for build
install ant
On Thu, Dec 11, 2008 at 7:13 PM, John Martyniak <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have downloaded Maven 2.0.9, and tried to build using "mvn clean install"
> and "mvn install", nothing works.
>
> Can somebody tell me how to build solr from source? I am try
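Assuming Ant is installed, the build boils down to running a target from the source root; the targets below are the common ones in Solr's build.xml, though they may vary by version:

```shell
cd apache-solr-1.3.0
ant compile   # compile the core
ant dist      # build the war and jars under dist/
ant example   # build and set up the bundled Jetty example
```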
Hi,
I have downloaded Maven 2.0.9, and tried to build using "mvn clean
install" and "mvn install", nothing works.
Can somebody tell me how to build solr from source? I am trying to
build the 1.3 source.
thank you very much,
-John
Actually I still have this error : " No read Solr server available "
sunnyfr wrote:
>
> Ok sorry, I just added the parameter -XX:+UseParallelGC and it seems not to
> go OOM.
>
>
>
>
> sunnyfr wrote:
>>
>> Actually I just noticed a lot of requests didn't bring back the correct answer,
>> but " No
Hi all,
I have a specific requirement for query time boosting.
I have to boost a field on the basis of the value returned from one of the
fields of the document.
Basically, I have the creationDate for a document and in order to introduce
a recency factor in the search, I need to give a boost to the
Ok sorry, I just added the parameter -XX:+UseParallelGC and it seems not to go
OOM.
sunnyfr wrote:
>
> Actually I just noticed a lot of requests didn't bring back the correct
> answer, but "No read Solr server available", so my jmeter didn't take that
> for an error. Obviously out of memory, and a f
Actually I just noticed a lot of requests didn't bring back the correct answer,
but "No read Solr server available", so my jmeter didn't take that for an error.
Obviously out of memory, and a gc.log file is created with:
0.054: [GC [PSYoungGen: 5121K->256K(298688K)] 5121K->256K(981376K),
0.0020630 secs
Hi Otis,
Thanks for the info and help. I started reading up about it (on
Markmail, nice site), and it looks like there is some activity to put
it into 1.4. I will try and apply the patch, and see how that works.
It seems like a couple of people are using it in a production
environment
So according to you and everything explained in my post, I did my best to
optimize it?
Yes, they're unique queries. I will try it again and activate the cache.
What do you mean by hit the file system?
thanks a lot
Shalin Shekhar Mangar wrote:
>
> Are each of those queries unique?
>
> First time que
Are each of those queries unique?
First time queries are slower. They are cached by Solr and the same query
again will return results very quickly because it won't need to hit the file
system.
On Thu, Dec 11, 2008 at 4:08 PM, sunnyfr <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I'm doing a stress test
Hi,
ayyanar wrote:
>
> Thanks Rob. Can you please provide some sample documents (Lucene) for
> title-based boosting?
>
I'm not sure about the Lucene part (this is the Solr mailing list after
all), but if you want index time boosting of certain fields, you have to add
documents like this:
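A representative sketch of such a document in Solr's XML update format; the field names and the boost value are illustrative:

```xml
<!-- Index-time field boost: the boost attribute on the field
     element weights this document's title at add time. -->
<add>
  <doc>
    <field name="title" boost="2.0">A title worth twice as much</field>
    <field name="body">ordinary body text</field>
  </doc>
</add>
```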
Hi,
I'm doing a stress test on solr.
I've around 8,5M of doc, the size of my data's directory is 5,6G.
I've re-indexed my data to make it faster, and applied all the latest
patches.
My index stores just two fields: id and text (which is a copy of three
fields)
But I still think it's very long
Hi,
Any plans of supporting user-defined classifications on Solr? Is there
any component which returns all the children of a node (till the leaf
node) when I search for any node?
May be this would help:
Say I have a few SolrDocuments classified as:
A
Thanks Rob. Can you please provide some sample documents (Lucene) for
title-based boosting?
--
View this message in context:
http://www.nabble.com/Nwebie-Question-on-boosting-tp20950286p20952532.html
Sent from the Solr - User mailing list archive at Nabble.com.