I didn't expect this answer.
Thank you very much for your reply, Gora.
I had already seen that sample after completing my first stage of self-research;
only then did I post this. I clearly know that there is an example in
Solr about RSS, and in the example they used the url
On Fri, Sep 23, 2011 at 11:50 AM, nagarjuna nagarjuna.avul...@gmail.com wrote:
Hey guys,
Very new to solr. I'm using the data import handler to pull customer data
out of my database and index it. All works great so far. Now I'm trying to
query against a specific field and I seem to be struggling with doing a
wildcard search. See below.
I have several indexed documents
Yes Gora, I set up an RSS feed for my blog, and I have the following URL for the
RSS feed:
http://nagarjunaavula.blogspot.com/feeds/posts/default?alt=rss
You can check this URL. Then how do I use this URL in my Solr application?
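The DataImportHandler RSS example mentioned above can be adapted to this feed; a sketch based on the Solr 3.x example-DIH rss config (the field names and the schema they map to are assumptions):

```xml
<dataConfig>
  <dataSource type="HttpDataSource"/>
  <document>
    <entity name="blog" pk="link"
            url="http://nagarjunaavula.blogspot.com/feeds/posts/default?alt=rss"
            processor="XPathEntityProcessor"
            forEach="/rss/channel/item">
      <!-- each column must correspond to a field defined in schema.xml -->
      <field column="title" xpath="/rss/channel/item/title"/>
      <field column="link" xpath="/rss/channel/item/link"/>
      <field column="description" xpath="/rss/channel/item/description"/>
    </entity>
  </document>
</dataConfig>
```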
Thank you very much! It is working.
Regards
On Wed, Sep 14, 2011 at 4:14 PM, Juan Grande juan.gra...@gmail.com wrote:
Hi Ahmad,
While Solr is starting it writes the path to SOLR_HOME to the log. The
message looks something like:
Sep 14, 2011 9:14:53 AM
Hi All!
We are working with Solr for the first time and have a simple data model:
Entity Person (column surname) has 1:n Attribute (column name) has 1:n
Value (column text)
We need faceted search on the values belonging to an Attribute, not on Attribute:name
itself, e.g. if an Attribute of a person has name=hobby,
Hi all
I sent my data from Nutch to Solr for indexing and searching. Now I want to
delete all of the indexed data sent from Nutch. Can anyone help me?
thanks
Thanks for helping me so far,
Yes, I have seen the EdgeNGrams possibility. Correct me if I'm wrong, but I
thought it isn't possible to do infix searches with EdgeNGrams? Like "chest"
giving the suggestion "manchester".
Hi, I suppose that this isn't what you mean, but I leave it here because
it could help you.
Is this what you need?
Using SolrJ, I delete all the rows of the index with this command:
solr.deleteByQuery("id:*");
solr.commit();
But you need to delete only the rows inserted from Nutch; maybe this helps
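If only the Nutch documents should go, a delete-by-query restricted to a field that only Nutch populates is one option. A sketch as a Solr XML update message (the `segment` field name is an assumption based on the stock Nutch schema):

```xml
<delete>
  <!-- "segment" is assumed to be a field only Nutch-indexed docs carry -->
  <query>segment:[* TO *]</query>
</delete>
```

Post it to the `/update` handler and follow with `<commit/>` to make the deletion visible.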
You've got CommonGramsFilterFactory and StopFilterFactory both using
stopwords.txt, which is a confusing configuration. Normally you'd want one
or the other, not both ... but if you did legitimately have both, you'd want
them to each use a different wordlist.
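For the legitimate two-wordlist case, the analyzer chain might look like this (the file names are illustrative):

```xml
<analyzer>
  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  <!-- builds bigrams for very common words, using its own list -->
  <filter class="solr.CommonGramsFilterFactory" words="commongrams.txt" ignoreCase="true"/>
  <!-- removes stopwords from a separate list -->
  <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
</analyzer>
```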
Maybe I am wrong. But my
Thanks Otis,
I am able to show the results such that the last match (500 characters around
the match) in the log file is shown highlighted. I can try creating multiple
documents from one log file to see if it improves the performance.
Can anything else be done to reduce the heap size?
Anand
I'm using EdgeNGrams to do the same thing rather than wildcard searches.
More info here:
http://www.lucidimagination.com/blog/2009/09/08/auto-suggest-from-popular-queries-using-edgengrams/
Make sure your search phrase is enclosed in quotes as well, so it's
treated as a phrase rather than 2
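A typical edge-ngram field type for this, sketched from the approach in that blog post (the type name and gram sizes are arbitrary):

```xml
<fieldType name="autocomplete" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- index prefixes so "manch" matches "manchester" -->
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="25"/>
  </analyzer>
  <analyzer type="query">
    <!-- do NOT ngram the query side; match the typed prefix as-is -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```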
Hi Solr users!
I'd like to search my client database; in particular, I need
to find clients by their address (e.g. 100 avenue des champs élysée).
Does anyone know a good fieldType to store my addresses in, to let me
search clients by address easily?
thank you all
On
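One field type worth sketching for accented street names like these (the type name is illustrative):

```xml
<fieldType name="text_address" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- folds accents so "élysée" also matches a query for "elysee" -->
    <filter class="solr.ASCIIFoldingFilterFactory"/>
  </analyzer>
</fieldType>
```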
The regex fragmenter showed that there was enough content to show multiple
snippets.
The amount of snippets has no effect on any of the types of breakIterator.
Only fragsize has effect.
Or is this highlighter not supporting multiple snippets?
(11/09/23 20:03), O. Klein wrote:
This
OK, I found the problem was in our new interface.
Your feedback made me look deeper. Thanx.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Snippets-and-Boundaryscanner-in-Highlighter-tp3358898p3361571.html
Sent from the Solr - User mailing list archive at Nabble.com.
Erik
I tried your solution, but it still does not open the files in the Solr results. I
am pasting my files; take a look to see if something can be corrected:
data-config.xml:
<dataConfig>
  <dataSource type="BinFileDataSource" name="bin"/>
  <document>
    <entity name="f" processor="FileListEntityProcessor"
Hi,
OK, if SOLR-2403, being related to the bug I described, has been fixed in
Solr 3.4, then we are safe, since we are in the process of migration. Is it
possible to verify this somehow? Is the FacetComponent class the one I should
start checking from? Can you give any other pointers?
OK, for
On Sun, Sep 18, 2011 at 11:47 AM, abhayd ajdabhol...@hotmail.com wrote:
hi gora,
The query works, and if I remove the XML data load, indexing works fine too.
The problem seems to be with this:
<entity name="f" processor="FileListEntityProcessor"
        baseDir="${solr.solr.home}" fileName=".xml"
I have a java program which sends thousands of Solr XML files up to Solr
using the following code. It works fine until there is a problem with one of
the Solr XML files. The code fails on the solrServer.request(up) line, but
it does not throw an exception, my application therefore cannot catch
On Sat, Sep 3, 2011 at 1:29 AM, Chris Hostetter hossman_luc...@fucit.orgwrote:
: I am not sure if current version has this, but DIH used to reload
: connections after some idle time
:
: if (currTime - connLastUsed > CONN_TIME_OUT) {
: synchronized (this) {
:
On 9/23/2011 1:45 AM, Pranav Prakash wrote:
Maybe I am wrong. But my intention in using both of them is: first, I
want to use phrase queries, so I used CommonGramsFilterFactory. Secondly,
I don't want those stopwords in my index, so I have used
StopFilterFactory to remove them.
CommonGrams is
All the solr methods look like they should throw those 2 exceptions.
Have you tried the DirectXmlRequest method?
up.process(solrServer);

public UpdateResponse process(SolrServer server)
    throws SolrServerException, IOException
{
    long startTime = System.currentTimeMillis();
Roy,
Use something other than Nabble or quote previous email to help people keep
track of what your problem is/was about.
Yes, with edge ngrams you won't be able to do infix searches but are you
sure you want that? People typically don't miss/skip the beginning of a word...
Otis
Hi Ahmad,
Ah, that's a FAQ! :)
http://search-lucene.com/?q=delete+all+documents&fc_project=Solr&fc_type=wiki
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
- Original Message -
From: ahmad ajiloo
hi
I am not getting the exception anymore; I had an issue with the database.
But now the real problem I always have:
now that I can fetch IDs from the database, how would I fetch the corresponding
data for an ID in the XML file?
So after getting the DB info from the jdbc source I use the XPath processor like this,
but it does not
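A nested-entity sketch of that setup, with the inner XPathEntityProcessor reading one file per database row (the file path pattern, data source names, and XPaths are assumptions):

```xml
<entity name="ids" dataSource="jdbc" query="select id from topic_tree">
  <!-- one XML file per id; adjust the url pattern to the real layout -->
  <entity name="x" dataSource="files" processor="XPathEntityProcessor"
          url="/data/xml/${ids.id}.xml" forEach="/record">
    <field column="title" xpath="/record/title"/>
  </entity>
</entity>
```

This needs a `FileDataSource` registered under the name `files` alongside the JDBC data source.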
Hi Roland,
I did this:
http://search-lucene.com/?q=sort+by+function&fc_project=Solr&fc_type=wiki
Which took me to this:
http://wiki.apache.org/solr/FunctionQuery#Sort_By_Function
And further on that page you'll find strdist function documented:
http://wiki.apache.org/solr/FunctionQuery#strdist
Nicolas,
A text or ngram field should do it.
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
- Original Message -
From: Nicolas Martin nmar...@doyousoft.com
To: solr-user@lucene.apache.org
Cc:
Sent: Friday,
hi
My requirement is:
I have a list of popular search terms in a database
searchterm | count
---
mango | 100
Consider I have only one term in that table, mango. I use EdgeNGram and put
that in an auto_complete field in the Solr index with the count.
If the user starts typing "m" I will show
yes it is possible
http://www.medihack.org/2011/03/01/autocompletion-autosuggestion-using-solr/
Since I'm looking into autosuggest, I came across that info while doing
research.
I tried that with the same results. You would think I would get the
exception back from Solr so I could trap it, instead I lose all other
requests after it.
On Fri, Sep 23, 2011 at 8:33 AM, Gunther, Andrew gunth...@si.edu wrote:
All the solr methods look like they should throw those 2
This seems to be out of date. I am running Solr 3.4
* the file structure of apachehome/contrib is different and I don't see
velocity anywhere underneath
* the page referenced below only talks about Solr 1.4 and 4.0
On Thu, Sep 22, 2011 at 19:51, Markus Jelsma markus.jel...@openindex.iowrote:
Hi
I have indexed some 1M documents, just for performance testing. I have written
a query parser plugin; when I add it in the Solr lib folder under the tomcat webapps
folder and try to load the Solr admin page, it keeps on loading, and when I delete
the jar file of the query parser plugin from lib it works fine. But
ok, answered my own question: found the velocity response writer in solrconfig.xml. Next
question:
where does velocity look for its templates?
-
Subscribe to the Nimble Books Mailing List http://eepurl.com/czS- for
monthly updates
On Fri, Sep 23, 2011 at
Thanks Rahul.
Are you using 3.3 or 3.4? I'm on 3.3 right now
I will try the patch today
Thanks again,
Maria
-Original Message-
From: Rahul Warawdekar [mailto:rahul.warawde...@gmail.com]
Sent: Thursday, September 22, 2011 12:46 PM
To: solr-user@lucene.apache.org
Subject: Re:
Just another point worth mentioning here, though it's related to Nutch and
not Solr:
if you want to re-crawl and try to get new data into the index, you have to
remove data from the crawl folder (default for Nutch) of Nutch too. Only
then will you get freshly crawled data (not to be confused with
Few thoughts:
1) If you place the script transformer method on the entity named x
and then pass the ${topic_tree.topic_id} to that as an argument, then
shouldn't you have everything you need to work with x's row? Even if
you can't look up at the parent, all you needed to know was the
topic_id and
I am using Solr 3.1.
But you can surely try the patch with 3.3.
On Fri, Sep 23, 2011 at 1:35 PM, Vazquez, Maria (STM)
maria.vazq...@dexone.com wrote:
Hi,
In working through some updates for the Solr Size Estimator, I have
found a number of gaps in the Solr Wiki. I've Google'd to a fair degree
on each of these and either found nothing or an insufficient explanation.
In particular, for each of the following I'm looking for:
A) An
conf/velocity by default. See Solr's example configuration.
Erik
On Sep 23, 2011, at 12:37, Fred Zimmerman w...@nimblebooks.com wrote:
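For reference, the wiring in the Solr 3.x example solrconfig.xml looks roughly like this (the parameter values are the example's defaults, not requirements):

```xml
<queryResponseWriter name="velocity" class="solr.VelocityResponseWriter"/>

<requestHandler name="/browse" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="wt">velocity</str>
    <!-- template files are resolved relative to conf/velocity -->
    <str name="v.template">browse</str>
    <str name="v.layout">layout</str>
  </lst>
</requestHandler>
```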
I tried the patch
(https://issues.apache.org/jira/secure/attachment/12481497/SOLR-2233-001.patch)
And now I get these errors. Am I doing something wrong? Using MS SQL
Server
23 Sep 2011 12:26:14,418
[org.apache.solr.handler.dataimport.ThreadedEntityProcessorWrapper]
Exception in entity :
When I create a query like something&fl=content in solr/browse, the & and
= in the URL are converted to %26 and %3D and no results occur, but it works in
solr/admin advanced search and also directly in the URL bar. How can I solve
this problem? Thanks
On Sep 23, 2011, at 2:03pm, hadi wrote:
I have two cores with separate schemas and indexes, but I want to have a single
result set in solr/browse,
If they have different schemas, how would you combine results from the two?
If they have the same schemas, then you can define a third core with a
I index my files with SolrJ and crawl my sites with Nutch 1.3. As you
know, I have to overwrite the Nutch schema on the Solr schema in order to
view the results in solr/browse; in this case I should define two
cores, but I want a single result, or the user should be able to search both
core indexes at the
Hi all,
I'd like to know what the specific disadvantages are of using dynamic
fields in my schema. About half of my fields are dynamic, but I could
move all of them to be static fields. Will my searches run faster? If there
are no disadvantages, can I just set all my fields to be dynamic?
On 9/23/2011 6:00 PM, hadi wrote:
The first thing I'd try is just tweaking the Xmx parameter on the invocation,
java -Xmx2048M -jar start.jar
Second option: Play with your autocommit options in solrconfig.xml
and lower it substantially, although I'm not quite sure how DIH interacts
with that.
Gotta rush, so sorry this is so
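The autocommit block being referred to lives in solrconfig.xml's update handler; a sketch with lowered thresholds (the numbers are arbitrary starting points, not recommendations):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <!-- commit after this many docs or this many milliseconds, whichever comes first -->
    <maxDocs>10000</maxDocs>
    <maxTime>60000</maxTime>
  </autoCommit>
</updateHandler>
```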
If relevance ranking is working well, in theory it doesn't matter how many hits
you get as long as the best results show up in the first page of results.
However, the default in choosing which facet values to show is to show the
facets with the highest count in the entire result set. Is there
hi
thanks for details. I will look into xsl suggestion.
Any idea how I would send a parameter to the script?
As I understand it, that's the syntax for the script transformer:
<entity name="e" pk="id" transformer="script:f1" query="select * from
table1"
--
View this message in context: