I have some unclear behaviour with using clean and
pre/postImportDeleteQuery for delta-imports. The docs under
http://wiki.apache.org/solr/DataImportHandler#Configuration_in_data-config.xml
are not clear enough.
My observation is:
- preImportDeleteQuery is only executed if clean=true is set
-
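For reference, a minimal data-config.xml sketch of the setup being discussed (entity, table, column and query names are made up for illustration; the observation above is that preImportDeleteQuery on the root entity is only honored when clean=true):

```xml
<document>
  <!-- preImportDeleteQuery runs before the import, but only when clean=true -->
  <entity name="item" pk="id"
          query="SELECT id, name FROM item"
          deltaQuery="SELECT id FROM item
                      WHERE updated &gt; '${dataimporter.last_index_time}'"
          preImportDeleteQuery="deleted:true"/>
</document>
```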
Hey Prad,
have you already had a look at your MySQL query log to check whether the relevant
select query is executed, and if so, what results it returned?
Regards
Stefan
On Sat, Jan 29, 2011 at 12:50 AM, makeyourrules <makeyourru...@gmail.com> wrote:
Hello,
I am trying to delete some records from my index with
On Fri, Jan 28, 2011 at 1:30 AM, Jianbin Dai j...@huawei.com wrote:
Hi,
Do we have a data import handler to quickly read in data from a noSQL database,
specifically MongoDB, which I am thinking of using?
Or a more general question, how does Solr work with noSQL database?
Can't say anything about
This is the log trace..
2011-01-31 10:07:18,837 ERROR (main)[SearchBusinessControllerImpl] Solr
connecting to url: http://10.145.10.154:8081/solr
2011-01-31 10:07:18,873 DEBUG (main)[DefaultHttpParams] Set parameter
http.useragent = Jakarta Commons-HttpClient/3.1
2011-01-31 10:07:18,880 DEBUG
Solr has HTTP caching enabled by default. Try clearing the cache before querying;
Shift+Refresh (F5) may clear it.
Because of the cache, old results may still be displayed after the index has
been changed.
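If stale cached results are a recurring problem during development, HTTP caching can also be switched off in solrconfig.xml; a sketch (never304 tells Solr not to send cache headers or 304 responses at all):

```xml
<!-- In solrconfig.xml: disable HTTP cache headers and 304 handling -->
<httpCaching never304="true"/>
```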
-
Thanx:
Grijesh
--
View this message in context:
Hi all,
I want to know how to apply patch for extended dismax query parser on solr
1.4.1.
--
Thanks Regards,
Isan Fulia.
Do you know how to apply patches in general? Or is this specifically
about the edismax patch?
Quick response for the general how-to-apply-a-patch question:
1. Get the source code for Solr.
2. Get to the point where you can run 'ant clean test' successfully.
3. Apply the source patch.
4. Execute 'ant dist'.
You
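The mechanics of step 3 can be sketched locally with patch(1); the file name and diff below are purely illustrative, not the actual edismax patch:

```shell
# Create a throwaway file and a unified diff against it, then apply the diff.
dir=$(mktemp -d)
printf 'hello\n' > "$dir/Example.java"
printf -- '--- Example.java\n+++ Example.java\n@@ -1 +1 @@\n-hello\n+hello world\n' > "$dir/fix.patch"
(cd "$dir" && patch -p0 < fix.patch)   # same form as: patch -p0 < SOLR-xxxx.patch
cat "$dir/Example.java"                # now reads: hello world
```

The -p0 flag means the paths in the diff header are used as-is, relative to the current directory; patches generated from the Solr source root are usually applied the same way.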
Hi,
I will give you feedback today. Another issue occurred with our current
Solr installation that I have to fix.
Thanks for your effort!
Regards
specifically for edismax patch
On 31 January 2011 18:22, Erick Erickson erickerick...@gmail.com wrote:
Do you know how to apply patches in general? Or is this specifically
about the edismax patch?
Quick response for the general how to apply a patch question:
1 get the source code for Solr
Have you tried it? What problems are you having?
Please review: http://wiki.apache.org/solr/UsingMailingLists
Erick
On Mon, Jan 31, 2011 at 8:10 AM, Isan Fulia <isan.fu...@germinait.com> wrote:
specifically for edismax patch
On 31 January 2011 18:22, Erick Erickson erickerick...@gmail.com
Hi list,
I am not sure whether this behaviour is intended or not.
I am experimenting with the UpdateRequestProcessor-feature of Solr (V: 1.4)
and something occurred that I find strange.
Well, when I send csv-data to the CSV-UpdateHandler with some fields
specified that are not part of the
Can anyone shed any light on this, and whether it could be a config
issue? I'm now using the latest SVN trunk, which includes the Tika 0.8
jars.
When I send a ZIP file (containing two txt files, doc1.txt and doc2.txt)
to the ExtractingRequestHandler, I get the following log entry
(formatted
What are the advantages of using something like HBase over your standard Lucene
index with Solr? It would seem to me like you'd be losing a lot of what Lucene
has to offer!?!
Adam
On Jan 31, 2011, at 5:34 AM, Steven Noels stev...@outerthought.org wrote:
On Fri, Jan 28, 2011 at 1:30 AM,
(11/01/31 22:20), Em wrote:
Hi list,
I am not sure whether this behaviour is intended or not.
I am experimenting with the UpdateRequestProcessor-feature of Solr (V: 1.4)
and something occurred that I find strange.
Well, when I send csv-data to the CSV-UpdateHandler with some fields
specified
I found the problem.
DIH, or I think the JDBC driver, casts 0 and 1 to boolean if the database
field is of type tinyint(1).
I am using two fields, of type tinyint(1) and tinyint(2).
-
--- System
One Server, 12
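If that tinyint(1)-to-boolean conversion is unwanted, MySQL Connector/J can be told to keep tinyint(1) as an integer via the tinyInt1isBit connection property; a data-config.xml sketch (host, database name and credentials are placeholders):

```xml
<!-- tinyInt1isBit defaults to true, which maps tinyint(1) to boolean -->
<dataSource driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost/mydb?tinyInt1isBit=false"
            user="solr" password="secret"/>
```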
Hi,
I have 2 cores CoreA and CoreB, when updating content on CoreB, I use
solrj and EmbeddedSolrServer to query CoreA for information, however
when I do this with my junit tests (which also use EmbeddedSolrServer
to query) I get this error
SEVERE: Previous SolrRequestInfo was not closed!
Hi Koji,
following is the solrconfig:
<requestHandler name="/update/csv" class="solr.CSVRequestHandler">
  <lst name="defaults">
    <str name="update.processor">throwAway</str>
  </lst>
</requestHandler>
<updateRequestProcessorChain name="throwAway">
  <processor
(11/01/31 23:33), Em wrote:
Hi Koji,
following is the solrconfig:
<requestHandler name="/update/csv" class="solr.CSVRequestHandler">
  <lst name="defaults">
    <str name="update.processor">throwAway</str>
  </lst>
</requestHandler>
<updateRequestProcessorChain name="throwAway">
Well, I would say that the best way to be sure is to benchmark different
configurations.
As far as I know, such a big RAM buffer size is usually not recommended; the
default is 32 MB and you probably won't get any improvement using more than 128
MB. The same goes for the mergeFactor; I know that a larger
Thanks for your reply Stefan; the MySQL log says the query is returning those deleted
records and the Solr log also has the deleted records, but for some reason
they are not actually getting deleted from the index.
[2011/01/28 16:58:00.319] Deleting document: BAAH
[2011/01/28 17:06:50.537] Deleting
I had attached the Analysis report of the query George*
The attachment didn't arrive. But I think you are referring to the output of analysis.jsp.
It can be confusing because it does not do actual query parsing.
Instead you can look at the output of debugQuery=on.
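For example, a request of the following form (host and core path are assumptions) shows how the query string is actually parsed:

```
http://localhost:8983/solr/select?q=George*&debugQuery=on
```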
When I indexed *George* it was also finally
Okay, I added some Logging-Stuff to both the processor and its factory.
It turned out that there IS an updateProcessor returned and it is NOT null.
However, my logging method inside the processAdd method (1st line, so it
HAS to be called if one calls the method at all) never gets called - so the
thanks that helps
--
I have the below configuration. Somehow the KVK field IS indexed and the
varstatement column isn't.
I have tried everything: reloaded schema.xml, reindexed... but somehow the
varstatement column remains 'false' even though I KNOW it is true.
The KVK value IS indexed correctly. What else can it be? I
hi
We already have faceting on our site.
I am loading devices and accessories into the Solr index. deviceType indicates if
it's a device or an accessory.
All other attributes are the same for device and accessory. When query results
come back I would like to display something like
Devices
+Manufacturer (100)
Hello list,
I am attempting to port a plug-in to my Solr implementation and would like to
discuss best practice for doing so. The plug-in relates specifically to the
query submitted through Solr, the idea is to provide some sort of query
'refinement' mechanism relating to a specific domain.
Here is what I found out:
The CSVRequestHandler gets its fields in line 240 and the following
ones. Those fieldnames come from the file's header
or from the specified params in the request.
The CSVRequestHandler calls prepareFields to create an array of
SchemaFields (see line 269) that will be
Has there been any progress on this or tools people might use to capture the
average or 90% time for the last hour?
That would allow us to better match up slowness with other metrics like
CPU/IO/Memory to find bottlenecks in the system.
Thanks,
Ian.
On Wed, Mar 31, 2010 at 9:13 PM, Chris
Hi,
I think I've found the cause:
src/java/org/apache/solr/util/TestHarness.java, query(String handler,
SolrQueryRequest req) calls SolrRequestInfo.setRequestInfo(new
SolrRequestInfo(req, rsp)), which my component also calls in the same
thread hence the error.
The fix was to override assertQ
What is your schema definition for varstatement? Please include
the fieldType as well as the field definition.
How do you expect to convert from your bit type to whatever you've
defined in your schema for varstatement (which is boolean?)?
And lastly, how do you KNOW your actual select statement
I don't think you'll be able to do this with your present schema, the
information
isn't available in the faceting response; you'd get something like
<manufacturers>1100</manufacturers> and no way to know that 1,000
of them were accessories.
You could change the values in your index to something like
Hello,
I have a huge amount of data indexed in Solr and I would like to know the best
way to migrate it.
Would a simple cp of the data directory work?
Thanks you
Vincent Chavelle
Hi,
I'm implementing custom dynamic results filtering to improve fuzzy /
phonetic search support in my search application. I use the
CommonsHttpSolrServer object to connect remotely to Solr. I would like to
be able to index multiple fuzzy / phonetic match encodings, e.g. one of the
packaged
Hi, I am new to Solr and I would like to make a custom search page for enterprise users
in JSP that takes the results from Apache Solr.
- Where can I find some useful examples on that topic?
- Is JSP the correct approach to solve my requirement?
- If not, what is the best
On Mon, 31 Jan 2011 08:40 -0500, Estrada Groups
estrada.adam.gro...@gmail.com wrote:
What are the advantages of using something like HBase over your standard
Lucene index with Solr? It would seem to me like you'd be losing a lot of
what Lucene has to offer!?!
I think Steven is saying that he
Hi John, you can use whatever you want for building your application, using
Solr on the backend (JSP included). You should find all the information you
need on Solr's wiki page:
http://wiki.apache.org/solr/
including some client libraries to easily
integrate your
I copied the whole index from our production box (which was having the
delete issue) and put it on a test server and tried deleting docs and it
works. The only difference between the production server and the test server
is that production server keeps getting select queries from users pretty
much
Tomas,
I also know velocity can be used and works well.
I would be interested in a simpler way to have the Solr objects available in
a JSP than writing a custom JSP processor as a request handler; indeed, this
seems to be the way SolrJ is expected to be used on the wiki page.
Actually I
My current project has the requirement to support search when user inputs any
number of terms across a few index fields (movie title, actor, director).
In order to maximize results, I plan to support all the searches listed in
the subject: phrase, individual term, prefix, fuzzy and stemming.
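As a sketch, the standard Lucene query syntax of that era covers most of those cases directly (field and term names below are made up for illustration):

```
title:"gone with the wind"   -- phrase
title:wind                   -- individual term
actor:spiel*                 -- prefix
director:spilberg~0.7        -- fuzzy (similarity threshold)
```

Stemming, by contrast, is not query syntax at all; it is handled by the analyzer configured on the field type, so it applies automatically at both index and query time.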
Haha, I KNOW that to be very true: I have done everything correctly, it's this
stupid computer that doesn't understand me ;)
Anyway:
<fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"
omitNorms="true"/>
<field name="varstatement" type="boolean" indexed="true" stored="true"/>
The reason I'm
On a very quick test, it looks like every integer value except 1 is
converted to false (I haven't looked at the underlying code, but
this sure makes sense).
So my guess is that what's being sent to Solr isn't what you think,
that is the varstatement you get back is something other than 1. I
have
: Interesting idea. I must investigate if this is a possibility - eg. how often
: will a document be reindexed from one shard to another - this is actually a
: possibility as a consequence of the way we configure our shards :-/
:
: Thanks for the input! I was still hoping for a way to get that
: Well, this does not seem to me like a bug but more like an exotic
: situation where two concepts collide with each other.
: The CSVRequestHandler is intended to sweep all the unnecessary stuff
: out of the input to avoid exceptions for unknown fields
: while my UpdateRequestProcessor needs
: I have successfully created a QueryComponent class that, assuming it
: has the integer bitset, can turn that into the necessary DocSetFilter
: to pass to the searcher, get back the facets, etc. That part all works
...
: What I'm unsure how to do is actually send this compressed bitset
: I am loading devices and accessories in solr index. deviceType indicates if
: its a device or accessory
:
: All other attributes are the same for device and accessory. When query results
: come back I would like to display something like
:
: Devices
: +Manufacturer (100)
: - Samsung (50)
: -
: I have a huge amount of data indexed in Solr and I would like to know the best
: way to migrate it.
: Would a simple cp of the data directory work?
if you don't have any custom components, you can probably just use
your entire solr home dir as is -- just change the solr.war. (you can't
just copy
Hello Shan,
I was able to delete without hanging by making the
following changes to the solrconfig.xml in the mainIndex section and
reloading the core. BTW I am using 1.4.1... Hope you get your deletes working
as well. Let us know if it works for you or if you find any other
On Mon, Jan 31, 2011 at 9:38 PM, Upayavira u...@odoko.co.uk wrote:
On Mon, 31 Jan 2011 08:40 -0500, Estrada Groups
estrada.adam.gro...@gmail.com wrote:
What are the advantages of using something like HBase over your standard
Lucene index with Solr? It would seem to me like you'd be losing
Anyone got a great little script for changing a schema?
i.e., after changing:
database,
the view in the database for data import
the data-config.xml file
the schema.xml file
I BELIEVE that I have to run:
a delete command for the whole index *:*
a full import and optimize
This all
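For the "delete the whole index" step, the update message is a small XML body POSTed to the /update handler; a sketch of that body (how you POST it is up to your client):

```xml
<delete><query>*:*</query></delete>
<commit/>
```

A DIH full import with optimization can then be triggered via the /dataimport handler with command=full-import&optimize=true.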
Dear Solr users,
I am currently using Solr and the TermsComponent to make an auto-suggest for my
website.
I have a field called p_field indexed and stored with type=text in the
schema xml. Nothing out of the usual.
I feed Solr a set of words separated by a comma and a space, such as (for
two
Hi Hoss,
actually I thought this would be necessary for the SolrInputDocument to map
against a special FieldType, but this isn't true. The mapping happens
some time after the UpdateProcessor has finished its work.
So yes, there is no reason to force the CSVRequestHandler to throw an
Exception if the