Hi,
I am trying to delete some documents in my index by query.
When I just select them with this negated query, I get all the documents
I want to delete, but when I use the same query in the DeleteByQuery it is
not working.
I'm trying to delete all elements whose value ends with 'somename/'
Hi Markus,
Why do you think it's not deleting anything?
Thanks,
Patrick
On 22 Oct. 2012 08:36, Markus.Mirsberger markus.mirsber...@gmx.de
wrote:
Hi,
I am trying to delete some documents in my index by query.
When I just select them with this negated query, I get all the
Hi Patrick,
Because I have the same number of documents in my index as before I
performed the query.
And when I use the negated query just to select the documents, I can see
they are still there (and of course all the other documents too :) )
Regards,
Markus
On 22.10.2012 14:38, Patrick Plaatje wrote:
Did you make sure to commit after the delete?
Patrick
On 22 Oct. 2012 08:43, Markus.Mirsberger markus.mirsber...@gmx.de
wrote:
Hi Patrick,
Because I have the same number of documents in my index as before I
performed the query.
And when I use the negated query just to select
Amit,
Your guess was perfect and the result is what I expected:
fq=-location_0_coordinate:[* TO *] to get docs with no geo data
Thx,
Jul
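(For reference, a sketch of the full request with that filter; host, port,
and the /select handler are just the stock defaults, and the spaces would
be URL-encoded in practice:
http://localhost:8983/solr/select?q=*:*&fq=-location_0_coordinate:[* TO *]
i.e. match all documents, then filter down to those missing the field.)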
Yes, I'm sure.
I committed a second time too, to be sure.
And I tried to delete just one entry with the same command but without a
negated query, and that worked.
I think the problem is that it's a negated query.
Markus
On 22.10.2012 14:46, Patrick Plaatje wrote:
Did you make sure to commit after
3.6 has some quirks around parsing pure negative queries sometimes. Try
*:* -whatever.
BTW, a syntax I like for doing delete-by-query just in a raw URL is
http://localhost:8983/solr/collection1/update?commit=true&stream.body=<delete><query>*:* -store_0_coordinate:[* TO *]</query></delete>
The curl you
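(A curl equivalent, as a sketch assuming the same core and field names as
in the URL above:
curl 'http://localhost:8983/solr/collection1/update?commit=true' -H 'Content-type:text/xml' --data-binary '<delete><query>*:* -store_0_coordinate:[* TO *]</query></delete>'
Posting the body with --data-binary avoids having to URL-encode the XML
in stream.body.)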
LucidWorks is a commercial product supported by LucidWorks (the company). As
Hatcher already said, you really should ask the question on the LucidWorks forum.
bq:
It's best to ask LucidWorks related questions at
http://support.lucidworks.com rather than in this e-mail list.
As for
Hi Erick,
thanks a lot. That trick fixed it :)
Regards,
Markus
On 22.10.2012 15:43, Erick Erickson wrote:
3.6 has some quirks around parsing pure negative queries sometimes. Try
*:* -whatever.
BTW, a syntax I like for doing delete-by-query just in a raw URL is
Hi,
I noticed a duplicate entry in my index and I am wondering how that
can be, because I have a uniqueKey defined.
I have the following defined in my schema.xml:
<?xml version="1.0" ?>
<schema name="main core" version="1.1">
  <types>
    <fieldtype name="string" class="solr.StrField"
      sortMissingLast="true"
Hi,
This is how we do it in our Solr 3.4 setup:
curl http://solrip:port/solr/update?commit=true --data-binary
'<delete><query>here_goes_the_query</query></delete>' -H
'Content-type:text/xml'
i.e. no extra <update>, </update> tags surrounding the delete tags.
HTH,
Dmitry
On Mon, Oct 22, 2012 at 10:29
Billy,
There's a great wiki page at:
http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
which gives an example on indexing polygons
-Original Message-
From: Billy Newman [mailto:newman...@gmail.com]
Sent: Sunday, October 21, 2012 3:27 PM
To: solr-user@lucene.apache.org
Subject:
Which release of Solr?
Is this a single node Solr or distributed or cloud?
Is it possible that you added documents with the overwrite=false
attribute? That would suppress the uniqueness test.
Is it possible that you added those documents before adding the uniqueKey
element to your schema,
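(For reference, a sketch of overwrite=false in the XML update format; the
document and field values here are made up:
<add overwrite="false">
  <doc><field name="id">1</field></doc>
</add>
With overwrite="false", Solr skips the uniqueKey-based replacement of
existing documents, so duplicates can be created.)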
In my experience, the easiest query interface is solr/itas (aka velocity solr).
paul
On 22 Oct. 2012 at 11:15, Muwonge Ronald wrote:
Hi all,
I have done some crawls for certain URLs with Nutch and indexed them to
Solr. I kindly request assistance in getting the best search
interface but have no
hello jack,
that was it!
thx
mark
I was trying to use the phonetic filter factory; I have tried all the encoders
that are available with solr.PhoneticFilterFactory but none of them
supports Indian languages. Is there any other filter/method available so
that I can get a phonetic representation for Indian languages, e.g
Hi Mark,
Mark Miller wrote:
Still waiting on that issue. I think Andrzej should just update it to
trunk and commit - it's optional and defaults to off. Go vote :)
Sounds like the problem is already solved and the remaining work
consists of code integration? Can somebody estimate how much work
Thanks, let me try it.
On Mon, Oct 22, 2012 at 3:13 PM, Paul Libbrecht p...@hoplahup.net wrote:
In my experience, the easiest query interface is solr/itas (aka velocity solr).
paul
On 22 Oct. 2012 at 11:15, Muwonge Ronald wrote:
Hi all,
have done some crawls for certain urls with nutch and
On Mon, Oct 22, 2012 at 2:08 PM, Jack Krupansky j...@basetechnology.com wrote:
Which release of Solr?
3.6.1
Is this a single node Solr or distributed or cloud?
single node, actually embedded in an application.
Is it possible that you added documents with the overwrite=false
attribute? That
All - I'm a bit new to Solr and looking for documentation or guides on
implementing Solr as an enterprise search solution over some other products we
are currently using. Ideally, I'd like to find out information about
* General Solr server hardware requirements and approx. starting
When Solr is slow, I'm seeing these in the logs:
[collection1] Error opening new searcher. exceeded limit of
maxWarmingSearchers=2, try again later.
[collection1] PERFORMANCE WARNING: Overlapping onDeckSearchers=2
Googling, I found this in the FAQ:
Typically the way to avoid this error is to
Further on that: in recent versions of Solr, it's /browse, not the sillier
/itas handler name.
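(For example, with the stock Solr 4 example configuration and its default
collection1 core, that handler is at
http://localhost:8983/solr/collection1/browse
a sketch; adjust host, port, and core name to your setup.)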
As far as the best search front end, it's such an opinionated answer here.
It all really depends on what technologies you'd like to deploy. The library
world has created two nice front-ends that
Hello!
You can check if long warming is causing the overlapping
searchers. Check the Solr admin panel and look at the cache statistics;
there should be a warmupTime property.
Lowering the autowarmCount should lower the time needed to warm up,
however you can also look at your warming queries (if you
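(For illustration, autowarmCount is set per cache in solrconfig.xml; a
sketch with made-up sizes:
<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="64"/>
Setting autowarmCount="0" disables autowarming for that cache entirely.)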
Are you using Solr 3X? The occasional long commit should no longer
show up in Solr 4.
- Mark
On Mon, Oct 22, 2012 at 10:44 AM, Dotan Cohen dotanco...@gmail.com wrote:
I've got a script writing ~50 documents to Solr at a time, then
committing. Each of these documents is no longer than 1 KiB of
On Mon, Oct 22, 2012 at 5:02 PM, Rafał Kuć r@solr.pl wrote:
Hello!
You can check if long warming is causing the overlapping
searchers. Check the Solr admin panel and look at the cache statistics;
there should be a warmupTime property.
Thank you, I have gone over the Solr admin panel twice and
On Mon, Oct 22, 2012 at 5:27 PM, Mark Miller markrmil...@gmail.com wrote:
Are you using Solr 3X? The occasional long commit should no longer
show up in Solr 4.
Thank you Mark. In fact, this is the production release of Solr 4.
--
Dotan Cohen
http://gibberish.co.il
http://what-is-what.com
And, are you using UUIDs or providing specific key values?
-- Jack Krupansky
-Original Message-
From: Robert Krüger
Sent: Monday, October 22, 2012 9:22 AM
To: solr-user@lucene.apache.org
Subject: Re: uniqueKey not enforced
On Mon, Oct 22, 2012 at 2:08 PM, Jack Krupansky
On Mon, Oct 22, 2012 at 6:01 PM, Jack Krupansky j...@basetechnology.com wrote:
And, are you using UUIDs or providing specific key values?
specific key values
On 10/22/2012 9:58 AM, Dotan Cohen wrote:
Thank you, I have gone over the Solr admin panel twice and I cannot
find the cache statistics. Where are they?
If you are running Solr4, you can see individual cache autowarming times
here, assuming your core is named collection1:
On Mon, Oct 22, 2012 at 7:29 PM, Shawn Heisey s...@elyograg.org wrote:
On 10/22/2012 9:58 AM, Dotan Cohen wrote:
Thank you, I have gone over the Solr admin panel twice and I cannot find
the cache statistics. Where are they?
If you are running Solr4, you can see individual cache autowarming
Perhaps you can grab a snapshot of the stack traces when the 60 second
delay is occurring?
You can get the stack traces right in the admin ui, or you can use
another tool (jconsole, visualvm, jstack cmd line, etc)
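(For example, a sketch using the jstack command-line tool, assuming you
know the Solr JVM's process id:
jstack <pid> > stacks.txt
Grab a few of these while the delay is happening and compare them.)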
- Mark
On Mon, Oct 22, 2012 at 1:47 PM, Dotan Cohen dotanco...@gmail.com wrote:
On Mon, Oct 22, 2012 at 9:22 PM, Mark Miller markrmil...@gmail.com wrote:
Perhaps you can grab a snapshot of the stack traces when the 60 second
delay is occurring?
You can get the stack traces right in the admin ui, or you can use
another tool (jconsole, visualvm, jstack cmd line, etc)
First, stop optimizing. You do not need to manually force merges. The system
does a great job. Forcing merges (optimize) uses a lot of CPU and disk IO and
might be the cause of your problem.
Second, the OS will use the extra memory for file buffers, which really helps
performance, so you might
Has the Solr team considered renaming the optimize function to avoid
leading people down the path of this antipattern?
Michael Della Bitta
Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271
www.appinions.com
Where Influence Isn’t a
Lucene already did that:
https://issues.apache.org/jira/browse/LUCENE-3454
Here is the Solr issue:
https://issues.apache.org/jira/browse/SOLR-3141
People over-use this regardless of the name. In Ultraseek Server, it was called
force merge and we had to tell people to stop doing that nearly
On Mon, Oct 22, 2012 at 4:39 PM, Michael Della Bitta
michael.della.bi...@appinions.com wrote:
Has the Solr team considered renaming the optimize function to avoid
leading people down the path of this antipattern?
If it were never the right thing to do, it could simply be removed.
The problem is
On Mon, Oct 22, 2012 at 10:01 PM, Walter Underwood
wun...@wunderwood.org wrote:
First, stop optimizing. You do not need to manually force merges. The system
does a great job. Forcing merges (optimize) uses a lot of CPU and disk IO and
might be the cause of your problem.
Thanks. Looking at
any input on this?
thanks
Jie
On Mon, Oct 22, 2012 at 10:44 PM, Walter Underwood
wun...@wunderwood.org wrote:
Lucene already did that:
https://issues.apache.org/jira/browse/LUCENE-3454
Here is the Solr issue:
https://issues.apache.org/jira/browse/SOLR-3141
People over-use this regardless of the name. In Ultraseek
Can someone provide an example configuration for how to use the new
compression in Solr 4.1?
http://blog.jpountz.net/post/33247161884/efficient-compressed-stored-fields-with-lucene
I have a few questions regarding Solr Cloud. I've been following it for quite
some time but I believe it wasn't ever production ready. I see that with the
release of 4.0 it's considered stable… is that the case? Can anyone out there
share your experiences with Solr Cloud in a production
On 10/22/2012 3:11 PM, Dotan Cohen wrote:
On Mon, Oct 22, 2012 at 10:01 PM, Walter Underwood
wun...@wunderwood.org wrote:
First, stop optimizing. You do not need to manually force merges. The system
does a great job. Forcing merges (optimize) uses a lot of CPU and disk IO and
might be the
On Tue, Oct 23, 2012 at 3:52 AM, Shawn Heisey s...@elyograg.org wrote:
As soon as you make any change at all to an index, it's no longer
optimized. Delete one document, add one document, anything. Most of the
time you will not see a performance increase from optimizing an index that
consists
Thanks for the replies.
I think I'll take a look at NRT.
(2012/10/21 4:42), Nagendra Nagarajayya wrote:
You may want to look at realtime NRT for this kind of performance:
https://issues.apache.org/jira/browse/SOLR-3816
You can download realtime NRT integrated with Apache Solr from here:
Hi,
I have indexed from a database. I have specified a field, laptop. In
the database, laptop has the value Dell. I can search laptop:Dell
with the following command.
http://localhost:8983/solr/db/select/?q=laptop:Dell&start=0&rows=4&fl=laptop
Can I search for
Hi,
I added <defaultSearchField>laptop</defaultSearchField> to the schema.xml
file. However the query
http://.../solr/db/select?q=Dell&start=0&rows=4&fl=laptop is not able to
search for dell. Following is the response.
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int
Are you applying any analyzer/tokenizer for the fieldType 'string'? (I guess
not.)
Your query in the response shows 'dell' whereas your stored data is
'Dell'.
If you want to search ignoring case then you might need to use
LowerCaseFilterFactory as an analyzer on the field, and then perform
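(As a sketch, a case-insensitive exact-match field type might look like
the following; the name string_ci is made up:
<fieldType name="string_ci" class="solr.TextField" sortMissingLast="true">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
KeywordTokenizerFactory keeps the whole value as a single token, so only
the case is normalized.)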
Hi,
Sorry for the typo in the previous mail. I am searching for dell
actually. The query is
http://.../solr/db/select?q=dell&start=0&rows=4&fl=laptop
I am not applying any analyzer/tokenizer for the fieldType 'string'. I
also want to share my solrconfig file with you.
<requestHandler
Hi,
It worked. I was specifying more than one field under defaultSearchField.
Once I specified just the required field, it is able to do the search.
Thanks a lot for your guidance.
Romita
From: Romita Saha romita.s...@sg.panasonic.com
To: solr-user@lucene.apache.org,
Date: