Hi,
It worked. I was specifying more than one field under defaultSearchField.
Once I specified just the single required field, the search worked.
Thanks a lot for your guidance.
Romita
From: "Romita Saha"
To: solr-user@lucene.apache.org,
Date: 10/23/2012 12:31 PM
Subject:
Hi,
Sorry for the typo in the previous mail. I am searching for dell
actually. The query is
http://.../solr/db/select?q=dell&start=0&rows=4&fl=laptop
I am not applying any analyzer/tokenizer for the fieldType 'string'. I
also want to share my solrconfig file with you.
data-
Are you applying any analyzer/tokenizer for the fieldType 'string'? (I guess
not.)
Your query in the response shows '*dell*', whereas your stored data is
'*Dell*'.
If you want to search ignoring case then you might need to apply
LowerCaseFilterFactory as an analyzer filter on the field, and then perform th
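A minimal sketch of what such a field type could look like in schema.xml (the
type name "string_ci" is made up for illustration; "laptop" is the field from
this thread):

  <!-- KeywordTokenizer keeps the whole value as one token; LowerCaseFilterFactory
       lowercases it at index and query time, so "Dell" and "dell" both match. -->
  <fieldType name="string_ci" class="solr.TextField" sortMissingLast="true">
    <analyzer>
      <tokenizer class="solr.KeywordTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

  <field name="laptop" type="string_ci" indexed="true" stored="true"/>

Documents have to be re-indexed after such a schema change for the new analysis
to take effect.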
Hi,
I added the laptop field to the schema.xml file. However, the query
http://.../solr/db/select?q=Dell&start=0&rows=4&fl=laptop is not able to
find dell. The response follows:
[Stripped XML response header: status 0, QTime 2; params: indent=on, start=0, q=dell, version=2.2, rows=10]
Thanks and regards,
Romita Saha
F
Hi,
I have indexed data from a database. I have specified a field, laptop. In
the database, laptop has the value Dell. I can search for laptop:Dell
with the following command:
http://localhost:8983/solr/db/select/?q=laptop:Dell&start=0&rows=4&fl=laptop
Can I search for j
Thanks for the replies.
I think I'll take a look at NRT.
(2012/10/21 4:42), Nagendra Nagarajayya wrote:
> You may want to look at realtime NRT for this kind of performance:
> https://issues.apache.org/jira/browse/SOLR-3816
>
> You can download realtime NRT integrated with Apache Solr from here:
>
On Tue, Oct 23, 2012 at 3:52 AM, Shawn Heisey wrote:
> As soon as you make any change at all to an index, it's no longer
> "optimized." Delete one document, add one document, anything. Most of the
> time you will not see a performance increase from optimizing an index that
> consists of one larg
On 10/22/2012 3:11 PM, Dotan Cohen wrote:
On Mon, Oct 22, 2012 at 10:01 PM, Walter Underwood
wrote:
First, stop optimizing. You do not need to manually force merges. The system
does a great job. Forcing merges (optimize) uses a lot of CPU and disk IO and
might be the cause of your problem.
I have a few questions regarding Solr Cloud. I've been following it for quite
some time, but I believe it was never production-ready before. I see that with
the release of 4.0 it's considered stable… is that the case? Can anyone out
there share their experiences with Solr Cloud in a production environm
Can someone provide an example configuration showing how to use the new
compression in Solr 4.1?
http://blog.jpountz.net/post/33247161884/efficient-compressed-stored-fields-with-lucene
On Mon, Oct 22, 2012 at 10:44 PM, Walter Underwood
wrote:
> Lucene already did that:
>
> https://issues.apache.org/jira/browse/LUCENE-3454
>
> Here is the Solr issue:
>
> https://issues.apache.org/jira/browse/SOLR-3141
>
> People over-use this regardless of the name. In Ultraseek Server, it was
>
any input on this?
thanks
Jie
--
View this message in context:
http://lucene.472066.n3.nabble.com/solr-memory-leak-prevent-tomcat-shutdown-tp4014788p4015265.html
Sent from the Solr - User mailing list archive at Nabble.com.
On Mon, Oct 22, 2012 at 10:01 PM, Walter Underwood
wrote:
> First, stop optimizing. You do not need to manually force merges. The system
> does a great job. Forcing merges (optimize) uses a lot of CPU and disk IO and
> might be the cause of your problem.
>
Thanks. Looking at the index statistic
On Mon, Oct 22, 2012 at 4:39 PM, Michael Della Bitta
wrote:
> Has the Solr team considered renaming the optimize function to avoid
> leading people down the path of this antipattern?
If it were never the right thing to do, it could simply be removed.
The problem is that it's sometimes the right t
Lucene already did that:
https://issues.apache.org/jira/browse/LUCENE-3454
Here is the Solr issue:
https://issues.apache.org/jira/browse/SOLR-3141
People over-use this regardless of the name. In Ultraseek Server, it was called
"force merge" and we had to tell people to stop doing that nearly e
Has the Solr team considered renaming the optimize function to avoid
leading people down the path of this antipattern?
Michael Della Bitta
Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271
www.appinions.com
Where Influence Isn’t a
First, stop optimizing. You do not need to manually force merges. The system
does a great job. Forcing merges (optimize) uses a lot of CPU and disk IO and
might be the cause of your problem.
Second, the OS will use the "extra" memory for file buffers, which really helps
performance, so you migh
On Mon, Oct 22, 2012 at 9:22 PM, Mark Miller wrote:
> Perhaps you can grab a snapshot of the stack traces when the 60 second
> delay is occurring?
>
> You can get the stack traces right in the admin ui, or you can use
> another tool (jconsole, visualvm, jstack cmd line, etc)
>
Thanks. I've refacto
Perhaps you can grab a snapshot of the stack traces when the 60 second
delay is occurring?
You can get the stack traces right in the admin ui, or you can use
another tool (jconsole, visualvm, jstack cmd line, etc)
- Mark
On Mon, Oct 22, 2012 at 1:47 PM, Dotan Cohen wrote:
> On Mon, Oct 22, 2012
On Mon, Oct 22, 2012 at 7:29 PM, Shawn Heisey wrote:
> On 10/22/2012 9:58 AM, Dotan Cohen wrote:
>>
>> Thank you, I have gone over the Solr admin panel twice and I cannot find
>> the cache statistics. Where are they?
>
>
> If you are running Solr4, you can see individual cache autowarming times
>
On 10/22/2012 9:58 AM, Dotan Cohen wrote:
Thank you, I have gone over the Solr admin panel twice and I cannot
find the cache statistics. Where are they?
If you are running Solr4, you can see individual cache autowarming times
here, assuming your core is named collection1:
http://server:port/
On Mon, Oct 22, 2012 at 6:01 PM, Jack Krupansky wrote:
> And, are you using UUIDs or providing specific key values?
specific key values
And, are you using UUIDs or providing specific key values?
-- Jack Krupansky
-Original Message-
From: Robert Krüger
Sent: Monday, October 22, 2012 9:22 AM
To: solr-user@lucene.apache.org
Subject: Re: uniqueKey not enforced
On Mon, Oct 22, 2012 at 2:08 PM, Jack Krupansky
wrote:
Whi
On Mon, Oct 22, 2012 at 5:27 PM, Mark Miller wrote:
> Are you using Solr 3X? The occasional long commit should no longer
> show up in Solr 4.
>
Thank you Mark. In fact, this is the production release of Solr 4.
--
Dotan Cohen
http://gibberish.co.il
http://what-is-what.com
On Mon, Oct 22, 2012 at 5:02 PM, Rafał Kuć wrote:
> Hello!
>
> You can check if the long warming is causing the overlapping
> searchers. Check Solr admin panel and look at cache statistics, there
> should be warmupTime property.
>
Thank you, I have gone over the Solr admin panel twice and I canno
Are you using Solr 3X? The occasional long commit should no longer
show up in Solr 4.
- Mark
On Mon, Oct 22, 2012 at 10:44 AM, Dotan Cohen wrote:
> I've got a script writing ~50 documents to Solr at a time, then
> committing. Each of these documents is no longer than 1 KiB of text,
> some much le
Hello!
You can check if the long warming is causing the overlapping
searchers. Check Solr admin panel and look at cache statistics, there
should be warmupTime property.
Lowering the autowarmCount should lower the time needed to warm up;
however, you can also look at your warming queries (if you hav
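A minimal sketch of the corresponding cache entry in the <query> section of
solrconfig.xml (using the standard filterCache as an example; the sizes are
illustrative only):

  <filterCache class="solr.FastLRUCache"
               size="512"
               initialSize="512"
               autowarmCount="64"/>  <!-- lower autowarmCount to shorten warm-up -->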
Further to that, in recent versions of Solr it's /browse, not the sillier
/itas handler name.
As for the "best" search front end, the answer is largely a matter of opinion.
It really depends on what technologies you'd like to deploy. The library
world has created two nice front-ends tha
When Solr is slow, I'm seeing these in the logs:
[collection1] Error opening new searcher. exceeded limit of
maxWarmingSearchers=2, try again later.
[collection1] PERFORMANCE WARNING: Overlapping onDeckSearchers=2
Googling, I found this in the FAQ:
"Typically the way to avoid this error is to eit
All - I'm a bit new to Solr and looking for documentation or guides on
implementing Solr as an enterprise search solution over some other products we
are currently using. Ideally, I'd like to find out information about
* General Solr server hardware requirements and approx. starting siz
On Mon, Oct 22, 2012 at 2:08 PM, Jack Krupansky wrote:
> Which release of Solr?
3.6.1
>
> Is this a single node Solr or distributed or cloud?
single node, actually embedded in an application.
>
> Is it possible that you added documents with the overwrite="false"
> attribute? That would suppres
Thanks, let me try it.
On Mon, Oct 22, 2012 at 3:13 PM, Paul Libbrecht wrote:
> My experience is that the easiest query interface is solr/itas (aka Velocity Solr).
>
> paul
>
>
> On 22 Oct 2012, at 11:15, Muwonge Ronald wrote:
>
>> Hi all,
>> have done some crawls for certain urls with nutch and indexed them
Hi Mark,
Mark Miller wrote:
Still waiting on that issue. I think Andrzej should just update it to
trunk and commit - it's optional and defaults to off. Go vote :)
Sounds like the problem is already solved and the remaining work
consists of code integration? Can somebody estimate how much work tha
I was trying to use the phonetic filter factory. I have tried all the encoders
that are available with solr.PhoneticFilterFactory, but none of them
supports Indian languages. Is there any other filter/method available so
that I can get a phonetic representation for Indian languages, e.g.
Hindi, Tamil,
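For context, a sketch of the kind of analyzer chain solr.PhoneticFilterFactory
is typically used in (the field type name and encoder choice are just examples;
the shipped encoders such as DoubleMetaphone are oriented towards Latin-script
text, which is the limitation described above):

  <fieldType name="text_phonetic" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.PhoneticFilterFactory" encoder="DoubleMetaphone" inject="true"/>
    </analyzer>
  </fieldType>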
hello jack,
that was it!
thx
mark
--
View this message in context:
http://lucene.472066.n3.nabble.com/need-help-with-exact-match-search-tp4014832p4015103.html
Sent from the Solr - User mailing list archive at Nabble.com.
My experience is that the easiest query interface is solr/itas (aka Velocity Solr).
paul
On 22 Oct 2012, at 11:15, Muwonge Ronald wrote:
> Hi all,
> have done some crawls for certain urls with nutch and indexed them to
> solr. I kindly request assistance in getting the best search
> interface but have
Which release of Solr?
Is this a single node Solr or distributed or cloud?
Is it possible that you added documents with the overwrite="false"
attribute? That would suppress the uniqueness test.
Is it possible that you added those documents before adding the uniqueKey
element to your schema
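For illustration, a sketch of an XML update message with that attribute (the
field values here are made up):

  <add overwrite="false">
    <doc>
      <field name="id">42</field>
      <field name="name">example doc</field>
    </doc>
  </add>

With overwrite="false" Solr skips the uniqueKey lookup, so a second document
with the same id is added alongside the first instead of replacing it.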
Billy,
There's a great wiki page at:
http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
which gives an example of indexing polygons
-Original Message-
From: Billy Newman [mailto:newman...@gmail.com]
Sent: Sunday, October 21, 2012 3:27 PM
To: solr-user@lucene.apache.org
Subject: [E
Hi,
This is how we do it in our Solr 3.4 setup:
curl 'http://<host>:<port>/solr/update?commit=true' --data-binary
'here_goes_the_query' -H 'Content-type:text/xml'
i.e. no extra wrapper tags surrounding the query tags.
HTH,
Dmitry
On Mon, Oct 22, 2012 at 10:29 AM, Markus.Mirsberger <
markus.mirsber...@gmx.de> wro
That's what I thought.
I'm just curious that nobody else seems to have this problem although
I found the exact same issue description in the issue tracker
(https://issues.apache.org/jira/browse/SOLR-2141) which goes back to
October 2010 and is flagged as "Resolved: Cannot Reproduce".
2012/10/20 L
Hi,
I noticed a duplicate entry in my index and I am wondering how that
can be, because I have a uniqueKey defined.
I have the following defined in my schema.xml:
[schema.xml excerpt, XML tags stripped in the digest: field types and fields
omitted, followed by the values "id" and "name"; remainder truncated]
Hi Erick,
thanks a lot. That trick fixed it :)
Regards,
Markus
On 22.10.2012 15:43, Erick Erickson wrote:
3.6 has some quirks around parsing pure negative queries sometimes. Try
*:* -whatever.
BTW, a syntax I like for doing delete-by-query just in a raw URL is
http://localhost:8983/solr/coll
LucidWorks is a commercial product supported by LucidWorks (the company). As
Hatcher already said, you really should ask the question on the LucidWorks forum.
bq:
It's best to ask LucidWorks-related questions at
http://support.lucidworks.com rather than on this e-mail list.
As for
3.6 has some quirks around parsing pure negative queries sometimes. Try
*:* -whatever.
BTW, a syntax I like for doing delete-by-query just in a raw URL is
http://localhost:8983/solr/collection1/update?commit=true&stream.body=*:*
-store_0_coordinate:[* TO *]
The curl you used is, of course, fine.
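The same delete can also be posted as an XML body instead of a URL parameter
(a sketch using the field name from the query above):

  curl 'http://localhost:8983/solr/collection1/update?commit=true' \
       -H 'Content-type:text/xml' \
       --data-binary '<delete><query>*:* -store_0_coordinate:[* TO *]</query></delete>'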
Yes, I'm sure.
I committed a second time too, to be sure.
And I tried to delete just one entry with the same command but without a
negated query, and that worked.
I think the problem is that it's a negated query.
Markus
On 22.10.2012 14:46, Patrick Plaatje wrote:
Did you make sure to commit after
Amit,
Your guess was perfect and result is what expected:
fq=-location_0_coordinate:[* TO *] to get docs with no geo data
Thx,
Jul
--
View this message in context:
http://lucene.472066.n3.nabble.com/Easy-question-docs-with-empty-geodata-field-tp4014751p4015067.html
Sent from the Solr - User
Did you make sure to commit after the delete?
Patrick
On 22 Oct 2012 08:43, "Markus.Mirsberger" wrote:
> Hi, Patrick,
>
> Because I have the same amount of documents in my index than before I
> perform the query.
> And when I use the negated query just to select the documents I ca
Hi, Patrick,
Because I have the same number of documents in my index as before I
performed the query.
And when I use the negated query just to select the documents, I can see
they are still there (and of course all the other documents too :) )
Regards,
Markus
On 22.10.2012 14:38, Patrick Plaatje w
Hi Markus,
Why do you think it's not deleting anything?
Thanks,
Patrick
On 22 Oct 2012 08:36, "Markus.Mirsberger" wrote:
> Hi,
>
> I am trying to delete some documents in my index by query.
> When I just select them with this negated query, I get all the documents I
> want to
Hi,
I am trying to delete some documents in my index by query.
When I just select them with this negated query, I get all the documents
I want to delete, but when I use this query in the DeleteByQuery it is
not working.
I'm trying to delete all elements whose value ends with 'somename/'
Wh
Hi,
I am trying to delete some documents in my index by query.
When I select them with this negated query I get all the documents I
want to delete, but when I use this query in the DeleteByQuery it is not
working.
I'm trying to delete all elements whose value ends with 'somename/'
When I u