Does an alternative to waitFlush exist?
In my setup this command is very useful for my NRT. Is nobody here with the
same problem?
Hi
In SolrCloud, when a Solr node loses its ZooKeeper connection, e.g. because
of a session timeout, the LeaderElector ZooKeeper Watchers handling its
replica slices are notified with two events:
a Disconnected and a SyncConnected event. Currently the
On Wed, Jun 27, 2012 at 10:32 AM, Trym R. Møller t...@sigmat.dk wrote:
Hi
Hi,
The behaviour of this can be verified using the below test in the
org.apache.solr.cloud.LeaderElectionIntegrationTest
Can you reproduce the failure in your test every time or just rarely?
I added the test method to
Hi Sami
Thanks for your rapid reply.
Regarding 1) This seems to be time dependent but it is seen on my local
windows running the unit test and on a linux server running Solr.
Regarding 2) The test does not show the number of Watchers are
increasing, but this can be observed either by dumping
Hi Sami
Regarding 2) A simple way to inspect the number of watchers, is to add
an error log statement to the process method of the watcher
public void process(WatchedEvent event) {
    log.error(seq + " watcher received event: " + event);
and see that the number of logs
ok,
I see what you mean. Looks to me that you're right. I am not too
familiar with the LeaderElector so I'll let Mark take a second look.
--
Sami Siren
On Wed, Jun 27, 2012 at 11:32 AM, Trym R. Møller t...@sigmat.dk wrote:
Hi Sami
Regarding 2) A simple way to inspect the number of watchers,
I want to search for a word which may be in lower or upper case, and the results
should cover both. I mean both cases should be included in the search.
What should I change in my Solr configuration?
You should add a LowerCaseFilterFactory to both the index analyzer and
the query analyzer in your fieldType declaration in the schema file.
It will convert both the indexed terms and the queries to lowercase, which will
give you case-insensitive results.
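A minimal sketch of such a declaration; the field type name text_ci is made up for illustration, and the tokenizer choice is up to you:

```xml
<fieldType name="text_ci" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- lowercase at index time -->
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- lowercase at query time so Kloster and kloster match the same terms -->
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```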
Mikael Jagekrans
Software Engineer
The join operation supported as described at
http://wiki.apache.org/solr/Join is quite limited.
I'm thinking about how to support a standard join operation in Solr/Lucene,
because not everything can be de-normalized efficiently.
Take 2 schemas below as an example:
(1)Student
sid
name
cid// class
In your example de-normalising would be fine in a vast number of
use-cases; multi-valued fields are fine.
If you really want to, see http://wiki.apache.org/solr/Join, but make
sure you lose the default relational database world view first,
and only go down that route if you need to.
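For reference, a join along the lines the wiki describes could look like the following; the cname field and the idea of class and student docs sharing a cid are illustrative, not from the thread:

```
q={!join from=cid to=cid}cname:Math
```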
On 27 June 2012
I think we can treat this as a special join operation.
Here are some of my ideas for supporting it.
1, build each group as a separate index
Index 1's name group1
Key
Group 1's fields
Index 2's name group2
Key
Group 2's fields
The point is that sometimes the data after de-normalization will be huge; in
some cases it's even impossible.
Thanks,
-Original Message-
From: Lee Carroll [mailto:lee.a.carr...@googlemail.com]
Sent: Wednesday, June 27, 2012 7:38 PM
To: solr-user@lucene.apache.org
Subject: Re: how Solr/Lucene can
Sorry, you already have that link! And I did not see the question - apols
index schema could look something like:
id
name
classList - multi value
majorClassList - multi value
a standard query would do the equivalent of your sql
again apols for not seeing the link
lee c
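A query against that denormalised schema could then be as simple as the following (field name taken from the sketch above, value illustrative):

```
q=classList:math101
```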
On 27 June 2012 12:37, Lee
Anybody an idea?
The thread Dump looks like this:
Full thread dump Java HotSpot(TM) 64-Bit Server VM (20.1-b02 mixed mode):
http-8983-6 daemon prio=10 tid=0x41126000 nid=0x5c1 in
Object.wait() [0x7fa0ad197000]
java.lang.Thread.State: WAITING (on object monitor)
at
It seems that the IndexWriter wants to flush but needs to wait for others to
become idle. But I see the n-gram filter is working. Is your field's value too
long? You should also tell us the average load of the system, the free memory
and the memory used by the JVM.
On 2012-6-27 at 7:51 PM, Arkadi Colson ark...@smartbit.be wrote:
How long is it hanging? And how are you sending files to Tika, and
especially how often do you commit? One problem that people
run into is that they commit too often, causing segments to be
merged and occasionally that just takes a while and people
think that Solr is hung.
18G isn't very large as
I've set the maxFieldLength to the maximum because I'm indexing
documents which can be quite big:
<maxFieldLength>2147483647</maxFieldLength>
Load average is about 0.9 but CPU is running at 35%. Probably
because Tika has to extract the documents.
The virtual machine has 4 CPUs.
I'm sending files to solr with the php Solr library. I'm doing a commit
every 1000 documents:
<autoCommit>
  <maxDocs>1000</maxDocs>
  <!-- <maxTime>1000</maxTime> -->
</autoCommit>
Hard to say how long it's hanging. At least for 1 hour. After that I
restarted Tomcat to
Hi,
*My input string is*: Hi how r u "Test"
I need to index this input text with the double quotes, but Solr is removing
the double quotes while indexing.
I am using *string* as the data type.
If Test is searched then I am able to get the result as Hi how r u Test (without
double quotes)
How to get search
Hi,
I need to specify an antonym list - similar to synonym list.
What's the best way to go about it?
Currently, I am firing - RegularLuceneQuery AND (NOT keyword)
Example :Antonym list has four words - A, B1,B2,B3
A X B1
A X B2
A X B3
User Query contains 'A'
Expected result set: Documents NOT
My company, Lucid Imagination, is actively seeking full-time (and contract as
skills/needs/availability align) professional service technologists. Details
can be found here:
http://www.lucidimagination.com/about/careers/senior-consultant-position
I'll put in my personal bit and say that Lucid
On Tue, Jun 26, 2012 at 6:53 AM, Markus Jelsma
markus.jel...@openindex.io wrote:
Why would the documentCache not be populated via firstSearcher warming
queries with a non-zero value for rows?
Solr streams documents (the stored fields) returned to the user (so
very large result sets can be
Hello all,
environment: CentOS, Solr 3.5, JBoss 5.1
I have been using Wily (a monitoring tool) to instrument our Solr instances
under stress.
Can someone help me to understand something about the JMX values being
output from Solr? Please note - I am new to JMX.
problem / issue statement: for a
I am researching an issue w/ wildcard searches on complete words in 3.5. For
example, searching for kloster* returns klostermeyer, but klostermeyer*
returns nothing.
The field being queried has the following analysis chain (standard
'text_general'):
<fieldType name="text_general">
On Jun 27, 2012, at 12:01 , Yonik Seeley wrote:
On Tue, Jun 26, 2012 at 6:53 AM, Markus Jelsma
markus.jel...@openindex.io wrote:
Why would the documentCache not be populated via firstSearcher warming
queries with a non-zero value for rows?
Solr streams documents (the stored fields)
On Wed, Jun 27, 2012 at 12:23 PM, Erik Hatcher erik.hatc...@gmail.com wrote:
On Jun 27, 2012, at 12:01 , Yonik Seeley wrote:
On Tue, Jun 26, 2012 at 6:53 AM, Markus Jelsma
markus.jel...@openindex.io wrote:
Why would the documentCache not be populated via firstSearcher warming
queries with
Interesting!
We also tried routing the warming queries through our main search request
handler, with highlighting enabled, that has distrib=true as default. To
prevent the warming queries from running over the cluster on all instances we
set distrib=false in the warming queries. The queries were fired
Hi Michael,
I solved a similar issue by reformatting my query to do an OR across
an exact match or a wildcard query, with the exact match boosted.
HTH,
Michael Della Bitta
Appinions, Inc. -- Where Influence Isn’t a Game.
http://www.appinions.com
Interesting solution. Can you then explain to me for a given query:
?q='kloster' OR kloster*
How the exact match part of that is boosted (assuming the above is how you
formulated your query)?
Thanks!
Mike
-Original Message-
From: Michael Della Bitta
q=kloster^3 OR kloster*
On Wed, Jun 27, 2012 at 2:16 PM, Klostermeyer, Michael
mklosterme...@riskexchange.com wrote:
Interesting solution. Can you then explain to me for a given query:
?q='kloster' OR kloster*
How the exact match part of that is boosted (assuming the above is how you
We're doing:
?'kloster'^2 OR kloster*
This is for a homegrown autocomplete index based on a database of
context-free terms, so we have kind of a weird use case.
Note that wildcard matches will all be scored the same, so you might
need to do something to order them to suit your needs. In our
How can I get a snapshot of the index in Solr 3.x?
I am currently taking EBS (Amazon) snapshots of the volume where the data is
from one machine and creating new volumes from that snapshot. When the
service starts it still runs through an indexing process that takes forever.
Is there a way to
Use once-off replication, or, if you prefer, on Unix do
cp -lr your-index-dir your-backup-dir
at a time you know a commit isn't happening.
You'll have a clone of the index you can ship to another host. Remember
to delete your backup when done.
This uses the fact that files in a Lucene index
How many numbers? 0-9? Or every number under the sun?
You could achieve a limited number by using synonyms, 0 is a synonym for
nought and zero, etc.
Upayavira
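The synonym approach could be sketched in a hypothetical synonyms.txt along these lines, expanded at index time so either form matches:

```
0, zero, nought
1, one
2, two
```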
On Wed, Jun 27, 2012, at 05:22 PM, Alireza Salimi wrote:
Hi,
I was wondering if there's a built in solution in Solr so that you can
Hi,
Well that's the only solution I got so far and it would work for most of
the cases,
but l thought there might be some better solutions.
Thanks
On Wed, Jun 27, 2012 at 5:49 PM, Upayavira u...@odoko.co.uk wrote:
How many numbers? 0-9? Or every number under the sun?
You could achieve a
Our Solr master server protects access to itself by requiring that the clients
provide a signed SSL client cert from the same CA as the Solr server itself.
This is all handled within an Nginx reverse-proxy thats on the Solr server
itself.
This works great for clients... not so great for
Hi,
as far as I know Solr does not provide such a feature. If you cannot make any
assumptions on the numbers, choose an appropriate library that is able to
transform between numerical and non-numerical representations and populate the
search field with both versions at index-time.
-Sascha
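As a rough sketch of that index-time population, assuming only single-digit numbers need covering; the class name and mapping are made up for illustration, and a real library would handle larger numbers:

```java
import java.util.Map;

public class NumberWordExpander {
    // Hypothetical helper: append the spelled-out form of single digits,
    // so "room 4" is indexed as "room 4 four" and both queries match.
    private static final Map<Character, String> DIGITS = Map.of(
        '0', "zero", '1', "one", '2', "two", '3', "three", '4', "four",
        '5', "five", '6', "six", '7', "seven", '8', "eight", '9', "nine");

    public static String expand(String text) {
        StringBuilder sb = new StringBuilder(text);
        for (String token : text.split("\\s+")) {
            if (token.length() == 1 && DIGITS.containsKey(token.charAt(0))) {
                sb.append(' ').append(DIGITS.get(token.charAt(0)));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Feed the expanded text into the search field at index time.
        System.out.println(expand("room 4"));
    }
}
```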
Hi,
Can someone explain to me please why these two queries return different
results:
1. -PaymentType:Finance AND -PaymentType:Lease AND -PaymentType:Cash *(700
results)*
2. (-PaymentType:Finance AND -PaymentType:Lease) AND -PaymentType:Cash *(0
results)*
Logically the two above queries should
Have a field which uses a synonym file of your antonyms and a keep-word
filter, and use this field in your NOT query.
On 27 June 2012 15:54, RajParakh rajpar...@gmail.com wrote:
Hi,
I need to specify an antonym list - similar to synonym list.
What's the best way to go about it?
Currently, I
I have a function query that returns miles as the score between two points:
q={!func}sub(sum(geodist(OriginCoordinates,39,-105),geodist(DestinationCoordinates,36,-97),Mileage),1000)
The issue that I'm having now is that my results give me a list of scores:
*score:10.1 (mi)
score: 20 (mi)
score: 75 (mi)
On Wed, Jun 27, 2012 at 6:50 PM, mcb thestreet...@gmail.com wrote:
I have a function query that returns miles as a score along two points:
q={!func}sub(sum(geodist(OriginCoordinates,39,-105),geodist(DestinationCoordinates,36,-97),Mileage),1000)
The issue that I'm having now is that my results
I think: text fields are not exactly multi-valued. Instead there is
something called the 'positionIncrementGap' which gives a sweep
(usually 100) of empty positions (terms) to distinguish one field from
the next. If you set this to zero or one, that should give you one
long multi-valued field.
2)
I believe this is what the Java 'keystore' is for. You give the Java VM a
start option for the keystore file, and from then on outgoing sockets
use the certs for the target clients.
http://www.startux.de/index.php/java/44-dealing-with-java-keystoresyvComment44
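The relevant JVM start options for the outgoing side would be along these lines, using the standard javax.net.ssl system properties; the paths, passwords, and start.jar launch are placeholders:

```shell
java -Djavax.net.ssl.keyStore=/path/to/client-keystore.jks \
     -Djavax.net.ssl.keyStorePassword=changeit \
     -Djavax.net.ssl.trustStore=/path/to/truststore.jks \
     -Djavax.net.ssl.trustStorePassword=changeit \
     -jar start.jar
```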
I would understand if you had said that Klostermeyer* returned nothing
because the presence of the wildcard used to suppress analysis, including
the lower case filter so that the capital K term would never match an
indexed term. But, I would have expected klostermeyer* to match
klostermeyer
1. precisionStep is used for range queries on numeric fields. See
http://lucene.apache.org/core/old_versioned_docs/versions/3_5_0/api/all/org/apache/lucene/search/NumericRangeQuery.html
2. positionIncrementGap is used for phrase queries on multi-valued fields,
e.g. doc1 has two titles.
title1: ab
The quotes are probably indexed correctly. You need to escape the quotes in
your query:
"Hi how r u \"Test\""
-- Jack Krupansky
-Original Message-
From: ravicv
Sent: Wednesday, June 27, 2012 8:50 AM
To: solr-user@lucene.apache.org
Subject: How to index and search string which contains
I think they are logically the same, but 1 may be a little bit faster than 2.
On Thu, Jun 28, 2012 at 5:59 AM, Rublex ruble...@hotmail.com wrote:
Hi,
Can someone explain to me please why these two queries return different
results:
1. -PaymentType:Finance AND -PaymentType:Lease AND
It should work properly with the edismax query parser. The traditional
lucene query parser is not smart enough about the fact that the Lucene
BooleanQuery can't properly handle queries with only negative clauses.
Put *:* in front of all your negative terms and you will get similar
results.
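Concretely, the rewrite suggested above would look something like the following, using the PaymentType field from the question:

```
q=*:* -PaymentType:Finance -PaymentType:Lease -PaymentType:Cash
q=(*:* -PaymentType:Finance -PaymentType:Lease) AND (*:* -PaymentType:Cash)
```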
Does the database table have a timestamp field? Its value changes automatically on every update or modification, so you can use it to handle delta imports.
On Tue, Jun 19, 2012 at 6:24 PM, alex.wang wang_...@sohu.com wrote:
Hi all:
When I import the data from the DB to Solr, Solr changes the value with the
timezone.
E.g. the original value is 16/02/2012 12:05:16, changed to 1/02/2012