See https://hudson.apache.org/hudson/job/Solr-3.x/96/changes
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[ https://issues.apache.org/jira/browse/SOLR-2002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Robert Muir updated SOLR-2002:
--
Attachment: SOLR-2002_merged.patch
since we merged Lucene and Solr, the build system has been somewhat of
[ https://issues.apache.org/jira/browse/SOLR-2002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906689#action_12906689 ]
Robert Muir commented on SOLR-2002:
---
By the way, I think this really simplifies the
Hello-
I'm looking at using the new terms.getUniqueTermCount() to give a
quick count for the LukeRequestHandler rather than needing to walk all
the terms.
When the Solr index reader has just one segment, it works great. However
with more segments I get:
java.lang.UnsupportedOperationException:
Spelling Checking for Multiple Fields
-
Key: SOLR-2106
URL: https://issues.apache.org/jira/browse/SOLR-2106
Project: Solr
Issue Type: Bug
Components: spellchecker
Affects Versions: 1.4
See https://hudson.apache.org/hudson/job/Solr-trunk/1240/changes
[ https://issues.apache.org/jira/browse/LUCENE-2464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906709#action_12906709 ]
Lukas Vlcek commented on LUCENE-2464:
-
I found that even if the SingleFragListBuilder
This is expected/intentional, because computing the true unique term
count across multiple segments is exceptionally costly (you have to do
the merge sort to de-dup).
If you really want the true count, you can pull the TermsEnum and
.next() until exhaustion.
Alternatively, you can use
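As a self-contained sketch (plain Java, not the actual Lucene API) of why the multi-segment count requires a merge-and-dedup pass: each segment's term dictionary is sorted, but segments can share terms, so a correct count must collapse duplicates across segments rather than just summing per-segment counts.

```java
import java.util.*;

public class UniqueTermCount {
    // Sketch: merge the per-segment sorted term lists and count distinct
    // terms. A real implementation would stream one TermsEnum per segment
    // through a heap instead of materializing all terms at once.
    static long uniqueTermCount(List<List<String>> segments) {
        PriorityQueue<String> pq = new PriorityQueue<>();
        for (List<String> seg : segments) pq.addAll(seg);
        long count = 0;
        String prev = null;
        while (!pq.isEmpty()) {
            String term = pq.poll();       // terms come out in sorted order
            if (!term.equals(prev)) count++; // only count the first of a run
            prev = term;
        }
        return count;
    }

    public static void main(String[] args) {
        // "banana" appears in both segments but counts once.
        List<List<String>> segs = Arrays.asList(
            Arrays.asList("apple", "banana", "cherry"),
            Arrays.asList("banana", "date"));
        System.out.println(uniqueTermCount(segs)); // prints 4
    }
}
```

Summing per-segment unique counts would give 5 here; the merge is what makes the answer 4, and it is exactly this extra work that the single-segment fast path avoids.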
The failure was in TestIndexWriter.testThreadInterruptDeadlock:
[junit] java.lang.NoClassDefFoundError:
org/apache/lucene/util/ThreadInterruptedException$__CLR2_6_3c0c0gds5twgh
[junit] at
org.apache.lucene.util.ThreadInterruptedException.&lt;init&gt;(ThreadInterruptedException.java:28)
Thanks for reporting, Steven!
This is LUCENE-2118, striking again, taunting me. This particular
failure bugs me!!
Mike
On Mon, Sep 6, 2010 at 8:10 PM, Steven A Rowe sar...@syr.edu wrote:
While testing changes for LUCENE-2611, I saw
TestIndexWriterMergePolicy.testMaxBufferedDocsChange() fail,
[ https://issues.apache.org/jira/browse/LUCENE-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906723#action_12906723 ]
Michael McCandless commented on LUCENE-2573:
bq. We probably need a test that
Ahh -- this makes sense. I thought it was too good to be true!
On Tue, Sep 7, 2010 at 4:45 AM, Michael McCandless
luc...@mikemccandless.com wrote:
This is expected/intentional, because computing the true unique term
count across multiple segments is exceptionally costly (you have to do
the
[ https://issues.apache.org/jira/browse/LUCENE-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906798#action_12906798 ]
Jason Rutherglen commented on LUCENE-2573:
--
bq. shouldn't tiered flushing take
[ https://issues.apache.org/jira/browse/LUCENE-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906801#action_12906801 ]
Jason Rutherglen commented on LUCENE-2573:
--
bq. We can modify MockRAMDir to
[ https://issues.apache.org/jira/browse/SOLR-2002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906806#action_12906806 ]
Yonik Seeley commented on SOLR-2002:
Sounds cool! Whatever those strong in ant-foo come
[ https://issues.apache.org/jira/browse/SOLR-1316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906812#action_12906812 ]
Andrzej Bialecki commented on SOLR-1316:
-
I added license headers and committed the
[ https://issues.apache.org/jira/browse/SOLR-2002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906826#action_12906826 ]
Robert Muir commented on SOLR-2002:
---
Thanks, the major thing left is to consolidate
[ https://issues.apache.org/jira/browse/LUCENE-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jason Rutherglen updated LUCENE-2573:
-
Attachment: LUCENE-2573.patch
* perDocAllocator is removed from
[ https://issues.apache.org/jira/browse/LUCENE-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12906918#action_12906918 ]
Jason Rutherglen commented on LUCENE-2573:
--
The last patch also only flushes a
Hello,
I've tripped on this a few times lately, but never been able to reproduce
it; now I seem able to reproduce it semi-consistently with the
configuration below.
It would be great if someone else could try this out and see if it's a real
problem, or if it's just my machine.
: I'm writing my first SearchComponent to do custom calculations on search
: results. Is it possible to get the facet values for a field from within a
: SearchComponent? I've thought of adapting the StatsComponent and
: FieldFacetStats classes to try and accomplish this. But before I try that,
:
[ https://issues.apache.org/jira/browse/SOLR-2052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Stephen Green updated SOLR-2052:
Attachment: SOLR-2052-2.patch
Updated patch that fixes a bug when combining filter docsets and
[ https://issues.apache.org/jira/browse/SOLR-2105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jan Høydahl updated SOLR-2105:
--
Attachment: SOLR-2105.patch
The attached patch renames the parameter, both in code and config. Tests
[ https://issues.apache.org/jira/browse/LUCENE-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jason Rutherglen updated LUCENE-2573:
-
Attachment: LUCENE-2573.patch
There was a small bug in the choice of the max DWPT, in
ReversedWildcardFilter can create false positives
-
Key: SOLR-2108
URL: https://issues.apache.org/jira/browse/SOLR-2108
Project: Solr
Issue Type: Bug
Reporter: Robert Muir
[ https://issues.apache.org/jira/browse/SOLR-2108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Robert Muir updated SOLR-2108:
--
Attachment: SOLR-2108.patch
Simple fix: if we are doing a wildcard query on a reversed field, but we
[ https://issues.apache.org/jira/browse/SOLR-2107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Yonik Seeley updated SOLR-2107:
---
Attachment: SOLR-2107.patch
Here's a patch that adds qparser support for q and fq params.
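For background (illustrative syntax, not taken from the patch itself): Solr already lets a request select a query parser per-parameter via local params, e.g.:

```
q={!func}log(popularity)
fq={!term f=category}electronics
```

Here `{!func}` and `{!term}` name the parser handling that one parameter, so `q` and `fq` can each use a different parser on the same request.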
[ https://issues.apache.org/jira/browse/SOLR-2107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Yonik Seeley resolved SOLR-2107.
Fix Version/s: 4.0
Resolution: Fixed
MoreLikeThisHandler doesn't work with alternate
Thank you for your reply, it is very important to me.
1. I agree with you; after reading Solr's source code, I found that this
problem can be resolved by configuring db-data-config.xml, like this (my
database is SQL Server 2005; other databases may differ):
<dataSource name="dsSqlServer" type="JdbcDataSource"
[ https://issues.apache.org/jira/browse/LUCENE-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12907069#action_12907069 ]
Jason Rutherglen commented on LUCENE-2575:
--
bq. every term has its own open
[ https://issues.apache.org/jira/browse/SOLR-1665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12907080#action_12907080 ]
Yonik Seeley commented on SOLR-1665:
Due to the cost of distributed search tests, I
Try to set the batchsize as -1.
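For reference, `batchSize` is an attribute on DataImportHandler's `JdbcDataSource` in db-data-config.xml; setting it to -1 asks the JDBC driver to stream result rows instead of buffering the whole table. A minimal sketch (the driver, url, and credentials below are placeholders, not taken from this thread):

```xml
<dataConfig>
  <!-- batchSize="-1" hints the JDBC driver to stream rows rather than
       load the full result set into memory during a long import -->
  <dataSource name="dsSqlServer" type="JdbcDataSource"
              batchSize="-1"
              driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
              url="jdbc:sqlserver://localhost;databaseName=mydb"
              user="solr" password="secret"/>
</dataConfig>
```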
2010-09-08
傅顺开 (Fu Shunkai)
苏州广达友讯技术有限公司 (Suzhou Guangda Youxun Technology Co., Ltd.)
No. 1355 Jinjihu Avenue, Suzhou Industrial Park, Jiangsu
Suite 151A, International Science Park, 215021
Tel: (512) 6288-8255 (ext. 612)
Fax: (512) 6288-8155
Mobile: (0) 158-5018-8480
email: f...@peptalk.cn
http://www.bedo.cn, http://k.ai, http://www.lbs.org.cn
From: 郭芸 (Guo Yun)
Sent: 2010-09-07 09:55:05
To: Solr
2. But there are some problems:
if the table is very big, Solr will spend a long time importing and
indexing, maybe a day or more. So if network problems or other failures
occur during this time, Solr may not remember which documents have already
been processed, and if we continue the data import, we do not know