Hi,
Does anybody know if work is in progress to make Lucene's concurrent query
execution accessible through Solr? I am talking about this:
http://blog.mikemccandless.com/2019/10/concurrent-query-execution-in-apache.html
I find this particularly compelling since the changes in LUCENE-7976 /
Solr
I have a problem when sorting by payload value ... the resulting sort order
is correct for some documents, but not all.
The field type and field definitions are as follows:
The request parameters are the following:
_exact_utag_primary_id:utag77n5840c6h5v0g9b9ww
_id,
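For illustration, a minimal payload setup might look like the following; the
field name weights_dpf and the term color are hypothetical, and this assumes
the delimited-payloads field type shipped with the default schema and the
payload() function available since Solr 6.6:

  <fieldType name="delimited_payloads_float" class="solr.TextField" indexed="true" stored="false">
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <!-- a token indexed as "color|2.5" gets the float 2.5 stored as its payload -->
      <filter class="solr.DelimitedPayloadTokenFilterFactory" encoder="float"/>
    </analyzer>
  </fieldType>
  <field name="weights_dpf" type="delimited_payloads_float" indexed="true" stored="true"/>

  # sort on the payload attached to the term "color" (0.0 where the term is absent)
  q=*:*&sort=payload(weights_dpf,color) desc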
it was related
to the fact that these fields had DocValues. After some profiling, it became
clear that a lot of time was spent in FieldInfos' addOrUpdateInternal() and
related code.
André
On Wed, May 22, 2019 at 18:12, André Widhani wrote:
> Hi everyone,
>
> I need some advice
Hi everyone,
I need some advice on how to debug slow soft commits.
We use Solr for searches in a DAM system. In similar setups, soft commits
take about one to two seconds; in this case they take nearly ten seconds.
Solr runs on a dedicated VM with eight cores and 64 GB RAM (16 GB heap),
which is a common scenario
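For reference, these are the knobs I am looking at; the values below are just
a sketch, not our production settings. Slow soft commits are typically
dominated by opening and warming the new searcher:

  <!-- solrconfig.xml: open a new searcher at most every 2 seconds -->
  <autoSoftCommit>
    <maxTime>2000</maxTime>
  </autoSoftCommit>

  <!-- large autowarmCount values make each new searcher expensive to open -->
  <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="16"/>

Any newSearcher warming queries configured in solrconfig.xml add to that
time as well.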
5 GMT+01:00 Shawn Heisey :
> On 1/5/2018 6:51 AM, André Widhani wrote:
>
>> I know I can retrieve the entire schema using Schema API and I can also
>> use
>> it to manipulate the schema by adding fields etc.
>>
>> I don't see any way to post an entire schema
Hi,
I know I can retrieve the entire schema using Schema API and I can also use
it to manipulate the schema by adding fields etc.
I don't see any way to post an entire schema file back to the Schema API
though ... this is what most REST APIs offer: you retrieve an object,
modify it, and send it back.
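To make it concrete, here is what I can do today (localhost and mycore are
placeholder host and core names):

  # retrieve the entire schema as JSON
  curl "http://localhost:8983/solr/mycore/schema"

  # modify it incrementally, one command at a time
  curl -X POST -H 'Content-type:application/json' \
    --data-binary '{"add-field":{"name":"title","type":"text_general","stored":true}}' \
    "http://localhost:8983/solr/mycore/schema"

What is missing is a single call that accepts the whole schema document at
once.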
Thanks, Mark!
Hi,
shouldn't there be a tag for the 4.5.1 release under
http://svn.apache.org/repos/asf/lucene/dev/tags/ ?
Or am I looking in the wrong place?
Regards,
André
FWIW, I can confirm that Solr 4.x definitely cannot read indexes created with
1.4.
You'll get an exception like the following:
Caused by: org.apache.lucene.index.IndexFormatTooOldException: Format version
is not supported (resource: segment _16ofy in resource
ChecksumIndexInput(MMapIndexInput
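If the index cannot simply be rebuilt, a possible route is Lucene's
IndexUpgrader tool (available since 3.2), one major version at a time; the
jar versions below are examples:

  # a 1.4 (Lucene 2.9) index must first be upgraded with a 3.x jar ...
  java -cp lucene-core-3.6.2.jar org.apache.lucene.index.IndexUpgrader -verbose /path/to/index

  # ... and only then with a 4.x jar
  java -cp lucene-core-4.10.4.jar org.apache.lucene.index.IndexUpgrader -verbose /path/to/index

Re-indexing from the source data is of course the cleaner option.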
We configure both baseletter conversion (removing accents and umlauts) and
alternate spelling through the mapping file.
For baseletter conversion, with mostly German content, we transform all accents
that are not used in the German language (like the French é, è, ê, etc.) to
their baseletter. We do not do
From what version are you upgrading? The compressed attribute is unsupported
since the 3.x releases.
The change log (CHANGES.txt) has a section "Upgrading from Solr 1.4" in the
notes for Solr 3.1:
"Field compression is no longer supported. Fields that were formerly compressed
will be uncompr
I am just reading through this thread by chance, but doesn't this exception:
> Caused by: org.apache.solr.common.SolrException: Error in
> xpath:/config/luceneMatchVersion for solrconfig.xml
> org.apache.solr.common.SolrException: Error in
> xpath:/config/luceneMatchVersion for solrconfig.xml
I created SOLR-4862 ... I found no way to assign the ticket to somebody though
(I guess it is under "Workflow", but the button is greyed out).
Thanks,
André
Added the issue.
https://issues.apache.org/jira/browse/SOLR-4857
"Core admin action RELOAD lacks requests parameters to point core to another
config or schema file or data dir"
From: André Widhani [andre.widh...@digicol.de]
Sent: Thursday
does seem to be a new limitation. Could you create a JIRA
issue for it?
It would be fairly simple to add another reload method that also took the name
of a new solrconfig/schema file.
- Mark
On May 23, 2013, at 4:11 PM, André Widhani wrote:
> Mark, Alan,
>
> thanks for explaining an
When I create a core with Core admin handler using these request parameters:
action=CREATE
&name=core-tex69bbum21ctk1kq6lmkir-index3
&schema=/etc/opt/dcx/solr/conf/schema.xml
&instanceDir=/etc/opt/dcx/solr/
&config=/etc/opt/dcx/solr/conf/solrconfig.xml
&dataDir=/var/opt/dcx/solr/core-tex69bbum21ct
this is to use SolrCore#reload, and that has
>>> been the case for all of 4.x release if I remember right. I supported
>>> making this change to force people who might still be doing what is likely
>>> quite a buggy operation to switch to the correct code.
>>>
It seems to me that the behavior of the Core admin action "CREATE" has changed
when going from Solr 4.1 to 4.3.
With 4.1, I could re-configure an existing core (changing path/name to
solrconfig.xml for example). In 4.3, I get an error message:
SEVERE: org.apache.solr.common.SolrException: Err
This usually happens when the client sending the request to Solr has given up
waiting for the response (terminated the connection).
In your example, we see that the Solr query time is 81 seconds. The client
issuing the request probably has a time-out of 30 or 60 seconds.
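If you control the client, raise its timeout above the worst-case query time.
With SolrJ, for example, it could look like this (Builder API of newer SolrJ
versions; values are examples, in milliseconds):

  import org.apache.solr.client.solrj.impl.HttpSolrClient;

  HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore")
      .withConnectionTimeout(10000)   // time allowed to establish the connection
      .withSocketTimeout(120000)      // read timeout; must exceed the slowest expected query
      .build();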
André
Hi,
what is the current status of the Extended DisMax Query Parser? The release
notes for Solr 3.1 say it was experimental at that time (two years back).
The current wiki page for EDisMax does not contain any such statement. We
recently ran into the issue described in SOLR-2649 (using q.op=AND)
Cc: André Widhani
Subject: Re: AW: java.lang.OutOfMemoryError: Map failed
Hmm, I checked it and it seems to be OK:
root@solr01-dcg:~# ulimit -v
unlimited
Any other tips or do you need more debug info?
BR
On 04/02/2013 11:15 AM, André Widhani wrote:
> Hi Arkadi,
>
> this error usually
Hi Arkadi,
this error usually indicates that virtual memory is not sufficient (should be
"unlimited").
Please see http://comments.gmane.org/gmane.comp.jakarta.lucene.solr.user/69168
Regards,
André
From: Arkadi Colson [ark...@smartbit.be]
Sent: Tuesday
Hi Dirk,
please check
http://wiki.apache.org/solr/HighlightingParameters#hl.requireFieldMatch - this
may help you.
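In short, with a request along these lines (the field name body is an
example), only terms that matched in the highlighted field itself produce
snippets:

  q=body:solr&hl=true&hl.fl=body&hl.requireFieldMatch=true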
Regards,
André
From: Dirk Wintergruen [dwin...@mpiwg-berlin.mpg.de]
Sent: Monday, March 11, 2013 13:56
To: solr-user@lucene.apache.org
Subject:
Could you check the virtual memory limit (ulimit -v, for the operating system
user that runs Solr)?
It should report "unlimited".
André
From: zqzuk [ziqizh...@hotmail.co.uk]
Sent: Tuesday, February 26, 2013 13:22
To: solr-user@lucene.apache
These are the figures I got after indexing four and a half million documents
with both Solr 3.6.1 and 4.1.0 (and optimizing the index at the end).
$ du -h --max-depth=1
67G ./solr410
80G ./solr361
The main contributor to the reduced space consumption is (as expected, I
guess) the .fdt file:
This is what is listed under "Highlights" on the Apache page announcing the
Solr 4.1 release:
"The default codec incorporates an efficient compressed stored fields
implementation that compresses chunks of documents together with LZ4. (see
http://blog.jpountz.net/post/33247161884/efficient
This should be fixed in 3.6.2, which has been available since Dec 25.
From the release notes:
"Fixed ConcurrentModificationException during highlighting, if all fields were
requested."
André
From: mechravi25 [mechrav...@yahoo.co.in]
Sent: Friday, January 18
I just saw that you are running on SUSE 11 - unlike RHEL, for example, it does
not have the virtual memory limit set to "unlimited" by default.
Please check the virtual memory limit (ulimit -v, for the operating system
user that runs Tomcat/Solr).
Since 3.1, Solr maps the index files to virtual memory.
, André Widhani wrote:
> Do you use the LowerCaseFilterFactory filter in your analysis chain? You will
> probably want to add it, and if you already have, make sure it is _before_ the
> stemming filter so you get consistent results regardless of lower- or
> uppercase spelling.
>
Do you use the LowerCaseFilterFactory filter in your analysis chain? You will
probably want to add it, and if you already have, make sure it is _before_ the
stemming filter so you get consistent results regardless of lower- or uppercase
spelling.
You can protect words from being subject to stemming.
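A sketch of such a chain (the German stemmer is an example; protwords.txt
lists the protected terms):

  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- lowercase first, so the stemmer sees consistent input -->
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- terms in protwords.txt are marked as keywords and skipped by the stemmer -->
    <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
    <filter class="solr.SnowballPorterFilterFactory" language="German"/>
  </analyzer>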
The first thing I would check is the virtual memory limit (ulimit -v, check
this for the operating system user that runs Tomcat/Solr).
It should be set to "unlimited", but as far as I remember that is not the
default setting on SLES 11.
Since 3.1, Solr maps the index files to virtual memory.
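To check and fix this (the user name tomcat is an example):

  # check as the user that actually runs Tomcat/Solr - should print "unlimited"
  sudo -u tomcat bash -c 'ulimit -v'

  # make it permanent in /etc/security/limits.conf ("as" = address space)
  tomcat  soft  as  unlimited
  tomcat  hard  as  unlimited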
I think we had a similar exception recently when attempting to sort on a
multi-valued field ... could that be possible in your case?
André
-----Original Message-----
From: Dominik Lange [mailto:dominikla...@searchmetrics.com]
Sent: Wednesday, February 9, 2011 10:55
To: solr-user@lucene.