Hi,
I am using the Microsoft JDBC driver version 6.4 with Solr 7.4.0. I have
tried removing selectMethod=Cursor and it still runs out of heap space.
Has anyone faced a similar issue?
Thanks
Tanya
On Tue, Sep 18, 2018 at 6:38 PM Shawn Heisey wrote:
> On 9/18/2018 4:48 PM, Tany
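For reference, a sketch of a DIH data-config that asks the SQL Server driver to stream rows instead of buffering the whole result set (the connection URL, table, and field names here are placeholders, not from this thread; batchSize is passed through as the JDBC fetch size):

```xml
<dataConfig>
  <!-- responseBuffering=adaptive tells the MS JDBC driver to stream rows
       rather than hold the entire result set in memory -->
  <dataSource type="JdbcDataSource"
              driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
              url="jdbc:sqlserver://dbhost;databaseName=mydb;responseBuffering=adaptive"
              user="solr"
              password="***"
              batchSize="10000"/>
  <document>
    <entity name="docs" query="SELECT id, title FROM my_table">
      <field column="id" name="id"/>
      <field column="title" name="title"/>
    </entity>
  </document>
</dataConfig>
```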
I think if you try hard enough, it is possible to get Solr to keep
multiple versions of documents where it would normally keep only the latest.
They will just have different internal Lucene ids.
This may of course break a lot of other things like SolrCloud and
possibly facet counts.
So, I would ask the ac
No. Solr only has one version of a document. It is not a multi-version database.
Each replica will return the newest version it has.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Sep 18, 2018, at 7:11 PM, zhenyuan wei wrote:
>
> Hi all,
>add
Thanks Yasufumi.
I will check this option. I used schema API to make the changes.
Thanks & Regards
Piyush Rathor
Consultant
Deloitte Digital (Salesforce.com / Force.com)
Deloitte Consulting Pvt. Ltd.
Office: +1 (615) 209 4980
Mobile : +1 (302) 397 1491
prat...@deloitte.com | www.deloitte.com
Ple
Thanks Jan.
I used Schema API to do it.
Thanks & Regards
Piyush Rathor
Hi All,
How can we add a synonyms text file to solr cloud. I have a text file with
comma separated synonyms.
Thanks & Regards
Piyush Rathor
In addition, I tried with maxErrors=3 and with only 1 error document, and the
indexing process still gets aborted.
Could it be the way I defined the TolerantUpdateProcessorFactory in
solrconfig.xml?
On 18/9/2018 3:13 PM, Derek Poh wrote:
Hi
I am using CSV formatted index updates to index on tab del
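For context, a minimal solrconfig.xml sketch of how TolerantUpdateProcessorFactory is usually wired in (the chain name here is invented). Note that the chain must actually be selected, either with default="true" or by passing update.chain on the request, and that it tolerates per-document indexing errors, not a parse failure of the whole CSV stream:

```xml
<updateRequestProcessorChain name="tolerant-chain" default="true">
  <!-- maxErrors = -1 means: skip any number of bad documents -->
  <processor class="solr.TolerantUpdateProcessorFactory">
    <int name="maxErrors">-1</int>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```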
Hi,
> https://github.com/airalcorn2/Solr-LTR#RankNet
>
> Has anyone tried on this before? And what is the format of the training
> data that this model requires?
I haven't tried it, but I'd like to let you know that there is another LTR project we've been
developing:
https://github.com/LTR4L/
Hi all,
adding Solr documents with overwrite=false will keep multiple versions of
documents.
My questions are:
1. How to search the newest documents? With what options?
2. How to delete documents whose version is older than the newest?
for example:
{
"id":"1002",
"name":["james"]
On 9/18/2018 4:48 PM, Tanya Bompi wrote:
I have the SOLR 7.0 setup with the DataImportHandler connecting to the
sql server db. I keep getting OutOfMemory: Java Heap Space when doing a
full import. The size of the records is around 3 million so not very huge.
I tried the following steps and not
Hi,
I have the SOLR 7.0 setup with the DataImportHandler connecting to the
sql server db. I keep getting OutOfMemory: Java Heap Space when doing a
full import. The size of the records is around 3 million so not very huge.
I tried the following steps and nothing helped thus far.
1. Setting the "r
Thanks for the information. I thought backup was going to be mostly
disk activity. But I understand now that RAM is involved here as well. We
indeed did NOT have enough memory in this box, as it is a 64GB box with an index
size of 72GB being backed up. The read (real time GET) performance was
bette
On 9/18/2018 2:21 PM, Christopher Schultz wrote:
AIUI, Solr doesn't support updating a single field in a document. The
document is replaced no matter how hard you try to be surgical about
updating a single field.
Solr does have Atomic Update functionality. For this to work, the index
must be a
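As background, an atomic update request typically looks like the sketch below (the core name and field names are hypothetical; this relies on the relevant fields being stored or having docValues, per the Atomic Updates documentation):

```shell
# "set" replaces a field value, "add" appends to a multivalued field;
# Solr fetches the stored document, applies the changes, and reindexes it
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:8983/solr/mycore/update?commit=true' \
  --data-binary '[{"id":"doc1","price":{"set":9.99},"tags":{"add":"sale"}}]'
```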
Yup, thanks for the clarification. I see now that some of the items I list
in 2 are moot.
On Tue, Sep 18, 2018 at 4:16 PM Alexandre Rafalovitch
wrote:
> Uhm, inline:
>
> On 18 September 2018 at 17:05, Dan Brown wrote:
> > 1. Thank you.
> >
> > 2. I think this is what you're looking for. You'd
Oops, premature send.
But basically, nearly all the items below seem to be a mix of things
that CSV loading can already do, that URPs can already do, or where a URP
would be the good place to inject that as a plugin. E.g.
http://lucene.apache.org/solr/guide/7_4/update-request-processors.html#templateupdateprocessorfa
Uhm, inline:
On 18 September 2018 at 17:05, Dan Brown wrote:
> 1. Thank you.
>
> 2. I think this is what you're looking for. You'd be able to be more
> specific than with bin/post. For instance:
> a. specify the CSV delimiter, CSV quote character, and multivalued field
> delimiter
http://lucene
1. Thank you.
2. I think this is what you're looking for. You'd be able to be more
specific than with bin/post. For instance:
a. specify the CSV delimiter, CSV quote character, and multivalued field
delimiter
b. the dynamic-fields feature lets you write plugins in Java to define
values (very si
I think this is the issue with top-level negative clause. Lucene does
not know what "-x" means without "*:* -x" to establish the baseline
set to subtract from. Solr has a workaround for top-level negative
query, so "-WithinPrefixTreeQuery..." triggers that special treatment.
But "+(-WithinPrefixTre
Dan,
On 9/18/18 2:51 PM, Dan Brown wrote:
> I've been working on this for a while and it's finally in a state
> where it's ready for public consumption.
>
> This is a command line indexer that will index CSV or JSON
> documents: https://github.com/
bq. can you share *ALL* of...
from both machines!
On Tue, Sep 18, 2018 at 12:40 PM Shawn Heisey wrote:
>
> On 9/18/2018 12:23 PM, Gu, Steve (CDC/OD/OADS) (CTR) wrote:
> > I have set up my solr as a standalone service and its URL is
> > http://solr.server:8983/solr. I opened 8983 on solr.se
1. Congrats!
2. How is this different from bin/post? CSV and JSON are both
supported formats. I am sure it is very clear to you, but to a visitor
- not so much.
3. What is the significance of "replace just the field". Is that an
atomic update? Similar to AtomicUpdateProcessorFactory? What is the
us
On 9/18/2018 11:00 AM, Ganesh Sethuraman wrote:
We are using Solr 7.2.1 with SolrCloud with 35 collections with 1 node ZK
ensemble (in lower environment, we will have 3 nodes ensemble) in AWS. We
are testing to see if we have Async Solr Cloud backup (
https://lucene.apache.org/solr/guide/7_2/col
On 9/18/2018 12:23 PM, Gu, Steve (CDC/OD/OADS) (CTR) wrote:
I have set up my solr as a standalone service and its URL is
http://solr.server:8983/solr. I opened 8983 on solr.server to anyone, and
solr can be accessed from laptops/desktops. But when I tried to access the
solr from some se
Three ways:
1. Use the Admin UI Schema tab and add/delete fields/copyFields there. No support
for fieldTypes.
2. Use the Schema API, see the ref. guide.
3. bin/solr zk cp zk:/configs/myconfig/managed-schema .
go ahead and edit the schema
bin/solr zk cp managed-schema zk:/configs/myconfig/managed-schema
rel
I guess you could do a version-independent backup with /export handler and store
docs in XML or JSON format. Or you could use streaming and store the entire
index
as JSON tuples, which could then be ingested into another version.
But it is correct that the backup/restore feature of Solr is not pr
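For the /export route, a sketch of the kind of request involved (collection and field names are placeholders; /export requires explicit sort and fl parameters, and those fields must have docValues):

```shell
# stream the whole collection out as JSON tuples
curl 'http://localhost:8983/solr/mycoll/export?q=*:*&sort=id+asc&fl=id,title&wt=json' \
  > backup.json
```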
Alex,
I tried to curl http://solr.server:8983/solr/ and got different results from
different machines. I also did shift-reload which gave me the same result. So
it does not seem to be a browser cache issue.
I also shut down solr and tried to access it. It gave connection failure error
for b
Hi Alex and Erick,
We could possibly put them in fq, but the way we set everything up would
make that hard; still, going that route might be the only option.
I did take a look at the parsed query and this is the difference:
This is the one that works:
"-WithinPrefixTreeQuery(fieldName=collection
The only hard-and-fast rule is that you must re-index from source when
you upgrade to Solr X+2. Solr (well, Lucene) tries very hard to
maintain one-major-version back-compatibility, so Solr 8 will function
with Solr 7 indexes but _not_ any index _ever touched_ by 6x.
That said, it's usually a good
Also, Solr does _not_ implement strict Boolean logic, although with
appropriate parentheses you can get it to look like Boolean logic.
See: https://lucidworks.com/2011/12/28/why-not-and-or-and-not/.
Additionally, for _some_ clauses a pure-not query is translated into
*:* -pure_not_query which is h
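To illustrate the rewrite being described (the field name is invented for the example):

```
q=-inStock:false          top-level pure negative; Solr rewrites it to "*:* -inStock:false"
q=+(-inStock:false)       pure negative nested inside parentheses; no rewrite, matches nothing
q=+(*:* -inStock:false)   adding the explicit *:* baseline restores the expected matches
```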
I've been working on this for a while and it's finally in a state where
it's ready for public consumption.
This is a command line indexer that will index CSV or JSON documents:
https://github.com/likethecolor/solr-indexer
There are quite a few parameters/options that can be set.
One thing to not
Then you are either seeing different instances or your browser is
hard-caching the Admin pages. Try shift-reload or anonymous mode to
get a full refresh of the HTML/Javascript. Or even a command line request.
Regards,
Alex.
On 18 September 2018 at 14:43, Gu, Steve (CDC/OD/OADS) (CTR)
wrote:
>
No, Solr was not restarted as SolrCloud. We see Solr from one computer and
all cores are available for query, but from another computer it shows the
admin page as SolrCloud with errors on the page. All the links in the left nav
do not work either.
-Original Message-
From: Alexa
Have a look at what debug shows in the parsed query. I think every
bracket is quite significant actually and you are generating a
different type of clause.
Also, have you thought about putting those individual clauses into
'fq' instead of jointly into 'q'? This may give you faster search too,
as S
Sounds like your Solr was restarted as a SolrCloud, maybe by an
automated script or an init service?
If you created a core in a standalone mode and then restart the same
configuration in a SolrCloud mode, it would know that you have those
collections/cores, but will not be able to find any configu
Hi,
I am doing some date queries and I was wondering if there is some way of
getting this query to work.
( ( !{!field f=collection_date_range op=Within v='[2000-01-01 TO
2018-09-18]'} AND !{!field f=collection_date_range op=Within v='[1960-01-01
TO 1998-09-18]'} ) AND collection_season:([1999-05
I have set up my solr as a standalone service and its URL is
http://solr.server:8983/solr. I opened 8983 on solr.server to anyone, and
solr can be accessed from laptops/desktops. But when I tried to access the
solr from some servers, I got the error of SolrCore Initialization Failures.
Walter,
On 9/18/18 11:24, Walter Underwood wrote:
> It isn’t very clear from that page, but the two backup methods make
> a copy of the indexes in a commit-aware way. That is all. One
> method copies them to a new server, the other to files in the d
Hi
We are using Solr 7.2.1 with SolrCloud with 35 collections with 1 node ZK
ensemble (in lower environment, we will have 3 nodes ensemble) in AWS. We
are testing to see if we have Async Solr Cloud backup (
https://lucene.apache.org/solr/guide/7_2/collections-api.html#backup) done
every time we a
Is it possible to get highlighting in more like this queries? My initial
attempts seem to indicate that it isn't possible (I've only attempted this
via modifying MLT query urls)
(I'm looking for something similar to hl=true&hl.fl=field1,field5,field6 in
a normal search)
Thanks,
Matt
It isn’t very clear from that page, but the two backup methods make a copy
of the indexes in a commit-aware way. That is all. One method copies them
to a new server, the other to files in the data directory.
Database backups generally have a separate backup format which is
independent of the data
Take a look at the metrics available starting with 6.4
(https://lucene.apache.org/solr/guide/7_4/performance-statistics-reference.html).
Or just hit: http://blahblahblah/solr/admin/metrics to see them all.
WARNING: be prepared to spend an hour looking through the list, there
are a _lot_ of them...
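For example, the endpoint can be narrowed with the group and prefix parameters rather than fetched whole (host and handler names here are placeholders):

```shell
# all JVM-level metrics
curl 'http://localhost:8983/solr/admin/metrics?group=jvm'
# only the /select request-handler metrics for each core
curl 'http://localhost:8983/solr/admin/metrics?group=core&prefix=QUERY./select'
```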
Walter,
On 9/17/18 11:39, Walter Underwood wrote:
> Do not use Solr as a database. It was never designed to be a
> database. It is missing a lot of features that are normal in
> databases.
>
> [...] * no real backups (Solr backup is a cold server,
All,
Our single-instance Solr server is just getting its first taste of
production load, and I'm seeing this periodically:
java.lang.IllegalStateException: Connection is still allocated
The stack trace shows it's coming from HTTP Client as called
Hello,
I'm setting up a new Solr server and am running into an issue I haven't
experienced in previous Solr installations. When I navigate to a core's
"Dataimport" tab (without even triggering an import request), several of the
HTTP requests made by the admin UI fail. Checking the Solr logs,
You have to increase your RAM. We have upgraded our Solr cluster to 12 Solr
nodes, each with 64GB RAM. Our shard size is around 25GB, and each server hosts
only one shard (leader or replica). Performance is very good.
For better performance, memory needs to be larger than your shard size.
Ki
On 9/18/2018 1:11 AM, zhenyuan wei wrote:
I have 6 machines, and each machine runs a Solr server; each Solr server uses
18GB RAM. Total document number is 3.2 billion, 1.4TB.
My collection's replication factor is 1. The collection shard number is
60; currently each shard is 20~30GB.
15 fields per document.
On 9/5/2018 7:17 AM, shruti suri wrote:
I am using a custom handler with the edismax parser. I am using the uf parameter in
the handler to restrict some fields from search. But uf is not working with
post filter (fq). I want to restrict the same fields in fq, so that people
cannot filter on some fields.
On 9/18/2018 4:03 AM, zhenyuan wei wrote:
Does solr support rollback or any method to do the same job?
Like update/add/delete a document, can I rollback them?
With SolrCloud, rollback is not supported. This is because a typical
SolrCloud install spreads the index across multiple servers.
Hi Zheng,
I am using version 6.1.0. Basically, I want a few fields to be blocked in fq.
Thanks
-
Regards
Shruti
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Hi
I am using CSV formatted index updates to index a tab delimited file.
I have defined "TolerantUpdateProcessorFactory" with "maxErrors=-1" in
the solrconfig.xml to skip any document update errors and proceed to
update the remaining documents without failing.
However it does not seem to be workin
Hi Zahra,
I’m not sure I understand your question. Could you explain with more detail
what it is that you want to achieve?
> On 18 Sep 2018, at 06:00, Zahra Aminolroaya wrote:
>
> Hello Alfonso,
>
>
> Thanks. You used the dedupe updateRequestProcessorChain, so for this
> application we canno
Surprisingly, you can delete a recently added, not-yet-committed doc;
Lucene tracks the sequence, and after the following commit that
document will be gone.
"To rollback" a delete/update, one needs to re-send the original doc. But literally
there is no fine-grained rollback operation.
On Tue, Sep 18, 2018 at
Hi all,
Does solr support rollback or any method to do the same job?
Like update/add/delete a document, can I rollback them?
Best~
TinsWzy
Hi Bojan,
This will be fixed in the upcoming 7.5.0 release. Thank you for reporting this!
> On 6 Sep 2018, at 18:16, Bojan Šmid wrote:
>
> Hi,
>
> it seems the format of cache mbeans changed with 7.4.0. And from what I
> see similar change wasn't made for other mbeans, which may mean it was
*Boost Rules work -->*
q=*:*&spellcheck=true&spellcheck.dictionary=en&spellcheck.collate=true&spellcheck.q=&defType=edismax&bq=(code_string:258030ID^100.0)&fq=(allCategories_string_mv:A20012)&fq=(((catalogId:"xyzProductCatalog")
AND
(catalogVersion:Online)))&start=0&rows=8&facet=true&facet.fiel
Hi,
One way is re-upload config files via zkcli.sh and reload the collection.
See following.
https://lucene.apache.org/solr/guide/7_4/command-line-utilities.html
Thanks,
Yasufumi.
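Concretely, the upload-and-reload cycle suggested above might look like this (paths, config name, ZK host, and collection name are placeholders):

```shell
# push the config dir containing the edited synonyms.txt back to ZooKeeper
./server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181 \
  -cmd upconfig -confname myconfig -confdir /path/to/myconfig/conf
# reload so every replica re-reads the synonyms file
curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection'
```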
2018年9月18日(火) 14:30 Rathor, Piyush (US - Philadelphia) :
> Hi All,
>
>
>
> I am new to solr cloud.
>
>
>
> Can you
On Mon, 2018-09-17 at 17:52 +0200, Vincenzo D'Amore wrote:
> org.apache.solr.common.SolrException: Error while processing facet
> fields:
> java.lang.OutOfMemoryError: Java heap space
>
> Here the complete stacktrace:
> https://gist.github.com/freedev/a14aa9e6ae33fc3ddb2f02d602b34e2b
>
> I suppos
I have 6 machines, and each machine runs a Solr server; each Solr server uses
18GB RAM. Total document number is 3.2 billion, 1.4TB.
My collection's replication factor is 1. The collection shard number is
60; currently each shard is 20~30GB.
15 fields per document. Query rate is slow now, maybe 100-500 requests