Can you post the part of schema.xml where the fields are defined?
The error seems to be an incompatibility between your schema and what you
are trying to import.
By the way, I think you should find further information about this error in
your log.
Best,
Sébastien
On Sat, Dec 15, 2018 at 05:51
": "",
"statusMessages": {
"Total Requests made to DataSource": "1",
"Total Rows Fetched": "1",
"Total Documents Processed": "0",
"Total Documents Skipped": "0",
"Full Dump
Images and the like are aggressively stripped by the e-mail server,
so there's no error information in your post.
Exactly _how_ are you importing? Data Import Handler? If so, please
show your config as well.
Best,
Erick
On Fri, Dec 14, 2018 at 4:19 PM Alexis Aravena Silva
wrote:
>
Hello,
I'm using Solr 7.5 and I have the following error when importing data from SQL
Server:
[cid:45b7e3fd-bb2c-4308-8f4d-16f1cd6a38de]
The message doesn't say anything else, so I don't know what is wrong.
Could you help me with this, please?
Note: I assig
w to solr. index folder
> contains all my indexes or data on which we do search. After last reboot of
> server we encountered this error.
> coreStore_shard1_replica1 is the only folder we have and as far as I know we
> do not have another replica.
>
> Erick: How much extra roo
cally.
Answer: I am not sure about this. I am very new to solr. index folder
contains all my indexes or data on which we do search. After last reboot of
server we encountered this error.
coreStore_shard1_replica1 is the only folder we have and as far as I know we
do not have another replica.
Erick
r front end I am getting "SolrCore Initialization Failures"
> coreStore_shard1_replica1:
> rg.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
> Error opening new searcher
>
> Can someone please review and let me know how to recover the system?
>
> Below is the error logs which I got from
> /solr/solr-
1_replica1:
rg.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Error opening new searcher
Can someone please review and let me know how to recover the system?
Below is the error logs which I got from
/solr/solr-6.4.1/server/logs/solr.log file.
-
20YY-MM-DD 11:05:04.327 INFO (main) [ ] o.e.j.s.Server
jetty-9.3.14.v20161028
20YY-M
Hi,
If I send a query that is grammatically incorrect, the error message is
shown in the web browser but not in my Python request code.
It just outputs "400 Bad Request".
How do I get the error message in my Python code?
Below is s
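When Solr rejects a request, the details are usually in the response body, not just the status line. A minimal sketch of reading them (assuming the Python `requests` library and `wt=json` responses; the helper name is my own):

```python
import json

def solr_error_message(status_code, body_text):
    """Pull Solr's error message out of an HTTP error response body.

    For a bad query Solr typically returns a JSON body like
    {"error": {"msg": "...", "code": 400}} alongside the 400 status,
    so read the body instead of stopping at "400 Bad Request".
    """
    if status_code < 400:
        return None  # not an error response
    try:
        payload = json.loads(body_text)
        return payload.get("error", {}).get("msg", body_text)
    except ValueError:
        return body_text  # non-JSON body, e.g. an HTML error page

# With requests, the point is that resp.text is still readable on a 400:
#   resp = requests.get(url, params={"q": "((", "wt": "json"})
#   print(solr_error_message(resp.status_code, resp.text))
```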
ted to the synonyms.txt file size: if >
> 1,5 M Solr returns an error:
>
> adding: synonyms.txt (deflated 76%)
> {
> "responseHeader":{
> "status":500,
> "QTime":24325},
> "error":{
> "msg":"Keeper
Hi all,
I'm experiencing a problem when uploading a new configset on a Solr
instance running in cloud mode.
The problem seems to be related to the synonyms.txt file size: if it exceeds
1.5 MB, Solr returns an error:
adding: synonyms.txt (deflated 76%)
{
"responseHeader":
ven if you aren’t on 7.5, the instructions will work for earlier versions
since those params have been in ZK forever.
Cassandra
> On Dec 4, 2018, at 6:38 PM, Edward Ribeiro wrote:
>
> By the default, ZooKeeper's znode maximum size limit is 1MB. If you try to
> send more than thi
By default, ZooKeeper's znode maximum size limit is 1MB. If you try to
send more than this, an error occurs. You can increase this size limit
but it has to be done both on server (ZK) and client (Solr) side. See this
discussion for more details:
http://lucene.472066.n3.nabble.com/H
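The two-sided change described above can be sketched as JVM settings; this is a sketch assuming a default install layout, with an example value of 10 MB (an illustration, not a recommendation):

```shell
# ZooKeeper server side: raise the znode limit via the ZK JVM flags,
# e.g. in conf/zookeeper-env.sh (the exact file varies by install)
SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Djute.maxbuffer=10485760"

# Solr (ZK client) side: pass the same limit in bin/solr.in.sh
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=10485760"
```

Both sides must agree, since jute.maxbuffer is checked independently by the client and the server.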
Hi all,
I'm experiencing a problem when uploading a new configset on a Solr 7.5
instance running in cloud mode.
The problem seems to be related to the synonyms.txt file size: if it exceeds
1.5 MB, Solr returns an error:
adding: synonyms.txt (deflated 76%)
{
"responseHeader":
What might the implications be if a DIH status request returns an error
response other than a 404?
A 404 says either the handler or the core probably don't exist.
My guess, and I admit that I haven't read the code closely, is that if
the handler exists but is so broken that it canno
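The guess above can be written down as a rough decision table (a sketch; the function name and the mapping beyond the 404 case are my assumptions, not documented DIH behavior):

```python
def interpret_dih_status(http_status):
    """Rough interpretation of the HTTP code from a DIH status request."""
    if http_status == 404:
        return "handler or core probably does not exist"
    if http_status == 200:
        return "handler answered; inspect statusMessages in the response body"
    if http_status >= 500:
        return "handler/core exists but is broken enough to fail internally"
    return "unexpected status; check solr.log"
```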
On 11/7/2018 11:36 PM, nettadalet wrote:
Shawn Heisey-2 wrote
I do think that it is proper for empty parentheses to throw a syntax
error. The text of the exception message is saying that the parser
encountered the ) character at a point when it did not expect to
encounter that character.
A
're trying to do and
how things are set up.
Best,
Erick
On Thu, Nov 8, 2018 at 10:15 AM Vidhya Kailash wrote:
>
> Any idea why I am getting this error inspite of the following:
>
> I have the customupdateprocessor jar in contrib/customupdate/lib directory
> I have the solrconf
Shawn Heisey-2 wrote
> I don't know whether that actually is written anywhere. I suspect it's
> not.
>
> I do think that it is proper for empty parentheses to throw a syntax
> error. The text of the exception message is saying that the parser
> encountered the )
Any idea why I am getting this error despite the following:
I have the customupdateprocessor jar in contrib/customupdate/lib directory
I have the solrconfig.xml with the lib directives to this jar as well as
solr-core.jar
and I see those jars being loaded on startup in the logs:
2018-11-08
On 11/7/2018 5:10 AM, nettadalet wrote:
I get the following error:
org.apache.solr.common.SolrException: org.apache.solr.search.SyntaxError:
Cannot parse '((TITLE_Name_t:( la verita))) AND ((TITLE_Artist_t:( ))) AND
(TITLE_Type_e : "Audio")': Encountered " ")&q
We are using Solr 4.6
(yes, I know. We plan an update in the near future)
I get the following error:
org.apache.solr.common.SolrException: org.apache.solr.search.SyntaxError:
Cannot parse '((TITLE_Name_t:( la verita))) AND ((TITLE_Artist_t:( ))) AND
(TITLE_Type_e : "Audio"
Thank you. Will check all options and let you know.
From: Alexandre Rafalovitch
Sent: Sunday, October 21, 2018 8:09:34 PM
To: solr-user
Subject: Re: Error while indexing Thai core with SolrCloud
Ok,
That may have been a bit too much :-) However, it was useful
is a likely scenario.
4) If all else fails, use something like Wireshark and capture the
network-level traffic during this error. This will show you exactly
what is being passed around (https://www.wireshark.org/). This is
using a power-hammer on a nail, but - if you read this far - I suspect
you are
Hi,
Thank you.
Full stacktrace below
"core_node_name":"172.19.218.201:8082_solr_core_th"}DEBUG - 2018-10-19
02:13:20.343; org.apache.zookeeper.ClientCnxn$SendThread; Reading reply
sessionid:0x200b5a04a770005, packet:: clientPath:null serverPath:null
finished:false header:: 356,1 replyHeader
gt;
>
> From: Alexandre Rafalovitch
> Sent: Sunday, October 21, 2018 5:18:24 PM
> To: solr-user
> Subject: Re: Error while indexing Thai core with SolrCloud
>
> I would check if the Byte-order mark is the cause:
> https://urldefense.proofpoi
Hi Alexandre,
Thank you.
How does this explain that the issue exists only with SolrCloud and not standalone?
Moshe
From: Alexandre Rafalovitch
Sent: Sunday, October 21, 2018 5:18:24 PM
To: solr-user
Subject: Re: Error while indexing Thai core with SolrCloud
I would
I would check if the Byte-order mark is the cause:
https://en.wikipedia.org/wiki/Byte_order_mark
The error message does not seem to be a perfect match to this issue,
but a good thing to check anyway.
That symbol (right at the file start) is usually invisible and can
trip Java XML parsers for
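A quick way to check for (and remove) the mark before handing a file to an XML parser; a sketch in Python:

```python
import codecs

def strip_utf8_bom(data: bytes) -> bytes:
    """Drop a leading UTF-8 byte-order mark (EF BB BF).

    The BOM is invisible in most editors but sits right at the start
    of the file, which is exactly where strict XML parsers trip.
    """
    if data.startswith(codecs.BOM_UTF8):
        return data[len(codecs.BOM_UTF8):]
    return data

# Detection only:
#   with open("doc.xml", "rb") as f:
#       has_bom = f.read(3) == codecs.BOM_UTF8
```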
Hi,
We have a specific exception that happens only on the Thai core and only when
we're using SolrCloud.
The same indexing activity runs successfully on the EN core with SolrCloud,
or with the Thai core in a standalone configuration.
We're running on Linux with Solr 4.6
and with -Dfile.encod
Hi Atita,
It would be good to consider upgrading to take advantage of improvements
like better memory consumption and better authentication.
On a side note, it is also good to upgrade now in Solr 7, as Solr Indexes
can only be upgraded from the previous major release version (Solr 6) to
the
Hi Andrzej,
We've been weighing a lot of other reasons to upgrade our Solr for a
very long time, like better authentication handling, backups using CDCR, the new
Replication mode, and this probably has just given us another reason to
upgrade.
Thank you so much for the suggestion, I think it's good to
I know it’s not much help if you’re stuck with Solr 6.1 … but Solr 7.5 comes
with an alternative strategy for SPLITSHARD that doesn't consume as much memory
and consumes almost no additional disk space on the leader. This strategy
can be turned on with the “splitMethod=link” parameter.
> On 4 Oct
Hi Edwin,
Thanks for following up on this.
So here are the configs :
Memory - 30G - 20 G to Solr
Disk - 1TB
Index = ~ 500G
and I think this could be happening because, during a split shard, the
unsplit index + split index persist on the instance and may b
Hi Atita,
What is the amount of memory that you have in your system?
And what is your index size?
Regards,
Edwin
On Tue, 25 Sep 2018 at 22:39, Atita Arora wrote:
> Hi,
>
> I am working on a test setup with Solr 6.1.0 cloud with 1 collection
> sharded across 2 shards with no replication. When t
On 9/29/2018 3:08 AM, Ryan Qin wrote:
I’m working on a project which uses solr as search engine. I found I
cannot get the root cause of error from SolrJ.
CloudSolrClient uses LBHttpSolrClient internally. This client has a
tendency to wrap all exceptions in the "No live SolrServers&quo
s search engine. I found I
> cannot get the root cause of error from SolrJ.
>
>
>
> *Case 1*
>
> I try to create a field with wrong field type
>
>
>
> Map<String, Object> fieldd = *new* LinkedHashMap<String, Object>();
>
> fieldd.put("name", "e&quo
Hi there,
I'm working on a project which uses Solr as a search engine. I found I cannot get
the root cause of an error from SolrJ.
Case 1
I try to create a field with wrong field type
Map<String, Object> fieldd = new LinkedHashMap<String, Object>();
fieldd.put("name", "e");
f
bq. In all my solr servers I have 40% free space
Well, clearly that's not enough if you're getting this error: "No
space left on device"
Solr/Lucene need _at least_ as much free space as the indexes occupy.
In some circumstances it can require more. It sounds like you
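That rule of thumb can be checked mechanically; a sketch (the function name and the 1x default factor are mine, and merges can transiently need more than this):

```python
import os
import shutil

def has_merge_headroom(index_dir: str, factor: float = 1.0) -> bool:
    """Return True if the filesystem holding index_dir has at least
    `factor` times the index's current size free.

    Lucene segment merges (and full replication recovery) rewrite the
    index, so free space below the index size risks "No space left on
    device" mid-merge.
    """
    index_size = sum(
        os.path.getsize(os.path.join(root, name))
        for root, _dirs, files in os.walk(index_dir)
        for name in files
    )
    free = shutil.disk_usage(index_dir).free
    return free >= factor * index_size
```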
. Downloaded x!=y
OR
SolrException: Unable to download completely. (Downloaded x of y
bytes) No space left on device
OR
Error deleting file:
NoSuchFileException: /opt/solr//data/index./
I get all these errors when a replica is in recovery mode, sometimes after
a physical machine failure or sometimes after
Hi,
I am working on a test setup with Solr 6.1.0 cloud with 1 collection
sharded across 2 shards with no replication. When triggered a SPLITSHARD
command it throws "java.lang.OutOfMemoryError: Java heap space" every time.
I tried this with multiple heap settings of 8, 12 & 20G but every time it
doe
On 9/21/2018 10:31 AM, Christopher Schultz wrote:
For those interested, it looks like I was naïvely using
BasicHttpClientConnectionManager, which is totally inappropriate in a
multi-user threaded environment.
I switched to PooledHttpClientConnectionManager and that seems to be
working much bette
mmit" at the end, is there
> anything wrong with how we are using SolrJ client? Are instances
> of SolrJClient not thread-safe? My assumption was that they were
> threadsafe and that HTTP Client would manage the connection pool
> under the covers.
ber 18, 2018 4:18 PM
> To: solr-user
> Subject: Re: weird error for accessing solr
>
> bq. can you share *ALL* of...
>
> from both machines!
> On Tue, Sep 18, 2018 at 12:40 PM Shawn Heisey wrote:
> >
> > On 9/18/2018 12:23 PM, Gu, Steve (CDC/OD/OADS) (CTR
t is not a solr issue.
Thanks a lot
Steve
-Original Message-
From: Erick Erickson
Sent: Tuesday, September 18, 2018 4:18 PM
To: solr-user
Subject: Re: weird error for accessing solr
bq. can you share *ALL* of...
from both machines!
On Tue, Sep 18, 2018 at 12:40 PM Shawn Heisey wrote:
&
I opened 8983 on solr.server to anyone, and
> > solr can be accessed from laptops/desktops. But when I tried to access the
> > solr from some servers, I got the error of SolrCore Initialization
> > Failures. The left nav on the page is shown but indicates that the solr is
&
servers, I got the error of SolrCore Initialization Failures.
The left nav on the page is shown but indicates that Solr is set up as
SolrCloud, which it is not.
On the dashboard when you see the Cloud tab, can you share *ALL* of
what's under JVM in the Args section?
Thanks,
Shawn
Alex,
I tried to curl http://solr.server:8983/solr/ and got different results from
different machines. I also did shift-reload which gave me the same result. So
it does not seem to be a browser cache issue.
I also shut down solr and tried to access it. It gave connection failure error
for
t; -Original Message-
> From: Alexandre Rafalovitch
> Sent: Tuesday, September 18, 2018 2:39 PM
> To: solr-user
> Subject: Re: weird error for accessing solr
>
> Sounds like your Solr was restarted as a SolrCloud, maybe by an automated
> script or an init service?
>
: Alexandre Rafalovitch
Sent: Tuesday, September 18, 2018 2:39 PM
To: solr-user
Subject: Re: weird error for accessing solr
Sounds like your Solr was restarted as a SolrCloud, maybe by an automated
script or an init service?
If you created a core in a standalone mode and then restart the same
configuration
files (because it will expect them in ZooKeeper, not on disk). So,
that would explain the error.
I would focus on the restart point, maybe check the logs (in
server/logs) and see if there are hints there.
Regards,
Alex.
P.s. Unless you are able to see the SolrCloud from one computer and
I have set up my solr as a standalone service and its URL is
http://solr.server:8983/solr. I opened 8983 on solr.server to anyone, and
solr can be accessed from laptops/desktops. But when I tried to access the
solr from some servers, I got the error of SolrCore Initialization Failures
ances of
SolrJClient not thread-safe? My assumption was that they were
threadsafe and that HTTP Client would manage the connection pool under
the covers.
Here is the full stack trace:
com.chadis.api.business.RegistrationProcessor- Error processing
registration request
java.lang.IllegalState
Thanks Shawn and Erick.
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
"When used in combination with the RSA JSSE and RSA JCE providers, this
crypto module provides a FIPS-compliant (FIPS 140-2) implementation. "
This is written in the oracle website so I suppose JsafeJCE provides a
FIPS-compliant JSSE.
previously had our
> > maxBooleanClause limit set to 20k (eek!). but it worked phenomenally well
> > and i think our record amount from a user was ~19k items.
> >
> > we're now on 7.4 Cloud. i'm getting this error when testing with a measly
> > 600 sk
s is for users to
> drop in an untold amount of product skus. we previously had our
> maxBooleanClause limit set to 20k (eek!). but it worked phenomenally well
> and i think our record amount from a user was ~19k items.
>
> we're now on 7.4 Cloud. i
now on 7.4 Cloud. i'm getting this error when testing with a measly
600 skus:
org.apache.lucene.util.graph.GraphTokenStreamFiniteStrings.articulationPointsRecurse(GraphTokenStreamFiniteStrings.java:278)\n\tat
there's a lot more to the error message but that is the tail end of it all
and
On 9/11/2018 10:15 PM, Zahra Aminolroaya wrote:
Thanks Erick. We used to use TrieLongField for our unique id and in the
document it is said that all Trie* fieldtypes are casting to
*pointfieldtypes. What would be the alternative solution?
I've never heard of Trie casting to Point.
Point is the
People usually just use a string field in place of longs, etc.
On Tue, Sep 11, 2018 at 9:15 PM Zahra Aminolroaya
wrote:
>
> Thanks Erick. We used to use TrieLongField for our unique id and in the
> document it is said that all Trie* fieldtypes are casting to
> *pointfieldtypes. What would be the a
Thanks Erick. We used to use TrieLongField for our unique id and in the
document it is said that all Trie* fieldtypes are casting to
*pointfieldtypes. What would be the alternative solution?
Best,
Zahra
d solr collection, it works
> normally.
> but when i try to render it from multi shards solr collection, i've found
> an error message below on my geoserver. Pls help
>
>
>
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
Shalvak,
On 9/11/18 01:51, Shalvak Mittal (UST, ) wrote:
> I have recently installed solr 7.2.1 in my ubuntu 16.04 system.
> While creating a new core, the solr logging shows an error saying
>
>
> " Caused by: org.apache.solr.
On 9/10/2018 11:51 PM, Shalvak Mittal (UST, ) wrote:
I have recently installed solr 7.2.1 in my ubuntu 16.04 system. While creating
a new core, the solr logging shows an error saying
" Caused by: org.apache.solr.common.SolrException: fips module was not loaded."
I have never
t; all of our Trie* fields to *pointtype Fields.
>
> Our unique key field type is long, and we changed our long field type
> something like below;
>
> indexed="false"/>
>
> We get the error uniqueKey field can not be configured to use a Points based
> FieldType.
&
We read that in Solr 7, Trie* fields are deprecated, so we decided to change
all of our Trie* fields to *Point type fields.
Our unique key field type is long, and we changed our long field type to
something like below:
We get the error uniqueKey field can not be configured to use a Points based
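A common workaround (mentioned elsewhere in this thread) is to fall back to a string-typed uniqueKey; a schema.xml sketch under that suggestion (the field and type names here are assumptions):

```xml
<!-- Sketch: a string-typed uniqueKey avoids the Points-based restriction -->
<fieldType name="string" class="solr.StrField" sortMissingLast="true"/>
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<uniqueKey>id</uniqueKey>
```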
Hi,
I have recently installed solr 7.2.1 in my ubuntu 16.04 system. While creating
a new core, the solr logging shows an error saying
" Caused by: org.apache.solr.common.SolrException: fips module was not loaded."
I have downloaded the necessary jar files like cryptoj.jar and copi
can check the solr.log or the solr-console.log. Another
> > option
> > >>>>> is to
> > >>>>> activate the debug mode in the Solr console before running the
> > >>>>> data import.
> > >>>>>
> > >>>>> Andrea
> > >>>>>
> > >>>>> On 10/09/2018 16:57, Monique Monteiro wrote:
> > >>>>> > Hi all,
> > >>>>> >
> > >>>>> > I have a data import handler configured with an Oracle SQL
> > query
> > >>>>> which
> > >>>>> > works like a charm. However, when I have the same query
> > >>>>> configured in
> > >>>>> > Solr's data import handler, nothing happens, and it returns:
> > >>>>> >
> > >>>>> >
> > >>>>> >
> > >>>>> > "*Total Requests made to DataSource*": "1",
> > >>>>> >
> > >>>>> > "*Total Rows Fetched*": "0",
> > >>>>> >
> > >>>>> > "*Total Documents Processed*": "0",
> > >>>>> >
> > >>>>> > "*Total Documents Skipped*": "0",
> > >>>>> >
> > >>>>> > "Full Dump Started": "2018-09-06 18:15:59", "Full Import
> > >>>>> failed": "2018-09-06
> > >>>>> > 18:16:02"
> > >>>>> >
> > >>>>> > Has anyone any ideas about what may be happening? Is there
> > any
> > >>>>> log file
> > >>>>> > which can tell the error?
> > >>>>> >
> > >>>>> > Thanks in advance,
> > >>>>> >
> > >>>>>
> > >>>>>
> > >>>>>
> > >>>>> --
> > >>>>> Monique Monteiro
> > >>>>> Twitter: http://twitter.com/monilouise
> > >>
> >
> >
>
> --
> Monique Monteiro
> Twitter: http://twitter.com/monilouise
; > > > > > We had a running cluster with CDCR and there were some issues
> with
> > > > > > indexing on Source cluster which got resolved after restarting
> the
> > > > nodes
> > > > > > (in my absence...) a
rote:
> >>>>>
> >>>>> You can check the solr.log or the solr-console.log. Another
> option
> >>>>> is to
> >>>>> activate the debug mode in the Solr console before running the
urce*": "1",
>
> "*Total Rows Fetched*": "0",
>
> "*Total Documents Processed*": "0",
>
> "*Total Documents Skipped*": "0",
>
> "Full Dump Started": "2018-09-06 18:15:59", "Full Import
failed": "2018-09-06
> 18:16:02"
>
> Has anyone any ideas about what may be happening? Is there any
log file
> which can tell the error?
>
> Thanks in advance,
>
--
Monique Monteiro
Twitter: http://twitter.com/monilouise
te:
> >>> > Hi all,
> >>> >
> >>> > I have a data import handler configured with an Oracle SQL query
> >>> which
> >>> > works like a charm. However, when I have the same query
> >>> confi
which
> works like a charm. However, when I have the same query
configured in
> Solr's data import handler, nothing happens, and it returns:
>
>
>
> "*Total Requests made to DataSource*": "1",
>
> "*Tot
a data import handler configured with an Oracle SQL query
> > which
> > > works like a charm. However, when I have the same query
> > configured in
> > > Solr's data import handler, nothing happens, and it returns:
> > >
> >
Copy and paste the text of the error. Pictures of text aren’t very useful, even
when
they do make it through the mail reflector.
Also, expand the error (the small info button) to get a stack trace.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
>
s Skipped*": "0",
>
> "Full Dump Started": "2018-09-06 18:15:59", "Full Import
failed": "2018-09-06
> 18:16:02"
>
> Has anyone any ideas about what may be happening? Is there any
log file
> which can tell the error?
>
> Thanks in advance,
>
--
Monique Monteiro
Twitter: http://twitter.com/monilouise
otal Documents Processed*": "0",
> >
> > "*Total Documents Skipped*": "0",
> >
> > "Full Dump Started": "2018-09-06 18:15:59", "Full Import failed":
> "2018-09-06
> > 18:16:02"
> >
> > Has anyone any ideas about what may be happening? Is there any log file
> > which can tell the error?
> >
> > Thanks in advance,
> >
>
>
--
Monique Monteiro
Twitter: http://twitter.com/monilouise
quot;0",
"*Total Documents Skipped*": "0",
"Full Dump Started": "2018-09-06 18:15:59", "Full Import failed": "2018-09-06
18:16:02"
Has anyone any ideas about what may be happening? Is there any log file
which can tell the error?
Thanks in advance,
quot;*Total Rows Fetched*": "0",
"*Total Documents Processed*": "0",
"*Total Documents Skipped*": "0",
"Full Dump Started": "2018-09-06 18:15:59", "Full Import failed": "2018-09-06
18:16:02"
Has an
; indexing on Source cluster which got resolved after restarting the
> > > nodes
> > > > > (in my absence...) and now I see below errors on a shard at Target
> > > > > cluster. Any suggestions / ideas what could have cause
rting the
> > nodes
> > > > (in my absence...) and now I see below errors on a shard at Target
> > > > cluster. Any suggestions / ideas what could have caused this and
> whats
> > > the
> > > > best way to recover.
> &g
gt; > the
> > > best way to recover.
> > >
> > > Thnx
> > >
> > > Caused by: org.apache.solr.common.SolrException: Error opening new
> > searcher
> > > at
> > > org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java
see below errors on a shard at Target
> > cluster. Any suggestions / ideas what could have caused this and whats
> the
> > best way to recover.
> >
> > Thnx
> >
> > Caused by: org.apache.solr.common.SolrException: Error opening new
> searcher
> &g
) and now I see below errors on a shard at Target
> cluster. Any suggestions / ideas what could have caused this and whats the
> best way to recover.
>
> Thnx
>
> Caused by: org.apache.solr.common.SolrException: Error opening new searcher
> at
> org.apache.solr.core.Solr
way to
recover.
Thnx
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2069)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2189)
at org.apache.solr.core.SolrCore.getSearcher
Hi, I want to try rendering Solr spatial data from a GeoServer layer.
When I try to render it from a single-shard Solr collection, it works
normally,
but when I try to render it from a multi-shard Solr collection, I've
found the error message below on my GeoServer. Please help.
Thanks Shawn. I wrote my own filter. I attached my jar here.
I found your answer here:
http://lucene.472066.n3.nabble.com/How-to-load-plugins-with-Solr-4-9-and-SolrCloud-td4312113.html
Based on your answer, is it possible that blob Api does not work for my own
filter jar?
norm.jar
tps://github.com/RBMHTechnology/vind> library from solr 5 to
7.4.0
and I
am facing an error which I have no idea how to solve...
The library provides a wrapper (and some extra stuff) to develop
search
tools over Solr and uses SolrJ to access it, more info about it can
be
seen
in the
On 8/30/2018 3:14 AM, Salvo Bonanno wrote:
The Solr version in both environments is 7.4.0.
It looks like there was a problem using the IntPointField type for a key
field in my schema; I've changed the type to string and now everything
works.
Seeing that problem in 7.4.0 definitely sounds like you've
n 8/29/2018 1:27 AM, Salvo Bonanno wrote:
> > [error]
> > corename:
> > org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
> > Could not load conf for core corename: Can't load schema
> > /opt/solr/server/solr/corename/conf/managed-sc
On 8/29/2018 1:27 AM, Salvo Bonanno wrote:
[error]
corename:
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Could not load conf for core corename: Can't load schema
/opt/solr/server/solr/corename/conf/managed-schema: Plugin init
failure for [schema.xml] fiel
, but on the new
> one, with exactly the same configuration, gives an Initialization
> Error.
>
> [error]
> corename:
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
> Could not load conf for core corename: Can't load schema
> /opt
; >>> Best,
> >>> Alfonso.
> >>>
> >>> On Wed, 29 Aug 2018 at 12:57, Andrea Gazzarini
> >> wrote:
> >>>> Hi Alfonso,
> >>>> could you please paste an extract of the client code? Specifically
> those
> >>
but in your case, if I
understand correctly, you're sending plain query parameters (through SolrJ).
Best,
Andrea
On 29/08/2018 12:45, Alfonso Noriega wrote:
Hi,
I am implementing a migration of Vind
<https://github.com/RBMHTechnology/vind> library from solr 5 to 7.4.0
and I
am facing an error whic
gt;> The line you mentioned is dealing with ContentStream which as far as I
> >> remember wraps the request body, and not the request params. So as
> >> request body Solr expects a valid JSON payload, but in your case, if I
> >> got you, you're sending plain query
ion of Vind
<https://github.com/RBMHTechnology/vind> library from solr 5 to 7.4.0
and I
am facing an error which I have no idea how to solve...
The library provides a wrapper (and some extra stuff) to develop search
tools over Solr and uses SolrJ to access it, more info about it can be
12:45, Alfonso Noriega wrote:
> > Hi,
> > I am implementing a migration of Vind
> > <https://github.com/RBMHTechnology/vind> library from solr 5 to 7.4.0
> and I
> > am facing an error which I have no idea how to solve...
> >
> > The library provides a wrapp
7.4.0 and I
am facing an error which I have no idea how to solve...
The library provides a wrapper (and some extra stuff) to develop search
tools over Solr and uses SolrJ to access it, more info about it can be seen
in the public repo, but basically all requests are done to solr through a
client
Hi,
I am implementing a migration of Vind
<https://github.com/RBMHTechnology/vind> library from solr 5 to 7.4.0 and I
am facing an error which I have no idea how to solve...
The library provides a wrapper (and some extra stuff) to develop search
tools over Solr and uses SolrJ to access it
ng flawlessly on the former server, but on the new
one, with exactly the same configuration, gives an Initialization
Error.
[error]
corename:
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Could not load conf for core corename: Can't load schema
/opt/solr/serve
To: "solr-user@lucene.apache.org" <solr-user@lucene.apache.org>
Subject: Spring Content Error in Plugin
Hi,
We have a custom java plugin that leverages the UpdateRequestProcessorFactory
to push data to multiple cores when a single core is written to. We are
building the plu