What Jörn said.
Your jar should be nothing but your custom code. Usually I cheat in
IntelliJ and check the box for artifacts that says something like
"only compiled output"
On Sun, Feb 10, 2019 at 10:37 PM Jörn Franke wrote:
>
> You can put all solr dependencies as provided. They are already on the class
> path - no need to put them in the fat jar.
Do you have something about legacyCloud in your CLUSTERSTATUS response?
I have "properties":{"legacyCloud":"false"}
In the legacy cloud mode, also called format 1, the state is stored in a
central clusterstate.json node in ZK, which does not scale well. In the
modern mode every collection has its own state.json.
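As a rough sketch of the difference (the helper names here are mine; the response shape follows the CLUSTERSTATUS properties quoted above):

```python
import json

def uses_legacy_state(clusterstatus):
    """Check the legacyCloud cluster property in a CLUSTERSTATUS-style response."""
    props = clusterstatus.get("cluster", {}).get("properties", {})
    # Solr reports the flag as the string "true"/"false", not a boolean
    return props.get("legacyCloud") == "true"

def state_path(collection, legacy):
    """ZK node that holds the collection's state in each format."""
    return "/clusterstate.json" if legacy else "/collections/%s/state.json" % collection

response = json.loads('{"cluster":{"properties":{"legacyCloud":"false"}}}')
print(uses_legacy_state(response))           # False
print(state_path("test_collection", False))  # /collections/test_collection/state.json
```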
Please do not cross-post, this thread is for the users mailing list, not dev.
You have got the answer several times already: clean your input data. You
obviously parse some PDF that contains bad data that results in a single token
(word) being >32 KB. Clean your input data either in your application
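For illustration, a minimal pre-indexing filter along these lines (Lucene's hard per-term limit is 32766 bytes; the function name is mine):

```python
MAX_TOKEN_BYTES = 32766  # Lucene's hard limit on the byte length of one indexed term

def clean_text(text, max_bytes=MAX_TOKEN_BYTES):
    """Drop any whitespace-separated token whose UTF-8 encoding exceeds max_bytes."""
    kept = [tok for tok in text.split() if len(tok.encode("utf-8")) <= max_bytes]
    return " ".join(kept)

doc = "normal words " + "x" * 40000 + " more words"
print(clean_text(doc))  # normal words more words
```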
Solr is not designed to store chunks of binary data. This is not a bug. It will
probably not be “fixed”.
I strongly recommend putting your chunks of data in a database. Then store the
primary key in a field in Solr. When the Solr results are returned, the client
code can use the keys to fetch the binary data from the database.
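The pattern can be sketched with an in-memory stand-in for the database (the store contents and field names here are made up for illustration):

```python
# Stand-in key/value store for the binary chunks; a real setup would use a database.
blob_store = {
    "doc-1": b"\x00\x01binary payload one",
    "doc-2": b"\x00\x02binary payload two",
}

def fetch_blobs(solr_docs):
    """Given Solr result docs that carry only the primary key,
    pull the actual binary data from the external store."""
    return {d["id"]: blob_store[d["id"]] for d in solr_docs}

# Pretend these came back from a Solr query; only the key lives in Solr.
results = [{"id": "doc-1"}, {"id": "doc-2"}]
print(fetch_blobs(results))
```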
You can put all solr dependencies as provided. They are already on the class
path - no need to put them in the fat jar.
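If the project uses Gradle with the Shadow plugin (as the mention of shadowJar suggests), marking the Solr artifacts compileOnly keeps them out of the fat jar. A sketch, with the artifact versions and plugin version as assumptions:

```kotlin
// build.gradle.kts (sketch; versions are assumptions)
plugins {
    java
    id("com.github.johnrengelman.shadow") version "5.2.0"
}

repositories { mavenCentral() }

dependencies {
    // compileOnly is Gradle's equivalent of Maven's "provided" scope:
    // available at compile time, excluded from the shadow (fat) jar.
    compileOnly("org.apache.solr:solr-core:7.5.0")
    compileOnly("org.apache.solr:solr-solrj:7.5.0")
}
```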
> Am 11.02.2019 um 05:59 schrieb Aroop Ganguly :
>
> Thanks Erick!
>
> I see. Yes it is a fat jar post shadowJar process (in the order of MBs).
> It contains solrj and solr-core dependencies plus a few more Scala-related ones.
Thanks Erick!
I see. Yes it is a fat jar post shadowJar process (in the order of MBs).
It contains solrj and solr-core dependencies plus a few more Scala-related ones.
I guess the solr-core dependencies are unavoidable (right?), let me try to
trim the others.
Regards
Aroop
> On Feb 10, 2019, a
I found the reason: &legacyCloud=true. When I create a collection with this
parameter I can find that replicas data in the CLUSTERSTATUS API response. Is
there anything wrong with using this in Solr 7.5.0 when creating a collection?
Please advise.
Aroop:
How big is your custom jar file? The name "test-plugins-aroop-all.jar"
makes me suspicious. It should be very small and should _not_ contain
any of the Solr distribution jar files, just your compiled custom
code. I'm grasping at straws a bit, but it may be that you have the
same jar files f
Hi,
Yes, we are running full indexing every 4 hours. We are also using more than
4 views to get data, and each view has its own update date.
-
Regards
Shruti
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
SOLR 7.5.0
Created collection as below:
/admin/collections?action=CREATE&name=test_collection&numShards=1&maxShardsPerNode=1&rule=replica:<2,node:*&collection.configName=test_collection_config
Created Successfully.
After that, when I check CLUSTERSTATUS, the response shows an empty
"replicas": {}.
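For what it's worth, the special characters in the rule parameter need URL encoding when the request is built programmatically. A small sketch using Python's standard library, with the parameter values copied from the command above:

```python
from urllib.parse import urlencode

# Build the Collections API CREATE request from the parameters above.
# The rule value contains characters ("<", ",", "*") that must be URL-encoded.
params = {
    "action": "CREATE",
    "name": "test_collection",
    "numShards": 1,
    "maxShardsPerNode": 1,
    "rule": "replica:<2,node:*",
    "collection.configName": "test_collection_config",
}
url = "/admin/collections?" + urlencode(params)
print(url)
```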
Hi Team
I thought this was simple, but I am just missing something here. Any guidance
would be very appreciated.
What have I done so far:
1. I have created a custom query parser (class SamplePluggin extends
QParserPlugin { ), which right now does nothing but logs an info message, and
ret
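Assuming the plugin jar is on the core's classpath, the registration in solrconfig.xml would look roughly like this (the lib dir, package name, and parser name here are assumptions):

```xml
<!-- solrconfig.xml (sketch) -->
<!-- Make the custom jar visible to this core -->
<lib dir="${solr.install.dir}/custom-plugins" regex=".*\.jar" />

<!-- Register the plugin; invoke it with defType=sample or {!sample} -->
<queryParser name="sample" class="com.example.SamplePluggin" />
```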
bq. But I would assume it should still be ok. The number of watchers
should still not be gigantic.
This assumption would need to be rigorously tested before I'd be
comfortable. I've spent quite a bit of time with unhappy clients chasing
down issues in the field where
1> it takes hours to cold-start
Hello, all.
I have 1 collection running, and when I tried to create a new collection with
the following command,
-
$ solr-6.2.0/bin/solr create -c collection2 -d data_driven_schema_configs
-
I got the following error.
-
Connecting to ZooKeeper at sample1:2181,sample2:2182,sample3:2
I opened https://issues.apache.org/jira/browse/SOLR-13240 for the exception.
On 10.02.2019 01:35, Hendrik Haddorp wrote:
Hi,
I have two Solr clouds using Version 7.6.0 with 4 nodes each and about
500 collections with one shard and a replication factor of 2 per Solr
cloud. The data is stored i
I have now opened https://issues.apache.org/jira/browse/SOLR-13239 for the
problem I observed.
Well, who can really be sure about those things? But I would assume it
should still be ok. The number of watchers should still not be gigantic.
I have setups with about 2000 collections each but far less
Solr version is 7.6.0
autoAddReplicas is set to true
/api/cluster/autoscaling returns this:
{
  "responseHeader":{
    "status":0,
    "QTime":1},
  "cluster-preferences":[{
      "minimize":"cores",
      "precision":1}],
  "cluster-policy":[{
      "replica":"<2",
      "shard":"#EACH",
      "