Hi, maybe SOLR-12715 and SOLR-12716 can help you.
Mark Thill wrote on Tue, Jul 9, 2019 at 2:42 AM:
> My scenario is:
>
>- 60 GB collection
>- 2 shards of ~30GB
>- Each shard having 2 replicas so I have a backup
>- So I have 4 nodes with each node holding a single core
>
> My goal is to have autosc
Hi all,
I have a collection, collection1, with 8 shards, each with replicationFactor=1.
I have an application adding 60 million documents, with infinite retries if any
exception is caught.
That is to say, a query of *:* should finally find 60 million docs.
Normally all is good, but if at the same time, a
Hi all,
I use the solr-7.3.1 release. When splitting shard1 into shard1_0 and shard1_1,
I encountered an OOM error; then shard1_0 and shard1_1 published a status of
recovery_failed.
How do I deal with a shard in recovery_failed status?
Remove shard1_0 and shard1_1 and then split shard1 again?
Or is there another way to retry?
Hi,
With the current newest version, 9.0.0-SNAPSHOT, in the
Builder.UnCompiledNode.addArc() function,
I found this line:
assert numArcs == 0 || label > arcs[numArcs-1].label: "arc[-1].label="
+ arcs[numArcs-1].label + " new label=" + label + " numArcs=" +
numArcs;
Maybe the assert message should be:
assert numArcs ==
uestion directly though, no. Split-shard creates two
> > new subshards, but it doesn't do anything to remove or cleanup the
> > original shard. The original shard remains with its data and will
> > delegate future requests to the result shards.
> >
> > Hope that h
Hi,
If I split shard1 into shard1_0 and shard1_1, will the parent shard1
never be cleaned up?
Best,
Tinswzy
ej...@eolya.fr>
> wrote:
>
> > Hi,
> >
> > There is the powerful JMeter, obviously, and also SolrMeter (
> > https://github.com/tflobbe/solrmeter).
> >
> > Regards
> >
> > Dominique
> >
> >
> > On Thu, Dec 20, 2018 at 03:17, zhenyua
Hi all,
Is there a common tool for Solr benchmarking? YCSB is not very
suitable for Solr. Is there a good benchmark tool for Solr?
Best, TinsWzy
Hi all,
I indexed 4810 documents and ran some kill-process tests.
After all indexing was done, I ran the query many times and found that numFound
is not always the same number. Output example:
"responseHeader":{ "zkConnected":true, "status":0, "QTime":8, "params":{
"q":"*:*", "_":"1544171213624"}}, "res
Hi all,
I found it too troublesome to start a Solr mini cluster in my
project; the MiniSolrCloudCluster has too many properties tied to the
folders of the Solr source project. Is there a simple way to start a mini Solr
cluster outside the Solr project, such as in my own custom project?
to solr, it will
retry against Solr infinitely.
If the write to Solr fails and the server was killed, I can use the
transaction log of the true data store to replay and write to Solr again.
Shawn Heisey wrote on Wed, Sep 19, 2018 at 10:38 PM:
> On 9/18/2018 8:11 PM, zhenyuan wei wrote:
> > Hi all,
>
Walter Underwood
> > wun...@wunderwood.org
> > http://observer.wunderwood.org/ (my blog)
> >
> >> On Sep 18, 2018, at 7:11 PM, zhenyuan wei wrote:
> >>
> >> Hi all,
> >> adding a solr document with overwrite=false will keep multiple-version
>
Hi all,
adding a solr document with overwrite=false will keep multiple versions of
documents.
My questions are:
1. How do I search the newest documents? With what options?
2. How do I delete old-version documents (version < newest version)?
for example:
{
"id":"1002",
"name":["james"]
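To illustrate the "newest version" part of the question: if each copy of a doc carries a version value (as Solr's _version_ field does), the newest one is simply the one with the largest version, which is what a sort on that field descending would surface. A hypothetical sketch in plain Java, not a Solr API; the field values below are made up:

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch (not Solr code): with overwrite=false, several documents
// can share an id; "newest" can be picked by the largest version value, the
// same ordering a query with sort=_version_ desc would use.
public class NewestVersionSketch {
    record Doc(String id, long version, String name) {}

    // Return the doc with the highest version among copies that share an id.
    static Doc newest(List<Doc> docsWithSameId) {
        return docsWithSameId.stream()
                .max(Comparator.comparingLong(Doc::version))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<Doc> versions = List.of(
                new Doc("1002", 101L, "james"),
                new Doc("1002", 205L, "james-updated"));
        System.out.println(newest(versions).version()); // 205
    }
}
```

Deleting old versions would then amount to removing every copy whose version is below that maximum.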
Hi all,
Does Solr support rollback, or any method that does the same job?
For example, after updating/adding/deleting a document, can I roll it back?
Best~
TinsWzy
requests per
second.
Shawn Heisey wrote on Tue, Sep 18, 2018 at 12:07 PM:
> On 9/17/2018 9:05 PM, zhenyuan wei wrote:
> > Does that mean a small number of shards gains better performance?
> > I also have a use case which contains 3 billion documents; the collection
> > contains 60 shards now. Is th
Does that mean a small number of shards gains better performance?
I also have a use case which contains 3 billion documents; the collection
contains 60 shards now. Are 10 shards better than 60 shards?
Shawn Heisey wrote on Tue, Sep 18, 2018 at 12:04 AM:
> On 9/17/2018 7:04 AM, KARTHICKRM wrote:
> > Dear SO
;
> }
>
> Got it now? :)
>
> Petr
> ______
> > Od: "zhenyuan wei"
> > Komu: solr-user@lucene.apache.org
> > Datum: 03.09.2018 11:21
> > Předmět: Re: Is that a mistake or bug?
> >
> >
":0.0},
"facet_module":{
"time":0.0},
"mlt":{
"time":0.0},
"highlight":{
"time":0.0},
"stats":{
"time":0.0},
"expand":{
"
Hi,
I am curious how long a query like q=field1:2312, which exactly
matches only one document, should take. Of course we are discussing the
no-queryResultCache situation here.
In fact my QTime is 150ms+, which is too long.
;
> The only line that could be improved is probably replacing
> "Boolean.FALSE" with simply "false", but that is really a minor thing...
>
> Regards
>
> PB
> __________
> > Od: "zhenyuan wei&quo
>
> On Mon, Sep 3, 2018 at 9:09 AM zhenyuan wei wrote:
>
> > Yeah, got it. So QueryResult.segmentTerminatedEarly being a boolean,
> > instead of a Boolean, would be better, right?
> >
> > > Mikhail Khludnev wrote on Mon, Sep 3, 2018 at 1:36 PM:
> >
> > > It's neit
in result output. see
> ResponseBuilder.setResult(QueryResult).
> So, if cmd requests early termination, it sets false by default, enabling
> "false" output even it won't be the case. And later it might be flipped to
> true.
>
>
> On Mon, Sep 3, 2018 at 5:57 AM zheny
Hi all,
I saw code like the following:
QueryResult result = new QueryResult();
cmd.setSegmentTerminateEarly(params.getBool(CommonParams.SEGMENT_TERMINATE_EARLY,
    CommonParams.SEGMENT_TERMINATE_EARLY_DEFAULT));
if (cmd.getSegmentTerminateEarly()) {
  result.setSegmentTerminatedEarly(Boolean.FAL
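A minimal, self-contained sketch of the Boolean-vs-boolean point in this thread (plain Java; the Result class here is a hypothetical stand-in, not Solr's actual QueryResult): a nullable Boolean can distinguish "early termination never requested" from "requested but (so far) false", which a primitive boolean cannot.

```java
// Sketch: why a nullable Boolean can carry three states where a primitive
// boolean carries only two. "Result" is a hypothetical stand-in class.
public class TriStateDemo {
    static class Result {
        // null  = early termination was never requested,
        // FALSE = requested, but it has not happened (the default set up front),
        // TRUE  = the search really did terminate early (flipped later).
        private Boolean segmentTerminatedEarly; // starts out null

        void setSegmentTerminatedEarly(Boolean v) { segmentTerminatedEarly = v; }
        Boolean getSegmentTerminatedEarly() { return segmentTerminatedEarly; }
    }

    public static void main(String[] args) {
        Result r = new Result();
        // A primitive boolean would default to false here, indistinguishable
        // from "requested but did not happen".
        System.out.println(r.getSegmentTerminatedEarly() == null); // not requested
        r.setSegmentTerminatedEarly(Boolean.FALSE);                // requested, default
        System.out.println(r.getSegmentTerminatedEarly());         // false
        r.setSegmentTerminatedEarly(Boolean.TRUE);                 // flipped later
        System.out.println(r.getSegmentTerminatedEarly());         // true
    }
}
```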
Oh, my fault! Sorry for that, I should have said somebody, like me.
Bram Van Dam wrote on Wed, Aug 29, 2018 at 3:28 PM:
> On 28/08/18 08:03, zhenyuan wei wrote:
> > But this is not a common way to do so; I mean, nobody wants to ADDREPLICA
> > after collection was created.
>
> I wouldn't say "nobody"..
>
Pretty cool. I have created an issue to put this discussion into practice:
https://issues.apache.org/jira/browse/SOLR-12713
Best,
TinsWzy
Erick Erickson wrote on Tue, Aug 28, 2018 at 11:51 PM:
> Patches welcome.
>
> On Mon, Aug 27, 2018, 23:03 zhenyuan wei wrote:
>
> > But this is n
d you can have as many replicas per Solr instance
> as makes sense.
>
> Best,
> Erick
> On Mon, Aug 27, 2018 at 8:48 PM zhenyuan wei wrote:
> >
> > @Christopher Schultz
> > So you mean that one 4TB disk is the same as four 1TB disks?
> > HDFS, Cassandra,
xplain
Christopher Schultz wrote on Tue, Aug 28, 2018 at 11:16 AM:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Shawn,
>
> On 8/27/18 22:37, Shawn Heisey wrote:
> > On 8/27/2018 8:29 PM, zhenyuan wei wrote:
> >> I found the "solr.data.dir" can only config a single directory.
>
wrote:
> On 8/27/2018 8:29 PM, zhenyuan wei wrote:
> > I found that "solr.data.dir" can only be configured with a single directory.
> > I think it is necessary to support configuring multiple dirs, such as
> > "solr.data.dir:/mnt/disk1,/mnt/disk2,/mnt/disk3", due to one disk's
> overload
>
Hi all,
I found that "solr.data.dir" can only be configured with a single directory. I
think it is necessary to support configuring multiple dirs, such as
"solr.data.dir:/mnt/disk1,/mnt/disk2,/mnt/disk3", due to one disk's overload
or capacity limitation. Is there any reason why this is not supported?
Best,
TinsWzy
@Shawn Heisey Yeah, deleting the "write.lock" files manually worked in the end.
@Walter Underwood Do you have any recent performance evaluations of Solr on
HDFS vs local FS?
Shawn Heisey wrote on Tue, Aug 28, 2018 at 4:10 AM:
> On 8/26/2018 7:47 PM, zhenyuan wei wrote:
> > I found an exception w
Erickson wrote on Mon, Aug 27, 2018 at 11:41 AM:
> Because HDFS doesn't follow the file semantics that Solr expects.
>
> There's quite a bit of background here:
> https://issues.apache.org/jira/browse/SOLR-8335
>
> Best,
> Erick
> On Sun, Aug 26, 2018 at 6:47 PM zhenyuan wei
Hi all,
I found an exception when running Solr on HDFS. The details: Solr runs on
HDFS, and document updates were running continuously; then kill -9 the Solr
JVM, or reboot/shut down the Linux OS, and restart everything.
The exception looks like:
2018-08-26 22:23:12.529 ERROR
(coreContainerWorkExecutor-2-thr
Hi All,
I am confused about how to hit the filterCache.
If a filter query is the range [3 TO 100] but is not yet cached in the
filterCache, and the filterCache already contains the filter query range
[2 TO 100], my question is: will the filter query [3 TO 100] fetch a DocSet
from the cached range [2 TO 100]?
Count on filterCache, find alternatives to
> wildcard query and more.
>
> But all in all, I'd be very very satisfied with those low response times
> given the size of your data.
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
&
Thanks for your detailed answer @Shawn.
Yes, I run the query in SolrCloud mode, and my collection has 20 shards;
each shard's size is 30~50GB.
There are 4 Solr servers; each Solr JVM uses 6GB. There are 4 HDFS datanodes
too; each datanode JVM uses 2.5GB.
The Linux server hosts are 4 nodes too; each node is 16 cores/32GB RAM/1600G
I have 4 Solr servers, each allocated 6GB. My dataset on HDFS is 787GB, 2 billion
documents in total; each document is 300 bytes. The following is my cache-related
configuration:
20
200
zhenyuan wei wrote on Thu, Aug 23, 2018 at 5:41 PM:
> Thank you very much for answering, @Jan Høydahl.
> My query is simple
ponent spends the most time?
> With shards.info=true you see what shard is the slowest, if your index is
> sharded.
> With echoParams=all you get the full list of query parameters in use,
> perhaps you spot something?
> If you start Solr with -v option then you get more verbose logg
Hi all,
I care about query performance, but I do not know how to find out why
a query is slow.
How do I trace one query? The debug/debugQuery info is not enough to find
out why a query is slow.
Thanks a lot~
Hi all,
My Solr version is release 7.3.1, and I followed the Solr 7.3.0 ref guide to
configure the Ganglia reporter in solr.xml as below:
..
emr-header-1
8649
then started the Solr service and encountered an exception like:
2018-07-11 17:47:31.246 ERROR (main) [ ] o.a
I'd like to subscribe to this mailing list, thanks.