out and just starting to drop all packets.
>
> Regards,
> Alex
>
> On Mon., Aug. 17, 2020, 6:22 p.m. Susheel Kumar,
> wrote:
>
> > Thanks for the all responses.
> >
> > Shawn - to your point both ping or select in between taking 600+ seconds
> to
> &g
http://server1:8080/solr/COLL/select?indent=on&q=*:*&wt=json&rows=0'
{
  "responseHeader":{
    "zkConnected":true,
    "status":0,
    "QTime":600093,
    "params":{
      "q":"*:*",
      "indent":"
Hello,
One of our Solr 6.6.2 DR clusters (the CDCR target), which doesn't even have any
live search load, seems to be taking 60 ms many times for the ping /
health-check calls. Has anyone seen this before, or any suggestion on what could
be wrong? The collection has 8 shards/3 replicas and 64GB memory and index
Basic auth should help you get started:
https://lucene.apache.org/solr/guide/8_1/basic-authentication-plugin.html
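For reference, a minimal security.json in the shape of the starter example from that guide (these are the well-known demo credentials for user solr / password SolrRocks; replace them before real use):

```json
{
  "authentication": {
    "blockUnknown": true,
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
    }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [{"name": "security-edit", "role": "admin"}],
    "user-role": {"solr": "admin"}
  }
}
```

Upload it to ZooKeeper (or drop it in SOLR_HOME for standalone) and restart, per the guide.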
On Mon, Mar 16, 2020 at 10:44 AM Ryan W wrote:
> How do you, personally, do it? Do you use IPTables? Basic Authentication
> Plugin? Something else?
>
> I'm asking in part so I'll have
Check whether the directories below have the correct permissions. The solr.log
file not being created implies some issue.
tail: cannot open
'/home/pawasthi/projects/solr_practice/ex1/solr-8.4.1/example/cloud/node1/solr/../logs/solr.log'
Solr home directory
/home/pawasthi/projects/solr_practice/ex1/solr-8.4.1/example/clo
245.76 245.76
> >
> > and we get very good performance.
> >
> > ultimately though it's going to depend on your workload
> >
> > From: Susheel Kumar
> > Sent: 06 February 2020 13:43
> > To: solr-user@lucene.apache.org
>
Hello,
What type of storage/volume is recommended for running Solr on a Kubernetes pod?
I know in the past Solr had issues with NFS storing its indexes and it was not
recommended.
https://kubernetes.io/docs/concepts/storage/volumes/
Thanks,
Susheel
Hello,
I am trying to keep multiple versions of the same document (empId,
empName, deptID, effectiveDt, empTitle, ...) with different effective dates
(composite key: deptID, empID, effectiveDt), but mark / soft delete (deleted=Y)
the older ones and keep deleted=N for the latest one.
This way I can query the
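A minimal sketch of that flagging logic on the client side, with made-up records (the field values and the id format are illustrative, not from the thread):

```python
# Versions of the same employee share (deptID, empID) and differ by effectiveDt.
docs = [
    {"deptID": "D1", "empID": "E1", "effectiveDt": "2018-01-01", "empTitle": "Analyst"},
    {"deptID": "D1", "empID": "E1", "effectiveDt": "2019-01-01", "empTitle": "Sr Analyst"},
    {"deptID": "D1", "empID": "E2", "effectiveDt": "2018-06-01", "empTitle": "Manager"},
]

# Find the latest effectiveDt per (deptID, empID); ISO dates compare as strings.
latest = {}
for d in docs:
    key = (d["deptID"], d["empID"])
    if key not in latest or d["effectiveDt"] > latest[key]:
        latest[key] = d["effectiveDt"]

# Mark everything but the latest version as soft-deleted, and build the
# composite unique key before sending the docs to Solr.
for d in docs:
    d["deleted"] = "N" if d["effectiveDt"] == latest[(d["deptID"], d["empID"])] else "Y"
    d["id"] = "_".join([d["deptID"], d["empID"], d["effectiveDt"]])
```

Queries for the current view would then simply filter on deleted:N.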
equestHandler" >
>
> true
> ignored_
> _text_
>
>
>
> Is there anything wrong with it and how to fix it?
>
> Thank you.
>
> Pasle Choix
>
>
>
> On Mon, Sep 23, 2019 at 2:09 PM Susheel Kumar
> wrote:
>
>>
Not sure which configuration you are using, but double-check that solrconfig.xml
has entries like the ones below, and that the sr_mv_txt field below is in
schema.xml for storing and indexing.
true
ignored_
sr_mv_txt
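The archive stripped the XML tags around those values; assuming they are the usual extract-handler defaults (the stock sample maps content to _text_, where this thread uses sr_mv_txt), the solrconfig.xml entry would look roughly like:

```xml
<requestHandler name="/update/extract"
                class="solr.extraction.ExtractingRequestHandler">
  <lst name="defaults">
    <str name="lowernames">true</str>
    <str name="uprefix">ignored_</str>
    <str name="fmap.content">sr_mv_txt</str>
  </lst>
</requestHandler>
```

and schema.xml would need a matching stored and indexed field, e.g.:

```xml
<field name="sr_mv_txt" type="text_general" indexed="true" stored="true" multiValued="true"/>
```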
Thnx
On Thu, Sep 19, 2019 at 11:02 PM PasLe Choix wrote:
> I am on Solr 7.7
Hello,
What does the statement below mean? How do we set which cluster will act as
source or target at a given time?
Both Cluster 1 and Cluster 2 can act as Source and Target at any given
point of time, but a cluster cannot be both Source and Target at the same
time.
Also following the directions mentioned i
Hello,
If you are running Solr in a Docker container in production, can you
please share your experience/tips/issues you came across with
performance/sharding/CDCR etc.?
Also, I assume you might have taken an image from Docker Hub, or do you compose
your own image? https://hub.docker.com/_/solr
Thanks,
Do you see CDCR forward messages in the source Solr logs, with some
numbers? That will confirm whether data is indeed going through the source and
being forwarded to the target.
Also, is there any autoCommit/softCommit settings difference between source & target?
On Wed, Feb 20, 2019 at 8:29 AM ypriverol wrote:
> Hi:
>
> I'm using a
Are we saying it has something to do with stopping and restarting replicas?
Otherwise I haven't seen/heard of any issues with document updates and
forwarding to replicas...
Thanks,
Susheel
On Thu, Nov 1, 2018 at 12:58 PM Erick Erickson
wrote:
> So this seems like it absolutely needs a JIRA
> On
] scsi target0:0:3: FAST-40 WIDE SCSI 80.0 MB/s ST
(25 ns, offset 127)
On Mon, Oct 22, 2018 at 10:15 PM Susheel Kumar
wrote:
> Hi Shawn,
>
> Here is the link for the Solr GC log, and it doesn't look like a Solr GC problem. The
> total GC is 12 GB. The GC log is from yesterday and the issue happ
may be a network issue, but just looking at the message "Zookeeper/Solr server
not running", and the fact that it instantiates later, doesn't give any clue.
Thnx
On Mon, Oct 22, 2018 at 9:54 PM Shawn Heisey wrote:
> On 10/22/2018 7:32 PM, Susheel Kumar wrote:
> > Hi Shawn, you meant ZK GC log c
Hi Shawn, you meant ZK GC log correct?
Thnx
On Mon, Oct 22, 2018 at 7:03 PM Shawn Heisey wrote:
> On 10/22/2018 3:31 PM, Susheel Kumar wrote:
> > Hello,
> >
> > I am seeing "ZookeeperServer not running" WARN messages in zookeeper logs
> > which is causing t
Hello,
I am seeing "ZookeeperServer not running" WARN messages in the ZooKeeper logs,
which is causing the Solr client connections to time out...
What could be the problem?
ZK: 3.4.10
Zookeeper.out
==
2018-10-22 06:04:51,071 [myid:2] - INFO
[WorkerReceiver[myid=2]:FastLeaderElection@600] - Notificati
ith a bad checksum will just stay in the
> collection forever. Shouldn't the node go to "down" if it has an
> irreparable checksum?
>
> On Fri, Oct 5, 2018 at 5:25 AM Susheel Kumar
> wrote:
>
> > My understanding is once the index is corrupt, the only way to fix is
&g
S instance, like Simon's report. The drive doesn't
> > show any sign of being unhealthy, either from cursory investigation.
> FWIW,
> > this occurred during a collection backup.
> >
> > Erick, is there some diagnostic data we can find to help pin this down?
>
ing suspicious then i'll return here.
> >
> > Wondering if any body has seen similar error and if they were able to
> > confirm if it was hardware fault or so.
> >
> > Thnx
> >
> > On Mon, Sep 24, 2018 at 1:01 PM Erick Erickson
> > wrote:
> >
Also, are you using Solr's data import? That will be much slower compared to
writing your own little indexer which does the indexing in batches and with
multiple threads.
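A minimal sketch of the batching/threading side of such an indexer (the HTTP POST itself is stubbed out; in a real indexer it would go to the collection's /update endpoint via an HTTP client):

```python
from concurrent.futures import ThreadPoolExecutor

def batches(docs, size):
    """Yield successive batches of `size` docs."""
    for i in range(0, len(docs), size):
        yield docs[i:i + size]

def post_batch(batch):
    # Placeholder: a real indexer would POST the batch as JSON to something
    # like http://host:8983/solr/<collection>/update (names hypothetical)
    # and check the response; here it just reports how many docs were "sent".
    return len(batch)

docs = [{"id": str(i)} for i in range(2500)]
with ThreadPoolExecutor(max_workers=4) as pool:
    indexed = sum(pool.map(post_batch, batches(docs, size=1000)))
print(indexed)  # 2500
```

Batch sizes of a few hundred to a few thousand docs and a handful of threads are a common starting point; tune against your own cluster.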
On Wed, Sep 26, 2018 at 8:00 AM Vincenzo D'Amore wrote:
> Hi, I know this is the shortest way but, had you tried to add more core
wrote:
> Mind you it could _still_ be Solr/Lucene, but let's check the hardware
> first ;)
> On Mon, Sep 24, 2018 at 9:50 AM Susheel Kumar
> wrote:
> >
> > Hi Erick,
> >
> > Thanks so much for your reply. I'll now look mostly into any possible
> >
ding reliably.
>
> The "possible hardware issue" message is to alert you that this is
> highly unusual and you should at least consider doing integrity
> checks on your disk before assuming it's a Solr/Lucene problem
>
> Best,
> Erick
> On Mon, Sep 24, 2018 at 9:26
Hello,
I am still trying to understand the corrupt index exception we saw in our
logs. What does the hardware problem comment indicate here? Does that
mean it was most likely caused by a hardware issue?
We never had this problem in the last couple of months. Solr is 6.6.2 and
ZK is 3.4.10.
Please
Uninverting(_hbfm(6.6.2):C6/2:delGen=1)
Uninverting(_hbfn(6.6.2):C560/281:delGen=2)
Uninverting(_hbfo(6.6.2):C12/7:delGen=1)
Uninverting(_hbfp(6.6.2):C4/2:delGen=1)
Uninverting(_hbfq(6.6.2):C4/2:delGen=1)
Uninverting(_hbfs(6.6.2):C9/5:delGen=1)))}
On Sat, Sep 22, 2018 at 2:20 AM Susheel Kumar wrot
Hello,
I noticed one of the replicas in Recovery Failed status, and after trying to
recreate the replica / restart the node to recover, I see the error below in the
solr log. The leader for this shard8 replica2, i.e. replica1@server 61,
seems to be fine and serving the queries.
What does this indicate: "Sol
I'd highly advise using the Java library / SolrJ to connect to Solr
rather than .NET. Many things are taken care of by CloudSolrClient and other
classes when communicating with a Solr Cloud that has shards/replicas etc., and
your .NET port of SolrJ may not be up to date / have all the functionality
(wh
s connected to 6.6 specifically.
>
> If you want to resolve the problem, you should be able to use the
> collection API to delete that node from the collection, and then re-add it,
> which will trigger a resync.
>
>
> On Fri, Sep 7, 2018, 10:35 AM Susheel Kumar wrote:
>
> > N
;
> On Fri, Sep 7, 2018, 6:07 AM Susheel Kumar wrote:
>
> > Does anyone have insight into / has anyone faced the above errors?
> >
> > On Thu, Sep 6, 2018 at 12:04 PM Susheel Kumar
> > wrote:
> >
> > > Hello,
> > >
> > > We had a running cluster with
Does anyone have insight into / has anyone faced the above errors?
On Thu, Sep 6, 2018 at 12:04 PM Susheel Kumar wrote:
> Hello,
>
> We had a running cluster with CDCR and there were some issues with
> indexing on Source cluster which got resolved after restarting the nodes
> (in my absence...
How about you search with "Intermodal Schedules" (plural) & try phrase slop
for better control over relevancy order?
https://lucene.apache.org/solr/guide/6_6/the-extended-dismax-query-parser.html
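A sketch of such a request using edismax's phrase fields (pf) and phrase slop (ps); the field names here are assumptions, not from the thread:

```python
from urllib.parse import urlencode

# pf boosts docs where the whole query appears as a (near-)phrase in those
# fields; ps is the phrase slop: how many position moves are tolerated.
params = {
    "defType": "edismax",
    "q": "Intermodal Schedules",
    "qf": "title description",   # hypothetical fields
    "pf": "title description",
    "ps": 2,
    "rows": 10,
}
query_string = urlencode(params)
print(query_string)
```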
On Thu, Sep 6, 2018 at 12:10 PM Muddapati, Jagadish <
jagadish.muddap...@nscorp.com> wrote:
> Label: new
Hello,
We had a running cluster with CDCR, and there were some issues with indexing
on the source cluster which got resolved after restarting the nodes (in my
absence...), and now I see the errors below on a shard in the target cluster. Any
suggestions / ideas what could have caused this, and what's the best wa
and as you suggested, use stop word before shingles...
On Fri, Aug 3, 2018 at 8:10 AM, Clemens Wyss DEV
wrote:
>
>
>
>outputUnigrams="true" tokenSeparator=""/>
>
>
> seems to "work"
>
> -Original Message-
> From: Clemens Wyss DEV
> Sent: Friday, 3 August 2018 13
hanks.
>
> On Tue, Jul 24, 2018 at 2:31 AM Susheel Kumar
> wrote:
>
> > Something messed up with DNS, which resulted in an unknown host exception
> for
> > one of the machines in our env and caused Solr to throw the above exception
> >
> > Eric, I have the Solr
om: Name or service not
> > known
> >
> > Is this address actually resolvable at the time?
> >
> > On Mon, Jul 23, 2018 at 3:46 PM, Susheel Kumar
> > wrote:
> >
> >> In usual circumstances when one Zookeeper goes down while others 2 are
> up,
>
In usual circumstances, when one ZooKeeper goes down while the other 2 are up,
Solr continues to operate, but when one of the ZK machines was not reachable,
with ping returning the results below, Solr couldn't start. See the stack trace
below.
ping: cannot resolve ditsearch001.es.com: Unknown host
Setup: Solr
03 4925
>
>
> On Fri, Jul 20, 2018 at 3:11 PM, Susheel Kumar
> wrote:
>
> > I think CJKFoldingFilter will work for you. I put 舊小說 in index and then
> > each of A, B or C or D in query and they seems to be matching and CJKFF
> is
> > transforming the 舊 to 旧
>
I think CJKFoldingFilter will work for you. I put 舊小說 in the index and then
each of A, B, C or D in the query, and they seem to be matching; CJKFF is
transforming the 舊 to 旧.
On Fri, Jul 20, 2018 at 9:08 AM, Susheel Kumar
wrote:
> Lack of my chinese language knowledge but if you want, I can
I lack Chinese language knowledge, but if you want, I can do a quick test
for you in the Analysis tab if you give me what to put in the index and query
windows...
On Fri, Jul 20, 2018 at 8:59 AM, Susheel Kumar
wrote:
> Have you tried to use CJKFoldingFilter https://github.com/sul-d
Have you tried CJKFoldingFilter?
https://github.com/sul-dlss/CJKFoldingFilter. I am not sure if this would
cover your use case, but I am using this filter and so far no issues.
Thnx
On Fri, Jul 20, 2018 at 8:44 AM, Amanda Shuman
wrote:
> Thanks, Alex - I have seen a few of those links but
Did you try to see where / which component (query, facet, highlight...) is
taking the time, with debugQuery=on, when performance is slow? Just to rule out
that some other component is the culprit...
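With debugQuery=on, the response's timing section breaks QTime down per component; a sketch of reading it, with made-up numbers shaped like that section:

```python
# Illustrative timing breakdown (numbers invented, shaped like Solr's
# debug timing output); find which search component dominates.
timing = {
    "time": 1250.0,
    "prepare": {"time": 2.0},
    "process": {
        "time": 1248.0,
        "query": {"time": 120.0},
        "facet": {"time": 1100.0},
        "highlight": {"time": 28.0},
    },
}
process = timing["process"]
slowest = max(
    ((name, v["time"]) for name, v in process.items() if isinstance(v, dict)),
    key=lambda kv: kv[1],
)
print(slowest)  # ('facet', 1100.0)
```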
Thnx
On Mon, Jun 25, 2018 at 2:06 PM, Chris Troullis
wrote:
> FYI to all, just as an update, we rebuilt t
lose any updates.
> >
> > Later versions can do the full sync of the index and buffering is being
> removed.
> >
> > Best,
> > Erick
> >
> > On Tue, Jun 19, 2018 at 7:31 AM, Brian Yee wrote:
> >> Thanks for the suggestion. Can you p
You may have to DISABLEBUFFER on the source to get rid of the tlogs.
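For reference, DISABLEBUFFER is an action on the CDCR API; a sketch that just builds the request URL (host and collection names are hypothetical):

```python
from urllib.parse import urlencode

# Hypothetical host and collection; substitute your own.
base = "http://source-solr:8983/solr/myCollection/cdcr"
url = f"{base}?{urlencode({'action': 'DISABLEBUFFER'})}"
print(url)
```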
On Mon, Jun 18, 2018 at 6:13 PM, Brian Yee wrote:
> So I've read a bunch of stuff on hard/soft commits and tlogs. As I
> understand, after a hard commit, solr is supposed to delete old tlogs
> depending on the numRecordsToKeep and maxN
Is this collection drastically different from the others in any way, in terms of
schema / # of fields / total documents etc.? Is it sharded, and if so, can you
look at which shard is taking more time, with shards.info=true?
Thnx
Susheel
On Wed, Jun 13, 2018 at 2:29 PM, Chris Troullis
wrote:
> Thanks Erick,
>
> Seems to
I don't know much about Solaris, but the only option is to install manually as
you did and try to modify the bin/solr script to get rid of the errors you are
seeing etc.
Thnx
On Thu, May 24, 2018 at 5:40 AM, Takuya Kawasaki
wrote:
> Please let me ask a question.
>
> I would like to use Solr on Solaris 10
1. a) is accurate while 2. b) is accurate.
If query 1. a) is just an example then it's fine, but otherwise you usually want
to use filters on fields which have low cardinality, like state, country,
gender etc. Name is a high-cardinality field, and using a filter query
on it wouldn't be efficient and also doesn't
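A sketch of that split (field names are made up): low-cardinality constraints go into fq clauses, whose results Solr can cache and reuse, while the high-cardinality, relevance-scored part stays in q:

```python
from urllib.parse import urlencode

# q carries the scored, high-cardinality part; each fq is a cacheable filter.
params = [
    ("q", "name:john*"),      # high-cardinality, relevance-scored
    ("fq", "country:US"),     # low-cardinality, cached filter
    ("fq", "gender:F"),       # low-cardinality, cached filter
]
qs = urlencode(params)
print(qs)
```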
A very high rate of indexing documents could cause heap usage to go high
(all the temporary objects being created live in JVM memory, and at a very high
rate heap utilization may go high).
Having caches not sized/set correctly would also result in high JVM usage,
since as searches are happening, it wil
Take a look at https://wiki.apache.org/solr/SolrPerformanceProblems. The
section "How much heap do I need?" talks about that.
Caches also live in the JVM heap, so take a look at how much you
need/allocate for the different caches.
Thnx
On Tue, May 1, 2018 at 7:33 PM, Greenhorn Techie
wrote:
> Hi,
>
> Wonderi
This may not be the reason, but I noticed you have FlattenGraphFilterFactory
at query time while it's only required at index time. I would suggest checking
in the Analysis tab if you haven't already.
Thnx
On Mon, Apr 30, 2018 at 2:22 PM, Hodder, Rick wrote:
> I upgraded from SOLR 4.10 to SOLR 7.1
>
> In t
ollowing query to the leader in your target data center.
>
> /solr//cdcr?action=BOOTSTRAP&masterUrl= URL>
>
> The masterUrl will look something like (change the necessary values):
> http%3A%2F%2Fsolr-leader.solrurl%3A8983%2Fsolr%2Fcollection
>
> > On Apr
Does anybody have an idea how to trigger Solr CDCR BOOTSTRAP, or under what
conditions it gets triggered?
Thanks,
Susheel
On Tue, Apr 24, 2018 at 12:34 PM, Susheel Kumar
wrote:
> Hello,
>
> I am wondering under what different conditions does that CDCR bootstrap
> process gets triggered. I d
Hello,
I am wondering under what different conditions the CDCR bootstrap
process gets triggered. I did notice it getting triggered after I stopped
CDCR and then started it again later, and now I am trying to reproduce the same
behavior.
In case the target cluster is left behind and the buffer was disa
I was able to resolve this issue by starting/stopping the CDCR process a couple
of times until all shard leaders started forwarding updates...
Thnx
On Tue, Apr 17, 2018 at 3:20 PM, Susheel Kumar
wrote:
> Hi Amrit,
>
> The cdcr?action=ERRORS is returning consecutiveErrors=1 on the shards
&g
ter.com/lucidworks
> LinkedIn: https://www.linkedin.com/in/sarkaramrit2
> Medium: https://medium.com/@sarkaramrit2
>
> On Tue, Apr 17, 2018 at 9:04 PM, Susheel Kumar
> wrote:
>
> > Hi,
> >
> > Has anyone gone thru this issue where few shard leaders are forwarding
> > updat
Hi,
Has anyone run into this issue where a few shard leaders are forwarding
updates to their counterpart leaders in the target cluster while some of the
shard leaders are not forwarding the updates?
On Solr 6.6, in the logs of 4 of the shards I see the entries below, and their
counterparts in the target are getting upd
DISABLEBUFFER on the source cluster would solve this problem.
On Tue, Apr 17, 2018 at 9:29 AM, Chris Troullis
wrote:
> Hi,
>
> We are attempting to use CDCR with solr 7.2.1 and are experiencing odd
> behavior with transaction logs. My understanding is that by default, solr
> will keep a maximum of 1
> >
> > On Mon, Apr 16, 2018 at 11:35 PM, Susheel Kumar
> > wrote:
> >
> >> Does anybody know about known issue where CDCR boo
Does anybody know about a known issue where the CDCR bootstrap sync leaves the
replicas on the target cluster untouched / out of sync?
After I stopped and restarted CDCR, it rebuilds my target leaders' index, but
the replicas on the target cluster still show the old index / are not modified.
Thnx
I figured out that after restarting the nodes, the source cluster leaders were
switched, causing the above warning and stopping CDCR replication. After
stopping the CDCR process and then restarting it again, the above warning
disappeared and bootstrap sync stepped in.
On Sun, Apr 15, 2018 at 7:54 PM, Susheel Kumar
Hello,
Over the weekend we restarted nodes on target and source, and after that I
see that the replication from source to target has stopped, and I see the warning
messages below in solr.log.
What can be done to resolve this? The leaders on the source cluster
show updateLogSynchronizer
stopped, while the replica
Hi Doug, are you able to connect to ZooKeeper through ZooKeeper's zkCli.sh, and
does zookeeper.out show anything useful?
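Besides zkCli.sh, a quick connectivity check is ZooKeeper's ruok four-letter command; a small sketch (host/port are placeholders):

```python
import socket

def zk_four_letter(host, port, cmd=b"ruok", timeout=5.0):
    """Send a ZooKeeper four-letter command and return the reply."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(cmd)
        s.shutdown(socket.SHUT_WR)
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()

# A healthy ZooKeeper answers "imok", e.g.:
#   zk_four_letter("zk-host", 2181)
```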
Thnx
On Wed, Apr 4, 2018 at 2:13 PM, Doug Turnbull <
dturnb...@opensourceconnections.com> wrote:
> Thanks for the responses. Yeah I thought they were weird errors too... :)
>
> Belo
Hello,
I did a schema update on the Solr cloud of the source CDCR cluster and the same
on the target. After a collection reload, I noticed "error opening searcher" /
IndexWriter closed etc. on the leader node, while all replicas went into
recovery mode.
Later, after restarting Solr on the leader, I noticed the below too many file ope
>
> On Fri, Mar 23, 2018 at 6:02 PM, Susheel Kumar
> wrote:
>
> > Just a simple check,
Just a simple check: if you go to the source Solr and index a single document
from the Documents tab, then keep querying the target Solr for the same document,
how long does it take the document to appear in the target data center? In our
case, I can see the document show up in the target within 30 sec, which is our soft
co
Cool, thanks, Shawn. I was also looking at the swappiness, and it is set to
60. Will try this out and let you know. Thanks again.
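For reference, on Linux the current value can be read from /proc; the "set" command is left as a comment, and the value 10 is just a commonly suggested setting for JVM-heavy boxes like Solr, not from this thread:

```python
from pathlib import Path

# vm.swappiness (0-100) controls how eager the kernel is to swap; 60 is a
# common default. Lower values keep the Solr heap resident longer.
p = Path("/proc/sys/vm/swappiness")
swappiness = int(p.read_text()) if p.exists() else 60  # fallback if not Linux
print(swappiness)
# To lower it (requires root; 10 is just an example value):
#   sysctl vm.swappiness=10
```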
On Thu, Feb 22, 2018 at 10:55 AM, Shawn Heisey wrote:
> On 2/21/2018 7:58 PM, Susheel Kumar wrote:
>
>> Below output for prod machine based on the steps y
:30 PM, Susheel Kumar wrote:
> > I did go thru your posts on swap usage http://lucene.472066.n3.
> > nabble.com/Solr-4-3-1-memory-swapping-td4126641.html and my situation is
> > also similar. Below is top output from our prod and performance test
> > machine and as you can se
rote:
> No worries, I don't mind being confused with Erick ;)
>
> Emir
>
> On Feb 9, 2018 9:16 PM, "Susheel Kumar" wrote:
>
> > Sorry, I meant Emir.
> >
> > On Fri, Feb 9, 2018 at 3:15 PM, Susheel Kumar
> > wrote:
> >
> > > Thank
Sorry, I meant Emir.
On Fri, Feb 9, 2018 at 3:15 PM, Susheel Kumar wrote:
> Thanks, Shawn, Eric. I see the same using swapon -s. Looks like during
> the OS setup, it was set as 2 GB (Solr 6.0) and the other 16GB (Solr 6.6)
>
> Our 6.0 instance has been running for 1+ year but
wrote:
> On 2/7/2018 12:01 PM, Susheel Kumar wrote:
>
>> Just trying to find where do we set swap space available to Solr process.
>> I
>> see in our 6.0 instances it was set to 2GB on and on 6.6 instances its set
>> to 16GB.
>>
>
> Solr has absolutely no i
Hello,
Just trying to find where we set the swap space available to the Solr process. I
see in our 6.0 instances it was set to 2GB, and on our 6.6 instances it's set
to 16GB.
Thanks,
Susheel
Hi Deepak, as Shawn mentioned, switch your q and fq values above, like
q=facilityName:"orthodontist"+OR+facilityName:*orthodontist*
+OR+facilityName:"paul"+OR+facilityName:*paul*+OR+facilityName:*paul+
orthodontist*+OR+facilityName:"paul+orthodontist"+OR+
firstName:"orthodontist"+OR+firstName:*ort
Hi Ketan,
I believe you need multiple shard looking the count 800M. How much will be
the index size? Assume it comes out to 400G and assume your VM/machines
has 64GB and practically you want to fit your index into memory for each
shard... With that I would create 10shards on 10 machines (40 GB
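The back-of-envelope math behind that can be sketched as (the 400G index size and ~40GB of usable page cache per node are the thread's assumptions):

```python
import math

total_docs = 800_000_000   # from the thread
index_gb = 400             # assumed total index size
usable_cache_gb = 40       # per 64GB node, leaving headroom for heap + OS

# One shard per node, each shard's index small enough to be cached.
shards = math.ceil(index_gb / usable_cache_gb)
docs_per_shard = total_docs // shards
print(shards, docs_per_shard)
```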
ed for now.
Thanks,
Susheel
On Mon, Dec 18, 2017 at 11:37 AM, Shawn Heisey wrote:
> On 12/18/2017 9:01 AM, Susheel Kumar wrote:
> > Any thoughts on how one can provide HA in these situations.
>
> As I have said already a couple of times today on other threads, there
> are *exactly
tion
> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
>
>
>
> > On 18 Dec 2017, at 15:36, Susheel Kumar wrote:
> >
> > Yes, Emir. If I repeat the query, it will spread to other nodes but
> that's
> > not the case. This is my test
he only
> collection on cluster etc.
>
> Thanks,
> Emir
> --
> Monitoring - Log Management - Alerting - Anomaly Detection
> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
>
>
>
> > On 18 Dec 2017, at 15:07, Susheel Kumar wrote:
> >
> &
Hello,
I was testing Solr to see if a query that causes OOM would limit
the OOM issue to only the replica set which gets hit first.
But the behavior I see is that after the whole first set of replicas went down
due to OOM (gone in the cloud view), the other replicas start going down too.
Total 6
>
> On Tue, Oct 17, 2017 at 11:25 AM, Susheel Kumar
> wrote:
> > Thanks, Shalin.
> >
> > But the download mirror still has 7.0.1 not 7.1.0.
> >
> > http://www.apache.org/dyn/closer.lua/lucene/solr/7.0.1
> >
> >
> >
> >
> > On Tu
Thanks, Shalin.
But the download mirror still has 7.0.1 not 7.1.0.
http://www.apache.org/dyn/closer.lua/lucene/solr/7.0.1
On Tue, Oct 17, 2017 at 5:28 AM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> 17 October 2017, Apache Solr™ 7.1.0 available
>
> The Lucene PMC is pleased to a
I have this snippet with a couple of functions, e.g., if that helps:
---
TupleStream stream;
List<Tuple> tuples;
StreamContext streamContext = new StreamContext();
SolrClientCache solrClientCache = new SolrClientCache();
streamContext.setSolrClientCache(solrClientCache);
StreamFactory
ignore solr version...
On Mon, Sep 25, 2017 at 11:21 AM, Susheel Kumar
wrote:
> Check if your jar is present at solr-6.0.0/server/solr//lib/ or do
> a find under solr directory...
>
> On Mon, Sep 25, 2017 at 9:59 AM, Florian Le Vern > wrote:
>
>> Hi,
>>
>>
Check if your jar is present at solr-6.0.0/server/solr//lib/ or do a
find under solr directory...
On Mon, Sep 25, 2017 at 9:59 AM, Florian Le Vern
wrote:
> Hi,
>
> I added a custom Function Query in a jar library that is loaded from the
> `solr/data/lib` folder (same level as the cores) with the
It may happen that you never find the queries / query time logged
for the queries which caused the OOM, since your app never got a chance to log
how much time they took...
So if you had proper exception handling in your client code, you may see the
exception being logged, but not the query time for suc
Checkout this article for working with date types and format etc.
http://lucene.apache.org/solr/guide/6_6/working-with-dates.html
On Wed, Sep 20, 2017 at 6:32 AM, shankhamajumdar <
shankha.majum...@lexmark.com> wrote:
> Hi,
>
> I have a field with timestamp data in Cassandra for example - 2017-09
You can follow the section "Creating an Alert With the Topic Streaming
Expression" at http://joelsolr.blogspot.com/ and use the random function for
getting random records, scheduled with the daemon function to retrieve them
periodically etc.
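A sketch of that combination (collection, fields, and interval are made up): the daemon re-runs the wrapped random stream on an interval:

```
daemon(id="random-sample",
       runInterval="60000",
       random(myCollection, q="*:*", rows="100", fl="id,title"))
```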
Thanks,
Susheel
On Tue, Sep 19, 2017 at 4:56 PM, Erick Erickson
+1. Asking for way more than you need may result in OOM. rows
and facet.limit should be passed carefully.
On Tue, Sep 19, 2017 at 1:23 PM, Toke Eskildsen wrote:
> shamik wrote:
> > I've facet.limit=-1 configured for few search types, but facet.mincount
> is
> > always set as 1. Didn
Hi Ivan, Can you please submit a JIRA/bug report for this at
https://issues.apache.org/jira/projects/SOLR
Thanks,
Susheel
On Tue, Sep 19, 2017 at 11:12 AM, Pekhov, Ivan (NIH/NLM/NCBI) [C] <
ivan.pek...@nih.gov> wrote:
> Hello Guys,
>
> We've been noticing this problem with Solr version 5.4.1 and
Which fields do you want to search across the two separate collections/cores?
Please provide some details on your use case.
Thnx
On Mon, Sep 18, 2017 at 1:42 AM, Agrawal, Harshal (GE Digital) <
harshal.agra...@ge.com> wrote:
> Hello Folks,
>
> I want to search data in two separate cores. Both cores are un
You may want to use the UAX29URLEmailTokenizerFactory tokenizer in your
analysis chain.
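A sketch of such a chain in schema.xml (the fieldType name is made up; this tokenizer keeps e-mail addresses and URLs as single tokens instead of splitting on @ and dots):

```xml
<fieldType name="text_email" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.UAX29URLEmailTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```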
Thanks,
Susheel
On Thu, Sep 14, 2017 at 8:46 AM, Shawn Heisey wrote:
> On 9/14/2017 5:06 AM, Mannott, Birgit wrote:
> > I have a problem when searching on email addresses.
> > @ seems to be handled as a speci
I am not able to follow what the use case / ask is, but you already have
the query. You can search/highlight whatever you want with the query
string. Remember you run a single query against multiple (hundreds of)
documents.
On Tue, Sep 12, 2017 at 1:31 AM, Nithin Sreekumar
wrote:
> My q
You may want to look at fetch function of Streaming expressions
http://lucene.apache.org/solr/guide/6_6/stream-decorators.html
Thanks,
Susheel
On Tue, Sep 12, 2017 at 11:11 AM, Brian Yee wrote:
> I have one solr collection used for auto-suggestions. If I submit a query
> with q="coffe", I will
Kelly -
If you do not make any change to the schema and just reload your collection,
does it work fine? How much time does it take to reload the collection?
I suspect some conflict between the commit frequency (5 mins) and the collection
reload.
Thnx
On Tue, Sep 12, 2017 at 6:59 AM, Kelly, Frank wrote:
> N
Do all 4 documents have the same docID (unique key)?
On Mon, Sep 11, 2017 at 2:44 PM, Kaushik wrote:
> I am using Solr 5.3 and have a custom Solr J application to write to Solr.
> When I index using this application, I expect to see 4 documents indexed.
> But for some strange reason, 3 documents
Hi Wei,
I'm assuming the lastModified time is when the latest hard commit happens. Is
that correct?
>> Yes, that's correct.
I also sometimes see a difference between replica and leader commit
timestamps, where the "diff/lag < autoCommit interval". So in your case you
noticed up to 10 mins.
My guess is
Nick, check out the terms query parser
http://lucene.apache.org/solr/guide/6_6/other-parsers.html or streaming
expressions.
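A sketch of a terms-parser filter built with the stdlib (the field and ids are made up); it matches any of the listed values in one cached filter instead of a large OR:

```python
from urllib.parse import urlencode

ids = ["doc1", "doc2", "doc3"]        # hypothetical ids
fq = "{!terms f=id}" + ",".join(ids)  # one filter instead of id:doc1 OR ...
qs = urlencode({"q": "*:*", "fq": fq})
print(qs)
```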
Thnx
On Wed, Sep 6, 2017 at 8:33 AM, alex goretoy wrote:
> https://www.youtube.com/watch?v=pNe1wWeaHOU&list=
> PLYI8318YYdkCsZ7dsYV01n6TZhXA6Wf9i&index=1
> https://www.youtube.
Try to utilize the steps mentioned here at
http://lucene.apache.org/solr/guide/6_6/taking-solr-to-production.html
On Wed, Sep 6, 2017 at 3:52 AM, Michael Kuhlmann wrote:
> Why would you need to start Solr as root? You should definitely not do
> this, there's no reason for that.
>
> And even if y
Which Solr and ZooKeeper versions do you have? And why do you have just a 1-node
ZooKeeper? Usually you have 3 or so to maintain a quorum.
Thnx
On Fri, Sep 1, 2017 at 7:24 AM, Mikhail Ibraheem <
arsenal2...@yahoo.com.invalid> wrote:
>
> Any help please? From: Mikhail Ibraheem
> To: Solr-user
>
> >> - Let's suppose there are two documents doc1 and doc2.
> > >> - I want to fetch the data from doc2 on the basis of doc1 fields
> which
> > >> are related to doc2.
> > >>
> > >> How to achieve this efficiently.
> > >>
>
n2?
>
> 2) Performance: In my current single collection setup, I have 2 shards per
> node. After creating the second collection, there will be 4 shards per
> node. Do I have to edit the RAM per node value (raise the -m parameter when
> starting the node)? In my case, I am quite sure th