On 9/9/2016 9:17 PM, Prasanna S. Dhakephalkar wrote:
> Further searching on the net got me the answer
>
> The query should be
>
> a_id:20 OR (*:* NOT a_id:*)
>
> I don't understand this syntax
The basic problem here is that negative queries don't work. If you're
going to subtract X, you have to start with something, typically the set of
all documents (*:*).
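In other words, the NOT clause needs a concrete document set to subtract from, and *:* supplies it. The two equivalent spellings, shown as raw q parameters:

    q=a_id:20 OR (*:* NOT a_id:*)
    q=a_id:20 OR (*:* -a_id:*)

Either one reads as "docs with a_id equal to 20, plus all docs minus those that have any a_id at all".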
On 9/9/2016 4:38 PM, Brent wrote:
> Is there a way to tell whether or not a node at a specific address is
> up using a SolrJ API?
Based on your other questions, I think you're running cloud. If that
assumption is correct, use the Collections API with HttpSolrClient
(instead of CloudSolrClient)
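For what it's worth, if all you need is a coarse liveness probe against one known address, plain HTTP from the JDK works without pulling in SolrJ at all. A minimal sketch; the admin path and port are assumptions, adjust for your deployment:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class NodeCheck {

    // Build the URL to probe; /solr/admin/info/system is an assumption,
    // any lightweight admin endpoint on the node would do.
    static String pingUrl(String host, int port) {
        return "http://" + host + ":" + port + "/solr/admin/info/system";
    }

    // true if the node answers the admin request with HTTP 200 within timeoutMs
    static boolean isUp(String host, int port, int timeoutMs) {
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(pingUrl(host, port)).openConnection();
            conn.setConnectTimeout(timeoutMs);
            conn.setReadTimeout(timeoutMs);
            return conn.getResponseCode() == 200;
        } catch (Exception e) {
            return false;   // unreachable, refused, or timed out
        }
    }

    public static void main(String[] args) {
        System.out.println(pingUrl("localhost", 8983));
        System.out.println(isUp("localhost", 8983, 2000) ? "up" : "down");
    }
}
```

In SolrCloud the authoritative view is still the live-nodes list from the Collections API CLUSTERSTATUS call; this probe only confirms that the HTTP endpoint answers.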
Hi,
Further searching on the net got me the answer.
The query should be
a_id:20 OR (*:* NOT a_id:*)
I don't understand this syntax.
I am a bit raw at Solr query formation :)
Regards,
Prasanna.
From: Prasanna S. Dhakephalkar [mailto:prasann...@merajob.in]
Sent: Saturday, September 10, 2016
Greetings Group,
I am attempting to formulate a query that gives me all the records such that
1. The record does not have field a_id
2. If the a_id field exists, then it should have the value 20
So, for 1. I used -a_id:* (got 25 results)
For 2. I used a_id:20 (got 3 results)
Let me explain further,
let's assume a simple case where we have 2 shards,
reRankDocs=10, rows=10.
Correct me if I am wrong, Joel.
What we would like:
Page 1: top 10 re-scored
Page 2: remaining 10 re-scored
From page 3 on, the originally scored docs.
This is what is happening in a single Solr
Is there a way to tell whether or not a node at a specific address is up
using a SolrJ API?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Detecting-down-node-with-SolrJ-tp4295402.html
Sent from the Solr - User mailing list archive at Nabble.com.
Sure - and the apps may not have been pointed at the new ZK?
If you want the new apps to use the latest Solr, then I would assume you
want them on the new ZK, but I'll bet a dollar that there are some
configurations, etc, that need to change in the applications before that
will work right...
On
Sorry for the confusion... I forget that not everyone sees what I see ;)
The other configs that I mention are from another application that uses Solr
and Zookeeper. In theory, both apps should be able to share resources like
Solr and ZK, but I need to double-check on the necessity to have both
I'm not sure, but based on what I saw in the script command line, I would
expect ONLY otac_en to show up - since that is the only one mentioned on
the -confname flag...
Glad you've made some progress - happy to try to assist in figuring out the
larger problem if you want to.
I have sweated a lot
Hi again;
To answer your questions:
1. I use a ZK browser, so I can see what happens in ZK.
https://github.com/DeemOpen/zkui
2. Solr 4.3 is on its own ZK, on one server. Solr 5.4 is on another ZK, on
a different server. No mixing whatsoever.
3. I take that to mean "ZK root", so, yes, /ot is at the
I'm afraid I'm pretty confused about what's going on... Naturally, because
it's new to me and you've been staring at it for a long time...
I'm afraid I'll have to ask some basic questions to get myself on the right
page...
When you say this:
The script does the job: the config is visible at
Hi JohnB;
We have a script that calls the ZK CLI with this command:
zk.bat -cmd upconfig -zkhost %SOLR_ZK_ENSEMBLE% -confdir
%INSTALL_DIR%\etc\solr\otac\default-en\conf -confname otac_en
The script does the job: the config is visible at
/ot/solr/configs/otac_en
You can see in Solr's error
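Since the uploaded config shows up at /ot/solr/configs/otac_en, the chroot carried by %SOLR_ZK_ENSEMBLE% appears to be /ot/solr; Solr itself then has to be started against the same chrooted connection string. A sketch, with placeholder hosts:

    bin\solr.cmd start -c -z "host1:2181,host2:2181/ot/solr"

If Solr is started without the chroot, it will look for /configs/otac_en at the ZK root and never see the uploaded config.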
Specifically, how did you push the configuration to Zookeeper?
Does the config exist in a separate "chroot" on Zookeeper? If so, do all
the collection names exist inside there (On Zookeeper)?
On Fri, Sep 9, 2016 at 2:01 PM, igiguere wrote:
> Hi;
>
> I am migrating
Thank you Anshum.
I would try the approach of managing it from outside first and see how it
works.
On Fri, Sep 9, 2016 at 1:51 PM, Anshum Gupta wrote:
> If you want to build a monitoring tool that maintains a replication factor,
> I would suggest you use the Collections
If you want to build a monitoring tool that maintains a replication factor,
I would suggest you use the Collections APIs (ClusterStatus, AddReplica,
DeleteReplica, etc.) and manage this from outside of Solr. I don't want to
pull you back from trying to build something but I think you'd be biting a
I am experimenting with this functionality to see how the overseer monitors
and keeps the minimum no. of replicas up and running.
Under heavy indexing/search load, if any replica goes down we need to keep
the minimum no. of replicas up and running to serve the traffic and
maintain availability.
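As a sketch of the decision step such an external monitor would perform, assuming you have already tallied live replica counts per shard from a CLUSTERSTATUS response (all names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReplicaMonitor {

    // Given live replica counts per shard (as you would tally them from a
    // CLUSTERSTATUS response) and the desired replicationFactor, return the
    // shards that would need an ADDREPLICA call.
    static List<String> shardsBelowFactor(Map<String, Integer> liveReplicas,
                                          int replicationFactor) {
        List<String> deficient = new ArrayList<>();
        for (Map.Entry<String, Integer> e : liveReplicas.entrySet()) {
            if (e.getValue() < replicationFactor) {
                deficient.add(e.getKey());
            }
        }
        Collections.sort(deficient);   // deterministic order for acting/logging
        return deficient;
    }

    public static void main(String[] args) {
        Map<String, Integer> live = new HashMap<>();
        live.put("shard1", 2);
        live.put("shard2", 1);
        System.out.println(shardsBelowFactor(live, 2)); // prints [shard2]
    }
}
```

The monitor would then issue one AddReplica request per returned shard; that part needs a live cluster and is deliberately left out of the sketch.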
Just to clarify: I said that I really think it's an XY problem. I
still don't know what is being attempted/built.
From the last email, it sounds like you want to build/support auto-addition of
replicas, but I would wait until you clarify the use case to suggest anything.
-Anshum
On Fri,
Hi;
I am migrating collections from Solr 4.3 to Solr 5.4.1
The configuration was pushed to the ZooKeeper connected to
Solr 5.4.1:
schema.xml:
solrconfig.xml: 5.4.1
I can manually create a new core, using the Solr Admin UI, as long as I use
the name "otac_en" for parameter
09 September 2016, Apache Solr™ 5.5.3 available
The Lucene PMC is pleased to announce the release of Apache Solr 5.5.3
Solr is the popular, blazing fast, open source NoSQL search platform
from the Apache Lucene project. Its major features include powerful
full-text search, hit highlighting,
I'm not understanding where the inconsistency comes into play.
The re-ranking occurs on the shards. The aggregator node will be sent some
docs that have been re-scored and others that are not. But the sorting
should be the same as someone pages through the result set.
Joel Bernstein
Hi,
I was having an issue setting up a Solr instance with an external ZooKeeper.
My SOLR_HOME is not set to the default location. I believe the problem is
related to the following line and I wanted to confirm if this is a bug:
https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L1383
Thanks Erick and Kshitij.
I will try both options and see what works best.
Regards
Ankush Khanna
On Fri, 9 Sep 2016 at 16:33 Erick Erickson wrote:
> The soft commit interval governs opening new
> searchers, which should be "warmed" in order to
> load up caches. My
Hello,
I get no results on full-text searches for terms combining numbers and dots
(example: 304.411).
Does Lucene core (version 4.1.3) have such limits, or am I missing parameters?
Thanks in advance,
Sambeau PRAK
Efalia (DMS editor)
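A frequent cause of this symptom is an analysis chain that splits on punctuation, for instance a WordDelimiterFilter turning 304.411 into 304 and 411, or different analyzers at index and query time. As an illustrative sketch (not a drop-in fix), a field type that leaves such tokens intact might look like:

    <fieldType name="text_code" class="solr.TextField">
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>

Running 304.411 through the analysis screen in the admin UI for the actual field will show exactly where the term is being broken.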
All,
Running Solr 5.4.1 with embedded Jetty, with frequent updates coming in;
softCommit is set to 10 min. What I am noticing is occasional "slow"
updates (takes 8 sec to 15 sec sometimes) and about the same time slow
QTimes. Upon investigating, it appears that
On 09/09/2016 at 17:57, Shawn Heisey wrote:
On 9/8/2016 9:41 AM, Bruno Mannina wrote:
- I stop SOLR 5.4 on Ubuntu 14.04LTS - 16GB - i3-2120 CPU @ 3.30GHz
- I do a simple directory copy of /data to my HDD backup (from 2TB SATA
to 2TB SATA directly connected to the motherboard).
All files are
On 9/8/2016 9:41 AM, Bruno Mannina wrote:
> - I stop SOLR 5.4 on Ubuntu 14.04LTS - 16GB - i3-2120 CPU @ 3.30GHz
>
> - I do a simple directory copy of /data to my HDD backup (from 2TB SATA
> to 2TB SATA directly connected to the motherboard).
>
> All files are copied fine but one is not! The biggest
Dear Solr Users,
I have been using Solr for several years, and for the past two weeks I have
had a problem when I try to copy my Solr index.
My Solr index is around 180GB (~100,000,000 docs, 1 doc ~ 3KB).
My method to save my index every Sunday:
- I stop SOLR 5.4 on Ubuntu 14.04LTS - 16GB - i3-2120 CPU @ 3.30GHz
I think you're missing my point. The _feature_ may be there;
you'll have to investigate. But it is not named "smartCloud" or
"autoManageCluster". Those terms
1> do not appear in the final patch.
2> do not appear in any file in Solr 6x.
They were suggested names, what the final implementation
I am working on solr 6.0.0 to implement this feature.
I had a chat with Anshum and confirmed that this feature is available in
6.0.0 version.
The functionality is to allow the overseer to bring up
the minimum no. of replicas for each shard as per the replicationFactor
set.
I will look
You cannot just pick arbitrary parts of a JIRA discussion
and expect them to work. JIRAs are places where
discussion of alternatives takes place and the discussion
often suggests ideas that are not incorporated
in the final patch. The patch for the JIRA you mentioned,
for instance, does not
The soft commit interval governs opening new
searchers, which should be "warmed" in order to
load up caches. My guess is that you're not doing much
warming and thus seeing long search times.
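The knobs involved live in solrconfig.xml; an illustrative sketch (values are made up, tune for your load):

    <!-- inside <updateHandler>: how often a new searcher is opened -->
    <autoSoftCommit>
      <maxTime>600000</maxTime>
    </autoSoftCommit>

    <!-- inside <query>: autowarm caches, and replay a warm-up query
         every time a new searcher opens -->
    <filterCache class="solr.FastLRUCache" size="512" initialSize="512"
                 autowarmCount="64"/>
    <listener event="newSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst><str name="q">a typical user query</str></lst>
      </arr>
    </listener>

With autowarmCount at zero and no newSearcher queries, every soft commit throws away warm caches, which matches the "slow right after commit" pattern.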
Most attachments are stripped by the mail server,
if you want people to see the images put them up somewhere
Hi All,
We implemented the Hortonworks standby NameNode and I'm wondering how to
configure Solr to point to the cluster name instead of the NameNode
hostname.
I tried to configure Solr in several ways without success:
1) Using the cluster name
2) Using a comma-separated host list
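For the HDFS-backed index case, what generally makes an HA nameservice resolvable is handing Solr the Hadoop client config directory, so the logical cluster name can be looked up in hdfs-site.xml. A sketch of the startup properties; "mynameservice" and the paths are placeholders for your environment:

    -Dsolr.directoryFactory=HdfsDirectoryFactory
    -Dsolr.hdfs.home=hdfs://mynameservice/solr
    -Dsolr.hdfs.confdir=/etc/hadoop/conf

Without solr.hdfs.confdir pointing at configs that define the nameservice, only a literal host:port for the active NameNode will work.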
I would partially agree with Walter - having more resources allows us to
include stopwords in the index and let the scoring model do its job. However,
there are other Solr features that can suffer from that approach: e.g.
if you use edismax and mm=80%, in case of query with stopwords, you can
end up
Hi guys,
I was just experimenting with a reranker using a really low number of rerank
docs (10 = pageSize).
Let's focus on the distributed environment and the manual sharding approach.
Currently what happens is that the reranking task is delivered by the
shards, they rescore the docs and then send them
Hi Alex,
DateRangeField extends some spatial stuff, which has that error message in
it, not in DateRangeField proper. You cannot sort on a DateRangeField. If
you want to... try adding either one plain docValues field if you just have
date instances, or a pair of them to hold a min & max and
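The pair-of-docValues-fields idea might look like this in schema.xml; the field names are invented, and "date" assumes an existing Trie date field type in the schema:

    <field name="event_start" type="date" docValues="true"/>
    <field name="event_end"   type="date" docValues="true"/>

You would keep the DateRangeField for range queries and sort on the plain fields instead, e.g. sort=event_start asc.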
Hi everybody,
I'm currently working with the JSON Request API for Solr and hit a problem
using facets. I'm using Solr 5.5.2 and SolrJ 5.5.2.
When querying Solr by URL parameters like so:
After some more testing it feels like the parsing in 5.5.3 is _really_ messed
up.
Query version 4.10.4:
(text:(star AND trek AND wars)^200 OR text:("star trek wars")^350)
(text:(star AND trek AND wars)^200 OR text:("star trek wars")^350)
(+(((+text:star +text:trek +text:war)^200.0)
Hi Ankush,
As you are updating heavily on one of the cores, hard commits will play a
major role.
Reason: during hard commits Solr merges your segments, and this is a
time-consuming process.
While segments are being merged, indexing of documents is affected, i.e. gets
slower.
Try figuring out the right
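One common arrangement for write-heavy cores is frequent hard commits that do not open a searcher (so durability and segment housekeeping stay incremental and cheap) plus a longer soft-commit interval for visibility. A solrconfig.xml sketch with illustrative times:

    <autoCommit>
      <maxTime>15000</maxTime>          <!-- flush to disk every 15 s -->
      <openSearcher>false</openSearcher> <!-- don't pay searcher-opening cost here -->
    </autoCommit>
    <autoSoftCommit>
      <maxTime>60000</maxTime>          <!-- new docs visible within a minute -->
    </autoSoftCommit>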
Hello,
We are running some tests to improve our Solr performance.
We have around 15 collections on our Solr cluster,
but we are particularly interested in one collection holding a high number of
documents. (
https://gist.github.com/AnkushKhanna/9a472bccc02d9859fce07cb0204862da)
Issue:
We see
Hi Greg,
thanks a lot, that's it.
After setting q.op to OR it works _nearly_ as before with 4.10.4.
But how stupid is this?
I have in my schema
and also set q.op to AND to make sure my default _is_ AND,
meant as conjunction between terms.
But now I have q.op set to OR and defaultOperator in schema to
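Given the churn around the schema-level defaultOperator, the least ambiguous spelling is to set the operator explicitly on the request, or as a handler default in solrconfig.xml; a sketch:

    q=star trek wars&q.op=AND

    <requestHandler name="/select" class="solr.SearchHandler">
      <lst name="defaults">
        <str name="q.op">AND</str>
      </lst>
    </requestHandler>

A per-request or handler-level q.op wins over whatever the schema declares, so the parsed query no longer depends on which Solr version reads the schema.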