q: manufacture_t:The Hershey Company^100 OR title_t:The Hershey
Company^1000
First, make sure that manufacture_t and title_t are of type text_general, and
then use this approach instead of yours:
q=The Hershey Company&q.op=AND&qf=manufacture_t title_t&defType=edismax
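As a sketch, that request can be assembled and URL-encoded from Java; the host, port, and collection name below are assumptions, not taken from this thread:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class EdismaxRequest {
    // Build the parameter string for the edismax request shown above.
    static String buildParams() {
        String q = URLEncoder.encode("The Hershey Company", StandardCharsets.UTF_8);
        String qf = URLEncoder.encode("manufacture_t title_t", StandardCharsets.UTF_8);
        return "q=" + q + "&q.op=AND&qf=" + qf + "&defType=edismax";
    }

    public static void main(String[] args) {
        // Hypothetical local endpoint; adjust to your own setup.
        System.out.println("http://localhost:8983/solr/collection1/select?" + buildParams());
    }
}
```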
--
Please try this
if(and(exists(query({!v=BUS_CITY:regina})),exists(BUS_IS_NEARBY)),20,1)
--
View this message in context:
http://lucene.472066.n3.nabble.com/Complex-boost-statement-tp4164572p4164885.html
Sent from the Solr - User mailing list archive at Nabble.com.
check https://issues.apache.org/jira/browse/SOLR-6633
On Fri, Oct 17, 2014 at 5:35 PM, Alexandre Rafalovitch arafa...@gmail.com
wrote:
I wonder how hard it would be to write an URP to just copy JSON from the
request into a store-only field?
Regards,
Alex
On 17/10/2014 1:21 am, Noble
Thanks guys for a quick reply,
Adding ( ) to query values resolved the issue!
Tanya
--
View this message in context:
http://lucene.472066.n3.nabble.com/Query-parsing-difference-between-Analysis-and-parsedquery-toString-output-tp4164851p4164912.html
Sent from the Solr - User mailing list
Hello
I have a procedure that sends small data changes during the day to a
SolrCloud cluster, version 4.8.
The cluster is made of three nodes and three shards; each node contains
two shards.
The procedure has been running for days; I don't know when, but at some
point one of the cores has gone
OK, thank you for your response. But why can't I use '~'?
On 20 October 2014 07:40, Ramzi Alqrainy ramzi.alqra...@gmail.com wrote:
You can use the Levenshtein distance algorithm inside Solr without writing code
by specifying the source of terms in solrconfig.xml:
<searchComponent name="spellcheck">
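For reference, a minimal sketch of what such a spellcheck component can look like in solrconfig.xml; the field name here is an assumption, not taken from this thread:

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <!-- field assumed; use the field whose terms should feed the dictionary -->
    <str name="field">name</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
    <!-- Levenshtein-based edit distance is the default string distance -->
    <str name="distanceMeasure">internal</str>
  </lst>
</searchComponent>
```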
Hi,
I'm trying to get statistics for all shards in a cloud configuration. I've used
CoreAdminRequest, but the problem is that I only get statistics for the shards
(or cores) in one node (I have 2 nodes):
String zkHostString = "10.0.1.4:2181";
CloudSolrServer solrServer= new CloudSolrServer(zkHostString);
Hi all,
This is my use case:
I have a stored field, field_a, which is atomically updated (let's say by inc).
field_a is stored but not indexed due to the large number of distinct values it
can have.
I need to index field_b (I need facets and stats on it), which is not in the
document, but its value
Thanks Walter!
-Original Message-
From: Walter Underwood [mailto:wun...@wunderwood.org]
Sent: Monday, October 20, 2014 12:09 AM
To: solr-user@lucene.apache.org
Subject: Re: CopyField from text to multi value
I think that info is available with termvectors. That should give a list of the
Hello,
I have a problem which I can't figure out how to solve.
For a little scenario, I've set up a cluster with two nodes, one shard, and
two replicas, and both nodes connected to an external ZooKeeper.
Great, but now I want to stop replication for an amount of time (or more
precisely, to stop
Awesome. How long did it take?
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
On 20 October 2014 03:59, Noble Paul noble.p...@gmail.com wrote:
Hello Nabil,
isn't that what should be expected? Cores are local to nodes, so you
only get the core status from the node you're asking. Cluster status
refers to the entire SolrCloud cluster, so you will get the status over
all collections/nodes/shards [= cores]. Check the Core Admin REST interface
Querying all shards for a collection should look familiar; it's as though
SolrCloud didn't even come into play:
http://localhost:8983/solr/collection1/select?q=*:*
If, on the other hand, you wanted to search just one shard, you can specify
that shard, as in:
You can delete a replica from a shard by using this command:
/admin/collections?action=DELETEREPLICA&collection=collection&shard=shard&replica=replica
Delete a replica from a given collection and shard. If the corresponding
core is up and running, the core is unloaded and the entry is removed from
the
You can also delete a whole shard.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-replicas-stop-replication-and-start-again-tp4164931p4164945.html
Sent from the Solr - User mailing list archive at Nabble.com.
Thank you for your answer,
But how do you 'revive' the replica after that?
I tried ADDREPLICA, but it creates another one... (a solr_node3_replica).
Odd solution, but if you solved the problem by reviving the old replica,
it could be a viable solution.
Thank you,
Andrei
--
Hi Jürgen,
As you can see, I'm not using a direct connection to a node; it's a CloudServer.
Do you have an example of how to get the cluster status from SolrJ?
Regards,
Nabil.
On Monday, 20 October 2014 at 13:44, Jürgen Wagner (DVT)
juergen.wag...@devoteam.com wrote:
Hello Nabil,
isn't that what
Hi Nabil,
you can get /clusterstate.json from Zookeeper. Check
CloudSolrServer.getZkStateReader():
http://lucene.apache.org/solr/4_10_1/solr-solrj/org/apache/solr/client/solrj/impl/CloudSolrServer.html
Best regards,
--Jürgen
On 20.10.2014 15:16, nabil Kouici wrote:
Hi Jürgen,
As you can
Thank you Jürgen for this link.
However, this will not give the number of documents or the shard size.
Regards,
Nabil.
On Monday, 20 October 2014 at 15:23, Jürgen Wagner (DVT)
juergen.wag...@devoteam.com wrote:
Hi Nabil,
you can get /clusterstate.json from Zookeeper. Check
Another idea:
I turned off the replica into which I want to insert data and then process
it. I started it again, BUT without -DzkHost or -DzkRun, so the Solr
instance started standalone. I put my data into it, stopped it again, and
started it with -DzkHost pointing to my ZooKeeper.
But the problem
Hi,
Could you please point me to a link where I can learn about the
theory behind the implementation of the word break spell checker?
We know that Solr's DirectSolrSpellChecker component uses the Levenshtein
distance algorithm; what is the algorithm used behind the word break spell
checker
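For intuition on the Levenshtein distance mentioned here, a toy Java implementation of the classic dynamic-programming edit distance; this is an illustration, not Lucene's actual code:

```java
public class EditDistance {
    // Classic DP: d[i][j] = edits to turn a[0..i) into b[0..j).
    static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1,   // deletion
                                            d[i][j - 1] + 1),  // insertion
                                   d[i - 1][j - 1] + cost);    // substitution
            }
        }
        return d[a.length()][b.length()];
    }

    public static void main(String[] args) {
        System.out.println(levenshtein("kitten", "sitting")); // 3
    }
}
```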
On 10/19/2014 11:32 PM, Ramzi Alqrainy wrote:
You can create a script that pings Solr every 10 seconds; if there is no
response, it restarts it (kills the process ID and runs Solr again).
This is the fastest and easiest way to do that on Windows.
I wouldn't do this myself. Any temporary problem that results in
Andrei,
I'm wondering if you've considered using Classic replication for this use
case. It seems better suited for it.
Michael Della Bitta
Senior Software Engineer
o: +1 646 532 3062
appinions inc.
“The Science of Influence Marketing”
18 East 41st Street
New York, NY 10017
t: @appinions
Hello Michael,
Do you mean the replication from Solr, the one with master-slave?
Thank you,
Andrei
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-replicas-stop-replication-and-start-again-tp4164931p4164965.html
Sent from the Solr - User mailing list archive at
Hi all,
I'm trying to make use of the min_rf (minimum replication factor) feature
described in https://issues.apache.org/jira/browse/SOLR-5468. According to
the ticket, all that is needed is to pass the min_rf param into the update
request and get back the rf param from the response, or even easier
That's why it is considered better to crash the program and restart it
on an OOME.
In the end, aren't you also saying the same thing, or have I misunderstood
something?
We don't get this issue on the master server (indexing). Our real concern is
the slave, where it happens sometimes (rarely), so it's not an obvious heap config
I think we can agree that *knowing* when the OOM occurs is the minimal
requirement; triggering an alert (email, etc.) would be the first thing to get
into your script.
Once you know when the OOM conditions are occurring, you can start to get to the
root cause or remedy.
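One hedged sketch of automating that alert, assuming a HotSpot JVM on Linux and a hypothetical /opt/scripts/notify-admin.sh of your own, is the JVM's OnOutOfMemoryError hook:

```shell
# Launch the Solr JVM so an OOME runs your alerting script.
# %p expands to the PID of the affected JVM; notify-admin.sh is hypothetical.
java -Xmx2g \
     -XX:OnOutOfMemoryError="/opt/scripts/notify-admin.sh %p" \
     -jar start.jar
```

The script can email you and, if you choose, restart the process; whether to auto-restart is exactly the policy question discussed above.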
Hi,
How do I verify whether a Solr core reload succeeded or not? I use Solr 4.6.
To reload the core I send the request below:
http://hostname:7090/solr/admin/cores?action=RELOAD&core=core0&wt=json
Also, is the above request synchronous (I mean, will the reload happen
before the response is
You can add a new replica, but I think you can't revive the old one.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-replicas-stop-replication-and-start-again-tp4164931p4164988.html
Sent from the Solr - User mailing list archive at Nabble.com.
Exactly.
So what good is deleting my replica if I can't then put it back? This
replica is supposed to contain data (older data, but still updated)
which I need; so I would have to delete my replica, create another one, copy
all the documents, and then add the new documents and process them,
When you hit this request in the browser:
http://localhost:8983/solr/admin/cores?action=RELOAD&core=core0
you will receive this response:
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">1316</int>
  </lst>
</response>
That means that
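If you need to check that response from code rather than the browser, here is a Java sketch; the regex approach is a simplification (a real client would use an XML parser, or request wt=json):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ReloadStatus {
    // Extract the <int name="status">...</int> value from a core-admin response.
    static int status(String xml) {
        Matcher m = Pattern.compile("<int name=\"status\">(\\d+)</int>").matcher(xml);
        if (!m.find()) throw new IllegalArgumentException("no status in response");
        return Integer.parseInt(m.group(1));
    }

    public static void main(String[] args) {
        String response =
            "<response><lst name=\"responseHeader\">"
            + "<int name=\"status\">0</int><int name=\"QTime\">1316</int>"
            + "</lst></response>";
        // status 0 means the reload request succeeded
        System.out.println(status(response)); // 0
    }
}
```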
WordBreakSolrSpellChecker offers suggestions by combining adjacent query
terms and/or breaking terms into multiple words. It is a SpellCheckComponent
enhancement, leveraging Lucene's WordBreakSpellChecker. It can detect
spelling errors resulting from misplaced whitespace without the use of
I found this very ancient bit of code; not sure it even works anymore, but you
can give it a try. The problem isn't so much sending the request (if you've
got the original query with params, you can call Solr through a plain old HTTP
request); it's parsing the response that's the tedious
Can you please provide us the exception when the shard goes out of sync ?
Please monitor the logs.
--
View this message in context:
http://lucene.472066.n3.nabble.com/unstable-results-on-refresh-tp4164913p4165002.html
Sent from the Solr - User mailing list archive at Nabble.com.
Yes, that's what I'm suggesting. It seems a perfect fit for a single shard
collection with an offsite remote that you don't always want to write to.
Michael Della Bitta
Senior Software Engineer
o: +1 646 532 3062
appinions inc.
“The Science of Influence Marketing”
18 East 41st Street
New
What are the differences in? The document count, or things like facets?
This could be important.
Also, I think there was a similar thread on the mailing list a week or
two ago, might be worth looking for it.
Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources
Hello list,
The functionality I would like to add to the existing /browse request
handler is a user interface (e.g., a web form) to collect the user's
input.
My approach is to add a JavaScript form into the Velocity template; below is
the code I added to the Velocity template (for example):
<form>
I am considering using a boost as follows:
boost=log(qty)
Where qty is the quantity in stock of a given product i.e. qty could be 0,
1, 2, 3, … etc. The problem I see is that log(0) is -Infinity. Would this be
a problem for Solr? For me it is not a problem because
log(0) < log(1) < log(2), etc.
The usual fix for this is log(1+qty). If you might have negative values, you
can use log(max(1,qty)).
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/
On Oct 20, 2014, at 3:04 PM, O. Olson olson_...@yahoo.it wrote:
I am considering using a boost as follows:
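The edge cases above can be sanity-checked with plain Java math; this only demonstrates the arithmetic, it is not Solr code:

```java
public class BoostDemo {
    public static void main(String[] args) {
        // log(0) is -Infinity, which would poison a multiplicative boost
        System.out.println(Math.log(0));               // -Infinity
        // log(1 + qty) stays finite and non-negative for qty >= 0
        System.out.println(Math.log(1 + 0));           // 0.0
        // log(max(1, qty)) also guards against qty <= 0
        System.out.println(Math.log(Math.max(1, -5))); // 0.0
    }
}
```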
Hi Folks,
Here are some of my ideas for using a shared file system with two separate Solr
Clouds (a Writer Solr Cloud and a Reader Solr Cloud).
I want to get your valuable feedback.
For a prototype, I set up two separate Solr Clouds (one for the Writer and the
other for the Reader).
Basically, the big picture of my
One possibility is to send data to one of Solr's /update handlers from your page.
It won't be straightforward unless you are POSTing a file to /update/extract,
but it would be possible for a little bit of JavaScript onSubmit to format the
data in a way amenable to Solr.
I've not done this myself but
There are a couple of issues with what you are saying here:
1) You should not expose Solr directly to the internet. Users
would be able to delete all your records and do other damage. The /browse
endpoint is there to show off what Solr can do, not to be used in
production.
2) Solr is Java, it does
Hi Jae,
Sounds a bit complicated and messy to me, but maybe I'm missing something.
What are you trying to accomplish with this approach? Which problems do
you have that are making you look for non-straight forward setup?
Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log
I guess I'm not quite sure what the point is. So can you back up a bit
and explain what problem this is trying to solve? Because all it
really appears to be doing that's not already done with stock Solr
is saving some disk space, and perhaps your reader SolrCloud
has some more cycles to
I guess the admin UI for adding docs you mentioned is the Data Import Handler.
If I understand your reply correctly, the idea is to post the JavaScript
form data from the webpage to the /update/extract handler. Thank you for
shedding some light.
--
View this message in context:
In the most recent Solr, there is a Documents page, next after the
DataImportHandler page. That's got several different ways to add a
document.
Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr
The Solr users are trustworthy and it's only for internal use. The purpose of
this form is to allow users to directly input data to be further indexed by
Solr. I am interested in this sentence from your reply: "Or you can
run Javascript to post back to Solr." Please bear with me if I ask very
Most of the JavaScript frameworks (AngularJS, etc.) allow you to post
information back to the server. If you use Gmail or Yahoo Mail or
anything else, it's JavaScript that lets you send a message.
So, if you completely trust your users, you can just have Javascript
and Solr and nothing else.
Though
What would the response be if the core reload failed due to incorrect
configuration?
Thanks,
Prathik
On Mon, Oct 20, 2014 at 11:24 PM, Ramzi Alqrainy ramzi.alqra...@gmail.com
wrote:
when you hit a request in the browser
http://localhost:8983/solr/admin/cores?action=RELOAD&core=core0
you
Thank you very much, Alex. Your reply is very informative and I really
appreciate it. I hope I will be able to help others in this forum in the
future, like you do.
--
View this message in context:
The response would be
http://lucene.472066.n3.nabble.com/file/n4165076/Screen_Shot_2014-10-21_at_7.png
--
View this message in context:
http://lucene.472066.n3.nabble.com/Verify-if-solr-reload-core-is-successful-or-not-tp4164981p4165076.html
Sent from the Solr - User mailing list archive
In my case, the ingest rate is very high (above 300K docs/sec) and data keeps
being inserted, so CPU is already a bottleneck because of indexing.
Older-style master/slave replication with HTTP or scp takes a long time to copy
big files from master to slave.
That's why I set up two separate Solr Clouds. One for
Because ~ is proximity matching. Lucene supports finding words within a
specific distance of each other.
To search for foo and bar within 4 words of each other:
"foo bar"~4
Note that for proximity searches, exact matches are proximity zero, and word
transpositions (bar foo) are proximity 1.
A query such
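For intuition only, a toy Java sketch of the kind of position check a proximity query performs; Lucene's actual sloppy-phrase matching and scoring are more involved:

```java
import java.util.Arrays;
import java.util.List;

public class ProximityDemo {
    // Toy check: do the two terms occur within `slop` word positions
    // of each other anywhere in the document? (Simplified; not Lucene's
    // real algorithm.)
    static boolean within(List<String> doc, String t1, String t2, int slop) {
        for (int i = 0; i < doc.size(); i++) {
            if (!doc.get(i).equals(t1)) continue;
            for (int j = 0; j < doc.size(); j++) {
                if (doc.get(j).equals(t2) && Math.abs(j - i) - 1 <= slop) return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> doc = Arrays.asList("foo", "x", "y", "z", "bar");
        System.out.println(within(doc, "foo", "bar", 4)); // true
        System.out.println(within(doc, "foo", "bar", 2)); // false
    }
}
```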