Upayavira
On Wed, Sep 9, 2015, at 04:12 PM, abhi Abhishek wrote:
> Hi,
>Thanks for the reply Shawn and Mugeesh. I was just trying to
>understand
> the working of Distributed Querying in SOLR.
>
> Thanks,
> Abhishek Das
>
> On Wed, Sep 9, 2015 at 8:
of its time, or try connecting a debugger (with source code), such as
Eclipse, to Solr and see where it is spending most of its time.
Just some thoughts.
Upayavira
On Wed, Sep 9, 2015, at 04:24 PM, Russell Taylor wrote:
> Hi Upayavira,
> Here are a couple examples with debugQuery set.
a filter on the main query.
I can provide examples if needed.
Upayavira
On Mon, Sep 7, 2015, at 07:21 PM, Aman Tandon wrote:
> I am currently doing boosting for 5-7 things. will it work great with
> this
> too?
>
> With Regards
> Aman Tandon
>
> On Mon, Sep 7, 2015 a
you can add bq= inside your {!synonym_edismax} section, if you wish and
it will apply to that query parser only.
Upayavira
On Mon, Sep 7, 2015, at 03:05 PM, dinesh naik wrote:
> Please find below the detail:
>
> My main query is like this:
>
> q=(((_query_:"{!synonym_
Have you tried it? I suspect your issue will be with the process of
reloading the external file rather than consuming it once loaded.
What are you using the external file for? There may be other ways; note,
for example, that external file fields don't play nicely with SolrCloud.
Upayavira
On Mon, Sep 7, 2015
ice a more reasonable value.
Upayavira
On Mon, Sep 7, 2015, at 06:21 PM, Aman Tandon wrote:
> Any suggestions?
>
> With Regards
> Aman Tandon
>
> On Mon, Sep 7, 2015 at 1:07 PM, Aman Tandon <amantandon...@gmail.com>
> wrote:
>
> > Hi Upayavira,
> >
> > Ha
eplace it?
Since 4.0, the default UpdateRequestHandler will detect the content type
and act accordingly.
What do you want your UpdateRequestHandler to do?
If you want to parse your own data format, I believe you can implement
your own ContentStreamLoader.
Upayavira
I can't answer it, but I wonder if searches related to the international
date line might help - that's where the equivalent issue is in spatial
terms.
Upayavira
On Sun, Sep 6, 2015, at 06:32 PM, O. Klein wrote:
> OK. I got most of it working.
>
> I created a worldBounds="0 -1
it?
Do you want to get the value back in a search result?
The first two can be done, the third hasn't yet been implemented in
Solr.
Upayavira
I don't have a code snippet - I just found it in the solrj source code.
As to using JSON, I'm not sure of the structure of the JSON you are
getting back, but you might find it helpful to add json.nl=map, which
changes the way named lists are returned and may be easier to parse.
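For illustration (the facet field and counts here are made up), the two renderings of a named list look roughly like:

```
// default: alternating name/value array
"facet_fields": { "cat": ["electronics", 4, "memory", 3] }

// with json.nl=map: a plain object, often easier to parse
"facet_fields": { "cat": { "electronics": 4, "memory": 3 } }
```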
Upayavira
On Fri, Sep 4
Yes, look at the one I mentioned further up in this thread, which is a
part of SolrJ: FieldAnalysisRequest
That uses the same HTTP call in the backend, but formats the result in a
Java friendly manner.
Upayavira
On Fri, Sep 4, 2015, at 05:52 AM, Ritesh Sinha wrote:
> Yeah, I got. Tha
le to help you with a solution.
Upayavira
On Thu, Sep 3, 2015, at 03:04 PM, Mohan gupta wrote:
> Folks,
>
> Really looking forward to any help on this.
>
> On Tue, Sep 1, 2015 at 8:39 PM, Mohan gupta <mohangupt...@gmail.com>
> wrote:
>
> > *Bump*
> >
yes, the URL should be something like:
http://localhost:8983/solr/images/analysis/field?wt=json=true==
Upayavira
On Thu, Sep 3, 2015, at 03:23 PM, Jack Krupansky wrote:
> The # in the URL says to send the request to the admin UI, which of
> course
> returns an HTML web page. Inst
On Thu, Sep 3, 2015, at 02:31 PM, shahper wrote:
>
> On Thursday 03 September 2015 05:48 PM, Upayavira wrote:
> >
> > On Thu, Sep 3, 2015, at 11:32 AM, shahper wrote:
> >> Hi,
> >>
> >> I have setup solr when I am clicking on logging there nothing co
.
Then, you do a q=make:toyota&fq={!mypostfilter id=12345}&rows=0
This simply tells Solr to return a numFound value (your required
position) whilst ignoring the actual documents themselves.
This would, I reckon, be as performant as a search for q=make:toyota and
wouldn't require too much coding.
Upayavira
this?
Upayavira
On Thu, Sep 3, 2015, at 08:52 PM, Renee Sun wrote:
> I will need to figure out when was last index activity on a core.
>
> I can't use [corename]/index timestamp, because it only reflects the file
> deletion or addition, not file update.
>
> I am curious if any solr
, my question is why do you want to know this timestamp? There is
probably an easier way to achieve what you are trying to do.
Upayavira
On Thu, Sep 3, 2015, at 10:39 PM, Toke Eskildsen wrote:
> Renee Sun <renee_...@mcafee.com> wrote:
> > [core]/index is a folder holding index files
A & B and shard2 returns docs B & C (letters
> denoting
> what I consider to be unique docs), can my implementation of a merge
> strategy return only docs A, B, & C, rather than A, B, B, & C?
How did you end up with document B in both shard1 and shard2? Can't you
prevent that from happening, and thus not have this issue?
Upayavira
min UI analysis tab.
It is just an HTTP call that you can replicate from Java. I see that
SolrJ has a FieldAnalysisRequest that I suspect does the very same
thing.
Hope that helps.
Upayavira
give us more information as to what version of Solr you are
using, and how you started it, we might be able to tell you where you
can find logs on the filesystem also.
Or, just search through your Solr directory for solr.log. That should
have your log info in it.
Upayavira
Do you have a predefined list of such filters?
You can do fun things with synonyms: define an ipad->tablet synonym, and
use it at query time. Filter out all non-synonym terms in your query
time analysis chain, and then use that field as a filter.
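A sketch of such a field type, assuming a synonym file mapping e.g. ipad=>tablet and a keep-word file listing the canonical terms (both file names are made up):

```xml
<fieldType name="device_categories" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- e.g. ipad => tablet, applied at query time only -->
    <filter class="solr.SynonymFilterFactory" synonyms="device_synonyms.txt"
            ignoreCase="true" expand="false"/>
    <!-- drop every query term that is not a canonical synonym target -->
    <filter class="solr.KeepWordFilterFactory" words="canonical_terms.txt"
            ignoreCase="true"/>
  </analyzer>
</fieldType>
```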
Upayavira
On Wed, Sep 2, 2015, at 09:07
you are attempting to write your signature to your ID field. That's not
a good idea. You are generating your signature from the content field,
which seems okay. Change your id to be
your 'signature' field instead of id, and something different will
happen :-)
Upayavira
On Tue, Sep 1, 2015, at 04
Take a step back. *why* do you need a blend? Can you adjust the scores
on your shards to make the normal algorithm work better for you?
Upayavira
On Mon, Aug 31, 2015, at 08:47 PM, Mohan gupta wrote:
> Hi Folks,
>
> I need to merge docs received from multiple shards via a cus
I wonder if this resolves it [1]. It has been applied to trunk, but not
to the 5.x release branch.
If you needed it in 5.x, I wonder if there's a way that particular
choice could be made configurable.
Upayavira
[1] https://issues.apache.org/jira/browse/LUCENE-6711
On Tue, Sep 1, 2015, at 02:43
your posts, and another containing your likes.
You cannot *sort* on these values, but you can include your likes into
the score, which might even be better.
If this sounds good, I can dig up some syntax for such a query.
Upayavira
On Tue, Sep 1, 2015, at 10:36 AM, sara hajili wrote:
> hi.
&
you don't need to use a dynamic field, just a normal field will work for
you. But, you *will* want to index it, and you may benefit from
docValues, so:
Upayavira
On Tue, Sep 1, 2015, at 10:59 AM, sara hajili wrote:
> my solr version is 5.2.1
> i have a question.
> if i create 2
Can you repeat the config you have for the dedup update chain?
Thx
On Tue, Sep 1, 2015, at 02:57 PM, Zheng Lin Edwin Yeo wrote:
> Hi Upayavira,
>
> Yes, I tried with a completely new index. I found that once I added the
> line below to my /update handler in solrconfig.xml, the inde
Have you tried with a completely clean index? Are you deduping, or just
calculating the signature? Is it possible dedup is preventing your
documents from indexing (because it thinks they are dups)?
On Tue, Sep 1, 2015, at 09:46 AM, Zheng Lin Edwin Yeo wrote:
> Hi Upayavira,
>
> I
e
cross-datacentre replication (CDCR) work that has been done recently
might help you. Not so sure what state that is in - I'm sure Erick can
say more!
Upayavira
it.
Upayavira
On Mon, Aug 31, 2015, at 06:46 AM, davidphilip cherian wrote:
> Hi,
>
> The below curl command worked without error, you can try.
>
> curl http://localhost:8983/solr/techproducts/update?commit=true -H
> "Content-Type: text/xml" --data-binary '<commit expungeDeletes="true"/>'
that is
of less value to you (e.g. gmail or yahoo). Then continue to
participate, and any older posts will be so low down in search results
as to not matter.
Upayavira
On Mon, Aug 31, 2015, at 09:57 AM, Roshan Agarwal wrote:
> .nabble.com is indexing each post, is it possible to delete my p
It doesn't matter which node you do it on. And, you can replace an
existing alias by just creating another one with the same name.
Upayavira
On Mon, Aug 31, 2015, at 02:04 PM, Bill Au wrote:
> Thank, Shawn. So I only need to issue the command to update the alias on
> one of th
, not sure.
The stream.body feature allows you to do an HTTP GET that has a stream
within it, but you are already doing a POST so it isn't needed.
Upayavira
your
own tool to push stuff to Solr at the same time as it goes into
Cassandra.
Upayavira
field type, otherwise it'd be
impossible to work out which rules should be considered from the
different field types.
Upayavira
- as in, adding
you to the allow list.
I have switched your subscription to full list membership - you should
get this mail to your inbox.
Upayavira
On Thu, Aug 27, 2015, at 05:39 PM, Vijaya Narayana Reddy Bhoomi Reddy
wrote:
Hi,
Sorry to spam everyone with this email.
I am not able to get emails
analysis tab does not support multi-valued fields. It only analyses a
single field value.
On Wed, Aug 26, 2015, at 05:05 PM, Erick Erickson wrote:
bq: my dog
has fleas
I wouldn't want some variant of og ha to match,
Here's where the mysterious positionIncrementGap comes in. If you
make
, is the shop open for 10 minutes either side of now. Of
course, you could use spatial for a time within a range, and it might be
a little more elegant because you can use a multivalued field to specify
the open/close ranges for your store.
Upayavira
would
do a search for 683:683.
If you have a shop that is open over Sunday night to Monday, you just
list it as open until Sunday 23:59 and open again Monday 00:00.
Would that do it?
Upayavira
Darren,
That was delightfully dense. Do you think you could unpack it a bit
more? Possibly some sample (pseudo) queries?
Upayavira
On Wed, Aug 26, 2015, at 03:02 PM, Darren Spehr wrote:
If you wanted to try a spatial approach that blended times like above,
you
could try a polygon of minimum
delightfully dense = really intriguing, but I couldn't quite
understand it - really hoping for more info
On Wed, Aug 26, 2015, at 03:49 PM, Upayavira wrote:
Darren,
That was delightfully dense. Do you think you could unpack it a bit
more? Possibly some sample (pseudo) queries?
Upayavira
.
But then, perhaps there's more to your usecase than I have so far
understood.
Upayavira
the
clustering algorithm have to work harder, and therefore it *IS* going to
take longer. Either use fewer documents, or only use the first 1000 terms
when clustering, or do your clustering offline and include the results
of the clustering into your index.
Upayavira
On Mon, Aug 24, 2015, at 04:59 AM
Are you grouping or collapsing? Look at the {!collapse} post filter and
the associated ExpandComponent, which may give you a similar outcome
(depending upon what you are trying to achieve) but with better
performance.
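As an example (the field name is hypothetical), collapsing on a group field and then expanding the top few members of each group:

```
fq={!collapse field=group_id}&expand=true&expand.rows=5
```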
Upayavira
On Mon, Aug 24, 2015, at 07:42 AM, Pavel Hladik wrote:
Nobody knows
a difference to
performance.
Upayavira
On Sun, Aug 23, 2015, at 05:32 PM, Erick Erickson wrote:
You're confusing clustering with searching. Sure, Solr can index
and search lots of data, but clustering is essentially finding ad-hoc
similarities between arbitrary documents. It must take each
To be strict about it, I'd say that TrieDateFields CANNOT be null, but
they CAN be excluded from the document.
You could then check whether or not a value exists for this field.
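For instance, with a hypothetical mydate field, an existence check is just a range query:

```
fq=mydate:[* TO *]     -> documents that have a value in mydate
fq=-mydate:[* TO *]    -> documents where mydate was excluded
```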
Upayavira
On Sun, Aug 23, 2015, at 02:55 AM, Erick Erickson wrote:
TrieDateFields can be null. Actually, just
a document id?
If you are talking about what I think you are, then that is used by the
Admin UI to implement the analysis tab. You pass in a document, and it
returns it analysed.
As Alexandre says, faceting may well get you there if you want to query
a document already in your index.
Upayavira
those cached entries.
Upayavira
On Wed, Aug 19, 2015, at 02:25 PM, wwang525 wrote:
Hi Erick,
All my queries are based on fq (filter query). I have to send the
randomly
generated queries to warm up low level lucene cache.
I went to the more tedious way to warm up low level cache without
for that
field.
Upayavira
On Wed, Aug 19, 2015, at 05:23 PM, Erick Erickson wrote:
bq: can I limit the size of the three
caches so that the RAM usage will be under control
That's exactly what the size parameter is for.
As Upayavira says, the rough size of each entry in
the filterCache
its score. So long as the
data required by the similarity is already in the index, I don't see why
changing similarity would require a re-index.
But then, who ever wrote that must have been thinking of something...
Upayavira
On Wed, Aug 19, 2015, at 05:40 PM, Tom Burton-West wrote:
Hello all
- it is just handled for you. However, there
is a performance hit there. Push content direct to the correct node
(either using implicit routing, or by replicating the compositeId hash
calculation in your client) and you'd increase your indexing throughput
significantly, I would theorise.
Upayavira
pivot facet
on it.
If you have a flow of content coming in that you cannot re-index, then
they will effectively have an empty entryDate_month field, and won't
show up in this pivot facet, but it won't otherwise break.
Upayavira
On Mon, Aug 17, 2015, at 11:38 PM, Lewin Joy (TMS) wrote:
Hi Yonik
Where is Zookeeper running? Is it running as an independent service on a
separate box?
Also, 4.0 is very old now - the code has matured a LOT since then.
Upayavira
On Tue, Aug 18, 2015, at 09:54 PM, Erick Erickson wrote:
You might be hitting: https://issues.apache.org/jira/browse/SOLR-7361
How much memory does each server have? How much of that memory is
assigned to the JVM? Is anything reported in the logs (e.g.
OutOfMemoryError)?
On Mon, Aug 17, 2015, at 12:29 PM, Modassar Ather wrote:
Hi,
I have a Solr cluster which hosts around 200 GB of index on each node and
there are 6 nodes.
are running.
Upayavira
On Mon, Aug 17, 2015, at 12:45 PM, Modassar Ather wrote:
The servers have 32g memory each. Solr JVM memory is set to -Xms20g
-Xmx24g. There are no OOM in logs.
Regards,
Modassar
On Mon, Aug 17, 2015 at 5:06 PM, Upayavira u...@odoko.co.uk wrote:
How much memory does each
, though, is a reasonable amount of thinking to get said
components right.
Upayavira
On Mon, Aug 17, 2015, at 05:19 PM, Erick Erickson wrote:
True, I haven't looked at it closely. Not sure where it is in the
priority list though.
However, I would recommend you _really_ look at _why_ you
think you
You can do what are called pseudo joins, which are equivalent to a
nested query in SQL. You get back data from one core, based upon
criteria in the other. You cannot (yet) merge the results to create a
composite document.
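A minimal sketch of the join syntax (core and field names are made up): this returns documents from the core you query, restricted by a condition evaluated against the other core.

```
q={!join from=parent_id to=id fromIndex=othercore}type:book
```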
Upayavira
On Sun, Aug 16, 2015, at 06:02 PM, Nagasharath wrote:
I exactly
You almost certainly have a non-unique ID field. Some documents are
overwritten during indexing. Try it with a clean index, and then review
the number of deleted documents (updates are a delete then insert
action). Deletes are calculated with maxDocs minus numDocs.
Upayavira
On Sun, Aug 16, 2015
://github.com/toastdriven/pysolr/pull/138
Upayavira
On Fri, Aug 7, 2015, at 11:37 PM, Erick Erickson wrote:
bq: So, How much minimum concurrent threads should I run?
I really can't answer that in the abstract, you'll simply have to
test.
I'd prefer SolrJ to post.jar. If you're not going
Use the DedupUpdateProcessor, which can compute a signature based upon
the specified fields.
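The signature-based processor chain in solrconfig.xml looks roughly like this (the signature field and source fields here are assumptions):

```xml
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <str name="signatureField">signature</str>
    <bool name="overwriteDupes">false</bool>
    <!-- fields the signature is computed from -->
    <str name="fields">name,content</str>
    <str name="signatureClass">solr.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```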
Upayavira
On Fri, Aug 7, 2015, at 03:56 PM, Davis, Daniel (NIH/NLM) [C] wrote:
I have an application that knows enough to tell me that a document has
been updated, but not which document has been
How many CPUs do you have? 100 concurrent indexing calls seems like
rather a lot. You're gonna end up doing a lot of context switching,
hence degraded performance. Dunno what others would say, but I'd aim for
approx one indexing thread per CPU.
Upayavira
On Fri, Aug 7, 2015, at 02:58 PM, Nitin
On Thu, Aug 6, 2015, at 06:56 PM, Toke Eskildsen wrote:
Upayavira u...@odoko.co.uk wrote:
Also, attempting to facet across a large number of docs is going to take
some time. Perhaps you might gain some performance benefit by sharding
your index?
One should be aware that distributed
How do you know those boost values? Do they come from the outside? Could
you put them in the index with the docs themselves? Then you can sort on
a field in the doc.
On Fri, Aug 7, 2015, at 04:40 AM, rachun wrote:
Hi all,
I'm trying to sort some docs which is about 200 or more docs.
by using
-docValues field, and the second query is faster,
then you could add the query to your static warming, look for
newSearcher in your solrconfig.xml. That will execute your query,
warming the caches used by faceting, before a new searcher is made
available for searches.
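In solrconfig.xml, the static warming entry looks roughly like this (the query and facet field are placeholders):

```xml
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="facet">true</str>
      <str name="facet.field">category</str>
    </lst>
  </arr>
</listener>
```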
Upayavira
On Thu, Aug 6, 2015
Have you looked at the collections API? It has the ability to set
properties against collections. I wonder if that'll achieve the same
thing as adding them to core.properties? I've never used it myself, but
wonder if it'll solve your issue.
Upayavira
On Thu, Aug 6, 2015, at 12:35 PM, marotosg
Also, attempting to facet across a large number of docs is going to take
some time. Perhaps you might gain some performance benefit by sharding
your index?
Upayavira
On Thu, Aug 6, 2015, at 04:48 PM, Mikhail Khludnev wrote:
On Thu, Aug 6, 2015 at 3:56 PM, Bernd Fehling
bernd.fehl...@uni
How did you trigger that exception, and can you give the full exception?
Upayavira
On Tue, Aug 4, 2015, at 09:14 PM, wwang525 wrote:
Hi Upayavira,
I have physically cleaned up the files under index directory, and
re-index
did not fix the problem.
The following is an example
). Both would be a
TrieField; you would use a copyField declaration in your schema to
duplicate the field.
Upayavira
On Wed, Aug 5, 2015, at 09:55 AM, Upayavira wrote:
How did you trigger that exception, and can you give the full exception?
Upayavira
On Tue, Aug 4, 2015, at 09:14 PM, wwang525
If you are using Java, you will likely find SolrJ the best way - it uses
serialised Java objects to communicate with Solr - you don't need to
worry about that. Just use code similar to that earlier in the thread.
No XML, no CSV, just simple java code.
Upayavira
On Wed, Aug 5, 2015, at 04:50 PM
Post your docs in sets of 1000. Create a:
List<SolrInputDocument> docs
Then add 1000 docs to it, then client.add(docs);
Repeat until your 40m are indexed.
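The batching loop is the only interesting part, so here is a self-contained sketch of just that logic; the sink callback stands in for client.add(batch) (plus an eventual commit), since the real SolrJ call needs a running Solr.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BatchIndexer {
    // Collects documents into batches of batchSize and hands each batch
    // to sink; in real SolrJ code, sink would be batch -> client.add(batch),
    // with a commit (or autocommit) once the run is complete.
    public static <T> int sendInBatches(Iterable<T> docs, int batchSize,
                                        Consumer<List<T>> sink) {
        List<T> batch = new ArrayList<>(batchSize);
        int batches = 0;
        for (T doc : docs) {
            batch.add(doc);
            if (batch.size() == batchSize) {
                sink.accept(new ArrayList<>(batch));
                batch.clear();
                batches++;
            }
        }
        if (!batch.isEmpty()) {          // flush the final partial batch
            sink.accept(new ArrayList<>(batch));
            batches++;
        }
        return batches;
    }
}
```

With 40m documents this issues 40,000 add calls of 1000 docs each, rather than 40m single-document round trips.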
Upayavira
On Wed, Aug 5, 2015, at 05:07 PM, Mugeesh Husain wrote:
filesystem are about 40 millions of document it will iterate 40 times
checking.
Upayavira
On Tue, Aug 4, 2015, at 03:06 PM, adfel70 wrote:
Hello,
I'm using solr 5.2.1
I'm running indexing of a collection with 20 shards.
around 1.7 billion docs should be indexed.
the indexer is a mapreduce job that runs on yarn, running 60 concurrent
containers.
I index
Yes, you are right - generally autocommit is a better way. If you are
doing a one-off indexing, then a manual commit may well be the best
option, but generally, autocommit is a better way.
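As a sketch, in solrconfig.xml (the times are illustrative, not recommendations):

```xml
<autoCommit>
  <!-- hard commit at most every 60s: flushes to stable storage -->
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <!-- soft commit at most every 5s: makes new docs visible to search -->
  <maxTime>5000</maxTime>
</autoSoftCommit>
```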
Upayavira
On Mon, Aug 3, 2015, at 11:15 PM, Konstantin Gribov wrote:
Upayavira, manual commit isn't a good
On Tue, Aug 4, 2015, at 06:13 PM, Mugeesh Husain wrote:
@Upayavira if i uses Solrj for indexing. autocommit or Softautocommit
will
work in case of SolJ
There are two ways to get content into Solr:
* push it in via an HTTP post.
- this is what SolrJ uses, what bin/post uses
is just a suggestion. You could
potentially port your textgen fieldType from 1.4.1 to 5.2, and it may
(depending upon the components you use) actually work.
If things fail, compare the components in your config with other
examples in the 5.2 schema.
Upayavira
, which is
intended to be a catch all field type. Or you can craft your own from
the components already available. E.g. stopword filter, etc is likely to
work okay on most languages.
What unsupported languages are you concerned with?
Upayavira
perform
better? They are the recommended field type in Solr 4+, but I'm not
aware of the performance benefits (doesn't mean there aren't!)
Trie fields give significant benefit, when used with a precisionStep,
for range queries, but that's not what you are talking about there.
Upayavira
the docValues for your field.
As has been said here before, if at all possible, with a schema change
like that, wipe the index and start again.
Upayavira
On Tue, Aug 4, 2015, at 08:25 PM, wwang525 wrote:
Hi Upayavira,
My queries has all the features: search, sorting, grouping, faceting. As
I
, you are being told to write
some Java code that happens to use the SolrJ client library for Solr.
Upayavira
On Mon, Aug 3, 2015, at 10:01 PM, Alexandre Rafalovitch wrote:
Well,
If it is just file names, I'd probably use SolrJ client, maybe with
Java 8. Read file names, split the name
you are attempting to achieve.
For garbage collection, see here for a good Solr related write-up:
http://lucidworks.com/blog/garbage-collection-bootcamp-1-0/
Upayavira
On Mon, Aug 3, 2015, at 12:29 AM, Jay Potharaju wrote:
Shawn,
Thanks for the feedback. I agree that increasing timeout might
decide for yourself which shard a doc
goes into, but it seems too late to make that decision in your case.
Upayavira
On Sun, Aug 2, 2015, at 12:40 AM, Nagasharath wrote:
Yes, shard splitting will only help in managing large clusters and to
improve query performance. In my case as index size
collection (at least, one
that is using the compositeId router (the default). If a shard is too
large, you will need to split an existing shard, which you can do with
the collections API.
It is much better though, to start with the right number of shards if at
all possible.
Upayavira
ticket?
On Sat, Aug 1, 2015, at 02:02 PM, Erick Erickson wrote:
How soon? It's pretty much done AFAIK, but the folks trying to work on
it have had their priorities re-arranged.
So I really don't have a date.
Erick
On Fri, Jul 31, 2015 at 4:59 PM, Upayavira u...@odoko.co.uk wrote:
How
that.
Upayavira
. You could also split shard1
into three parts instead, if you preferred:
shard1_0: 1m docs
shard1_1: 1m docs
shard1_2: 1m docs
shard2: 3m docs
Upayavira
On Sun, Aug 2, 2015, at 12:25 AM, Nagasharath wrote:
If my current shard is holding 3 million documents will the new subshard
after splitting
, remove that
write.lock file, then restart Solr, and I suspect you should be okay.
Upayavira
On Fri, Jul 31, 2015, at 10:41 AM, sudeepgarg wrote:
I am getting below exception in catalina.out file while doing indexing
solr
3.6
SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock
) or perhaps your documents simply have different
sizes, as Raja suggested.
I'm not really sure that what you see below is something to be overly
concerned about. Is it causing you issues?
Upayavira
On Fri, Jul 31, 2015, at 04:14 PM, Raja Pothuganti wrote:
As far as I know sharding is done
How soon? And will you be able to use them for querying, or just
faceting/sorting/displaying?
Thx!
Upayavira
On Fri, Jul 31, 2015, at 09:27 PM, Erick Erickson wrote:
And coming soon will be docvalues field updates that don't require
reindexing the whole doc.
Best,
Erick
On Jul 31, 2015 6
via user X
Upayavira
The reason is almost certainly because the query parser is splitting on
whitespace before the analysis chain gets the query - thus, each token
travels separately through your chain. Try it with quotes around it to
see if this is your issue.
Upayavira
On Thu, Jul 30, 2015, at 04:52 PM, Jack
look at Lucidworks Banana in its place.
Upayavira
that has the value of
key?
Upayavira
On Fri, Jul 10, 2015, at 04:15 PM, Mikhail Khludnev wrote:
I've heard that people use
https://issues.apache.org/jira/browse/SOLR-6234
for such purpose - adding scores from fast moving core to the bigger slow
moving one
On Fri, Jul 10, 2015 at 4:54 PM
own
request handler, and make it a part of Solr itself.
See: http://wiki.apache.org/solr/WhyNoWar
Upayavira
.
Upayavira
Be sure to be sending plain text emails, not HTML, and watch out for
things that could be considered spam. Apache mail servers do receive a
LOT of spam, so need to have relatively aggressive spam filters in
place.
Upayavira
On Thu, Jul 23, 2015, at 07:29 PM, Steven White wrote:
Hi Everyone
to have
ZK_Host=zk1,zk2,zk3/DevConfigs. I did see that you should bootstrap
the
chroot configs (znode tree) to Solr home as well.
I'm now able to run the commands and create a collection!
Thank you for all the help!
On Thu, Jul 23, 2015 at 1:24 PM, Upayavira u...@odoko.co.uk wrote
but not the other I cannot explain.
Upayavira
On Thu, Jul 23, 2015, at 08:52 PM, Tarala, Magesh wrote:
I added the explicit sort:
http://server1.domain.com:8983/solr/serviceorder_shard1_replica2/select?q=description%3Ajackshaft&fl=service_order&wt=json&indent=true&debugQuery=true&sort=score%20desc,id%20asc
i.e. sort by score, but if the scores are the same, sort by id.
Upayavira
On Thu, Jul 23, 2015, at 07:22 PM, Tarala, Magesh wrote:
I'm executing a very simple search in a 3 node cluster - 3 shards with 1
replica each. Solr version 4.10.2:
http://server1.domain.com:8983/solr
is trying the same query on the standard /select
URL (i.e. using the lucene query parser) and see whether it works there.
Remember to add debugQuery=true to see how it parses the query.
Upayavira
On Thu, Jul 23, 2015, at 02:51 PM, Joseph Obernberger wrote:
Hi Upayavira - the URL was:
http
the response back to you?
Upayavira
On Thu, Jul 23, 2015, at 01:50 PM, Aaron Gibbons wrote:
I've mainly used Oracle Java 8, but tested 7 also. Typically I'll wipe
the
machines and start from scratch before installing a different version.
The
latest attempt followed these steps exactly on each
for the record, by API I mean HTTP API, so calling the
solr instance from a browser, for example.
Upayavira
On Thu, Jul 23, 2015, at 04:07 PM, Aaron Gibbons wrote:
*When you run bin/solr you are doing that on the instance itself? *
Yes
*You show a collections API URL below. Does that fail the same