We're thinking of writing a custom request handler to do that, although the
handler will still have to query all the collections at the backend.
Will this lead to a faster response time for the user?
Regards,
Edwin
On 8 June 2015 at 00:06, Erick Erickson erickerick...@gmail.com wrote:
bq: we still need that information to be stored in a separate collection
for security reasons.
Not necessarily. I've seen lots of installations where auth tokens are
embedded in the document (say, the groups that can see the doc). Then
the front-end simply attaches fq=auth_field:(...) listing the groups each
user belongs to.
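A minimal sketch of building such a filter query on the front-end, assuming a hypothetical auth_field and plain string construction (a real client would attach the result via SolrJ's SolrQuery.addFilterQuery or the fq request parameter):

```java
import java.util.List;

public class AuthFilter {
    // Build an fq clause restricting results to documents whose
    // auth_field contains at least one of the user's groups.
    static String buildAuthFq(List<String> groups) {
        return "auth_field:(" + String.join(" OR ", groups) + ")";
    }

    public static void main(String[] args) {
        // e.g. a user belonging to the groups "staff" and "admin"
        System.out.println(buildAuthFq(List.of("staff", "admin")));
        // prints auth_field:(staff OR admin)
    }
}
```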
The reason we want to have different collections is that each of the
collections has different fields, and some collections will contain
information that is more sensitive than others.
As such, we may need to restrict access to certain collections for some
users. Although the restriction...
bq: Yup, this information will need to be collected each time the user
searches, as we want to show the number of records that match the
search query in each of the collections.
You're looking at something akin to federated search. About all you can
do is send out parallel queries to each collection and combine the results.
The query for *:* with rows=0 is only for the initial startup. When there's
a search query and filter, these need to be added to the count query, as we
want to display the total number of records in each of the collections
with respect to the query and filter.
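The per-collection count request described above can be sketched as plain URL building; the host, collection name, and field values here are illustrative, and with SolrJ you would set the same q, fq, and rows parameters on a SolrQuery instead:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class CountQuery {
    static String enc(String s) {
        return URLEncoder.encode(s, StandardCharsets.UTF_8);
    }

    // rows=0 asks Solr for numFound only (no documents), so the same
    // q and fq used for the search also drive the per-collection counts.
    static String countUrl(String base, String collection, String q, String fq) {
        return base + "/" + collection + "/select?q=" + enc(q)
                + "&fq=" + enc(fq) + "&rows=0";
    }

    public static void main(String[] args) {
        System.out.println(countUrl("http://localhost:8983/solr",
                "collection1", "title:report", "auth_field:(staff)"));
    }
}
```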
Regards,
Edwin
On 5 June 2015 at
Yup, this information will need to be collected each time the user
searches, as we want to show the number of records that match the
search query in each of the collections.
Currently I only have 6 collections, but it could increase to hundreds of
collections in the future. So I'm
I'm not so sure this is as bad as it sounds. When your collection is
sharded, no single node knows about the documents in other shards/nodes,
so to find the total number, a query will need to go to every node.
Trying to work out something to do a single request to every node and
combine their results...
On 6/5/2015 7:00 AM, Upayavira wrote:
I'm not so sure this is as bad as it sounds. When your collection is
sharded, no single node knows about the documents in other shards/nodes,
so to find the total number, a query will need to go to every node.
Trying to work out something to do a single request to every node and
combine their results...
Have you considered spawning a bunch of threads, one per collection
and having them all run in parallel?
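Erick's suggestion can be sketched with an ExecutorService, one task per collection. In a real deployment each task would issue a rows=0 SolrJ query and read numFound from the response; here a stubbed count function stands in so the fan-out logic is self-contained:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

public class ParallelCounts {
    // Submit one count task per collection so they run in parallel,
    // then collect the results in submission order.
    static Map<String, Long> countAll(List<String> collections,
                                      Function<String, Long> countFn) {
        ExecutorService pool = Executors.newFixedThreadPool(collections.size());
        try {
            Map<String, Future<Long>> futures = new LinkedHashMap<>();
            for (String c : collections)
                futures.put(c, pool.submit(() -> countFn.apply(c)));
            Map<String, Long> counts = new LinkedHashMap<>();
            for (Map.Entry<String, Future<Long>> e : futures.entrySet()) {
                try {
                    counts.put(e.getKey(), e.getValue().get());
                } catch (InterruptedException | ExecutionException ex) {
                    throw new RuntimeException(ex);
                }
            }
            return counts;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        // Stub: pretend every collection holds 100 docs.
        System.out.println(countAll(List.of("c1", "c2"), c -> 100L));
        // prints {c1=100, c2=100}
    }
}
```

The total latency then approaches that of the slowest single collection rather than the sum over all collections.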
Best,
Erick
On Thu, Jun 4, 2015 at 4:52 PM, Zheng Lin Edwin Yeo
edwinye...@gmail.com wrote:
I'm trying to write a SolrJ program in Java to read and consolidate all the
information into a JSON file. The client will just need to call this SolrJ
program and read this JSON file to get the details. But the problem is we
are still querying Solr once for each collection, just that this time...
The reason we wanted to do a single call is to improve the performance,
as our application needs to list the total number of records in each of
the collections, and the number of records that match the query in each of
the collections.
Currently we are querying each collection one by one to get these counts.
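Once the per-collection counts are collected, consolidating them into the JSON the client reads could look like the following. This hand-rolls JSON for numeric values only; a real program would use a JSON library, and the collection names are illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class CountsJson {
    // Turn the per-collection totals into a single JSON object,
    // e.g. {"collection1": 120, "collection2": 45}.
    static String toJson(Map<String, Long> counts) {
        return counts.entrySet().stream()
                .map(e -> "\"" + e.getKey() + "\": " + e.getValue())
                .collect(Collectors.joining(", ", "{", "}"));
    }

    public static void main(String[] args) {
        Map<String, Long> counts = new LinkedHashMap<>();
        counts.put("collection1", 120L);
        counts.put("collection2", 45L);
        System.out.println(toJson(counts));
        // prints {"collection1": 120, "collection2": 45}
    }
}
```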
Not in a single call that I know of. These are really orthogonal
concepts. Getting the cluster status merely involves reading the
Zookeeper clusterstate, whereas getting the total number of docs for
each would involve querying each collection, i.e. going to the Solr
nodes themselves. I'd guess it's...
Hi,
Would like to check: are we able to use the Collections API or any other
method to list all the collections in the cluster, together with the number
of records in each of the collections, in one output?
Currently, I only know of the List Collections command,
/admin/collections?action=LIST. However, this does not include the number
of records.
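For reference, the LIST action is a plain HTTP GET against the Solr base URL (the host and port here are assumptions); it returns the collection names only, so the record counts still require one query per collection:

```java
public class ListCollections {
    // The Collections API LIST action reports collection names, not
    // document counts, so counts need separate per-collection queries.
    static String listUrl(String base) {
        return base + "/admin/collections?action=LIST&wt=json";
    }

    public static void main(String[] args) {
        System.out.println(listUrl("http://localhost:8983/solr"));
        // prints http://localhost:8983/solr/admin/collections?action=LIST&wt=json
    }
}
```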