View this message in context:
http://lucene.472066.n3.nabble.com/Multiple-Collections-in-one-Zookeeper-tp4045936.html
Sent from the Solr - User mailing list archive at Nabble.com.
On Jan 25, 2013, at 1:56 PM, Walter Underwood wrote:
> I started out doing separate Zookeeper loads and linkconfigs, but backed off
> to bootstrapping after a total lack of success. I'll try that again (in my
> copious free time), because that seems like the right approach for
> production.
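For reference, the non-bootstrap approach Walter mentions usually means uploading each config set to ZooKeeper and linking it to its collection by hand. A minimal sketch, assuming Solr 4.x's cloud-scripts zkcli.sh; the ensemble address, paths, and names here are placeholders, not values from this thread:

```shell
# Sketch of the "separate Zookeeper loads and linkconfigs" approach.
# ZKHOST, CONF_DIR and CONF_NAME are assumptions, not values from the thread.
ZKHOST="zk1:2181,zk2:2181,zk3:2181"
CONF_DIR="/opt/solr/configs/collection1/conf"
CONF_NAME="collection1conf"

# Upload the config set, then link it to its collection:
UPLOAD="zkcli.sh -zkhost $ZKHOST -cmd upconfig -confdir $CONF_DIR -confname $CONF_NAME"
LINK="zkcli.sh -zkhost $ZKHOST -cmd linkconfig -collection collection1 -confname $CONF_NAME"
echo "$UPLOAD"
echo "$LINK"
```

Once the configs are linked in ZooKeeper this way, the nodes themselves no longer need per-collection conf directories on disk.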
… to start and sync if I give them a -Dsolr.solr.home pointing to a directory with a solr.xml and subdirectories with configuration for each collection. If I don't do that, they look for solr/solr.xml, then fail. But what is the point of putting configs in ZooKeeper if each host needs a copy anyway?

The wiki does not have an example of how to start a cluster with multiple collections.

Am I missing something here?
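As a point of comparison, a minimal sketch of starting one Solr 4.x node with an explicit solr home and a ZooKeeper ensemble; the paths and the ensemble address are assumptions, not values from this thread:

```shell
# Hypothetical start command for one SolrCloud 4.x node.
SOLR_HOME="/opt/solr/home"            # contains solr.xml (assumed path)
ZKHOST="zk1:2181,zk2:2181,zk3:2181"   # assumed ensemble address
CMD="java -Dsolr.solr.home=$SOLR_HOME -DzkHost=$ZKHOST -jar start.jar"
echo "$CMD"
```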
I am a beginner at SolrCloud. I did recently get 4.1 running, though.
With help from Mark Miller, what I did was set up a basic solr.xml based
on the example, but with zero cores. I did add a sharedLib parameter
beca…
Thanks Mark!
Cheers, Jeeva
On Oct 19, 2012, at 8:35 AM, Mark Miller wrote:
Yes, those exceptions are fine. These are cases where we try to delete the
node if it's there, but don't care if it's not there - things like that. In
some of these cases, ZooKeeper logs things we can't stop, even though it's
expected that sometimes we will try and remove nodes that are not there…
Per Steffensen wrote:
Hi

Due to what we have seen in recent tests, I am in doubt about how Solr search is actually supposed to behave.

* Searching with "distrib=true&q=*:*&rows=10&collection=x,y,z&sort=timestamp asc"
** Is Solr supposed to return the 10 documents with the lowest timestamp across all documents in all slices…
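For concreteness, the query Per describes could be issued against any one node like this; the host, port, and core name in the base URL are assumptions:

```shell
# Build the multi-collection, sorted, distributed query from the thread.
BASE="http://localhost:8983/solr/x/select"
PARAMS='distrib=true&q=*:*&rows=10&collection=x,y,z&sort=timestamp+asc'
QUERY="${BASE}?${PARAMS}"
echo "$QUERY"   # run it with: curl -s "$QUERY"
```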
… We should have this soon.

- Mark

On May 23, 2012, at 2:36 PM, Daniel Brügge wrote:
Hi,

I am creating several cores using the following script. I use this for testing SolrCloud and to learn about the distribution of multiple collections.

max=500
for ((i=2; i<=$max; ++i )) ;
do
  curl "http://solrinstance1:8983/solr/admin/cores?action=CREATE&name=…"
done
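The truncated loop might be completed along these lines; only `action=CREATE` and `name=` survive in the archive, so the core names and everything after `name=` are assumptions (written as a POSIX while loop so it runs in plain sh):

```shell
# Hedged completion of the core-creation loop above. The collectionN
# naming scheme is an assumption, not from the thread.
SOLR="http://solrinstance1:8983/solr"
max=5    # the original used max=500
i=2
while [ "$i" -le "$max" ]; do
  URL="${SOLR}/admin/cores?action=CREATE&name=collection${i}"
  echo "$URL"   # replace echo with: curl -s "$URL"
  i=$((i+1))
done
```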
> … ACTIVE if overseer does not hear from that node within the wait time.
> Something similar to heartbeats in several other systems.
The /live_nodes stuff does use a heartbeat. That's why we use it as we do in
combination with the state.
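Mark's point can be checked directly: each live node registers an ephemeral znode under /live_nodes, which ZooKeeper removes automatically when the node's session (kept alive by heartbeats) expires. A sketch of inspecting this with the stock ZooKeeper CLI; the ensemble address and node name are assumptions:

```shell
# Inspect SolrCloud's per-node ephemeral entries with the ZooKeeper CLI:
#   zkCli.sh -server zk1:2181       (assumed ensemble address)
#   ls /live_nodes                  # one ephemeral znode per live Solr node
#   get /clusterstate.json          # the state the overseer maintains
NODE_PATH="/live_nodes/solrinstance1:8983_solr"   # typical 4.x znode name format
echo "$NODE_PATH"
```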
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/SolrCloud-Programmatically-create-multiple-collections-tp3916927p3944327.html
- Mark Miller
lucidimagination.com
… but for some strange reason it's not updating the clusterstate.json thing.
Has this already been reported? Or is there something that I am missing?

Thanks!
Ravi

--
View this message in context:
http://lucene.472066.n3.nabble.com/SolrCloud-Programmatically-create-multiple-collections-tp3916927p3924698.html
… way I want to organise my indexes? Am I missing something very obvious?

Thanks!
Ravi

--
View this message in context:
http://lucene.472066.n3.nabble.com/SolrCloud-Programmatically-create-multiple-collections-tp3916927p3916927.html
Hello,

I'm yet another new solr user and I'll confess that I haven't read the documentation in great depth, but I hope someone can at least point me in the right direction.

I have an application that manages documents in real-time into collections, where a given document can live in more than one collection and multiple users can create collections on the fly. I get from reading that it's better to have a single index over all documents than to have one per…
: different field sets under solr. Would I have to have multiple
: implementations of solr running, or can I have more than one schema.xml
: file per "collection" ?

Currently the only supported way to do this is to run multiple instances of the solr.war ... if you look at the various container spec…
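A minimal sketch of that arrangement, running two independent instances each with its own solr home (and hence its own schema.xml); the ports and paths are assumptions:

```shell
# Two separate Solr instances, one schema each (hypothetical paths/ports).
CMD_A="java -Djetty.port=8983 -Dsolr.solr.home=/opt/solr/home-a -jar start.jar"
CMD_B="java -Djetty.port=8984 -Dsolr.solr.home=/opt/solr/home-b -jar start.jar"
echo "$CMD_A"
echo "$CMD_B"
```

Each home directory carries its own conf/schema.xml, so the two instances can index completely different field sets.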
I was wondering how I might create multiple collections that have
different field sets under solr. Would I have to have multiple
implementations of solr running, or can I have more than one schema.xml
file per "collection" ?
Thanks
Andrew