Re: Problems with special characters

2007-03-22 Thread Thierry Collogne

Thanks. I made some modifications to SolrQuery.java to allow highlighting. I
will post the code on

http://issues.apache.org/jira/browse/SOLR-20



On 22/03/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:


we didn't use it, but i took a quick look :

you need to implement the "hl=on" attribute in the getQueryString() method
of the SolrQueryImpl

the result docs already contain highlighting, that's why you found
processHighlighting in the ResultsParser

good luck !
m
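The change described above amounts to appending the highlight parameter when the query string is built. A minimal sketch of the idea (the class and method names here are illustrative, not the actual SolrQueryImpl code):

```java
// Illustrative sketch only: append Solr's "hl=on" switch to a query string,
// as suggested above. Not the actual SolrQueryImpl implementation.
public class HighlightParamDemo {

    /** Return the query string with highlighting enabled (idempotent). */
    static String withHighlighting(String queryString) {
        // Leave the string alone if an hl parameter is already present.
        return queryString.contains("hl=") ? queryString : queryString + "&hl=on";
    }

    public static void main(String[] args) {
        System.out.println(withHighlighting("q=solr&wt=xml")); // q=solr&wt=xml&hl=on
    }
}
```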




"Thierry Collogne" <[EMAIL PROTECTED]>
21/03/2007 17:04
Please respond to
solr-user@lucene.apache.org


To
solr-user@lucene.apache.org
cc

Subject
Re: Problems with special characters






Thank you. When I add the code you described, the Solr Java Client works.
One more question about the Solr Java Client.

Does it allow the use of highlighting? I found a processHighlighting method
in ResultsParser.java, but I can't find a way of enabling it.

Did you use highlighting?

On 21/03/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]>
wrote:
>
> hey,
>
> we had the same problem with the Solr Java Client ...
>
> they forgot to put UTF-8 encoding on the stream ...
>
> i posted our fix on http://issues.apache.org/jira/browse/SOLR-20
> it's this post :
> http://issues.apache.org/jira/browse/SOLR-20#action_12478810
> Frederic Hennequin [07/Mar/07 08:27 AM]
>
> grts,m
>
>
>
>
>
> "Bertrand Delacretaz" <[EMAIL PROTECTED]>
> Sent by: [EMAIL PROTECTED]
> 21/03/2007 11:19
> Please respond to
> solr-user@lucene.apache.org
>
>
> To
> solr-user@lucene.apache.org
> cc
>
> Subject
> Re: Problems with special characters
>
>
>
>
>
>
> On 3/21/07, Thierry Collogne <[EMAIL PROTECTED]> wrote:
>
> > I used the new jar file and removed -Dfile.encoding=UTF-8 from my jar
> call
> > and the problem isn't there anymore...
>
> ok, thanks for the feedback!
>
> -Bertrand
>
>
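The encoding bug discussed in this thread comes down to writing the request body with the platform default charset instead of declaring UTF-8. A hedged sketch of that kind of fix (names are illustrative; the actual patch is on SOLR-20):

```java
// Sketch of the fix idea referenced above: wrap the output stream in a
// writer with an explicit UTF-8 charset rather than relying on the platform
// default. Names are illustrative, not the actual SOLR-20 patch.
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.UncheckedIOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;

public class Utf8StreamDemo {

    /** Encode an XML update message explicitly as UTF-8 bytes. */
    static byte[] toUtf8Bytes(String xml) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (Writer w = new OutputStreamWriter(buf, StandardCharsets.UTF_8)) {
            w.write(xml); // the key point: the charset is stated, not inherited
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for an in-memory stream
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) {
        // "\u00e9" (é) is one char but two bytes in UTF-8: 22 ASCII chars + 2 = 24.
        System.out.println(toUtf8Bytes("<add><doc>\u00e9</doc></add>").length);
    }
}
```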




Re: how to use snappuller

2007-03-22 Thread James liu

Do I need to set up rsyncd.conf? If so, how should I configure it so that snappuller works?
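For reference, snappuller pulls snapshots from the master over rsync, so the master needs a working rsync daemon. A minimal standalone rsyncd.conf sketch; the module name, paths, and uid/gid below are illustrative guesses, not taken from this setup (Solr's bundled rsyncd-start script normally handles this for you):

```
# Illustrative standalone rsyncd.conf -- values are examples only.
# uid/gid must name a user and group that actually exist on the master;
# an "@ERROR: invalid gid" like the one in the log means they did not
# (on BSD-style systems the root group is "wheel", not "root").
uid = root
gid = wheel

[solr]
    path = /tmp/solr1/data
    comment = Solr index snapshots
    read only = yes
```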



2007/3/22, James liu <[EMAIL PROTECTED]>:


I thought it would work, but it failed.


[root@ /tmp/solr1]# bash bin/snappuller-enable
> [root@ /tmp/solr1]# bash bin/snappuller-enable -u root
> [root@ /tmp/solr1]# bash bin/snappuller -M 192.168.7.56 -D data -S data
> -u root
> Password:
> Password:
> Password:
> usage: date [-jnu] [-d dst] [-r seconds] [-t west] [-v[+|-]val[ymwdHMS]]
> ...
> [-f fmt date | [cc]yy]mm]dd]HH]MM[.ss]] [+format]
> usage: date [-jnu] [-d dst] [-r seconds] [-t west] [-v[+|-]val[ymwdHMS]]
> ...
> [-f fmt date | [cc]yy]mm]dd]HH]MM[.ss]] [+format]
> Password:
> @ERROR: invalid gid root
> rsync error: error starting client-server protocol (code 5) at main.c(1383)
> [receiver=2.6.9]
> usage: date [-jnu] [-d dst] [-r seconds] [-t west] [-v[+|-]val[ymwdHMS]]
> ...
> [-f fmt date | [cc]yy]mm]dd]HH]MM[.ss]] [+format]
> usage: date [-jnu] [-d dst] [-r seconds] [-t west] [-v[+|-]val[ymwdHMS]]
> ...
> [-f fmt date | [cc]yy]mm]dd]HH]MM[.ss]] [+format]
> expr: syntax error
> Password:
> [root@ /tmp/solr1]# ls data/
>
>



--
regards
jl


Re: Problems with special characters

2007-03-22 Thread Maarten . De . Vilder
nice one !






Re: Problems with special characters

2007-03-22 Thread Thierry Collogne

Thanks. Did you also try using it?





Re: Problems with special characters

2007-03-22 Thread Maarten . De . Vilder
No, I didn't try to use it (we don't use Solr to display the results).
The only thing our Solr server returns is IDs, so there is nothing to
put highlights on.

but the code doesn't look half bad :)
let's hope the client developers pick up on it :)







bzr branches for Apache Lucene/Nutch/Solr/Hadoop at Launchpad

2007-03-22 Thread rubdabadub

Hi:

First of all, apologies to those of you who follow all the lists.

I often work offline, and I don't have commit rights to any of these
projects. Keeping the modifications I make for various clients in sync
with the latest trunk makes it difficult for me to stick with Subversion
alone. I have heard many things about distributed revision control
systems, and while I'm sure there are tricks/fixes for the Subversion
problem I mentioned above, I also wanted to learn something new :-) So
after trials with several DRCS tools I have decided to go with Bazaar.
It's a really cool DRCS.. you should try it.

http://bazaar-vcs.org/.

Since SVN is a centralized RCS and bzr is a DRCS, one needs to convert
the SVN repositories to bzr repositories. Conveniently enough, there is
a free VCS mirroring service at Launchpad:

https://launchpad.net/

So now the following projects are available via bzr branch. You can
access them here.

Nutch - https://launchpad.net/nutch
Solr - https://launchpad.net/solr
Lucene - https://launchpad.net/lucene
Hadoop - https://launchpad.net/hadoop

It only mirrors "trunk". That's all I need to follow, and I don't see
any reason to mirror releases.

Regards


Setting "Solr Home" via JNDI on Tomcat Bundled with JBoss

2007-03-22 Thread Theodan

Hello.

I apologize in advance if I've overlooked something obvious, but I've been
trying for over a day to set up Solr to run on the Tomcat 5.5 that's bundled
with JBoss AS 4.0.5 GA.  There is plenty of help on the Solr Wiki about
setting it up on standalone Tomcat 5.5, but none for the bundled version.

Ideally, it should be a similar process, but it doesn't seem to be.  Tomcat
Bundled is quite different from Tomcat Standalone.  There is no "webapps"
deployment folder or "\conf\Catalina\localhost" config folder, as
referred to on the "Solr Tomcat" wiki page, and in fact the entire folder
structure is quite different, due to JBoss' treatment of Tomcat as just
another service.

I've actually gotten past all of the hurdles, except for figuring out how to
set up "Solr Home" via JNDI so that the bundled Tomcat will see the binding.  I
have tried various approaches:

1. Based on instructions at "http://wiki.apache.org/solr/SolrTomcat";, I
created an XML fragment:
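(The XML fragment itself was swallowed by the mail archive. For reference, the context fragment on the SolrTomcat wiki page looks roughly like this; the docBase and value paths are placeholders, not the poster's actual ones:)

```xml
<!-- Context fragment in the style of the SolrTomcat wiki page;
     docBase and value are placeholder paths. -->
<Context docBase="/path/to/solr.war" debug="0" crossContext="true">
  <Environment name="solr/home" type="java.lang.String"
               value="/path/to/solr/home" override="true"/>
</Context>
```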



and tried placing it in various promising places:
- jboss-4.0.5.GA/server/default/deploy//META-INF/context.xml
- jboss-4.0.5.GA/server/default/deploy//META-INF/solr.xml
- jboss-4.0.5.GA/server/default/deploy/jbossweb-tomcat55.sar/context.xml
- jboss-4.0.5.GA/server/default/deploy/jbossweb-tomcat55.sar/solr.xml
-
jboss-4.0.5.GA/server/default/deploy/jbossweb-tomcat55.sar/conf/context.xml
- 
jboss-4.0.5.GA/server/default/deploy/jbossweb-tomcat55.sar/conf/solr.xml

2. Based on instructions at
"http://wiki.jboss.org/wiki/Wiki.jsp?page=JNDIBindingServiceMgr";, I copied
the sample JNDIBindingServiceMgr , and customized it with the
following JNDI binding:

C:\Dev\MyTestSolrHome

and tried placing it in
"jboss-4.0.5.GA\server\default\conf\jboss-service.xml", under the "JNDI"
section.

None of these approaches worked, and during Solr startup I still get the
following in the logs:

==
16:56:33,532 ERROR [STDERR] Mar 21, 2007 4:56:33 PM
org.apache.solr.servlet.SolrServlet init
INFO: SolrServlet.init()
16:56:33,532 ERROR [STDERR] Mar 21, 2007 4:56:33 PM
org.apache.solr.servlet.SolrServlet init
INFO: No /solr/home in JNDI
16:56:33,532 ERROR [STDERR] Mar 21, 2007 4:56:33 PM
org.apache.solr.servlet.SolrServlet init
INFO: user.dir=C:\Dev\jboss-4.0.5.GA\bin
16:56:33,610 ERROR [STDERR] Mar 21, 2007 4:56:33 PM
org.apache.solr.core.Config getInstanceDir
INFO: Solr home defaulted to 'solr/'
==

Can anyone please help?

Thanks,
-Dan
-- 
View this message in context: 
http://www.nabble.com/Setting-%22Solr-Home%22-via-JNDI-on-Tomcat-Bundled-with-JBoss-tf3448553.html#a9617997
Sent from the Solr - User mailing list archive at Nabble.com.



Re: Thank you...

2007-03-22 Thread Kevin Lewandowski

> Thanks for sharing the info, Cass.  Is eBay still using Texis? (this used
> to be obvious from eBay's URLs a few years ago).  I used Texis with their
> Vortex script before Lucene was born.


I'd guess no. I read a PDF about ebay's architecture a few months ago
and it said all of the search stuff was custom.


Re: Tiny term boost with an e in it

2007-03-22 Thread Brian Whitman


On Mar 21, 2007, at 4:22 PM, Yonik Seeley wrote:

> > For each TermQuery, I do tq.setBoost( score ); where score is a float
> > my app generates. This usually works except when the numbers get real
> > small, like "2.712607e-4" that I just encountered. Solr seems to
> > destroy the e bit, turning the boost into 2.712607 and adding a
> > default field search for "e 4".
>
> I assume you are using the QueryParser? I don't believe the Lucene
> query parser supports scientific notation for boosts.



Yes, that was the case. It was the toString of Query putting in the E  
and then the QueryParser not handling it. Not a Solr issue, sorry for  
the clutter!


-Brian



Re: multiple indexes

2007-03-22 Thread Mike Klaas

On 3/22/07, Kevin Osborn <[EMAIL PROTECTED]> wrote:

> Here is an issue that I am trying to resolve. We have a large catalog of
> documents, but our customers (several hundred) can only see a subset of
> those documents. And the subsets vary in size greatly. And some of these
> customers will be creating a lot of traffic. Also, there is no way to map
> the subsets to a query. The customer either has access to a document or
> they don't.
>
> Has anybody worked on this issue before? If I use one large index and do
> the filtering in my application, then Solr will be serving a lot of
> useless documents. The counts would also be screwed up for facet queries.
> Is the best solution to extend Solr and do the filtering there?
>
> The other potential solution is to have one index per customer. This
> would require one instance of the servlet per index, correct? It just
> seems like this would require a lot of hardware and complexity
> (configuring the memory of each servlet instance to index size and
> traffic).


Why not create a multivalued field that stores the customer perms?
add has_access:cust1 has_access:cust2, etc to the document at index
time, and turn this into a filter query at query time?

-Mike
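A sketch of what that suggestion looks like from the client side (the field name "has_access" and the customer ids are invented for illustration; the field itself would be declared multivalued in schema.xml):

```java
// Illustrative sketch of the suggestion above: index a multivalued
// "has_access" field per document, then constrain every search with a
// filter query. Field name and customer ids are invented examples.
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class AccessFilterDemo {

    /** Build the query-string part of a Solr select request with an access filter. */
    static String buildQuery(String userQuery, String customerId) {
        return "q=" + URLEncoder.encode(userQuery, StandardCharsets.UTF_8)
             + "&fq=" + URLEncoder.encode("has_access:" + customerId, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // The fq clause restricts both results and facet counts to cust42's documents.
        System.out.println(buildQuery("solr replication", "cust42"));
    }
}
```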


Re: How to assure a permanent index.

2007-03-22 Thread Mike Klaas

On 3/22/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

> well, yes indeed :)
> but i do think it is easier to put up synchronisation for deleted
> documents as well
> clearing the whole index is kind of overkill
>
> when you do this :
> * delete all documents
> * submit all documents
> * commit
> you should also keep in mind that Solr will do an autocommit after a
> certain number of documents ...


Solr should only do so if you explicitly configured it as such.

Regardless, if you are rebuilding the index from scratch, delete-all
then rebuild is a dangerous and less efficient method compared to
creating a new index and pointing Solr to the new index once it is
completed.

-Mike


Re: Thank you...

2007-03-22 Thread Cass Costello

I haven't heard anyone say "Texis" in any discussion since the acquisition.
I'll ask about it specifically next week.

On 3/22/07, Kevin Lewandowski <[EMAIL PROTECTED]> wrote:


> > Thanks for sharing the info, Cass.  Is eBay still using Texis? (this
> > used to be obvious from eBay's URLs a few years ago).  I used Texis
> > with their Vortex script before Lucene was born.
>
> I'd guess no. I read a PDF about ebay's architecture a few months ago
> and it said all of the search stuff was custom.





--
A complex system that works is invariably found to have evolved from a
simple system that works.
 - John Gaule


Re: How to assure a permanent index.

2007-03-22 Thread Mike Klaas

On 3/22/07, Thierry Collogne <[EMAIL PROTECTED]> wrote:

> And how would you do that? Create a new index and point Solr to the new
> index?


I don't think that is possible without restarting Solr.

You could have two Solr webapps and alternate between the two,
pointing your app at one and building on the other, then switching.

Another possibility is to build the index on a master and use
snappuller to install it on the slave (I'll admit that I've never used
replication, so I don't know how it handles the deletion of all
segments).

-Mike


Re: How to assure a permanent index.

2007-03-22 Thread Thierry Collogne

And how would you do that? Create a new index and point solr to the new
index?




Re: How to assure a permanent index.

2007-03-22 Thread Thierry Collogne

Ok. Thanks.




Re: multiple indexes

2007-03-22 Thread Chris Hostetter

: Why not create a multivalued field that stores the customer perms?
: add has_access:cust1 has_access:cust2, etc to the document at index
: time, and turn this into a filter query at query time?

this can be a particularly effective solution when the permissions don't
change at all .. the ideal solution is where each doc is "owned" by one
and only one customer, but either way it's a matter of listing all of the
customers that have access to the document in a field, and filtering on
it. -- for a few hundred customers it's not a lot of work to cache those
filters; autowarming will help ensure that it's efficient.

this approach doesn't scale particularly well to the tens of thousands
of "users" that might search your site, but at that point you have to
start thinking about how you model the "access" in your underlying
datamodel ... odds are you have some concept of "public" documents versus
"private" documents, and the private documents might have Access Control
Lists based on "groups", and you can filter on that type of information
instead.



-Hoss



Re: How to assure a permanent index.

2007-03-22 Thread Chris Hostetter

: > And how would you do that? Create a new index and point solr to the new
: > index?
:
: I don't think that is possible without restarting  Solr.

: Another possibility is to build the index on a master and use
: snappuller to install it on the slave (I'll admit that I've never used

that's pretty much the same thing, just referred to in different ways.  I
think the CollectionBuilding wiki is just talking about how you can build
a new index with an incompatibly different schema.xml on a separate Solr
port and then manually put it into place on your primary query port with a
quick bounce -- allowing very short downtime.

if you're replacing your index but the schema is still the same, it's
really just a snappulling situation.

: replication and so don't know how it handles the deletion of all
: segments).

it works fine ... from a replication standpoint, doing a full rebuild like
this where you delete everything and then re-add it is no worse than
doing an optimize ... all of the files the slave used to have go away, and
you push out all new files.


-Hoss



Re: How to assure a permanent index.

2007-03-22 Thread Thierry Collogne

Where can I find some information about snappulling?
