So, my apologies for the duplicate comments. I went to get proof of
duplicates and was confused, as we apparently now have duplicates across
different shards in our distributed setup (a bug on our end). I assumed
when I saw duplicates that it was the same problem as last time. That still
doesn't help me with my segment corruption problem, though :(

Michael, in answer to your question: Java 1.6 64-bit, Debian Linux, Amazon
EC2 machine with the index on an Elastic Block Store volume. We've had no
other problems with that setup for a few months now.

I ran CheckIndex with -fix, and optimize still throws the same error.
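
For reference, the invocation was along these lines (the jar version and
index path here are placeholders for our environment, not the exact ones):

  # jar version and index path are placeholders
  java -cp lucene-core-2.4.0.jar org.apache.lucene.index.CheckIndex /vol/solr/data/index -fix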


On Fri, Jan 2, 2009 at 5:26 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:

> Also, this (the Solr server going down during an add) should not be able
> to cause this kind of corruption.
> Mike
>
> Yonik Seeley <ysee...@gmail.com> wrote:
>
> > On Fri, Jan 2, 2009 at 3:47 PM, Brian Whitman <br...@echonest.com> wrote:
> > > I will, but I bet I can guess what happened -- this index has many
> > > duplicates in it as well (same uniqueKey id multiple times) - this
> > > happened to us once before, and it was because the Solr server went
> > > down during an add.
> >
> > That should no longer be possible with Solr 1.3, which uses Lucene for
> > handling the duplicates in a transactional manner.
> >
> > -Yonik
> >
