Re: Background merge errors with Solr 4.4.0 on Optimize call

2013-11-02 Thread Erick Erickson
See: https://issues.apache.org/jira/browse/SOLR-5418

Thanks Matthew and Robert! I'll see if I can get to this this weekend.

Re: Background merge errors with Solr 4.4.0 on Optimize call

2013-10-30 Thread Erick Erickson
Robert:

Thanks. I'm on my way out the door, so I'll have to put up a JIRA with your
patch later if it hasn't been done already.

Erick


Re: Background merge errors with Solr 4.4.0 on Optimize call

2013-10-29 Thread Robert Muir
I think it's a bug, but that's just my opinion. I sent a patch to dev@
for thoughts.


Re: Background merge errors with Solr 4.4.0 on Optimize call

2013-10-29 Thread Erick Erickson
Hmmm, so you're saying that merging indexes where a field
has been removed isn't handled. So you have some documents
that do have a "what" field, but your schema doesn't have it,
is that true?

It _seems_ like you could get by by putting the _what_ field back
into your schema, just not sending any data to it in new docs.
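
As a rough sketch (the field type and attributes here are placeholders; use
whatever your original definition had), re-declaring it in schema.xml would
look something like:

    <!-- hypothetical re-declaration of the removed field; match your old type/attributes -->
    <field name="what" type="text_general" indexed="true" stored="true"/>

and then reload the core so the old segments merge against a schema that
still knows about the field.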

I'll let others who understand merging better than I do chime in on
whether this is a case that should be handled or a bug. I pinged the
dev list to see what the opinion is.

Best,
Erick


Re: Background merge errors with Solr 4.4.0 on Optimize call

2013-10-28 Thread Matthew Shapiro
Sorry for reposting right after I just sent a reply, but I looked at the
error trace more closely and noticed:


   1. Caused by: java.lang.IllegalArgumentException: no such field what


The 'what' field was removed at the request of the customer, as they wanted
the logic behind what gets queried in the "what" field to be on the code side
instead of the Solr side (for easier changes without having to re-index
everything). I didn't feel strongly either way, and since they are paying me,
I took it out.

This makes me wonder if it's crashing while merging because a field that
used to be there is now gone.  However, this seems odd to me, as Solr
isn't even letting me delete the old data; instead it's leaving my
collection in an extremely bad state, and the only remedy I can think of
is to nuke the index at the filesystem level.

If this is indeed the cause of the crash, is the only way to delete a field
to completely empty your index first?
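
(If it comes to that, I assume emptying it would just be a delete-by-query
against the update handler, something like the following with my actual host
and core name substituted:

    curl 'http://localhost:8983/solr/collection1/update?commit=true' \
      -H 'Content-Type: text/xml' \
      --data-binary '<delete><query>*:*</query></delete>'

but I'd rather understand the failure first.)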


Re: Background merge errors with Solr 4.4.0 on Optimize call

2013-10-28 Thread Matthew Shapiro
Thanks for your response.

You were right, Solr is logging to the catalina.out file for Tomcat.  When
I click the optimize button in Solr's admin interface, the following logs
are written: http://apaste.info/laup

About JVM memory, Solr's admin interface lists JVM memory at 3.1%
(221.7MB dark grey, 512.56MB light grey, and 6.99GB total).
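
(For what it's worth, the heap is just whatever Tomcat was started with; I
believe it comes from CATALINA_OPTS in something like bin/setenv.sh,
presumably roughly

    # assumed setting, given the ~7GB total the admin page reports
    CATALINA_OPTS="$CATALINA_OPTS -Xms512m -Xmx7g"

so I can raise it if you think memory is a factor.)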


Re: Background merge errors with Solr 4.4.0 on Optimize call

2013-10-28 Thread Erick Erickson
For Tomcat, the Solr log output often ends up in catalina.out
by default, so the output might be there. You can
configure Solr to send the logs most anywhere you
please, but without some specific setup
on your part the log output just goes to the default
for the servlet.
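
If you want it in its own file, a minimal log4j.properties sketch (assuming
the stock SLF4J-over-log4j binding that ships with Solr 4.4, and a path you'd
adjust) is something like:

    # send everything at INFO and above to a rolling file
    log4j.rootLogger=INFO, file
    log4j.appender.file=org.apache.log4j.RollingFileAppender
    log4j.appender.file.File=/var/log/solr/solr.log
    log4j.appender.file.MaxFileSize=10MB
    log4j.appender.file.MaxBackupIndex=9
    log4j.appender.file.layout=org.apache.log4j.PatternLayout
    log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %-5p (%t) [%c] %m%n

dropped into the directory Tomcat reads it from (you mentioned ~/tomcat/lib).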

I took a quick glance at the code but since the merges
are happening in the background, there's not much
context for where that error is thrown.

How much memory is there for the JVM? I'm grasping
at straws a bit...

Erick


Background merge errors with Solr 4.4.0 on Optimize call

2013-10-27 Thread Matthew Shapiro
I am working on implementing Solr as the search backend for our web
system.  So far things have been going well, but today I made some schema
changes and now things have broken.

I updated the schema.xml file and reloaded the core (via the admin
interface).  No errors were reported in the logs.
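
(The reload was done through the Core Admin screen, which as far as I
understand is equivalent to hitting the CoreAdmin API directly, roughly:

    curl 'http://localhost:8983/solr/admin/cores?action=RELOAD&core=collection1'

with my actual host and core name in place of the placeholders.)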

I then pushed 100 records to be indexed.  A call to Commit afterwards
seemed fine; however, my next call to Optimize caused the following errors:

java.io.IOException: background merge hit exception:
_2n(4.4):C4263/154 _30(4.4):C134 _32(4.4):C10 _31(4.4):C10 into _37
[maxNumSegments=1]

null:java.io.IOException: background merge hit exception:
_2n(4.4):C4263/154 _30(4.4):C134 _32(4.4):C10 _31(4.4):C10 into _37
[maxNumSegments=1]
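
For context, the commit and optimize were plain update-handler calls, roughly
equivalent to the following (host and core name are placeholders for my real
ones):

    curl 'http://localhost:8983/solr/collection1/update?commit=true'
    curl 'http://localhost:8983/solr/collection1/update?optimize=true&maxSegments=1'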


Unfortunately, googling for "background merge hit exception" turned up
two things: a corrupt index or not enough free space.  The host
machine that's hosting Solr has 227 of 229GB free (according to df
-h), so that's not it.


I then ran CheckIndex on the index, and got the following results:
http://apaste.info/gmGU
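
(I invoked it the usual way as far as I know, pointing the Lucene 4.4 jar at
the index directory; the jar and index paths below are placeholders for mine:

    java -cp lucene-core-4.4.0.jar -ea:org.apache.lucene... \
      org.apache.lucene.index.CheckIndex /path/to/solr/collection1/data/index

without -fix, since I didn't want it dropping documents from bad segments.)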


As someone who is new to Solr and Lucene, as far as I can tell this
means my index is fine, so I am at a loss. I'm fairly sure
that I could delete my data directory and rebuild it, but I am
more interested in finding out why it is having issues, what the
best way to fix it is, and how to prevent it from
happening when this goes into production.


Does anyone have any advice that may help?


As an aside, I do not have a stacktrace for you because the Solr admin
page isn't giving me one.  I tried looking in the logs directory under my
Solr install, but it does not contain any logs.  I opened up my
~/tomcat/lib/log4j.properties file and saw http://apaste.info/0rTL,
which didn't really help me find log files.  Doing a 'find . | grep
solr.log' didn't really help either.  Any help finding the log files
(which may help find the actual cause of this) would also be
appreciated.