[jira] Commented: (SOLR-1466) Fix File descriptor leak in SnapPuller

2009-09-26 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759992#action_12759992
 ] 

Shalin Shekhar Mangar commented on SOLR-1466:
-

Yep, there is a leak alright. Commit when ready.

> Fix File descriptor leak in SnapPuller
> --
>
> Key: SOLR-1466
> URL: https://issues.apache.org/jira/browse/SOLR-1466
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Reporter: Mark Miller
> Fix For: 1.4
>
> Attachments: SOLR-1466.patch, SOLR-1466.patch
>
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: DebugLogger oddity

2009-09-26 Thread Shalin Shekhar Mangar
On Sun, Sep 27, 2009 at 9:41 AM, Mark Miller  wrote:

> DebugLogger has the following code:
>
>public DebugInfo(String name, int type, DebugInfo parent) {
>  this.name = name;
>  this.type = type;
>  this.parent = parent;
>  lst = new NamedList();
>  if (parent != null) {
>String displayName = null;
>if (type == SolrWriter.START_ENTITY) {
>  displayName = "entity:" + name;
>} else if (type == SolrWriter.TRANSFORMED_ROW
>|| type == SolrWriter.TRANSFORMER_EXCEPTION) {
>  displayName = "transformer:" + name;
>} else if (type == SolrWriter.START_DOC) {
>  name = displayName = "document#" + SolrWriter.getDocCount();
>}
>parent.lst.add(displayName, lst);
>  }
>}
>
> That last assignment to name does nothing, so I'm guessing it's meant to
> be this.name, or else it can be removed.
>
>
Yeah, that should've been this.name. The name field is not used anywhere other
than the constructor, so it could be made a local variable. However, I guess it
is there for future use.
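
A minimal sketch of the suggested correction (simplified names, not the actual
Solr classes): assigning to the parameter after this.name has already been copied
is dead code, while assigning to the field actually sticks.

class DebugInfoSketch {
  private String name;

  DebugInfoSketch(String name, boolean startDoc, int docCount) {
    this.name = name;
    if (startDoc) {
      String displayName = "document#" + docCount;
      this.name = displayName; // the fix suggested above: target the field
      // name = displayName;   // the original line: a no-op, nothing reads the parameter again
    }
  }

  String name() {
    return name;
  }
}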

-- 
Regards,
Shalin Shekhar Mangar.


Re: svn commit: r818816 - /lucene/solr/trunk/src/java/org/apache/solr/search/SolrFieldCacheMBean.java

2009-09-26 Thread Shalin Shekhar Mangar
On Sun, Sep 27, 2009 at 1:50 AM, Chris Hostetter
wrote:

>
> FWIW: adding the whitespace makes these key names inconsistent with
> every other stat name in solr ... none of them use whitespace.
>
> In cases where the lack of spaces makes things hard to read, other mbeans
> use "_" (i.e. "cumulative_deletesById" in DirectUpdateHandler2). I used that
> same convention in FieldCacheMBean (see "entires_count") but it didn't
> really seem like a good idea for "entry_#0".
>
> I'm not saying the whitespace is bad ... just pointing out that it's
> inconsistent, and there was a reason i didn't have it in before.
>
>
Whatever makes you happy, Hoss :)

I have reverted that change.

-- 
Regards,
Shalin Shekhar Mangar.


[jira] Commented: (SOLR-1319) Upgrade custom Solr Highlighter classes to new Lucene Highlighter API

2009-09-26 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759988#action_12759988
 ] 

Mark Miller commented on SOLR-1319:
---

bq. Given there is no patch, should this be pushed to 1.5?

Nope, the actual work was part of another issue - the only reason this is still
open is that there was a back-compat break (due to a back-compat break in
Lucene's Highlighter, which doesn't promise back compat). So this stays open as a
marker to deal with the break in Solr somehow.

Our options are fairly limited - basically, I think it means adding a notice in
CHANGES about the whole affair. I'll try and work something up.

> Upgrade custom Solr Highlighter classes to new Lucene Highlighter API
> -
>
> Key: SOLR-1319
> URL: https://issues.apache.org/jira/browse/SOLR-1319
> Project: Solr
>  Issue Type: Task
>  Components: highlighter
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 1.4
>
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1466) Fix File descriptor leak in SnapPuller

2009-09-26 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-1466:
--

Attachment: SOLR-1466.patch

> Fix File descriptor leak in SnapPuller
> --
>
> Key: SOLR-1466
> URL: https://issues.apache.org/jira/browse/SOLR-1466
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Reporter: Mark Miller
> Fix For: 1.4
>
> Attachments: SOLR-1466.patch, SOLR-1466.patch
>
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Reopened: (SOLR-1466) Fix File descriptor leak in SnapPuller

2009-09-26 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reopened SOLR-1466:
---


Flippity flop - there is still a bug here; it is a descriptor leak. After I
open and close this issue a few more times, I'll fix it.
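
For background only, here is a generic sketch of the close-in-finally pattern this
kind of descriptor leak usually calls for (illustrative code, not the SnapPuller
change itself):

{code}
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

// Generic pattern only; the actual SnapPuller code and the eventual patch differ.
class CopySketch {
  static long countBytes(String path) throws IOException {
    InputStream in = new FileInputStream(path); // holds a file descriptor
    try {
      long total = 0;
      byte[] buf = new byte[8192];
      int read;
      while ((read = in.read(buf)) != -1) {
        total += read;
      }
      return total;
    } finally {
      in.close(); // runs even if read() throws, so the descriptor is never leaked
    }
  }
}
{code}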

> Fix File descriptor leak in SnapPuller
> --
>
> Key: SOLR-1466
> URL: https://issues.apache.org/jira/browse/SOLR-1466
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Reporter: Mark Miller
> Fix For: 1.4
>
> Attachments: SOLR-1466.patch
>
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (SOLR-1466) Fix File descriptor leak in SnapPuller

2009-09-26 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-1466.
---

Resolution: Invalid

My bad - cleanup gets it, and you wouldn't want to close it there anyway.

> Fix File descriptor leak in SnapPuller
> --
>
> Key: SOLR-1466
> URL: https://issues.apache.org/jira/browse/SOLR-1466
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Reporter: Mark Miller
> Fix For: 1.4
>
> Attachments: SOLR-1466.patch
>
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-1466) Fix File descriptor leak in SnapPuller

2009-09-26 Thread Mark Miller (JIRA)
Fix File descriptor leak in SnapPuller
--

 Key: SOLR-1466
 URL: https://issues.apache.org/jira/browse/SOLR-1466
 Project: Solr
  Issue Type: Bug
  Components: replication (java)
Reporter: Mark Miller
 Fix For: 1.4
 Attachments: SOLR-1466.patch



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1466) Fix File descriptor leak in SnapPuller

2009-09-26 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-1466:
--

Attachment: SOLR-1466.patch

> Fix File descriptor leak in SnapPuller
> --
>
> Key: SOLR-1466
> URL: https://issues.apache.org/jira/browse/SOLR-1466
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Reporter: Mark Miller
> Fix For: 1.4
>
> Attachments: SOLR-1466.patch
>
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



DebugLogger oddity

2009-09-26 Thread Mark Miller
DebugLogger has the following code:

public DebugInfo(String name, int type, DebugInfo parent) {
  this.name = name;
  this.type = type;
  this.parent = parent;
  lst = new NamedList();
  if (parent != null) {
String displayName = null;
if (type == SolrWriter.START_ENTITY) {
  displayName = "entity:" + name;
} else if (type == SolrWriter.TRANSFORMED_ROW
|| type == SolrWriter.TRANSFORMER_EXCEPTION) {
  displayName = "transformer:" + name;
} else if (type == SolrWriter.START_DOC) {
  name = displayName = "document#" + SolrWriter.getDocCount();
}
parent.lst.add(displayName, lst);
  }
}

That last assignment to name does nothing, so I'm guessing it's meant to
be this.name, or else it can be removed.

-- 
- Mark

http://www.lucidimagination.com





[jira] Created: (SOLR-1465) XPathRecordReader does a bunch of String concat using +

2009-09-26 Thread Mark Miller (JIRA)
XPathRecordReader does a bunch of String concat using +
---

 Key: SOLR-1465
 URL: https://issues.apache.org/jira/browse/SOLR-1465
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Reporter: Mark Miller
Priority: Minor


It should use a StringBuilder - it is very inefficient to keep creating new strings
that way.
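
A generic illustration of the point (made-up helper methods, not the
XPathRecordReader internals): concatenating with + in a loop creates a new String
on every pass, while a StringBuilder appends in place and materializes a single
String at the end.

{code}
// Illustrative only; XPathRecordReader itself is not reproduced here.
class ConcatSketch {
  // Each iteration allocates a new String and re-copies all previous characters.
  static String slowJoin(String[] parts) {
    String xpath = "";
    for (String p : parts) {
      xpath = xpath + "/" + p;
    }
    return xpath;
  }

  // One buffer, appended in place.
  static String fastJoin(String[] parts) {
    StringBuilder xpath = new StringBuilder();
    for (String p : parts) {
      xpath.append('/').append(p);
    }
    return xpath.toString();
  }
}
{code}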

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: [jira] Updated: (SOLR-1449) solrconfig.xml syntax to add classpath elements from outside of instanceDir

2009-09-26 Thread Mark Miller
Chris Hostetter wrote:
> : I'm too lazy to look at the patch. Please make sure that 'ant clean'
> : causes the contrib modules to remove the files they copy over to
> : example/
>
> I think you're misunderstanding the point of the issue .. in the
> current trunk the jars are copied to example/solr/lib ... the patch
> changes things so that we *stop* copying jars from the contribs into
> example and just use them directly out of dist/
>
>
>
>
> -Hoss
>
>   
Gosh, I know it. If you're too lazy to look at the patch or read the
issue, please be too lazy to comment.

-- 
- Mark

http://www.lucidimagination.com





[jira] Commented: (SOLR-1449) solrconfig.xml syntax to add classpath elements from outside of instanceDir

2009-09-26 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759971#action_12759971
 ] 

Mark Miller commented on SOLR-1449:
---

+1. I don't find the pluses overwhelming, but I find pluses. On the other hand, 
after reading all the comments, I don't see a single good reason against.

It's a no-brainer to me. I think I might have some time to play with it
tomorrow. Will report back if I do.

> solrconfig.xml syntax to add classpath elements from outside of instanceDir
> ---
>
> Key: SOLR-1449
> URL: https://issues.apache.org/jira/browse/SOLR-1449
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
> Fix For: 1.4
>
> Attachments: SOLR-1449.patch, SOLR-1449.patch, SOLR-1449.patch
>
>
> the idea has been discussed numerous times that it would be nice if there was 
> a way to configure a core to load plugins from specific jars (or "classes" 
> style directories) by path  w/o needing to copy them to the "./lib" dir in 
> the instanceDir.
> The current workaround is "symlinks" but that doesn't really help the 
> situation of the Solr Release artifacts, where we wind up making numerous 
> copies of jars to support multiple example directories (you can't have 
> reliable symlinks in zip files)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: svn commit: r819234 - in /lucene/solr/trunk/src/java/org/apache/solr: handler/component/QueryElevationComponent.java schema/RandomSortField.java

2009-09-26 Thread Yonik Seeley
Ah, yet another reason to use @Override

-Yonik
http://www.lucidimagination.com



On Sat, Sep 26, 2009 at 7:46 PM,   wrote:
> Author: markrmiller
> Date: Sat Sep 26 23:46:44 2009
> New Revision: 819234
>
> URL: http://svn.apache.org/viewvc?rev=819234&view=rev
> Log:
> sortType was removed from FieldComparator
>
> Modified:
>    
> lucene/solr/trunk/src/java/org/apache/solr/handler/component/QueryElevationComponent.java
>    lucene/solr/trunk/src/java/org/apache/solr/schema/RandomSortField.java
>
> Modified: 
> lucene/solr/trunk/src/java/org/apache/solr/handler/component/QueryElevationComponent.java
> URL: 
> http://svn.apache.org/viewvc/lucene/solr/trunk/src/java/org/apache/solr/handler/component/QueryElevationComponent.java?rev=819234&r1=819233&r2=819234&view=diff
> ==
> --- 
> lucene/solr/trunk/src/java/org/apache/solr/handler/component/QueryElevationComponent.java
>  (original)
> +++ 
> lucene/solr/trunk/src/java/org/apache/solr/handler/component/QueryElevationComponent.java
>  Sat Sep 26 23:46:44 2009
> @@ -484,10 +484,6 @@
>         idIndex = FieldCache.DEFAULT.getStringIndex(reader, fieldname);
>       }
>
> -      public int sortType() {
> -        return SortField.CUSTOM;
> -      }
> -
>       public Comparable value(int slot) {
>         return values[slot];
>       }
>
> Modified: 
> lucene/solr/trunk/src/java/org/apache/solr/schema/RandomSortField.java
> URL: 
> http://svn.apache.org/viewvc/lucene/solr/trunk/src/java/org/apache/solr/schema/RandomSortField.java?rev=819234&r1=819233&r2=819234&view=diff
> ==
> --- lucene/solr/trunk/src/java/org/apache/solr/schema/RandomSortField.java 
> (original)
> +++ lucene/solr/trunk/src/java/org/apache/solr/schema/RandomSortField.java 
> Sat Sep 26 23:46:44 2009
> @@ -134,10 +134,6 @@
>           seed = getSeed(fieldname, reader);
>         }
>
> -        public int sortType() {
> -          return SortField.CUSTOM;
> -        }
> -
>         public Comparable value(int slot) {
>           return values[slot];
>         }
>
>
>
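
To spell out Yonik's point with a generic sketch (made-up class names, not the Solr
code): once a supertype method is removed, an @Override annotation turns the
leftover implementation into a compile error instead of silently dead code.

abstract class FieldComparatorSketch {   // stand-in for Lucene's FieldComparator
  // a sortType() method used to live here and was later removed upstream
  abstract int compare(int slotA, int slotB);
}

class CustomComparatorSketch extends FieldComparatorSketch {
  @Override
  int compare(int slotA, int slotB) {
    return 0;
  }

  // Uncommenting @Override here would make this fail to compile now that the
  // supertype no longer declares sortType() -- which is how the stale
  // implementations in QueryElevationComponent and RandomSortField would have
  // been flagged; without it the method just sits there unused.
  // @Override
  int sortType() {
    return 0;
  }
}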


Re: [jira] Updated: (SOLR-1449) solrconfig.xml syntax to add classpath elements from outside of instanceDir

2009-09-26 Thread Chris Hostetter

: I'm too lazy to look at the patch. Please make sure that 'ant clean'
: causes the contrib modules to remove the files they copy over to
: example/

I think you're misunderstanding the point of the issue .. in the
current trunk the jars are copied to example/solr/lib ... the patch
changes things so that we *stop* copying jars from the contribs into
example and just use them directly out of dist/




-Hoss



[jira] Commented: (SOLR-1449) solrconfig.xml syntax to add classpath elements from outside of instanceDir

2009-09-26 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759969#action_12759969
 ] 

Hoss Man commented on SOLR-1449:


bq.   Is this an issue which users have reported? in my experience with Solr 
mailing list, I am yet to see a request where users wish to add arbitrary 
directories to classpath

I don't really feel like searching through the archives at the moment, but it 
has come up -- i don't know if anyone has explicitly requested the ability to 
add arbitrary directories, but there have certainly been discussions about the 
annoyance of needing to copy and/or symlink jars.

If nothing else: I'm a user, and i'm requesting it.

bq. How important is this feature to be in 1.4?

As i said in my first comment, i don't know.

It would be nice to have, but i certainly don't think it's a blocker ... even
with the testing i've done, and even with the new tests i added to the patch,
and even though the behavior for existing ./lib users hasn't been changed, i
still wouldn't consider committing unless other people tried it out and gave it a
thumbs up.

bq. Users in general have a lot of problems with classloading. Even with the 
current support with one lib directory I have seen so many users having trouble 
with classloading . This can only add to that confusion

I don't really see how this will make confusion about classloading any worse.
Most problems i can think of where people have classloader difficulty in solr
stem from not understanding where they are supposed to copy their jars -- they
tend to get confused about which "lib" directory, particularly with example/lib
containing the jetty jars.  Allowing people to put the jar anywhere they want
and point at it by name in the config file should _reduce_ confusion.

Besides which: they're still free to create a ./lib dir and copy jars -- that 
still works, no configuration needed.

I agree that the original patch (with the order in the config mattering) would
have been confusing for people, but with the more recent patches where all jars
are in the same classloader i can't imagine any situation where this will cause
more problems/confusion than forcing people to make a lib dir.

bq. I am sure most of the users will be happy with the minimal solr. The rest 
of them will happily download the whole thing however big it is.

I *REALLY* don't want to argue the merits of this issue as if its purpose was
to decrease the size of the distribution -- it was _not_ the purpose, it's just
a possible additional benefit -- but i *HAVE* to disagree with you on this ...
most users may only _need_ a minimal solr, but we should not passively
discourage people from finding features that can make them happier by making
those features more complex to get (via an alternate larger download).

Adding this feature, and using it to reduce the size of the _current_ examples,
may not reduce the size of the _current_ distribution enough to be worth
worrying about; that's fine.  But i'm trying to think longer term: there have
been multiple threads discussing the goal of adding *many* more example
directories demonstrating cool use cases of solr via interesting permutations
of features (DIH, clustering, solr cell, velocity, etc...). This patch (or
something like it) is going to be necessary before we can do anything like that.


> solrconfig.xml syntax to add classpath elements from outside of instanceDir
> ---
>
> Key: SOLR-1449
> URL: https://issues.apache.org/jira/browse/SOLR-1449
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
> Fix For: 1.4
>
> Attachments: SOLR-1449.patch, SOLR-1449.patch, SOLR-1449.patch
>
>
> the idea has been discussed numerous times that it would be nice if there was 
> a way to configure a core to load plugins from specific jars (or "classes" 
> style directories) by path  w/o needing to copy them to the "./lib" dir in 
> the instanceDir.
> The current workaround is "symlinks" but that doesn't really help the 
> situation of the Solr Release artifacts, where we wind up making numerous 
> copies of jars to support multiple example directories (you can't have 
> reliable symlinks in zip files)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: [jira] Updated: (SOLR-1449) solrconfig.xml syntax to add classpath elements from outside of instanceDir

2009-09-26 Thread Lance Norskog
I'm too lazy to look at the patch. Please make sure that 'ant clean'
causes the contrib modules to remove the files they copy over to
example/

On Sat, Sep 26, 2009 at 4:41 PM, Hoss Man (JIRA)  wrote:
>
>     [ 
> https://issues.apache.org/jira/browse/SOLR-1449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
>  ]
>
> Hoss Man updated SOLR-1449:
> ---
>
>    Attachment: SOLR-1449.patch
>
> Added tests for all the permutations of the solrconfig.xml syntax.
>
> Also fixed a missing dependency in the build.xml that i didn't notice before
> because i hadn't tried "ant clean test" (previously test depended on a
> special target for copying the solr cell libs so the SolrExample*Tests would
> find them; now it just depends on dist-contrib to ensure all the contrib
> jars/dependencies are in the expected place)
>
>> solrconfig.xml syntax to add classpath elements from outside of instanceDir
>> ---
>>
>>                 Key: SOLR-1449
>>                 URL: https://issues.apache.org/jira/browse/SOLR-1449
>>             Project: Solr
>>          Issue Type: Improvement
>>            Reporter: Hoss Man
>>             Fix For: 1.4
>>
>>         Attachments: SOLR-1449.patch, SOLR-1449.patch, SOLR-1449.patch
>>
>>
>> the idea has been discussed numerous times that it would be nice if there 
>> was a way to configure a core to load plugins from specific jars (or 
>> "classes" style directories) by path  w/o needing to copy them to the 
>> "./lib" dir in the instanceDir.
>> The current workaround is "symlinks" but that doesn't really help the 
>> situation of the Solr Release artifacts, where we wind up making numerous 
>> copies of jars to support multiple example directories (you can't have 
>> reliable symlinks in zip files)
>
> --
> This message is automatically generated by JIRA.
> -
> You can reply to this email to add a comment to the issue online.
>
>



-- 
Lance Norskog
goks...@gmail.com


[jira] Updated: (SOLR-1221) Change Solr Highlighting to use the SpanScorer with MultiTerm expansion by default

2009-09-26 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-1221:
--

Attachment: SOLR-1221.patch

Fixes a compile issue in the last patch - the param needs to be "true" rather than true.
Also adds a CHANGES entry.

Will commit very soon.

> Change Solr Highlighting to use the SpanScorer with MultiTerm expansion by 
> default
> --
>
> Key: SOLR-1221
> URL: https://issues.apache.org/jira/browse/SOLR-1221
> Project: Solr
>  Issue Type: Improvement
>  Components: highlighter
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 1.4
>
> Attachments: SOLR-1221.patch, SOLR-1221.patch, SOLR-1221.patch
>
>
> To improve the out of the box experience of Solr 1.4, I really think we 
> should make this change. You will still be able to turn both off.
> Comments?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1449) solrconfig.xml syntax to add classpath elements from outside of instanceDir

2009-09-26 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-1449:
---

Attachment: SOLR-1449.patch

Added tests for all the permutations of the solrconfig.xml syntax.

Also fixed a missing dependency in the build.xml that i didn't notice before
because i hadn't tried "ant clean test" (previously test depended on a special
target for copying the solr cell libs so the SolrExample*Tests would find them;
now it just depends on dist-contrib to ensure all the contrib jars/dependencies
are in the expected place)

> solrconfig.xml syntax to add classpath elements from outside of instanceDir
> ---
>
> Key: SOLR-1449
> URL: https://issues.apache.org/jira/browse/SOLR-1449
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
> Fix For: 1.4
>
> Attachments: SOLR-1449.patch, SOLR-1449.patch, SOLR-1449.patch
>
>
> the idea has been discussed numerous times that it would be nice if there was 
> a way to configure a core to load plugins from specific jars (or "classes" 
> style directories) by path  w/o needing to copy them to the "./lib" dir in 
> the instanceDir.
> The current workaround is "symlinks" but that doesn't really help the 
> situation of the Solr Release artifacts, where we wind up making numerous 
> copies of jars to support multiple example directories (you can't have 
> reliable symlinks in zip files)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1458) Java Replication error: NullPointerException SEVERE: SnapPull failed on 2009-09-22 nightly

2009-09-26 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759957#action_12759957
 ] 

Yonik Seeley commented on SOLR-1458:


Update: OK... my failures in DirectUpdateHandlerTest turned out to be 
testExpungeDeletes().  I've committed simpler test code previously attached to 
SOLR-1275.
I was initially thrown off by seeing exceptions in building the spell check 
index... but the actual test failure was caused by testExpungeDeletes.

So - is there really a bug lurking in the spellchecker component? I'm at a loss
as to how the old testExpungeDeletes code could trigger these exceptions (or
whether they did/do).  It's also possible that these spellcheck exceptions
spuriously happened before but didn't cause the test to fail.


> Java Replication error: NullPointerException SEVERE: SnapPull failed on 
> 2009-09-22 nightly
> --
>
> Key: SOLR-1458
> URL: https://issues.apache.org/jira/browse/SOLR-1458
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 1.4
> Environment: CentOS x64
> 8GB RAM
> Tomcat, running with 7G max memory; memory usage is <2GB, so it's not the 
> problem
> Host a: master
> Host b: slave
> Multiple single core Solr instances, using JNDI.
> Java replication
>Reporter: Artem Russakovskii
>Assignee: Noble Paul
> Fix For: 1.4
>
> Attachments: SOLR-1458.patch, SOLR-1458.patch, SOLR-1458.patch, 
> SOLR-1458.patch, SOLR-1458.patch, SolrDeletionPolicy.patch, 
> SolrDeletionPolicy.patch
>
>
> After finally figuring out the new Java based replication, we have started 
> both the slave and the master and issued optimize to all master Solr 
> instances. This triggered some replication to go through just fine, but it 
> looks like some of it is failing.
> Here's what I'm getting in the slave logs, repeatedly for each shard:
> {code} 
> SEVERE: SnapPull failed 
> java.lang.NullPointerException
> at 
> org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:271)
> at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:258)
> at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code} 
> If I issue an optimize again on the master to one of the shards, it then 
> triggers a replication and replicates OK. I have a feeling that these 
> SnapPull failures appear later on but right now I don't have enough to form a 
> pattern.
> Here's replication.properties on one of the failed slave instances.
> {code}
> cat data/replication.properties 
> #Replication details
> #Wed Sep 23 19:35:30 PDT 2009
> replicationFailedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
> previousCycleTimeInSeconds=0
> timesFailed=113
> indexReplicatedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
> indexReplicatedAt=1253759730020
> replicationFailedAt=1253759730020
> lastCycleBytesDownloaded=0
> timesIndexReplicated=113
> {code}
> and another
> {code}
> cat data/replication.properties 
> #Replication details
> #Wed Sep 23 18:42:01 PDT 2009
> replicationFailedAtList=1253756490034,1253756460169
> previousCycleTimeInSeconds=1
> timesFailed=2
> indexReplicatedAtList=1253756521284,1253756490034,1253756460169
> indexReplicatedAt=1253756521284
> replicationFailedAt=1253756490034
> lastCycleBytesDownloaded=22932293
> timesIndexReplicated=3
> {code}
> Some relevant configs:
> In solrconfig.xml:
> {code}
> 
>   
> 
> ${enable.master:false}
> optimize
> optimize
> 00:00:20
> 
> 
> ${enable.slave:false}
> 
> ${master.url}
> 
> 00:00:30
> 
>   
> {code}
> The s

[jira] Commented: (SOLR-1458) Java Replication error: NullPointerException SEVERE: SnapPull failed on 2009-09-22 nightly

2009-09-26 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759952#action_12759952
 ] 

Yonik Seeley commented on SOLR-1458:


Testing Update: I rebooted my ubuntu box, did a clean solr checkout, re-applied 
the patch, and got a much higher rate of test passes.  Looks like it was 
Gremlins.

Last night I set it up to build continuously in a loop - and got about a 25%
failure rate.  Problem is, I didn't have it copy out failed tests for inspection,
so I don't know why it failed, and it may be as simple as a loss of internet
connectivity or DNS service, or apache going down, etc. (yes, we have tests
that rely on external networks - that's a pain).

I'm re-running tests now, with a stop on a test failure so I can figure out if 
anything is actually related to this proposed patch!

> Java Replication error: NullPointerException SEVERE: SnapPull failed on 
> 2009-09-22 nightly
> --
>
> Key: SOLR-1458
> URL: https://issues.apache.org/jira/browse/SOLR-1458
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 1.4
> Environment: CentOS x64
> 8GB RAM
> Tomcat, running with 7G max memory; memory usage is <2GB, so it's not the 
> problem
> Host a: master
> Host b: slave
> Multiple single core Solr instances, using JNDI.
> Java replication
>Reporter: Artem Russakovskii
>Assignee: Noble Paul
> Fix For: 1.4
>
> Attachments: SOLR-1458.patch, SOLR-1458.patch, SOLR-1458.patch, 
> SOLR-1458.patch, SOLR-1458.patch, SolrDeletionPolicy.patch, 
> SolrDeletionPolicy.patch
>
>
> After finally figuring out the new Java based replication, we have started 
> both the slave and the master and issued optimize to all master Solr 
> instances. This triggered some replication to go through just fine, but it 
> looks like some of it is failing.
> Here's what I'm getting in the slave logs, repeatedly for each shard:
> {code} 
> SEVERE: SnapPull failed 
> java.lang.NullPointerException
> at 
> org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:271)
> at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:258)
> at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code} 
> If I issue an optimize again on the master to one of the shards, it then 
> triggers a replication and replicates OK. I have a feeling that these 
> SnapPull failures appear later on but right now I don't have enough to form a 
> pattern.
> Here's replication.properties on one of the failed slave instances.
> {code}
> cat data/replication.properties 
> #Replication details
> #Wed Sep 23 19:35:30 PDT 2009
> replicationFailedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
> previousCycleTimeInSeconds=0
> timesFailed=113
> indexReplicatedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
> indexReplicatedAt=1253759730020
> replicationFailedAt=1253759730020
> lastCycleBytesDownloaded=0
> timesIndexReplicated=113
> {code}
> and another
> {code}
> cat data/replication.properties 
> #Replication details
> #Wed Sep 23 18:42:01 PDT 2009
> replicationFailedAtList=1253756490034,1253756460169
> previousCycleTimeInSeconds=1
> timesFailed=2
> indexReplicatedAtList=1253756521284,1253756490034,1253756460169
> indexReplicatedAt=1253756521284
> replicationFailedAt=1253756490034
> lastCycleBytesDownloaded=22932293
> timesIndexReplicated=3
> {code}
> Some relevant configs:
> In solrconfig.xml:
> {code}
> 
>   
> 
> ${enable.master:false}
> optimize
> optimize
> 00:00:20
> 
> 
> ${enable.slave:false}
> 
> ${master.url}

Re: svn commit: r818816 - /lucene/solr/trunk/src/java/org/apache/solr/search/SolrFieldCacheMBean.java

2009-09-26 Thread Chris Hostetter

: Fixed typos and added a little whitespace in the key name

FWIW: adding the whitespace makes these key names inconsistent with
every other stat name in solr ... none of them use whitespace.

In cases where the lack of spaces makes things hard to read, other mbeans
use "_" (i.e. "cumulative_deletesById" in DirectUpdateHandler2). I used that
same convention in FieldCacheMBean (see "entires_count") but it didn't
really seem like a good idea for "entry_#0".

I'm not saying the whitespace is bad ... just pointing out that it's 
inconsistent, and there was a reason i didn't have it in before.

: Modified:
: lucene/solr/trunk/src/java/org/apache/solr/search/SolrFieldCacheMBean.java
: 
: Modified: 
lucene/solr/trunk/src/java/org/apache/solr/search/SolrFieldCacheMBean.java
: URL: 
http://svn.apache.org/viewvc/lucene/solr/trunk/src/java/org/apache/solr/search/SolrFieldCacheMBean.java?rev=818816&r1=818815&r2=818816&view=diff
: ==
: --- 
lucene/solr/trunk/src/java/org/apache/solr/search/SolrFieldCacheMBean.java 
(original)
: +++ 
lucene/solr/trunk/src/java/org/apache/solr/search/SolrFieldCacheMBean.java Fri 
Sep 25 11:00:15 2009
: @@ -42,8 +42,8 @@
:public String getName() { return this.getClass().getName(); }
:public String getVersion() { return SolrCore.version; }
:public String getDescription() {
: -return "Provides introspection of the Lucene FiledCache, "
: -  +"this is **NOT** a cache that is manged by Solr.";
: +return "Provides introspection of the Lucene FieldCache, "
: +  +"this is **NOT** a cache that is managed by Solr.";
:}
:public Category getCategory() { return Category.CACHE; } 
:public String getSourceId() { 
: @@ -62,14 +62,14 @@
:  for (int i = 0; i < entries.length; i++) {
:CacheEntry e = entries[i];
:e.estimateSize();
: -  stats.add("entry#" + i, e.toString());
: +  stats.add("entry #" + i, e.toString());
:  }
:  
:  Insanity[] insanity = checker.checkSanity(entries);
:  
:  stats.add("insanity_count", insanity.length);
:  for (int i = 0; i < insanity.length; i++) {
: -  stats.add("insanity#" + i, insanity[i].toString());
: +  stats.add("insanity #" + i, insanity[i].toString());
:  }
:  return stats;
:}
: 
: 



-Hoss



[jira] Commented: (SOLR-1374) When a test fails, display the test file in the console via ant junit

2009-09-26 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759943#action_12759943
 ] 

Hoss Man commented on SOLR-1374:


bq. Related - what's the reason that we use XML formatting for junit output?

hudson, as well as "ant test-reports" 

it would be pretty trivial to replace the formatter name ("xml") with a property
so you could change it at run time ... or we could.  We can also declare as
many formatters as you want in the junit task, each with different options,
and then mark them as "if" or "unless" certain properties (the "brief"
formatter is already declared that way)

You just have to pick what you want.

bq. I was thinking the list of test files and their paths

It sounds like you just want to see the output of... 
{code}
grep -rL 'errors="0" failures="0"' build/test-results
{code}
...but i'm not really clear on what you'd want to do with that info, so maybe 
i'm missing something.

> When a test fails, display the test file in the console via ant junit
> -
>
> Key: SOLR-1374
> URL: https://issues.apache.org/jira/browse/SOLR-1374
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 1.4
>Reporter: Jason Rutherglen
>Priority: Trivial
> Fix For: 1.5
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> When a test fails, it would be great if the junit test output file were 
> displayed in the terminal.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1448) Addition of weblogic.xml required for solr to run under weblogic 10.3

2009-09-26 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759939#action_12759939
 ] 

Hoss Man commented on SOLR-1448:


Grant: I don't understand your objection to this addition.  It makes solr work 
in weblogic by default (currently it will only work if people manually hack the 
war) and it doesn't have any impact on any other servlet containers.

If we're going to do things like SOLR-1091 to work around odd behavior in 
specific containers, why wouldn't we do this as well?


> Addition of weblogic.xml required for solr to run under weblogic 10.3
> -
>
> Key: SOLR-1448
> URL: https://issues.apache.org/jira/browse/SOLR-1448
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 1.4
> Environment: Weblogic 10.3
>Reporter: Ilan Rabinovitch
>Priority: Minor
> Fix For: 1.4
>
> Attachments: weblogic.xml
>
>
> Weblogic appears to have filters enabled even on FORWARD, which is listed as 
> something that will not function properly in the Solr documentation. As a 
> result, the administrative application generates a StackOverflow when 
> accessed. 
> This can be resolved by adding the attached weblogic.xml file to solr.  No 
> other changes are required.
> <weblogic-web-app
> xmlns="http://www.bea.com/ns/weblogic/90"
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> xsi:schemaLocation="http://www.bea.com/ns/weblogic/90
> http://www.bea.com/ns/weblogic/90/weblogic-web-app.xsd">
> <container-descriptor>
> <filter-dispatched-requests-enabled>false</filter-dispatched-requests-enabled>
> </container-descriptor>
> </weblogic-web-app>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1458) Java Replication error: NullPointerException SEVERE: SnapPull failed on 2009-09-22 nightly

2009-09-26 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759937#action_12759937
 ] 

Yonik Seeley commented on SOLR-1458:


bq. Yonik, shouldn't ReplicationHandler be the one to reserve commit points? 

It does... that part doesn't change, but the replication handler requests
short-term reservations based on use.
The existing SolrDeletionPolicy already had the functionality of always keeping 
an optimized commit point around, and so this actually isn't new.
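
For readers following along, here is a rough sketch of that idea (a toy deletion
policy written against the Lucene 2.9-era IndexDeletionPolicy/IndexCommit API; it
is not SolrDeletionPolicy, and the reservation machinery in ReplicationHandler is
not shown):

{code}
import java.util.List;

import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.IndexDeletionPolicy;

// Toy policy: always spare the newest commit and the newest optimized commit.
// Illustration only -- SolrDeletionPolicy layers reservations and max-age/max-number
// limits on top of this idea.
class KeepLastOptimizedPolicySketch implements IndexDeletionPolicy {

  public void onInit(List commits) {
    onCommit(commits);
  }

  public void onCommit(List commits) {
    IndexCommit lastOptimized = null;
    for (int i = 0; i < commits.size(); i++) {   // commits are ordered oldest first
      IndexCommit commit = (IndexCommit) commits.get(i);
      if (commit.isOptimized()) {
        lastOptimized = commit;
      }
    }
    IndexCommit newest = (IndexCommit) commits.get(commits.size() - 1);
    for (int i = 0; i < commits.size(); i++) {
      IndexCommit commit = (IndexCommit) commits.get(i);
      if (commit != newest && commit != lastOptimized) {
        commit.delete();
      }
    }
  }
}
{code}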


> Java Replication error: NullPointerException SEVERE: SnapPull failed on 
> 2009-09-22 nightly
> --
>
> Key: SOLR-1458
> URL: https://issues.apache.org/jira/browse/SOLR-1458
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 1.4
> Environment: CentOS x64
> 8GB RAM
> Tomcat, running with 7G max memory; memory usage is <2GB, so it's not the 
> problem
> Host a: master
> Host b: slave
> Multiple single core Solr instances, using JNDI.
> Java replication
>Reporter: Artem Russakovskii
>Assignee: Noble Paul
> Fix For: 1.4
>
> Attachments: SOLR-1458.patch, SOLR-1458.patch, SOLR-1458.patch, 
> SOLR-1458.patch, SOLR-1458.patch, SolrDeletionPolicy.patch, 
> SolrDeletionPolicy.patch
>
>
> After finally figuring out the new Java based replication, we have started 
> both the slave and the master and issued optimize to all master Solr 
> instances. This triggered some replication to go through just fine, but it 
> looks like some of it is failing.
> Here's what I'm getting in the slave logs, repeatedly for each shard:
> {code} 
> SEVERE: SnapPull failed 
> java.lang.NullPointerException
> at 
> org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:271)
> at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:258)
> at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code} 
> If I issue an optimize again on the master to one of the shards, it then 
> triggers a replication and replicates OK. I have a feeling that these 
> SnapPull failures appear later on but right now I don't have enough to form a 
> pattern.
> Here's replication.properties on one of the failed slave instances.
> {code}
> cat data/replication.properties 
> #Replication details
> #Wed Sep 23 19:35:30 PDT 2009
> replicationFailedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
> previousCycleTimeInSeconds=0
> timesFailed=113
> indexReplicatedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
> indexReplicatedAt=1253759730020
> replicationFailedAt=1253759730020
> lastCycleBytesDownloaded=0
> timesIndexReplicated=113
> {code}
> and another
> {code}
> cat data/replication.properties 
> #Replication details
> #Wed Sep 23 18:42:01 PDT 2009
> replicationFailedAtList=1253756490034,1253756460169
> previousCycleTimeInSeconds=1
> timesFailed=2
> indexReplicatedAtList=1253756521284,1253756490034,1253756460169
> indexReplicatedAt=1253756521284
> replicationFailedAt=1253756490034
> lastCycleBytesDownloaded=22932293
> timesIndexReplicated=3
> {code}
> Some relevant configs:
> In solrconfig.xml:
> {code}
> 
>   
> 
> ${enable.master:false}
> optimize
> optimize
> 00:00:20
> 
> 
> ${enable.slave:false}
> 
> ${master.url}
> 
> 00:00:30
> 
>   
> {code}
> The slave then has this in solrcore.properties:
> {code}
> enable.slave=true
> master.url=URLOFMASTER/replication
> {code}
> and the master has
> {code}
> enable.master=true
> {code}
> I'd be glad to provide more details but I'm not sure what else I can do.  
> SOLR-926 may

Re: svn commit: r818618 - in /lucene/solr/trunk: CHANGES.txt src/java/org/apache/solr/core/SolrCore.java src/java/org/apache/solr/search/SolrFieldCacheMBean.java

2009-09-26 Thread Chris Hostetter

: On Sep 24, 2009, at 4:35 PM, hoss...@apache.org wrote:
: > +stats.add("instanity_count", insanity.length);
: 
: hoss spelling alert!  insanity!

no ... insanity would be if i actually managed to spell insanity correctly 
every time.

thanks for fixing it, Shalin.


-Hoss



[jira] Commented: (SOLR-1458) Java Replication error: NullPointerException SEVERE: SnapPull failed on 2009-09-22 nightly

2009-09-26 Thread Artem Russakovskii (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759927#action_12759927
 ] 

Artem Russakovskii commented on SOLR-1458:
--

Shalin, I took the backupAfter line directly from the SolrReplication wiki,
which covers Java-based replication:
http://wiki.apache.org/solr/SolrReplication. I realize now that the comment
above that line says it's for backup only, but why is it there in the first
place? It threw me off a bit.

> Java Replication error: NullPointerException SEVERE: SnapPull failed on 
> 2009-09-22 nightly
> --
>
> Key: SOLR-1458
> URL: https://issues.apache.org/jira/browse/SOLR-1458
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 1.4
> Environment: CentOS x64
> 8GB RAM
> Tomcat, running with 7G max memory; memory usage is <2GB, so it's not the 
> problem
> Host a: master
> Host b: slave
> Multiple single core Solr instances, using JNDI.
> Java replication
>Reporter: Artem Russakovskii
>Assignee: Noble Paul
> Fix For: 1.4
>
> Attachments: SOLR-1458.patch, SOLR-1458.patch, SOLR-1458.patch, 
> SOLR-1458.patch, SOLR-1458.patch, SolrDeletionPolicy.patch, 
> SolrDeletionPolicy.patch
>
>
> After finally figuring out the new Java based replication, we have started 
> both the slave and the master and issued optimize to all master Solr 
> instances. This triggered some replication to go through just fine, but it 
> looks like some of it is failing.
> Here's what I'm getting in the slave logs, repeatedly for each shard:
> {code} 
> SEVERE: SnapPull failed 
> java.lang.NullPointerException
> at 
> org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:271)
> at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:258)
> at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code} 
> If I issue an optimize again on the master to one of the shards, it then 
> triggers a replication and replicates OK. I have a feeling that these 
> SnapPull failures appear later on but right now I don't have enough to form a 
> pattern.
> Here's replication.properties on one of the failed slave instances.
> {code}
> cat data/replication.properties 
> #Replication details
> #Wed Sep 23 19:35:30 PDT 2009
> replicationFailedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
> previousCycleTimeInSeconds=0
> timesFailed=113
> indexReplicatedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
> indexReplicatedAt=1253759730020
> replicationFailedAt=1253759730020
> lastCycleBytesDownloaded=0
> timesIndexReplicated=113
> {code}
> and another
> {code}
> cat data/replication.properties 
> #Replication details
> #Wed Sep 23 18:42:01 PDT 2009
> replicationFailedAtList=1253756490034,1253756460169
> previousCycleTimeInSeconds=1
> timesFailed=2
> indexReplicatedAtList=1253756521284,1253756490034,1253756460169
> indexReplicatedAt=1253756521284
> replicationFailedAt=1253756490034
> lastCycleBytesDownloaded=22932293
> timesIndexReplicated=3
> {code}
> Some relevant configs:
> In solrconfig.xml:
> {code}
> 
>   
> 
> ${enable.master:false}
> optimize
> optimize
> 00:00:20
> 
> 
> ${enable.slave:false}
> 
> ${master.url}
> 
> 00:00:30
> 
>   
> {code}
> The slave then has this in solrcore.properties:
> {code}
> enable.slave=true
> master.url=URLOFMASTER/replication
> {code}
> and the master has
> {code}
> enable.master=true
> {code}
> I'd be glad to provide more details but I'm not sure what else I can do.  
> SOLR-926 may be relevant.
> T

[jira] Resolved: (SOLR-1462) DIH won't run script transormer anymore. Complains I'm not running java 6

2009-09-26 Thread Edward Rudd (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Rudd resolved SOLR-1462.
---

Resolution: Fixed

Yeah.. I know it sounds weird.. but I figured out what happened.

RHEL 5.3 includes openjdk (before, I was using the package from EPEL), and in
August they released a new version that "downgraded" the EPEL release (which
was B12) down to B09. And for some reason RHEL's openjdk build has no Rhino
support... so the DIH ScriptTransformer breaks.

I have upgraded openjdk to the EPEL release I found in an old archive and filed
an issue with RHEL about it via my subscription.

> DIH won't run script transormer anymore.  Complains I'm not running java 6
> --
>
> Key: SOLR-1462
> URL: https://issues.apache.org/jira/browse/SOLR-1462
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
> Environment: CentOS 5.3 with Java-1.6.0-openjdk-1.6.0.0-1.2.b09.el5 
> (this version has been installed since August)
>Reporter: Edward Rudd
>
> Before a reboot 2 weeks ago DIH worked fine, but now constantly returns this 
> error anytime an import is used.  Any clues how to diagnose what is going on?
> org.apache.solr.handler.dataimport.DataImportHandlerException: 

[jira] Reopened: (SOLR-1092) DIH can have a new command "import" which does not clean

2009-09-26 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reopened SOLR-1092:
-


Import command does not commit documents.

> DIH can have a new command "import" which does not clean
> 
>
> Key: SOLR-1092
> URL: https://issues.apache.org/jira/browse/SOLR-1092
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Reporter: Noble Paul
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 1.4
>
> Attachments: SOLR-1092.patch
>
>
> full-import is slightly risky in that it cleans the index if we fail to specify
> clean=false. We must add an "import" command which will work like full-import
> but does not clean.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1462) DIH won't run script transormer anymore. Complains I'm not running java 6

2009-09-26 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759901#action_12759901
 ] 

Shalin Shekhar Mangar commented on SOLR-1462:
-

Edward, I'm sorry I don't understand. A reboot caused ScriptTransformer to stop 
working? You didn't upgrade the code or change the environment at all?

> DIH won't run script transormer anymore.  Complains I'm not running java 6
> --
>
> Key: SOLR-1462
> URL: https://issues.apache.org/jira/browse/SOLR-1462
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
> Environment: CentOS 5.3 with Java-1.6.0-openjdk-1.6.0.0-1.2.b09.el5 
> (this version has been installed since August)
>Reporter: Edward Rudd
>
> Before a reboot 2 weeks ago DIH worked fine, but now constantly returns this 
> error anytime an import is used.  Any clues how to diagnose what is going on?
> org.apache.solr.handler.dataimport.DataImportHandlerException: 

[jira] Updated: (SOLR-236) Field collapsing

2009-09-26 Thread Martijn van Groningen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn van Groningen updated SOLR-236:
---

Attachment: field-collapse-5.patch

I have created a new patch that has the following changes:
1) Non-adjacent collapsing with sorting on score also uses the Solr caches now,
so all field collapse searches now use the Solr caches properly. This was not the
case in previous versions of the patch. This improvement makes field collapsing
perform better and reduces the query time for regular searches. The downside is
that, in order to make this work, I had to modify some methods in
SolrIndexSearcher.

When sorting on score, the non-adjacent collapsing algorithm needs the score per
document. The score is collected in a Lucene collector. The previous version of
the patch used the searcher.search(Query, Filter, Collector) method to collect
the documents (as a DocSet) and scores, but using that method meant the Solr
caches were ignored.

The methods that return a DocSet in SolrIndexSearcher did not offer the ability
to specify your own collector. I changed that, so you can specify your own
collector and still benefit from the Solr caches. I did this in a non-intrusive
manner, so nothing changes for existing code that uses the normal versions of
these methods.
{code}

   public DocSet getDocSet(Query query) throws IOException {
DocSetCollector collector = new DocSetCollector(maxDoc()>>6, maxDoc());
return getDocSet(query, collector);
   }

   public DocSet getDocSet(Query query, DocSetAwareCollector collector) throws 
IOException {

   }

  DocSet getPositiveDocSet(Query q) throws IOException {
DocSetCollector collector = new DocSetCollector(maxDoc()>>6, maxDoc());
return getPositiveDocSet(q, collector);
   }

  DocSet getPositiveDocSet(Query q, DocSetAwareCollector collector) throws 
IOException {
.
   }

  public DocSet getDocSet(List queries) throws IOException {
DocSetCollector collector = new DocSetCollector(maxDoc()>>6, maxDoc());
return getDocSet(queries, collector);
   }

  public DocSet getDocSet(List queries, DocSetAwareCollector collector) 
throws IOException {
   ...
   }

  protected DocSet getDocSetNC(Query query, DocSet filter) throws IOException {
DocSetCollector collector = new DocSetCollector(maxDoc()>>6, maxDoc());
return getDocSetNC(query,  filter, collector);
   }

  protected DocSet getDocSetNC(Query query, DocSet filter, DocSetAwareCollector 
collector) throws IOException {
   .
   }
{code}
I also made a DocSetAwareCollector that both DocSetCollector and 
DocSetScoreCollector implement.
2) The collapse.includeCollapsedDocs parameter has been removed. In order to
include the collapsed documents, the parameter collapse.includeCollapsedDocs.fl
must be specified. collapse.includeCollapsedDocs.fl=* will include all fields
of the collapsed documents, and collapse.includeCollapsedDocs.fl=id,name will
only include the id and name fields of the collapsed documents.
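
As a quick usage sketch of the renamed parameter (hypothetical query and field
names; plain SolrJ, which is not part of this patch):

{code}
import org.apache.solr.client.solrj.SolrQuery;

// Hypothetical values: only collapse.field and collapse.includeCollapsedDocs.fl
// come from the description above; the query string and field names are made up.
class CollapseParamsSketch {
  static SolrQuery buildQuery() {
    SolrQuery query = new SolrQuery("ipod");
    query.set("collapse.field", "site");
    // Return only the id and name fields of each collapsed document;
    // use "*" to include every field instead.
    query.set("collapse.includeCollapsedDocs.fl", "id,name");
    return query;
  }
}
{code}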

> Field collapsing
> 
>
> Key: SOLR-236
> URL: https://issues.apache.org/jira/browse/SOLR-236
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 1.3
>Reporter: Emmanuel Keller
> Fix For: 1.5
>
> Attachments: collapsing-patch-to-1.3.0-dieter.patch, 
> collapsing-patch-to-1.3.0-ivan.patch, collapsing-patch-to-1.3.0-ivan_2.patch, 
> collapsing-patch-to-1.3.0-ivan_3.patch, field-collapse-3.patch, 
> field-collapse-4-with-solrj.patch, field-collapse-5.patch, 
> field-collapse-5.patch, field-collapse-5.patch, field-collapse-5.patch, 
> field-collapse-5.patch, field-collapse-5.patch, 
> field-collapse-solr-236-2.patch, field-collapse-solr-236.patch, 
> field-collapsing-extended-592129.patch, field_collapsing_1.1.0.patch, 
> field_collapsing_1.3.patch, field_collapsing_dsteigerwald.diff, 
> field_collapsing_dsteigerwald.diff, field_collapsing_dsteigerwald.diff, 
> SOLR-236-FieldCollapsing.patch, SOLR-236-FieldCollapsing.patch, 
> SOLR-236-FieldCollapsing.patch, solr-236.patch, SOLR-236_collapsing.patch, 
> SOLR-236_collapsing.patch
>
>
> This patch include a new feature called "Field collapsing".
> "Used in order to collapse a group of results with similar value for a given 
> field to a single entry in the result set. Site collapsing is a special case 
> of this, where all results for a given web site is collapsed into one or two 
> entries in the result set, typically with an associated "more documents from 
> this site" link. See also Duplicate detection."
> http://www.fastsearch.com/glossary.aspx?m=48&amid=299
> The implementation add 3 new query parameters (SolrParams):
> "collapse.field" to choose the field used to group results
> "collapse.type" normal (default value) or adjacent
> "collapse.m

[jira] Commented: (SOLR-1462) DIH won't run script transormer anymore. Complains I'm not running java 6

2009-09-26 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759903#action_12759903
 ] 

Shalin Shekhar Mangar commented on SOLR-1462:
-

OK, thanks for clearing that up.

> DIH won't run script transormer anymore.  Complains I'm not running java 6
> --
>
> Key: SOLR-1462
> URL: https://issues.apache.org/jira/browse/SOLR-1462
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler
> Environment: CentOS 5.3 with Java-1.6.0-openjdk-1.6.0.0-1.2.b09.el5 
> (this version has been installed since August)
>Reporter: Edward Rudd
>
> Before a reboot 2 weeks ago DIH worked fine, but now constantly returns this 
> error anytime an import is used.  Any clues how to diagnose what is going on?
> org.apache.solr.handler.dataimport.DataImportHandlerException: 

[jira] Commented: (SOLR-1458) Java Replication error: NullPointerException SEVERE: SnapPull failed on 2009-09-22 nightly

2009-09-26 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759900#action_12759900
 ] 

Shalin Shekhar Mangar commented on SOLR-1458:
-

bq. bq. I think this should be the deletion policy that keeps around the last 
optimized commit point if necessary.

Yonik, shouldn't ReplicationHandler be the one to reserve commit points? Also, 
if we go down this way (having SolrDeletionPolicy decide these things), would a 
custom deletion policy play nicely with Solr?
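
For context, a rough sketch of how a Lucene IndexDeletionPolicy can keep the newest optimized commit point alive; this only illustrates the idea under discussion and is not the attached SolrDeletionPolicy.patch:
{code}
import java.io.IOException;
import java.util.List;

import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.IndexDeletionPolicy;

// Illustration only: never delete the newest commit or the newest optimized commit.
public class KeepLastOptimizedDeletionPolicy implements IndexDeletionPolicy {

  public void onInit(List<? extends IndexCommit> commits) throws IOException {
    onCommit(commits);
  }

  public void onCommit(List<? extends IndexCommit> commits) throws IOException {
    IndexCommit lastOptimized = null;
    for (IndexCommit commit : commits) {   // commits are ordered oldest to newest
      if (commit.isOptimized()) {
        lastOptimized = commit;
      }
    }
    IndexCommit newest = commits.get(commits.size() - 1);
    for (IndexCommit commit : commits) {
      if (commit != newest && commit != lastOptimized) {
        commit.delete();                   // everything else may be removed
      }
    }
  }
}
{code}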

> Java Replication error: NullPointerException SEVERE: SnapPull failed on 
> 2009-09-22 nightly
> --
>
> Key: SOLR-1458
> URL: https://issues.apache.org/jira/browse/SOLR-1458
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 1.4
> Environment: CentOS x64
> 8GB RAM
> Tomcat, running with 7G max memory; memory usage is <2GB, so it's not the 
> problem
> Host a: master
> Host b: slave
> Multiple single core Solr instances, using JNDI.
> Java replication
>Reporter: Artem Russakovskii
>Assignee: Noble Paul
> Fix For: 1.4
>
> Attachments: SOLR-1458.patch, SOLR-1458.patch, SOLR-1458.patch, 
> SOLR-1458.patch, SOLR-1458.patch, SolrDeletionPolicy.patch, 
> SolrDeletionPolicy.patch
>
>
> After finally figuring out the new Java based replication, we have started 
> both the slave and the master and issued optimize to all master Solr 
> instances. This triggered some replication to go through just fine, but it 
> looks like some of it is failing.
> Here's what I'm getting in the slave logs, repeatedly for each shard:
> {code} 
> SEVERE: SnapPull failed 
> java.lang.NullPointerException
> at 
> org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:271)
> at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:258)
> at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code} 
> If I issue an optimize again on the master to one of the shards, it then 
> triggers a replication and replicates OK. I have a feeling that these 
> SnapPull failures appear later on but right now I don't have enough to form a 
> pattern.
> Here's replication.properties on one of the failed slave instances.
> {code}
> cat data/replication.properties 
> #Replication details
> #Wed Sep 23 19:35:30 PDT 2009
> replicationFailedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
> previousCycleTimeInSeconds=0
> timesFailed=113
> indexReplicatedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
> indexReplicatedAt=1253759730020
> replicationFailedAt=1253759730020
> lastCycleBytesDownloaded=0
> timesIndexReplicated=113
> {code}
> and another
> {code}
> cat data/replication.properties 
> #Replication details
> #Wed Sep 23 18:42:01 PDT 2009
> replicationFailedAtList=1253756490034,1253756460169
> previousCycleTimeInSeconds=1
> timesFailed=2
> indexReplicatedAtList=1253756521284,1253756490034,1253756460169
> indexReplicatedAt=1253756521284
> replicationFailedAt=1253756490034
> lastCycleBytesDownloaded=22932293
> timesIndexReplicated=3
> {code}
> Some relevant configs:
> In solrconfig.xml:
> {code}
> 
>   
> 
> ${enable.master:false}
> optimize
> optimize
> 00:00:20
> 
> 
> ${enable.slave:false}
> 
> ${master.url}
> 
> 00:00:30
> 
>   
> {code}
> The slave then has this in solrcore.properties:
> {code}
> enable.slave=true
> master.url=URLOFMASTER/replication
> {code}
> and the master has
> {code}
> enable.master=true
> {code}
> I'd be glad to provide more details but I'm not sure what else I can do.  
> SOLR-926 may be relevant.

[jira] Resolved: (SOLR-1092) DIH can have a new command "import" which does not clean

2009-09-26 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-1092.
-

Resolution: Fixed

Committed revision 819170.

> DIH can have a new command "import" which does not clean
> 
>
> Key: SOLR-1092
> URL: https://issues.apache.org/jira/browse/SOLR-1092
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Reporter: Noble Paul
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 1.4
>
> Attachments: SOLR-1092.patch, SOLR-1092.patch
>
>
> full-import is slightly risky in that it cleans the index if we fail to specify 
> clean=false. We must add a command "import" which works like full-import 
> but does not clean.
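
To illustrate the difference, a hedged example of how the two commands would be invoked (the handler path and host are assumptions, not part of the issue):
{code}
# full-import cleans the index first unless clean=false is passed explicitly
http://localhost:8983/solr/dataimport?command=full-import&clean=false

# the new "import" command behaves like full-import but never cleans
http://localhost:8983/solr/dataimport?command=import
{code}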

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1458) Java Replication error: NullPointerException SEVERE: SnapPull failed on 2009-09-22 nightly

2009-09-26 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12759898#action_12759898
 ] 

Yonik Seeley commented on SOLR-1458:


Strange stuff - the last error I just saw was a corrupted index exception from 
the spellchecker - couldn't load the segments_n file.
But the spellcheck building code is Lucene code - Solr's deletion policy should 
have no effect... weird.

> Java Replication error: NullPointerException SEVERE: SnapPull failed on 
> 2009-09-22 nightly
> --
>
> Key: SOLR-1458
> URL: https://issues.apache.org/jira/browse/SOLR-1458
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 1.4
> Environment: CentOS x64
> 8GB RAM
> Tomcat, running with 7G max memory; memory usage is <2GB, so it's not the 
> problem
> Host a: master
> Host b: slave
> Multiple single core Solr instances, using JNDI.
> Java replication
>Reporter: Artem Russakovskii
>Assignee: Noble Paul
> Fix For: 1.4
>
> Attachments: SOLR-1458.patch, SOLR-1458.patch, SOLR-1458.patch, 
> SOLR-1458.patch, SOLR-1458.patch, SolrDeletionPolicy.patch, 
> SolrDeletionPolicy.patch
>
>
> After finally figuring out the new Java based replication, we have started 
> both the slave and the master and issued optimize to all master Solr 
> instances. This triggered some replication to go through just fine, but it 
> looks like some of it is failing.
> Here's what I'm getting in the slave logs, repeatedly for each shard:
> {code} 
> SEVERE: SnapPull failed 
> java.lang.NullPointerException
> at 
> org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:271)
> at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:258)
> at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:181)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:205)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:619)
> {code} 
> If I issue an optimize again on the master to one of the shards, it then 
> triggers a replication and replicates OK. I have a feeling that these 
> SnapPull failures appear later on but right now I don't have enough to form a 
> pattern.
> Here's replication.properties on one of the failed slave instances.
> {code}
> cat data/replication.properties 
> #Replication details
> #Wed Sep 23 19:35:30 PDT 2009
> replicationFailedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
> previousCycleTimeInSeconds=0
> timesFailed=113
> indexReplicatedAtList=1253759730020,1253759700018,1253759670019,1253759640018,1253759610018,1253759580022,1253759550019,1253759520016,1253759490026,1253759460016
> indexReplicatedAt=1253759730020
> replicationFailedAt=1253759730020
> lastCycleBytesDownloaded=0
> timesIndexReplicated=113
> {code}
> and another
> {code}
> cat data/replication.properties 
> #Replication details
> #Wed Sep 23 18:42:01 PDT 2009
> replicationFailedAtList=1253756490034,1253756460169
> previousCycleTimeInSeconds=1
> timesFailed=2
> indexReplicatedAtList=1253756521284,1253756490034,1253756460169
> indexReplicatedAt=1253756521284
> replicationFailedAt=1253756490034
> lastCycleBytesDownloaded=22932293
> timesIndexReplicated=3
> {code}
> Some relevant configs:
> In solrconfig.xml:
> {code}
> 
>   
> 
> ${enable.master:false}
> optimize
> optimize
> 00:00:20
> 
> 
> ${enable.slave:false}
> 
> ${master.url}
> 
> 00:00:30
> 
>   
> {code}
> The slave then has this in solrcore.properties:
> {code}
> enable.slave=true
> master.url=URLOFMASTER/replication
> {code}
> and the master has
> {code}
> enable.master=true
> {code}
> I'd be glad to provide more details but I'm not sure what else I can do.  
> SOLR-926 may be relevant.
> Thanks.

-- 
This message is automatically generated by JIRA.
-
You can reply to

[jira] Updated: (SOLR-1092) DIH can have a new command "import" which does not clean

2009-09-26 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-1092:


Attachment: SOLR-1092.patch

Fix with test.

> DIH can have a new command "import" which does not clean
> 
>
> Key: SOLR-1092
> URL: https://issues.apache.org/jira/browse/SOLR-1092
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Reporter: Noble Paul
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 1.4
>
> Attachments: SOLR-1092.patch, SOLR-1092.patch
>
>
> full-import is slightly risky in that it cleans the index if we fail to specify 
> clean=false. We must add a command "import" which works like full-import 
> but does not clean.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Build failed in Hudson: Solr-trunk #936

2009-09-26 Thread Apache Hudson Server
See 

Changes:

[shalin] Fixed typos and added a little whitespace in the key name

[shalin] Remove unused import and redundant casts

--
[...truncated 2310 lines...]
[junit] Running org.apache.solr.analysis.TestWordDelimiterFilter
[junit] Tests run: 14, Failures: 0, Errors: 0, Time elapsed: 33.924 sec
[junit] Running org.apache.solr.client.solrj.SolrExceptionTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.556 sec
[junit] Running org.apache.solr.client.solrj.SolrQueryTest
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.382 sec
[junit] Running org.apache.solr.client.solrj.TestBatchUpdate
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 19.669 sec
[junit] Running org.apache.solr.client.solrj.TestLBHttpSolrServer
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 14.633 sec
[junit] Running org.apache.solr.client.solrj.beans.TestDocumentObjectBinder
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.803 sec
[junit] Running org.apache.solr.client.solrj.embedded.JettyWebappTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 18.515 sec
[junit] Running 
org.apache.solr.client.solrj.embedded.LargeVolumeBinaryJettyTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 20.136 sec
[junit] Running 
org.apache.solr.client.solrj.embedded.LargeVolumeEmbeddedTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.869 sec
[junit] Running org.apache.solr.client.solrj.embedded.LargeVolumeJettyTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.809 sec
[junit] Running 
org.apache.solr.client.solrj.embedded.MergeIndexesEmbeddedTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.924 sec
[junit] Running org.apache.solr.client.solrj.embedded.MultiCoreEmbeddedTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 6.525 sec
[junit] Running 
org.apache.solr.client.solrj.embedded.MultiCoreExampleJettyTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.679 sec
[junit] Running 
org.apache.solr.client.solrj.embedded.SolrExampleEmbeddedTest
[junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 27.036 sec
[junit] Running org.apache.solr.client.solrj.embedded.SolrExampleJettyTest
[junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 32.857 sec
[junit] Running 
org.apache.solr.client.solrj.embedded.SolrExampleStreamingTest
[junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 40.963 sec
[junit] Running org.apache.solr.client.solrj.embedded.TestSolrProperties
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 8.774 sec
[junit] Running org.apache.solr.client.solrj.request.TestUpdateRequestCodec
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.402 sec
[junit] Running 
org.apache.solr.client.solrj.response.AnlysisResponseBaseTest
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.354 sec
[junit] Running 
org.apache.solr.client.solrj.response.DocumentAnalysisResponseTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.435 sec
[junit] Running 
org.apache.solr.client.solrj.response.FieldAnalysisResponseTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.457 sec
[junit] Running org.apache.solr.client.solrj.response.QueryResponseTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.522 sec
[junit] Running org.apache.solr.client.solrj.response.TestSpellCheckResponse
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 15.367 sec
[junit] Running org.apache.solr.client.solrj.util.ClientUtilsTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.418 sec
[junit] Running org.apache.solr.common.SolrDocumentTest
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.67 sec
[junit] Running org.apache.solr.common.params.ModifiableSolrParamsTest
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.687 sec
[junit] Running org.apache.solr.common.params.SolrParamTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.538 sec
[junit] Running org.apache.solr.common.util.ContentStreamTest
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.692 sec
[junit] Running org.apache.solr.common.util.DOMUtilTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.493 sec
[junit] Running org.apache.solr.common.util.FileUtilsTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.354 sec
[junit] Running org.apache.solr.common.util.IteratorChainTest
[junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 0.445 sec
[junit] Running org.apache.solr.common.util.NamedListTest
[junit] Tests run: 1, Failures: