Solr nightly build failure

2007-05-15 Thread solr-dev

checkJunitPresence:

compile:
[mkdir] Created dir: /tmp/apache-solr-nightly/build
[javac] Compiling 183 source files to /tmp/apache-solr-nightly/build
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

compileTests:
[mkdir] Created dir: /tmp/apache-solr-nightly/build/tests
[javac] Compiling 43 source files to /tmp/apache-solr-nightly/build/tests
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

junit:
[mkdir] Created dir: /tmp/apache-solr-nightly/build/test-results
[junit] Running org.apache.solr.BasicFunctionalityTest
[junit] Tests run: 24, Failures: 0, Errors: 0, Time elapsed: 21.364 sec
[junit] Running org.apache.solr.ConvertedLegacyTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 10.805 sec
[junit] Running org.apache.solr.DisMaxRequestHandlerTest
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.573 sec
[junit] Running org.apache.solr.EchoParamsTest
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 1.218 sec
[junit] Running org.apache.solr.HighlighterTest
[junit] Tests run: 10, Failures: 0, Errors: 0, Time elapsed: 7.402 sec
[junit] Running org.apache.solr.OutputWriterTest
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 1.84 sec
[junit] Running org.apache.solr.SampleTest
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.205 sec
[junit] Running org.apache.solr.analysis.TestBufferedTokenStream
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.054 sec
[junit] Running org.apache.solr.analysis.TestHyphenatedWordsFilter
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.045 sec
[junit] Running org.apache.solr.analysis.TestPatternReplaceFilter
[junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.06 sec
[junit] Running org.apache.solr.analysis.TestPatternTokenizerFactory
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.065 sec
[junit] Running org.apache.solr.analysis.TestPhoneticFilter
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.062 sec
[junit] Running org.apache.solr.analysis.TestRemoveDuplicatesTokenFilter
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.047 sec
[junit] Running org.apache.solr.analysis.TestSynonymFilter
[junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 0.093 sec
[junit] Running org.apache.solr.analysis.TestTrimFilter
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.041 sec
[junit] Running org.apache.solr.analysis.TestWordDelimiterFilter
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 2.679 sec
[junit] Running org.apache.solr.core.RequestHandlersTest
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 5.3 sec
[junit] Running org.apache.solr.core.SolrCoreTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 4.575 sec
[junit] Running org.apache.solr.core.TestBadConfig
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.157 sec
[junit] Running org.apache.solr.core.TestConfig
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 1.795 sec
[junit] Running org.apache.solr.handler.StandardRequestHandlerTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.795 sec
[junit] Running org.apache.solr.handler.TestCSVLoader
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 8.078 sec
[junit] Running org.apache.solr.handler.admin.SystemInfoHandlerTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.088 sec
[junit] Running org.apache.solr.schema.BadIndexSchemaTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.94 sec
[junit] Running org.apache.solr.schema.IndexSchemaTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.261 sec
[junit] Running org.apache.solr.schema.NotRequiredUniqueKeyTest
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 1.034 sec
[junit] Running org.apache.solr.schema.RequiredFieldsTest
[junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 5.162 sec
[junit] Running org.apache.solr.search.TestDocSet
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.76 sec
[junit] Running org.apache.solr.search.TestQueryUtils
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.284 sec
[junit] Running org.apache.solr.servlet.DirectSolrConnectionTest
[junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 2.344 sec
   

[jira] Updated: (SOLR-215) Multiple Solr Cores

2007-05-15 Thread Henri Biestro (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henri Biestro updated SOLR-215:
---

Attachment: solr-trunk-538091.patch

Updated for revision 538091

> Multiple Solr Cores
> ---
>
> Key: SOLR-215
> URL: https://issues.apache.org/jira/browse/SOLR-215
> Project: Solr
>  Issue Type: Improvement
>Reporter: Henri Biestro
>Priority: Minor
> Attachments: solr-trunk-533775.patch, solr-trunk-538091.patch, 
> solr-trunk-src.patch
>
>
> Allow multiple cores in one web-application (or one class-loader):
> This makes it possible to have multiple cores, created from different configs 
> & schemas, in the same application.
> A side effect is that this also allows different indexes.
> Implementation notes for the patch:
> The patch allows multiple 'named' cores in the same application.
> The current single-core behavior has been retained - the core named 'null' - 
> but the code could not be kept 100% compatible. (In particular, SolrConfig.config 
> is gone; SolrCore.getCore() is still here, though.)
> A few classes existed only as singletons and have thus been refactored.
> The Config class feature set has been narrowed to class loading relative to 
> the installation (lib) directory.
> The SolrConfig class feature set has evolved towards the 'solr config' part, 
> caching frequently accessed parameters.
> The IndexSchema class now uses a SolrConfig instance; a few indexing-related 
> parameters from the configuration were needed.
> A SolrCore is built from a SolrConfig & an IndexSchema.
> The creation of a core has become:
> //create a configuration
> SolrConfig config = SolrConfig.createConfiguration("solrconfig.xml");
> //create a schema
> IndexSchema schema = new IndexSchema(config, "schema0.xml");
> //create a core from the two others.
> SolrCore core = new SolrCore("core0", "/path/to/index", config, schema);
> //Accessing a core:
> SolrCore core = SolrCore.getCore("core0");
> There are few other changes mainly related to passing through constructors 
> the SolrCore/SolrConfig used.
> Some background on the 'whys':
> http://www.nabble.com/Multiple-Solr-Cores-tf3608399.html#a10082201
> http://www.nabble.com/Embedding-Solr-vs-Lucene%2C-multiple-Solr-cores--tf3572324.html#a9981355

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-238) [Patch] The tutorial on our website is against trunk which causes confusion by user

2007-05-15 Thread Thorsten Scherler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thorsten Scherler updated SOLR-238:
---

Attachment: SOLR-238.diff

Implementing the solution David suggested in 
http://marc.info/?t=11791669763&r=1&w=2

Now the file you want to generate via e.g. Ant (or change by hand) is 
src/site/src/documentation/resources/schema/symbols-project-v10.ent

This allows you to use &solr-v; and it will be substituted with the value 
defined in symbols-project-v10.ent. More information can be found at 
http://forrest.zones.apache.org/ft/build/forrest-seed/samples/xml-entities.html
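The mechanism described above is plain XML entity substitution; a minimal sketch of what the Ant-generated entity file and a site document using it could look like (the version value here is an illustrative placeholder, not the real one):

```xml
<!-- symbols-project-v10.ent - regenerated by Ant on each nightly run;
     the value below is an illustrative placeholder -->
<!ENTITY solr-v "1.2-dev">

<!-- a site document then pulls the entity set in via its internal subset -->
<!DOCTYPE document [
  <!ENTITY % symbols SYSTEM "symbols-project-v10.ent">
  %symbols;
]>
<document>
  <p>This tutorial is for Solr &solr-v;.</p>
</document>
```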

> [Patch] The tutorial on our website is against trunk which causes confusion 
> by user
> ---
>
> Key: SOLR-238
> URL: https://issues.apache.org/jira/browse/SOLR-238
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Thorsten Scherler
> Attachments: SOLR-238.diff, SOLR-238.diff, SOLR-238.png
>
>
> The patch will add a note to the tutorial page with the following headsup:
> "This is documentation for the development version (TRUNK). Some instructions 
> may only work if you are working against svn head."

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: [jira] Commented: (SOLR-238) [Patch] The tutorial on our website is against trunk which causes confusion by user

2007-05-15 Thread Thorsten Scherler
On Tue, 2007-05-15 at 11:04 +1000, David Crossley wrote:
> Chris Hostetter wrote:
...
> > that's why i was hoping forrest had a variable substitution mechanism
> > built into it that could just read from some file that we have ant
> > generate.
...
> > is there something like that in forrest?
> 
> There is a facility for "XML Entities" ...
> 
> http://forrest.apache.org/faq.html#xml-entities
> which refers to a demo and explanation in the 'forrest seed'
> http://forrest.zones.apache.org/ft/build/forrest-seed/samples/xml-entities.html
> 
> Get your Ant to create the file "symbols-project-v10.ent"
> on each nightly run.
> 
> -David

Thanks David.

That is actually a really nice way and IMO the cleanest solution. I
changed the patch to do exactly what David is recommending.

https://issues.apache.org/jira/browse/SOLR-238

salu2
-- 
Thorsten Scherler thorsten.at.apache.org
Open Source Java  consulting, training and solutions



[jira] Updated: (SOLR-239) Read IndexSchema from InputStream instead of Config file

2007-05-15 Thread Will Johnson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Will Johnson updated SOLR-239:
--

Attachment: IndexSchemaStream2.patch

Patch updated - now with the added benefit of compiling.

> Read IndexSchema from InputStream instead of Config file
> 
>
> Key: SOLR-239
> URL: https://issues.apache.org/jira/browse/SOLR-239
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 1.2
> Environment: all
>Reporter: Will Johnson
>Priority: Minor
> Fix For: 1.2
>
> Attachments: IndexSchemaStream.patch, IndexSchemaStream2.patch
>
>
> The soon-to-follow patch adds a constructor to IndexSchema that allows 
> schemas to be created directly from InputStreams.  The overall logic for the 
> core's creation and use of the IndexSchema does not change; however, this 
> allows Java clients like those in SOLR-20 to parse an IndexSchema.  Once a 
> schema is parsed, the client can inspect an index's capabilities, which is 
> useful for building generic search UIs, e.g. providing a drop-down list of 
> fields to search/sort by.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-221) faceting memory and performance improvement

2007-05-15 Thread J.J. Larrea (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12496046
 ] 

J.J. Larrea commented on SOLR-221:
--

Clearly Solr is going to end up with more than 2 algorithms for computing 
facets, and there's no reason to think they won't be able to happily coexist in 
SimpleFacets.  And we will surely need additional control parameters even for 
the 2.5 (with your patch) algorithms now in place.  So I think we should 
establish a convention for separating algorithm-specific parameters so we don't 
end up with a jumble of top-level parameters.

So rather than facet.minDfFilterCache, how about:
facet.enum.cache.minDF (enable term enum cache for terms with docFreq > 
minDF)
f.<fieldname>.facet.enum.cache.minDF

Might it not be useful to turn off term enum caching when the number of terms 
was above a certain maximum, even if the minDF criterion is met, to trade 
cycles for memory when neither the field cache nor filter cache is practicable? 
 In that case, it could be:
facet.enum.cache.maxTerm  (enable term enum cache for fields where numTerms 
<= maxTerm)
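Spelled out, this follows Solr's usual f.<fieldname>.* per-field override convention; a hypothetical request using the proposed (not yet committed) parameter names might look like:

```
facet=true&facet.field=category
&facet.enum.cache.minDF=25             global default for all faceted fields
&f.category.facet.enum.cache.minDF=0   per-field override for "category"
```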


> faceting memory and performance improvement
> ---
>
> Key: SOLR-221
> URL: https://issues.apache.org/jira/browse/SOLR-221
> Project: Solr
>  Issue Type: Improvement
>Reporter: Yonik Seeley
> Assigned To: Yonik Seeley
> Attachments: facet.patch
>
>
> 1) compare minimum count currently needed to the term df and avoid 
> unnecessary intersection count
> 2) set a minimum term df in order to use the filterCache, otherwise iterate 
> over TermDocs

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-240) java.io.IOException: Lock obtain timed out: SimpleFSLock

2007-05-15 Thread Will Johnson (JIRA)
java.io.IOException: Lock obtain timed out: SimpleFSLock


 Key: SOLR-240
 URL: https://issues.apache.org/jira/browse/SOLR-240
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 1.2
 Environment: windows xp
Reporter: Will Johnson


When running the soon-to-be-attached sample application against Solr, it will 
eventually die.  The same error has happened on both Windows and RH4 Linux.  
The app just submits docs with an id in batches of 10, performs a commit, 
then repeats over and over again.
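The attached ThrashIndex.java is not reproduced here, but the loop it describes (add 10 docs, commit, repeat) can be sketched as below. The XML update-message format is Solr's; the class name, the id values, and the decision to elide the actual HTTP POST (so the sketch stays self-contained) are assumptions of this sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class ThrashSketch {
    /** Build the XML body for one <add> batch of sequentially numbered docs. */
    static String buildAddBatch(int startId, int batchSize) {
        StringBuilder sb = new StringBuilder("<add>");
        for (int i = 0; i < batchSize; i++) {
            sb.append("<doc><field name=\"id\">").append(startId + i)
              .append("</field></doc>");
        }
        return sb.append("</add>").toString();
    }

    /** One iteration of the thrash loop: a batch of 10 adds, then a commit. */
    static List<String> oneIteration(int startId) {
        List<String> bodies = new ArrayList<>();
        bodies.add(buildAddBatch(startId, 10)); // would be POSTed to /update
        bodies.add("<commit/>");                // commit immediately, then repeat
        return bodies;
    }

    public static void main(String[] args) {
        // The real client POSTs these bodies to Solr's update URL in an
        // endless loop; here we just print the first iteration.
        for (String body : oneIteration(0)) {
            System.out.println(body);
        }
    }
}
```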



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-240) java.io.IOException: Lock obtain timed out: SimpleFSLock

2007-05-15 Thread Will Johnson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Will Johnson updated SOLR-240:
--

Attachment: stacktrace.txt
ThrashIndex.java

> java.io.IOException: Lock obtain timed out: SimpleFSLock
> 
>
> Key: SOLR-240
> URL: https://issues.apache.org/jira/browse/SOLR-240
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 1.2
> Environment: windows xp
>Reporter: Will Johnson
> Attachments: stacktrace.txt, ThrashIndex.java
>
>
> when running the soon to be attached sample application against solr it will 
> eventually die.  this same error has happened on both windows and rh4 linux.  
> the app is just submitting docs with an id in batches of 10, performing a 
> commit then repeating over and over again.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-240) java.io.IOException: Lock obtain timed out: SimpleFSLock

2007-05-15 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12496106
 ] 

Yonik Seeley commented on SOLR-240:
---

Thanks Will, I'll try and reproduce this.

> java.io.IOException: Lock obtain timed out: SimpleFSLock
> 
>
> Key: SOLR-240
> URL: https://issues.apache.org/jira/browse/SOLR-240
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 1.2
> Environment: windows xp
>Reporter: Will Johnson
> Attachments: stacktrace.txt, ThrashIndex.java
>
>
> when running the soon to be attached sample application against solr it will 
> eventually die.  this same error has happened on both windows and rh4 linux.  
> the app is just submitting docs with an id in batches of 10, performing a 
> commit then repeating over and over again.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-240) java.io.IOException: Lock obtain timed out: SimpleFSLock

2007-05-15 Thread Will Johnson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Will Johnson updated SOLR-240:
--

Attachment: IndexWriter.patch

I found this:

http://lucene.zones.apache.org:8080/hudson/job/Lucene-Nightly/javadoc/org/apache/lucene/store/NativeFSLockFactory.html

And so I made the attached patch, which seems to run at least 100x longer
than without it.

- will

> java.io.IOException: Lock obtain timed out: SimpleFSLock
> 
>
> Key: SOLR-240
> URL: https://issues.apache.org/jira/browse/SOLR-240
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 1.2
> Environment: windows xp
>Reporter: Will Johnson
> Attachments: IndexWriter.patch, stacktrace.txt, ThrashIndex.java
>
>
> when running the soon to be attached sample application against solr it will 
> eventually die.  this same error has happened on both windows and rh4 linux.  
> the app is just submitting docs with an id in batches of 10, performing a 
> commit then repeating over and over again.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-240) java.io.IOException: Lock obtain timed out: SimpleFSLock

2007-05-15 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12496115
 ] 

Hoss Man commented on SOLR-240:
---

the idea of using different lock implementations has come up in the past: 

http://www.nabble.com/switch-to-native-locks-by-default--tf2967095.html

one reason not to hardcode native locks was that not all file systems 
support them -- so we left in the usage of SimpleFSLock because it's the most 
generally reusable.

rather than change from one hardcoded lock type to another hardcoded lock type, 
we should support a config option for making the choice ... perhaps even adding 
a SolrLockFactory that defines an init(NamedList) method and creating simple 
Solr subclasses of all the core Lucene LockFactory impls so it's easy for 
people to write their own if they want (and we don't just have "if 
(lockType.equals("simple"))..." type config parsing).
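The pluggable scheme sketched in that comment could look roughly like the following. Everything here is hypothetical (no such interface exists in the patch), and a plain Map stands in for Solr's NamedList so the sketch stays self-contained:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical plugin interface: concrete implementations would wrap the
// core Lucene LockFactory classes (SimpleFSLockFactory, NativeFSLockFactory, ...).
interface SolrLockFactory {
    void init(Map<String, String> args); // stand-in for Solr's NamedList
    String name();
}

class SimpleLock implements SolrLockFactory {
    public void init(Map<String, String> args) { /* no args needed */ }
    public String name() { return "simple"; }
}

class NativeLock implements SolrLockFactory {
    public void init(Map<String, String> args) { /* no args needed */ }
    public String name() { return "native"; }
}

public class LockFactoryRegistry {
    private static final Map<String, SolrLockFactory> REGISTRY = new HashMap<>();
    static {
        register(new SimpleLock());
        register(new NativeLock());
    }

    static void register(SolrLockFactory f) { REGISTRY.put(f.name(), f); }

    /** Resolve the configured lockType by lookup instead of inline string compares. */
    static SolrLockFactory forConfig(String lockType) {
        SolrLockFactory f = REGISTRY.get(lockType);
        if (f == null) {
            throw new IllegalArgumentException("unknown lockType: " + lockType);
        }
        return f;
    }
}
```

A user-supplied factory would just call LockFactoryRegistry.register(...) once at startup, so the config parsing never needs to know the full list of lock types.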

> java.io.IOException: Lock obtain timed out: SimpleFSLock
> 
>
> Key: SOLR-240
> URL: https://issues.apache.org/jira/browse/SOLR-240
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 1.2
> Environment: windows xp
>Reporter: Will Johnson
> Attachments: IndexWriter.patch, stacktrace.txt, ThrashIndex.java
>
>
> when running the soon to be attached sample application against solr it will 
> eventually die.  this same error has happened on both windows and rh4 linux.  
> the app is just submitting docs with an id in batches of 10, performing a 
> commit then repeating over and over again.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



RE: [jira] Commented: (SOLR-240) java.io.IOException: Lock obtain timed out: SimpleFSLock

2007-05-15 Thread Will Johnson
True, but the javadocs for the Standard Lock's implementation classes
also say they don't work:

http://java.sun.com/j2se/1.4.2/docs/api/java/io/File.html

Further, NFS locking is also clearly stated to not work in the
SimpleFSLockFactory:

http://lucene.zones.apache.org:8080/hudson/job/Lucene-Nightly/javadoc/org/apache/lucene/store/SimpleFSLockFactory.html

So it appears we're in between a lock and a hard place...  (oh the 80's
sitcom humor)

Adding a config parameter sounds good too, but the new patch is no worse
than what exists in terms of javadoc warnings, and it has been shown to
actually fix what I would imagine is a rather standard configuration
(local disk, XP/RH).

- will

 




Re: (solr 240) java.io.IOException: Lock obtain timed out: SimpleFSLock

2007-05-15 Thread Yonik Seeley

I've been running this for an hour so far... how long does it
normally take you to get an exception?


RE: (solr 240) java.io.IOException: Lock obtain timed out: SimpleFSLock

2007-05-15 Thread Will Johnson
On my XP laptop it takes a couple of minutes; on the Linux server it took 2
days.

- will



RE: [jira] Commented: (SOLR-240) java.io.IOException: Lock obtain timed out: SimpleFSLock

2007-05-15 Thread Chris Hostetter

: Adding a config parameter sounds good too but the new patch is no worse
: than what exists in terms of javadoc warnings and has been shown to
: actually fix what I would imagine is a rather standard configuration
: (local disk xp/rh)

the patch you submitted may be "no worse" from an abstract cost/benefit
standpoint - it may just be worse in a different but equally bad way.
I look at it from the perspective that while the current solution may be
broken for some people, it's always been like that - so it's always been
broken for those people - meanwhile it's obviously worked for some people.

if we change it in a way that doesn't have any clearly obvious "win" for
the majority of people, we may just wind up hurting our existing users
for the theoretical benefit of new users we don't currently have.

New config options would let the user be in control, but if we're going to
hardcode one or the other, i'd rather hardcode the one we've always had so
we don't pull the rug out from under our existing users.



-Hoss



[jira] Updated: (SOLR-240) java.io.IOException: Lock obtain timed out: SimpleFSLock

2007-05-15 Thread Will Johnson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Will Johnson updated SOLR-240:
--

Attachment: IndexWriter2.patch

the attached patch adds a param to SolrIndexConfig called useNativeLocks.  the 
default is false, which keeps the existing design using SimpleFSLockFactory.  
if people think we should allow fully pluggable locking mechanisms i'm game, 
but i wasn't quite sure how to tackle that problem.

as for testing, i wasn't quite sure how to run tests to ensure that the locks 
were working, beyond some basic println's (which passed).  if anyone has good 
ideas i'm all ears.

- will
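For reference, the new param would presumably be set in solrconfig.xml's index settings; a hypothetical fragment (the element name comes from the comment above, its placement is an assumption):

```xml
<!-- solrconfig.xml (assumed placement): opt in to NativeFSLockFactory -->
<indexDefaults>
  <useNativeLocks>true</useNativeLocks>
</indexDefaults>
```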


> java.io.IOException: Lock obtain timed out: SimpleFSLock
> 
>
> Key: SOLR-240
> URL: https://issues.apache.org/jira/browse/SOLR-240
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 1.2
> Environment: windows xp
>Reporter: Will Johnson
> Attachments: IndexWriter.patch, IndexWriter2.patch, stacktrace.txt, 
> ThrashIndex.java
>
>
> when running the soon to be attached sample application against solr it will 
> eventually die.  this same error has happened on both windows and rh4 linux.  
> the app is just submitting docs with an id in batches of 10, performing a 
> commit then repeating over and over again.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (SOLR-208) RSS feed XSL example

2007-05-15 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-208:
-

Assignee: Hoss Man

this seems like a handy little example to me ... anyone see any objections to 
including this as example/solr/conf/xslt/rss_example.xsl assuming we...

1) add the stock License header
2) change the title to "Example Solr RSS Feed"
3) fill in the main description tag with some text like...
   This has been formatted by the sample "rss_example.xsl" transform - use your 
own XSLT to get a nicer RSS feed
4) eliminate the docpos variable since it isn't used.

> RSS feed XSL example
> 
>
> Key: SOLR-208
> URL: https://issues.apache.org/jira/browse/SOLR-208
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: 1.2
>Reporter: Brian Whitman
> Assigned To: Hoss Man
>Priority: Trivial
> Attachments: rss.xsl
>
>
> A quick .xsl file for transforming solr queries into RSS feeds. To get the 
> date and time in properly you'll need an XSL 2.0 processor, as in 
> http://wiki.apache.org/solr/XsltResponseWriter .  Tested to work with the 
> example solr distribution in the nightly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Luke request handler issue

2007-05-15 Thread Erik Hatcher
I've switched Flare to use the Luke request handler simply to retrieve the 
fields in the index.

In the case of a 3.7M document index, it takes a LONG time to execute because 
of the top terms it's generating.  I tried setting numTerms=0 and got an array 
index out of bounds exception.  Is there a trick I'm not seeing to get just 
the list of fields back in the fastest possible way with this request handler?


If Ryan is date-less again tonight, I'm sure he'll have it all fixed  
up by the time I wake up :)  Otherwise, I'll dig in and roll up my  
sleeves sometime this week and make some adjustments to allow turning  
off the top terms feature.


Thanks,
Erik



Re: Luke request handler issue

2007-05-15 Thread Yonik Seeley

On 5/15/07, Erik Hatcher <[EMAIL PROTECTED]> wrote:

I've switched Flare to use the Luke request handler simply to
retrieve the fields in the index.

In the case of a 3.7M document index, it takes a LONG time to execute
because of the top terms its generating.  I tried setting numTerms=0
and got an array index out of bounds exception.


I never had a chance to check out the Luke handler, but since it's still
experimental, I think all of the "parts" should be optional.

Something that takes as long as generating top terms should also be
specifiable per-field, IMO.

-Yonik