[jira] Updated: (SOLR-415) LoggingFilter for debug

2007-11-20 Thread Koji Sekiguchi (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Koji Sekiguchi updated SOLR-415:


Attachment: SOLR-415.patch

 LoggingFilter for debug
 ---

 Key: SOLR-415
 URL: https://issues.apache.org/jira/browse/SOLR-415
 Project: Solr
  Issue Type: Improvement
Reporter: Koji Sekiguchi
Priority: Trivial
 Attachments: SOLR-415.patch


 logging version of analysis.jsp

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: QueryParsing.SortSpec

2007-11-20 Thread Chris Hostetter

: I think it does make sense to keep them together.
: offset and length only make sense if an ordering is specified.

true.

: If we change the name, we should also move it to a top-level class
: (from a static inner).
: Any suggestions?

good point .. promoting it to first order is probably more important than 
renaming.  i don't have any good suggestions for a better name ... in the 
back of my mind the word slice keeps coming up since it effectively 
decides which slice of a DocList/DocSet you get in the result ... 
(SliceSortSpec?) but that may just be me.

since 99% of the use cases are pulling sort, rows, and start from some 
SolrParams, that use case should probably be refactored into a utility 
method as well.
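A rough sketch of what such a utility might look like (every name here is hypothetical, not actual Solr API; SolrParams is stood in for by a plain Map):

```java
import java.util.Map;

public class SortSpecUtil {
    public static final class SortSpec {
        public final String sort;   // stand-in for a parsed Lucene Sort; null => score desc
        public final int offset;
        public final int rows;
        public SortSpec(String sort, int offset, int rows) {
            this.sort = sort; this.offset = offset; this.rows = rows;
        }
    }

    /** Pulls sort, start, and rows out of the params in one place. */
    public static SortSpec parse(Map<String, String> params) {
        String sort = params.get("sort");           // may be absent
        int offset = parseInt(params.get("start"), 0);
        int rows   = parseInt(params.get("rows"), 10);
        return new SortSpec(sort, offset, rows);
    }

    private static int parseInt(String v, int def) {
        return v == null ? def : Integer.parseInt(v);
    }
}
```

The point is just that the three parameters are read and defaulted in a single method instead of in every handler.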


-Hoss



Re: QueryParsing.SortSpec

2007-11-20 Thread Ryan McKinley


since 99% of the use cases are pulling sort, rows, and start from some 
SolrParams, that use case should probably be refactored into a utility 
method as well.




While we are at it, we should also consolidate the null sort checking. 
 See: http://www.nabble.com/Sorting-problem-tf4762114.html#a13627492


Currently for score desc, the parsing utility returns a null SortSpec, 
then has to create a new one to attach rows/offset


ryan


[jira] Commented: (SOLR-416) need to audit all methods that might be using default Locale

2007-11-20 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12543924
 ] 

Hoss Man commented on SOLR-416:
---

FWIW: 'grep -r toUpper\|toLower java webapp' shows 32 places where toUpper or 
toLower are used ... that's probably where we should start trying to fix things 
... there may be other equally heinous Locale aware comparisons being done that 
don't involve these methods.
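The underlying pitfall is easy to demonstrate; a minimal illustration of why an explicit Locale argument is the usual fix (generic Java, not a patch against the Solr code):

```java
import java.util.Locale;

public class LocaleDemo {
    public static void main(String[] args) {
        Locale turkish = new Locale("tr");

        // In the Turkish locale, dotted/dotless i breaks naive comparisons:
        // "explicit".toUpperCase(turkish) yields a dotted capital I (U+0130).
        String upper = "explicit".toUpperCase(turkish);
        System.out.println("EXPLICIT".equals(upper));   // false

        // Passing an explicit locale makes the comparison deterministic.
        String safe = "explicit".toUpperCase(Locale.US);
        System.out.println("EXPLICIT".equals(safe));    // true
    }
}
```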




 need to audit all methods that might be using default Locale
 

 Key: SOLR-416
 URL: https://issues.apache.org/jira/browse/SOLR-416
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man

 As discussed on the mailing list, there are places in Solr where java methods 
 that rely on the default locale are used to compare input with constants ... 
 the specific use case that prompted this bug being string comparison after 
 calling toUpperCase() ... this won't do what it should in some Locales...
 http://www.nabble.com/Invalid-value-%27explicit%27-for-echoParams-parameter-tf4837914.html
 we should audit the code as much as possible and try to replace these use 
 cases in a way that will work for everyone




Re: rows=VERY_LARGE_VALUE throws exception, and error in some cases

2007-11-20 Thread Yonik Seeley
I recently fixed this in the trunk.
-Yonik

On Nov 20, 2007 10:31 AM, Rishabh Joshi [EMAIL PROTECTED] wrote:
 Hi,

 We are using Solr 1.2 for our project and have come across the following
 exception and error:

 Exception:
 SEVERE: java.lang.OutOfMemoryError: Java heap space
 at org.apache.lucene.util.PriorityQueue.initialize (PriorityQueue.java
 :36)

 Steps to reproduce:
 1. Restart your Web Server.
 2. Enter a query with VERY_LARGE_VALUE for rows field. For example:
 http://xx.xx.xx.xx:8080/solr/select?q=unix&start=0&fl=id&indent=off&rows=9
 3. Press enter or click on the 'Go' button on the browser.

 NOTE:
 1. This exception is thrown if '9999999' (seven digits) < 
 VERY_LARGE_VALUE < '999999999' (nine digits).
 2. The exception DOES NOT APPEAR AGAIN if we change the VERY_LARGE_VALUE to
 <= '9999999', execute the query and then change the VERY_LARGE_VALUE back
 to its original value and execute the query again.
 3. If the VERY_LARGE_VALUE >= '9999999999' (ten digits) we get the following
 error:

 Error:
 HTTP Status 400 - For input string: 9999999999

 Has anyone come across this scenario before?

 Regards,
 Rishabh



[jira] Created: (SOLR-416) need to audit all methods that might be using default Locale

2007-11-20 Thread Hoss Man (JIRA)
need to audit all methods that might be using default Locale


 Key: SOLR-416
 URL: https://issues.apache.org/jira/browse/SOLR-416
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man


As discussed on the mailing list, there are places in Solr where java methods 
that rely on the default locale are used to compare input with constants ... 
the specific use case that prompted this bug being string comparison after 
calling toUpperCase() ... this won't do what it should in some Locales...

http://www.nabble.com/Invalid-value-%27explicit%27-for-echoParams-parameter-tf4837914.html

we should audit the code as much as possible and try to replace these use cases 
in a way that will work for everyone




rows=VERY_LARGE_VALUE throws exception, and error in some cases

2007-11-20 Thread Rishabh Joshi
Hi,

We are using Solr 1.2 for our project and have come across the following
exception and error:

Exception:
SEVERE: java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.PriorityQueue.initialize (PriorityQueue.java
:36)

Steps to reproduce:
1. Restart your Web Server.
2. Enter a query with VERY_LARGE_VALUE for rows field. For example:
http://xx.xx.xx.xx:8080/solr/select?q=unix&start=0&fl=id&indent=off&rows=9
3. Press enter or click on the 'Go' button on the browser.

NOTE:
1. This exception is thrown if '9999999' (seven digits) < 
VERY_LARGE_VALUE < '999999999' (nine digits).
2. The exception DOES NOT APPEAR AGAIN if we change the VERY_LARGE_VALUE to
<= '9999999', execute the query and then change the VERY_LARGE_VALUE back
to its original value and execute the query again.
3. If the VERY_LARGE_VALUE >= '9999999999' (ten digits) we get the following
error:

Error:
HTTP Status 400 - For input string: 9999999999

Has anyone come across this scenario before?

Regards,
Rishabh


Re: QueryParsing.SortSpec

2007-11-20 Thread Chris Hostetter

: While we are at it, we should also consolidate the null sort checking.  See:
: http://www.nabble.com/Sorting-problem-tf4762114.html#a13627492
: 
: Currently for score desc, the parsing utility returns a null SortSpec, then
: has to create a new one to attach rows/offset

+1 ... there's no reason the SortSpec needs to be null, it can contain a 
null Sort object in the score desc case (or it can contain Sort.SCORE 
and the SolrIndexSearcher can be the single place that optimizes that into 
a null sort)


-Hoss



[jira] Updated: (SOLR-303) Distributed Search over HTTP

2007-11-20 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-303:
--

Description: 
Searching over multiple shards and aggregating results.
Motivated by http://wiki.apache.org/solr/DistributedSearch


  was:
Motivated by http://wiki.apache.org/solr/FederatedSearch
Index view consistency between multiple requests requirement is relaxed in 
this implementation.

Does the federated search query side. Update not yet done.

Tries to achieve:-

- The client applications are totally agnostic to federated search. The 
federated search and merging of results happen totally behind the scenes in Solr, in 
the request handler. The response format remains the same after merging of results.
The response from each individual shard is deserialized into a SolrQueryResponse 
object. The collection of SolrQueryResponse objects is merged to produce a 
single SolrQueryResponse object. This enables the Response writers to be used as 
is, or with minimal change.

- Efficient query processing, with highlighting and fields generated 
only for merged documents. The query is executed in 2 phases. The first phase gets 
the doc unique keys with sort criteria. The second phase brings all requested 
fields and highlighting information. This saves a lot of CPU when there are 
a good number of shards and highlighting info is requested.
It should be easy to customize the query execution. For example: a user can specify 
executing the query in just 1 phase. (For some queries, when highlighting 
info is not required and the number of fields requested is small, this can be more 
efficient.)

- Ability to easily overwrite the default Federated capability by appropriate 
plugins and request parameters. As federated search is performed by the 
RequestHandler itself, multiple request handlers can easily be pre-configured 
with different federated search settings in solrconfig.xml

- Global weight calculation is done by querying the terms' doc frequencies from 
all shards.

- Federated search works on Http transport. So individual shard's VIP can be 
queried. Load-balancing and Fail-over taken care by VIP as usual.

-Sub-searcher response parsing as a plugin interface. Different implementations 
could be written based on JSON, XML SAX, etc. The current one is based on XML DOM.


HOW:
---
A new RequestHandler called MultiSearchRequestHandler does the federated search 
on multiple sub-searchers (referred to as shards going forward). It extends the 
RequestHandlerBase. handleRequestBody method in RequestHandlerBase has been 
divided into query building and execute methods. This has been done to 
calculate global numDocs and docFreqs; and execute the query efficiently on 
multiple shards.
All the search request handlers are expected to extend 
MultiSearchRequestHandler class in order to enable federated capability for the 
handler. StandardRequestHandler and DisMaxRequestHandler have been changed to 
extend this class.
 
The federated search kicks in if "shards" is present as a request parameter. 
Otherwise the search is performed as usual on the local index. e.g. 
shards=local,host1:port1,host2:port2 will search on the local index and 2 
remote indexes. The search responses from all 3 shards are merged and served 
back to the client. 

The search request processing on the set of shards is performed as follows:

STEP 1: The query is built, terms are extracted. Global numDocs and docFreqs 
are calculated by requesting all the shards and adding up numDocs and docFreqs 
from each shard.

STEP 2: (FirstQueryPhase) All shards are queried. Global numDocs and docFreqs 
are passed as request parameters. All document fields are NOT requested, only 
document uniqFields and sort fields are requested. MoreLikeThis and 
Highlighting information are NOT requested.

STEP 3: Responses from FirstQueryPhase are merged based on sort, start and 
rows params. Merged doc uniqField and sort fields are collected. Other 
information like facet and debug is also merged.

STEP 4: (SecondQueryPhase) Merged doc uniqFields and sort fields are grouped 
based on shards. All shards in the grouping are queried for the merged doc 
uniqFields (from FirstQueryPhase), highlighting and moreLikeThis info.

STEP 5: Responses from all shards from SecondQueryPhase are merged.

STEP 6: Document fields , highlighting and moreLikeThis info from 
SecondQueryPhase are merged into FirstQueryPhase response.




TODO:
-Support sort field other than default score
-Support ResponseDocs in writers other than XMLWriter
-Http connection timeouts

OPEN ISSUES;
-Merging of facets by top n terms of field f 

Scope for Performance optimization:-
-Search shards in parallel threads
-Http connection Keep-Alive ?
-Cache global numDocs and docFreqs
-Cache Query objects in handlers ??

Would appreciate feedback on my approach. I understand that there would be a lot 
of things I might have overlooked. 


   

[jira] Commented: (SOLR-416) need to audit all methods that might be using default Locale

2007-11-20 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12543932
 ] 

Yonik Seeley commented on SOLR-416:
---

I personally hate case insensitivity for parameters; if not explicitly 
documented and used, or if it was added after Solr 1.2, I'd vote to kill it.

Character.toLower/toUpper aren't locale aware.
String.equalsIgnoreCase() uses Character.toLower/toUpper, which aren't locale 
aware, and would be fine for comparisons against a known string.
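That claim is easy to check directly (a generic illustration, not Solr code):

```java
import java.util.Locale;

public class IgnoreCaseDemo {
    public static void main(String[] args) {
        // String.equalsIgnoreCase uses the per-character Character
        // conversions, which take no Locale argument, so it stays stable
        // even when the default locale changes full-string case mapping.
        Locale.setDefault(new Locale("tr"));
        System.out.println("explicit".equalsIgnoreCase("EXPLICIT"));       // true
        // ...whereas the full-string conversion is locale sensitive:
        System.out.println("explicit".toUpperCase().equals("EXPLICIT"));   // false under tr
    }
}
```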


 need to audit all methods that might be using default Locale
 

 Key: SOLR-416
 URL: https://issues.apache.org/jira/browse/SOLR-416
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man

 As discussed on the mailing list, there are places in Solr where java methods 
 that rely on the default locale are used to compare input with constants ... 
 the specific use case that prompted this bug being string comparison after 
 calling toUpperCase() ... this won't do what it should in some Locales...
 http://www.nabble.com/Invalid-value-%27explicit%27-for-echoParams-parameter-tf4837914.html
 we should audit the code as much as possible and try to replace these use 
 cases in a way that will work for everyone




[jira] Commented: (SOLR-303) Federated Search over HTTP

2007-11-20 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12543931
 ] 

Hoss Man commented on SOLR-303:
---

Note: there has been discussion recently about the terminology distinction 
between federated search and distributed search (which ken recently updated 
on the wiki) ... this issue is tracking distributed search and not federated 
search, correct?

if so, the issue summary should be updated

http://wiki.apache.org/solr/FederatedSearch
http://wiki.apache.org/solr/DistributedSearch




 Federated Search over HTTP
 --

 Key: SOLR-303
 URL: https://issues.apache.org/jira/browse/SOLR-303
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Sharad Agarwal
Priority: Minor
 Attachments: fedsearch.patch, fedsearch.patch, fedsearch.patch, 
 fedsearch.patch, fedsearch.patch, fedsearch.stu.patch, fedsearch.stu.patch


 Motivated by http://wiki.apache.org/solr/FederatedSearch
 Index view consistency between multiple requests requirement is relaxed in 
 this implementation.
 Does the federated search query side. Update not yet done.
 Tries to achieve:-
 
 - The client applications are totally agnostic to federated search. The 
 federated search and merging of results are totally behind the scene in Solr 
 in request handler . Response format remains the same after merging of 
 results.
 The response from individual shard is deserialized into SolrQueryResponse 
 object. The collection of SolrQueryResponse objects are merged to produce a 
 single SolrQueryResponse object. This enables to use the Response writers as 
 it is; or with minimal change.
 - Efficient query processing with highlighting and fields getting generated 
 only for merged documents. The query is executed in 2 phases. First phase 
 gets the doc unique keys with sort criteria. Second phase brings all 
 requested fields and highlighting information. This saves lot of CPU in case 
 there are good number of shards and highlighting info is requested.
 Should be easy to customize the query execution. For example: user can 
 specify to execute query in just 1 phase itself. (For some queries when 
 highlighting info is not required and number of fields requested are small; 
 this can be more efficient.)
 - Ability to easily overwrite the default Federated capability by appropriate 
 plugins and request parameters. As federated search is performed by the 
 RequestHandler itself, multiple request handlers can easily be pre-configured 
 with different federated search settings in solrconfig.xml
 - Global weight calculation is done by querying the terms' doc frequencies 
 from all shards.
 - Federated search works on Http transport. So individual shard's VIP can be 
 queried. Load-balancing and Fail-over taken care by VIP as usual.
 -Sub-searcher response parsing as a plugin interface. Different 
 implementation could be written based on JSON, xml SAX etc. Current one based 
 on XML DOM.
 HOW:
 ---
 A new RequestHandler called MultiSearchRequestHandler does the federated 
 search on multiple sub-searchers, (referred as shards going forward). It 
 extends the RequestHandlerBase. handleRequestBody method in 
 RequestHandlerBase has been divided into query building and execute methods. 
 This has been done to calculate global numDocs and docFreqs; and execute the 
 query efficiently on multiple shards.
 All the search request handlers are expected to extend 
 MultiSearchRequestHandler class in order to enable federated capability for 
 the handler. StandardRequestHandler and DisMaxRequestHandler have been 
 changed to extend this class.
  
 The federated search kicks in if shards is present in the request 
 parameter. Otherwise search is performed as usual on the local index. eg. 
 shards=local,host1:port1,host2:port2 will search on the local index and 2 
 remote indexes. The search response from all 3 shards are merged and serviced 
 back to the client. 
 The search request processing on the set of shards is performed as follows:
 STEP 1: The query is built, terms are extracted. Global numDocs and docFreqs 
 are calculated by requesting all the shards and adding up numDocs and 
 docFreqs from each shard.
 STEP 2: (FirstQueryPhase) All shards are queried. Global numDocs and docFreqs 
 are passed as request parameters. All document fields are NOT requested, only 
 document uniqFields and sort fields are requested. MoreLikeThis and 
 Highlighting information are NOT requested.
 STEP 3: Responses from FirstQueryPhase are merged based on sort, start 
 and rows params. Merged doc uniqField and sort fields are collected. Other 
 information like facet and debug is also merged.
 STEP 4: (SecondQueryPhase) Merged doc uniqFields and sort fields are grouped 
 based on shards. All shards in the grouping are 

Re: logging through log4j

2007-11-20 Thread Chris Hostetter

: It's cool to know about the jul-log4j-bridge and I like the way with a 
: single method call one can effectively short-circuit j.u.l. by stealing 
: its output, but on very quick read it appears there is a potentially 
: non-trivial performance penalty from the need to re-multiplex 
: everything, and some of the limitations of j.u.l. remain in effect.

i haven't looked at the code for this jul-log4j-bridge, but if anyone 
*really* wants to bridge the JDK / log4j gap once and for all there is a 
fairly straightforward way to do it that can be as efficient as you want it 
to be:

   Implement the JDK Logging API using classes that delegate to Log4J.

The only problems (in my humble opinion) with JDK logging are that:

  1) it was branded poorly
  2) it is both an API and a default implementation all rolled into one 
 without a well advertised separation.
  3) the default implementation only supports a very limited 
 config syntax.

The JDK logging docs are very clear about the fact that *ANYONE* can 
extend the j.u.logging.LogManager class and make it behave any way they 
want -- the java.util.logging.manager system property can be used to pick 
an implementation for your JVM at run time -- no one writing an 
application *has* to use the j.u.logging.LogManager (or its default 
config file syntax) just because they use a library that uses 
a j.u.logging.Logger.

I have never understood why Log4j doesn't provide some class like...

public class JavaUtilLoggingManagerFacade extends java.util.logging.LogManager { ... }
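Short of a full LogManager replacement, the delegation shape is simple to sketch with a custom Handler; here the Log4j side is stood in for by a plain list, since the point is the shape of the delegation, not the Log4j API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

/** A j.u.l. Handler that forwards records elsewhere; in a real bridge the
 *  body of publish() would call a Log4j Logger instead of adding to a list. */
public class DelegatingHandler extends Handler {
    public final List<String> sink = new ArrayList<>();

    @Override public void publish(LogRecord record) {
        sink.add(record.getLevel() + ": " + record.getMessage());
    }
    @Override public void flush() {}
    @Override public void close() {}

    /** Steals a logger's output: detach parent handlers, attach ourselves. */
    public static DelegatingHandler install(String loggerName) {
        Logger logger = Logger.getLogger(loggerName);
        logger.setUseParentHandlers(false);   // short-circuit default j.u.l. output
        logger.setLevel(Level.ALL);
        DelegatingHandler h = new DelegatingHandler();
        logger.addHandler(h);
        return h;
    }
}
```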


-Hoss



[jira] Commented: (SOLR-409) Allow configurable class loader sharing between cores

2007-11-20 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12543970
 ] 

Hoss Man commented on SOLR-409:
---

i'm not sure how to interpret "Yes #1, not #2" ... are you saying my 
understanding about case #2 is wrong, or that you agree with me we shouldn't do 
it? (i was basing that comment on the original issue description)

 I don't see a need to have 'hierarchical' classloaders - a single global 
 shared library should be sufficient:

by hierarchical i'm referring to the normal way java classloaders work: every 
classloader has a parent, which it delegates to automatically.  if we're going to 
have a shared class loader available for all cores, then it should be the 
parent of each core specific plugin classloader.

(currently the plugin ClassLoader uses the context classloader as its parent 
... that can still work as long as whatever MultiCoreManager object that 
deals with multicore.xml uses the new shared code classloader as the context 
when instantiating each SolrCore ... or we can be more explicit about it.)
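The parent/child arrangement being described, in miniature (empty URL arrays stand in for real lib directories):

```java
import java.net.URL;
import java.net.URLClassLoader;

public class LoaderHierarchy {
    public static void main(String[] args) {
        // One shared loader for the common lib dir...
        ClassLoader shared = new URLClassLoader(
                new URL[0], LoaderHierarchy.class.getClassLoader());

        // ...used as the *parent* of each per-core plugin loader, so any
        // class it loads is the same Class object in every core.
        ClassLoader coreA = new URLClassLoader(new URL[0], shared);
        ClassLoader coreB = new URLClassLoader(new URL[0], shared);

        System.out.println(coreA.getParent() == coreB.getParent()); // true
    }
}
```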


 Allow configurable class loader sharing between cores
 -

 Key: SOLR-409
 URL: https://issues.apache.org/jira/browse/SOLR-409
 Project: Solr
  Issue Type: Sub-task
Affects Versions: 1.3
Reporter: Henri Biestro
Priority: Minor
 Fix For: 1.3

 Attachments: solr-350_409.patch, solr-350_409.patch, 
 solr-350_409.patch, solr-409.patch, solr-409.patch


 WHAT:
 This patch allows configuring in the solrconfig.xml the library directory 
 and associated class loader used to dynamically create instances.
 The solr config XML can now specify a libDir element (the same way the 
 dataDir can be specified).
 That element can also specify through the 'shared' attribute whether the 
 library is shared.
 By default, the shared attribute is true; if you specify a libDir, its class 
 loader is made shareable.
 Syntax:
 <libDir shared='true*|false'>/path/to/shareable/dir</libDir>
 WHY:
 Current behavior allocates one class loader per config, and thus per core.
 However, there are cases where one would like different cores to share some 
 objects that are dynamically instantiated (ie, where the class name is used 
 to find the class through the class loader and instantiate). In the current 
 form; since each core possesses its own class loader, static members are 
 indeed different objects. For instance, there is no way of implementing a 
 singleton shared between 2 request handlers.
 Originally from 
 http://www.nabble.com/Post-SOLR215-SOLR350-singleton-issue-tf4776980.html
 HOW:
 The libDir element is extracted from the XML configuration file and parsed in 
 the Config constructor.
 A static map of weak reference to classloaders keyed by library path is used 
 to keep track of already built shareable class loaders.
 The weak reference is here to allow class eviction by the GC, avoiding 
 leakage that would result by keeping hard reference to class loaders.
 STATUS:
 initial code drop
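The weak-reference cache described above might look roughly like this (a sketch, not the attached patch; the loader construction is stubbed out):

```java
import java.lang.ref.WeakReference;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.HashMap;
import java.util.Map;

/** Sketch of the cache described above: shareable loaders keyed by
 *  library path, held weakly so unused loaders can be collected. */
public class SharedLoaderCache {
    private static final Map<String, WeakReference<ClassLoader>> CACHE =
            new HashMap<>();

    public static synchronized ClassLoader forPath(String libDir) {
        WeakReference<ClassLoader> ref = CACHE.get(libDir);
        ClassLoader loader = (ref == null) ? null : ref.get();
        if (loader == null) {
            // Real code would list the jars under libDir here.
            loader = new URLClassLoader(new URL[0]);
            CACHE.put(libDir, new WeakReference<>(loader));
        }
        return loader;
    }
}
```

Because the map holds only weak references, a loader (and every class it defined) becomes collectible once no core holds a strong reference to it.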




Re: Initializing - break init() API compatibility?

2007-11-20 Thread Ryan McKinley




...the only thing slightly non-standard about calling these methods 
setters is that they aren't expected to just be POJO style setters ... 
the expectation is that the class will do some work when they are called 
-- hence my tell (or maybe notify or inform is a better verb) 
suggestion.


Using the terms "aware" and "inform" makes it all sit better with me. 
That makes a clean init path:


1. call init( existing args )
2. For each XXXAware impl, call informXXX( )

for now XXX is SolrCore and ResourceLoader (ClassLoader stuff refactored 
out of Config)





: To sum up, I see two directions:
: 
: 1. Keep existing init() apis.  Add more init() functions that pass in SolrCore

: or a ResourceFinder.
: 
: 2. Keep existing init() apis, but advertise they will change in the future.

: As a stop-gap measure make the parameters that will be available in the future
: available through ThreadLocal. (SOLR-414)
: 
: I vote for #2 because after the 2.0 release, we will have a clean API with all

: init options available in a single place.  I am ok with #1 also.

i vote for #1, but as i said: the new methods shouldn't be named init 
since that will clearly confuse people about the semantics.  this also has 
the nice property that 1 year from now when we decide there's another 
(new) important object we'd like plugins to have access to (if they want 
it) before they are asked to do work we don't have to jump through all the 
hoops of changing the init() signature again.




I change my vote to #1  -- I will modify SOLR-414 to take this strategy

ryan


[jira] Commented: (SOLR-409) Allow configurable class loader sharing between cores

2007-11-20 Thread Ryan McKinley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12544016
 ] 

Ryan McKinley commented on SOLR-409:


I agree with everything you say above.

Lets take the approach in #1 and avoid anything resembling #2


 Allow configurable class loader sharing between cores
 -

 Key: SOLR-409
 URL: https://issues.apache.org/jira/browse/SOLR-409
 Project: Solr
  Issue Type: Sub-task
Affects Versions: 1.3
Reporter: Henri Biestro
Priority: Minor
 Fix For: 1.3

 Attachments: solr-350_409.patch, solr-350_409.patch, 
 solr-350_409.patch, solr-409.patch, solr-409.patch


 WHAT:
 This patch allows configuring in the solrconfig.xml the library directory 
 and associated class loader used to dynamically create instances.
 The solr config XML can now specify a libDir element (the same way the 
 dataDir can be specified).
 That element can also specify through the 'shared' attribute whether the 
 library is shared.
 By default, the shared attribute is true; if you specify a libDir, its class 
 loader is made shareable.
 Syntax:
 <libDir shared='true*|false'>/path/to/shareable/dir</libDir>
 WHY:
 Current behavior allocates one class loader per config, and thus per core.
 However, there are cases where one would like different cores to share some 
 objects that are dynamically instantiated (ie, where the class name is used 
 to find the class through the class loader and instantiate). In the current 
 form; since each core possesses its own class loader, static members are 
 indeed different objects. For instance, there is no way of implementing a 
 singleton shared between 2 request handlers.
 Originally from 
 http://www.nabble.com/Post-SOLR215-SOLR350-singleton-issue-tf4776980.html
 HOW:
 The libDir element is extracted from the XML configuration file and parsed in 
 the Config constructor.
 A static map of weak reference to classloaders keyed by library path is used 
 to keep track of already built shareable class loaders.
 The weak reference is here to allow class eviction by the GC, avoiding 
 leakage that would result by keeping hard reference to class loaders.
 STATUS:
 initial code drop




Re: Initializing - break init() API compatibility?

2007-11-20 Thread Ryan McKinley

I started implementing this suggestion and am running into one hitch.

The cleanest solution is to have a single place manage Aware/Inform. 
The cleanest place for this is in ResourceLoader (refactored Config 
superclass)


 public Object newInstance(String cname, String... subpackages) {
...
  Object obj = clazz.newInstance();
  if( obj instanceof ResourceLoaderAware ) {
((ResourceLoaderAware)obj).informResourceLoader( this );
  }
  if( obj instanceof SolrCoreAware ) {
if( core != null ) {
  ((SolrCoreAware)obj).informSolrCore( core );
}
else {
  waitingForCore.add( ((SolrCoreAware)obj) );
}
  }
...
 }


BUT this has one major drawback.  This will call informXXX *before* 
init() is called.  In the core case, it may be called before *or* after!


Alternatively, we can put the Aware/Inform logic in the 
AbstractPluginLoader -- this works fine for ResourceLoaderAware, but 
gets complicated for SolrCoreAware (as AbstractPluginLoader does not 
know about the core, and without huge changes cannot wait until the core 
is fully loaded)
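One way around the ordering hitch, following the waitingForCore idea in the snippet above (class and method names are made up for illustration, not actual Solr API): queue SolrCoreAware instances and flush the queue once the core finishes loading.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical shapes for the discussion above; not actual Solr classes.
interface SolrCoreAware { void informSolrCore(Object core); }

class DeferringResourceLoader {
    private Object core;                                   // null until fully loaded
    private final List<SolrCoreAware> waitingForCore = new ArrayList<>();

    /** Called from newInstance() after init(); queues if the core isn't ready. */
    void register(Object obj) {
        if (obj instanceof SolrCoreAware) {
            if (core != null) ((SolrCoreAware) obj).informSolrCore(core);
            else waitingForCore.add((SolrCoreAware) obj);
        }
    }

    /** Called once the SolrCore finishes loading: flush the queue. */
    void inform(Object loadedCore) {
        this.core = loadedCore;
        for (SolrCoreAware aware : waitingForCore) aware.informSolrCore(loadedCore);
        waitingForCore.clear();
    }
}
```

This guarantees informSolrCore() is never called before the core exists, regardless of when the plugin was instantiated relative to core loading.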


Thoughts?

ryan


[jira] Updated: (SOLR-414) Coherent plugin initialization strategy

2007-11-20 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley updated SOLR-414:
---

Attachment: SOLR-414-Initialization.patch

Updated patch to reflect ideas discussed in:
http://www.nabble.com/Initializing---break-init%28%29-API-compatibility--tf4808463.html

This sticks with 1.2 init APIs and abandons the previous ThreadLocal approach.  
Instead, this

1. Adds two interfaces: ResourceLoaderAware, SolrCoreAware
2. The ResourceLoader keeps track of "Aware" instances until it is told to 
"Inform" them



 Coherent plugin initialization strategy
 ---

 Key: SOLR-414
 URL: https://issues.apache.org/jira/browse/SOLR-414
 Project: Solr
  Issue Type: Improvement
Affects Versions: 1.3
Reporter: Ryan McKinley
Assignee: Ryan McKinley
 Fix For: 1.3

 Attachments: SOLR-414-Initialization.patch, 
 SOLR-414-Initialization.patch, SOLR-414-Initialization.patch, 
 SOLR-414-Initialization.patch


 We currently load many plugins with a Map or NamedList -- since SOLR-215, the 
 current core is not available through SolrCore.getSolrCore() and may need to 
 be used for initialization.
 Ideally, we could change the init() methods from:
 {panel}void init( final Map<String,String> args );{panel}
 to
 {panel}void init( final SolrCore core, final Map<String,String> args );{panel}
 Without breaking existing APIs, this change is difficult (some ugly options 
 exist).  This patch offers a solution to keep existing 1.2 APIs, and allow 
 access to the SolrConfig and SolrCore though ThreadLocal.  This should be 
 removed in a future release.
 {panel}
   DeprecatedPluginUtils.getCurrentCore();
   DeprecatedPluginUtils.getCurrentConfig();
 {panel}
 This patch removes the SolrConfig.Initializable that was introduced in 
 SOLR-215.
 For background, see:
 http://www.nabble.com/Initializing---break-init%28%29-API-compatibility--tf4808463.html
 See also: SOLR-260, SOLR-215,  SOLR-399




[jira] Created: (SOLR-417) Move SortSpec to top level class and cleanup

2007-11-20 Thread Ryan McKinley (JIRA)
Move SortSpec to top level class and cleanup


 Key: SOLR-417
 URL: https://issues.apache.org/jira/browse/SOLR-417
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Ryan McKinley
Priority: Minor
 Fix For: 1.3


Move SortSpec from within IndexSchema.

see discussion: http://www.nabble.com/QueryParsing.SortSpec-tf4840762.html




[jira] Updated: (SOLR-417) Move SortSpec to top level class and cleanup

2007-11-20 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley updated SOLR-417:
---

Attachment: SOLR-417-SortSpec.patch

Moves SortSpec to its own class and cleans up lots of null checking.

The one notable change with API consequences is that I changed:

public static SortSpec parseSort(String sortSpec, IndexSchema schema);
 to:
public static Sort parseSort(String sortSpec, IndexSchema schema)

Is this an OK change?  Is that part of the non-fungible API?  If yes, then we 
just make something like:

{code:java}
  public static SortSpec parseSpec(String sortSpec, IndexSchema schema) 
  {
Sort sort = parseLuceneSort(sortSpec, schema);
if( sort != null ) {
  return new SortSpec( sort, -1 );
}
return null;
  }
{code}

 Move SortSpec to top level class and cleanup
 

 Key: SOLR-417
 URL: https://issues.apache.org/jira/browse/SOLR-417
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Ryan McKinley
Priority: Minor
 Fix For: 1.3

 Attachments: SOLR-417-SortSpec.patch


 Move SortSpec from within IndexSchema.
 see discussion: http://www.nabble.com/QueryParsing.SortSpec-tf4840762.html




Re: [jira] Issue Comment Edited: (SOLR-417) Move SortSpec to top level class and cleanup

2007-11-20 Thread patrick o'leary




Looks good for me, thanks.

Ryan McKinley (JIRA) wrote:

  [ https://issues.apache.org/jira/browse/SOLR-417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12544118 ] 

ryantxu edited comment on SOLR-417 at 11/20/07 3:44 PM:
--

Moves SortSpec to its own class and cleans up lots of null checking.

The one notable change with API consequences is that I changed:

{code:java}
public static SortSpec parseSort(String sortSpec, IndexSchema schema);
{code}
 to:
{code:java}
public static Sort parseSort(String sortSpec, IndexSchema schema)
{code}

Is this an OK change?  Is that part of the non-fungible API?  If not, then we just make something like:

{code:java}
  public static SortSpec parseSpec(String sortSpec, IndexSchema schema) 
  {
Sort sort = parseLuceneSort(sortSpec, schema);
if( sort != null ) {
  return new SortSpec( sort, -1 );
}
return null;
  }
{code}

  was (Author: ryantxu):
Moves SortSpec to its own class and cleans up lots of null checking.

The one notable change with API consequences is that I changed:

public static SortSpec parseSort(String sortSpec, IndexSchema schema);
 to:
public static Sort parseSort(String sortSpec, IndexSchema schema)

Is this an OK change?  Is that part of the non-fungible API?  If yes, then we just make something like:

{code:java}
  public static SortSpec parseSpec(String sortSpec, IndexSchema schema) 
  {
Sort sort = parseLuceneSort(sortSpec, schema);
if( sort != null ) {
  return new SortSpec( sort, -1 );
}
return null;
  }
{code}
  
  
  
Move SortSpec to top level class and cleanup


Key: SOLR-417
URL: https://issues.apache.org/jira/browse/SOLR-417
Project: Solr
 Issue Type: Improvement
 Components: search
   Reporter: Ryan McKinley
   Priority: Minor
Fix For: 1.3

Attachments: SOLR-417-SortSpec.patch


Move SortSpec from within IndexSchema.
see discussion: http://www.nabble.com/QueryParsing.SortSpec-tf4840762.html

  
  
  


-- 
Patrick O'Leary


You see, wire telegraph is a kind of a very, very long cat. You pull his tail in New York and his head is meowing in Los Angeles.
 Do you understand this? 
And radio operates exactly the same way: you send signals here, they receive them there. The only difference is that there is no cat.
  - Albert Einstein
