This should be done.  Great idea.

On Wed, Sep 17, 2008 at 3:41 PM, Lance Norskog <[EMAIL PROTECTED]> wrote:
> My vote is for dynamically scanning a directory of configuration files. When
> a new one appears, or an existing file is touched, load it. When a
> configuration disappears, unload it.  This model works very well for servlet
> containers.
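
Lance's scanning model could be prototyped with a simple poll of the directory's timestamps. A minimal sketch, assuming hypothetical names (ConfigDirScanner and its loadOrReload/unload hooks are not existing Solr APIs):

    import java.io.File;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical poller: scans a config directory and (un)loads configurations
    // as files appear, are touched, or disappear -- the model described above.
    public class ConfigDirScanner {
      private final File configDir;
      private final Map<String, Long> lastSeen = new HashMap<String, Long>();

      public ConfigDirScanner(File configDir) {
        this.configDir = configDir;
      }

      public synchronized void scanOnce() {
        Map<String, Long> current = new HashMap<String, Long>();
        File[] files = configDir.listFiles();
        if (files != null) {
          for (File f : files) {
            current.put(f.getName(), Long.valueOf(f.lastModified()));
          }
        }
        // New or touched files: (re)load them.
        for (Map.Entry<String, Long> e : current.entrySet()) {
          Long previous = lastSeen.get(e.getKey());
          if (previous == null || previous.longValue() != e.getValue().longValue()) {
            loadOrReload(new File(configDir, e.getKey()));
          }
        }
        // Files that disappeared: unload the corresponding configuration.
        for (String name : lastSeen.keySet()) {
          if (!current.containsKey(name)) {
            unload(name);
          }
        }
        lastSeen.clear();
        lastSeen.putAll(current);
      }

      private void loadOrReload(File configFile) { /* hypothetical: create or replace the core */ }

      private void unload(String coreName) { /* hypothetical: remove the core */ }
    }

A background thread calling scanOnce() every few seconds would give the load-on-touch / unload-on-delete behavior described above.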
>
> Lance
>
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Yonik Seeley
> Sent: Wednesday, September 17, 2008 11:21 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Some new SOLR features
>
> On Wed, Sep 17, 2008 at 1:27 PM, Jason Rutherglen
> <[EMAIL PROTECTED]> wrote:
>> If the configuration code is going to be rewritten then I would like
>> to see the ability to dynamically update the configuration and schema
>> without needing to reboot the server.
>
> Exactly.  Actually, multi-core allows you to instantiate a completely new
> core and swap it for the old one, but it's a bit of a heavyweight approach.
>
> The key is finding the right granularity of change.
> My current thought is that a schema object would not be mutable, but that
> one could easily swap in a new schema object for an index at any time.  That
> would allow a single request to see a stable view of the schema, while
> avoiding the need to make every aspect of the schema thread-safe.
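
The swap-don't-mutate idea Yonik describes could be as small as an AtomicReference around an immutable schema object. A sketch only, with made-up names (SchemaHolder, plus a stand-in IndexSchema interface, not Solr's actual internals):

    import java.util.concurrent.atomic.AtomicReference;

    // Stand-in for the real (immutable) schema object.
    interface IndexSchema { }

    // Hypothetical holder: the schema is never mutated; updates swap in a whole new object.
    public class SchemaHolder {
      private final AtomicReference<IndexSchema> current;

      public SchemaHolder(IndexSchema initial) {
        this.current = new AtomicReference<IndexSchema>(initial);
      }

      // A request grabs the schema once and uses that snapshot for its whole lifetime.
      public IndexSchema snapshot() {
        return current.get();
      }

      // An admin action builds a brand-new schema and swaps it in atomically.
      public void swap(IndexSchema newSchema) {
        current.set(newSchema);
      }
    }

A request would call snapshot() once at the start and use that object throughout, so a concurrent swap() never changes what it sees mid-request.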
>
>> Also I would like the
>> configuration classes to just contain data and not have so many
>> methods that operate on the filesystem.
>
> That's the plan... completely separate the serialized and in-memory
> representations.
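
A sketch of that separation, with hypothetical class names: the config object is pure data so it can be serialized and shipped around, and all filesystem/XML knowledge lives in a separate loader.

    import java.io.File;
    import java.io.Serializable;

    // Hypothetical: pure data, no file I/O, so it can be serialized and sent to a server.
    public class CoreConfig implements Serializable {
      public String coreName;
      public String dataDir;
      public int mergeFactor;
      // ... all other settings as plain fields or simple collections
    }

    // Hypothetical: everything that knows about the filesystem/XML lives in a separate loader.
    class CoreConfigLoader {
      public CoreConfig load(File solrconfigXml) {
        CoreConfig cfg = new CoreConfig();
        // parse the XML here and fill in the fields ...
        return cfg;
      }
    }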
>
>> This way the configuration
>> object can be serialized, and loaded by the server dynamically.  It
>> would be great for the schema to work the same way.
>
> Nothing will stop one from using Java serialization for config persistence,
> but I'm a fan of human-readable config files...
> so much easier to debug and support.  Right now, people can cut-and-paste
> relevant parts of their config into email for support, or onto a wiki to
> explain things, etc.
>
> Of course, if you are talking about being able to have custom filters or
> analyzers (new classes that don't even exist on the server yet), then it
> does start to get interesting.  This intersects with deployment in
> general... and I'm not sure what the right answer is.
> What if Lucene or Solr needs an upgrade?  It would be nice if that could
> also be handled automatically in a large cluster... what are the options
> for handling that?  Is there a role here for OSGi to play?
> It sounds like at least some of that is outside of the Solr domain.
>
> An alternative to serializing everything would be to ship a new schema along
> with a new jar file containing the custom components.
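
The loading side of that jar-plus-schema alternative could be little more than a URLClassLoader pointed at the shipped jar. A sketch only, with illustrative names rather than Solr's actual plugin-loading code:

    import java.io.File;
    import java.net.URL;
    import java.net.URLClassLoader;

    // Hypothetical: instantiate a custom analyzer/filter class from a jar that
    // was shipped alongside the new schema.
    public class PluginJarLoader {
      public Object newInstance(File jar, String className) throws Exception {
        URLClassLoader loader = new URLClassLoader(
            new URL[] { jar.toURI().toURL() },
            getClass().getClassLoader());
        Class<?> clazz = Class.forName(className, true, loader);
        return clazz.newInstance();
      }
    }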
>
> -Yonik
>
>
