Because Terracotta treats the Lucene indexes as cluster-wide shared objects,
updates to those indexes are made automatically available across the entire
cluster.

On one machine, a plain Lucene index will give you the in-memory caching and
spill-to-disk behavior you are looking for.  Terracotta will give you that
behavior across a cluster of machines.  I wouldn't compare the performance
of a plain Lucene index on one machine with that of a clustered Lucene
index on one machine.  Use Terracotta if you need the index available on
multiple machines; use plain Lucene if you want high performance on a
single machine.
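
For reference, here is a minimal sketch of the single-machine case (a plain
Lucene index on disk).  The index path and field name are made up, and it
assumes the Lucene 2.x-era API, so adjust for your version; Lucene and the
operating system will keep the hot parts of the index cached in RAM for you:

  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.document.Document;
  import org.apache.lucene.document.Field;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.queryParser.QueryParser;
  import org.apache.lucene.search.Hits;
  import org.apache.lucene.search.IndexSearcher;
  import org.apache.lucene.search.Query;
  import org.apache.lucene.store.Directory;
  import org.apache.lucene.store.FSDirectory;

  public class PlainLuceneExample {
      public static void main(String[] args) throws Exception {
          // On-disk index directory (hypothetical path).
          Directory dir = FSDirectory.getDirectory("/tmp/my-index");

          // Build a tiny index.
          IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true);
          Document doc = new Document();
          doc.add(new Field("body", "hello lucene on disk",
                            Field.Store.YES, Field.Index.TOKENIZED));
          writer.addDocument(doc);
          writer.optimize();
          writer.close();

          // Search it; frequently used index files end up in the OS cache.
          IndexSearcher searcher = new IndexSearcher(dir);
          Query query = new QueryParser("body", new StandardAnalyzer()).parse("lucene");
          Hits hits = searcher.search(query);
          System.out.println("hits: " + hits.length());
          searcher.close();
      }
  }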

That said, if you need help setting up and tuning a Terracotta-clustered
Lucene index for your use case, you can visit the Terracotta forums:

  http://forums.terracotta.org/

Cheers,
--Orion
http://www.terracotta.org/


Haroldo Nascimento wrote:
> 
> Hi,
> 
>   I have a very large volume of data (6,000,000 documents) and I need
> to have very fast search. I am thinking about using Terracotta (with
> Lucene) to cluster the solution.
> 
>   One of the advantages of Terracotta is that part of the index is
> stored in memory and part is persisted on disk. If something is not
> found in memory, the application searches on disk. This is transparent
> to the user. The problem would be in the process of updating the index.
> 
>   Another solution would be to persist the index using only Lucene,
> but I believe that the response time using disk would be much higher
> than with the in-memory solution.
> 
>   Do you know of any document comparing the in-memory solution
> (RAMDirectory) and the on-disk solution using Lucene?
> 
>   Tip: In another application I am using the in-memory index
> (RAMDirectory); to initialize the loading of the index, I serialized
> the RAMDirectory object. For that I needed to insert "implements
> Serializable" into some Lucene classes.
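
As a side note on the tip above: a RAMDirectory can usually be populated
straight from an existing on-disk index, without adding Serializable to
Lucene's classes.  A minimal sketch, assuming the Lucene 2.x
RAMDirectory(Directory) copy constructor and a made-up index path:

  import org.apache.lucene.search.IndexSearcher;
  import org.apache.lucene.store.Directory;
  import org.apache.lucene.store.FSDirectory;
  import org.apache.lucene.store.RAMDirectory;

  public class RamIndexLoader {
      public static IndexSearcher openInMemory(String indexPath) throws Exception {
          // Copy the whole on-disk index into RAM at startup; no serialization needed.
          Directory onDisk = FSDirectory.getDirectory(indexPath);
          Directory inRam = new RAMDirectory(onDisk); // copies every index file into memory
          return new IndexSearcher(inRam);
      }
  }

  // Usage (hypothetical path):
  //   IndexSearcher searcher = RamIndexLoader.openInMemory("/tmp/my-index");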
> 
> 
> 
> On Nov 25, 2007 5:55 PM, Erick Erickson <[EMAIL PROTECTED]> wrote:
>> As I understand it, Lucene does a fair amount of caching of terms in
>> memory without you having to specify anything.
>>
>> But it's hard to see how your question relates. Remember that Lucene
>> finds *all* matching docs. So searching in a RAMDirectory and then
>> searching in the file-based index doesn't really seem possible, since
>> Lucene has to search the entire index every time to score the docs.
>> It doesn't stop after the first hit, since the next hit may score
>> higher.
>>
>> I'm sure Lucene *does* cache portions of the index in RAM when
>> possible, but I've never had occasion to dig into the details.
>>
>> Which leads me to ask "Why do you care?". Is there a specific
>> situation you're trying to get better performance from or is this more
>> of a background question? If you have a specific situation, please
>> describe
>> it in some detail so better minds than mine can give you a better
>> response <G>....
>>
>> Best
>> Erick
>>
>> On Nov 24, 2007 10:26 AM, Haroldo Nascimento <[EMAIL PROTECTED]>
>> wrote:
>>
>>
>> > Hi,
>> >
>> >  I have a question:
>> >
>> > Does Lucene offer a mixed index storage structure, that is, one
>> > that first searches in memory (RAMDirectory) and, if nothing is
>> > found there, automatically searches the index file? For example:
>> > loading part of the index into memory to make searches faster.
>> >
>> >  Thanks
>> >
>> >
>> >
>>
> 
> 
> 
> 


