Hi Robert,
thanks for your reply. In my use case I think it is fine to disable
the index entirely, since full-text search will not be required. I'll
try to get the DataStore up and running; I just don't want to build my
application and then find out that above some overall storage limit
Jackrabbit stops working, or that JCR and/or Jackrabbit is simply the
wrong technology to use as a meta-filestore.
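For the record, my understanding so far is that disabling the index
just means leaving the SearchIndex element out of workspace.xml,
roughly like this (the commented-out element is what I would remove;
the path shown is the usual default):

    <Workspace name="default">
      <!-- FileSystem and PersistenceManager stay as they are; only the
           SearchIndex element below is left out. With no SearchIndex,
           Jackrabbit builds no Lucene index for this workspace and
           queries are simply unavailable.
      <SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
        <param name="path" value="${wsp.home}/index"/>
      </SearchIndex>
      -->
    </Workspace>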
Regards
Christoph
On 11.02.2013 14:28, Seidel. Robert wrote:
Hi,
storing is not the problem, because that is all done by streaming. But you can
run into problems if you want to index such data, because Lucene holds all
tokens for a file in memory (no streaming there).
The default configuration indexes at most 10,000 tokens per property (see
maxFieldLength in http://wiki.apache.org/jackrabbit/Search).
This can be really frustrating when someone searches for the 10,001st token,
and it is also not very transparent for the user.
If you increase this value, you need more memory.
IMHO you have to decide whether to index all tokens (with enough memory) or
nothing at all for this data.
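If you do keep the index, the limit sits in the SearchIndex section of
workspace.xml; a rough sketch (the value here is only an example, the
default is 10000):

    <SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
      <param name="path" value="${wsp.home}/index"/>
      <!-- tokens indexed per property; every extra token costs heap -->
      <param name="maxFieldLength" value="100000"/>
    </SearchIndex>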
Regards, Robert
-----Original Message-----
From: Bertrand Delacretaz [mailto:[email protected]]
Sent: Monday, 11 February 2013 13:59
To: [email protected]
Subject: Re: Is Jackrabbit suitable for storing lots of large files
Hi,
On Mon, Feb 11, 2013 at 1:49 PM, Christoph Läubrich <[email protected]>
wrote:
I read the performance doc here
http://wiki.apache.org/jackrabbit/Performance but did not find an answer:
Is Jackrabbit suitable for storing lots of files (around 100GB) with
each file around 2-200MB?
As usual with performance, you'll need to do your own tests, but that shouldn't
be a problem if you use the DataStore [1] to store the binary content.
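A minimal FileDataStore configuration in repository.xml looks roughly
like this (the path and minRecordLength shown are just common example
values):

    <DataStore class="org.apache.jackrabbit.core.data.FileDataStore">
      <param name="path" value="${rep.home}/repository/datastore"/>
      <!-- binaries below this size in bytes are stored inline instead -->
      <param name="minRecordLength" value="100"/>
    </DataStore>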
-Bertrand
[1] http://wiki.apache.org/jackrabbit/DataStore