RAM Drive with models?

On Sat, Sep 7, 2013 at 8:47 PM, Ioan Barbulescu <[email protected]> wrote:

> Hi guys
>
> Thank you for your answers.
>
> Just to clarify the reasons for my question:
>
> - I am at the stage where I write my code, so I have to run my unit tests
> like 50 times a day :)
>    Hence, every second shaved off the start time is invaluable.
>    Yes, I agree with Jörn, the lifecycle of the model is indeed the same
> as that of my application.
>
> - I am already using a decent SSD, so loading time as a factor is
> off the table.
>
> I will keep tinkering with it and let you know how it goes ...
>
> Many thanks.
> Ioan
>
> On Sat, Sep 7, 2013 at 9:21 PM, Aliaksandr Autayeu
> <[email protected]>wrote:
>
> > Ioan, often hard drive speed limits you more than processor speed.
> > That is, it might be faster to load 5M from disk and unpack it than
> > to load an uncompressed 25M from disk.
> >
> > Aliaksandr
> >
> >
> > On Wed, Sep 4, 2013 at 11:59 AM, Jörn Kottmann <[email protected]> wrote:
> >
> > > On 08/26/2013 03:00 PM, Ioan Barbulescu wrote:
> > >
> > >> Hi guys
> > >>
> > >> Short question, please:
> > >>
> > >> Currently, the opennlp models come as zipped files.
> > >>
> > >> Is it possible to use them in an expanded / un-zipped form?
> > >> (and how?)
> > >>
> > >> Zipped is very neat and clean, but it adds some time when reading the
> > >> file.
> > >> I am interested in speeding up as much as possible the load time.
> > >>
> > >
> > > You can probably repackage the zip files without using compression.
> > > Anyway, I doubt that it adds much time; did you profile the loading
> > > code?
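
[A repackaging sketch using only the standard java.util.zip classes — this is a generic zip repacker, not official OpenNLP tooling; STORED entries require the size and CRC to be set up front:]

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Enumeration;
import java.util.zip.CRC32;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class RepackUncompressed {

    // Rewrites a zip so every entry is STORED (no compression),
    // trading file size for decompression time at load.
    public static void repack(File in, File out) throws IOException {
        try (ZipFile zin = new ZipFile(in);
             ZipOutputStream zout = new ZipOutputStream(new FileOutputStream(out))) {
            Enumeration<? extends ZipEntry> entries = zin.entries();
            while (entries.hasMoreElements()) {
                ZipEntry e = entries.nextElement();
                byte[] data = zin.getInputStream(e).readAllBytes();
                ZipEntry stored = new ZipEntry(e.getName());
                stored.setMethod(ZipEntry.STORED);
                // STORED entries must declare size and CRC before putNextEntry.
                stored.setSize(data.length);
                CRC32 crc = new CRC32();
                crc.update(data);
                stored.setCrc(crc.getValue());
                zout.putNextEntry(stored);
                zout.write(data);
                zout.closeEntry();
            }
        }
    }
}
```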
> > >
> > > As far as I know, the slowest part is building the maxent model;
> > > maybe that can be sped up, but I never profiled that part of OpenNLP.
> > >
> > > The life-cycle of a model should be the same as that of your
> > > application; maybe you can just find a way to reuse the models
> > > instead of loading them over and over again.
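
[Reuse can be as simple as a per-JVM cache keyed by model path, so each model is loaded once and shared across tests. The sketch below is generic; `ModelCache` is a hypothetical helper, not part of OpenNLP:]

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Loads each model at most once per JVM and hands back the cached
// instance on every later request, instead of re-reading the file
// in every test's setUp().
public class ModelCache<T> {

    private final Map<String, T> cache = new ConcurrentHashMap<>();
    private final Function<String, T> loader;

    public ModelCache(Function<String, T> loader) {
        this.loader = loader;
    }

    public T get(String path) {
        // computeIfAbsent invokes the loader only on the first request.
        return cache.computeIfAbsent(path, loader);
    }
}
```

For example, assuming your OpenNLP version has the `TokenizerModel(File)` constructor, a shared cache could be built as `new ModelCache<>(p -> { try { return new TokenizerModel(new File(p)); } catch (IOException e) { throw new UncheckedIOException(e); } })` and held in a static field for the whole test run.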
> > >
> > > Jörn
> > >
> > >
> > >
> >
>
