Unless I'm entirely wrong, it appears that you want to use shared threaded
memory.
This would let you stay out of Apache altogether.
Here is an example of using threads that I worked out using shared memory.
We took a serial task that ran for 4 hours and turned it into 5 minutes with
threads. This worked well.
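The example itself didn't survive the thread, but a minimal sketch of the pattern (hypothetical data and worker count, not the actual 4-hour job) might look like:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use threads::shared;

# A read-only lookup table, marked :shared so every thread reads
# the same memory instead of getting its own cloned copy.
my %lookup :shared;
$lookup{$_} = $_ * 2 for 1 .. 1000;

my @cases    = (1 .. 1000);
my $nthreads = 4;
my @threads;

for my $t (0 .. $nthreads - 1) {
    push @threads, threads->create(sub {
        my $sum = 0;
        # Each worker handles every $nthreads-th case.
        for (my $i = $t; $i < @cases; $i += $nthreads) {
            $sum += $lookup{ $cases[$i] };  # read-only, so no locking needed
        }
        return $sum;
    });
}

my $total = 0;
$total += $_->join for @threads;
print "total = $total\n";   # total = 1001000
```

The key point is that %lookup is built once and never duplicated; only the small per-worker state lives in each thread.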
Cache::FastMmap is a great module for sharing read/write data, but it can't
compete with the speed of loading it all into memory before forking as Alan
said he plans to do.
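As a sketch of that preload-then-fork approach (hypothetical data; on copy-on-write systems such as Linux, forked children read the parent's pages without duplicating them):

```perl
use strict;
use warnings;

# Load the big hashes ONCE in the parent...
my %big = map { $_ => $_ ** 2 } 1 .. 100_000;

# ...then fork.  The children share the parent's pages
# copy-on-write, so read-only access costs no extra memory.
my @pids;
for my $chunk (0 .. 3) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: read from %big, never write to it.
        my $sum = 0;
        $sum += $big{$_} for grep { $_ % 4 == $chunk } keys %big;
        exit 0;
    }
    push @pids, $pid;
}
waitpid $_, 0 for @pids;
print "all children done\n";
```

Writing to %big in a child would trigger page copies and lose the memory savings, which is why this only works for read-only data.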
- Perrin
On Tue, Feb 3, 2015 at 2:05 AM, Cosimo Streppone cos...@streppone.it
wrote:
Alan Raetz wrote:
So I have a
I agree, either threads or Parallel::ForkManager, depending on your
platform and your perl, will be a lot faster than mod_perl for this. Of
course there might be other reasons to use mod_perl, e.g. it's useful to
have this available as a remote service, or you want to call this
frequently for
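A minimal Parallel::ForkManager sketch of that idea (hypothetical per-case work, and assuming the CPAN module Parallel::ForkManager is installed):

```perl
use strict;
use warnings;
use Parallel::ForkManager;

# Load the lookup data once, before forking, so children
# inherit it copy-on-write.
my %lookup = map { $_ => $_ * 2 } 1 .. 1000;

my $pm = Parallel::ForkManager->new(4);  # at most 4 concurrent workers

# Collect each child's result back in the parent.
my %results;
$pm->run_on_finish(sub {
    my ($pid, $exit, $ident, $signal, $core, $data) = @_;
    $results{ $data->[0] } = $data->[1] if $data;
});

for my $case (1 .. 8) {
    $pm->start and next;              # parent: move on to the next case
    my $result = $lookup{$case};      # stand-in for the real per-case work
    $pm->finish(0, [ $case, $result ]);  # child exits, shipping its result back
}
$pm->wait_all_children;
print scalar(keys %results), " cases done\n";
```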
You will find that when you share the memory, the hashes are not copied to
each thread.
The docs are a little misleading.
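A small demonstration of that point, with hypothetical keys: a :shared hash is seen by every thread, while an ordinary hash is cloned per thread, so a thread's writes to the clone never reach the parent.

```perl
use strict;
use warnings;
use threads;
use threads::shared;

my %shared :shared;            # one copy, visible to all threads
$shared{count} = 0;
my %private = (count => 0);    # cloned into each new thread

threads->create(sub {
    $shared{count}  = 42;   # updates the single shared copy
    $private{count} = 42;   # updates only this thread's clone
})->join;

print "shared:  $shared{count}\n";   # shared:  42
print "private: $private{count}\n";  # private: 0
```

Only the data you mark :shared pays the cloning exemption; everything else still follows the copy-on-thread-creation rule from perlthrtut.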
On Feb 3, 2015, at 11:54 AM, Alan Raetz alanra...@gmail.com wrote:
Thanks for all the input. I was considering threads, but according to how I
read the perlthrtut (
http://search.cpan.org/~rjbs/perl-5.18.4/pod/perlthrtut.pod#Threads_And_Data),
quote: When a new Perl thread is created, all the data associated with the
current thread is copied to the new thread,
Alan/Alexandr,
There will always be an overhead with using a webserver to do this -
even using mod_perl.
Assumptions:
* From what you are saying, there is no actual website
involved, but you want to use mod_perl to cache data for an offline process;
* One set of data is
So I have a perl application that upon startup loads about ten perl hashes
(some of them complex) from files. This takes up a few GB of memory and
about 5 minutes. It then iterates through some cases and reads from (never
writes) these perl hashes. To process all our cases, it takes about 3 hours.
Pre-loading is good, but what you need, I believe, is the Storable module. If
your files contain parsed data (hashes), just store them serialized. If
they contain raw data that needs to be parsed, you can pre-parse it,
serialize it, and store it as binary files.
Storable is written in C and works very fast.
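A minimal Storable sketch of that pre-parse-and-freeze workflow (hypothetical data and filename):

```perl
use strict;
use warnings;
use Storable qw(store retrieve);

# One-time step: build the hash from the raw file, then freeze it
# to disk in Storable's fast binary format.
my %parsed = (apple => 1, banana => 2, cherry => 3);
store \%parsed, 'preparsed.stor';

# At startup, retrieving the frozen hash skips re-parsing entirely.
my $href = retrieve 'preparsed.stor';
print "cherry => $href->{cherry}\n";   # cherry => 3

unlink 'preparsed.stor';
```

For multi-GB hashes the retrieve is still a bulk read, but it replaces 5 minutes of parsing with a single deserialization pass.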
Alan Raetz wrote:
So I have a perl application that upon startup loads about ten perl
hashes (some of them complex) from files. This takes up a few GB of
memory and about 5 minutes. It then iterates through some cases and
reads from (never writes) these perl hashes. To process all our cases,
it