Stas Bekman wrote:
Ideally when such a
situation happens, and you must load all the data into memory, which
is in short supply, your best bet is to rewrite the data storage layer
in XS/C, and use a tie interface to make it transparent to your Perl
code. So you will still use the hash but the
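[A minimal sketch of the tie approach described above; the backing store object and its lookup() method are hypothetical stand-ins for the XS/C layer:]
package TiedStore;
sub TIEHASH { my ($class, $store) = @_; return bless { store => $store }, $class; }
sub FETCH   { my ($self, $key) = @_; return $self->{store}->lookup($key); }
sub EXISTS  { my ($self, $key) = @_; return defined $self->{store}->lookup($key); }

package main;
tie my %data, 'TiedStore', $store;   # $store: hypothetical object with a lookup() method
my $record = $data{1042};            # an ordinary hash access, routed through FETCH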
Hi Stas,
having a look at Apache::Status and playing around with your tips on
http://www.apacheweek.com/features/mod_perl11
I found some interesting results and a compromise solution:
In a module I load a CSV file as class data into different structures
and compared the output of
Ernest Lergon wrote:
having a look at Apache::Status and playing around with your tips on
http://www.apacheweek.com/features/mod_perl11
I found some interesting results and a compromise solution:
Glad to hear that Apache::Status was of help to you. Ideally when such a
situation
Kee Hinckley wrote:
At 17:18 28.04.2002, Ernest Lergon wrote:
Now I'm scared about the memory consumption:
The CSV file has 14,000 records with 18 fields and a size of 2 MB
(approx. 150 bytes per record).
Now a question I would like to ask: do you *need* to read the whole CSV
Perrin Harkins wrote:
$foo->{$i} = [ @record ];
You're creating 14000 arrays, and references to them (refs take up space
too!). That's where the memory is going.
See if you can use a more efficient data structure. For example, it
takes less space to make 4 arrays with 14000 entries
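[A hedged sketch of the trade-off Perrin describes; @lines stands in for the file contents and tab-separated fields are assumed:]
# Row-oriented: one anonymous array plus one reference per record.
my %row;
$row{$_} = [ split /\t/, $lines[$_] ] for 0 .. $#lines;

# Column-oriented: one array per field; each element is a plain scalar.
my @col;
for my $i (0 .. $#lines) {
    my @record = split /\t/, $lines[$i];
    push @{ $col[$_] }, $record[$_] for 0 .. $#record;
}
# Record $i, field $f is then $col[$f][$i].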
Hi,
thank you all for your hints, BUT (with capital letters ;-)
I think it's a question of speed: if I hold my data in a hash in
memory, access should be faster than using any kind of external
database.
What makes me wonder is the extremely bloated size (mod_)perl
uses for its data structures.
Have you tried DBD::AnyData? It's pure Perl, so it might not be as
fast, but you never know.
--
Simon Oliver
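[For reference, a hedged sketch of DBD::AnyData's ad_catalog interface; the table name, file path and query are made up:]
use DBI;
my $dbh = DBI->connect('dbi:AnyData(RaiseError=>1):');
$dbh->func('products', 'CSV', '/path/to/data.csv', 'ad_catalog');
my $row = $dbh->selectrow_arrayref(
    'SELECT * FROM products WHERE id = ?', undef, 1042);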
Ernest Lergon wrote:
So I turned it around:
$col now holds 18 arrays with 14000 entries each and prints the correct
results:
...
and gives:
SIZE RSS SHARE
12364 12M 1044
Wow, 2 MB saved ;-))
That's pretty good, but obviously not what you were after.
I tried using the pre-size
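[Assuming the truncated line refers to pre-sizing the hash, that trick looks like this; it pre-allocates hash buckets but does not shrink the per-entry overhead:]
my %hash;
keys(%hash) = 14_000;   # lvalue keys() pre-extends the bucket array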
Perrin Harkins wrote:
[snip]
Incidentally, that map statement in your script isn't doing
anything that I can see.
It simulates different values for each record - e.g.:
$line = "\t\t1000\t10.99";
@record = split /\t/, $line;
for ( $i = 0; $i < 14000; $i++ )
{
map { $_++ }
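[The point being that map aliases $_ to the array elements, so even in void context the $_++ modifies @record in place:]
my @record = (1, 2, 3);
map { $_++ } @record;   # void context, but the aliasing increments in place
print "@record\n";      # prints: 2 3 4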
Ernest Lergon wrote:
Hi,
thank you all for your hints, BUT (with capital letters ;-)
I think it's a question of speed: if I hold my data in a hash in
memory, access should be faster than using any kind of external
database.
What makes me wonder is the extremely bloated size
Hi,
in a mod_perl package I load a CSV file on apache startup into a simple
hash as read-only class data to be shared by all children.
A loading routine reads the file line by line and uses one numeric field
as hash entry (error checks etc. omitted):
package Data;
my $class_data = {};
ReadFile
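[The preview cuts off at ReadFile; a hedged sketch of how such a loader might continue (tab-separated fields and field 0 as the hash key are assumptions):]
sub ReadFile {
    my ($class, $file) = @_;
    open my $fh, '<', $file or die "open $file: $!";
    while ( my $line = <$fh> ) {
        chomp $line;
        my @record = split /\t/, $line;
        $class_data->{ $record[0] } = [ @record ];   # keyed by the numeric field
    }
    close $fh;
}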
Ernest Lergon wrote:
Hi,
in a mod_perl package I load a CSV file on apache startup into a simple
hash as read-only class data to be shared by all children.
A loading routine reads the file line by line and uses one numeric field
as hash entry (error checks etc. omitted):
package Data;
Jeffrey Baker wrote:
I tried this program in Perl (outside of modperl) and the memory
consumption is only 4.5MB:
#!/usr/bin/perl -w
$foo = {};
for ($i = 0; $i < 14000; $i++) {
    $foo->{sprintf('%020d', $i)} = 'A' x 150;
}
<>;
1;
So I suggest something else might be going on
$foo->{$i} = [ @record ];
You're creating 14000 arrays, and references to them (refs take up space
too!). That's where the memory is going.
See if you can use a more efficient data structure. For example, it
takes less space to make 4 arrays with 14000 entries in each than to
make 14000 arrays
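[One way to see where the memory goes, not mentioned in the thread: Devel::Size's total_size() follows references and counts their overhead too:]
use Devel::Size qw(total_size);
print total_size($foo), " bytes\n";   # includes the 14000 arrayrefs and their contents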
At 17:18 28.04.2002, Ernest Lergon wrote:
Now I'm scared about the memory consumption:
The CSV file has 14,000 records with 18 fields and a size of 2 MB
(approx. 150 bytes per record).
Now a question I would like to ask: do you *need* to read the whole CSV
info into memory? There are ways to
On Sun, 28 Apr 2002, Per Einar Ellefsen wrote:
At 17:18 28.04.2002, Ernest Lergon wrote:
Now I'm scared about the memory consumption:
The CSV file has 14,000 records with 18 fields and a size of 2 MB
(approx. 150 bytes per record).
Now a question I would like to ask: do you *need* to
At 17:18 28.04.2002, Ernest Lergon wrote:
Now I'm scared about the memory consumption:
The CSV file has 14,000 records with 18 fields and a size of 2 MB
(approx. 150 bytes per record).
Now a question I would like to ask: do you *need* to read the whole CSV
info into memory? There
On Sun, Apr 28, 2002 at 05:18:24PM +0200, Ernest Lergon wrote:
Hi,
in a mod_perl package I load a CSV file on apache startup into a simple
hash as read-only class data to be shared by all children.
A loading routine reads the file line by line and uses one numeric field
as hash entry