I don't know what's causing this problem: I wrote a simple test and would 
appreciate it if someone could provide feedback.

# creating aux1.exe, which prints: hello!
pp -o aux1.exe -e "print q(hello!)"

# timing 15 runs of the packed executable
perl -MBenchmark -e "$start=Benchmark->new;print qx(aux1.exe) for 1..15;$stop=Benchmark->new;print(timestr(timediff($stop,$start)))"

Result: 11 wallclock secs (0.05 usr + 0.03 sys = 0.08 CPU)

# trying to achieve the same result by invoking perl directly
perl -MBenchmark -e "$start=Benchmark->new;print qx(perl -e \"print q(hello!)\") for 1..15;$stop=Benchmark->new;print(timestr(timediff($stop,$start)))"

Result: 1 wallclock secs (0.02 usr + 0.02 sys = 0.03 CPU)
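
For reference, the same comparison can be written as a single Benchmark 
script instead of two one-liners (a sketch; it assumes the aux1.exe built 
above is on the PATH):

use Benchmark qw(timethese);

# 15 launches of the pp-built executable vs. 15 plain perl launches;
# timethese() prints the timing for each label.
timethese(15, {
    packed => sub { qx(aux1.exe) },
    plain  => sub { qx(perl -e "print q(hello!)") },
});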

Can someone run this and tell me whether you also see a ~10 second 
difference? I expect it would be smaller on a faster machine (mine is 
quite old).
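
If the gap comes from the archive being unpacked to a temp directory on 
every start (see Steffen's reply below), the extraction cache should be 
visible there. A quick check; the par-* prefix is an assumption based on 
PAR::Packer's default cache naming:

use File::Spec;

# Look for PAR extraction caches under the temp directory. The "par-"
# prefix is PAR::Packer's default cache naming (an assumption here);
# the PAR_GLOBAL_TMPDIR environment variable can relocate the cache.
my $tmp = File::Spec->tmpdir;
opendir my $dh, $tmp or die "can't read $tmp: $!";
print File::Spec->catfile($tmp, $_), "\n" for grep { /^par-/ } readdir $dh;
closedir $dh;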

Regards.

----------------------------------
On Sat, 12 Jun 2010 16:48:52 +0200
smuel...@cpan.org (Steffen Mueller) wrote:

> Xaero wrote:
> > Good that it uses zip, but it doesn't look like there is a way to make
> > it read from the par archive without creating an intermediate tempfile.
> > Whether I use perl -MPAR or parl, it always unpacks to the TEMP
> > folder. Is there any way to prevent decompression to TEMP?
> 
> No.
> 
> perl -MPAR=foo.par ...
> 
> will only uncompress Perl code on demand. Shared libraries are (IIRC) 
> always extracted. That's as good as it'll get, because you cannot 
> portably load shared libraries from memory.
> 
> There is a branch that loads pure-Perl modules from memory, but that 
> cannot work in the general case. Sometimes, it even violates user 
> expectations.
> 
> Cheers,
> Steffen
