On Wed, Jun 18, 2003 at 01:02:16PM -0400, Bob Mariotti wrote:
> However, when timing the snippet below, it takes over 12 seconds to 
> complete.  That's 8 seconds (and that's an eternity) to respond.

Have you tried using a non-Perl client?  Something like netcat
(http://www.atstake.com/research/tools/network_utilities/) will have
very little overhead and should show you whether the problem is your
code or network/server issues.
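If netcat isn't to hand, a bare-bones Perl client can give you the same
split between connect time and read time.  This is an untested sketch;
the host, port, and query string are placeholders you'd need to fill in:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;
use Time::HiRes qw(gettimeofday tv_interval);

# Placeholders -- substitute your real host, port, and query.
my ($host, $port, $query) = ('hostname', 1234, 'QUERY');

my $t0   = [gettimeofday];
my $sock = IO::Socket::INET->new(
    PeerAddr => $host,
    PeerPort => $port,
    Proto    => 'tcp',
) or die "connect failed: $!\n";
printf "connect:  %.3fs\n", tv_interval($t0);

print $sock "$query\015";
my $t1 = [gettimeofday];
1 while <$sock>;               # drain the response, do no processing
printf "read all: %.3fs\n", tv_interval($t1);
```

If this is just as slow, the problem is the network or the server, not
your Perl.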

Some things I'd try before trying to optimise your code too hard:

What's $ARGV[0]?  If it's an external program, how fast does this run?
        $ARGV[0] > /dev/null
This presumes you're running on a Unix system; I think the same can be
achieved with this under Windows:
        $ARGV[0] > NUL

If it's a file/fifo/device/whatever, how fast is:
        dd if=$ARGV[0] of=/dev/null
Unix only, dunno how to do this under Windows.

Can you write the returned data to a file, and try processing that
instead to check if the problem is your code or network issues?
        dd if=$ARGV[0] of=somefile

How fast is:
        open(CXIBIO,"+<$ARGV[0]") or die "Could NOT open $ARGV[0]\n";
        print CXIBIO "$ARGV[1]\015";
        1 while <CXIBIO>;
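To see where the time goes inside that loop, Time::HiRes can timestamp
each read.  A sketch only, reusing the CXIBIO/$ARGV handling from your
post:

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

open(CXIBIO, "+<$ARGV[0]") or die "Could NOT open $ARGV[0]\n";
print CXIBIO "$ARGV[1]\015";

# One slow read points at the server or the network; uniformly slow
# reads point at the processing code.
my $t = [gettimeofday];
while (<CXIBIO>) {
    printf STDERR "read took %.4fs\n", tv_interval($t);
    $t = [gettimeofday];
}
```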

You may have some weird-ass buffering issues.  truss -d on Solaris,
strace -r, strace -T or strace -tt on Linux will give you a list of
system calls executed with timestamps, so you could look for read()
taking a while.  You might need to introduce an artificial read() or
other system call before the real read() to get more accurate
differences.  I'm sure the BSDs have similar programs, I don't know
about Windows or any other OS.

While trying to optimise this, write several subroutines implementing
the different possibilities and use Benchmark to compare them.
Something like this should do it I think:

use Benchmark;

timethese ( 1000, {
        "sub1" => \&sub1,
        "sub2" => \&sub2.
        . . . .
        } );

sub sub1 { . . . }

The above is untested code though, so might not be perfect, or even
work.  Make sure to test using the file of data and the network
connection, to rule out network problems as the cause.

Some suggestions based on the code you posted:

> EP0000: while (1) {                                                   
> $REC="";

That line is unnecessary.

> $REC=<CXIBIO>;
> # Bypass ALL Returned Records until "Beginning of Data Sequence"
> if ( $REC =~ m/^\@BOD/) { next EP0000; }

We're really into micro-optimisation here, but this _may_ be faster:
        $REC =~ m/^\@BOD/o and next;
I have a vague memory that entering a new lexical scope is expensive.

Perl's regex engine is pretty optimised, but again this may offer an
improvement:
        '@BOD' eq substr $REC, 0, 4 and next;

How many @BOD records will there be?  If there's a fixed number you
could skip them like so:
        <CXIBIO>; <CXIBIO>; <CXIBIO>; # etc, etc, etc.
You could also move this to a separate loop before the main while (1)
loop, removing a test from the loop.
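For instance, a quick (untested) comparison of the two tests, using
made-up sample records:

```perl
use strict;
use warnings;
use Benchmark qw(timethese);

# Made-up sample records: a couple of @BOD lines plus ordinary data.
my @recs = ('@BOD junk', '0001 some data', '@BOD more junk', '0002 more data');

timethese(100_000, {
    regex  => sub { my $n = 0; $n++ for grep { $_ !~ m/^\@BOD/ } @recs },
    substr => sub { my $n = 0; $n++ for grep { '@BOD' ne substr $_, 0, 4 } @recs },
});
```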

> # Exit if Last Record Signal
> if ( $REC =~ m/^0999/)  { last EP0000; }

You can do similar things here.  Possibly moving this to be the test
controlling the while loop would help, as it would remove another test.
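Something like this (untested) would fold the end-of-data test into
the loop condition itself:

```perl
# Sketch: the 0999 end-of-data record terminates the loop directly,
# so the body no longer needs a separate "last" test.  Assumes CXIBIO
# is already open and the @BOD preamble has been dealt with.
my $RECIN = '';
while (defined(my $REC = <CXIBIO>) and $REC !~ m/^0999/) {
    $RECIN .= $REC;
}
```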

> # Contruct Host Response String
> $RECIN="$RECIN"."$REC";

Maybe push onto an array then join later?  Does anyone know whether,
if you try something like this:
        $string = 'a' x 100000;
        $string =~ s/.*//;
perl will retain the memory in the string, or free it?  Could this save
repeated reallocation?  How about allocating the string then using
substr to replace chunks?  If you're pushing onto an array, you could
pre-extend it with:
        $#array = 100;
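To see whether the join approach actually pays off, Benchmark can
compare it against repeated concatenation; the record size and count
below are made up:

```perl
use strict;
use warnings;
use Benchmark qw(timethese);

# Made-up data: 1000 records of 80 characters each.
my @data = ('x' x 80) x 1000;

timethese(1000, {
    concat => sub {
        my $s = '';
        $s = $s . $_ for @data;    # mimics $RECIN = "$RECIN" . "$REC"
        length $s;
    },
    'push+join' => sub {
        my @a;
        push @a, $_ for @data;
        length join '', @a;
    },
});
```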

I doubt that any of this will get you an improvement of more than a few
%.  I am interested in seeing what the core problem is though, so please
let us know.

> }
> print "$RECIN\n";

If this is all you're doing, why not just print instead of creating one
big string?
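That is, something like this (untested, using the record markers from
your post):

```perl
# Sketch: print each record as it arrives instead of accumulating
# everything in $RECIN first.  Assumes CXIBIO is already open.
while (my $REC = <CXIBIO>) {
    next if $REC =~ m/^\@BOD/;    # skip preamble records
    last if $REC =~ m/^0999/;     # stop at the end-of-data record
    print $REC;
}
```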

-- 
John Tobin
[Parrot] will have reflection, introspection, and Deep Meditative
Capabilities.
                    Dan Sugalski, 2002/07/11, [EMAIL PROTECTED]
_______________________________________________
Boston-pm mailing list
[EMAIL PROTECTED]
http://mail.pm.org/mailman/listinfo/boston-pm
