Re: dbm-file versions

2004-09-09 Thread Chad Kellerman
Johann,
   I have had the same problem.  Check: I bet the versions of Berkeley 
DB are different.
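
A quick way to confirm that is to print the Berkeley DB version DB_File was built against on each box.  Something like this (untested) should do it; if the major versions differ, the usual route is to dump the file with the old library's db_dump and reload it with the new one's db_load:

#!/usr/bin/perl -w
# print the DB_File module version and the Berkeley DB library it was built with
use strict;
use DB_File;
print "DB_File $DB_File::VERSION, Berkeley DB $DB_File::db_version\n";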


Chad
Johann Spies wrote:
We have a situation that we need to open a dbm-file but cannot do so
using perl version 5.8.4-2 on Debian Sarge.  The following script
fails, but the same script and dbm-file works on Woody with perl 5.6:
#!/usr/bin/perl -w
use Fcntl;
use DB_File;
#use NDBM_File;
use DBI;
$dbfile = "/tmp/globaldb.db";
tie(%hash,"DB_File",$dbfile,O_RDONLY) || die "Cannot open $dbfile [$!]";
untie %hash;
Is there a way to read that file with perl 5.8?
Regards
Johann

 



Use of uninitialized value in pattern match (m//) at test.pl line 85.

2003-02-01 Thread chad kellerman
Hi guys,

   How can I get around this "warning"?

Use of uninitialized value in pattern match (m//) at test.pl line 85.

For some reason I always get driveNum=2.

I am using warnings and strict in my code.



if ($pong->ping($mslRef{$server}{ip})) {

#check for 2 drives
my $cmd = "grep /home2 /etc/fstab|awk \'{print \$2}\'";

my ($out, $err, $exit);
eval {
 my $ssh =
Net::SSH::Perl->new($mslRef{$server}{ip},identity_files=>["/root/.ssh/identity"]);
$ssh->login("$user");
 ($out, $err, $exit) = $ssh->cmd($cmd); 
};
if ($@) {
LogIt("Error from eval in Net::SSH::Perl\->
$@:$mslRef{$server}{hostname}:$mslRef{$server}{ip}");
}elsif ($err) {
LogIt("Error in command sent thru Net::SSH::Perl\->
$err:$mslRef{$server}{hostname}:$mslRef{$server}{ip}");
}
>>>>Line 86 below<<<<
if (defined $out =~ /\/home2/ ) {
my $driveNum = "2";
#MysqlIt($server,$driveNum,%mslRef);
TestIt($server,$driveNum,%mslRef);
}else{
my $driveNum = "1";
#MysqlIt($server,$driveNum,%mslRef);
TestIt($server,$driveNum,%mslRef);
}



If /home2 is not returned from the ssh command I get the warning.
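
I am guessing the "defined" is binding to the result of the match rather than to $out (=~ binds tighter than a named unary like defined), so the match still sees an undefined $out, and defined() of the match result is always true, which would also explain why driveNum is always 2.  If that's right, something like this (untested) is probably what I meant:

if ( defined $out && $out =~ m{/home2} ) {
    my $driveNum = "2";
    TestIt($server,$driveNum,%mslRef);
}else{
    my $driveNum = "1";
    TestIt($server,$driveNum,%mslRef);
}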

thanks in advance for the help.

chad
-- 
chad kellerman <[EMAIL PROTECTED]>





next if........ a cleaner way?

2003-03-20 Thread chad kellerman
Hello everyone,

 I want to clean this bit of code up.  It looks really messy.  I am in a 
mental block.  Any suggestions?


 @userInfo = split /\s+/, $buffer;

#insert home users & quota in the db
foreach $userInfo ( @userInfo )  {

( $name, $quota ) = split /\|/, $userInfo;
# get rid of the header info from repquota
   
next if ( ( $name =~ /Block/ ) or ( $name =~ /nobody/ ) or ( $name =~ 
/User/ ) or ( $name =~ /^\d/ ) or ( $name =~ /www/ )  or ( $name =~ /backup/ 
)  or ( $name =~ /ftp/ )  or ( $name =~ /httpd/ ) or ( $name =~ /root/ ) or ( 
$name =~ /netop/ ) or ( $name =~ /sysop/ ) or ( $name =~ /users/ ) or ( 
$quota !~ /\d+/ ) or ( $name =~ /^#/ ) or ( $quota <= 8 ) or ($name =~ 
/bill/) );


Is there an easier way to loop through a bunch of regexes?
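
The closest I have come up with (untried) is to keep the names in one list and build a single alternation, so the next if only has a few readable clauses:

my @skip    = qw( Block nobody User www backup ftp httpd root netop sysop users bill );
my $skip_re = join '|', @skip;

next if $name =~ /^#/          # commented lines
     or $name =~ /^\d/         # numeric ids
     or $name =~ /$skip_re/o   # any of the names above
     or $quota !~ /\d+/        # no numeric quota
     or $quota <= 8;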

Thanks for the help.

Chad







counting fork process, again

2003-03-24 Thread chad kellerman

Hello everyone,
   I was wondering if someone can help me out with an issue with forking.  I 
am trying to fork 8 processes at a time.

   Here is what I have:



#!/usr/bin/perl
use strict;
use warnings;
use lib ".";
use BACKUP;   #my own module
use POSIX ":sys_wait_h";

my( $MAX_CHILDREN ) = "8";
my( $CHILD_PIDS ) = "0";
my( $maxtries ) = "7";
my( $failure ) = "0";

# there are actually 100 hostIds. but you should get the point..
my( @hostIds ) = "10.10.10.1, 10.10.10.2, 10.10.10.3 ,10.10.10.4";

$SIG{CHLD} = \&CHILD_COUNT($CHILD_PIDS);

FORK:
{

HOSTID: foreach my $hostId ( @hostIds ) {

redo HOSTID if $CHILD_PIDS >= $MAX_CHILDREN;

if( my $pid = fork ) {
$CHILD_PIDS++; #Add the children up until we hit the max
next;
}elsif (defined $pid) {
#  In here I do some stuff with each $hostID.
# To make the code easier to read, I made a module that
# has a bunch of subroutines in it.
#There are basically 2 subroutines that I call for each
# hostID.  1 grabs the quota for each user on the hostId,
#The other tars and copies the user where the script
# is.  I eval my connection and if some fails I
# go on to the next. ex.
 until ( (BACKUP->QuotaIt( $hostId ) or ( $failures == 
$maxtries ) ) ) {
  $failures++;
  if ( $failures == $maxtries ) {
 my( $subject ) = "Hey, WTF is up with $hosId";
 my( $message ) = "$0 failed to connect to $hostID.";
 BACKUP->MailIt( $subject, $message, $daily );
 #go to the next hostid
 next HOSTID2;
 } #if statememt
 } #until statement
  
   }elsif($! =~ /No more process/){
sleep 15;
redo; #do over.
}else{
# this is just a mail routine that mails be that I
#can't fork
my( $subject ) = "Failed to fork any children";
my( $message ) = "$0 failed to fork anymore children.
BACKUP->MailIt( $subject, $message, $daily );  
die;
}

} # foreach loop ends

} # this is the FORK


sub CHILD_COUNT {
my $child_pids = @_;
my $child = waitpid(-1,WNOHANG); 
while ($child != -1 && ($child_pids > 0 )) {
$child_pids--;
$child = waitpid(-1,WNOHANG);
}
}



   Just typing this I realized that if I can't fork then I probably won't be 
able to mail myself a notification.  So I gotta change that else statement 
with the mail notification.

   Anyways, the issues I am having are twofold.  The first is that I get this 
warning:
   Not a subroutine reference at ./script.pl line 331, which is:
 " redo HOSTID2 if $CHILD_PIDS >= $MAX_CHILDREN;"

  The second is a bigger issue.  I also fork in the "home made" Perl module 
for each user of the hostId I am doing.
Nothing crazy, just:

#--#
foreach $user (@users) {
my( $pid ) = fork ();
die "Cannot fork: $!" unless defined( $pid );
if ( $pid == 0 ) {
#do tarring of user  
exit 0;
}
waitpid($pid,0);
}
#---#

Here I get the error:
Not a subroutine reference at BACKUP.pm line 195. 
which is: " waitpid($pid,0);"

  I know this is very confusing, and I might not even be posting to the right 
list.  But I am so frustrated with trying to get this thing to work.  It 
seems as if I have searched everywhere for examples of limiting the number of 
forked processes and then being able to fork within a fork.
I originally was using Parallel::ForkManager, but I found that if I set 
max_processes to 8 it will start eight but will not continue until all 
eight were done, and then only do one at a time.

  That's when I decided to go the POSIX route and use fork, but I just can't 
get it working.  I think setting the $SIG{CHLD} is messing things up, but I 
am not sure.
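
My best guess is the line

$SIG{CHLD} = \&CHILD_COUNT($CHILD_PIDS);

which actually calls CHILD_COUNT right away and installs a reference to whatever it returns, not the sub itself -- which would explain the "Not a subroutine reference" as soon as a child exits.  The shape I think I am after is roughly this (untested):

use POSIX ":sys_wait_h";

my $MAX_CHILDREN = 8;
my $CHILD_PIDS   = 0;

# install a code ref; the handler reaps children and keeps the count current
$SIG{CHLD} = sub {
    while ( ( my $kid = waitpid(-1, WNOHANG) ) > 0 ) {
        $CHILD_PIDS-- if $CHILD_PIDS > 0;
    }
};

foreach my $hostId ( @hostIds ) {
    sleep 1 while $CHILD_PIDS >= $MAX_CHILDREN;   # wait for a free slot
    my $pid = fork;
    die "Cannot fork: $!" unless defined $pid;
    if ($pid) {
        $CHILD_PIDS++;        # parent: count the child and move on
    } else {
        # child: do the per-host work (quota, tar, etc.), then
        exit 0;
    }
}
sleep 1 while $CHILD_PIDS > 0;    # let the last children finish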

  Sorry for being so drawn out.  Please feel free to tear me/my code up.  I am 
new and would really like to know how to do this.  Don't worry I can take 
criticism pretty well... lol

Thanks in advance,

Chad
 


   




Re: counting fork process, again

2003-03-24 Thread chad kellerman
Jenda,
  Actually, that's what I started with.  But here is what I found: if I set 
the $max_processes to 8 in Parallel::ForkManager, it would indeed spawn off 
8 "children".  But as the children finished ("died"), new ones did not take 
their place.  Not until all 8 children were finished did a new child become 
present, and then only 1 child at a time, not 8.  That's why I decided 
to go the POSIX route.
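
What I expected was the behaviour of the usual start/finish loop from the docs, where a new child is started as soon as a slot frees up -- roughly this (untested, and trimmed down):

use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(8);
foreach my $hostId ( @hostIds ) {
    $pm->start and next;    # parent gets the child pid and moves on
    # child: back up $hostId here
    $pm->finish;            # child exits, freeing a slot
}
$pm->wait_all_children;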

   I did not use Win32::ProcFarm because I am only working on a *nix network.

Thanks,
--chad  


On Monday 24 March 2003 02:19 pm, Jenda Krynicky wrote:
> From: chad kellerman <[EMAIL PROTECTED]>
>
> > Helloe everyone,
> >I was wondering is someone can help me out with an issue with
> >forking.  I
> > am trying to fork 8 process at a time.
>
> I did not read the previous thread. Did you consider
> Parallel::ForkManager or Win32::ProcFarm? (Both on CPAN)
>
> Jenda
> = [EMAIL PROTECTED] === http://Jenda.Krynicky.cz =
> When it comes to wine, women and song, wizards are allowed
> to get drunk and croon as much as they like.
>   -- Terry Pratchett in Sourcery





Re: counting fork process, again

2003-03-25 Thread chad kellerman
Hello everyone,

 I just wanted to say thanks for all the help.  So far it is working quite 
nicely.  Jenda, I fixed my $SIG{CHLD} and poof.  All is well in my world.


Thanks again for all the great suggestions.

Chad



On Monday 24 March 2003 02:48 pm, Jenda Krynicky wrote:
> From: chad kellerman <[EMAIL PROTECTED]>
>
> > Jenda,
> >   Actually, that's what I started with.  But here is what I found.  If
> >   I set
> > the $max_processes to 8 in Parallel::ForkManager.  IT would indeed
> > spawn off 8 "children".  But as the children finished "died" new ones
> > did not take their place.  Not until all 8 children were finished, did
> > a new child become present.  And that was only 1 child at a time. Not
> > 8.  THat 's why I decided to go  the POSIX route.
>
> I see. That looks like a bug to me. Did you contact the
> Parallel::ForkManager's author about this?
>
> >I di not use Win32::ProcFarm because I am only working on a *nix
> >network.
>
> OK, I did not know.
> I'm only working under Windows. So I guess I would not be of much
> help for you. What would work for me probably would not for you.
>
> Jenda
> = [EMAIL PROTECTED] === http://Jenda.Krynicky.cz =
> When it comes to wine, women and song, wizards are allowed
> to get drunk and croon as much as they like.
>   -- Terry Pratchett in Sourcery

-- 
chad kellerman  
Jr. Systems Administrator
Alabanza Inc
10 East Baltimore Street
Suite 1500
Baltimore, Md 21202
1-800-361-2662 Ext 3305
410-234-3305 direct




LABELS, forks and foreach loops.....

2003-03-27 Thread chad kellerman

Hello,

   Got a little issue.  Trying to understand how things work but just not 
getting it.  I am forking each element of the array.  I have a maximum child 
count of 8, which works.  What I was wondering is: if I get into the if 
statement in the until loop, how do I exit, clean up the pid, and go on to the 
next element in the array without leaving any pids around?


FORK:
{

HOSTID2: foreach $hostId ( @hostIds ) {

#next if $CHILD_PIDS > $MAX_CHILDREN;
redo HOSTID2 if $CHILD_PIDS >= $MAX_CHILDREN;

if( my $pid = fork ) {
$CHILD_PIDS++; #Add the children up until we hit the max
next;
}elsif (defined $pid) {
my $failures = 0;

#grab quota for each user and insert into the database
until ( (BACKUP->QuotaIt( $hostId, $mysqluser, $mysqlpasswd ) or 
( $failures == $maxtries ) ) ) {
$failures++;
if ( $failures == $maxtries ) {  
#BLA BLA code

#clean up the pid and exit
exit 0;
waitpid(-1, &WNOHANG);
   
#go to the next hostid
next HOSTID2;

} #if statememt
 
} #until statement

#rest of code.

Thanks for the help,

Chad




Re: LABELS, forks and foreach loops.....

2003-03-27 Thread chad kellerman
Rob,

   Thanks for the help.  I did what you suggested and cleaned up the code.  
But I am getting some strange errors:

Use of uninitialized value in numeric gt (>) at ./script.pl line #.

foreach $hostId ( @hostIds ) {

my $pid = undef;

$pid = fork if $CHILD_PIDS < $MAX_CHILDREN;

if( $pid > 0 ) {

$CHILD_PIDS++; #Add the children up until we hit the max

}elsif (defined $pid) {

   #the rest is just like you suggested.



the message is for the line that says:

if( $pid > 0 ) {

Not too sure what to make of it??
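
My best guess is that when $CHILD_PIDS >= $MAX_CHILDREN the fork never runs, so $pid is still undef when it hits the > 0 test.  Maybe the test wants to be something like this (untested):

if ( defined $pid && $pid > 0 ) {

    $CHILD_PIDS++;    # parent

} elsif ( defined $pid ) {

    # child, as before

} else {

    # fork not attempted (or failed): reap one child and free a slot
    waitpid -1, WNOHANG;
    $CHILD_PIDS--;
}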

Thanks for the help 

Chad


On Thursday 27 March 2003 11:59 am, Rob Dixon wrote:
> Hi Chad.
>
> Chad Kellerman wrote:
> > Hello,
> >
> >Got a little issue.  Tyring ot under stand how things work but just
> > not getting it.  I am forking each element of the array .  I have a
> > maximum child count of 8.  Which works.  What I was wondering is if I get
> > into the if statement in the until loop.  How do I exit the pid clean up
> > and go on to the next element in the array with out leaving any pids
> > around?
> >
> > FORK:
> > {
> >
> > HOSTID2: foreach $hostId ( @hostIds ) {
> >
> > #next if $CHILD_PIDS > $MAX_CHILDREN;
> > redo HOSTID2 if $CHILD_PIDS >= $MAX_CHILDREN;
> >
> >   if( my $pid = fork ) {
> > $CHILD_PIDS++; #Add the children up until we hit the max
> > next;
> > }elsif (defined $pid) {
> > my $failures = 0;
> >
> > #grab quota for each user and insert into the database
> > until ( (BACKUP->QuotaIt( $hostId, $mysqluser, $mysqlpasswd )
> > or ( $failures == $maxtries ) ) ) {
> > $failures++;
> > if ( $failures == $maxtries ) {
> > #BLA BLA code
> >
> > #clean up the pid and exit
> > exit 0;
> > waitpid(-1, &WNOHANG);
> >
> > #go to the next hostid
> > next HOSTID2;
> >
> > } #if statememt
> >
> > } #until statement
> >
> > #rest of code.
> >
> > Thanks for the help,
>
> You'd do yourself a favour if you laid out your code better with
> blocks indented neatly. It's very hard to see the loop structure
> as it is.
>
> You seem to be getting confused over which process your code is
> executing in. You're doing a waitpid from your child processes,
> for instance, and looping on the host ID in all the processes.
>
> Take a look at the following. It almost certainly won't work as it
> is, because I'm not sure of the sense of some of your tests, but it's
> written without using loop labels, which are often a sign of something
> going wrong. It's rarely necessary to label your loops unless you're
> doing something quite tricky.
>
> Let us know if this helps.
>
> Rob
>
>
> my $child_pids = 0;
>
> foreach $hostId ( @hostIds ) {
>
> my $pid = undef;
>
> $pid = fork if $child_pids < $MAX_CHILDREN;
>
> if ($pid > 0) {
>
> $child_pids++;
>
> } elsif ( defined $pid ) {
>
> # $pid == 0 so in the child process
>
> my $failures = 0;
>
> while ( $failures < $maxtries ) {
> my $result = BACKUP->QuotaIt ( $hostId, $mysqluser,
> $mysqlpasswd ); last if $result;
> $failures++;
> }
>
> #BLA BLA code
> #clean up the pid and exit
>
> exit 0;
>
> } else {
>
> # $pid is undefined so fork failed - wait for a spare one
>
> waitpid -1, WNOHANG;
> $child_pids--;
> }
> }

-- 
chad kellerman  
Jr. Systems Administrator
Alabanza Inc
10 East Baltimore Street
Suite 1500
Baltimore, Md 21202
1-800-361-2662 Ext 3305
410-234-3305 direct




Fork to run a sub -process

2002-07-12 Thread chad kellerman

Hi everyone,

I am stuck.  I have a Perl script that I wrote.  It runs on a Solaris 8 
box and goes out to Linux boxes and tars up user data and MySQL data and 
stores it on particular drives of the Sun box.

   Right now the script only goes out and tars up one server at a time.  I was 
thinking of putting that process in a subroutine and trying to go out and 
"back up" two servers (or three) at a time.

  I am thinking I should try and fork child processes to do each server, the 
child being the subroutine.

What do you think?  Would this be the best way to go about this?  Where is 
the best resource for examples on forking?  I am going through Google Groups, 
but most of them entail system calls or networking, not a subroutine.
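
The shape I have in mind is something like this (untested, with made-up names) -- fork once per server and run the backup subroutine in the child:

my @servers = qw( server1 server2 server3 );   # placeholder list
my @kids;
foreach my $server (@servers) {
    my $pid = fork;
    die "Cannot fork: $!" unless defined $pid;
    if ($pid == 0) {
        backup_server($server);   # the routine that tars up one box
        exit 0;
    }
    push @kids, $pid;
}
waitpid($_, 0) for @kids;          # wait for all the children to finish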

thanks for the help...
--chad






perl modules?

2002-07-15 Thread chad kellerman

Hello,

   Boy, this stuff can get frustrating...

  I am trying to include my own Perl module that is outside the regular @INC 
array, but my Perl script still cannot find it.  Any suggestions would be 
appreciated...

Here is what I have...


script.pm

package Script;
BEGIN {
   use Exporter;
   @ISA = qw( Exporter );
   @EXPORT = qw($serverList);
}

# declaring variable for main server list
my $serverList = "/home/user/somefile";
return 1;
END { }


script.pl

use strict;
$|++;

push (@INC, '.');

use FileHandle;
use Script;

use vars qw(
   $msl %msl
);

print $serverList;



  But it fails with the typical Can't locate Script.pm in @INC (@INC contains: 
etc.

I am on linux running this from /home/server/scripts/

Can anyone see what I am doing wrong from this?
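
One thing I am wondering about: use() happens at compile time, so pushing onto @INC at run time is probably too late, and the file is script.pm while the package is Script (case matters on Linux).  Maybe it needs to look more like this (untested):

Script.pm

package Script;
use strict;
use Exporter;
our @ISA    = qw( Exporter );
our @EXPORT = qw( $serverList );

# 'our', not 'my' -- a lexical cannot be exported
our $serverList = "/home/user/somefile";
1;

script.pl

#!/usr/bin/perl
use strict;
use lib '.';      # compile-time addition to @INC
use Script;

print $serverList;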

Thanks for the help.

chad





Re: Remove 4 last letters

2002-07-23 Thread Chad Kellerman

Dave,

   I did it this way


   $variable = substr($orig_variable,0,length($orig_variable)-2);

   I actually only removed the last 2 characters, but I don't see why you
could not put a 4 and remove the last four.

   I am sort of new at Perl but I did get the above to work.  I needed
to drop 2 numbers off of a directory name for a script I was writing.

and I used it like this:

$new_variable = substr($directory,0,length($directory)-2);

I hope this helps.
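
A regex should work too; I think the original just needs a substitution rather than a plain match.  Something like one of these (untested):

$variable =~ s/.{4}$//;                          # chop the last 4 characters

$new_variable = substr($orig_variable, 0, -4);   # substr with a negative length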

chad



On Tue, 23 Jul 2002 14:00:37 +0200
"David Samuelsson (PAC)" <[EMAIL PROTECTED]> wrote:

> This should be really simple, just use a regexp to remove the 4 last
> letters from a variable.
> 
> $variable =~ /\S\S\S\S$/;
> print "$variable";
> 
> but this doesnt remove the 4 last letters when i run it, i think i am
> just getting tired here and cant see why, what is the right way to do
> it? :)
> 
> //Dave
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 





Net::SSH::Perl

2002-07-29 Thread Chad Kellerman

Hello everyone,

   I am not too sure if this is the place to send this question or not,
but here it goes anyway:


I have a script that uses the Net::SSH::Perl modules for connecting
to a server.  I connect and grab all the databases and put them into an
array.  Latter in the script I run a foreach loop on the array and tar
the databases and zip them on disk. 

  But sometimes I get an error with debugging turned on:
Can't connect to "whatever ip address", port 22: Bad file number at
/usr/local/lib/perl5/site_perl/5.6.1/Net/SSH/Perl.pm line 206.

  I think what is happening is that ssh is restarting on the remote
server and is killing my script.

   I was wondering if anyone has come across this error.  If you did,
how did you fix it?

   I think I am going to have to use some sort of if statement to keep
trying for a connection??

any ideas??
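
What I am picturing (untested, variable names made up) is wrapping the connect in an eval and retrying a few times before giving up:

my $ssh;
for my $try (1 .. 5) {
    $ssh = eval { Net::SSH::Perl->new($host_ip, port => 22, debug => $dbg) };
    last if $ssh;
    warn "connect attempt $try failed: $@";
    sleep 30;     # give sshd on the remote box time to come back
}
die "could not connect to $host_ip" unless $ssh;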

Thanks for the help..
chad





newbie question about Sys::Hostname and modules in general

2002-08-02 Thread Chad Kellerman

Hello,

   Ok, I made a newbie mistake and I hope I don't have to go back and
change my code.

  I wrote a script that talks to a MySQL database.  I created a table
called hostname.

I created a subroutine to send email messages when there are any errors, as
well as log them to the db.

  Problem is I want to include the hostname of the server in the subject
of the email.

** I always have issues with scope.  Can I use Sys::Hostname just in
the subroutine, or any Perl module for that matter?  Or do I have to use
it globally?

$host = hostname; finds that daggone hostname, and I have hostname all over
the place... my bad.  :^(.
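
In case it helps to see what I mean, this is what I am hoping is legal (untested; mail_error is just a made-up name):

sub mail_error {
    use Sys::Hostname;    # a 'use' runs at compile time, so this works even
                          # inside the sub; hostname() ends up available
                          # package-wide
    my ($error) = @_;
    my $subject = "Error on " . hostname() . ": $error";
    # ... build and send the mail, log it to the db ...
}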

Thanks for the help.

chad





Warning message I can't figure out

2002-08-17 Thread chad kellerman

Hey guys,

   I was working on a disk usage script, mainly using stat like Dan had
in an earlier posting.  But when I saw Janek's post I thought, "That's
perfect, just what I was doing but about 8 lines less."  I incorporated
it into my script, but I can't figure out the warning message.  I am kinda
new, but thanks for the help...


<--snip-->

use strict;
use diagnostics;
$|++;

use File::Find;
use vars qw (@users $user $BLOCK_SIZE $TOTAL_SIZE $MB_SIZE);

@ARGV = ('/home') unless @ARGV; 

$TOTAL_SIZE = 0;
$MB_SIZE= 0;

opendir(DIRECTORY, "@ARGV") || die $!;
@users = readdir(DIRECTORY);
closedir(DIRECTORY);

foreach $user (@users) {
next if ($user eq ".") || ($user eq "..") ;
find(sub {$TOTAL_SIZE += -s if -f}, "$user");
print "$user  $TOTAL_SIZE\n";
undef $TOTAL_SIZE;
}

<--/snip-->

  I had to undef $TOTAL_SIZE because it kept adding the users together
and I could not figure out a way to get the du individually.

  The error message I am getting says something like this:

Use of uninitialized value in concatenation (.) or string at
/home/ckell/scripts/du.pl line 28 (#1)

but if the undef is taken out the warning message is gone, but I don't get
the right results.
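
I suspect the cleaner fix is to reset the counter to 0 (or use a fresh lexical) for each user instead of undef-ing it, something like this (untested):

my ($top) = @ARGV;                      # e.g. /home
foreach my $user (@users) {
    next if $user eq '.' or $user eq '..';
    my $total = 0;                      # fresh, defined, for each user
    find( sub { $total += -s if -f }, "$top/$user" );
    print "$user  $total\n";
}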

Thanks again,

chad







newbie question

2002-08-19 Thread Chad Kellerman

  Hello,

   I have only been writing perl for a few months, so forgive me if this
sounds stupid.

what is the difference between:

$| = 1;
and
$|++;

   Or can you point me in the right direction on where I can read
about it?

Thanks,
Chad





Forking question.

2002-08-26 Thread Chad Kellerman

Good morning, afternoon, night,

I have been trying to work on a script that does forking.  But
the script dies in the fork.  Here is what I have:

I push some information about a server into an array.


use POSIX "sys_wait_h";
my $child_limit = 1;
my $child_pids = 0;
$SIG{CHLD} = \&CHILD_COUNT($child_limit);


 push @server_list, $href;

  FORK:
 {
while ( $#server_list > -1 ) {
 next if $child_pids > $child_limit;
 my $server_todo = pop @server_list;
 if (my $pid = fork) {
 # sleep 1;  # failing due to bad
subroutine? 
 #do the parent
 $child_pids++;
 next;
 }
 elsif (defined $pid) {
 # ok now the child.
 do_stuff($server_todo); #subroutine
 exit;
 }
 elsif ($! =~ /No more process/) {
 print " No more process ...sleeping\n";
 sleep 5;
 redo FORK;
 }
 else {
 die " Can't fork: $! \n";
 }
}   
 }

sub CHILD_COUNT {
my $child_limit = @_;
my $child;
$child = waitpid(-1, WNOHANG);
while ( $child > 0 && ( $child_pids > 0)) {
$child_pids-- if ( $child_pids > 0);
$child = waitpid(-1,WNOHANG);
}
}






  I get the error:
Not a subroutine reference at serverbackup.pl line 65.

Line 65 is the next if statement.

  I am at a loss here.  It seems that the more I read about this the
more I get confused.

Can anyone give me a hand with this?  Or a push in the right direction? 
It would be much appreciated.
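
The only thing I can think of is the $SIG{CHLD} line: \&CHILD_COUNT($child_limit) actually calls CHILD_COUNT and takes a reference to whatever it returns, so when a child exits there is nothing callable to run (and the error gets reported at whatever line was executing, here the next if).  Maybe it should be a closure instead, something like (untested):

$SIG{CHLD} = sub {
    while ( ( my $kid = waitpid(-1, WNOHANG) ) > 0 ) {
        $child_pids-- if $child_pids > 0;
    }
};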

Thanks
--chad





eval on a $SIG{KILL}- newbie question

2002-08-27 Thread Chad Kellerman

Hello,
I am writing a script on a Linux server using Net::SSH::Perl.  Every
once in a while the ssh connection to a remote server dies or it just
can't connect.  The Perl module sends a $SIG{KILL} to the script whenever
this happens, which isn't what I want.  I am trying to put the
kill in an eval statement and have it wait a few minutes before it
tries to connect again.  But I am never getting past the eval statement.

Here's my code:

 eval  { local $SIG{KILL} = sub {die "died" };
alarm 10;
$ssh->login($user);
($out, $error, $exit) = $ssh->cmd($cmd);
alarm(0);
}; # end of eval statement
if ($@ =~ /died/) {
   try_again($host_ip, $host_name, $group_dir) = @_;
}

In the try_again sub I have it email me (which works), but I also have it
print that it's entering the failed subroutine, and it never does that.

  Does anyone see what I am doing wrong?  Thanks again for all the help.

--chad  





Re: eval on a $SIG{KILL}- newbie question

2002-08-27 Thread Chad Kellerman

Bob,
    Thanks for the response.  I did not realize you can't trap a
$SIG{kill}.

 I guess the only way around this is to change the perl module. 
Change it so it doesn't die but return a value and grab that value in
the eval statement?


thanks again,
--chad


On Tue, 27 Aug 2002 08:41:31 -0400
Bob Showalter <[EMAIL PROTECTED]> wrote:

> > -Original Message-
> > From: Chad Kellerman [mailto:[EMAIL PROTECTED]]
> > Sent: Tuesday, August 27, 2002 8:33 AM
> > To: [EMAIL PROTECTED]
> > Subject: eval on a $SIG{KILL}- newbie question
> > 
> > 
> > Hello,
> > I am writing a script on a linux server use Net::SSH::Perl. 
> > Every
> > once in a while the ssh connection to a remote server dies or it
> > just can't connect.  the perl module send a $SIG{KILL} to the script
> > when ever this happens.  Which isn't what I want.  I am trying to
> > put the kill in an eval stattement and have it wait a few minutes
> > before it tries to connect again.  But I am never getting past the
> > eval statement.
> > 
> > Here's my code:
> > 
> >  eval  { local $SIG{KILL} = sub {die "died" };
> > alarm 10;
> > $ssh->login($user);
> > ($out, $error, $exit) = $ssh->cmd($cmd);
> > alarm(0);
> > }; # end of eval statement
> > if ($@ =~ /died/) {
> >try_again($host_ip, $host_name, $group_dir) = @_;
> > }
> > 
> > It the try_again sub I have it email me (which works) but I have it
> > print that it's entering the failed subroutine but it never does
> > that.
> > 
> >   Does anyone see what I am doing wrong?  Thanks again for 
> > all the help.
> 
> Are you saying Net::SSH::Perl sends SIGKILL to the calling script? You
> can't catch SIGKILL. 
> 
> If you're trying to catch the 10 second timeout, use $SIG{ALRM}.
> 





Re: eval on a $SIG{KILL}- newbie question

2002-08-27 Thread Chad Kellerman

Sorry everybody,

   I have been trying to work on this all day but nothing...

If a Perl module uses:
connect($sock, sockaddr_in($rport, $raddr))
or die "Can't connect to $ssh->{host}, port $rport: $!";

How do I catch the die() in an eval statement?  I have been using:

eval {   
alarm 10;
$ssh->login($user);
($out, $error, $exit) = $ssh->cmd($cmd);
alarm(0);
}; # end of eval statement
if ($@ =~ /Can't/) {
   try_again($ip, $host_name) = @_;
}

but the eval still doesn't pass the program to the sub try_again.

Sorry for being such a newbie, but I have been trying to follow the
O'Reilly book and still it just isn't happening.
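
The only other thing I can think of is that the die() happens inside the constructor (that's where connect() is called), so the new() itself probably has to be inside the eval, and try_again() should just be called rather than assigned to.  Roughly (untested):

my ($out, $error, $exit);
eval {
    local $SIG{ALRM} = sub { die "timed out\n" };
    alarm 10;
    my $ssh = Net::SSH::Perl->new($ip);
    $ssh->login($user);
    ($out, $error, $exit) = $ssh->cmd($cmd);
    alarm 0;
};
if ($@) {                  # catches the connect die(), the timeout, etc.
    try_again($ip, $host_name);
}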

thanks,
chad


On Tue, 27 Aug 2002 09:28:52 -0400
Bob Showalter <[EMAIL PROTECTED]> wrote:

> > -Original Message-
> > From: Chad Kellerman [mailto:[EMAIL PROTECTED]]
> > Sent: Tuesday, August 27, 2002 8:58 AM
> > To: [EMAIL PROTECTED]
> > Subject: Re: eval on a $SIG{KILL}- newbie question
> > 
> > 
> > Bob,
> > Thanks for the responce.  I did not realize you can't trap a
> > $SIG{kill}.
> > 
> >  I guess the only way around this is to change the perl module. 
> > Change it so it doesn't die but return a value and grab that value
> > in the eval statement?
> 
> Wait a minute. die() is vastly different from sending SIGKILL. If the
> module simply die()'s, you catch that by examining $@ after the eval
> block.
> 
> > 
> > 
> > thanks again,
> > --chad
> > 
> > 
> > On Tue, 27 Aug 2002 08:41:31 -0400
> > Bob Showalter <[EMAIL PROTECTED]> wrote:
> > 
> > > > -Original Message-
> > > > From: Chad Kellerman [mailto:[EMAIL PROTECTED]]
> > > > Sent: Tuesday, August 27, 2002 8:33 AM
> > > > To: [EMAIL PROTECTED]
> > > > Subject: eval on a $SIG{KILL}- newbie question
> > > > 
> > > > 
> > > > Hello,
> > > > I am writing a script on a linux server use Net::SSH::Perl. 
> > > > Every
> > > > once in a while the ssh connection to a remote server dies or it
> > > > just can't connect.  the perl module send a $SIG{KILL} to 
> > the script
> > > > when ever this happens.  Which isn't what I want.  I am trying
> > > > to put the kill in an eval stattement and have it wait a few
> > > > minutes before it tries to connect again.  But I am never
> > > > getting past the eval statement.
> > > > 
> > > > Here's my code:
> > > > 
> > > >  eval  { local $SIG{KILL} = sub {die "died" };
> > > > alarm 10;
> > > > $ssh->login($user);
> > > > ($out, $error, $exit) = $ssh->cmd($cmd);
> > > > alarm(0);
> > > > }; # end of eval statement
> > > > if ($@ =~ /died/) {
> > > >try_again($host_ip, $host_name, $group_dir) = @_;
> > > > }
> > > > 
> > > > It the try_again sub I have it email me (which works) but 
> > I have it
> > > > print that it's entering the failed subroutine but it never does
> > > > that.
> > > > 
> > > >   Does anyone see what I am doing wrong?  Thanks again for 
> > > > all the help.
> > > 
> > > Are you saying Net::SSH::Perl sends SIGKILL to the calling 
> > script? You
> > > can't catch SIGKILL. 
> > > 
> > > If you're trying to catch the 10 second timeout, use $SIG{ALRM}.
> > > 
> > 
> > -- 
> > To unsubscribe, e-mail: [EMAIL PROTECTED]
> > For additional commands, e-mail: [EMAIL PROTECTED]
> > 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 





perl ssh

2002-09-03 Thread Chad Kellerman

Hey guys,

Having an issue with Net::SSH::Perl and eval.

Is there another way to write this?  Or am I just missing something?

my $ssh;
eval {
 alarm 10;
 $ssh = Net::SSH::Perl->new($host_ip,
identity_files =>["$id_key_fn"],
port => 22,
debug => $dbg);
 alarm(0);
};
$ssh->login($user);
my ($out, $error, $exit) = $ssh->cmd($cmd);

if ($@) {
try_again($host_ip);
}

What's happening is that I get:

Can't call method "login" on an undefined value at line so and so.  

where that line so and so is:

$ssh->login($user);

I have $user defined globally.  I am not too sure if this matters, but
the script forks.  I don't think that it does, but just in case.

   It's strange because this part of the code works, but sometimes in one of
the child procs I get the error.

   Anyone have any ideas?
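
Looking at it again, I wonder if the problem is that when new() dies (or the alarm fires) $ssh is left undefined, but the login() call sits outside the eval and runs anyway.  Maybe something more like this (untested):

eval {
    local $SIG{ALRM} = sub { die "ssh timed out\n" };
    alarm 10;
    $ssh = Net::SSH::Perl->new($host_ip,
                               identity_files => ["$id_key_fn"],
                               port  => 22,
                               debug => $dbg);
    alarm(0);
};
if ($@ or not $ssh) {          # constructor died or timed out
    try_again($host_ip);
} else {
    $ssh->login($user);
    my ($out, $error, $exit) = $ssh->cmd($cmd);
}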

Thanks,
Chad





child processes dying...

2002-09-03 Thread Chad Kellerman

Hello Everyone,
 I got a script that spawns off a few children.  But for some reason,
before the children are finished doing what they are supposed to do, they
die.

code:
use POSIX "sys_wait_h";
my $child_limit = 1;
my $child_pids = 0;
$SIG{CHLD} = \&CHILD_COUNT;
FORK:
 {
while ( $#server_list > -1 ) {
 sleep 1;
 next if $child_pids > $child_limit;
 my $server_todo = pop @server_list;
 if (my $pid = fork) {
 #do the parent
 $child_pids++;
 next;
 }
 elsif (defined $pid) {
 # ok now the child.
 do_stuff($server_todo);
 exit;
 }
 elsif ($! =~ /No more process/) {
 print " No more process ...sleeping\n";
 sleep 5;
 redo FORK;
 }
 else {
 die " Can't fork: $! \n";
 }
}   
 }
wait;

sub CHILD_COUNT {
#my $child_limit = @_;
my $child;
$child = waitpid(-1, WNOHANG);
while ( $child > 0 && ( $child_pids > 0)) {
$child_pids-- if ( $child_pids > 0);
$child = waitpid(-1,WNOHANG);
}
}

  The program goes into the subroutine do_stuff, but in the middle it
spawns a new child and kills the first spawned pid.
  The sub works like a charm by itself, but when I try to fork it dies.
I have checked PerlMonks and O'Reilly, but still nothing works.

Thanks for the help...

chad 
   






writing stdout to disk in chunks

2002-09-05 Thread Chad Kellerman

Hello,
   I am having issues with writing standard out to disk (in chunks).  I
have a small backup script that runs.  It uses Net::SSH::Perl to send a
tar command to the home directory of a server.  The tar command pipes
everything to STDOUT, and on the back end I write that STDOUT to disk.
Everything works fine for small sites.  Because I am writing to memory,
the only issue I have is when I run into a site that has more disk space
than the "backup" server has memory.  STDOUT uses all the memory and the
box dies.  So I thought I could write chunks of STDOUT to a file on the
disk at certain intervals, thus avoiding the memory issues.



 use strict;
use diagnostics;
$|++;

use lib ".";
use ServerBackup;
use Net::SSH::Perl;
use FileHandle;
use File::Path;
use Net::SSH::Perl::Buffer;
use Compress::Zlib;

my $user = "netop";
my $hostname = "sonedomain.com";
my $dbg = 1;

my $ssh = Net::SSH::Perl->new($hostname,
  identity_files =>["$id_key_fn"],
  port => 22,
  debug => $dbg);
   $ssh->login($user);
my ($home_list, $home_err, $home_exit) = $ssh->cmd($listhome);

my @home_users = split " ", $home_list;   
foreach my $home_user (@home_users) {
   my $buffer = Net::SSH::Perl::Buffer->new;
   my $gz =
gzopen("/export/home/server/scripts/widow/test/$home_user\.tar.gz",
"w");
   my ($home_out, $home_err, $home_exit);
   while (($home_out, $home_err, $home_exit) =
$ssh->cmd("cd /; /usr/bin/nice /bin/tar cpf - /home/$home_user")){
my $gz =
gzopen("/export/home/server/scripts/widow/test/$home_user\.tar.gz",
"w");
$buffer->put_int32($home_out);
my $int = $buffer->get_int32;
my $tgz_user = $gz->gzwrite($int); 
   }
   $gz->gzclose();
}



  I think I may have bitten off more than I can chew; it just "ain't"
working.
It creates the file but doesn't write everything in STDOUT to the file.
Is anyone familiar with Net::SSH::Perl::Buffer?  Or is there another way I
can do this?  A Perl module I don't know about?
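
One idea, if I am reading the Net::SSH::Perl docs right (SSH-2 only, and untested): register a handler for the stdout channel so each chunk can be compressed and written as it arrives, instead of collecting the whole tarball in one scalar:

my $gz = gzopen("/export/home/server/scripts/widow/test/$home_user.tar.gz", "wb")
    or die "gzopen failed";

$ssh->register_handler("stdout", sub {
    my ($channel, $buffer) = @_;
    $gz->gzwrite($buffer->bytes);    # write each chunk as it arrives
});

$ssh->cmd("cd /; /usr/bin/nice /bin/tar cpf - /home/$home_user");
$gz->gzclose();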

Thanks for the help.

chad





how to release memory

2002-09-10 Thread Chad Kellerman

Hello,
   I have a subroutine that declares a variable from standard out.  If I
just undef that variable will that free up the memory that it used?  Or
is there another command  that frees up memory?

THanks for the help

--chad





a small forking problem.

2002-09-11 Thread Chad Kellerman

Hey,

I got a script where I use Parallel::ForkManager, which works
great...
The only problem is that I want a child to fork another process, and
Parallel::ForkManager does not allow that.

  So for my first fork I use it, but in the child I am trying to use the
normal O'Reilly FORK idiom, and I am missing something stupid.


my @list = "bla, bla, bla, bla, bla";
foreach my $item(@list) {
my $pid;
FORK: {
if ($pid=fork) {
   print"$pid\n";
}elsif (defined $pid){
 #do other perl stuff to $item
exit 0;
} elsif ($! =~/No more process/) {
sleep 5;
redo FORK;
}
   } 
}

   What I want to do is have the fork process $item before it goes
on to the next $item.  But as it is written it forks every $item in the
@list.

   Just fork, finish, repeat until every $item is done.

Can anyone offer any suggestions?
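
In other words, something like this is what I am after (untested) -- wait for each child before starting the next:

foreach my $item (@list) {
    my $pid = fork;
    die "Cannot fork: $!" unless defined $pid;
    if ($pid == 0) {
        # do other perl stuff to $item
        exit 0;
    }
    waitpid($pid, 0);    # block until this child finishes, then loop
}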

Thanks,
Chad





Re: a small forking problem.

2002-09-11 Thread Chad Kellerman

Hi all,
First, thanks for the responses.  And sorry for being so vague.  But
let me explain what I have done and why.
I wrote a backup script that logs progress, errors, and any other useful
information under the sun to a MySQL database.

The script runs from a central backup server and backs up several
servers.  When the script starts it forks (right now I have it set to 5)
and does 5 servers at one time.  When one finishes it grabs the next one
and continues as such.

   What I was running into is that when I was backing up the server I was
tarring each individual user to standard out, then writing that standard
out to a gzwrite object on the backup server.  This way, the zipping of
the archive was not done on the client box, and I did not need any disk
space on the client box either.  I did not want to tar on the
client box and scp the tar ball over.  Too many downfalls in that.

   But if I tarred 5 large users from 5 different servers, I ran out
of memory on the backup server and crashed the server.  I have found
that once Perl uses memory it does not release it until the script
dies.  Well, this is not good.

  So what I did was spawn a child each time I went to tar a user from a
client server.  Thus, when the user was finished tarring, the child would
exit and that memory would then be released from the script so that other
processes on the box could use it, almost eliminating the issue
with using up all the memory.  The only issue now is if I have to back
up a user on a client box that is larger than the memory on the backup
server.  But I have a few ideas for that.

   So the issue I was having when I posted was that each of the $items
in the @array was being spawned at once.  I threw a while loop in there
and fixed that issue.  I knew it was something stupid I was
overlooking.

   Sorry for not explaining everything thoroughly in my email.  I will
make sure that any other post will be more descriptive.

   Thanks for all the help.  I greatly appreciate it.

Sincerely,
Chad 





On Wed, 11 Sep 2002 11:29:34 -0800
Michael Fowler <[EMAIL PROTECTED]> wrote:

> On Wed, Sep 11, 2002 at 12:17:55PM -0400, Chad Kellerman wrote:
> > my @list = "bla, bla, bla, bla, bla";
> 
> You probably meant @list = ("bla", "bla", "bla", "bla", "bla");
> 
> 
> > foreach my $item(@list) {
> > my $pid;
> > FORK: {
> > if ($pid=fork) {
> >print"$pid\n";
> > }elsif (defined $pid){
> >  #do other perl stuff to $item
> > exit 0;
> > } elsif ($! =~/No more process/) {
> > sleep 5;
> > redo FORK;
> > }
> >} 
> > }
> > 
> >What I want to do, is have the fork, process $item before it goes
> > onto the next $item.  But as it is written it forks every $item in
> > the@list.
> > 
> >Just fork, finish, repeat until all $item are done.
> 
> What is the point of forking if you're waiting for the sub-process to
> complete before continuing on?
> 
> The solution, of course, is to wait, see perldoc -f wait and perldoc
> -f waitpid.  Another solution is to remove the fork altogether, and
> simply process the item in the parent.
> 
>  
> Michael
> --
> Administrator  www.shoebox.net
> Programmer, System Administrator   www.gallanttech.com
> --
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 





sleep question

2002-09-12 Thread Chad Kellerman

Greetings,
 I have a script that forks 5 children.  I print to screen when each
child gets forked.  Under certain conditions in the script a child
should sleep.  This condition occurs at different times for each child.

 I think I am noticing that when the sleep is called in a child,
every child and the parent sleep as well.

  Am I correct in this assumption?  The OS that the script is running on
is Solaris.

Thanks,
CHad





Re: Forking, Passing Parameters to forks

2002-09-19 Thread Chad Kellerman

Jason,

   I am using Parallel::ForkManager for a similar project.  It works great,
except you cannot call a fork in a child process using the Perl module.
I actually had to use fork.  But other than that it works great.

chad  

On 19 Sep 2002 10:48:47 -0400
Jason Frisvold <[EMAIL PROTECTED]> wrote:

> Hrm ...  So, I could create an array of parameters to send, plus the
> hash for the data and the forked process would have this information
> available?  Now what happens if the main program modifies these
> variables?  Is the fork in a different memory space?
> 
> Do the forked processes start from the beginning or continue on from
> that point?
> 
> Yes, I'm headed over to look up fork now ..  :)
> 
> perldoc -f fork .. right?
> 
> Friz
> 
> On Thu, 2002-09-19 at 10:36, Bob Showalter wrote:
> > > -Original Message-
> > > From: Jason Frisvold [mailto:[EMAIL PROTECTED]]
> > > Sent: Thursday, September 19, 2002 9:43 AM
> > > To: [EMAIL PROTECTED]
> > > Subject: Forking, Passing Parameters to forks
> > > 
> > > 
> > > Greetings,
> > > 
> > >   I'm in the process of writing a large network 
> > > monitoring system in
> > > perl.  I want to be sure I'm headed in the right direction,
> > > however.
> > > 
> > >   I have a large MySQL database comprised of all the 
> > > items that need
> > > monitoring.  Each individual row contains exactly one monitoring
> > > type(although, i would love to be able to combine this
> > > efficiently)
> > > 
> > >   One of the tables will contain the individual 
> > > monitoring types and the
> > > name of the program that processes them.  I'd like to have a 
> > > centralized
> > > system that deals with spawning off these processes and 
> > > monitoring those
> > > to ensure they are running correctly.  I'm looking to spawn 
> > > each process
> > > with the information it needs to process instead of it having 
> > > to contact
> > > the database and retrieve it on it's own.  This is where I'm 
> > > stuck.  The
> > > data it needs to process can be fairly large and I'd rather not
> > > drop back to creating an external text file with all the data.  Is
> > > there a way to push a block of memory from one process to another?
> > >  Or some
> > > other efficient way to give the new process the data it 
> > > needs?  Part of
> > > the main program will be a throttling system that breaks the data
> > > down into bite size chunks based on processor usage, running time,
> > > 
> > > and memory
> > > usage.  So, in order to properly throttle the processes, I need to
> > > be able to pass some command line parameters in addition to the
> > > data chunk...
> > > 
> > >   Has anyone attempted anything like this?  Do I have a snowball's
> > > chance?  :)
> > 
> > fork() creates a copy of a process. The new process is an exact copy
> > of the original process (except for a few items, see your fork(2)
> > manpage), including all the variables.
> > 
> > So you don't need to "pass" anything. If a variable like @data
> > contains the data to be processed, then after the fork, the child
> > will have a copy of@data to work with.
> -- 
> ---
> Jason 'XenoPhage' Frisvold
> Senior ATM Engineer
> Penteledata Engineering
> [EMAIL PROTECTED]
> RedHat Certified - RHCE # 807302349405893
> ---
> "Something mysterious is formed, born in the silent void. Waiting
> alone and unmoving, it is at once still and yet in constant motion. It
> is the source of all programs. I do not know its name, so I will call
> it the Tao of Programming."
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 





trying to get better with hashes

2002-09-19 Thread chad kellerman

Hello everyone,
 I think I am getting the hang of using Perl, just rough around the
edges.  I was hoping you guys can give me a hand.  Here is my issue.
From what I understand, using global variables such as a file-level "my $variable;" is
not the most "proper" way of coding.  I ran into a problem that I can't
seem to get around without using it (just because I am a little new at this
programming stuff).

 I have a text file that has fields separated by pipes.  Like so

--

#dir name|ip|hostname|back|port|chK|ver|mon|cl|db|pure|usr
/export6/home213/bob.work|192.168.123.23|bob.com|1|1|1|5|21,22|1|0|1|bob
/export5/home413/chuck.home|192.168.123.201|chuck.com|1|1|1|3|80|1|1|1|chuck 
#/export9/home325/sally.home|192.168.123.56|sally.com|1|1|1|3|80,21|1|1|1|sally
/export1/home425/tom.home|192.168.123.100|tom.com|1|0|1|3|80,21|0|1|1|tom   
The text file has about 1000 entries just like the ones above.

Here is the script I wrote:

---

#!/usr/bin/perl
use strict;
use warnings;
$|++;

use FileHandle;

my $msl_Host;
my %msl_Host;
my $serverList = "/usr/local/account/main-server.list";

&mslSplit($serverList);# pass the list into the subroutine.

foreach my $key (keys %msl_Host) {
my $href = $msl_Host{$key};
next if ($href->{'ignore_flag'} =~ "#");
print "$href->{'host_name'}\n";
}

sub mslSplit {
 my $msl = new FileHandle "@_", "r" || die $?;
 while (<$msl>) {
chomp;
   my ($backup_dir, $host_ip, $host_name, $backup_flag,undef) =
split(/\|/) ;
   next if $backup_dir =~ /^#directory name/;
   my ($comment, undef, $group_dir, $host_dir) = split(/\// ,
$backup_dir);
   my $daily_server = substr($group_dir,0,length($group_dir)-2);
   my %hostEntry = (
  'backup_dir'   => $backup_dir,
  'host_ip'  => $host_ip,
  'backup_flag'  => $backup_flag,
  'group_dir'=> $group_dir,
  'daily_server' => $daily_server,
  'host_dir' => $host_dir,
  'ignore_flag'  => $comment,
  'host_name'=> $host_name
);
$msl_Host{$host_name} = \%hostEntry;
 }
 $msl->close;
}

--
This works exactly the way I want it.  I can change my print statement
in the foreach loop and print any important field that I want.  There are
no additional fields I need.

I was wondering what I can do to clean it up (maybe be more
efficient).  I really don't like having:

my $msl_Host;
my %msl_Host;

  declared like that, but I don't know how to rewrite it so I don't
have to declare it like that.  I kind of stumbled across getting this just
to work.  Any help would be appreciated.
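
One direction I am considering (untested, and trimmed to the fields the print loop uses) is having the subroutine build and return the hash, so nothing has to be declared at file scope:

my %msl_Host = mslSplit($serverList);

sub mslSplit {
    my ($file) = @_;
    my %hosts;
    open my $msl, '<', $file or die "Cannot open $file: $!";
    while (<$msl>) {
        chomp;
        next if /^#/;                       # header and commented-out lines
        my ($backup_dir, $host_ip, $host_name, $backup_flag) = split /\|/;
        $hosts{$host_name} = {
            backup_dir  => $backup_dir,
            host_ip     => $host_ip,
            backup_flag => $backup_flag,
            host_name   => $host_name,
        };
    }
    close $msl;
    return %hosts;
}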

Thanks, 
--chad








RE: how to find memory leaks?

2002-09-20 Thread chad kellerman


Here's my $.02 on this subject.  Correct me if I am wrong.
Once Perl uses memory it does not want to let it go back to the system.
I believe I have read that the developers are working on this.  Since you
have your script running as a daemon, it will not release a lot of
memory back to the system, if any at all.

I had a similar problem.  The way I worked around it is:
I knew where my script was eating up memory.  So at these points I fork()
children.  Once the child completes and dies, the memory is released back
into the system.

At least I saw a dramatic decrease in memory consumption.

--chad

On Fri, 2002-09-20 at 04:08, Timothy Johnson wrote:
> 
> Instead of delete()ing it, try lexically scoping your hashes using my().
> You may find that letting the data structures go out of scope releases some
> memory to be reused by perl that you were missing.
> 
> -Original Message-
> From: Angerstein [mailto:[EMAIL PROTECTED]]
> Sent: Friday, September 20, 2002 12:30 AM
> To: [EMAIL PROTECTED]
> Subject: how to find memory leaks?
> 
> 
> Hi,
> I have a deamon like programm, which runs tasks at give timestamps.
> This is in a while (1) {}  if startjobx == time loop.
> 
> Now i have the problem that one or more of my datastructures eats more and
> more memory.
> I "delete" every value after using it from my hashes or array from arrays,
> but it still not getting better. any idea?
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 
-- 
The instructions said to use windows 98 or better, so  I installed
slackware.





RE: how to find memory leaks?

2002-09-20 Thread chad kellerman

Dave,

 Actually, in each fork I was tarring up information from another
server with Net::SSH::Perl.  The full tar ball was written to stdout and
I would gzip it on the originating server.  If I did not fork each tar,
the server would crash in a matter of minutes.  But with the forks I
actually connected and grabbed tar balls from 100 servers, 8 at a
time.

The OS was Solaris x86, but I find that the script runs better on Red Hat
7.3.

   I did not use top when monitoring the script; I actually used
vmstat 1.

   I am kinda new with Perl.  The script was my first script over 200
lines.  And like I said, it would crash in under an hour without the
forks.

   It just made sense that when that child died, the memory the child
held for the tar was released.  I actually saw memory drop real low, then
after the write to disk it would jump up.

Sound right to you?  Or am I missing something?

--chad   

On Fri, 2002-09-20 at 14:07, david wrote:
> Chad Kellerman wrote:
> 
> > 
> > here's my $.02 on this subject.  Correct me if I am wrong.
> > Once perl uses memory it does not want to let it go back to the system.
> > I believe I have read the the developers are working on this.  Since you
> > have your script running as a daemon.  It will not release a lot of
> > memory back to the system, if any at all.
> 
> currently, the memory will not be released back to the OS. your OS mostly 
> likely do not support that. many langugages that handles memory management 
> internally have the same problem. in C/C++, memory management is the job of 
> the programmer but if you put your data on the stack, they won't be 
> released back to the OS until your program exit. if, however, you request 
> something from the heap, you will have the chance to relese them back to 
> the OS. that's nice because you actually release what you don't need back 
> to the OS, not just your process pool.
> 
> > 
> > I had a similar problem.  The way I worked around it is:
> > I knew where my script was eating up memory.  So at these point I fork()
> > children.  Once the child completes and dies the memory is released back
> > into the system.
> > 
> 
> i don't know if what you describle really works. when you fork, you are 
> making an exact copy of the process running. the child process will include 
> the parent process's code, data, stack etc. if the fork success, you will 
> have 2 pretty much identical process. they are not related other than the 
> parent-child relation the kernal keeps track. so if your child process 
> exit, it should release all the memory of it's own but shouldn't take 
> anything with it from it's parent. this means your child process's exit 
> should not cause your parent process's memory pool to be returned back to 
> the OS.
> 
> but you said you really see a dramatic decrease in memory consumption but if 
> you check your process's memory foot print(let say, simply look at it from 
> the top command), does it's size reduce at all?
> 
> david
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 
-- 
Chad Kellerman
Jr. Systems Administrator
Alabanza Inc
410-234-3305





RE: how to find memory leaks?

2002-09-20 Thread chad kellerman

I tried doing the same thing but my supervisor got mad.  We aren't
allowed to use any type of system call in our scripts.  If the system
call is in a perl module it's one thing but not in our code.

system or exec anything that executes from /bin/sh

even though it is so much nicer to do that, especially using find..

-chad





On Fri, 2002-09-20 at 15:01, david wrote:
> Chad Kellerman wrote:
> 
> > Dave,
> > 
> >  Actually in each fork.  I was tarring up information from another
> > server with Net::SSH::Perl.  The full tar ball was written to stdout and
> > I would gzip in on the originating server.  If I did not fork each tar
> > the server would crash in a matter of minutes.  But with the forks I
> > actually contected adn grabbed tar balls from a 100 servers, 8 at a
> > time.
> > 
> 
> yes, now that it make sense to me. the tarring portion of your code is 
> likely to create huge heap (like the share memory module, i think it's call 
> ShareLite or something like that, everytime it pull something back from the 
> share memory, it creates a heap for it. the heap will not go away (because 
> Perl does it's own memory management) until the client exit.). you should 
> verify that if Net::SSH::Perl does something similar. it seems like it can 
> be the case. now, if you create a child process for that, it's the child 
> process that creates the heap, not the parent, so when the child process 
> exit, everything is destoried(code, data stack,heap, etc). that's why you 
> don't see your parent eating up a lot of memory.
> 
> i could be totally wrong but with my experience with ShareLit, the situation 
> is similiar. indeed, we use similar solution as your forking except that we 
> don't fork, we simply exec().
> 
> david
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 
-- 
Chad Kellerman
Jr. Systems Administrator
Alabanza Inc
410-234-3305





Re: how to find memory leaks?

2002-09-20 Thread chad kellerman

Frank-
   I think he believes it's more of a security issue then anything
else.  We only have three os's here.  Solaris x86, Solaris sparc, and
linux. So I don't think it's a port issue.

--chad

  

On Fri, 2002-09-20 at 15:49, Frank Wiles wrote:
>  .--[ chad kellerman wrote (2002/09/20 at 15:32:11) ]--
>  | 
>  |  I tried doing the same thing but my supervisor got mad.  We aren't
>  |  allowed to use any type of system call in our scripts.  If the system
>  |  call is in a perl module it's one thing but not in our code.
>  |  
>  |  system or exec anything that executes from /bin/sh
>  |  
>  |  even though it is so much nicer to do that, especially using find..
>  |  
>  `-
> 
> Does he have a sound reason for this or is he just thinking it
> somehow makes your program more secure or more easily ported, etc? 
> 
>  -
>Frank Wiles <[EMAIL PROTECTED]>
>http://frank.wiles.org
>  -
> 
-- 
Chad Kellerman
Jr. Systems Administrator
Alabanza Inc
410-234-3305





Don't know how to title this email.

2002-09-24 Thread chad kellerman

Hello everyone,
I am sure this is a newbie question but I have never had to do this
in a script and I am not quite too sure if I can.


  Is there a way I can "mark" a part of a script so that, if I wrap an
eval around a particular statement, I can go back to that mark and retry
it?

  I have to make quite a few ssh connections to various servers, and I was
wondering if I could put a mark before the ssh and, if it dies, have it go
back to a point before the connection and retry in a few seconds.

  I would rather not create a whole subroutine, because I would have
to create about 10 of them for every different ssh connection, since there
is something different done on each connection.
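
What I am picturing is a labelled bare block that I can redo after a failed eval -- roughly this (untested):

my $tries = 0;
ATTEMPT: {
    eval {
        # one ssh connection and whatever gets done on it
    };
    if ($@ && ++$tries < 5) {
        sleep 10;
        redo ATTEMPT;    # jump back to the "mark"
    }
}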

Thanks for the help.

Chad









fork exiting sub

2002-09-26 Thread chad kellerman

Hi everyone,


I am forking and it looks like when my foreach loop is completed it
dies without going to the next if statement:




I have:

foreach my $usr (@users) {
  my $UsrPid;
  unless ($UsrPid = fork) {
while fork {
  do stuff
  exit 0;
}
exit 0;
   }
   waitpid($UsrPid,0);
} 
if (defined @otherusers)
  foreach my $usr2 (@otherusers) {
my $oUsrPid;
unless ($oUsrPid = fork) {
   while fork {
  do stuff
  exit 0;
   }
   exit 0;
 }
 exit 0;
   }
   waitpid($oUsrPid,0);
}

This is in a subroutine.  But after it does stuff with $usr it
exits out of the sub and goes to the next one.  It does not continue
with the if (defined @otherusers).

I know @otherusers exists because I log them into a MySQL database a
few lines before the first foreach loop.  I think it has something to do
with how I end my fork.

Anyone have any suggestions why it's dying?  Do I have my exit 0; in the
proper location?

Thanks again for all the great help.

--chad


-- 
Chad Kellerman
Jr. Systems Administrator
Alabanza Inc
410-234-3305





RE: fork exiting sub

2002-09-26 Thread chad kellerman

James,

   I am tarring up different directories -- directories that may be in
home or maybe in another location.  The directories in home are @users
and the directories in another location are @otherusers.  The thing is,
the directories in the other location might not exist.
I read that before forking another process you should wait a bit
before forking a new one.

--chad


On Thu, 2002-09-26 at 10:17, Kipp, James wrote:
>  
> > foreach my $usr (@users) {
> >   my $UsrPid;
> >   unless ($UsrPid = fork) {
> > while fork {
> >   do stuff
> >   exit 0;
> > }
> > exit 0;
> >}
> >waitpid($UsrPid,0);
> > } 
> > if (defined @otherusers)
> >   foreach my $usr2 (@otherusers) {
> > my $oUsrPid;
> > unless ($oUsrPid = fork) {
> >while fork {
> >   do stuff
> >   exit 0;
> >}
> >exit 0;
> >  }
> >  exit 0;
> >}
> >waitpid($oUsrPid,0);
> > }
> 
> What are you trying to do here? what is the distinction between @users and
> @otherusers ?Not sure you need all this code. Did you want to include the if
> (...) in the foreach loop ?
> 
> foreach my $usr (@users) {
>my $UsrPid;
>if ($UsrPid = fork) { exit }; #exit parent
>elseif ($UserPid) { do stuff  }
>else { die "bad fork .. $!" }
>
> # now, what do you want to with this ? waitpid waits for the child to exit
> is 
> # this  what you want to do ? or did you want the current child to go onto
> your
> # if statement below.  
> waitpid($UsrPid,0);
> } 
> > if (defined @otherusers)
> >   foreach my $usr2 (@otherusers) {
> > my $oUsrPid;
> > unless ($oUsrPid = fork) {
> >while fork {
> >   do stuff
> >   exit 0;
> >}
> >exit 0;
> >  }
> >  exit 0;
> >}
> >waitpid($oUsrPid,0);
> > }
> 
> 
> > 
> > This is in a sub routine.  But after it does stuff with $Usr it
> > exists out of the sub and goes to the next one.  IT does not contiue
> > with the if (define @otherusers).
> 
> THAT is because you code is telling it to that :-)
> 
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 
-- 
Chad Kellerman
Jr. Systems Administrator
Alabanza Inc
410-234-3305



signature.asc
Description: This is a digitally signed message part


RE: fork exitting sub

2002-09-26 Thread chad kellerman

James,

The script is too long.  I fork because I tar thru ssh, which uses
memory.  So that the server I am on doesn't go down because of memory
consumption, I fork each tar so when the fork dies the memory is freed
for the system..

I really think I have issues with my exits..  I am going to remove
an exit; I think it might work.


Thanks for the help..

chad

On Thu, 2002-09-26 at 10:33, Kipp, James wrote:
> > 
> > James,
> > 
> >I an tarring up different directories.  Directories that may be in
> > home or maybe in another location.  The directories in home are @users
> > and the directories in another location are @otherusrs.  The thing is
> > the directories in the other location might not exist.  
> > I read that before forking another process you should wait a few
> > before forking a new one..
> > 
> 
> Is forking neccessary? Why not just check for the existence of the directory
> and tar it up. 
> can you post the complete script?
> 
> 
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 
-- 
Chad Kellerman
Jr. Systems Administrator
Alabanza Inc
410-234-3305



signature.asc
Description: This is a digitally signed message part


Re: fork exitting sub

2002-09-26 Thread chad kellerman

david,

actually, without the while loop every user in the array gets forked
at one time.  500+ forks.  I put the while loop in and it does one at a
time..

chad

On Thu, 2002-09-26 at 14:28, david wrote:
> Chad Kellerman wrote:
> 
> > I have:
> > 
> > foreach my $usr (@users) {
> >   my $UsrPid;
> >   unless ($UsrPid = fork) {
> > while fork {
> >   do stuff
> >   exit 0;
> > }
> > exit 0;
> 
> i don't understand why you need another 'while fork' here.
> the parent goes into the while loop, does stuff and then exits. the child 
> process simply ignores the while loop and then immediately exits. the child 
> does nothing. the 'while fork' seems unnecessary.
> 
> david
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 
-- 
Chad Kellerman
Jr. Systems Administrator
Alabanza Inc
410-234-3305



signature.asc
Description: This is a digitally signed message part


returning to parent after a fork.

2002-10-01 Thread chad kellerman

Hi,

I was wondering if someone can shed some light on a problem I am
having.  I have a sub routine that forks each variable in an array one
at a time.  But after the last element in the array the program exits
from the sub routine; it does not continue after the last element
in the array is done forking.



sub tarusers {

foreach my $user (@home) {
  #creating fork for each user to free memory when the child dies.
  my $pid;  
  unless ($pid = fork) {
while (fork) {
  # in here I have code that tars users on a remote box
and gzips them on the local machine.  Plus logs everything to a mysql
db.  
  exit 0;
  }  
  exit 0;
}
waitpid($pid,0);
  }  
  # start tarring home2 users.
  if (@home2) {
foreach my $user2 (@home2) {
  my $pid;  
  unless ($pid = fork) {
while (fork) {
  # here I have code that tars users on a
remote box and gzips locally.  The @home2 may or may not exist 
  exit 0;
} 
exit 0;
  } 
  waitpid($pid,0);
}
  }
}  



   The problem is that it exits the sub routine before the if (@home2). 
I am not too sure how to change the code to continue.  I know the @home2
exists because I can print it out.  I am just having a hell of a time
with forking and returning to the parent.

Can anyone offer any suggestions?

thanks,
--chad
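
For comparison, a sketch with one fork per user and no inner "while (fork)":
the child does the work and exits, the parent waits for that child and then
moves on to the next user.  tar_user() is a made-up name standing in for the
tar/gzip/logging code:

foreach my $user (@home) {
    my $pid = fork;
    die "cannot fork: $!" unless defined $pid;
    if ($pid == 0) {        # child
        tar_user($user);    # tar on the remote box, gzip locally, log to mysql
        exit 0;
    }
    waitpid($pid, 0);       # parent waits here, then loops to the next user
}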





signature.asc
Description: This is a digitally signed message part


uptime

2002-10-03 Thread chad kellerman

Hi everyone,

What would be the easiest way to find the uptime of a linux
machine?  I don't want to use a system call.  Is there a module I can
use? Or do I have to open /proc/uptime and calculate it thru there?


Thanks for the help..

Chad
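
One module-free sketch is to read /proc/uptime directly; on Linux the first
field is the number of seconds the machine has been up:

#!/usr/bin/perl
use strict;
use warnings;

open my $fh, '<', '/proc/uptime' or die "cannot open /proc/uptime: $!";
my ($seconds) = split ' ', <$fh>;
close $fh;

my $days  = int($seconds / 86400);
my $hours = int(($seconds % 86400) / 3600);
printf "up %d days, %d hours\n", $days, $hours;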

-- 
Chad Kellerman
Jr. Systems Administrator
Alabanza Inc
410-234-3305



signature.asc
Description: This is a digitally signed message part


removal of a line in a file

2002-10-09 Thread chad kellerman

Perl gurus,

   I was wondering if there is a one liner that searches a file for a
string and then removes that line and the following four lines in the
file?

Thanks,

Chad
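
One way to do it (a sketch; STRING and file.txt are placeholders): keep a
small countdown so the matching line and the four lines after it are
skipped, and let -i do the in-place edit:

perl -i -ne '$skip = 5 if /STRING/; if ($skip) { $skip--; next } print' file.txt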

-- 
Chad Kellerman
Jr. Systems Administrator
Alabanza Inc
410-234-3305



signature.asc
Description: This is a digitally signed message part


newbie problem with Getopts::Std

2002-10-10 Thread chad kellerman

Hi everyone,
  Just started a little script and I am not too sure what I am doing
wrong.  I am just testing my opts to make sure they are what I want. 
Trying to make a script "dummy proof"

Here is what I got:


use Getopt::Std;
our ($opt_h, $opt_d, $opt_s);

if ($opt_h) {
 print "Usage: $0 -d  -s  -h \n";
 exit;
}elsif (!defined $opt_s || !defined $opt_d) {
 print "please specify -s and -d options\n";
 exit;
}elsif ($opt_d =~ /^[a-zA-Z]/) {
 print "please use ip address for dup\n";
 exit;
}elsif ($opt_s =~ /^[0-9]/) {
 print "please use domain name of server to be upgraded\n";
 exit;
}



I think I am using the wrong function "!defined" but I am unsure
what else I could use.  When I run the script the -h option works but
if I specify -s or -d I always get the print statement "please specify
-s and -d options\n".

Can anyone offer any help with this?

thanks,
chad






signature.asc
Description: This is a digitally signed message part


Re: newbie problem with Getopts::Std

2002-10-10 Thread chad kellerman

My bad.  I got it.  I had 
getopts("ds:h");

and not

getopts("d:s:h");


Sorry for posting something so stupid.

chad


On Thu, 2002-10-10 at 08:16, chad kellerman wrote:
> Hi everyone,
>   Just started a little script and I am not too sure what I am doing
> wrong.  I am just testing my opts to make sure they are what I want. 
> Trying to make a script "dummy proof"
> 
> Here is what I got:
> 
> 
> use Getopt::Std;
> our ($opt_h, $opt_d, $opt_s);
> 
> if ($opt_h) {
>  print "Usage: $0 -d  -s  -h \n";
>  exit;
> }elsif (!defined $opt_s || !defined $opt_d) {
>  print "please specify -s and -d options\n";
>  exit;
> }elsif ($opt_d =~ /^[a-zA-Z]/) {
>  print "please use ip address for dup\n";
>  exit;
> }elsif ($opt_s =~ /^[0-9]/) {
>  print "please use domain name of server to be upgraded\n";
>  exit;
> }
> 
> 
> 
> I think I am using the wrong function "!defined" but I am unsure
> what else I could use.  When I run the script the -h option works but
> if I specify -s or -d I always get the print statement "please specify
> -s and -d options\n".
> 
> Can anyone offer any help with this?
> 
> thanks,
> chad
> 
> 
> 
-- 
Chad Kellerman
Jr. Systems Administrator
Alabanza Inc
410-234-3305



signature.asc
Description: This is a digitally signed message part


checking email account name

2002-10-10 Thread chad kellerman

Hello again,
  
I am using Getopt::Std, with a -e option where you have to enter
your email address.  How can I do checking on that address?  I am trying
to verify the domain as well as the user.

Here is what I have but I don't know how to continue:



getopts("d:s:e;h"); # colons take arguements
my @authUsers = "bob, chuck, pam, lee, april"; 

if ($opt_e =~ /\@somedomain.com/) {
 print "please use a valid somedomain.com email account\n";
 exit;
}elsif () {
 print "you are not a valid user for this program.  go away\n";
 exit;
}

   I am not sure if I can put a conditional statement for the elsif
statement that would loop thru the @authUsers.  Or put it this way, I
don't know how.  I am working on a linux system, the user and group
permissions I am not worried about at this point in time, because I want
to use the valid email address later.

   Can anyone point me in the right direction?  Not sure if I should
change my @authusers to full valid email accounts and verify the whole
account, or pieces like I am doing.

Suggestions welcomed.

Thanks again,

Chad
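
A sketch of both checks, assuming $opt_e is filled in by getopts as above
and that @authUsers holds one name per element (note the original assigns a
single comma-separated string, which makes a one-element list):

my @authUsers = qw( bob chuck pam lee april );

my ($acct) = $opt_e =~ /^(.+)\@somedomain\.com$/;
if ( !defined $acct ) {
 print "please use a valid somedomain.com email account\n";
 exit;
}
unless ( grep { $_ eq $acct } @authUsers ) {
 print "you are not a valid user for this program.  go away\n";
 exit;
}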






signature.asc
Description: This is a digitally signed message part


Re: cgi scripts and a crashing server:advice needed

2002-10-10 Thread chad kellerman

not sure if anyone has asked.  What kind of server and what do the logs
say?

--chad

On Thu, 2002-10-10 at 11:56, Ben Crane wrote:
> Hi list,
> 2
> Right, I have a few (6 to be exact) cgi scripts that
> run (not at the same time) on our web server, all 6 do
> a variety of different things like parsing data from a
> csv file and creating a suitable output html format
> and redirecting to other pages.
> 
> Our server has crashed and there seems to be a general
> consensus that the number of scripts being run is
> causing the problem as the server also runs a Content
> Management System alongside. I've gone through my
> scripts to see if I've done anything silly, but the
> most advanced of all the scripts simply opens a csv
> file and parses it...i usually have other cgi scripts
> that use the csv file for validating the data, but I
> have closed each filehandle...
> 
> Can anyone think of what types of problems very basic
> cgi scripts can cause on a server? 
> 
> Thanx
> Ben
> 
> __
> Do you Yahoo!?
> Faith Hill - Exclusive Performances, Videos & More
> http://faith.yahoo.com
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 
-- 
Chad Kellerman
Jr. Systems Administrator
Alabanza Inc
410-234-3305



signature.asc
Description: This is a digitally signed message part


counter for hash

2002-10-15 Thread chad kellerman

Hello,

I wrote a script that goes thru the access logs of a web server
and prints out the hour and number of hits/hour.  The only problem is
that if I use warnings I get a bunch of warnings when I count the hits.

Use of uninitialized value in addition (+) at scripts/hits.pl line 14,
<IN> line 20569.


#!/usr/bin/perl

use warnings;
use strict;

#define variables
my %hits;

open IN, '<', $ARGV[0] or die "cannot open log file: $!";

while (my $line = <IN>) {
 chomp $line;
 my (undef, $hour,undef) = split(/:/, $line);
 $hits{$hour} = $hits{$hour}+1;
}

foreach my $hour (sort keys %hits) { 
 print "$hour \+ -\+".$hits{$hour},"\n";
}


The problem lies in the line that says:

$hits{$hour} = $hits{$hour}+1;

What is the best way to create a counter?

  I have seen $var++, but it would not work here; I had to use +1.

Thanks for the info,

chad







signature.asc
Description: This is a digitally signed message part


Re: counter for hash

2002-10-15 Thread chad kellerman

japhy,

you are correct.  That is what I wanted to convey.  But with
everyone's input I understand what I was missing.


thanks again,

chad


On Tue, 2002-10-15 at 12:59, Jeff 'japhy' Pinyan wrote:
> On Oct 15, Rob said:
> 
> >From: "chad kellerman" <[EMAIL PROTECTED]>
> >
> >>  $hits{$hour} = $hits{$hour}+1;
> [snip]
> >> $hits{$hour} = $hits{$hour}+1;
> >>
> >> What it the best way to create a counter?
> >>
> >>   I have seen $var++, it would not work here I had to use +1.
> >
> >But I'd be interested to know why you had to use '+ 1'?
> 
> You are misinterpreting Chad's problem.  Chad didn't show EXACTLY the ++
> code he used, but I think I know.  Chad originally tried:
> 
>   $hits{$hour} = $hits{$hour}++;
> 
> but it did not work.  Why?  Because $hits{$hour}++ returns the PREVIOUS
> value of $hits{$hour} (0 in this case), then increments it by 1.  BUT then
> $hits{$hour} gets set back to 0 (the returned value).
> 
> Chad, you do not need to assign to $hits{$hour} again.  Simply saying
> 
>   $hits{$hour}++;
> 
> or, more computer-scientifically,
> 
>   ++$hits{$hour};
> 
> is sufficient.
> 
> -- 
> Jeff "japhy" Pinyan  [EMAIL PROTECTED]  http://www.pobox.com/~japhy/
> RPI Acacia brother #734   http://www.perlmonks.org/   http://www.cpan.org/
> ** Look for "Regular Expressions in Perl" published by Manning, in 2002 **
>  what does y/// stand for?   why, yansliterate of course..
> [  I'm looking for programming work.  If you like my work, let me know.  ]
> 
-- 
Chad Kellerman
Jr. Systems Administrator
Alabanza Inc
410-234-3305



signature.asc
Description: This is a digitally signed message part


a newbie's attempts at cleverness are failing

2002-10-16 Thread chad kellerman

Hi everyone,

I know there must be an easy way to do this but I just can't figure it
out.

I issue a command from a remote server and get a variable like so:

my $out = "14G";  # actually it's "14G\n"

I am trying to get just the numeric part out of it:

I just want the 14:

I have tried a lot of things, but I just can't get the G outta
there.

Here is what I have now:

 my $Size = ( split /G/, chomp $out )[0];

I am grasping at straws.  I tried chop, chomp and the like (substr) but
just can't get it to drop both the new line character and the G.

Any help?

Thanks,
Chad 
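
For what it's worth, a regex capture drops both the G and the newline in one
go; a sketch, assuming $out holds something like "14G\n":

my ($size) = $out =~ /(\d+)/;   # grabs the leading digits only
print "$size\n";                # prints 14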







signature.asc
Description: This is a digitally signed message part


to fork or not?

2002-10-22 Thread chad kellerman
Hi everyone,

I have a perl script I am trying to make run quicker.  Currently,
the script runs line-by-line and executes each line of code.  The next
line of code does not execute until the previous line is finished.

One line of the code tars a bunch of files and does not go to the
next line until the tar ball is complete.

The line of code after the tar is not dependent on the line before
it.

   ***THE QUESTION*** 
Is there a way I can "background" the line that is doing the tar and
have the script go to the next line even though the tar is not complete?

I think I have to fork the tar, but I am not too sure if this
would be the best approach.

thanks for the help. 

--chad
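
A sketch of backgrounding just the tar: the child runs the tar and exits,
the parent falls straight through to the next line of the script.  The paths
are made up, and the waitpid at the end simply collects the child before the
script finishes:

my $tar_pid = fork;
die "cannot fork: $!" unless defined $tar_pid;
if ($tar_pid == 0) {
    system("tar", "cpf", "/backup/files.tar", "/home/somedir");   # made-up paths
    exit 0;
}

# ... the rest of the script keeps running while the tar works ...

waitpid($tar_pid, 0);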
-- 
Chad Kellerman
Jr. Systems Administrator
Alabanza Inc
410-234-3305



signature.asc
Description: This is a digitally signed message part


writing STDOUT while it's happening

2002-10-28 Thread chad kellerman
Hi guys,

 I have a piece of code that executes a command at a remote server
and writes the output to stdout.  Is there a way to write stdout as it
is being written and not have to wait for it to be completed?

I know it's kinda hard to understand but here is the portion of
code:


use Net::SSH::Perl;

my $remote_ip = "192.168.123.5";

 my ($out, $error, $exit) =  $ssh->cmd("cd /home; for i in *; do echo
\$i; tar cpvf - \$i/ | ssh $remote_ip \'tar -C /home -xpf -\';done");




how can I write $out as it is being executed, and not after everything
is completed?

I hope that explains it better..

Thanks,
Chad
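
One thing that may help, though it is worth checking against the
Net::SSH::Perl docs for the version in use: the module can take a handler
that fires as stdout data arrives, instead of collecting everything into
$out.  A sketch, assuming $ssh and $remote_ip are set up as in the snippet
above:

$ssh->register_handler("stdout", sub {
    my ($channel, $buffer) = @_;
    print $buffer->bytes;        # print each chunk as it arrives
});

$ssh->cmd("cd /home; for i in *; do echo \$i; tar cpvf - \$i/ | ssh $remote_ip 'tar -C /home -xpf -'; done");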






signature.asc
Description: This is a digitally signed message part


fork?

2002-11-13 Thread chad kellerman
Hey guys,

 I may be misunderstanding what a fork can do, so I thought I would
write in to verify, because it doesn't appear to be doing what I want
(think it should).

I have a script that tars (unix) directories.  I want the script to tar
directories [a-n], and as it is tarring up [a-n] I want it to tar [o-z]
at the same time, so that two tars would be running at the same time.

   what I did was

#!/usr/bin/perl
use strict;
use warnings;

$SIG{CHLD} = sub {wait ()}; #wait to avoid zombies

my $pid = fork ();
die "cannot fork: $!" unless defined($pid);
if  ($pid == 0) {
 #do some perl stuff
   }
   exit(0);
}
waitpid($pid,0);

#do some other perl stuff while above is running.


  I want the perl script to "multi task".  

Am I missing something??

thanks,
Chad






signature.asc
Description: This is a digitally signed message part


RE: fork?

2002-11-14 Thread chad kellerman
Daryl,

  I appreciate the help.  But the perl code in the elsif statement
still doesn't execute until the perl code in the if statement finishes.  

  Any other suggestions?  I'd like the two perl codes to execute at the
same time.
 
  Maybe fork can't do this..

THanks,
Chad


On Wed, 2002-11-13 at 15:40, Daryl J. Hoyt wrote:
> I think what you want is 
> 
> #!/usr/bin/perl
> use strict;
> use warnings;
> 
> $SIG{CHLD} = sub {wait ()}; #wait to avoid zombies
> 
> my $pid = fork ();
> die "cannot fork: $!" unless defined($pid);
> if  ($pid == 0) {
>  #do some perl stuff
>}
> elsif($pid < 0) {
>  #do some other perl stuff
>exit(0);
> }
> waitpid($pid,0);
> 
> 
> Daryl J. Hoyt
> Software Engineer
> Geodesic Systems
> 312-832-2010
> < http://www.geodesic.com>
> < mailto:djh@;geodesic.com>
> 
> 
> 
> 
> -Original Message-
> From: chad kellerman [mailto:ckellerman@;alabanza.com]
> Sent: Wednesday, November 13, 2002 2:32 PM
> To: [EMAIL PROTECTED]
> Subject: fork?
> 
> 
> Hey guys,
> 
>  I may be misunderstanding what a fork can do, so I thought I would
> write in to verify, because it doesn't appear to be doing what I want
> (think it should).
> 
> I have a script that tars (unix) directories.  I want the script to tar
> directories [a-n], and as it is tarring up [a-n] I want it to tar [o-z]
> at the same time, so that two tars would be running at the same time.
> 
>what I did was
> 
> #!/usr/bin/perl
> use strict;
> use warnings;
> 
> $SIG{CHLD} = sub {wait ()}; #wait to avoid zombies
> 
> my $pid = fork ();
> die "cannot fork: $!" unless defined($pid);
> if  ($pid == 0) {
>  #do some perl stuff
>}
>exit(0);
> }
> waitpid($pid,0);
> 
> #do some other perl stuff while above is running.
> 
> 
>   I want the perl script to "multi task".  
> 
> Am I missing something??
> 
> thanks,
> Chad
> 
> 
> 
> 
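
For what it's worth, a sketch of getting the two tars to actually overlap:
fork both children first, and only then wait for both, rather than calling
waitpid right after the first fork.  The directory and archive names here
are made up:

my @pids;
for my $range ( 'a-n', 'o-z' ) {
    my $pid = fork;
    die "cannot fork: $!" unless defined $pid;
    if ($pid == 0) {
        my @dirs = glob("/home/[$range]*");
        system("tar", "cpf", "/backup/$range.tar", @dirs);
        exit 0;
    }
    push @pids, $pid;
}
waitpid($_, 0) for @pids;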




signature.asc
Description: This is a digitally signed message part


opendir questions

2002-11-20 Thread chad kellerman
Hi everyone,

   I am having a small problem with opendir:

#!/usr/bin/perl -w

# getting disk usage for users
# not quite as good as du(1) but a little faster...maybe
# written by [EMAIL PROTECTED]
# Aug 17, 2002

use strict;
use diagnostics;
$|++;


my ( $name, $uid, $dir );
while ( ( $name, undef, $uid, undef, undef, undef, undef, $dir ) =
getpwent()) {
   next if ( $name =~ /users/  || $name =~ /www/ || $name =~ /sysop/);
   if ( $uid > 500 ) {
  if (-l $dir ) { $dir = "/home2/$name"};
  my $maildir = "$dir/\*\-mail/";
  print "$name has $dir and $maildir\n";
  opendir( MAILDIR, "$maildir" ) || die "what the: $!";
  my @mail = grep -T, readdir MAILDIR;
  closedir MAILDIR;
  print "@mail\n";
   }
}

This code dies on the opendir statement saying that /home/$name/*-mail
is not there.  But it is there.  What else can I use for the "*" so that
it will read the mail directory for a user?

Thanks for the help,
--chad 
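
opendir() takes a literal directory name and will not expand the "*", so a
sketch using glob() to find the mail directory first, then reading it
(readdir returns bare names, so the -T test is given the full path):

my ($maildir) = glob("$dir/*-mail");
if (defined $maildir and -d $maildir) {
   opendir( MAILDIR, $maildir ) || die "what the: $!";
   my @mail = grep { -T "$maildir/$_" } readdir MAILDIR;
   closedir MAILDIR;
   print "@mail\n";
}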


-- 
chad kellerman <[EMAIL PROTECTED]>



signature.asc
Description: This is a digitally signed message part


getting rid of STDOUT sometimes...

2002-12-02 Thread chad kellerman
Hey guys,

 I got a script that uses Net::SCP.  Whenever I use it in the script it
prints to stdout.

 I have other parts of the script where I use print statements to
describe what part of the script is being executed.

 Is there a way to suppress the Net::SCP stdout messages but keep my
normal prints?


examples:


my $scp = Net::SCP->new( { "host"=>$Ip, "user"=>$user } );
   $scp->put("$identity") or die $scpS->{errstr}; 

what gets printed to STDOUT is:
scp /home/bob/file.txt [EMAIL PROTECTED]:fiel.txt


I want to suppress that scp /home/bob stuff.

And keep all the print statements in the script..


Can anyone offer any suggestions??

Thanks,
--chad
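
One generic way to quiet it, sketched here without relying on anything
Net::SCP-specific: point STDOUT at /dev/null just around the scp call, then
put it back, so your own print statements still show up:

my $scp = Net::SCP->new( { "host" => $Ip, "user" => $user } );

open SAVEOUT, ">&STDOUT"   or die "cannot save STDOUT: $!";
open STDOUT,  ">/dev/null" or die "cannot redirect STDOUT: $!";

my $ok = $scp->put($identity);

open STDOUT, ">&SAVEOUT"   or die "cannot restore STDOUT: $!";
close SAVEOUT;

die $scp->{errstr} unless $ok;
print "copied $identity\n";   # normal prints still appear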




-- 
chad kellerman <[EMAIL PROTECTED]>



signature.asc
Description: This is a digitally signed message part


Re: Using ssh for uptime?

2002-12-07 Thread chad kellerman
Mark,

   It may be easier to go the route of the Net::SSH::Perl module or the
Net::SSH module.


#!/usr/bin/perl
use strict;
use warnings;
use Net::SSH::Perl;

my $user = "bob";
my @hosts = qw( host1 host2 host3 host3 );

foreach my $host (@hosts) {
   my $cmd = "/usr/bin/uptime";
   my $ssh = Net::SSH::Perl->new( $host, port => 22 );
   $ssh->login($user);
   my ( $out, $error, $exit ) = $ssh->cmd( $cmd );
   my ( $time, $uptime ) = ( split /\|/, $out[0] )[1,1];
   print "$host has been up $uptime\n";
}


##
or something along those lines.  The cmd output goes to $out.  You can
do with it whatever you like... I think that should all work..

chad






On Sat, 2002-12-07 at 12:11, Mark Weisman wrote:
> I've got a script that I'm working on that will use SSH to check the
> uptime on servers within my domain. However, I'm unsure of how exactly
> to do this this is what I have so far.
> 
> ##!/usr/bin/perl
> 
> #My (@machines,$host,$user,$pass)
> 
> #Open(INFILE," # or die "Error opening machines.txt.$!,stopped"
> #@machines = ;
> #Close(INFILE);
> #Foreach my $rec (@machines) {
> # chomp($rec);
> # ($host,$user,$pass) = split(/,/, $rec);
> # open (OUTFILE, ">records.txt")
> # or die "Error opening records.txt.$!,stopped";
> # close(OUTFILE);
> # open (OUTFILE, ">>records.txt")
> # or die "Error opening records.txt.$!,stopped";
> # print OUTFILE 'ssh -l $user $host "uptime"';
> # close(OUTFILE);
> #};
> Without the hash marks of course. Where am I going wrong? Help please?
> 
> His Faithful Servant,
> Mark-Nathaniel Weisman
> President / CEO
> Infinite Visions Educational Systems Inc.
> Anchorage, Alaska
> http://www.ivedsys.com
> [EMAIL PROTECTED]
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 




signature.asc
Description: This is a digitally signed message part


Re: Using ssh for uptime?

2002-12-07 Thread chad kellerman
Dammit, got the split wrong.

  try this:
my ( $time, $uptime ) = ( split / /, $out[0] )[0,1];

my bad..



chad

On Sat, 2002-12-07 at 12:11, Mark Weisman wrote:
> I've got a script that I'm working on that will use SSH to check the
> uptime on servers within my domain. However, I'm unsure of how exactly
> to do this this is what I have so far.
> 
> ##!/usr/bin/perl
> 
> #My (@machines,$host,$user,$pass)
> 
> #Open(INFILE," # or die "Error opening machines.txt.$!,stopped"
> #@machines = ;
> #Close(INFILE);
> #Foreach my $rec (@machines) {
> # chomp($rec);
> # ($host,$user,$pass) = split(/,/, $rec);
> # open (OUTFILE, ">records.txt")
> # or die "Error opening records.txt.$!,stopped";
> # close(OUTFILE);
> # open (OUTFILE, ">>records.txt")
> # or die "Error opening records.txt.$!,stopped";
> # print OUTFILE 'ssh -l $user $host "uptime"';
> # close(OUTFILE);
> #};
> Without the hash marks of course. Where am I going wrong? Help please?
> 
> His Faithful Servant,
> Mark-Nathaniel Weisman
> President / CEO
> Infinite Visions Educational Systems Inc.
> Anchorage, Alaska
> http://www.ivedsys.com
> [EMAIL PROTECTED]
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 




signature.asc
Description: This is a digitally signed message part


RE: Using ssh for uptime?

2002-12-10 Thread chad kellerman
Mark,
This is a pain to install.  I would do it thru CPAN.

perl -MCPAN -e shell

cpan>install Net::SSH::Perl


  If you don't have root access to install perl modules, try to use
Net::SSH.

use strict;
use warnings;
use Net::SSH qw( sshopen3 );

my $user = "bob";
my $host = "10.10.10.10";
my $cmd = "uptime";
sshopen3( "$user\@$host", *WRITER, *READER, *ERROR, "$cmd" );
my $uptime = <READER>;
chomp $uptime;
print "$uptime\n";


something like that ought to do it...

chad

  

On Tue, 2002-12-10 at 11:44, Mark-Nathaniel Weisman wrote:
> My box does not seem to have the Net::SSH::Perl installed. How can I get
> it installed?
> 
> His Faithful Servant,
> Mark-Nathaniel Weisman
> President / CEO
> Infinite Visions Educational Systems Inc.
> Anchorage, Alaska
> http://www.ivedsys.com
> [EMAIL PROTECTED]
>  
> 
> 
> -Original Message-
> From: zentara [mailto:[EMAIL PROTECTED]] 
> Sent: Monday, December 09, 2002 7:58 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Using ssh for uptime?
> 
> 
> On Sun, 8 Dec 2002 23:14:36 -0900, [EMAIL PROTECTED] (Mark-Nathaniel
> Weisman) wrote:
> 
> >Mark,
> >  I've got the code you sent installed and working (or almost working
> >anyway) snippet below:
> 
> >I'm trying to get this silly thing working, so any ideas or suggestions
> 
> >are more than appreciated.
> Hi,
> I used the other suggestion of using Net::SSH::perl and have this code
> working. (minus any html output )
> ##
> #!/usr/bin/perl
> use strict;
> use warnings;
> use Net::SSH::Perl;
> 
> my $user = "zzz";
> my $password = "zzztester";
> my @hosts = qw(localhost zentara.zentara.net);
> 
> foreach my $host (@hosts){
> my $cmd = "/usr/bin/uptime";
> my $ssh = Net::SSH::Perl->new( $host, port => 22);
> $ssh->login($user,$password);
> 
> my($out) = $ssh->cmd($cmd);
> my ($time,$uptime) = (split /\s+/,$out)[1,3];
> chop $uptime;
> print "$host has been up $uptime\n";
> } ### 
> 
> Output:
> 
> localhost has been up 4:28
> zentara.zentara.net has been up 4:28
>   
> 
> 
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 




signature.asc
Description: This is a digitally signed message part


reading a httpd.conf file

2004-01-23 Thread chad kellerman
Hello everyone,

I have a little issue that I am sure someone has come across here
once before and was wondering if I can get some pointers.

I am trying to read and grab values from  a "messy" httpd.conf
file.  What I am trying to do is grab the ServerName value and
DocumentRoot value for a given VirtualHost depending on the username I
have defined.

The httpd.conf file has many VirtualHost sections (50 +).  And each
section does not have the same order of directives as the next.  For
example:


<VirtualHost ...>
ScriptAlias /cgi-bin/ /usr/www/htdocs/lee/domain1-www/cgi-bin/
DocumentRoot /usr/www/htdocs/lee/domain1-www
ServerAdmin [EMAIL PROTECTED]
ServerName domain1.com
User lee
</VirtualHost>

<VirtualHost ...>
User bob
DocumentRoot /usr/www/htdocs/bob/domain1-www
ServerAdmin [EMAIL PROTECTED]
ScriptAlias /cgi-bin/ /usr/www/htdocs/bob/domain1-www/cgi-bin/
ServerName domain4.com
</VirtualHost>

  So I wanted to write a script so that, given a username, I could use
that to get the ServerName and DocumentRoot from the correct VirtualHost
section.  Here is what I have:

---
#!/usr/local/bin/perl
use strict;
use warnings;
use diagnostics;
$|++;

use vars qw(
 $httpd_conf_dir $httpd_conf_file $owner
 $servername $docroot $user
   );

$httpd_conf_dir = '/etc/httpd/conf';
$httpd_conf_file = "$httpd_conf_dir/httpd.conf";
$user = 'lee';
$/ = '<V';
open (HTTPD, "$httpd_conf_file") or die "Cannot open $httpd_conf_file: $!";
while (<HTTPD>)
{
if (/irtualHost/)
{
$owner = $1 if /^User\s*(\d+)/m;
next if ($owner ne $user);
$servername = $1 if /^ServerName\s*(\w+)/m;
$docroot = $1 if /^DocumentRoot\s*(.+)/m;
}
}
close (HTTPD);

print "$owner : $servername : $docroot\n";



But this just doesn't work.  I get the last VirtualHost section's
servername (minus the .com) and the Docroot.  But the username isn't
there..



Can anyone steer me in the right direction?

THanks,
Chad
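
One alternative sketch that side-steps the record-separator trick: read the
file line by line, collect each VirtualHost block into a hash keyed on its
User directive, then look up the user you care about (this reuses
$httpd_conf_file and $user from the script above):

my (%vhost, %current);
open (HTTPD, "$httpd_conf_file") or die "Cannot open $httpd_conf_file: $!";
while (my $line = <HTTPD>)
{
    if ($line =~ /^\s*<VirtualHost/i) { %current = (); next; }
    if ($line =~ m!^\s*</VirtualHost!i)
    {
        $vhost{ $current{user} } = { %current } if defined $current{user};
        next;
    }
    $current{user}       = $1 if $line =~ /^\s*User\s+(\S+)/i;
    $current{servername} = $1 if $line =~ /^\s*ServerName\s+(\S+)/i;
    $current{docroot}    = $1 if $line =~ /^\s*DocumentRoot\s+(\S+)/i;
}
close (HTTPD);

if (my $hit = $vhost{$user})
{
    print "$user : $hit->{servername} : $hit->{docroot}\n";
}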



-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




1 liner question

2004-02-25 Thread chad kellerman
Hello everyone...

    I am working on a perl one liner for adding up quota on multiple
partitions.  But I can not, for the life of me, get the numbers to add
up..


Here is what I have:

/usr/bin/quota michele | perl -ne 'if(/none$/){print
"9\n"}elsif(m:^\s+/dev/:){($q,
$l)=(split(/\s+/))[2,3];$t=($l-$q)*1024};next if(!$t);{print $t."\n"}'


Which prints out:

101376
243280896

I want these numbers added.

Can anyone offer any suggestions?

Thanks a lot..

Chad
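
A sketch that keeps a running total inside the one-liner and prints it once
in an END block, instead of printing each partition as it goes (same split
indices as the original):

/usr/bin/quota michele | perl -ne 'if(/none$/){print "9\n"}elsif(m:^\s+/dev/:){($q,$l)=(split /\s+/)[2,3];$sum+=($l-$q)*1024} END{print "$sum\n" if $sum}'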


-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: 1 liner question

2004-02-25 Thread chad kellerman
Sx,

  This script goes into a procmail recipe I was working on.  It's
running on linux.  If you run quota for a user and the quota is not set,
it actually returns none, and I just print the 9's to signify
that.  

  If the user has quota on multiple partitions, the quota command prints
the quota on separate lines.  I wanted to add the quota for each
partition (each line of the output of the quota command) and say this
is the total quota for the user.

  But the way I have it now it prints the quota (in bytes) on multiple
lines ( one for each partition). I was hoping to take the values and add
them together, but I can't seem to get it right.

  But I did not realize this is not an "appropriate" topic for this
mailing list.  I apologize for that..

Anyways, thanks for the help.

  Chad

On Wed, 2004-02-25 at 14:55, WC -Sx- Jones wrote:
> chad kellerman wrote:
> 
> > /usr/bin/quota michele | perl -ne 'if(/none$/){print
> > "9\n"}elsif(m:^\s+/dev/:){($q,
> > $l)=(split(/\s+/))[2,3];$t=($l-$q)*1024};next if(!$t);{print $t."\n"}'
> 
> Is none a reserved word now?
> 
> I ask because quota doesnt return the same values across Unix opsys...
> 
> Otherwise if the vaslues are in $q and $l - why not add them?
> -Sx-


-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
<http://learn.perl.org/> <http://learn.perl.org/first-response>




finding a spot in a file

2003-10-09 Thread chad kellerman
Hello,

   I am opening up a file (actually a my.conf).  I only want 2 lines in
the file:  port and datadir.

   The problem I am running into is that there is a port = 3324 in both
the [client] section and the [mysqld] section.

   I want to open the file and go straight to the [mysqld] section and
grab the values for port and datadir.

   Can anyone point me in the right direction?


Thanks,
Chad


-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: finding a spot in a file

2003-10-10 Thread chad kellerman
John,

   Thanks that was perfect..



Chad


On Fri, 2003-10-10 at 03:11, John W. Krahn wrote:
> Chad Kellerman wrote:
> > 
> > Hello,
> 
> Hello,
> 
> >I am opening up a file (actually a my.conf).  I only want 2 lines in
> > the file.  port and datadir
> > 
> >The problem I am running into is that there is a port = 3324 in both
> > the [client] section and the [mysqld] section.
> > 
> >I want to open the file and go straight to the [mysqld] section and
> > grab the values for port and datadir.
> > 
> >Can anyone point me in the right direction?
> 
> You could use a module like
> http://search.cpan.org/author/WADG/Config-IniFiles-2.38/IniFiles.pm or
> http://search.cpan.org/author/SHERZODR/Config-Simple-4.55/Simple.pm
> 
> Or something like this may work:
> 
> $/ = '[';
> my ( $port, $datadir );
> while ( <FILE> ) {
> if ( /mysqld]/ ) {
> $port= $1 if /^port\s*=\s*(\d+)/m;
>     $datadir = $1 if /^datadir\s*=\s*(.+)/m;
> }
> }
> 
> 
> 
> John
> -- 
> use Perl;
> program
> fulfillment
-- 
Chad Kellerman
Systems Administration / Network Operations Manager
Alabanza Inc.
10 E Baltimore Street
Baltimore, Md 21202
Phone: 410-234-3305
Email: [EMAIL PROTECTED]



-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



easier way to search and append in a file?

2003-10-16 Thread chad kellerman
Hey guys,

   I know you can do a one liner that appends a string after a line in a
file:
ie: perl -p -i -e 's/($oldstring)/$1\n$nextline/' file.txt

  But I was doing it in a script:

#!/usr/bin/perl
use strict;
my $conf = "/home/bob/my.cnf";
my $string = "mysqld]";
my $replace = "bob was here";

open (FILE, "$conf") or die "can not open $conf: $!\n";
my @file = <FILE>;
close (FILE);
   
   for (my $line = 0; $line <= $#file; $line++)
{
$/ = '[';
if ($file[$line] =~ /$string$/)
{
$file[$line] .= "$replace";
last;
}
}
   
   open (FILE, ">$conf") or die "can not open 
$conf: $!\n";
print FILE @file;
close (FILE);


Then I was thinking, there has got to be a better way of doing this..


Any suggestions

Thanks,

Chad
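
A shorter sketch of the same read-modify-write: slurp the file, do one
substitution that appends $replace after the first line containing $string,
and write it back (same $conf, $string and $replace as above):

open my $in, '<', $conf or die "can not open $conf: $!\n";
my $text = do { local $/; <$in> };           # slurp the whole file
close $in;

$text =~ s/(\Q$string\E.*\n)/$1$replace\n/;  # add the new line after the match

open my $out, '>', $conf or die "can not open $conf: $!\n";
print $out $text;
close $out;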





-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Archive::Tar a directory?

2003-11-05 Thread chad kellerman
Hello,
   I am writing a program that backs up databases.  I am having
trouble tarring up the directories.  Tarring files using Archive::Tar is
pretty straightforward, but with tarring directories I am having issues:

here is part of the code..


$datadir = "/var/lib/mysql/";

$dbQuery = qq(
 SHOW DATABASES
 );
$dbQuery = $dbh->prepare($dbQuery);
$dbQuery->execute();
while (my $db = $dbQuery->fetchrow_array()) {
push @dbs, $datadir.$db;
}
$dbQuery->finish();
print "Found data directories in: @dbs\n" if (DEBUG);

# lock all tables and flush all data
print "Locking all tables.\n" if (DEBUG);
$lockQuery = qq(
FLUSH TABLES WITH READ LOCK
);
$lockQuery = $dbh->prepare($lockQuery);
$lockQuery->execute();

# backup up the databases
Archive::Tar->create_archive ("/home/backup/$name-db.tar.gz", 9, glob
"@dbs/*");

# don't finish or disconnect till after the tar.
$lockQuery->finish();
$rc = $dbh->disconnect();




Everything runs smoothly, except when I view the tar ball it appears I
only get the globbing on the last db of the array.  I get the
directories of the other dbs, just not the contents/files inside the
directory.  Can anyone point out what I am missing?  I would appreciate
it.


Thanks,
Chad
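
A possible fix, sketched: glob "@dbs/*" interpolates the array into one
space-separated string, so only the last directory gets the /* pattern and
the earlier ones come back as bare directory names.  Globbing each directory
separately keeps every database's files in the list:

my @files = map { glob("$_/*") } @dbs;
Archive::Tar->create_archive("/home/backup/$name-db.tar.gz", 9, @dbs, @files);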


-- 




-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



the file is there.. I know it is.

2004-01-16 Thread chad kellerman
Hello everyone,
   I am starting to work on a script that is going to process a few
files in some users' directories.  I thought I would do some checking on
the files to make sure they are there and to make sure they are really
files.  I thought it was going to be pretty straightforward, until I
ran it for the first time.  Sometimes the script sees the file for one
user but not for the next user, even though I know it is there.
   I must be misunderstanding something small, but I can't figure it
out.
   Can anyone offer any suggestions? 

Thanks,
Chad

__

#!/usr/local/bin/perl
eval 'exec /usr/local/bin/perl -S $0 ${1+"$@"}'
   if 0; #$running_under_some_shell
use strict;
use warnings;
use Sys::Hostname;
$|++;
   

use vars qw(
 $server $pwdfile @users
   );
   

$server = hostname();
$pwdfile = '/etc/passwd';
   

# get users
open (PASSWD,"$pwdfile")
or die "Cannot open passwd file: $!";
while (my $pwdline = <PASSWD>)
{
my ($user,$home,$shell) = (split /:/, $pwdline)[0,5,6];
next if ( $home !~ /^\/home/ || $shell !~ /^\/bin/
  || $user eq "ftp" || $user eq "www");
   push @users, $user;
}
close (PASSWD);
   

foreach my $user(@users)
{
  print "Starting $user...\n";
  #print glob ("/home/$user/*-logs/old/200312/access-log.31-*.gz")."\n";
  $user = trim($user);
  my $decfile = glob
("/home/$user/*-logs/old/200312/access-log.31-*.gz");
  my $janfile = glob
("/home/$user/*-logs/old/200401/access-log.01-*.gz");

  if (!$decfile)
  {
print "\t\\Could not find Dec 31,2003 access log.\n";
next;
  }
  elsif (!-f $decfile)
  {
print "\t\\Dec 31,2003 access log is not a file.\n";
next;
  }
  elsif (!$janfile)
  {
print "\t\\Could not find Jan 01,2004 access log.\n";
next;
  }
  elsif (!-f $janfile)
  {
 print "\t\\Jan 01,2004 access log is not a file.\n";
 next;
  }
  else
  {
print "\t\\$user has both access logs.\n";
  }
}

# subs
sub trim
{
  my @in = @_;
  for (@in)
  {
  s/^\s+//;
  s/\s+$//;
  s/\n//g;
  }
  return wantarray ? @in : $in[0];
}
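
One possible culprit, sketched: glob() in scalar context is an iterator, so
the next time through the loop it can hand back undef (the end of the
previous list) even though the next user's file exists.  Asking for the
result in list context avoids that:

  my ($decfile) = glob("/home/$user/*-logs/old/200312/access-log.31-*.gz");
  my ($janfile) = glob("/home/$user/*-logs/old/200401/access-log.01-*.gz");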



-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Backup script using other modules.

2002-05-14 Thread chad kellerman

Hello everyone,

   I am a little new to perl.  I am writing a little backup script using 
Net::SSH::Perl, Compress::Zlib and FileHandle.

   I am just tarring up users in the home directory. (Ohh, I am running Linux.)
 I have most of the script working except compressing standard out from my ssh command.

  Can anyone offer any suggestions??


foreach $home_user (@home_users) {

 my($tar_out, $tar_err) = $ssh->cmd("cd /home; /bin/tar cpf - $home_user");
 my $gz_file->gzopen($tar_out, "rb") or die " Cannot open $tar_out: $gzerrno\n";
 while (<>) {
   $gz_file->gzwrite($_)
   or die "error writing: $gzerrno\n" ;
   }

 my $gz_user = new FileHandle "/backup1/$home_user\.tar.gz", "w";
$gz_user->print($gz_file);
  # for error logging...

 my $tar_err_log = new FileHandle "/var/log/tar_error.log", "w";
 $tar_err_log->print($tar_err);
 $tar_err_log->close;

$gz_file->gzclose;
$gz_user->close;

undef $tar_out;
undef $tar_err;
undef $gz_file;
}



  I can connect fine and just copy the tar ball over.  But I want to gzip the tar ball 
also.  It keeps on dying saying 

Can't call method "gzopen" on an undefined value at tadpole.pl line such and such.

  But I thought I defined it when I say my($tar_out, $tar_err) = $ssh


I am stuck.

THanks for the help...


--chad
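
For what it's worth, a sketch of the gzip step with gzopen called as a plain
function (it returns the object that gzwrite and gzclose are then called
on), assuming $tar_out already holds the tar stream from $ssh->cmd() as in
the script above:

my $gz_file = gzopen("/backup1/$home_user.tar.gz", "wb")
    or die "Cannot create /backup1/$home_user.tar.gz: $gzerrno\n";
$gz_file->gzwrite($tar_out)
    or die "error writing: $gzerrno\n";
$gz_file->gzclose;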

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: Backup script using other modules.

2002-05-14 Thread chad kellerman

Jaimee,


  I have a dedicated backup server to work with, and I am backing up the home directory 
on a remote web/db server.  I figured I would save some energy on the web/db server by 
compressing the data on the backup server, plus save the space on the web/db server 
by piping to stdout.

  Depending on the size of the user directory, load average can go up pretty high 
with the -z option in tar on the web/db server.

Thanks for the input though..

--chad

On Tue, 14 May 2002 09:39:49 -0700
Jaimee Spencer <[EMAIL PROTECTED]> wrote:

> Hello Chad,
> 
>Why not just use z as part of your tar command? (E.g.  tar zcpf) no   -
> in the command.
> 
> Regards,
> Jaimee
> 
> -----Original Message-
> From: chad kellerman [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, May 14, 2002 7:14 AM
> To: [EMAIL PROTECTED]
> Subject: Backup script using other modules.
> 
> 
> Hello everyone,
> 
>I am a little new to perl.  I am writing a little backup script using
> Net::SSH:Perl, Compress::Zlib and FileHandle.
> 
>I am just tarring up users in the home directory. ( Ohh I am runnng
> Linux)
>  I have most of the script working except Compress standard out from my ssh
> command.
> 
>   Can anyone offer any suggestions??
> 
> 
> foreach $home_user (@home_users) {
> 
>  my($tar_out, $tar_err) = $ssh->cmd("cd /home; /bin/tar cpf -
> $home_user");
>  my $gz_file->gzopen($tar_out, "rb") or die " Cannot open $tar_out:
> $gzerrno\n";
>  while (<>) {
>$gz_file->gzwrite($_)
>or die "error writing: $gzerrno\n" ;
>}
> 
>  my $gz_user = new FileHandle "/backup1/$home_user\.tar.gz", "w";
> $gz_user->print($gz_file);
>   # for error logging...
> 
>  my $tar_err_log = new FileHandle "/var/log/tar_error.log", "w";
>  $tar_err_log->print($tar_err);
>  $tar_err_log->close;
> 
> $gz_file->gzclose;
> $gz_user->close;
> 
> undef $tar_out;
> undef $tar_err;
> undef $gz_file;
> }
> 
> 
> 
>   I can connect fine and just copy the tar ball aver.  But I want to gzip
> the tar ball also.  It keeps on dying saying 
> 
> Can't call method "gzopen" on an undefined value at tadpole.pl line such and
> such.
> 
>   But I thought I defined it when I say my($tar_out, $tar_err) = $ssh
> 
> 
> I am stuck.
> 
> THanks for the help...
> 
> 
> --chad
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]