Re: Php perl?

2003-04-05 Thread Scott R. Godin
Gary Stainburn wrote:

 Just to put things into perspective here
 
 1) I made the comment about the start-up speed.
 2) Although I use PHP frequently my feet are FIRMLY in the Perl camp
 3) The Unix Fork/Load/Exec cycle *IS* slow because of the amount of work
 involved. The MS equivalent will be just as slow.
 4) The F/L/E cycle has to be done *every* time the CGI is requested.
 
 The PHP interpreter is already loaded and therefore *WILL* have a quicker
 startup. I don't have figures to back it up, but I would imagine Perl
 scripts exec much quicker than PHP, assuming both scripts perform the same
 function.
 
 (From memory) Using mod_perl will only require scripts to be loaded and
 compiled once per lifetime of the apache daemon, so it will be much quicker
 overall and put less load on the server than traditional CGIs.

The question I have is: with the current Apache (2.x) and mod_perl (1.27?)
involved, when a perl program is incorporated into the httpd process in
this manner, how much more *memory* overhead does each httpd process
require than under comparable circumstances with php?

For example, if I use CGI.pm to create a form (using its functional interface
to produce the actual HTML seen) and also respond to said form, how much
more RAM will each httpd process require if this is run under mod_perl?
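For illustration, here is a minimal sketch of the kind of CGI.pm functional-interface script described above (the field name is made up, and the request environment is simulated so it can run from the command line):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Simulate an incoming GET request so this runs outside a web server.
BEGIN { $ENV{REQUEST_METHOD} = 'GET'; $ENV{QUERY_STRING} = 'who=world'; }

use CGI qw(:standard);    # CGI.pm's functional interface

# Produce the form's HTML...
my $html = start_html('Demo')
         . start_form(-method => 'GET')
         . textfield(-name => 'who')
         . submit(-value => 'Go')
         . end_form()
         . end_html();

# ...and respond to a submission of the same form.
my $who = param('who');   # "world" for the simulated request above

print $html;
```

Under mod_perl the interpreter and the compiled script stay resident between requests, which is exactly where the memory-size question above comes from.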

One argument I've been handed by the php camp is that in a mod_perl 
situation, this will cause apache processes to become much larger, thereby 
taking up more of the precious memory resources. 

(consider a large gaming website like planetunreal (which IIRC uses .asp), 
or quakeworld, which garner huge volumes of hits on a daily basis)

This issue needs to be addressed firmly with the php camp, because the FUD
being spread was enough to cost me one of the most fun hobby projects I was
involved with, and the one that got me started on the perl path to begin
with, while they were still on .asp and had not yet migrated the site to a
linux server and mod_php...

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: How to measure Array or Hash's byte size ?

2003-04-05 Thread Scott R. Godin
Vincent Bufferne wrote:

 What about:
 
 my @foo = ( '1', '2', '3' );
 my $size = $#foo + 1;
 print "table size $size\n";
 
 Output:
 table size 3

print "Table size: ", scalar(@foo), "\n";
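The point being made is that an array evaluated in scalar context yields its element count, so the `$#foo + 1` arithmetic is unnecessary. A minimal sketch:

```perl
use strict;
use warnings;

my @foo = ('1', '2', '3');

my $count   = @foo;      # scalar context: the element count, 3
my $last_ix = $#foo;     # highest index, 2 (so $#foo + 1 == 3 as well)

print "Table size: ", scalar(@foo), "\n";
```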





random sub returns same results...

2003-04-05 Thread meriwether lewis
Hi Gurus!

I have the following sub which creates a random 
string. Each time I call the sub in my perl 
script it returns a random string which is fine. 
The problem is every time I run the script the 
same strings are returned. I'm running perl 
5.001 on Windows 2000.

sub RandomStr
{
    my $i = 0;
    my $str = "";
    foreach (0..9, 'a'..'z', 'A'..'Z')
    {   $i++;
        $arr[$i] = $_;
    }
    for ($j = 0; $j < rand(30); $j++)
    {
        $str .= $arr[rand(58) + 1];
    }
    return $str;
}

So if I run my script calling the above three 
times I get:

brLNh
96
x9k

If I run it again I get the same three strings.
Why?
Is there a better way to get a random string?

Thanks!

Meriwether




Help needed: Simple HASHES question

2003-04-05 Thread Ciprian Morar
1. What is the difference between Line #1 and Line #2?
2. Why is the Line #2 declaration incorrect?
use strict;
use CGI;

my %option;
$option{'q'} = new CGI;
#Line 1-
$option{'Mon'} = 'Monday';
#Line 2 -
$option->{'Tue'} = 'Tuesday';
print $option{'q'}->header(),
$option{'q'}->start_html();
print $option{'q'}->end_html;
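For reference, the difference being asked about comes down to a plain hash versus a hash reference; a minimal sketch (variable names reused from the question):

```perl
use strict;
use warnings;

# %option is a plain hash: no arrow is needed to store into it.
my %option;
$option{'Mon'} = 'Monday';            # like Line #1

# $option (the scalar) is a *different* variable; for the arrow to
# work it must hold a hash reference.
my $option = {};                      # anonymous hash reference
$option->{'Tue'} = 'Tuesday';         # like Line #2, valid on a reference

print "$option{'Mon'} / $option->{'Tue'}\n";
```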

#thanks---



Re: Help needed: Simple HASHES question

2003-04-05 Thread Ciprian Morar
sorry for the spam - my question has been answered
Hotmail was filtering the discussion list.
C
1. What is the difference between Line #1 and Line #2?
2. Why is the Line #2 declaration incorrect?
use strict;
use CGI;

my %option;
$option{'q'} = new CGI;
#Line 1-
$option{'Mon'} = 'Monday';
#Line 2 -
$option->{'Tue'} = 'Tuesday';
print $option{'q'}->header(),
$option{'q'}->start_html();
print $option{'q'}->end_html;

#thanks---





Re: random sub returns same results...

2003-04-05 Thread Li Ngok Lam
perldoc -f srand

- Original Message - 
From: meriwether lewis [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, April 05, 2003 10:09 AM
Subject: random sub returns same results...


 Hi Gurus!
 
 I have the following sub which creates a random 
 string. Each time I call the sub in my perl 
 script it returns a random string which is fine. 
 The problem is every time I run the script the 
 same strings are returned. I'm running perl 
 5.001 on Windows 2000.
 
 sub RandomStr
 {
     my $i = 0;
     my $str = "";
     foreach (0..9, 'a'..'z', 'A'..'Z')
     {   $i++;
         $arr[$i] = $_;
     }
     for ($j = 0; $j < rand(30); $j++)
     {
         $str .= $arr[rand(58) + 1];
     }
     return $str;
 }
 
 So if I run my script calling the above three 
 times I get:
 
 brLNh
 96
 x9k
 
 If I run it again I get the same three strings.
 Why?
 Is there a better way to get a random string?
 
 Thanks!
 
 Meriwether
 
 
 




Re: random sub returns same results...

2003-04-05 Thread Rob Dixon
Meriwether Lewis wrote:
 Hi Gurus!

 I have the following sub which creates a random
 string. Each time I call the sub in my perl
 script it returns a random string which is fine.
 The problem is every time I run the script the
 same strings are returned. I'm running perl
 5.001 on Windows 2000.

 sub RandomStr
 {
     my $i = 0;
     my $str = "";
     foreach (0..9, 'a'..'z', 'A'..'Z')
     {   $i++;
         $arr[$i] = $_;
     }
     for ($j = 0; $j < rand(30); $j++)
     {
         $str .= $arr[rand(58) + 1];
     }
     return $str;
 }

 So if I run my script calling the above three
 times I get:

 brLNh
 96
 x9k

 If I run it again I get the same three strings.
 Why?

Perl tries to 'seed' its random number generator from various
system values to make it different each time. Its first attempt
is to read a value from the system device /dev/urandom, which
will be used if the read is successful. Clearly this isn't working
properly in your case, so perhaps your urandom device is
misbehaving?

An alternative is that you may incorrectly be calling 'srand' to
seed the random number generator. If you pass a constant
to this function then the random number sequence will be
the same each time.
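That last point is easy to demonstrate; with a fixed seed, rand() replays exactly the same sequence:

```perl
use strict;
use warnings;

srand(42);                                # constant seed...
my @first  = map { int rand 100 } 1 .. 5;

srand(42);                                # ...same constant seed again
my @second = map { int rand 100 } 1 .. 5;

print "@first\n@second\n";                # two identical lines
```

Note also that very old perls (before 5.004) never seed the generator automatically, so on a perl as old as 5.001 the usual fix is a single srand call, e.g. srand(time ^ $$), near the top of the program.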

 Is there a better way to get a random string?

If this suits your purpose, then no. Your code could be laid out a
lot better though. Take a look at this for inspiration:

sub RandomStr {
    my $str = '';
    my @arr = (0..9, 'a'..'z', 'A'..'Z');
    $str .= $arr[rand @arr] for 1 .. rand(30);
    $str;
}

HTH,

Rob







Queue Suggestions?

2003-04-05 Thread Jeff Westman
I'm posed with a problem, looking for suggestions for possible resolution.  I
have a script that has many steps in it, including telnet & ftp sessions,
database unloads, and other routines.  This script will run on a server,
accessing a remote server.  This works fine.  I will likely have several
dozen (maybe as many as 100) iterations of this script running
simultaneously.  The problem is that there is a bottleneck towards the end
of my script -- I have to call a 3rd party process that is single-threaded.
This means that if I have ~100 versions of my script running, I can only have
one at a time execute the 3rd party software.  It is very likely that
multiple versions will arrive at this bottle-neck junction at the same time.
If I had more than one call the third party program, one will run, one will
lose, and die.

So I am looking for suggestions on how I might attack this problem.  I've
thought about building some sort of external queue (like a simple hash file).
The servers have numbers like server_01, server_02, etc.  When an iteration
of the script completes, it writes out its server name to the file, pauses,
then checks if any other iteration is running the third party software.  If
one is running, it waits, with its server name at the top of the file queue,
waiting.  A problem might be if, again, two or more versions want to update
this queue file, so I thought maybe a random-wait period before writing to
the file-queue.

I'm open to other ideas.  (Please don't suggest we rename or copy the third
party software, it just isn't possible.)  I'm not looking for code, per se,
but ideas I can implement that will guarantee I will always only have one
copy of the external third party software running (including pre-checks,
queues, etc.).

Thanks,

Jeff





sendmail not working

2003-04-05 Thread mel awaisi
Hello,

i have the script below, retest.pl, and when i try to run it i get nothing.

[EMAIL PROTECTED] cgi-bin]# perl retest.pl
[EMAIL PROTECTED] cgi-bin]#
-retest.pl
# Open Sendmail
open(MAIL, "|/usr/lib/sendmail -t");
# Write to the sendmail program
print MAIL "To: [EMAIL PROTECTED]\n";
print MAIL "From: [EMAIL PROTECTED]\n";
print MAIL "Subject: Your Subject\n\n";
print MAIL "Your message here\n";
# Close the sendmail program
close(MAIL);
---
Cheers
Mel



Re: Queue Suggestions?

2003-04-05 Thread Stefan Lidman
Hello,

maybe you can use flock:
perldoc -f flock

I have never used this and don't know if it works in your case.

/Stefan

 I'm posed with a problem, looking for suggestions for possible resolution.  I
 have a script that has many steps in it, including telnet & ftp sessions,
 database unloads, and other routines.  This script will run on a server,
 accessing a remote server.  This works fine.  I will likely have several
 dozen (maybe as many as 100) iterations of this script running
 simultaneously.  The problem is that there is a bottleneck towards the end
 of my script -- I have to call a 3rd party process that is single-threaded.
 This means that if I have ~100 versions of my script running, I can only have
 one at a time execute the 3rd party software.  It is very likely that
 multiple versions will arrive at this bottle-neck junction at the same time.
 If I had more than one call the third party program, one will run, one will
 lose, and die.
 
 So I am looking for suggestions on how I might attack this problem.  I've
 thought about building some sort of external queue (like a simple hash file).
 The servers have numbers like server_01, server_02, etc.  When an iteration
 of the script completes, it writes out its server name to the file, pauses,
 then checks if any other iteration is running the third party software.  If
 one is running, it waits, with its server name at the top of the file queue,
 waiting.  A problem might be if, again, two or more versions want to update
 this queue file, so I thought maybe a random-wait period before writing to
 the file-queue.
 
 I'm open to other ideas.  (Please don't suggest we rename or copy the third
 party software, it just isn't possible.)  I'm not looking for code, per se,
 but ideas I can implement that will guarantee I will always only have one
 copy of the external third party software running (including pre-checks,
 queues, etc.).
 
 Thanks,
 
 Jeff




Re: Queue Suggestions?

2003-04-05 Thread Wiggins d'Anconia
Jeff Westman wrote:
I'm posed with a problem, looking for suggestions for possible resolution.  I
have a script that has many steps in it, including telnet & ftp sessions,
database unloads, and other routines.  This script will run on a server,
accessing a remote server.  This works fine.  I will likely have several
dozen (maybe as many as 100) iterations of this script running
simultaneously.  The problem is that there is a bottleneck towards the end
of my script -- I have to call a 3rd party process that is single-threaded.
This means that if I have ~100 versions of my script running, I can only have
one at a time execute the 3rd party software.  It is very likely that
multiple versions will arrive at this bottle-neck junction at the same time.
If I had more than one call the third party program, one will run, one will
lose, and die.

So I am looking for suggestions on how I might attack this problem.  I've
thought about building some sort of external queue (like a simple hash file).
The servers have numbers like server_01, server_02, etc.  When an iteration
of the script completes, it writes out its server name to the file, pauses,
then checks if any other iteration is running the third party software.  If
one is running, it waits, with its server name at the top of the file queue,
waiting.  A problem might be if, again, two or more versions want to update
this queue file, so I thought maybe a random-wait period before writing to
the file-queue.

I'm open to other ideas.  (Please don't suggest we rename or copy the third
party software, it just isn't possible.)  I'm not looking for code, per se,
but ideas I can implement that will guarantee I will always only have one
copy of the external third party software running (including pre-checks,
queues, etc.).
Currently I am implementing a system that has similar features.
Initially we developed a set of 3 queues: a pre-processor that handles
many elements simultaneously; a middle queue (incidentally, the one that
handles external encryptions/decryptions), which is very slow (seconds
rather than milli- or microseconds); and a final queue that handles
sending of files over FTP/SMTP, which can be very, very slow (hours,
depending on FTP timeout limits... grrr, I know). For this we were
looking for essentially an event-based state machine concept, which
(thank god) led my searching to POE (since I keep mentioning it, this is
why):  http://poe.perl.org  After getting over the POE learning curve,
developing my queues was a snap. Because of business decisions we have
since moved to a 9-queue system (inbound/outbound sets, plus a
post-processing queue, plus a reroute queue (don't ask)). Essentially a
similar setup would work for you, where your middle queue would have a
threshold of 1 (aka only one process at a time), whereas all of our
stages are acceptable to have multiple versions running; we only want to
limit the number of encryption processes happening simultaneously
because of load rather than problems. You may also want to have a look
at the Event CPAN module; it provides similar but lower-level
functionality.

I can provide more details about the implementation of our system and
the development of our queues if you wish, but much to my dismay I
cannot provide source... Hopefully this will get you started in any
case; be sure to check out the examples POE provides, particularly the
multi-tasking process example.

http://danconia.org



Re: sendmail not working script

2003-04-05 Thread Wiggins d'Anconia
mel awaisi wrote:
Hi, this is the whole script:

#!/usr/bin/perl

use strict;
use warnings;
# Open Sendmail
open(MAIL, "|/usr/lib/sendmail -t");
# Write to the sendmail program
print MAIL "To: [EMAIL PROTECTED]\n";
print MAIL "From: [EMAIL PROTECTED]\n";
print MAIL "Subject: Your Subject\n\n";
print MAIL "Your message here\n";
# Close the sendmail program
close(MAIL);
As an aside, you shouldn't use sendmail directly to send messages from 
Perl. While in most cases it will work, it makes your scripts less 
portable and will lead to more problems; you should use one of the many 
modules available on CPAN: Mail::Mailer, MIME::Lite, Mail::Message, to 
name just a few...

Have you checked the path to sendmail? Are you sure the local sendmail 
server is running?  What does the maillog say? It should provide an 
error or delivery information.
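One way to make failures like this visible is to check every stage of the pipe. A minimal sketch (here `cat` merely stands in for the sendmail binary, whose path varies by system):

```perl
use strict;
use warnings;

# Open the pipe and check it; $! explains an open/exec failure.
open(my $mail, '|-', 'cat > /dev/null')
    or die "Cannot open pipe: $!";

print $mail "To: user\@example.com\n";
print $mail "Subject: test\n\n";
print $mail "Your message here\n";

# close() is where a failing child is reported; $? holds its exit status.
my $sent = close($mail);
die "Pipe failed: $! (exit status $?)" unless $sent;

print "message handed off\n";
```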

http://danconia.org



Re: sendmail not working

2003-04-05 Thread R. Joseph Newton
mel awaisi wrote:

 Hello,

 i have the script below, retest.pl, and when i try to run it i get nothing.

 [EMAIL PROTECTED] cgi-bin]# perl retest.pl
 [EMAIL PROTECTED] cgi-bin]#

 -retest.pl
 # Open Sendmail
 open(MAIL, "|/usr/lib/sendmail -t");
 # Write to the sendmail program
 print MAIL "To: [EMAIL PROTECTED]\n";
 print MAIL "From: [EMAIL PROTECTED]\n";
 print MAIL "Subject: Your Subject\n\n";
 print MAIL "Your message here\n";
 # Close the sendmail program
 close(MAIL);
 ---

 Cheers
 Mel

Mel, what is going on here?  A few days ago, you posted two scripts together,
one of which was apparently working well and doing what you are trying to do
with this.  That script used Net::SMTP and was much more elaborate and robust
than this one.  You indicated that that script was working, but that you
wanted it to work together with the other one.  Now, after combining the other
two, you have posted a completely different mail script, and one that could
not possibly be mistaken for the earlier one, if you had actually looked at
both.

You do not seem to have noticed the difference.  This indicates to me a
serious problem with your approach to using assistance.

Go back and look over your posts.  I assume you have a Sent-Mail file?

Folks on this list are trying to help people who are writing their own code.
We post examples so that people can follow the flow of the logic and see its
effects.  These are meant to be studied to understand the operations they
contain.  They are not meant to be simply pasted into your script as working
code.

Please post again, and try to explain in detail what you are learning about
the processes contained in these scripts.

Joseph





Re: Queue Suggestions?

2003-04-05 Thread Rob Dixon
Jeff Westman wrote:
 I'm posed with a problem, looking for suggestions for possible resolution.  I
 have a script that has many steps in it, including telnet & ftp sessions,
 database unloads, and other routines.  This script will run on a server,
 accessing a remote server.  This works fine.  I will likely have several
 dozen (maybe as many as 100) iterations of this script running
 simultaneously.  The problem is that there is a bottleneck towards the end
 of my script -- I have to call a 3rd party process that is single-threaded.
 This means that if I have ~100 versions of my script running, I can only have
 one at a time execute the 3rd party software.  It is very likely that
 multiple versions will arrive at this bottle-neck junction at the same time.
 If I had more than one call the third party program, one will run, one will
 lose, and die.

 So I am looking for suggestions on how I might attack this problem.  I've
 thought about building some sort of external queue (like a simple hash file).
 The servers have numbers like server_01, server_02, etc.  When an iteration
 of the script completes, it writes out its server name to the file, pauses,
 then checks if any other iteration is running the third party software.  If
 one is running, it waits, with its server name at the top of the file queue,
 waiting.  A problem might be if, again, two or more versions want to update
 this queue file, so I thought maybe a random-wait period before writing to
 the file-queue.

 I'm open to other ideas.  (Please don't suggest we rename or copy the third
 party software, it just isn't possible.)  I'm not looking for code, per se,
 but ideas I can implement that will guarantee I will always only have one
 copy of the external third party software running (including pre-checks,
 queues, etc.).

I don't think you need to get this complex Jeff. If your bottleneck were /at/
the end of the processing I would suggest a queue file as you describe, but
not as a means of synchronising the individual scripts. As its final stage each
script would simply append the details of its final operation to a serial file
and then exit. It would then be the job of a separate process to look at this
file periodically and execute any request which may have been written.
That will effectively serialise your operations.

However, since your process may not be able to exit straight away, what you
need, as Stefan says, is a simple dummy file lock. The following will do the
trick

use strict;
use Fcntl ':flock';

open my $que, '>>', 'queue'
    or die "Couldn't open lock file: $!";

flock $que, LOCK_EX or die "Failed to lock queue: $!";
do_single_thread_op();
flock $que, LOCK_UN;

close $que;

Fcntl is there solely to add the LOCK_EX and LOCK_UN identifiers. I've opened
the file for append so that the file will be created if it isn't already there, but
will be left untouched if it is. The 'flock' call to lock exclusively will wait
indefinitely until it succeeds, which means that the process has come to the
head of the queue. It then has sole access to your third-party process and can
use it as it needs to before unlocking the file, when the next process that it
may have been holding up will be granted its lock and can continue.

I hope this helps,

Rob







Re: Php perl?

2003-04-05 Thread R. Joseph Newton
George Schlossnagle wrote:

 ... Answering FUD

FUD = ?

Joseph





Re: Php perl?

2003-04-05 Thread George Schlossnagle
On Saturday, April 5, 2003, at 04:07  PM, R. Joseph Newton wrote:

George Schlossnagle wrote:

... Answering FUD
FUD = ?
Fear, Uncertainty and Doubt.  Basically unsubstantiated comments used 
to discredit a (competing) product.



Re: unix permissions

2003-04-05 Thread Wiggins d'Anconia
[EMAIL PROTECTED] wrote:
I'm writing a file browser script.  I've learned about chmod and unix
permission numbers and what they represent.  But how do I determine the
owner, group and other permission settings?

Is there a perl module that makes this chore easy?

perldoc -f stat
perldoc -f chmod
http://danconia.org



Re: unix permissions

2003-04-05 Thread R. Joseph Newton
[EMAIL PROTECTED] wrote:

 I'm writing a file browser script.  I've learned about chmod and unix
 permission numbers and what they represent.  But how do I determine the
 owner, group and other permission settings?

 Is there a perl module that makes this chore easy?

 Thanks
 Tricia

There are a few.  If you are interested specifically in the Unix permission
model, though, you might want to work with the output of the ls -al
command.  The permission string is at the left of each line returned and
has 10 characters: first the file type (d for a directory), then the r
[read], w [write] and x [execute] permissions for owner, group and world
respectively.

By taking the substring from each line, then, you can parse each for the
permission appropriate to each category.

Joseph





Re: unix windows permissions

2003-04-05 Thread Motherofperls
Does windows use the chmod command in the same manner as unix?


GD under Win32 - PPM missing?

2003-04-05 Thread Seldo
Hello everyone -

I'm fairly new to Perl, using Win32. I want to use perl to create some
graphs on the fly of data in a table in mySQL. Not so hard, huh? A
million people must have done this already. But the problem is, I
can't seem to get GD, short of compiling the bugger myself, which I
would really rather not do.

In summary: all the documentation seems to suggest that a) I should
have GD already and b) if I don't, it should be easy to install.
However, this is not the case, so I'm very confused and could use some
help :-)

-- Details --

I already have ActiveState Perl 5.8. A search of past mailing list
questions on this topic suggests that you should be able to install GD
using PPM. Activestate suggests this too:

http://aspn.activestate.com/ASPN/Modules/Perl/dist_html?dist_id=8887

But when I do "ppm search GD" I get a list of five GD-related modules,
but not GD itself. What's up with that? Is it because I don't have the
GD library installed? And before you ask, no, it's not because GD is
already installed - I tried that! I don't have GD.pm anywhere useful.

Also, I'm fairly sure that the perl GD module is just the interface to
the GD library. I've found the GD library for Win32, I think, at the
GNUWin32 project:

http://sourceforge.net/project/showfiles.php?group_id=23617&release_id=29152

But having downloaded it, what am I supposed to do with it? Just slap
the DLL into my windows/system32 directory?

Having hit these problems, I made the assumption that my PPM is broken
for some reason and downloaded GD from CPAN manually. I have no idea
how to get things to install using this method, either, but the README
told me this:

3. Does GD run with MacPerl/Win32 Perl?

   Yes.  The latest MacPerl and ActiveState binaries come with GD
   already compiled in and ready to go.

Which sounds nice enough, but if that's the case, why does "use GD"
make my scripts fall over because of dependency issues?

--- end details --

Any help appreciated!

Seldo.

  Seldo Voss: www.seldo.com
  ICQ #1172379 or [EMAIL PROTECTED]

If you can't find the time to do it right the first time, when will you find the time 
to do it over?





Re: unix permissions

2003-04-05 Thread Wiggins d'Anconia
R. Joseph Newton wrote:
[EMAIL PROTECTED] wrote:


I'm writing a file browser script.  I've learned about chmod and unix
permission numbers and what they represent.  But how do I determine the
owner, group and other permission settings?
Is there a perl module that makes this chore easy?

Thanks
Tricia


There are a few.  If you are interested specifically in the Unix permission
model, though, you might want to work with the output of the ls -al
command.  The permission string is at the left of each line returned, and
has 10 characters, first the directory, then the r [read] w [write] and x
[execute] permissions for owner, group and world respectively.
By taking the substring from each line, then, you can parse each for the
permission appropriate to each category.
This is the *wrong* way to approach getting the permissions for the 
file; the 'stat' builtin will do the same thing and requires neither 
shelling out nor screen scraping.

perldoc -f stat
perldoc File::stat
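A minimal sketch of pulling the permission bits out with the stat builtin (the temp file exists only to give the example something with known permissions):

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);   # core module, used only for the demo file

my ($fh, $path) = tempfile();
chmod 0644, $path or die "chmod failed: $!";

my $mode  = (stat $path)[2];   # field 2 of stat is the mode word
my $perms = $mode & 07777;     # mask off the file-type bits

printf "%04o\n", $perms;       # prints "0644"

# Split into the owner / group / other classes:
my $owner = ($perms & 0700) >> 6;   # 6 = rw-
my $group = ($perms & 0070) >> 3;   # 4 = r--
my $other =  $perms & 0007;         # 4 = r--
```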
http://danconia.org



Re: Queue Suggestions?

2003-04-05 Thread Jeff Westman
Rob,

I think you're right.  I think the idea would be to have the server name
next-to-be-processed appended to the file, then have the next step call a
single separate script (start it if not already running, otherwise simply
wait) that would lock the control file; this script would be the single entry
point to the 3rd party software, constraining the processes to run only one
at a time.  My thinking before was to have this be part of every script (as a
last step), but then it got really complicated thinking about queues, random
wait times and then checking, double checking, etc.

Sometimes simpler is better.  Thanks for the suggestion!

-Jeff

___

 I don't think you need to get this complex Jeff. If your
 bottleneck were /at/ the end of the processing I would suggest a
 queue file as you describe, but not as a means of synchronising
 the individual scripts. As its final stage each script would
 simply append the details of its final operation to a serial file
 and then exit. It would then be the job of a separate process to
 look at this file periodically and execute any request which may
 have been written. That will effectively serialise your
 operations.
 
 However, since your process may not be able to exit straight
 away, what you need, as Stefan says, is a simple dummy file lock.
 The following will do the trick
 
 use strict;
 use Fcntl ':flock';
 
 open my $que, '>>', 'queue'
     or die "Couldn't open lock file: $!";
 
 flock $que, LOCK_EX or die "Failed to lock queue: $!";
 do_single_thread_op();
 flock $que, LOCK_UN;
 
 close $que;
 
 Fcntl is there solely to add the LOCK_EX and LOCK_UN identifiers.
 I've opened the file for append so that the file will be created
 if it isn't already there, but will be left untouched if it is.
 The 'flock' call to lock exclusively will wait indefinitely until
 it succeeds, which means that the process has come to the head of
 the queue. It then has sole access to your third-party process
 and can use it as it needs to before unlocking the file, when the
 next process that it may have been holding up will be granted its
 lock and can continue.
 
 I hope this helps,
 
 Rob
 

