if you say 'multi thread' you say POE....
it's an excellent module for this kind of thing in perl (strictly speaking it
gives you event-driven, cooperative multitasking rather than real OS threads,
but it solves the same class of problems). you can read some about it here:
http://www.perl.com/pub/2001/01/poe.html

it will take you a bit to wrap your brain around it, but i assure you it's worth
it... of course, feel free to post questions about it to the list
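to give you a rough idea, here's a tiny sketch of two POE sessions counting
side by side (the session layout and event names are just made up for the
example, it assumes you have POE installed from CPAN):

#!/usr/bin/perl
use strict;
use warnings;
use POE;    # brings in POE::Kernel, POE::Session and the KERNEL/HEAP constants

# two sessions counting side by side, cooperatively scheduled by POE
for my $name (qw(first second)) {
    POE::Session->create(
        inline_states => {
            _start => sub {
                $_[HEAP]{name}  = $name;
                $_[HEAP]{count} = 0;
                $_[KERNEL]->yield('tick');
            },
            tick => sub {
                print "$_[HEAP]{name}: ", ++$_[HEAP]{count}, "\n";
                $_[KERNEL]->delay(tick => 1) if $_[HEAP]{count} < 5;
            },
        },
    );
}

POE::Kernel->run();    # doesn't return until all sessions are finished

the point is that nothing blocks: the kernel hands control back and forth
between the sessions, which is what passes for 'threads' here.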

you can get the latest download from either sourceforge.net or from poe.perl.org

i recently made a ppm for windows to install the latest version, which should be
available for public download shortly as well

hth,

Jos Boumans



Rajeev Rumale wrote:

> Dear Chas,
>
> Thank you very much for the suggestion. I am very much convinced by this
> and would like to proceed in the same direction.
>
> I prefer to develop the whole application in a single language, as far as
> possible.  Since I am quite new to Perl, I would like to know if we can
> write multi-threaded programs in Perl.
>
> I would be grateful if anyone could suggest a good online tutorial for
> the same.
>
> with regards
>
> Rajeev Rumale
>
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Rajeev Rumale
> MyAngel.Net Pte Ltd., #04-01, 180 B, The Bencoolen,
> Bencoolen Street, Singapore - 189648
> Phone   : (65)8831530 (office)
> Email   : [EMAIL PROTECTED]
> ICQ     : 121001541
> Website : www.myangel.net
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> ----- Original Message -----
> From: "Chas Owens" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Thursday, June 21, 2001 7:34 PM
> Subject: Re: Pooling of objects and session data
>
> > On 21 Jun 2001 15:16:19 +0800, Rajeev Rumale wrote:
> > > Hi,
> > >
> > > I need to know if there is an easy way to keep session data or objects
> > > across scripts.
> > >
> > > Basically I would like to pool database connections so that parallel
> > > running scripts don't open multiple connections to the database.
> > >
> > > with regards
> > >
> > <snip />
> >
> > The only way I can think of to achieve this would be to write a daemon
> > process in perl (or any other language for that matter) that would be
> > responsible for accessing the database based on requests (through IPC,
> > BSD style sockets, files being placed in certain directories, smoke
> > signals, whatever) and returning the data (again through some
> > communication method).  I have seen production systems (not that I
> > recommend this) that had a special set of directories named /work/in and
> > /work/out.  Shell scripts would print sql statements to files in the
> > /work/in dir and a C daemon would: pick them up, see if they were from
> > the right owner, discard the invalid files, run the valid ones and put
> > the results in the /work/out dir.  Filenames were based on the pid of
> > the shell script.  The shell script would then sit waiting for a file
> > with its pid to show up in the /work/out directory.
> >
> > In general, if you are having to create hacks like this, the problem is
> > most likely your choice of RDBMS.  Enterprise-level databases generally
> > don't have a problem with thousands of concurrent connections.
> >
> > --
> > Today is Boomtime, the 26th day of Confusion in the YOLD 3167
> > Or is it?
> >
> >
> >
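ps. if you ever wanted to prototype the client side of the /work/in and
/work/out scheme chas describes above in perl, it would look roughly like
this (the paths and the 'results come back under your pid' convention are
just assumptions based on his description):

#!/usr/bin/perl
# rough sketch of a client for the file-drop scheme described above;
# the directory names and the one-file-per-pid protocol are assumptions.
use strict;
use warnings;

my $in_dir  = '/work/in';
my $out_dir = '/work/out';
my $request = "$in_dir/$$";      # filename based on our pid
my $reply   = "$out_dir/$$";

# drop the sql statement where the daemon will pick it up
open my $req, '>', $request or die "can't write $request: $!";
print $req "select count(*) from sessions\n";
close $req;

# wait for the daemon to leave the results in /work/out
until (-e $reply) { sleep 1 }

open my $res, '<', $reply or die "can't read $reply: $!";
print while <$res>;
close $res;
unlink $reply;                   # clean up after ourselves

but as chas says, a database that copes with many concurrent connections is
the saner fix.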
