Re: Whither gozer?

2001-01-13 Thread G.W. Haywood

Hi all,

On Fri, 12 Jan 2001, George Sanderson wrote:

 I have been trying to get in touch with Philippe for about 6 months with no
 success.

Try:

http://www.smartworker.com/projects/financial.html

which gives:

[EMAIL PROTECTED]


73,
Ged.





Re: [OT] Availability of Jobs -- was Re: [SOLICITATION] Programmer available for contracting..

2001-01-13 Thread Gunther Birznieks


  I notice that there have been many more postings from job seekers than
  job offers in the last few weeks, whereas it used to be many more jobs
  wanting mod_perl than seekers of jobs.
 
  Is this an odd time of year for many contractors where the contract ends
  around the holiday season? Or is this starting to be a symptom of dotcoms
  going bust and the development market starting to level out?
 
  Or perhaps I am being a bit paranoid. :)

I have gotten some public responses, but more private questions asking what 
the response has been like, since most people who have a sad story to tell 
usually won't tell it in public.

To summarize (and to end the thread -- at least the private responses to me, 
which aren't necessary)...

Everything that I have been emailed indicates that there are plenty of jobs 
still out there for people on this list.

However, the emails to me may be summarized by saying that the market is 
not what it used to be. This seems to have affected the number of jobs 
paying top dollar more than anything else specific.

I think the emails I received were from people who were technical enough to 
be on this list and be interested in mod_perl (more techie than most 
probably). So there are always going to be jobs for people like those -- 
that is my belief.

Stas pointed out F*ckedCompany.com which people have mixed feelings about.

I think F*ckedCompany does show that there are a lot of jobs that have been 
lost. And clearly the barrage of medium-size eBusiness consulting firms 
such as Scient laying off so many people is also a blow.

However, with that said, my startup in Singapore has gone to multiple dead 
startups here and interviewed their techs to get them to come to us after 
the company announced it was closing.

What I have found in interviewing the leftover techs is that, out of a 
company that might have 20 techs, they are usually not very bright and know 
how to do only one thing really well (maybe all they know how to do is 
Oracle DBA work), and they would clearly need a lot of management if we were 
to hire them.

The people from a dead startup who are really good -- well, the story is 
that they usually left a bit before the startup died (not the cause, but 
they were savvy enough to smell the coffee) or were easily hired away 
beforehand. The people who are left are either slow, or they have already 
promised to go elsewhere when the startup finally winds up but are staying 
to help finish things out and possibly aid in selling off the assets (one 
major asset is usually the IP bound up in the software that was developed, 
which usually needs a bit of cleanup by a good programmer).

Others have also said (appropriately) that this is bonus season, so some 
may be looking around before their bonus arrives so they can be prepared to 
resign if a better opportunity comes along.

I guess that makes sense and I should have thought of that. After all I 
resigned my job last year as soon as my bonus was wire transferred from 
London to my US account.

Anyway, sorry for bugging Jeffrey and others with the off-topic thread. I 
felt I should summarize the responses I got because I still get private 
ones, so clearly there are people interested.

But I would rather not get any more responses about individual stories sent 
to me in private. So please don't send me any more private responses unless 
you really have something new that hasn't been mentioned here. :)

Thanks,
  Gunther




Re: Whither gozer?

2001-01-13 Thread Richard Dice

 Try:
 
 http://www.smartworker.com/projects/financial.html

I think you mean www.smartworker.org
 
 which gives:
 
 [EMAIL PROTECTED]

You can also try [EMAIL PROTECTED]

Cheers,
Richard



Re: [ANNOUNCE] Apache-AuthzCache 0.03

2001-01-13 Thread George Sanderson

When a request that requires authorization is received,
Apache::AuthzCache looks up the REMOTE_USER in a shared-memory
cache (using IPC::Cache) and compares the list of groups in the
cache against the groups enumerated within the "require"
configuration directive. If a match is found, the handler returns
OK and clears the downstream Authz handlers from the
stack. Otherwise, it returns DECLINED and allows the next
PerlAuthzHandler in the chain to be called.
 
After the primary authorization handler completes with an OK,
Apache::AuthzCache::manage_cache adds the new group (listed in
REMOTE_GROUP) to the cache.
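
Roughly, that flow looks like this (a sketch only, not the module's actual
code; it assumes the mod_perl 1.x API, the IPC::Cache options shown, and a
space-separated group list as the cached value):

  package My::AuthzCacheSketch;
  use strict;
  use Apache::Constants qw(OK DECLINED);
  use IPC::Cache;

  my $cache = IPC::Cache->new({ namespace => 'AuthzCache', expires_in => 600 });

  sub handler {
      my $r    = shift;
      my $user = $r->connection->user;

      my $groups = $cache->get($user);          # e.g. "staff admins"
      return DECLINED unless defined $groups;
      my %member = map { $_ => 1 } split ' ', $groups;

      # Compare against the "require group ..." directives.
      for my $req (@{ $r->requires || [] }) {
          my ($type, @want) = split ' ', $req->{requirement};
          next unless $type eq 'group';
          if (grep { $member{$_} } @want) {
              $r->set_handlers(PerlAuthzHandler => undef);  # skip downstream authz
              return OK;
          }
      }
      return DECLINED;    # let the real authorization handler run
  }
  1;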

I would like the module to be able to cache selected environment variables
along with the user and group information.
A directive could be used to select a list of environment variables to be
cached.  For example:

AuthzCacheOption CacheEnv  REMOTE_EMAIL SPECIAL_VAL FIELD23

Then, when subsequent authorization requests are processed, these environment
variables would be set.  The downstream handlers could then access the
variables without having to go back to the original data source (which may
change).
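
A hypothetical sketch of how such cached variables might be restored (the
directive and variable names come from the example above; the helper and the
cache layout are illustrative only):

  # Hypothetical illustration only -- not existing Apache::AuthzCache code.
  # Assumes the cache holds a per-user hash like
  #   { REMOTE_EMAIL => '...', SPECIAL_VAL => '...', FIELD23 => '...' }
  my @cache_env = qw(REMOTE_EMAIL SPECIAL_VAL FIELD23);  # from "AuthzCacheOption CacheEnv ..."

  sub restore_cached_env {
      my ($r, $cached) = @_;
      for my $name (@cache_env) {
          next unless defined $cached->{$name};
          # Make the value visible to downstream handlers (and CGI/SSI)
          # without a trip back to the original data source.
          $r->subprocess_env($name => $cached->{$name});
      }
  }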

I have done some work with AuthzDBI.pm to provide this functionality.  I
hope to work with Edmund Mergl to get it added to Apache::AuthzDBI.pm.





[ANNOUNCE] HTTP::WebTest 0.01 released to CPAN

2001-01-13 Thread Carl Lipo


HTTP::WebTest (by Richard Anderson) was released to CPAN today and is
available for download. It is a module for creating automated unit tests
for Apache::ASP pages (and others). Here is the announcement posted to
comp.lang.perl.announce and comp.lang.perl.modules:

NAME
HTTP::WebTest - Test remote URLs or local web files

DESCRIPTION
This module runs tests on remote URLs or local web files containing
Perl/HTML/JavaScript/etc. and generates a detailed test report. The
test specifications can be read from a parameter file or input as
method arguments. If you are testing a local file, Apache is
started on a private/dynamic port with a configuration file in a
temporary directory.  The module displays the test results on the
terminal by default or directs them to a file. The module can also
optionally e-mail the test results. When the calling program
exits, the module stops the local instance of Apache and deletes
the temporary directory.

Each test consists of literal strings or regular expressions that
are either required to exist or forbidden to exist in the fetched
page. You can also specify tests for the minimum and maximum number
of bytes in the returned page. If you are testing a local file, the
module checks the error log in the temporary directory before and
after the file is fetched from Apache. If messages are written to
the error log during the fetch, the module flags this as an error
and writes the messages to the output test report.

SYNOPSIS
 This module can accept input parameters from a parameter file or
 subroutine arguments.

 TO RUN WEB TESTS DEFINED BY SUBROUTINE ARGUMENTS:

 use HTTP::WebTest; run_web_test(\@web_tests, \$num_fail,
\$num_succeed, \%test_options)

 or

 use HTTP::WebTest; run_web_test(\@web_tests, \$num_fail,
\$num_succeed)

 TO RUN WEB TESTS DEFINED BY A PARAMETER FILE:

 use sigtrap qw(die normal-signals); # Recommended, not necessary
 use HTTP::WebTest; $webtest = HTTP::WebTest->new();
 $webtest->web_test('my_web_tests.wt', \$num_fail, \$num_succeed);

 The web_test() method has an option to test a local file by
 starting Apache on a private port, copying the file to a temporary
 htdocs directory and fetching the page from Apache.  If you are
 testing with multiple parameter files, you can avoid restarting
 Apache each time by calling new() only once and recycling the
 object:

 use sigtrap qw(die normal-signals); # Recommended, not necessary
 use HTTP::WebTest;
 $webtest = HTTP::WebTest->new();
 foreach $file (@ARGV) {
     $webtest->web_test($file, \$num_fail, \$num_succeed);
 }

 TO ENABLE DEBUGGING MESSAGES (OUTPUT TO STDOUT):

 If you are calling the web_test method, use the debug parameter.
 If you are calling the run_web_test method, do this:

 use HTTP::WebTest;
 $HTTP::WebTest::Debug = 1; # Diagnostic messages
 $HTTP::WebTest::Debug = 2; # Messages and preserve temp Apache dir
 run_web_test(\@web_tests, \$num_fail, \$num_succeed)

RESTRICTIONS / BUGS
This module only works on Unix (e.g., Solaris, Linux, AIX, etc.).
The module's HTTP requests time out after 3 minutes (the default
value for LWP::UserAgent). If the file_path parameter is specified,
Apache must be installed. If the file_path parameter is specified,
the directory /tmp cannot be NFS-mounted, since Apache's lockfile
and the SSL mutex file must be stored on a local disk.

VERSION
This document describes version 0.01, release date 13 January 2001.

TODO
Add option to validate HTML syntax using HTML::Validator. Add
option to check links (see
http://world.std.com/~swmcd/steven/perl/pm/lc/linkcheck.html).

AUTHOR
 Richard Anderson [EMAIL PROTECTED]

COPYRIGHT
Copyright (c) 2000 Richard Anderson. All rights reserved. This
module is free software. It may be used, redistributed and/or
modified under the terms of the Perl Artistic License.

[EMAIL PROTECTED]  RayCosoft, LLC
Perl/Java/Oracle/Unix software engineering    www.unixscripts.com
www.zipcon.net/~starfire/home Seattle, WA, USA







Looking for a new distro

2001-01-13 Thread Jamie Krasnoo

Ok, I've had it with RH 7.0. Too many problems. What Linux distro are some
of you using with Apache 1.3.14 and mod_perl 1.24_01?

Jamie




Re: Looking for a new distro

2001-01-13 Thread dreamwvr

Try ..
www.linux-mandrake.com
Jamie Krasnoo wrote:

 Ok, I've had it with RH 7.0. Too many problems. What Linux distro are some
 of you using with Apache 1.3.14 and mod perl 1.24_01?

 Jamie






Re: Looking for a new distro

2001-01-13 Thread Sean D. Cook

On Sat, 13 Jan 2001, dreamwvr wrote:

 Try ..
 www.linux-mandrake.com
 Jamie Krasnoo wrote:
 

Not to turn this into a distro war, but Mandrake is not exactly known for
its stability.  RedHat is currently in a .0 release, which is not stable at
all.  Debian and Slackware are really your best bet for
stability.  Personally I recommend Slackware.  If you really want to stick
with RedHat you can get a copy of 6.2; it is pretty standard.




 -- 
Sean Cook
Systems Analyst
Edutest.com

Phone: 804.673.2253 / 1.888.335.8378
email: [EMAIL PROTECTED]
__
Save the whales.  Collect the whole set.




Re: Looking for a new distro

2001-01-13 Thread Tom Kralidis

Ok, I've had it with RH 7.0. Too many problems. What Linux distro are some
of you using with Apache 1.3.14 and mod perl 1.24_01?

Jamie

I have Apache 1.3.14 and mod_perl 1.24_01 (as DSO) working fine under 
RedHat 6.2.

..Tom
http://www.kralidis.ca/
_
Get Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.




Re: Looking for a new distro

2001-01-13 Thread Joshua Chamas

Jamie Krasnoo wrote:
 
 Ok, I've had it with RH 7.0. Too many problems. What Linux distro are some
 of you using with Apache 1.3.14 and mod perl 1.24_01?
 

I haven't used it yet, but I would eagerly be looking for
one that has reiserfs compiled in.  Is that a debian distro?

I'm sitting on a redhat 6.2 distro and fsck'ing a 20G ext2fs 
raid-1 partition is not my idea of happy downtime. :(

-- Josh

_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks - free web link monitoring    Huntington Beach, CA  USA 
http://www.nodeworks.com                1-714-625-4051



Re: Looking for a new distro

2001-01-13 Thread Wim Kerkhoff

For some reason, I think this will be a long (but on-topic) thread...

On Sat, Jan 13, 2001 at 12:32:44PM -0800, Jamie Krasnoo wrote:
 Ok, I've had it with RH 7.0. Too many problems. What Linux distro are some
 of you using with Apache 1.3.14 and mod perl 1.24_01?


I use Debian (potato/woody/sid). Getting a good development environment is a snap, and 
I've never lost time figuring out weird kernel/glibc/perl problems.  Personally I 
prefer to do a very minimal install of debian, then just apt-get whatever I need (ssh, 
vim, etc). Perl modules can be installed using apt-get, but it's just as easy to get 
them using the CPAN module.  Not having to hunt down dependencies and download 
separate RPMs is something you get used to _very_ quickly!  Within an hour, I can 
have debian installed, a new kernel compiled, and apache/mod_perl compiled and 
installed.
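
For example, a module can be pulled in non-interactively with something like
this (a sketch assuming the stock CPAN.pm that ships with Perl; LWP is just
an example module):

  # Shell one-liner equivalent: perl -MCPAN -e 'install("LWP")'
  use CPAN;
  CPAN::Shell->install('LWP');   # fetches, builds and installs LWP plus prerequisites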

/me goes back to getting Perl 5.6, LWP, and other things to work properly on a RH 7.0 
box... ARGH!!

Regards,

Wim Kerkhoff 

Software Engineer
Merilus, Inc.  



Re: Looking for a new distro

2001-01-13 Thread J. J. Horner

* Jamie Krasnoo ([EMAIL PROTECTED]) [010113 17:20]:
 Ok, I've had it with RH 7.0. Too many problems. What Linux distro are some
 of you using with Apache 1.3.14 and mod perl 1.24_01?
 
 Jamie
 

I use Redhat 6.2.  I put 7.0 on my laptop, and it worked okay, but I only do perl
on my laptop.  If I were to code C (not very often), it would be on a 6.2 box.

I haven't looked into the egcs issue with RedHat 7.0, but I know RedHat issued an
"apology".

JJ

-- 
J. J. Horner
[EMAIL PROTECTED]

Apache, Perl, mod_perl, Web security, Linux


 PGP signature


Re: Looking for a new distro

2001-01-13 Thread Wim Kerkhoff

On Sat, Jan 13, 2001 at 02:36:21PM -0800, Joshua Chamas wrote:
 Jamie Krasnoo wrote:
  
  Ok, I've had it with RH 7.0. Too many problems. What Linux distro are some
  of you using with Apache 1.3.14 and mod perl 1.24_01?
  
 
 I haven't used it yet, but I would eagerly be looking for
 one that has reiserfs compiled in.  Is that a debian distro?
 
 I'm sitting on a redhat 6.2 distro and fsck'ing a 20G ext2fs 
 raid-1 partition is not my idea of happy downtime. :(

I got fed up with the corrupted ext2fs filesystems on my workstation a month or two 
ago, and converted to reiserfs.

As far as I know, debian doesn't have ReiserFS built into the kernel on the install 
disks.  What most people do, and what I did, is something like the following:

- do a very minimal install of debian to a small (200-300 MB) partition. 
- grab the kernel source and the reiserfs patch
- build the kernel, modules, and reisertools. 
- reboot with the new kernel
- mkreiserfs /dev/hda3 (or whatever partitions you want to be reiser)
- mount -t reiserfs -o notail /dev/hda3 /mnt/new
- copy the directories in / (except /mnt/new, /proc, etc) to /mnt/new
- mkdir /mnt/new/proc
- modify /mnt/new/etc/fstab accordingly
- reboot, but at the lilo prompt pass root=/dev/hda3
- edit /etc/lilo.conf as required

I've created a 20 MB /boot as ext2fs.

The instructions above are off the top of my head. I think better instructions are at 
debianplanet.org, but I can't remember where.

Regards,

Wim Kerkhoff 
[EMAIL PROTECTED]



Re: Looking for a new distro

2001-01-13 Thread G.W. Haywood

Hi Jamie,

On Sat, 13 Jan 2001, Jamie Krasnoo wrote:

 Ok, I've had it with RH 7.0. Too many problems. What Linux distro
 are some of you using with Apache 1.3.14 and mod perl 1.24_01?

I had hardware troubles with 6.2 last year on one particular type of
machine and went back to 6.1 which was fine.  I use 6.1/6.2 on many
machines professionally, not sure of the split, several of them using
1.3.14/1.24_01.  My personal preference has always been for Slackware
but I'm running mostly older Linux/Apache/mod_perls on those systems,
and only four or five of them.  One of them has been running for two
years non-stop, the others only get booted every few months just for
the hell of it.  Never corrupted a filesystem except when it was my
own silly fault for playing with LILO.  Only do that in development!

Everything is Perl 5.005_03 except for one machine with 5.6 which is
in development.  No troubles to speak of with that but I wouldn't risk
it on 400,000 users yet.  MySQL on some of the machines, Oracle on
others, a few oddities with both.  Berkeley DB on some, ntpd, sendmail
8.10, emacs 19 etc. on most, no problems.  Netscape crashes all the
time but it never takes the OS down with it.  Only use it to test
Websites anyway.  XFree86 crashes occasionally and sometimes it does
trip the OS.  Only use it to run Netscape.  But you didn't ask about
any of them.  There are probably loads of packages I just can't think
of right now, sorry guys if you're piqued but thanks anyway.

Everything is built from source using Gnu (including the kernels on
the Slackware machines, but the RH systems are often out of the box).

Considering what a truckload of software is running on these machines
we really don't deserve the way they just run and run without trouble.

A few people like FreeBSD/Debian/SuSE but I've no experience of them.

HTH

73,
Ged.









Re: Looking for a new distro

2001-01-13 Thread Matt Sergeant

On Sat, 13 Jan 2001, Jamie Krasnoo wrote:

 Ok, I've had it with RH 7.0. Too many problems. What Linux distro are some
 of you using with Apache 1.3.14 and mod perl 1.24_01?

I replied privately as I think it's more appropriate...

Can we kill this thread now, before it spirals out of control?

Matt.




Re: Looking for a new distro

2001-01-13 Thread Clayton Cottingham aka drfrog

heya all:

im really waiting for the next bunch of releases here,

im hoping that they all start doing a lil more QA before release!!

ive tried a lot of different dists and nothing has me going
"oh yeah!!"

mandrake 7.2 is my current 
"lesser of  Nth evils"

last summer i went to town downloading everything from 
debian to suse and back to slack and redhat 

at the time i started using mandrake its mod_perl was set up nice

its not bad on 7.2 but it splits up so there is an httpd and an httpd-perl

i dont like having to configure 
both so i un-rpm'd them and rolled my own, DSO style
install in less than thirty mins
drop my config in and zoom!!

again some of this has to do with how a dist sets up apache
i like it all in /home/ww 
but most put the conf files under /etc etc etc.

some of these compatibility & file management issues drive me nuts!!
and i usually uninstall and recomp my own

well that's my take on it anyhow


-- 
back in the day
we didn't have no
old school
-dr. frog





Re: Looking for a new distro

2001-01-13 Thread G.W. Haywood

Hi Matt,

On Sat, 13 Jan 2001, Matt Sergeant wrote:

 Can we kill this thread now, before it spirals out of control?

Yeah, thanks Matt.  It *is* late.

73,
Ged.




RE: Looking for a new distro

2001-01-13 Thread Nathan Poole

We use Slackware (oi oi oi), have been since about '95, I love it :)

7.1 on a few machines, but 4.0 on most, Apache 1.3.14 and mod_perl 1.24_01
on nearly all of the important ones.

We don't have a mind-boggling load, but several machines with a whole bunch
of software/performing many different functions (including development at
the same time) run for 6 months at a time on average.

I believe there are reiserfs patches, but the only colleague I've seen try
it out had a few probs with sendmail (overlapping messages etc);
everything else worked fine.

Cheers,

NP

-Original Message-
From: Joshua Chamas [mailto:[EMAIL PROTECTED]]
Sent: Sunday, 14 January 2001 9:36 AM
To: Jamie Krasnoo
Cc: Modperl
Subject: Re: Looking for a new distro


Jamie Krasnoo wrote:

 Ok, I've had it with RH 7.0. Too many problems. What Linux distro are some
 of you using with Apache 1.3.14 and mod perl 1.24_01?


I haven't used it yet, but I would eagerly be looking for
one that has reiserfs compiled in.  Is that a debian distro?

I'm sitting on a redhat 6.2 distro and fsck'ing a 20G ext2fs
raid-1 partition is not my idea of happy downtime. :(

-- Josh

_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks - free web link monitoring    Huntington Beach, CA  USA
http://www.nodeworks.com                1-714-625-4051




Re: Looking for a new distro

2001-01-13 Thread ken_i_m

At 12:32 PM 1/13/01 -0800, you wrote:
Ok, I've had it with RH 7.0. Too many problems. What Linux distro are some
of you using with Apache 1.3.14 and mod perl 1.24_01?

Jamie

I have built my own using the Linux From Scratch guidebook. Some of those 
on the mailing list have incorporated reiserfs, others are using the glibc 
2.2 library. I am currently building a router for my LAN with the official 
release of the 2.4.0 kernel.

Another thing I really like about building from source tarballs is that I 
am not stuck with default installs of Sendmail, wu-ftp, telnet and many of 
the other security problems the major distros foist onto users.

I also get to choose how bleeding edge I want to be (which is not very). I 
know where everything is because I put it there.  And many, many other 
benefits.



I think, therefore, ken_i_m




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2001-01-13 Thread Gunther Birznieks

I have just gotten around to reading this thread I've been saving for a 
rainy day. Well, it's not rainy, but I'm finally getting to it. Apologies 
to those who hate it when people don't snip their reply mails, but I am 
including it so that the entire context is not lost.

Sam (or others who may understand Sam's explanation),

I am still confused by this explanation of MRU helping when there are 10 
processes serving 10 requests at all times. I understand MRU helping when 
the processes are not at max, but I don't see how it helps when they are at 
max utilization.

It seems to me that if the wait is the same for mod_perl backend processes 
and SpeedyCGI processes, it doesn't matter if some of the SpeedyCGI 
processes cycle earlier than the mod_perl ones, because all 10 will always 
be used.

I did read and reread (once) the snippets about modeling concurrency and 
the httpd waiting on accept(). But I still don't understand how MRU helps 
when all the processes would be in use anyway. At that point they all have 
an equal chance of being called.

Could you clarify this with a simpler example? Maybe 4 processes and a 
sample timeline of what happens to those when there are enough requests to 
keep all 4 busy all the time for speedyCGI and a mod_perl backend?

At 04:32 AM 1/6/01 -0800, Sam Horrocks wrote:
   Let me just try to explain my reasoning.  I'll define a couple of my
   base assumptions, in case you disagree with them.
  
   - Slices of CPU time doled out by the kernel are very small - so small
   that processes can be considered concurrent, even though technically
   they are handled serially.

  Don't agree.  You're equating the model with the implementation.
  Unix processes model concurrency, but when it comes down to it, if you
  don't have more CPU's than processes, you can only simulate concurrency.

  Each process runs until it either blocks on a resource (timer, network,
  disk, pipe to another process, etc), or a higher priority process
  pre-empts it, or it's taken so much time that the kernel wants to give
  another process a chance to run.

   - A set of requests can be considered "simultaneous" if they all arrive
   and start being handled in a period of time shorter than the time it
   takes to service a request.

  That sounds OK.

   Operating on these two assumptions, I say that 10 simultaneous requests
   will require 10 interpreters to service them.  There's no way to handle
   them with fewer, unless you queue up some of the requests and make them
   wait.

  Right.  And that waiting takes place:

 - In the mutex around the accept call in the httpd

 - In the kernel's run queue when the process is ready to run, but is
   waiting for other processes ahead of it.

  So, since there is only one CPU, then in both cases (mod_perl and
  SpeedyCGI), processes spend time waiting.  But what happens in the
  case of SpeedyCGI is that while some of the httpd's are waiting,
  one of the earlier speedycgi perl interpreters has already finished
  its run through the perl code and has put itself back at the front of
  the speedycgi queue.  And by the time that Nth httpd gets around to
  running, it can re-use that first perl interpreter instead of needing
  yet another process.

  This is why it's important that you don't assume that Unix is truly
  concurrent.
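
A toy Perl model of that point (purely illustrative -- not SpeedyCGI or
mod_perl code): when requests are handled one at a time, an MRU pool keeps
re-using the interpreter that just finished, while an LRU pool cycles
through every interpreter.

  # Toy model only.
  use strict;

  sub simulate {
      my ($policy, $requests) = @_;
      my @idle = (1 .. 10);          # ten interpreters, all idle
      my %touched;
      for (1 .. $requests) {
          # Single CPU: requests are handled one at a time, so the previous
          # interpreter is already back on the idle list by now.
          my $interp = shift @idle;
          $touched{$interp}++;
          if ($policy eq 'MRU') { unshift @idle, $interp }   # front: reused next
          else                  { push    @idle, $interp }   # back: pool cycles
      }
      return scalar keys %touched;
  }

  printf "MRU touched %d interpreter(s), LRU touched %d\n",
      simulate('MRU', 100), simulate('LRU', 100);

With 100 serialised requests the MRU run touches a single interpreter while
the LRU run touches all 10, which is the memory-footprint argument being made.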

   I also say that if you have a top limit of 10 interpreters on your
   machine because of memory constraints, and you're sending in 10
   simultaneous requests constantly, all interpreters will be used all the
   time.  In that case it makes no difference to the throughput whether you
   use MRU or LRU.

  This is not true for SpeedyCGI, because of the reason I give above.
  10 simultaneous requests will not necessarily require 10 interpreters.

 What you say would be true if you had 10 processors and could get
 true concurrency.  But on single-cpu systems you usually don't need
 10 unix processes to handle 10 requests concurrently, since they get
 serialized by the kernel anyways.
  
   I think the CPU slices are smaller than that.  I don't know much about
   process scheduling, so I could be wrong.  I would agree with you if we
   were talking about requests that were coming in with more time between
   them.  Speedycgi will definitely use fewer interpreters in that case.

  This url:

 http://www.oreilly.com/catalog/linuxkernel/chapter/ch10.html

  says the default timeslice is 210ms (1/5th of a second) for Linux on a PC.
  There's also lots of good info there on Linux scheduling.

 I found that setting MaxClients to 100 stopped the paging.  At 
 concurrency
 level 100, both mod_perl and mod_speedycgi showed similar rates 
 with ab.
 Even at higher levels (300), they were comparable.
  
   That's what I would expect if both systems have a similar limit of how
   many interpreters they can fit in RAM at once.  Shared memory would help
   here, since it would allow more