[JOB] Perl / mod_perl programmer in Orem, UT

2002-07-10 Thread Cahill, Earl

Internet Software Engineer (Perl)

About Web Services is seeking an Internet Software Engineer to maintain and
develop new applications for its hosting platform.  About Web Services is a
division of About.com that provides hosting solutions to more than 4 million
web sites, including freeservers.com and bizhosting.com.  The job includes a
competitive salary and benefits (health and dental insurance, 401(k), ESPP).

Qualifications:
  -2+ years' experience developing in Perl / mod_perl on Unix
  -Solid understanding of object-oriented programming
  -Experience with HTML, JavaScript, and XML
  -Experience with SQL (Oracle and/or MySQL)
  -Experience writing efficient, highly scalable code
  -Experience with Red Hat Linux or equivalent
  -Experience with CVS
  -Experience working in a team environment

Qualified individuals should send a resume to [EMAIL PROTECTED] or About Web
Services 1253 N. Research Way Suite Q-2500 Orem, Utah  84097.

If you have questions you are welcome to email me as well.

Thanks,
Earl



RE: File::Redundant

2002-04-29 Thread Cahill, Earl

 Interesting ... not sure if implementing this in this fashion would be
 worth the overhead.  If such a need exists I would imagine that one would
 have chosen a more appropriate OS-level solution.  Think OpenAFS.

It is always nice to use stuff that has IBM backing and likely has at least
a professor or two and some grad students helping out on it.  I had never
heard of OpenAFS before your email.  I will have to look into it a bit.  My
stuff would hopefully be nice if you didn't want to change your OS, or if
you just wanted to make File::Redundant a small part of a much larger
overall system.

The biggest overhead I have seen is having to do readlinks.  Maybe I could
get around them somehow, perhaps along the lines of the sketch below.  I
will have to draw up some UML or something to show how my whole system
works.
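
Just a sketch of what I mean (the sub name and the cache are made up for
illustration): remember readlink results per process, so each symlink only
costs one readlink instead of one per access.

    use strict;
    use warnings;

    # Hypothetical per-process cache of readlink results.  It would need
    # clearing whenever a link gets repointed after a failover.
    my %readlink_cache;

    sub resolve_link {
        my ($path) = @_;
        unless (exists $readlink_cache{$path}) {
            my $target = readlink($path);
            $readlink_cache{$path} = defined $target ? $target : $path;
        }
        return $readlink_cache{$path};
    }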

Earl



RE: File::Redundant

2002-04-29 Thread Cahill, Earl

 I would think it could be useful in non-mod_perl applications as well
 - you give an example of a user's mailbox.  With scp it might be even
 more fun to have around :)  (/me is thinking of config files and
 such)

mod_perl works very well with the system for keeping track of which boxes
are down, the sizes of partitions, and the like.  However, a simple daemon
would do about the same thing for, say, non-web-based mail.  When I release,
I will likely have a daemon version as well as the mod_perl version, just
using Net::Server, along the lines of the sketch below.
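
Just to give the flavor (the package name and the protocol are made up;
only the Net::Server calls are real): a client sends a thing, and the
daemon answers with the currently chosen good dirs for it.

    package Redundant::Daemon;
    use strict;
    use warnings;
    use base 'Net::Server';

    # Net::Server hands us the connected socket on STDIN/STDOUT.
    sub process_request {
        my $self = shift;
        while (my $line = <STDIN>) {
            chomp $line;
            last if $line =~ /^quit$/i;
            print join(' ', good_dirs_for($line)), "\015\012";
        }
    }

    sub good_dirs_for {
        my ($thing) = @_;
        # stub: look up the $copies good dirs for this thing
        return ("/mnt/disk1/$thing", "/mnt/disk2/$thing");
    }

    Redundant::Daemon->run(port => 20203);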

 What's a `very large amount of data' ?

We use it for tens of thousands of files, but most of those are small, and
they are certainly nowhere near the 3 GB range.  That is sort of the model
for dirsync, I think: lots of small files in lots of different directories.

 Our NIS maps are on the order
 of 3 GB per file (64k users).

Man, that is one big file.  Guess dropping a note to this list sorta lets
you know what you really have to scale to.  Sounds like dirsync could use
rsync if Rob makes a couple of changes.  I can't believe the file couldn't
be broken up into smaller files.  3 GB for 64k users doesn't scale so hot
for, say, a million users, but I have no idea about NIS maps, so there you
go.

Earl



File::Redundant

2002-04-25 Thread Cahill, Earl

Just putting out a little feeler about this package I started writing last
night.  Wondering about its usefulness, current availability, and just
overall interest.  It is designed for mod_perl use; it doesn't make much
sense otherwise.

I don't want to go into too many details here, but File::Redundant takes
some unique word (hopefully guaranteed unique through a database: a mailbox,
a username, a website, etc.), which I call a thing, a pool of dirs, and how
many $copies you would like to maintain.  From the pool of dirs, $copies
good dirs are chosen, ordered by percent full on the given partition,
roughly like the sketch below.
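
Just a sketch (the sub name and the df parsing are made up for
illustration): keep the reachable dirs, order them emptiest first, and take
the first $copies.

    use strict;
    use warnings;

    sub choose_good_dirs {
        my ($pool, $copies) = @_;
        my %percent_full;
        for my $dir (@$pool) {
            next unless -d $dir;            # skip unreachable mounts
            my @df = `df -P $dir`;          # POSIX df: header plus one row
            next unless @df > 1;
            my ($pct) = $df[1] =~ /(\d+)%/;
            $percent_full{$dir} = $pct if defined $pct;
        }
        my @good = sort { $percent_full{$a} <=> $percent_full{$b} }
                   keys %percent_full;
        $copies = @good if $copies > @good; # can't pick more than we have
        return @good[0 .. $copies - 1];
    }

    my @good = choose_good_dirs([qw(/mnt/a /mnt/b /mnt/c)], 2);
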
When you open a file with my open method (along with close, these are the
only override methods I have written so far), you get a file handle.  Do
what you like with the file handle.  When you close the file handle with my
close method, I CORE::close the file and use Rob Brown's File::DirSync to
sync to all the directories.  DirSync uses time stamps to very quickly sync
changes between directory trees.
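
Since nothing is released yet, this is only a guess at what usage might
look like, based on the description above (every name and argument here is
made up):

    use strict;
    use warnings;
    use File::Redundant;    # hypothetical interface

    my $fr = File::Redundant->new(
        thing  => 'username1234',             # unique word from a database
        pool   => [qw(/mnt/a /mnt/b /mnt/c)], # dirs across several boxes
        copies => 2,                          # replicas to maintain
    );

    # The overridden open returns a normal file handle; the overridden
    # close does CORE::close and then dirsyncs to all the good dirs.
    my $fh = $fr->open('mailbox', '>>') or die "open failed: $!";
    print $fh "new message\n";
    $fr->close($fh);
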
When a dir can't be reached (box is down or what have you), $copies good
dirs are re-chosen and the dirsync happens from the good old data to the
new good dirs.  If too much stuff goes down, you're sorta outta luck, but
you would have been without my system anyway.

I would write methods for everything (within reason) you do to a file:
open, close, unlink, rename, stat, etc.

So who cares?  Well, using this system would make it quite easy to keep
track of an arbitrarily large amount of data.  The pool of dirs could be
mounts from any number of boxes, located remotely or otherwise, and you
could sync accordingly.  If File::DirSync gets to the point where you can
use ftp or scp, all the better.

There are race conditions all over the place, and I plan on
transactionalizing where I can.  The whole system depends on how long the
dirsync takes.  In my experience, dirsync is very fast.  Likely I would
have dirsync'ing daemon(s), dirsync'ing as fast as they can.  In the
best-case scenario, the most data that would ever get lost would be the
time it takes to do one dirsync (usually less than a second for even very
large amounts of data), and the loss would only happen if you were making
changes on a dir as the dir went down.  I would try to deal with boxes
coming back up and keeping everything clean as best I could.

So, it would be a work in progress, and it would hopefully get better as I
went, but I would at least like to give it a shot.
Earl



RE: Document Caching

2002-03-06 Thread Cahill, Earl

I am finishing up a sort of alpha version of Data::Fallback (my own name),
which should work very well for caching just about anything locally on a
box.  We are planning on using it to cache dynamically generated HTML
templates and images.  You would ask a local perl daemon (using Net::Server)
for the info, and it would look first in the cache.  If it isn't in the
cache, it falls back according to where you told it to look (for now a conf
file or DBI, but later Storable, dbm, an HTTP hit, whatever), and caches how
you tell it to, based on a ttl if you like.  The idea is roughly the sketch
below.
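
Just a sketch of the fallback idea (the hash layout and sub name are made
up; Data::Fallback's real interface may end up different): check the local
cache, then try each source in order, caching whatever finally answers.

    use strict;
    use warnings;

    my %cache;    # $cache{$key} = { value => ..., expires => epoch }

    sub get {
        my ($key, $sources, $ttl) = @_;
        my $hit = $cache{$key};
        return $hit->{value} if $hit && $hit->{expires} > time;

        for my $source (@$sources) {    # e.g. conf file first, then DBI
            my $value = $source->($key);
            next unless defined $value;
            $cache{$key} = { value => $value, expires => time + $ttl };
            return $value;
        }
        return undef;                   # nothing answered
    }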

I am doing some testing now to see what sort of numbers we can get.  It is
looking like 100-200 queries a second, but we'll see if that holds up in
production under high loads.  I hope to write some docs on it over the
weekend and get at least some alpha version CPAN'd before too long.

Earl

 -Original Message-
 From: Rasoul Hajikhani [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, March 06, 2002 1:28 PM
 To: [EMAIL PROTECTED]
 Subject: Document Caching
 
 
 Hello People,
 Need your advice on how to cache a template under mod_perl... 
 Any ideas?
 Thanks in advance
 -r
 



RE: Document Caching

2002-03-06 Thread Cahill, Earl

 Hmmm... isn't that sort of backwards?  It sounds like you're considering
 the problem as building a cache that can be taught how to fetch data, but
 to me it seems more natural to build components for fetching data and
 teach them how to cache.
 
 The semantics for describing how something can be cached are much simpler
 than those describing how something can be fetched.  I would think it
 makes more sense to do something along the lines of the Memoize module,
 i.e. make it easy to add caching to your existing data fetching modules
 (hopefully using a standard interface like Cache::Cache).

Yeah, I buy that.  Mostly I have been writing the fetching routines, and
in sort of an ad hoc fashion I have started to add on the caching stuff.  I
am just using a hash structure built on the model of File::CacheDir, which
I wrote.  For me it is a two-part problem that is pretty easily divisible.
I have a function that checks the cache, and if it returns false, then I
fetch according to the fallback.  I would not be opposed to calling a
different, more standard function to check the cache (set up in a more
standard way), and then fetching accordingly, something like the sketch
below.
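
Along the lines suggested above, a fetch routine could be wrapped with
Cache::Cache so the caching stays separate from the fetching (real_fetch
here is a stand-in for the existing routine):

    use strict;
    use warnings;
    use Cache::FileCache;    # one of the standard Cache::Cache backends

    my $cache = Cache::FileCache->new({
        namespace          => 'templates',
        default_expires_in => 600,          # seconds
    });

    # Check the cache, fall back to the real fetch on a miss, and store
    # the result for next time.
    sub cached_fetch {
        my ($key) = @_;
        my $value = $cache->get($key);
        return $value if defined $value;
        $value = real_fetch($key);          # the existing fetching routine
        $cache->set($key, $value);
        return $value;
    }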

Earl



RE: ANNOUNCE: Apache::Watchdog::RunAway v0.3

2002-03-04 Thread Cahill, Earl

Any chance of being able to define a runaway script based on percent of
CPU or percent of memory used, as well as time in seconds?  This would be
great for us.  Every so often we get a script that just starts choking on
memory and gets every process on the box swapping, which kills our load.

Earl

 -Original Message-
 From: Stas Bekman [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, February 28, 2002 8:39 PM
 To: modperl list
 Subject: ANNOUNCE: Apache::Watchdog::RunAway v0.3
 
 
 
 The uploaded file
 
  Apache-Watchdog-RunAway-0.3.tar.gz
 
 has entered CPAN as
 
  file: $CPAN/authors/id/S/ST/STAS/Apache-Watchdog-RunAway-0.3.tar.gz
  size: 7722 bytes
  md5:  701b7a99fe658c5b895191e5f03fff34
 
 Changes:
 
 =head1 ver 1.0 Wed Feb 20 11:54:23 SGT 2002
 
 * making the DEBUG a constant variable, settable via PerlSetVar
 
 * A few code style changes and doc fixes
 
 * this module has spent enough time in alpha/beta incubator => going 1.0.
 
 _
 Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
 http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
 mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
 http://singlesheaven.com http://perl.apache.org http://perlmonth.com/
 



RE: ANNOUNCE: Apache::GTopLimit v1.0

2002-03-04 Thread Cahill, Earl

Looks like this will do the limit by CPU or memory used.  Guess I should
read my whole inbox before I start to respond.
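
From the docs it looks like configuration goes along these lines (variable
names from memory, so check the Apache::GTopLimit POD before relying on
them):

    # httpd.conf excerpt
    <Perl>
        use Apache::GTopLimit;
        # kill the child after the request if its size exceeds 10 MB
        $Apache::GTopLimit::MAX_PROCESS_SIZE = 10240;         # in KB
        # or if its shared memory drops below 4 MB
        $Apache::GTopLimit::MIN_PROCESS_SHARED_SIZE = 4096;   # in KB
    </Perl>
    PerlFixupHandler Apache::GTopLimit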

Thanks,
Earl

 -Original Message-
 From: Stas Bekman [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, February 28, 2002 8:40 PM
 To: modperl list
 Subject: ANNOUNCE: Apache::GTopLimit v1.0
 
 
 The uploaded file
 
  Apache-GTopLimit-1.0.tar.gz
 
 has entered CPAN as
 
  file: $CPAN/authors/id/S/ST/STAS/Apache-GTopLimit-1.0.tar.gz
  size: 5117 bytes
  md5:  d1847eecbf8584ae04f9c0081e22897f
 
 =head1 ver 1.0 Wed Feb 20 11:54:23 SGT 2002
 
 * making the DEBUG a constant variable, settable via PerlSetVar
 
 * A few code style changes and doc fixes
 
 * this module has spent enough time in alpha/beta incubator => going 1.0.
 
 
 _
 Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
 http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
 mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
 http://singlesheaven.com http://perl.apache.org http://perlmonth.com/