Re: Working directory of script is / !

2003-08-07 Thread Ed Grimm
On Wed, 30 Jul 2003, Stas Bekman wrote:

 Perrin Harkins wrote:
 On Tue, 2003-07-29 at 07:23, Stas Bekman wrote:
 
That's correct. This is because $r->chdir_file in compat doesn't do
anything.  The reason is that under a threaded MPM, chdir() affects all
threads. Of course we could check whether the MPM is prefork and do
things the old way, but that means that the same code won't work the
same under threaded and non-threaded MPMs. Hence the limbo. Still
waiting for Arthur to finish porting the safecwd package, which should
resolve this problem.
 
 When he does finish it, won't we make the threaded MPM work just like
 this?  It seems like it would be reasonable to get prefork working
 properly, even if the threaded MPM isn't ready yet. 
 
 It's a tricky thing. If we do have a complete implementation then it's
 cool.  If not, then we have a problem with people testing their code on
 the prefork MPM and then users finding the code malfunctioning on the
 threaded MPMs.
 
 I think we could have a temporary subclass of the registry (e.g.:
 ModPerl::RegistryPrefork) which will be removed once the issue is
 resolved. At least it'll remind the developers that their code won't
 work on threaded MPM setups. However, if they make their code work
 without relying on chdir then they can use ModPerl::Registry
 and the code will work everywhere.

What's wrong with having the chdir code check for the threaded mpm, and,
if it detects it, generate a warning that describes the situation?
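Something along these lines, say (a hypothetical sketch, not the actual compat
code; the Apache::MPM query is mod_perl 2-style and is my assumption here):

```perl
# Hypothetical sketch: refuse to chdir under a threaded MPM, loudly.
use File::Basename ();
use Apache::MPM ();    # mp2-era module name; assumption

sub chdir_file_compat {
    my ($r, $file) = @_;
    if (Apache::MPM->is_threaded) {
        warn "chdir_file() is a no-op under threaded MPMs: ",
             "chdir() would change the cwd for every thread\n";
        return;
    }
    chdir File::Basename::dirname($file)
        or warn "cannot chdir to the directory of $file: $!";
}
```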

Admittedly, I have a difficult time understanding someone who tests
under one mpm, and then releases under another mpm without testing.  I
realize there are people who do this sort of thing; I'm merely stating
that I have difficulty understanding them.

Ed




Re: [error] Can't locate CGI.pm in @INC

2003-05-30 Thread Ed
On Thu, May 29, 2003 at 04:12:51PM +1000, Stas Bekman wrote:
 Brown, Jeffrey wrote:
 Problem solved!
 
 You all are a fantastic resource to newbies!
 
 Jeff
 
 -Original Message-
 From: Ed [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, May 28, 2003 9:28 PM
 To: Brown, Jeffrey; [EMAIL PROTECTED]
 
 On Wed, May 28, 2003 at 09:11:06PM -0700, Brown, Jeffrey wrote:
 
 Here are the results from the log file:
 
 [Wed May 28 20:50:21 2003] [error] No such file or directory at
 /htdocs/perl/first.pl line 6 during global destruction.
 
 
 openbsd's httpd is chrooted.
 
 Again, can someone please post a patch/addition for the troubleshooting.pod 
 doc explaining the problem and the solution in detail. I've seen this kind 
 of question more than once here.
 
 Should go into the OpenBSD category at:
 http://perl.apache.org/docs/1.0/guide/troubleshooting.html#OS_Specific_Notes
 Get the pod by clicking on the [src] button.

For the list archive:

- rtfm
-u disables chroot. httpd(8)
http://www.openbsd.org/faq/faq10.html#httpdchroot
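For the archive, the rc.conf side of that looks like (flags illustrative):

```
# /etc/rc.conf
httpd_flags="-u -DSSL"    # -u disables the chroot; see httpd(8)
```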


- set up chroot basics
The doc for setting up an anoncvs mirror could be adapted for mod_perl.
http://www.openbsd.org/anoncvs.shar
Of course much of it doesn't apply, but the part about ld.so, etc. is
helpful.


- List archives
dreamwvr figured out how to actually get things to work and posted notes
to the list. (so see archives)


- 3.3-current (soon to be 3.4)
And one last bit added after 3.3 was released, Revision 1.7 to apachectl:
http://www.openbsd.org/cgi-bin/cvsweb/src/usr.sbin/httpd/src/support/apachectl

picks up httpd_flags from /etc/rc.conf, so you can just add -DSSL -u to
httpd_flags.


- ports
The OpenBSD ports system is not configured by default to install
perl modules or packages in the chroot environment. You would have to
set PREFIX or LOCALBASE; see bsd.port.mk(5) and ports(7).
(PHP ports are set up for chroot installs.)


- google
A nice HOWTO on running mod_perl chrooted would be nice.  Maybe someone's
already written it?


I hope this helps some.


Ed.


Re: [error] Can't locate CGI.pm in @INC

2003-05-29 Thread Ed
On Wed, May 28, 2003 at 09:11:06PM -0700, Brown, Jeffrey wrote:
 Here are the results from the log file:
 
 [Wed May 28 20:50:21 2003] [error] No such file or directory at
 /htdocs/perl/first.pl line 6 during global destruction.

openbsd's httpd is chrooted.

Ed.


[Http-webtest-general] [ANNOUNCE] HTTP-WebTest-Plugin-TagAttTest-1.00

2003-03-14 Thread Ed Fancher



The uploaded file HTTP-WebTest-Plugin-TagAttTest-1.00.tar.gz has entered CPAN
as file: $CPAN/authors/id/E/EF/EFANCHE/HTTP-WebTest-Plugin-TagAttTest-1.00.tar.gz
size: 5312 bytes md5: 940013aada679fdc09757f119d70686e


NAME

HTTP::WebTest::Plugin::TagAttTest - WebTest plugin providing a higher-level
tag and attribute search interface.

DESCRIPTION

See also http://search.cpan.org/search?query=HTTP%3A%3AWebTest&mode=all

This module is a plugin extending the functionality of the WebTest module to
allow tests of the form:

my $webpage = 'http://www.ethercube.net';
my @result;

@result = (@result, {test_name  => "title junk",
                     url        => $webpage,
                     tag_forbid => [{ tag => "title", tag_text => "junk" }]});
@result = (@result, {test_name   => "title test page",
                     url         => $webpage,
                     tag_require => [{ tag => "title", text => "test page" }]});
@result = (@result, {test_name  => "type att with xml in value",
                     url        => $webpage,
                     tag_forbid => [{ attr => "type", attr_text => "xml" }]});
@result = (@result, {test_name   => "type class with body in value",
                     url         => $webpage,
                     tag_require => [{ attr => "class", attr_text => "body" }]});
@result = (@result, {test_name   => "class att",
                     url         => $webpage,
                     tag_require => [{ attr => "class" }]});
@result = (@result, {test_name  => "script tag",
                     url        => $webpage,
                     tag_forbid => [{ tag => "script" }]});
@result = (@result, {test_name  => "script tag with attribute language=javascript",
                     url        => $webpage,
                     tag_forbid => [{ tag => "script", attr => "language", attr_text => "javascript" }]});
my $tests = \@result;

my $params = {
    plugins => ["::FileRequest", "HTTP::WebTest::Plugin::TagAttTest"]
};
my $webtest = HTTP::WebTest->new;
check_webtest(webtest => $webtest, tests => $tests, opts => $params,
              check_file => 't/test.out/1.out');
# $webtest->run_tests($tests, $params);

Ed Fancher
Ethercube Solutions
http://www.ethercube.net
PHP, Perl, MySQL, JavaScript solutions.


Re: Determining when a cached item is out of date

2003-01-16 Thread Ed
On Thu, Jan 16, 2003 at 06:33:52PM +0100, Honza Pazdziora wrote:
 On Thu, Jan 16, 2003 at 06:05:30AM -0600, Christopher L. Everett wrote:
  
  Do AxKit and PageKit pay such close attention to caching because XML
  processing is so deadly slow that one doesn't have a hope of reasonable
  response times on a fast but lightly loaded server otherwise?  Or is
  it because even a fast server would quickly be on its knees under
  anything more than a light load?
 
 It really pays off to do any steps that will increase the throughput.
 And AxKit is well suited for caching because it has clear layers and
 interfaces between them. So I see AxKit doing caching not only to get
 the performance, but also just because it can. You cannot do the
 caching easily with more dirty approaches.
 
  With a MVC type architecture, would it make sense to have the Model
  objects maintain the XML related to the content I want to serve as
  static files so that a simple stat of the appropriate XML file tells
  me if my cached HTML document is out of date?
 
 Well, AxKit uses filesystem cache, doesn't it?
 
 It really depends on how much precission you need to achieve. If you
 run a website that lists cinema programs, it's just fine that your
 public will see the updated pages after five minutes, not immediatelly
 after they were changed by the data manager. Then you can really go
 with simply timing out the items in the cache.
 
 If you need to do something more real-time, you might prefer the push
 approach of MVC (because pull involves too much processing anyway, as
 you have said), and then you have a small problem with MySQL. As it
 lacks trigger support, you will have to send the push invalidation
 from you applications. Which might or might not be a problem, it
 depends on how many of them you have.

I have pages that update as often as every 15 seconds.  I just use mtime() and
has_changed() properly in my custom Provider.pm's, or rely on
the File provider's checking the stat of the xml files.  Mostly users are
getting cached files.

For xsp's that are no_cache(1), the code that generates the information that
gets sent through the taglib does its own caching, just as if it were a
plain mod_perl handler.  They use IPC::MM and Cache::Cache (usually the file cache).
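The mtime()/has_changed() pairing looks roughly like this (a sketch only, not
my actual provider; AxKit's File provider does something equivalent):

```perl
package My::Provider;
use strict;
use base qw(Apache::AxKit::Provider::File);

# Freshness is driven by the mtime of the underlying XML file.
sub mtime {
    my $self = shift;
    return (stat $self->{file})[9];
}

# has_changed() answers "is the source newer than the cached copy?"
sub has_changed {
    my ($self, $cache_time) = @_;
    return $self->mtime > $cache_time;
}

1;
```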

I've fooled w/ having the cache use different databases but finally decided it
didn't make much of a difference, since the OS and disk can be tuned effectively.
The standard rules apply: put the cache on its own disk spindle, i.e. not on
the same physical disk as your SQL database, etc.  Makes a big difference ...
you can see w/ vmstat, systat etc.

The only trouble is cleaning up the ever-growing stale cache.  So, I use this
simple script in my /etc/daily.local file, or a guy could use cron.

It's similar to what OpenBSD uses for cleaning /tmp and /var/tmp in the
/etc/daily script.

Ed.

# cat /etc/clean_www.conf
CLEAN_WWW_DIRS=/u4/www/cache /var/www/temp

# cat /usr/local/sbin/clean_www
#!/bin/sh -
# $Id: clean_www.sh,v 1.2 2003/01/03 00:18:27 entropic Exp $

: ${CLEAN_WWW_CONF:=/etc/clean_www.conf}

clean_dir() {
dir=$1
echo "Removing scratch and junk files from '$dir':"
if [ -d $dir -a ! -L $dir ]; then
cd $dir && {
find . ! -name . -atime +1 -execdir rm -f -- {} \;
find . ! -name . -type d -mtime +1 -execdir rmdir -- {} \; \
>/dev/null 2>&1; }
fi
}

if [ -f $CLEAN_WWW_CONF ]; then
. $CLEAN_WWW_CONF
fi

if [ "X${CLEAN_WWW_DIRS}" != X ]; then
echo ""
for cfg_dir in $CLEAN_WWW_DIRS; do
clean_dir ${cfg_dir};
done
fi






Re: More Segfaultage - FreeBSD, building apache, ssl, mod_perl from ports

2002-11-12 Thread Ed
On Tue, Nov 12, 2002 at 04:29:19PM +, Rafiq Ismail (ADMIN) wrote:
 I'm a bit irritated by FreeBSD ports at the moment and need somoene to
 shine some light.  I need to build Apache from ports on a BSD box - it has
 to be from ports - but i don't want to include mod_perl in as a dso.
 Thus, I'd like to go to ports and 'Make' with a bunch of options which
 will compile mod_perl straight into my apache1.3-ssl package.  Having run
 make on www/apache1.3-ssl and www/mod_perl, all I get is segfaults.  I
 simply want to run one make to build it in one go.
 
 How???
 
 I'm sure that the BSD users amoungst you have all done it 101 times.
 
 Help please?

Attached is a port I use for OpenBSD. (It needs cleaning, but works for me.)

There are a bunch of customizations, but some key points of the Makefile are:

DISTFILES=
PATCH_LIST_SUP=
FAKE_FLAGS=
post-patch:

Ed.



www-mod_perl.tar.gz
Description: application/tar-gz


Re: Random broken images when generating dynamic images

2002-10-24 Thread Ed
On Wed, Oct 23, 2002 at 05:55:05PM -0500, Dave Rolsky wrote:
 So here's the situation.
 
 I have some code that generates images dynamically.  It works, mostly.
 
 Sometimes the image will show up as a broken image in the browser.  If I
 reload the page once or twice, the image comes up fine.
 
 On a page with 5 different dynamic images (all generated by the same chunk
 of code, it's a set of graphs), I'll often see 1 or 2 as a broken image,
 but the rest work.  Sometimes all 5 are ok.
 
 I tried out a scheme of writing them to disk with dynamically generated
 files, but since I still need to do some auth checking, they end up being
 served dynamically and I have the same problem.
 
 To make it even weirder, I just took a look at one of the image files that
 showed up as broken, and it's fine (I can't view it directly in my
 browser).

I've seen the problem before.  My solution was to save the dynamic images
on disk and serve them just like plain ol' static files from the front-end
server. This way everything is served from the same Keep-Alive request,
and Apache does all the http/1.1 headers/chunked-encoding for me.

Your MaxKeepAliveRequests would then be the culprit on your end, but that's
unlikely unless it's set really low.  I'm not sure how the browser determines
the equivalent limit. tcpdump showed that Opera created a second keep-alive
connection after 10 images for me (it could be limiting on bytes rather than
requests ... don't know).

You can still serve dynamically and handle the custom auth w/ the backend and
maintain the client's keep-alive.  The current mod_proxy will maintain the
client's keep-alive even though your backend has keepalive off.  Be sure
all the required http/1.1 components/headers are sent to maintain a
keep-alive.
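The front-end config for that split looks roughly like this (a sketch; the
paths and backend port are made up):

```
# front-end httpd.conf: generated images served as static files,
# everything else proxied to the mod_perl backend
KeepAlive On
MaxKeepAliveRequests 100

Alias /images/ /var/www/generated-images/
ProxyPass /images/ !
ProxyPass / http://127.0.0.1:8080/
```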

I'm interested in what you finally work out.

thanks,
Ed



Re: code evaluation in regexp failing intermittently

2002-10-23 Thread Ed
On Wed, Oct 23, 2002 at 02:24:48PM -0500, Rodney Hampton wrote:
 
 Can any of you gurus please help!
 

A wise guru would help by directing you to:
http://perl.apache.org/docs/tutorials/tmpl/comparison/comparison.html





Re: repost: [mp1.0] recurring segfaults on mod_perl-1.27/apache-1.3.26

2002-10-18 Thread Ed
Daniel,

Could be bad hardware.  Search google for Signal 11.

Probably your memory (usual cause I've seen).

good luck.

Ed

On Tue, Oct 08, 2002 at 09:46:16AM -0700, [EMAIL PROTECTED] wrote:
 Sorry for the repost, but no responses so far, and I need some help with 
 this one.
 
 I've managed to get a couple of backtraces on a segfault problem we've
 been having for months now. The segfaults occur pretty rarely on the
 whole, but once a client triggers one on a particular page, they do not
 stop. The length and content of the request are key in making the
 segfaults happen. Modifying the cookie or adding characters to the
 request line causes the segfaults to stop.
 
 example (word wrapped):
 
 
 This request will produce a segfault (backtrace in attached gdb1.txt)
 and about 1/3 of the expected page :
 
 
 nc 192.168.1.20 84
 GET /perl/section/entcmpt/ HTTP/1.1
 User-Agent: Mozilla/5.0 (compatible; Konqueror/3; Linux 2.4.18-5)
 Pragma: no-cache
 Cache-control: no-cache
 Accept: text/*, image/jpeg, image/png, image/*, */*
 Accept-Encoding: x-gzip, gzip, identity
 Accept-Charset: iso-8859-1, utf-8;q=0.5, *;q=0.5
 Accept-Language: en
 Host: 192.168.1.20:84
 Cookie:
 
mxstsn=1033666066:19573.19579.19572.19574.19577.19580.19576.19558.19560.19559.19557.19567.19566.19568.19544.19553.19545.19551.19554.19546.19548.19547.19532.19535.19533.19538.19534:0;
 
 
 Apache=192.168.2.1.124921033666065714
 
 
 Adding a bunch of zeroes to the URI (which does not change the code
 functionality) causes the page to work correctly:
 
 
 nc 192.168.1.20 84
 GET
 /perl/section/entcmpt/? 
 
 HTTP/1.1
 User-Agent: Mozilla/5.0 (compatible; Konqueror/3; Linux 2.4.18-5)
 Pragma: no-cache
 Cache-control: no-cache
 Accept: text/*, image/jpeg, image/png, image/*, */*
 Accept-Encoding: x-gzip, gzip, identity
 Accept-Charset: iso-8859-1, utf-8;q=0.5, *;q=0.5
 Accept-Language: en
 Host: 192.168.1.20:84
 Cookie:
 
mxstsn=1033666066:19573.19579.19572.19574.19577.19580.19576.19558.19560.19559.19557.19567.19566.19568.19544.19553.19545.19551.19554.19546.19548.19547.19532.19535.19533.19538.19534:0;
 
 
 Apache=192.168.2.1.124921033666065714
 
 
 
 
 Some info:
 /usr/apache-perl/bin/httpd -l
 Compiled-in modules:
http_core.c
mod_env.c
mod_log_config.c
mod_mime.c
mod_negotiation.c
mod_status.c
mod_include.c
mod_autoindex.c
mod_dir.c
mod_cgi.c
mod_asis.c
mod_imap.c
mod_actions.c
mod_userdir.c
mod_alias.c
mod_access.c
mod_auth.c
mod_so.c
mod_setenvif.c
mod_php4.c
mod_perl.c
 
 
 
 Please forgive any obvious missing info (I'm not a C programmer). The
 first backtrace shows the segfault happening in mod_perl_sent_header(),
 and the second shows it happening in ap_make_array(), which was called from
 Apache::Cookie. I don't have one handy now, but I've also seen it happen
 in ap_soft_timeout() after an XS_Apache_print (r->server was out of bounds).
 
 I've added a third backtrace where r->content_encoding contains the
 above 'mxstsn' cookie name.
 
 
 
 
 Any help would be greatly appreciated.
 
 -- 
 --
 Daniel Bohling
 NewsFactor Network
 

 [root@proxy dumps]# gdb  /usr/apache-perl/bin/httpd core.12510
 GNU gdb Red Hat Linux (5.2-2)
 Copyright 2002 Free Software Foundation, Inc.
 GDB is free software, covered by the GNU General Public License, and you are
 welcome to change it and/or distribute copies of it under certain conditions.
 Type "show copying" to see the conditions.
 There is absolutely no warranty for GDB.  Type "show warranty" for details.
 This GDB was configured as i386-redhat-linux...
 Core was generated by `/usr/apache-perl/bin/httpd'.
 Program terminated with signal 11, Segmentation fault.
 Reading symbols from /lib/libpam.so.0...done.
 Loaded symbols for /lib/libpam.so.0
 Reading symbols from /usr/lib/libmysqlclient.so.10...done.
 Loaded symbols for /usr/lib/libmysqlclient.so.10
 Reading symbols from /lib/libcrypt.so.1...done.
 Loaded symbols for /lib/libcrypt.so.1
 Reading symbols from /lib/libresolv.so.2...done.
 Loaded symbols for /lib/libresolv.so.2
 Reading symbols from /lib/i686/libm.so.6...done.
 Loaded symbols for /lib/i686/libm.so.6
 Reading symbols from /lib/libdl.so.2...done.
 Loaded symbols for /lib/libdl.so.2
 Reading symbols from /lib/libnsl.so.1...done.
 Loaded symbols for /lib/libnsl.so.1
 Reading symbols from /lib/i686/libc.so.6...done.
 Loaded symbols for /lib/i686/libc.so.6
 Reading symbols from /lib/libutil.so.1...done.
 Loaded symbols for /lib/libutil.so.1
 Reading symbols from /usr/lib/libexpat.so.0...done.
 Loaded symbols for /usr/lib/libexpat.so.0
 Reading symbols from /usr/lib/libz.so.1...done.
 Loaded symbols for /usr/lib/libz.so.1
 Reading symbols from /lib/ld-linux.so.2...done.
 Loaded symbols for /lib/ld-linux.so.2
 Reading symbols from /lib/libnss_files.so.2...done.
 Loaded symbols for /lib/libnss_files.so.2
 Reading symbols from 
/usr/lib/perl5/5.6.1/i386-linux/auto/Data

Re: repost: [mp1.0] recurring segfaults on mod_perl-1.27/apache-1.3.26

2002-10-18 Thread Ed
On Fri, Oct 18, 2002 at 03:54:22PM -0400, Perrin Harkins wrote:
 Ed wrote:
 Could be bad hardware.  Search google for Signal 11.
 
 That's actually pretty rare.  Segfaults are usually just a result of 
 memory-handling bugs in C programs.

I saw the problem when someone had their memory speed too low in their
bios using an asus-a7v motherboard.  Apps such as bzip2 croaked and memory
intensive compiles failed in random places.

very similar to the first answer here: http://www.bitwizard.nl/sig11/

When we were trying to debug, the failures were a giant mystery.
We spent days inside of gdb and such trying to figure out what the
heck was up.  It turns out the bios reset to incorrect settings after a
power failure and took a week or so till the random sig 11's showed up.

It was at a remote colocation too ... (checking the bios was last on our list).
We ended up replacing the box at the colo ... this melted/sig11 box is still
able to run netbsd with the bios under-clocked (up 183 days), but they don't
use it for anything important.

Anyway, I have a story about a bad nic cable too ... but I'll save it.

/me paranoid about mysterious sig 11's ...

Ed



Re: Apache Hello World Benchmarks Updated

2002-10-14 Thread Ed

Hi,

(as far as i can tell after a quick peek at the code and some debugging)

It looks like there is a bug w/ AxKit::run_axkit_engine() and/or
Apache::AxKit::Cache::_get_stats()

run_axkit_engine() wants to create a .gzip cachefile even when AxGzipOutput is off.

When AxGzipOutput is off, the .gzip file is never made and _get_stats()
returns with !$self->{file_exists}, effectively disabling delivery of cached copies.

With AxGzipOutput enabled, both files are created and the appropriate cached
copies are delivered as expected.

I haven't decided on a best fix myself, other than just enabling AxGzipOutput.

So, I reran hello/bench.pl w/ AxGzipOutput On and sped AxKit up quite a bit.

Attached are some diffs and a couple of 1-second bench.pl runs.  Would be
interesting to see how AxKit compares now?

Thanks,

Ed

On Mon, Oct 14, 2002 at 12:26:06AM -0700, Josh Chamas wrote:
 Hey,
 
 The Apache Hello World benchmarks are updated at
 
   http://chamas.com/bench/
 
 The changes that affect performance numbers include:
 
   Set MaxRequestsPerChild to 1000 globally for more realistic run.
 
   Set MaxRequestsPerChild to 100 for applications that seem to leak
   memory, which include Embperl 2.0, HTML::Mason, and Template Toolkit.
   This is a more typical setting in a mod_perl type application that
   leaks memory, so it should be a fairly representative benchmark setting.
 
 Note that the latter change seemed to have the most benefit for Embperl 2.0,
 with some benefit for Template Toolkit & less (but some) for HTML::Mason
 on the memory usage numbers.
 
 Regards,
 
 Josh
 
 Josh Chamas, Founder   phone:925-552-0128
 Chamas Enterprises Inc.http://www.chamas.com
 NodeWorks Link Checkinghttp://www.nodeworks.com
 


--- hello/bench.pl  Sun Oct 13 04:07:35 2002
+++ hello-gz/bench.pl   Tue Oct 15 00:15:48 2002
@@ -106,7 +106,7 @@
 
 # FIND AB
 my $httpd_dir = $HTTPD_DIR;
-$AB = "$httpd_dir/ab";
+$AB = '/usr/sbin/ab'; # "$httpd_dir/ab";
 unless(-x $AB) {
     print "ab benchmark utility not found at $AB, using 'ab' in PATH\n";
     $AB = 'ab';

--- hello/bench.pl  Sun Oct 13 04:07:35 2002
+++ hello-gz/bench-gz.pl    Tue Oct 15 00:16:32 2002
@@ -106,7 +106,7 @@
 
 # FIND AB
 my $httpd_dir = $HTTPD_DIR;
-$AB = "$httpd_dir/ab";
+$AB = '/usr/sbin/ab'; # "$httpd_dir/ab";
 unless(-x $AB) {
     print "ab benchmark utility not found at $AB, using 'ab' in PATH\n";
     $AB = 'ab';
@@ -583,6 +583,7 @@
    AxAddStyleMap application/x-xpathscript +Apache::AxKit::Language::XPathScript
    AxAddProcessor text/xsl hello.xsl
    AxCacheDir $TMP/axkit
+   AxGzipOutput On
 }],

 'AxKit XSLT Big' => ['hxsltbig.xml', qq{
@@ -593,6 +594,7 @@
    AxAddStyleMap application/x-xpathscript +Apache::AxKit::Language::XPathScript
    AxAddProcessor text/xsl hxsltbig.xsl
    AxCacheDir $TMP/axkit
+   AxGzipOutput On
 }],

 'AxKit XSP Hello' => ['hello.xsp', qq{
@@ -601,6 +603,7 @@
    AxAddStyleMap application/x-xsp +Apache::AxKit::Language::XSP
    AxAddProcessor application/x-xsp NULL
    AxCacheDir $TMP/axkit
+   AxGzipOutput On
 }],

 'AxKit XSP 2000' => ['h2000.xsp', qq{
@@ -609,6 +612,7 @@
    AxAddStyleMap application/x-xsp +Apache::AxKit::Language::XSP
    AxAddProcessor application/x-xsp NULL
    AxCacheDir $TMP/axkit
+   AxGzipOutput On
 }],

# new Embperl 2.x series

[2002-10-15 00:16:53] Found apache web server at /usr/local/sbin/httpd_perl
[2002-10-15 00:16:53]  running 1 groups of benchmarks for 1 seconds
[2002-10-15 00:16:56] testing AxKit v1.6 XSP 2000 at http://localhost:5000/h2000.xsp?title=Hello%20World%202000&integer=2000
[2002-10-15 00:17:11] testing AxKit v1.6 XSP Hello at http://localhost:5000/hello.xsp
[2002-10-15 00:17:25] testing AxKit v1.6 XSLT Hello at http://localhost:5000/hxslt.xml
[2002-10-15 00:17:40] testing AxKit v1.6 XSLT Big at http://localhost:5000/hxsltbig.xml

Test Name              Test File   Hits/sec  # of Hits  Time(sec)  secs/Hit  Bytes/Hit
---------------------  ----------  --------  ---------  ---------  --------  ---------
AxKit v1.6 XSP 2000    h2000.xsp   14.8      20         1.35       0.067600  28680
AxKit v1.6 XSP Hello   hello.xsp   245.5     261        1.06       0.004073  353
AxKit v1.6 XSLT Hello  hxslt.xml   157.6     169        1.07       0.006343  331
AxKit v1.6 XSLT Big    hxsltbig.x  37.3      38         1.02       0.026816  21590

Apache Server Header Tokens
---
Apache/1.3.26
AxKit/1.6
mod_perl/1.27

PERL Versions: 5.006001



[2002-10-15 00:18:04] Found apache web server at /usr/local/sbin/httpd_perl
[2002-10-15 00:18:04]  running 1 groups of benchmarks for 1 seconds
[2002-10-15 00:18:07] testing AxKit v1.6 XSP 2000 at 
http

RE: Compiled-in but not recognized

2002-08-11 Thread Ed Grimm

On Sun, 11 Aug 2002, Colin wrote:

 -Original Message-
 From: Ged Haywood [mailto:[EMAIL PROTECTED]]
 Sent: Sunday, August 11, 2002 6:02 PM
 Subject: Re: Compiled-in but not recognized


 Hi there,

 On Sun, 11 Aug 2002, Colin wrote:

 I know this is a recurring problem but bear with me ...

 :)

 httpd -l
 Compiled-in modules:
 http_core.c
 mod_so.c
 mod_perl.c

 pwd?

I think that Ged was suggesting you might have multiple httpd binaries
on your system, and was suggesting that you verify you're running the
binary you think you're running.

It's really annoying when you're trying to debug a program, and the
program you're running is not the one you're adding the debugging
statements to.  However, I suspect most of us have done it on occasion.

Ed
How the #@*! is it getting past all those debug statements without
hitting any?!?! - Me




Re: PerlChildInitHandler doesn't work inside VirtualHost?

2002-08-10 Thread Ed Grimm

On Thu, 8 Aug 2002, Rick Myers wrote:

 On Aug 09, 2002 at 12:16:45 +1000, Cees Hek wrote:
 Quoting Jason W May [EMAIL PROTECTED]:
 Running mod_perl 1.26 on Apache 1.3.24.
 
 I've found that if I place my PerlChildInitHandler inside a VirtualHost
 block, it is never called.
 
 It doesn't really make sense to put a PerlChildInitHandler
 inside a VirtualHost directive.
 
 Why? The Eagle book says this is a perfectly valid concept.

Well, for one thing, it would only call the handler if a request to that
virtual host was the first request for that child (assuming it works).
I'd think this would be a good candidate for a case that's never been
tested before, given that it would not call the handler if the
request that initiated the child was not to that virtual host...

It would fail to work in all cases if Apache does not recognize what
triggered the child until after child init.  Looking over pages 59
through 61, 72 and 73, this appears to me to be the case.  Yes, it does
explicitly say that it's OK in virtual host blocks, but it doesn't say
it works.

Ed




Re: mod perl load average too high

2002-08-08 Thread Ed Grimm

That looks like there's something that occasionally goes off and starts
spinning, given the low memory usage and the fact that some processes
using little cpu are also not swapped out.

I suspect that one of your pages has a potential infinite loop that's
being triggered.  Try and catch at what point the load suddenly starts
rising, and check what pages were accessed around that time.  They're
where you should start looking.

Note that you should probably focus on the access and error log lines
that correspond with processes that are using excessive amounts of cpu.
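A quick way to catch one in the act (commands illustrative; GNU ps options
shown, and adjust the log path to your CustomLog location):

```shell
# list the processes burning the most CPU right now; spinning httpd
# children will float to the top
ps -eo pid,pcpu,etime,args --sort=-pcpu | head -6

# then look at what those children were serving around the spike
tail -n 200 /var/log/httpd/access_log | tail -n 20
```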

Ed

On Tue, 6 Aug 2002, Anthony E. wrote:

 I'm using apache 1.3.26 and mod_perl 1.27
 
 My apache processes seem to be taking up more and more
 system resources as time goes on.
 
 Can someone help me determine why my server load is
 going up?
 
 When i first start apache, my load average is about
 .02, but after a couple of hours, it goes up to 4 or
 5, and after a couple of days, has been as high as
 155.
 
 I have the following directives configured in
 httpd.conf:
 
 MaxKeepAliveRequests 100
 MinSpareServers 5 
 MaxSpareServers 20
 StartServers 10
 MaxClients 200
 MaxRequestsPerChild 5000
 
 Here is a snip of 'top' command:
   6:28pm  up 46 days, 23:03,  2 users,  load average:
 2.24, 2.20, 1.98
 80 processes: 74 sleeping, 6 running, 0 zombie, 0
 stopped
 CPU0 states: 99.3% user,  0.2% system,  0.0% nice, 
 0.0% idle
 CPU1 states: 100.0% user,  0.0% system,  0.0% nice, 
 0.0% idle
 Mem:  1029896K av,  711884K used,  318012K free,  
 0K shrd,   76464K buff
 Swap: 2048244K av,  152444K used, 1895800K free   
   335796K cached
 
   PID USER PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM
   TIME COMMAND
 25893 nobody16   0 10188 9.9M  3104 R95.5  0.9
  21:55 httpd
 25899 nobody16   0  9448 9448  3104 R95.3  0.9
  63:27 httpd
 25883 nobody 9   0 10468  10M  3096 S 2.5  1.0
   0:16 httpd
 25895 nobody 9   0 10116 9.9M  3104 S 2.1  0.9
   0:15 httpd
 25894 nobody 9   0 10240  10M  3104 S 1.9  0.9
   0:16 httpd
 25898 nobody 9   0 10180 9.9M  3100 S 1.7  0.9
   0:13 httpd
 
 Also, I notice in my error_log i get this entry quite
 frequently:
 26210 Apache::DBI new connect to
 'news:1.2.3.4.5userpassAutoCommit=1PrintError=1'
 
 What can i do to keep the server load low?
 
 
 =
 Anthony Ettinger
 [EMAIL PROTECTED]
 http://apwebdesign.com
 home: 415.504.8048
 mobile: 415.385.0146
 
 __
 Do You Yahoo!?
 Yahoo! Health - Feel better, live better
 http://health.yahoo.com
 




Re: E-commerce payment systems for apache/mod_perl

2002-07-03 Thread Ed

On Tue, Jul 02, 2002 at 10:43:14PM -0500, David Dyer-Bennet wrote:
 Any obvious choices for a relatively small-scale e-commerce payment
 processing system for a server running apache / mod_perl?  

http://interchange.redhat.com/
- it's mature
- we wrote our own but i'd use it instead if I had to start over
http://www.ipaymentinc.com/
- reseller for authorize.net
http://authorize.net/
- big transaction provider
- supported cpan module (simple/trivial)
http://www.dhl.com/
- we get really cheap rates for DHL's next-day shipping service worldwide
  (1-2 days continental US: $6)
  (3-4 days door-to-door to Pakistan from Indianapolis: $21)
  ... much, much cheaper than even the cheapest UPS residential ground
- UPS has well-developed XML APIs; DHL doesn't
Ed



Re: [Templates] Re: Separating Aspects (Re: separating C from V in MVC)

2002-06-07 Thread Ed

On Fri, Jun 07, 2002 at 09:14:25AM +0100, Tony Bowden wrote:
 On Thu, Jun 06, 2002 at 05:08:56PM -0400, Sam Tregar wrote:
   Suppose you have a model object for a concert which includes a date.  On
   one page, the designers want to display the date in a verbose way with
   the month spelled out, but on another they want it abbreviated and fixed
   length so that dates line up nicely.  Would you put that formatting in
   the controller?
  In the script:
  
   $template->param(long_date  => $long_date,
                    short_date => $short_date);
  In the template:
  
   The long date: <tmpl_var long_date> <br>
   The short date: <tmpl_var short_date>
 
 Can I vote for yick on this?
 
 A designer should never have to come to a programmer just to change the
 formatting of a date.
 
 I'm a huge fan of passing Date::Simple objects, which can then take a
 strftime format string:
 
   [% date.format("%d %b %y") %]
   [% date.format("%Y-%m-%d") %]
 
 Tony
 

xmlns:date="http://exslt.org/dates-and-times"  wins for me.

date:date-time()
date:date()
date:time()
date:month-name()
... etc

xslt solutions win for me because they're supported (or seem to be)
by many major languages and applications. 

xslt stylesheets can be processed, reused and shared with my c,perl,
java,javascript, ruby, mozilla, ieexplorer ... kde apps, gnome apps
... etc

Imagine having your templates and data supported and interoperable ...

Aren't we trying to rid the world of proprietary (only works here) things?

Ed   (an axkit lover)



Re: [RFC] Dynamic image generator handler

2002-05-10 Thread Ed

On Fri, May 10, 2002 at 10:46:11AM -0700, Michael A Nachbaur wrote:
 On Fri, 10 May 2002 08:32:55 +0200
 Robert [EMAIL PROTECTED] wrote:
 
  Take a look at Apache::ImageMagick
 
 In my benchmarks I ran, ImageMagick was way slower than GD.  I wrote a
 little test, rendering a little text image of 120x30.  With ImageMagick,
 I was getting 0.3 rps, and under GD with similar circumstances I was
 getting 1.5rps.  I'm sure I could've optimized the ImageMagick one a bit
 further, but that quick test settled it for me.
 
 I looked at Apache::ImageMagick last night however, and although it
 seems pretty usefull, it doesn't really address what I want to do with
 my module.

I'm using Imlib2 w/ the c interface (http://freshmeat.net/projects/imlib2perl/)

I needed antialiased lines, alphas, etc.  I modified my app to use a 'dbi'-like
interface for potentially any media driver.  The different 'media drivers'
(gd, imlib2, *pdf/*tex etc.) all have different ideas of how to draw a line,
circle, polygon or text, add colors, etc.  Now to use a different library all
I have to do is Media->new(Driver => 'imlib2'), or Media->new(Driver => 'gd'),
Media->new(Driver => 'svg'), Media->new(Driver => 'pdflib') and so on.

There are many libraries out there: gd, imlib, imlib2, libart, povray, gdk,
flash, pdfAPI2, pdflib, tex, latex, svg, imager, imagemagick, ...

There are many good reasons to be able to 'just drop in' a driver ... just
look at why the unified 'DBI' interface was developed for RDBMSs.
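A stripped-down sketch of that DBI-style dispatch; the driver classes here are stubs standing in for the real GD/Imlib2 bindings:

```perl
use strict;
use warnings;

# Each driver knows its own way to draw; Media->new() just picks the
# class by name, so callers never touch the library-specific API.
package Media::Driver::gd;
sub new  { bless {}, shift }
sub line { my ($self, @pts) = @_; return "gd-line(@pts)" }

package Media::Driver::imlib2;
sub new  { bless {}, shift }
sub line { my ($self, @pts) = @_; return "imlib2-line(@pts)" }

package Media;
sub new {
    my ($class, %args) = @_;
    my $driver = "Media::Driver::$args{Driver}";
    return $driver->new;
}

package main;
my $m = Media->new(Driver => 'imlib2');
print $m->line(0, 0, 10, 10), "\n";   # imlib2-line(0 0 10 10)
```

Swapping backends is then a one-argument change, exactly as with DBI's DSN.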

Ed






Re: [RFC] Dynamic image generator handler

2002-05-10 Thread Ed
. The configuration for a preset config
 template would be layered, so the earlier the definition, the lower the
 layer is. The real important part here, is the name attribute of any
 element, as this identifies where input can be indicated. The above
 preset could be used by invoking the following URI.


I used CSS.pm for a bit but it was too fat w/ Parse::RecDescent. To unify my
app and the browser I use axkit to 'generate' the css from an xml file.

<css>
 <selector name="back">
    <color>black</color>
    <font-family>geneva</font-family>
    <font-family>arial</font-family>
    <font-size>7px</font-size>
    <background-color>white</background-color>
 </selector>
</css>

.back {
color: black;
font-family: geneva, arial;
font-size: 7px;
background-color: white;
}

Creating complicated css files is difficult, but my drawing app can load its
info from the uri, an xml file, an RDBMS, Config::General, inifiles or whatever,
and use different output methods such as axkit's providers to parse the
'color config' and render the *.css file to the browser.

This way the document, style, skin and images are all unified.
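A toy version of that generation step, going from a Perl structure (standing in for the parsed XML, since the real thing runs through AxKit) to CSS, with repeated properties like font-family collapsed into one comma-separated value:

```perl
use strict;
use warnings;

# Selector data as it might come out of the XML above; repeated
# properties (font-family) arrive as separate pairs and get joined.
my %css = (
    back => [
        [ 'color'            => 'black' ],
        [ 'font-family'      => 'geneva' ],
        [ 'font-family'      => 'arial' ],
        [ 'font-size'        => '7px' ],
        [ 'background-color' => 'white' ],
    ],
);

sub render_css {
    my (%sel) = @_;
    my $out = '';
    for my $name (sort keys %sel) {
        my (%merged, @order);
        for my $pair (@{ $sel{$name} }) {
            my ($prop, $val) = @$pair;
            push @order, $prop unless exists $merged{$prop};
            push @{ $merged{$prop} }, $val;
        }
        $out .= ".$name {\n";
        $out .= "    $_: " . join(', ', @{ $merged{$_} }) . ";\n" for @order;
        $out .= "}\n";
    }
    return $out;
}

print render_css(%css);
```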

Graphics::ColorNames works wonderfully to help handle all the different color
needs.

 
   http://localhost/genImage/preset=thumbnail-image;src=/images/ducks.jpg
 
 As you can see, the preset is invoked by passing its name as an
 attribute, and any element that has a name attribute can have its value
 provided on the URI. If an element has both a value and a name
 attribute, the value in the config file can be used as a default.
 
 *) Caching Schemes
 
 A caching scheme similar to AxKit could be used. The current module
 takes all the input arguments, sorts them (including all values that are
 not provided, for completeness), and takes its MD5 checksum. That
 becomes the image's filename on the system. It is placed in a temporary
 directory, and any further requests to that same URI, the file is pulled
 from the filesystem without regenerating the image. Further, the code
 has been blatantly ripped off from AxKit, which separates the directory
 into two sub-levels, to prevent performance problems of having too many
 files in one directory.
 
 Note: To prevent the filesystem from filling up, due to DoS attacks, it
 may be prudent to have a cron job periodically cull files that have the
 oldest access time.


Cache::Cache is appropriate here ... 



 
 *) Image Manipulation Modules
 
 My current code uses GD for text writing, and I'm quite happy with it.
 It is extremely fast, and creates nice text output when compiled with a
 TTF font engine. Looking forward however, it may not be as desirable if
 things like drop shadows is to be done. GD can work with multiple
 images, can resize them, etc, but the advanced features are still
 unknown.
 
 *) File Expiration Headers and Browser Caching
 
 With my current code, it seems that browsers are reluctant to cache
 these dynamically generated images. I have passed Expires: headers to
 tell the browser to cache the file for a long period of time (2+
 weeks), but I have been unsuccessful. I know the caching headers are
 complex, and needs more than one simple header, but fixing this has
 moved to the back-burner of my project. However, if more complicated
 processing is to be done, and with more images, it will be crucial to
 make browsers cache these images.


I create a digest w/ MD5 or SHA1 for the image/pdf and use it as the filename
and the Cache::Cache key.

The cache is easily invalidated if the source image file fails a -e test.
I also use cron to delete stale image files.

The generated, now-static image is redirected-to or referenced in the html.
I found that it is important to complete the processing of the images before
the referencing html document gets served, rather than having the html
document initiate dynamic embedded links to create the image.  Letting apache
serve images as static image-files has proved rock solid for me
... (note keep-alives).  There is nothing worse than to pull a page and have
to wait for each of the images to show up.

Browsers, proxies and users are all real pains to deal w/ when the uri has
a query string.  Digest's are ugly but they play much better w/ everybody.

014d1c89fc3da6e15e0069000dfa381e44239af71021057594.png
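With core Digest::MD5 the key generation is short; a sketch, assuming the source path plus all rendering parameters make up the key (the trailing digits in the name above are a separate suffix):

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Build a stable cache filename from the source image plus all
# rendering parameters, sorted so argument order can't change the key.
sub cache_name {
    my (%args) = @_;
    my $key = join ';', map { "$_=$args{$_}" } sort keys %args;
    return md5_hex($key) . '.png';
}

my $name = cache_name(src => '/images/ducks.jpg', w => 120, h => 30);
# always 32 hex chars + ".png", identical for identical parameter sets
```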

Ed




Re: [announce] mod_perl-1.99_01

2002-04-10 Thread Ed Grimm

On Mon, 8 Apr 2002, Stas Bekman wrote:

 Ged Haywood wrote:
 Compilations should be SILENT unless something goes wrong.

 The build process takes time, if you don't give an indication
 of what's going on users will try to do funky things. Since the
 build process is comprised of many small sub-processes you cannot
 really use something like completion bar.

As someone said, redirect the output to a temporary location.  But, add
to that one of those little | bars, which turns one position every time
another build step completes (each file compiled, each dependency file
built, etc.).  However, in the case of an error, I would want the whole
thing available.  Possibly something along the lines of, the last build
step and all output from then on printed to stdout (or stderr), ended
with, For the full build log, see /tmp/mod_perl.build.3942 or some
such.
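Something like this, a sketch only; a real build would call advance() from whatever hook fires after each compile step:

```perl
use strict;
use warnings;

# Cycle through | / - \ , backspacing over the previous character so
# the spinner turns in place; advance() is called once per build step.
{
    my @frames = ('|', '/', '-', '\\');
    my $i = 0;
    sub advance {
        my $c = $frames[ $i++ % @frames ];
        print "\b$c";
        return $c;
    }
}

$| = 1;                    # unbuffer so each frame shows immediately
advance() for 1 .. 8;      # eight stand-in build steps
print "\bdone.\n";
```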

 Also remember that mod_perl build prints very little,
 it's the compilation messages that generated most of the output.
 p.s. I don't recall seeing silent build processes at all.

The only ones I've seen went too far the other way.  I especially loved
the one which used a shell script, which started out with a dozen large
shell functions, then an 'exec >/dev/null 2>/dev/null', then a
half-dozen more large shell functions, and ending with 'main $'.
When the shell script finished, its caller checked its exit code, and
reported 'done.' or 'failed.' as appropriate.  Admittedly, I wouldn't
have minded too much, except that I'd gotten the latter answer.

Ed




Re: proxy front to modperl back with 1.3.24

2002-04-06 Thread Ed

FYI,

There is a patch this morning from the mod_proxy maintainer.

http://marc.theaimsgroup.com/?l=apache-httpd-dev&m=101810478231242&w=2

Ed

On Fri, Apr 05, 2002 at 02:33:35PM -0800, ___cliff rayman___ wrote:
 i had trouble using a proxy front end to both
 a mod_perl and mod_php back end servers.
 
 this works fine for me at 1.3.23, so I reverted
 back to it.  i copied the httpd.conf files
 from the 1.3.24 to my downgraded 1.3.23
 and everything worked correctly on the first
 try.
 
 i was getting garbage characters before the first
 html or doctype tag, and a 0 character at
 the end.  also, there was a delay before the
 connection would close.  i tried turning keep
 alives off and on in the back end server,
 but i did not note a change in behavior
 i also tried some different buffer directives,
 including the new ProxyIOBufferSize.
 
 these garbage characters and delays were
 not present serving static content from the
 front end server, or when directly requesting
 content directly from either of the back end
 servers.
 
 i know they've made some mods to the
 proxy module, including support for HTTP/1.1,
 but i did not have time to research the exact
 cause of the problem.
 
 just a word of warning before someone
 spends hours in frustration, or perhaps
 someone can give me a tip if they've
 solved this problem.
 
 --
 ___cliff [EMAIL PROTECTED]http://www.genwax.com/
 
 



Re: Apache::DBI or What ?

2002-03-25 Thread Ed Grimm

On Sun, 24 Mar 2002, Andrew Ho wrote:

What would be ideal is if the database would allow you to change the
user on the current connection.  I know PostgreSQL will allow this
using the command line interface psql tool (just do \connect
database user), but I'm not sure if you can do this using DBI.

Does anyone know if any databases support this sort of thing?
 
 This occurred to me in the case of Oracle (one of my co-workers was
 facing a very similar problem in the preliminary stages of one of his
 designs), and I actually had asked our DBAs about this (since the
 Oracle SQL*Plus also allows you to change users). As I suspected (from
 the similar connect terminology), our DBAs confirmed that Oracle
 just does a disconnect and reconnect under the hood. I would bet the
 psql client does the same thing.

First, I'll suggest that there are hopefully other areas you can look at
optimizing that will get you a bigger bang for your time - in my test
environment (old hardware), it takes 7.4 ms per
disconnect/reconnect/rebind and 4.8 ms per rebind.  Admittedly, I'm
dealing with LDAP instead of SQL, and I've no idea how they compare.

If the TCP connection were retained, this could still be a significant
win.  *Any* reduction in the connection overhead is an improvement.  If
there are a million connects per day, and this saves a milli-second per
connect (believable to me, as at least three packets don't need to be
sent - syn, syn ack, and fin.  My TCP's a bit fuzzy, but I think there's
a couple more, and there's also the mod_perl disconnect/reconnect
overhead), that's over 15 minutes of response time and about 560,000,000
bits of network bandwidth (assuming the DB is not on the same machine)
saved.  Admittedly, at 100Mb/s, that's only 6 seconds.
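The arithmetic above, spelled out:

```perl
use strict;
use warnings;

my $connects_per_day  = 1_000_000;
my $saved_per_connect = 0.001;     # one millisecond per connect
my $saved_seconds     = $connects_per_day * $saved_per_connect;
printf "%.1f minutes of response time saved\n", $saved_seconds / 60;   # 16.7

my $bits_saved = 560_000_000;      # estimated network bandwidth saved
printf "%.1f seconds at 100Mb/s\n", $bits_saved / 100_000_000;         # 5.6
```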

It may, in some cases, still be necessary to move access control from
the DB into ones application, so one can maintain a single connection
which never rebinds, but I think it's better to utilize the security in
the DB instead of coding ones own - more eyes have looked over it.
We're talking about a fairly small unit of time; it may very well be
better to throw money if you are near your performance limit.

Ed





Re: Performace...

2002-03-24 Thread Ed Grimm

On Sun, 24 Mar 2002, Kee Hinckley wrote:
 At 2:27 PM -0500 3/23/02, Geoffrey Young wrote:

you might be interested in Joshua Chamas' ongoing benchmark project:

http://mathforum.org/epigone/modperl/sercrerdprou/[EMAIL PROTECTED]
http://www.chamas.com/bench/

he has the results from a benchmark of Apache::Registry and plain 
handlers, as well as comparisons between HTML::Mason, Embperl, and 
other templating engines.
 
 Although there are lots of qualifiers on those benchmarks, I consider 
 them rather dangerous anyway.  They are Hello World benchmarks, in 
 which startup time completely dominates the time. The things that 

That explains why Embperl did so poorly compared to PHP, yet when we
replaced our PHP pages with Embperl, our benchmarks with real user
queries (sending the same queries through the old and new pages) showed
a 50% performance boost on the new pages.

Note: that gain was enough to saturate our test network.  Our purpose
for the benchmark was to determine if it was an improvement or not, not
to determine the exact improvement, so we don't really know what the
real gain was.  The same machines do several other tasks, and our
monitoring at the time of change was not very sophisticated, so we only
really know it was a big win.  Something on the order of 37 load issues
the week before the change, most of which were fairly obviously web
overload, and two the week after (those two being very obviously
associated with other services the boxes are running.)

Ed




Re: 'Pinning' the root apache process in memory with mlockall

2002-03-22 Thread Ed Grimm
 respond at all, as it'll be heavily engaged in swapping process.

Yes, this is why we want to lock the memory.

Ed





Re: Berkeley DB 4.0.14 not releasing lockers under mod_perl

2002-03-21 Thread Ed Grimm

Does shutting down apache free up your locks?

(As an aside, I'm not sure I'll ever get over undef being the proper closing
of a database connection; it seems so synonymous to free([23]).  I
expect something like $db->db_close() or something.)

Ed

On Thu, 21 Mar 2002, Dan Wilga wrote:

 At 2:03 PM -0500 3/21/02, Aaron Ross wrote:

  I'm testing with the Perl script below, with the filename ending
  .mperl (which, in my configuration, causes it to run as a mod_perl
  registry script).

  I would re-write it as a handler and see if Apache::Registry is partly
to blame.
 
 I tried doing it as a handler, using the configuration below (and the 
 appropriate changes in the source) and the problem persists. So it 
 doesn't seem to be Registry's fault.
 
 <Location /dan>
  SetHandler perl-script
  PerlHandler DanTest
 </Location>
 
  source code 
 
 #!/usr/bin/perl
 
 package DanTest;
 
 use strict;
 use BerkeleyDB qw( DB_CREATE DB_INIT_MPOOL DB_INIT_CDB );
 
 my $dir='/home/httpd/some/path';
 
 sub handler {
   system( "rm $dir/__db* $dir/TESTdb" );
 
   foreach( 1..5 ) {
   my $env = open_env($dir);
   my %hash;
   my $db = open_db( TESTdb, \%hash, $env );
   untie %hash;
   undef $db;
   undef $env;
   }
   print "HTTP/1.1 200\nContent-type: text/plain\n\n";
   print `db_stat -c -h $dir`;
   print \n;
 }
 
 sub open_env {
   my $env = new BerkeleyDB::Env(
   -Flags => DB_INIT_MPOOL|DB_INIT_CDB|DB_CREATE,
   -Home  => $_[0],
   );
   die "Could not create env: $!" . $BerkeleyDB::Error . "\n" if !$env;
   return $env;
 }
 
 sub open_db {
   my( $file, $Rhash, $env ) = @_;
   my $db_key = tie( %{$Rhash}, 'BerkeleyDB::Btree',
   -Flags    => DB_CREATE,
   -Filename => $file,
   -Env      => $env );
   die "Can't open $file: $!" . $BerkeleyDB::Error . "\n" if !$db_key;
   return $db_key;
 }
 
 1;
 
 
 Dan Wilga [EMAIL PROTECTED]
 Web Technology Specialist http://www.mtholyoke.edu
 Mount Holyoke CollegeTel: 413-538-3027
 South Hadley, MA  01075Seduced by the chocolate side of the Force
 




RE: loss of shared memory in parent httpd

2002-03-16 Thread Ed Grimm

I believe I have the answer...

The problem is that the parent httpd swaps, and any new children it
creates load the portion of memory that was swapped from swap, which does
not make it copy-on-write.  The really annoying thing - when memory gets
tight, the parent is the most likely httpd process to swap, because its
memory is 99% idle.  This issue afflicts Linux, Solaris, and a bunch of
other OSes.

The solution is mlockall(2), available under Linux, Solaris, and other
POSIX.1b compliant OSes.  I've not experimented with calling it from
perl, and I've not looked at Apache enough to consider patching it
there, but this system call, if your process is run as root, will
prevent any and all swapping of your process's memory.  If your process
is not run as root, it returns an error.


The reason turning off swap works is because it forces the memory from
the parent process that was swapped out to be swapped back in.  It will
not fix those processes that have been sired after the shared memory
loss, as of Linux 2.2.15 and Solaris 2.6.  (I have not checked since
then for behavior in this regard, nor have I checked on other OSes.)

Ed

On Thu, 14 Mar 2002, Bill Marrs wrote:

 It's copy-on-write.  The swap is a write-to-disk.
 There's no such thing as sharing memory between one process on disk(/swap)
 and another in memory.
 
 agreed.   What's interesting is that if I turn swap off and back on again, 
 the sharing is restored!  So, now I'm tempted to run a crontab every 30 
 minutes that  turns the swap off and on again, just to keep the httpds 
 shared.  No Apache restart required!

 Seems like a crazy thing to do, though.
 
 You'll also want to look into tuning your paging algorithm.
 
 Yeah... I'll look into it.  If I had a way to tell the kernel to never swap 
 out any httpd process, that would be a great solution.  The kernel is 
 making a bad choice here.  By swapping, it triggers more memory usage 
 because sharing removed on the httpd process group (thus multiplied)...
 
 I've got MaxClients down to 8 now and it's still happening.  I think my 
 best course of action may be a crontab swap flusher.
 
 -bill




Re: Image Magick Alternatives?

2002-02-18 Thread Ed

On Mon, Feb 18, 2002 at 09:26:57PM -, Jonathan M. Hollin wrote:
 The WYPUG migration from Win2K to Linux is progressing very nicely.
 However, despite my best efforts, I can't get Perl Magick to work
 (Image::Magick compiled successfully and without problems).  All I use
 Perl Magick for is generating thumbnails (which seems like a waste
 anyway).  So, is there an alternative - a module that will take an image
 (gif/jpeg) and generate a thumbnail from it?  I have searched CPAN but
 haven't noticed anything suitable.  If not, is there anyone who would be
 willing to help me install Perl Magick properly?

Imager can do what you want: many formats, antialiasing, freetype, etc.

Ed



Re: [OT] RE: modperl growth

2002-02-05 Thread Ed Grimm

On Tue, 5 Feb 2002, Dave Rolsky wrote:
 On Mon, 4 Feb 2002, Andrew Ho wrote:
 
 One last thing that is hard is where is your DocumentRoot? This is a huge
 problem for web applications being installable out of the box. Perl
 can't necessarily figure that out by itself, either.
 
 You take a guess and then ask the user to confirm.  And you can't guess
 you just ask.

That's a good strategy (assuming a missing if in there somewhere).  It
can be augmented with the tactic of check for a running apache, see
where it gets its config file from, and parse the config file to get
the initial guess.  (Note that I wouldn't want this to be a final guess;
I'm using mod_perl in a virtual host config; the main apache config
doesn't use it, and has a completely unrelated docroot
(/usr/local/apache/htdocs as opposed to /home/appname/public_html))
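The config-parsing half of that guess might look like the following; it is very rough (it ignores Include directives, for one), which is exactly why it should only seed the prompt, not replace it:

```perl
use strict;
use warnings;

# Pull DocumentRoot lines out of an httpd.conf; the first one is the
# main server's, later ones belong to <VirtualHost> blocks.
sub docroots {
    my ($conf_text) = @_;
    my @roots;
    for my $line (split /\n/, $conf_text) {
        next if $line =~ /^\s*#/;                              # skip comments
        push @roots, $1 if $line =~ /^\s*DocumentRoot\s+"?([^"\s]+)"?/;
    }
    return @roots;
}

my $conf = <<'EOF';
DocumentRoot "/usr/local/apache/htdocs"
<VirtualHost *>
    DocumentRoot /home/appname/public_html
</VirtualHost>
EOF

print join("\n", docroots($conf)), "\n";
```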

 There's nothing wrong with an interactive installer.  What kills mod_perl
 apps is they simply have a README or INSTALL that says Copy all the
 template files to a directory called 'app-root' under your document root.

My what?  Which files are templates?  I don't know this unix stuff;
copy doesn't work right.

I think we've all probably heard these words before...

 I guess my point is that installation is hard. Rather than trying to make
 it work for everybody out of the box, you should make it work for the
 typical case out of the box, and then provide hooks for installing it in
 custom places.

 I think the best installer is an interactive installer that tries really
 hard to provide good defaults.

I agree; while I frequently leave unimportant considerations alone (note
my main docroot above), I tend to have very poor luck with the works
with the typical case out of the box, and then provides hooks which
change with every bloo^W^W^W^W^Wfor installing it in custom places.  I
won't go into speculations why.

Ed





Re: performance coding project? (was: Re: When to cache)

2002-01-26 Thread Ed Grimm

On Sat, 26 Jan 2002, Perrin Harkins wrote:

 It all depends on what kind of application do you have. If you code
 is CPU-bound these seemingly insignificant optimizations can have a
 very significant influence on the overall service performance.

 Do such beasts really exist?  I mean, I guess they must, but I've
 never seen a mod_perl application that was CPU-bound.  They always
 seem to be constrained by database speed and memory.

I've seen one.  However, it was much like a normal performance problem -
the issue was with one loop which ran one line which was quite
pathological.  Replacing loop with an s///eg construct eliminated the
problem; there was no need for seemingly insignificant optimizations.
(Actually, the problem was *created* by premature optimization - the
coder had utilized code that was more efficient than s/// in one special
case, to handle a vastly different instance.)

However, there could conceivably be code which was more of a performance
issue, especially when the mod_perl utilizes a very successful cache on
a high traffic site.

 On the other hand how often do you get a chance to profile your code
 and see how to improve its speed in the real world. Managers never
 plan for debugging period, not talking about optimizations periods.

Unless there's already a problem, and you have a good manager.  We've
had a couple of instances where we were given time (on the schedule,
before the release) to improve speed after a release.  It's quite rare,
though, and I've never seen it for a mod_perl project.

Ed




Re: UI Regression Testing

2002-01-25 Thread Ed Grimm

On Sat, 26 Jan 2002, Gunther Birznieks wrote:

 I agree that testing is great, but I think it is quite hard in practice. 
 Also, I don't think programmers are good to be the main people to write 
 their own tests. It is OK for programmers to write their own tests but 
 frequently it is the user or a non-technical person who is best at doing 
  the unexpected things that are really where the bug lies.

My experience is that the best testers come from technical support,
although this is not to suggest that all technical support individuals
are good at this; even among this group, it's rare.  Users or other
non-technical people may find a few more bugs, but frequently, the
non-technical people don't have the ability to correctly convey how to
reproduce the problems, or even what the problem was.  I clicked on the
thingy, and it didn't work.

This being said, users and tech support can't create unit tests; they're
not in a position to.

 Finally, unit tests do not guarantee an understanding of the specs because 
 the business people generally do not read test code. So all the time spent 
 writing the test AND then writing the program AND ONLY THEN showing it to 
 the users, then you discover it wasn't what the user actually wanted. So 2x 
 the coding time has been invalidated when if the user was shown a prototype 
 BEFORE the testing coding commenced, then the user could have confirmed or 
 denied the basic logic.

For your understanding of the spec, you use functional tests.  If your
functional test suite uses test rules which the users can understand,
you can get the users to double-check them.

For example, at work, we use a suite which uses a rendered web page as
its test output, and the input can be sent to a web page to populate a
form; this can be read by most people who can use the application.

Unit software is a means of satisfying a spec, but it doesn't satisfy
the spec itself - if it did, you'd be talking about the entire package,
and therefore referring to functional testing.  (At least, this is the
way I distinguish between them.)

Admittedly, we are a bit lacking in our rules, last I checked.

Ed




Re: Single login/sign-on for different web apps?

2002-01-20 Thread Ed Grimm

On Wed, 16 Jan 2002, Paul Lindner wrote:
 On Wed, Jan 16, 2002 at 06:56:37PM -0500, Vsevolod Ilyushchenko wrote:
 
  3) Perl-based applications can just use the module and the common key
 to decrypt the contents of the cookie to find the authenticated
 username.  If the cookie is not present redirect to the central
 authentication page, passing in the URL to return to after
 authentication.
 
 Hmmm... Can I do it securely without using Kerberos? I think so. Looks like
 if I use https instead of http, people won't be able to steal my (encoded)
 session information as it is transmitted. And I can also add the IP address
 to the cookie information.
 
 But the cookies file might be readable by other people! If they can steal
 that file and change the IP address of another machine to yours, they can
 pretend they are you!
 I wonder if there is a way out of this...
 
 Yes, you use the timestamp.  Just reauthenticate the user when they
 try to do 'sensitive' activities.

No, use session cookies - they're not stored on disk.  If you need the
system to retain knowledge through a browser shutdown, you can use a
timestamped cookie to retain the user ID, but don't have it allow them
to do anything other than not have to type their user ID in again
(password screen has user ID filled out for them.)

One can also mark the cookies such that they'll only be transmitted over
https.

  $cookie = CGI::Cookie->new(-name   => 'login',
 -value  =>
 tgape::setcookiepassword($uid, $pass),
 -domain => '.tgape.org',
 -path   => '/',
 -secure => 1,
 );

If you feel the need to timestamp your session cookies, make the cookie
include an encrypted timestamp.
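One way to do that with core Digest::SHA, signing the timestamp so the client can't forge or rewind it (key handling here is deliberately simplified; a real setup keeps the secret out of the code):

```perl
use strict;
use warnings;
use Digest::SHA qw(hmac_sha256_hex);

my $key = 'server-side-secret';   # hypothetical; never sent to the client

# Cookie value is "timestamp.HMAC(timestamp)"; verify() re-computes the
# MAC and checks the age, so a tampered or stale stamp is rejected.
sub stamp {
    my ($time) = @_;
    return "$time." . hmac_sha256_hex($time, $key);
}

sub verify {
    my ($value, $now, $max_age) = @_;
    my ($time, $mac) = split /\./, $value, 2;
    return 0 unless defined $mac
        && $mac eq hmac_sha256_hex($time, $key);
    return $now - $time <= $max_age ? 1 : 0;
}

my $cookie = stamp(time());
print verify($cookie, time(), 600) ? "fresh\n" : "stale\n";   # fresh
```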

 For example you might allow someone to view their bank balance if they
 typed their password within the last 2 hours.  Transferring money
 might require a valid password within the last 10 minutes..

Ah, but many systems will refresh a cookie on activity.  So they view
their balance, get a new cookie, and then transfer money.

 Of course, the best authentication system for banking I've seen is
 from UBS.  They send you a scratchlist of around 100 numbers.  Every
 time you login you use one of the numbers and cross it off.  Very
 slick.

All I need to do is find where you left the list.  Written passwords are
not anywhere near as secure as memorized passwords, unless the person
carrying them around is really conscious about security concerns.

Ed




RE: mod_perl beginners list

2002-01-20 Thread Ed Grimm

On Tue, 15 Jan 2002, Robert Landrum wrote:
 At 10:22 PM + 1/15/02, Matt Sergeant wrote:
On Tue, 15 Jan 2002, Robert Landrum wrote:

 I've seen nothing on this list that suggests that new users shouldn't
 ask questions.  If they don't ask questions because they're afraid
 of the response they might get, then maybe they should stay home and
 leave the programming to those people who have mettle to ask.

I know where the sentiment comes from, but I really hope people don't read
that and stay away in fear. Really folks, we're friendly here, so long as
you play by the rules: quote cleanly, don't post HTML, and ask politely.
 
 Absolutely.  My response was addressing Joe's statement that users are 
 too intimidated to post.  I disagree.  True programmers know no fear.

True programmers know no fear of computers.  However, any programmer who
knows fear of RTFM would likely not post to any perl mailing list that
didn't have beginner or newbie in the name, due to having experience
with such lists before ever hearing about mod_perl.

Count me as someone who would be interested in being on the list, though
not necessarily very active.

Ed




Re: Single login/sign-on for different web apps?

2002-01-20 Thread Ed Grimm

No.  There are very important reasons why Apache by default puts an ACL
restricting .ht* from being viewable.  (Basically, the password encryption
used in said file is moderately easily cracked via brute force.)
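The weakness shows up in the hash format itself: traditional crypt(), as used in .htpasswd, takes a two-character salt and keys on only the first eight password characters (a sketch; exact behavior depends on the platform's crypt implementation):

```perl
use strict;
use warnings;

# Classic DES crypt(): 2-character salt, and only the first 8
# characters of the password are significant, so these collide.
my $salt = 'ab';
my $h1 = crypt('password1', $salt);
my $h2 = crypt('password2', $salt);
print "truncated to 8 chars\n" if $h1 eq $h2;   # both hash as "password"
```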

One could use a file distributed using rsync(1) or some such (preferably
with RSYNC_RSH=ssh).  However, that's still a bit on the insecure side,
unless you really do trust everyone who is running one of these web
servers.

Ed

On Wed, 16 Jan 2002, Medi Montaseri wrote:

 I wonder if one could change the HTTP Server's behavior to process a
 distributed version of AuthUserFile and AuthGroupFile.
 
 That instead of
 
 AuthUserFile /some/secure/directory/.htpasswd
 
 One would say
 
 AuthUserFile "http://xyz.com/some/directory/htpasswd"
 
 Then write a GUI (web) inteface to this password and group file and
 you have distributed authentication system.
 
 Ed Grimm wrote:
 
  On Wed, 16 Jan 2002, Medi Montaseri wrote:
 
   I think the Netegrity single sign-on system modifies the HTTP server
   (possibly with mod_perl) to overload or override its native
   authentication and instead contact a Host, Database or LDAP to get
   the yes or no along with expiration data; it then sends its finding
   to the CGI by sending additional HTTP Header info. A CGI program can
   then go from there...
 
  Something like this.  Basically, it has modules, plugins, or access
  instructions, as appropriate, for various web servers, to configure them
  to use it.  I know it gives an LDAP interface, and I'm assuming it gives
  an LDAPS interface; I'm not sure what others it may present.
 
   I might not have this 100%, but perhaps we can learn from those
   commercial products.
  
   Someone suggested using LDAP and RDBMS, my question is why both, why
   not just RDBMS.
 
  Why not just LDAP?  As someone working on rolling out a single sign-on
  solution with LDAPS, I really want to know...  (We're planning on
  getting Netegrity for its distributed administration stuff; at that
  time, we'll start using its web server configuration stuff for any new
  web servers.  Until that time, we're rolling out LDAPS, and we're not
   currently planning on converting systems we roll out in the interim to
  Netegrity.)
 
  Incidentally, we're being a bunch of lazy bums, compared to the rest of
  y'all.  We're considering single sign-on to mean they only need to keep
  track of one userid and password (unless they need to access classified
  or otherwise restricted stuff.)  If they go to different sites and have
  to log on again, we don't currently care.  (Basically, we have too many
  sites created by too many groups.  We'll share the same login between
  servers run by the same group, but beyond that, security concerns
   outweigh user convenience.)
 
  Ed
 
   Aaron Johnson wrote:
  
   We are working on/with a similar system right now.
  
   We have an application that is written in Perl, but the people
   visiting will most likely be signing on at a different point then our
   applications sign in page. Our system was built to use its own
   internal database for authentication and their app/login uses a
   different method.  The design requirements were that each side would
   have to do as little possible to modify there application to work in
   our single sign on solution.
  
   We have the luxury of not being overly concerned with the security
   aspect so please keep that in mind.
  
   We setup a nightly sync program that verifies the data in the current
   database vs. their login user information database.
  
   Here is a less then detailed summary of how the system operates.
  
   1) The user logs into the application through their application and
   they are sent a cookie that contains the user name.
  
   2) All links to our application are sent to a single page on their
   end with the full url of the page they want as part of the query
   string.
  
   3) They verify that the user is valid using whatever method they
   want.
  
   4) The user is then redirected to a special page in our application
   that expects the query string to contain two items, the user name and
   the final URL to go to.
  
   5) Our application verifies the HTTP_REFFERER and the query string
   contains valid values.
  
   6) Our application checks the database for a user matching the name
   sent in. Then if the user already has a session if they do then they
   are redirected to the correct page, otherwise it does a lookup in our
   system to create a session for the user based on the incoming user
   name and then redirects to the final URL.
  
   Now a user can go between the two applications without concern since
   they have a cookie for each domain.
  
   If the user comes to our site the reverse of the above occurs.
  
   This allowed us to plug into existing applications without a lot of
   rework. It is also fairly language/platform independent.
  
   As stated above I know there are some large security issues with this
   approach

Re: Single login/sign-on for different web apps?

2002-01-20 Thread Ed Grimm

On Wed, 16 Jan 2002, Medi Montaseri wrote:

 I think Netegrity's single sign-on system modifies the HTTP server
 (possibly with mod_perl) to overload or override its native
 authentication and instead contacts a host, database or LDAP to get
 the yes or no along with expiration data; it then sends its findings
 to the CGI by sending additional HTTP header info. A CGI program can
 then go from there...

Something like this.  Basically, it has modules, plugins, or access
instructions, as appropriate, for various web servers, to configure them
to use it.  I know it gives an LDAP interface, and I'm assuming it gives
an LDAPS interface; I'm not sure what others it may present.

 I might not have this 100%, but perhaps we can learn from those
 commercial products.

 Someone suggested using LDAP and RDBMS, my question is why both, why
 not just RDBMS.

Why not just LDAP?  As someone working on rolling out a single sign-on
solution with LDAPS, I really want to know...  (We're planning on
getting Netegrity for its distributed administration stuff; at that
time, we'll start using its web server configuration stuff for any new
web servers.  Until that time, we're rolling out LDAPS, and we're not
currently planning on converting systems we roll out in the interim to
Netegrity.)

Incidentally, we're being a bunch of lazy bums, compared to the rest of
y'all.  We're considering single sign-on to mean they only need to keep
track of one userid and password (unless they need to access classified
or otherwise restricted stuff.)  If they go to different sites and have
to log on again, we don't currently care.  (Basically, we have too many
sites created by too many groups.  We'll share the same login between
servers run by the same group, but beyond that, security concerns
outweigh user convenience.)

Ed

 Aaron Johnson wrote:
 
 We are working on/with a similar system right now.

 We have an application that is written in Perl, but the people
 visiting will most likely be signing on at a different point than our
 application's sign-in page. Our system was built to use its own
 internal database for authentication and their app/login uses a
 different method.  The design requirements were that each side would
 have to do as little as possible to modify their application to work
 in our single sign-on solution.

 We have the luxury of not being overly concerned with the security
 aspect so please keep that in mind.

 We setup a nightly sync program that verifies the data in the current
 database vs. their login user information database.

 Here is a less than detailed summary of how the system operates.

 1) The user logs into the application through their application and
 they are sent a cookie that contains the user name.

 2) All links to our application are sent to a single page on their
 end with the full url of the page they want as part of the query
 string.

 3) They verify that the user is valid using whatever method they
 want.

 4) The user is then redirected to a special page in our application
 that expects the query string to contain two items, the user name and
 the final URL to go to.

 5) Our application verifies the HTTP_REFERER and that the query
 string contains valid values.

 6) Our application checks the database for a user matching the name
 sent in. It then checks whether the user already has a session; if so,
 they are redirected to the correct page, otherwise it does a lookup in
 our system to create a session for the user based on the incoming user
 name and then redirects to the final URL.

 Now a user can go between the two applications without concern since
 they have a cookie for each domain.

 If the user comes to our site the reverse of the above occurs.

 This allowed us to plug into existing applications without a lot of
 rework. It is also fairly language/platform independent.

 As stated above I know there are some large security issues with this
 approach.
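The six-step handoff above is easy to mis-implement, so here is a minimal sketch of steps 2, 4 and 5 in Python for illustration (the applications on this list were Perl, and every host and parameter name below is hypothetical):

```python
from urllib.parse import urlencode, parse_qs, urlparse

TRUSTED_REFERRER_HOST = "partner.example.com"  # hypothetical partner site

def build_handoff_url(sso_entry, username, final_url):
    """Partner side: redirect a verified user to our SSO entry page
    with the user name and final destination in the query string."""
    return sso_entry + "?" + urlencode({"user": username, "url": final_url})

def accept_handoff(request_url, referrer):
    """Our side: check the referrer and the query string, then return
    the (username, final_url) pair to create a session for, or None."""
    if urlparse(referrer).hostname != TRUSTED_REFERRER_HOST:
        return None  # step 5: reject handoffs from unexpected hosts
    qs = parse_qs(urlparse(request_url).query)
    if "user" not in qs or "url" not in qs:
        return None
    return qs["user"][0], qs["url"][0]
```

As the poster concedes, a bare referrer check is weak; a signed token with an expiry would be the usual hardening.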

 Aaron Johnson

 Vsevolod Ilyushchenko wrote:

 Hi,

 Have you ever run into the problem of putting up many separate web
 apps on several machines in your company/university/whatever that
 are written from scratch or downloaded from the Web and each of
 which has its own user database? What would you think is a good way
 to make the system seem more cohesive for the users?

 It occurs to me that 1) for the single login they all should use the
 same user database (obviously :), and 2) for the single sign-on
 there must be a way of storing the session information. That is,
 once I login in the morning to the first app, I get a cookie, a
 ticket or something similar, and then, as I go from app to app, I
 will not have to re-enter my credentials because they are supplied
 by a cookie. Apache::Session sounds like the right tool for the job.
 (Did I hear Kerberos? :)

 Has anyone had experience with this kind of app integration? The
 downside I see is that once I settle on a particular scheme to do
 it, I will have to build this scheme into every app

Re: Single login/sign-on for different web apps?

2002-01-20 Thread Ed Grimm

On Thu, 17 Jan 2002, Gunther Birznieks wrote:

 Of course, the best authentication system for banking I've seen is
 from UBS.  They send you a scratchlist of around 100 numbers.  Every
 time you login you use one of the numbers and cross it off.  Very
 slick.
 
 Does that really work in practice? That sounds really annoying. Is this for 
 business banking or for retail? How do they get the next 100 numbers to the 
 user? Do they mail it out when they've used 90?
 
  It sounds like it would be less annoying to use certificates and some
  plug-in token if there is going to be that much extra work to deal with
  a password sheet.
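The UBS scratch list quoted above is just a consumable set of one-time codes; a toy model of the crossing-off behaviour (assumption: codes are held server-side, kept in plaintext only for illustration):

```python
class ScratchList:
    """One-time password scratch list: each code works exactly once,
    mimicking the paper list of numbers described above.  Sketch only;
    a real system would store hashed codes and mail out a fresh list
    before the old one runs out."""

    def __init__(self, codes):
        self.unused = set(codes)

    def authenticate(self, code):
        """Return True and cross the code off if it is still unused."""
        if code in self.unused:
            self.unused.remove(code)
            return True
        return False

    def remaining(self):
        return len(self.unused)
```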

Alternately, for a high-tech approach, RSA makes a nice product called a
SecurID token (Well, one of mine says Security Dynamics on the back, but
the new ones definitely say RSA).  Actually, they make two, one nice,
one not nice.  The nice one has a keypad where you enter in a pin, press
a button, and it generates a temporary id based on its serial number,
your pin, and the current time interval; the time interval changes every
minute or two.  The not nice one has no keypad; it works like the other
would if you didn't enter a pin.

I know of several companies that use these; they tend to work fairly
well.  (I had one break on me, but I gave it a lot of abuse first; it
lasted almost half of its battery span in spite of not being taken care
of.)
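The token behaviour described here, a short code derived from a secret, a PIN, and the current time interval, can be illustrated with a generic HMAC construction. This is emphatically not RSA's proprietary SecurID algorithm, just the general shape of time-interval one-time codes:

```python
import hashlib
import hmac

def interval_code(secret: bytes, pin: str, now: float, step: int = 60) -> str:
    """Derive a 6-digit one-time code from a shared secret, a PIN, and
    the current time interval (the code changes every `step` seconds).
    Generic HMAC sketch, not RSA's actual algorithm."""
    interval = int(now // step)
    mac = hmac.new(secret, f"{pin}:{interval}".encode(), hashlib.sha256)
    return f"{int.from_bytes(mac.digest()[:4], 'big') % 1_000_000:06d}"
```

The keypad-less token corresponds to calling this with a fixed, empty PIN: the code then depends only on the device secret and the clock.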

Ed




Re: load balancing on apache

2001-12-14 Thread ed phillips

Jeff Beard wrote:
 
 On Fri, 14 Dec 2001, Perrin Harkins wrote:
 
   I _really_ hate so-called dedicated boxes. They're closed, nasty,
   inflexible and often don't work in _your_ situation. Doing smart
   session-based redirection can be hard with these boxes.
 
  You can make it work with homegrown solutions, but I've found the dedicated
  load-balancing tools (at least Big/IP) to be effective and fairly easy to
  work with, even with large loads, failover requirements, and more exotic
  stuff like sticky sessions.  This is one area where the problem seems to be
  well enough defined for most people to use an off-the-shelf solution.
  They're often more expensive than they should be, but if you don't have
  someone on hand who knows the ipchains or LVS stuff it can save you some
  time and trouble.
 
 I couldn't agree more. In terms of managability and scalability,
 the various software solutions simply add complexity to something that is
  already so. I've got some experience with Alteon AceDirectors and even though
  they seem a little flaky at times, you do end up with true load balancing. (We
 have Cisco's solution deployed and they periodically have issues too.)
 
 DNS round-robin should be avoided at all costs. It's half-assed at best. In
 the case of a failure those clients that have that IP cached are SOL.
 
 On some of the systems that I've deployed we have a frontend proxy on the same box
 as the mod_perl with the mod_perl server listening on 127.0.0.1. This is
  behind an Alteon (or 2). You can put the proxy on a separate box as well, but
  I've seen some odd problems with TCP connections not working in this situation
  (which I never fully understood, but they may have had to do with the Alteon
  being flaky).
 
 Anyway, my advice is to go with a hardware load balancer/intelligent IP switch.
 In the long term, it will pay for itself in the time recovered from *not* being
 spent on troubleshooting complex problems.
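For reference, the same-box arrangement Jeff describes (a lightweight frontend forwarding to a mod_perl server bound to 127.0.0.1) can take roughly this shape in httpd.conf; Jeff's actual configuration isn't shown, and the port and path here are hypothetical:

```apache
# --- frontend httpd.conf (lightweight, no mod_perl compiled in) ---
Listen 80
ProxyRequests Off
ProxyPass        /app http://127.0.0.1:8080/app
ProxyPassReverse /app http://127.0.0.1:8080/app

# --- backend mod_perl httpd.conf, bound to the loopback only ---
Listen 127.0.0.1:8080
```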
 

yes. It's a money vs. time/knowledge thing. Plus the state of the free
software available. Anyone care to compare the features and power of
some of the opensource projects vs. the Big/IP's? Which are the more
promising opensource projects in this area?

It would be nice to use an open source solution, or at least be able to
offer it as an option, and I'd like to track the progress of some of the
more promising projects.

Ed




Re: Defeating mod_perl Persistence

2001-12-11 Thread ed phillips

Ged Haywood wrote:
 
 Hi there,
 
 On Tue, 11 Dec 2001, Jonathan M. Hollin wrote:
 
  When using Mail::Sender only the first email is sent on my mod_perl server.
  When I investigated, I realised that the socket to the SMTP server was
  staying open after the completion of that first email (presumably mod_perl
  is responsible for this persistence).
 
  Is there any way to defeat the persistence on the socket while running my
  script under mod_perl, or do such scripts always need to be mod_cgi?
 
 The idea is for the mod_perl process to complete its job and get on
 with another as quickly as possible.  Waiting around for nameserver
 timeouts and such doesn't help things.
 
 You might be better off re-thinking the design for use under mod_perl.
 This is a well-trodden path, have a browse through the archives.
 

Yes, this has come up before. Ideally you want to separate out your mail
service and pass your mails to a queue. Then, wholly independent of your
app, your smtp server can negotiate with remote hosts and generally do
its thing. That is, you shouldn't even make your app wait for your SMTP
server to send an email before you free it to handle the next request.
This is analogous to using a proxy server to handle slowish clients. See
the guide, archives.
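The decoupling Ed describes, where the app appends to a queue and an independent worker talks SMTP, can be sketched as follows (Python used for illustration; in practice the queue is usually just a local MTA's spool directory, and the `deliver` callback stands in for the real SMTP client):

```python
import queue
import threading

outbound = queue.Queue()  # stands in for a mail spool

def send_mail(msg):
    """App side: enqueue and return immediately; never wait on SMTP."""
    outbound.put(msg)

def delivery_worker(deliver, stop):
    """Independent worker: drains the queue and talks to the SMTP
    server.  Retry and timeout logic lives here, not in the web app."""
    while not (stop.is_set() and outbound.empty()):
        try:
            msg = outbound.get(timeout=0.1)
        except queue.Empty:
            continue
        deliver(msg)  # e.g. hand the message to smtplib here
        outbound.task_done()
```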


Ed



[OT]Re: The DEFINITIVE answer to: How much should I charge?

2001-10-10 Thread ed phillips

Tom Mornini wrote:
 
 This whole thread can be answered very easily:
 
 ANSWER: As much as you can.
 
 That's it! That's the entire answer. Nothing else should figure in
 unless you
 personally wish to make exceptions for any reason you see fit.
 
 Did the people who ask this question grow up and become educated in a
 part of the
 world where free markets and capitalism did not exist?
 
 Perhaps in socialistic colleges in the U.S.? :-)
 

If by socialistic colleges you mean very well funded
universities full of tenured radicals, then I'm guilty. ;-)

Those were my favorite professors!  But, I was never deluded into
thinking they had in any sense escaped the money economy. The star
tenured radicals such as Fred Jameson, for example, make as much or more
than a very well paid software developer, so one must appreciate the
ironies.

As a freelancer you charge what the market will bear. Besides the extra
cost of benefits and the added tax liabilities, one must also factor in
the assumption of risk to justify charging $100+ per hour. If you don't
feel the need to justify it, then you merely say, "I charge the market
rate." No tenured radical would begrudge you. They know on which side
their bread is buttered. ;-)

ed



Re: [OT] Re: What hourly rate to charge for programming?

2001-10-03 Thread ed phillips

Perrin Harkins wrote:
 
  Now take the amount you want to make and divide it by the number
  of hours you came up with above ($40,000 / 1,000).  You get $40.
   That's your target hourly rate.  And despite what the high-flying
   .com weenies were saying a year ago, that's going to be a nice
  living for a young guy unless you're smack in the middle of a
  high-cost area and can't bother to cook your own meals.
 
 Don't forget that self-employed people in the US must pay considerably more
 in social security, as well as covering the full cost of their own health
 insurance and other needed benefits.  $40K as a consultant is much less
 spendable money than $40K as an employee.
 - Perrin


Yes, that's an additional 7.5% for social security. In addition, you
have to take care of your own benefits, etc.

Market downturns can be a better time for contract work over full-time,
especially since stock options don't mean what they do during an upturn.
;-) And many employers don't have the resources to take on full-time
staff.

I'd recommend that you start to inch up your rate with new clients, and
that you try and see what your market will bear. Your target should be
$100+ in the U.S. for basic consulting and more for mod_perl specific
work, again if your market will bear it.
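The arithmetic behind both posts, that the same gross rate yields less spendable income once self-employment costs come out, is simple to make concrete (all figures are illustrative, not tax advice):

```python
def effective_hourly(gross, billable_hours, extra_ss_rate=0.075,
                     annual_benefits=6000.0):
    """Rough spendable-equivalent hourly rate for a contractor:
    subtract the extra social-security share and self-paid benefits
    from gross income, then divide by billable hours.  The default
    percentages and dollar figures are illustrative assumptions."""
    net = gross * (1 - extra_ss_rate) - annual_benefits
    return net / billable_hours

# $40,000 over 1,000 billable hours is $40/hour gross,
# but noticeably less once the extra costs come out.
```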

Good Luck,

Ed



Re: [ANNOUNCE] TicketMaster.com sponsors mod_perl development

2001-09-20 Thread ed phillips

Congratulations to Stas, mod_perl, and the guide.

Excellent!

Ed


Stas Bekman wrote:
 
 If you remember back in the end of April, I've posted to the list an
 unusual job seek request [1], where I was saying that I want some
 company to sponsor me to work full time on mod_perl 2.0 development.
 
 Believe it or not my unusual request has been answered by Craig McLane
 from TicketMaster.com (which owns citysearch.com).
 
 citysearch.com is a heavy user of mod_perl technology and interested in
 making sure that mod_perl technology get more and more mature and ensure
 their business' success.
 
 So starting from this September I'm working on mod_perl 2.0
 development, a new documentation project (which you are welcome to
 join) and doing mod_perl advocacy through teaching at the conferences
 and other ways.
 
 Currently the contract is for one year. But if everything goes well,
 and mod_perl 2.0 rocks the world even better than 1.x did we will see
 more support and sponsoring from TicketMaster.
 
 This email's purpose:
 
 - is to set a precedent for other business to sponsor mod_perl and
related technologies. There are at least a few excellent developers
that I know will jump on the opportunity of being able to do what
they love full time.
 
 - is to set a precedent for other developers to seek what they really
want and read less stories about hi-tech recession, since good
developers are always in demand. Therefore I hope that this email
will encourage you to do that.
 
 Notes:
 
[1] http://forum.swarthmore.edu/epigone/modperl/runvesay
 
 _
 Stas Bekman  JAm_pH --   Just Another mod_perl Hacker
 http://stason.org/   mod_perl Guide  http://perl.apache.org/guide
 mailto:[EMAIL PROTECTED]   http://apachetoday.com http://eXtropia.com/
 http://singlesheaven.com http://perl.apache.org http://perlmonth.com/



Re: [ANNOUNCE] TicketMaster.com sponsors mod_perl development

2001-09-20 Thread ed phillips

Aaron E. Ross wrote:
 
 On Fri, Sep 21, 2001 at 02:01:31AM +0800, Gunther Birznieks wrote:
  You can reach your goals.
 
  I'm living proof.
 
  beefcake.
 
  BEEFCAKE!!
 
  -- Eric Cartman
 
  LOL!  sounds like a great project stas! thanks ticketmaster!


Yeah. Kudos to Ticketmaster for supporting a great Open Source project.



Re: mod_proxy and mod_perl in guide

2001-09-17 Thread ed phillips

Thanks Vivek,

Andrei, use the front end to directly handle any binaries, static files,
etc.

I doubt they are generating these on the fly.



Vivek Khera wrote:
 
  AAV == Andrei A Voropaev [EMAIL PROTECTED] writes:
 
 AAV In our system we have to pass large PDF files thru mod_perl to
 AAV proxy and we noticed that it takes the same time as sending it
 AAV directly to customer.
 
 Why do you have to pass the PDF thru mod_perl?  Are you generating it
 on the fly?  If not, configure your proxy front end to intercept
 static documents like .pdf .txt .html etc. to be handled by the front
 end directly.  I use mod_rewrite for this, and my configs have been
 posted to this list at least twice.
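Vivek's actual configs aren't reproduced in this message, but a mod_rewrite rule of the general shape he describes, serving static extensions from the frontend and proxying the rest to the backend, would be roughly (hypothetical backend port):

```apache
RewriteEngine On
# Serve common static extensions directly from the frontend docroot...
RewriteRule \.(pdf|txt|html|gif|jpg)$ - [L]
# ...and proxy everything else through to the mod_perl backend.
RewriteRule ^/(.*)$ http://127.0.0.1:8080/$1 [P]
```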
 
 --
 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 Vivek Khera, Ph.D.Khera Communications, Inc.
 Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
 AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/



Re: AxKit configuration question

2001-09-02 Thread Ed Loehr

Robin Berjon wrote:
 
 On Saturday 01 September 2001 08:02, Ed Loehr wrote:
  There's also a note (in AxKit 1.4 change log, I think) that says that
  problem is fixed in 1.4.  Also, from 'perldoc AxKit':
 
 If you have a recent mod_perl and use mod_perl's
 Makefile.PL DO_HTTPD=1 to compile Apache for you, this
 option will be enabled automatically for you.
 
  Other clues?
 
 It's supposed to, but sometimes apparently it doesn't (I haven't been able to
 track down why, and it may be my fault).
 
 Have you tried without SSL ? It sometimes conflicts with other modules.
 Another search track would be to find out which module that AxKit pulls in
 causes the crash (if any). You could also try out the 1.5 RC (which must be
 called 1.4_9x and is probably on CPAN or mirrored somewhere), it's been quite
 stable in my experience.

I tried it without SSL, and the same problem remains:  httpd exits
silently after seemingly normal startup.  I'll try 1.5RC, axkit irc, and
debugging if I can't find a way around in my laziness.  Thanks.

Regards,
Ed Loehr



AxKit.org/axkit.apache.org timing?

2001-09-01 Thread Ed Loehr

I recently read that AxKit was in the process of becoming an ASF xml
project.  Does anyone have a sense of the timing for when this might
happen and when axkit.org/axkit.apache.org will return/arrive?

Also, does anyone know of a mirror site for axkit.org?

Regards,
Ed Loehr



Re: AxKit configuration question

2001-08-31 Thread Ed Loehr

Ed Loehr wrote:
 
 I'm attempting to install AxKit 1.4 (and 10 or so other pre-requisite
 modules) on my modperl/modssl server, and I'm trying to get the
 ultra-basic AxKit manpage example to work ('perldoc AxKit').
 
 The first sign of trouble has arisen:  httpd silently exits immediately
 after startup once I add the specified AxKit configuration to my Apache
 config files, and I have not been able to find any logging whatsoever
 yet.  I can see it is successfully loading AxKit.pm, and producing
 seemingly all of the normal Apache log startup messages I usually see.
 It's just that when I go to the ps table, it is not there.  Does anyone
 have a clue to offer before I recompile with debugging on?
 
   Apache/1.3.20 (Unix) mod_perl/1.25 mod_ssl/2.8.4 OpenSSL/0.9.6b

More data:  there is no core file created, and the mere presence of this
one line in my httpd.conf ...

PerlModule AxKit

...with no other AxKit directives anywhere, causes httpd to exit shortly
(< 1 sec) after starting.  I hacked AxKit.pm to verify it is loading, and
it is successfully completing its BEGIN block.

Any clues?

Regards,
Ed Loehr


This is kernel 2.2.12-20smp.

# perl -V
Summary of my perl5 (5.0 patchlevel 5 subversion 3) configuration:
  Platform:
osname=linux, osvers=2.2.5-22smp, archname=i386-linux
uname='linux porky.devel.redhat.com 2.2.5-22smp #1 smp wed jun 2
09:11:51 edt 1999 i686 unknown '
hint=recommended, useposix=true, d_sigaction=define
usethreads=undef useperlio=undef d_sfio=undef
  Compiler:
cc='cc', optimize='-O2', gccversion=egcs-2.91.66 19990314/Linux
(egcs-1.1.2 release)
cppflags='-Dbool=char -DHAS_BOOL -I/usr/local/include'
ccflags ='-Dbool=char -DHAS_BOOL -I/usr/local/include'
stdchar='char', d_stdstdio=undef, usevfork=false
intsize=4, longsize=4, ptrsize=4, doublesize=8
d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12
alignbytes=4, usemymalloc=n, prototype=define
  Linker and Libraries:
ld='cc', ldflags =' -L/usr/local/lib'
libpth=/usr/local/lib /lib /usr/lib
libs=-lnsl -ldl -lm -lc -lposix -lcrypt
libc=, so=so, useshrplib=false, libperl=libperl.a
  Dynamic Linking:
dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-rdynamic'
cccdlflags='-fpic', lddlflags='-shared -L/usr/local/lib'


Characteristics of this binary (from libperl): 
  Built under linux
  Compiled at Aug 30 1999 23:09:51
  @INC:
/usr/lib/perl5/5.00503/i386-linux
/usr/lib/perl5/5.00503
/usr/lib/perl5/site_perl/5.005/i386-linux
/usr/lib/perl5/site_perl/5.005


# httpd -V
Server version: Apache/1.3.20 (Unix)
Server built:   Aug 31 2001 21:07:15
...
Server compiled with
 -D EAPI
 -D HAVE_MMAP
 -D HAVE_SHMGET
 -D USE_SHMGET_SCOREBOARD
 -D USE_MMAP_FILES
 -D USE_SYSVSEM_SERIALIZED_ACCEPT
 -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
 -D HTTPD_ROOT=/usr/local/apache_ssl-2.8.4-1.3.20
 -D SUEXEC_BIN=/usr/local/apache_ssl-2.8.4-1.3.20/bin/suexec
 -D DEFAULT_PIDLOG=logs/httpd.pid
 -D DEFAULT_SCOREBOARD=logs/httpd.scoreboard
 -D DEFAULT_LOCKFILE=logs/httpd.lock
 -D DEFAULT_XFERLOG=logs/access_log
 -D DEFAULT_ERRORLOG=logs/error_log
 -D TYPES_CONFIG_FILE=conf/mime.types
 -D SERVER_CONFIG_FILE=conf/httpd.conf
 -D ACCESS_CONFIG_FILE=conf/access.conf
 -D RESOURCE_CONFIG_FILE=conf/srm.conf



AxKit configuration question

2001-08-31 Thread Ed Loehr

Hi All,

I'm attempting to install AxKit 1.4 (and 10 or so other pre-requisite
modules) on my modperl/modssl server, and I'm trying to get the
ultra-basic AxKit manpage example to work ('perldoc AxKit').  

The first sign of trouble has arisen:  httpd silently exits immediately
after startup once I add the specified AxKit configuration to my Apache
config files, and I have not been able to find any logging whatsoever
yet.  I can see it is successfully loading AxKit.pm, and producing
seemingly all of the normal Apache log startup messages I usually see. 
It's just that when I go to the ps table, it is not there.  Does anyone
have a clue to offer before I recompile with debugging on?

My server's status line is 

  Apache/1.3.20 (Unix) mod_perl/1.25 mod_ssl/2.8.4 OpenSSL/0.9.6b

(it once had AxKit 1.4 in it, but I haven't been able to reproduce
that...)
 
Here's my httpd.conf file:

##
<IfModule mod_perl.c>
Include conf/perl.conf
</IfModule>
##

Here's part of conf/perl.conf:

###
PerlModule AxKit

<Location /ax>
# Install AxKit main parts
SetHandler perl-script
PerlHandler AxKit

# Setup style type mappings
AxAddStyleMap text/xsl Apache::AxKit::Language::Sablot
AxAddStyleMap application/x-xpathscript \
   Apache::AxKit::Language::XPathScript

# Optionally set a hard coded cache directory
# make sure this is writable by nobody
AxCacheDir /opt/axkit/cachedir

# turn on debugging (1 - 10)
AxDebugLevel 10
AxStackTrace On
AxLogDeclines On
</Location>
##

If I delete this section of perl.conf, the server starts and runs fine,
albeit without the desired AxKit effect!!

The only thing I have not investigated are a bunch of log messages re
Apache::Status as follows ...

Subroutine menu_item redefined at
/usr/lib/perl5/site_perl/5.005/i386-linux/Apache/Status.pm line 46.

and a bunch for mod_perl.pm ...

Subroutine Apache::Table::TIEHASH redefined at
/usr/lib/perl5/site_perl/5.005/i386-linux/mod_perl.pm line 65535.

Thanks in advance for any clues/pointers.

Ed Loehr



Re: AxKit configuration question

2001-08-31 Thread Ed Loehr

Randy Kobes wrote:
 
 On Fri, 31 Aug 2001, Ed Loehr wrote:
 
  More data:  there is no core file created, and the mere presence of this
  one line in my httpd.conf ...
 
PerlModule AxKit
 
   ...with no other AxKit directives anywhere, causes httpd to exit shortly
   (< 1 sec) after starting.  I hacked AxKit.pm to verify it is loading, and
   it is successfully completing its BEGIN block.
 
 The AxKit FAQ (which unfortunately I don't think is reachable yet
 at http://www.axkit.org/) says that there could be some problems
 with Apache's default build with expat enabled and XML::Parser's
 version of expat. The recommendation is to recompile Apache
 with --disable-rule=expat. Does this work?

I don't think so, not the way I tried it, anyway...

cd $SSL_SRC_DIR/$MOD_PERL_DIST
perl Makefile.PL \
USE_APACI=1 EVERYTHING=1 \
DO_HTTPD=1 SSL_BASE=/usr/local/ssl \
APACHE_PREFIX=$SSL_LOCAL_DIR \
APACHE_SRC=../$APACHE_DIST/src \
APACI_ARGS='--enable-module=ssl, \
--enable-module=rewrite, \
--enable-module=perl, \
...
--disable-rule=expat'
(make && date && make test && date && make install && date) | tee make.log
cd $SSL_SRC_DIR/$APACHE_DIST
make certificate TYPE=custom
make install

There's also a note (in AxKit 1.4 change log, I think) that says that
problem is fixed in 1.4.  Also, from 'perldoc AxKit':

   If you have a recent mod_perl and use mod_perl's
   Makefile.PL DO_HTTPD=1 to compile Apache for you, this
   option will be enabled automatically for you.

Other clues?

Regards,
Ed Loehr



Re: [ModPerl] missing POST args mystery

2001-07-10 Thread Ed Loehr

Ed Loehr wrote:
 
  I'm stumped ...
  In a nutshell, my problem is that POSTed form key-value pairs are
  intermittently not showing up in the request object inside my handler
  subroutine.

As I was puzzling over this, I saw this error message in the logs...

(offline mode: enter name=value pairs on standard input)

A google search turned up a note about needing to have $CGI::NO_DEBUG =
1 before calling CGI::Cookie->parse().  Adding that line of code before
my parse call seems to have fixed the problem.  At a glance, it looks like
CGI.pm was strangely set to read from the command line (default
$CGI::NO_DEBUG = 0), probably triggering a call to Apache's $r->args
somewhere along the line.  How the default setting may have changed I
don't know, because I've been using CGI.pm for years without this
problem; I may have upgraded that package, picking up a change
accidentally.

Regards,
Ed Loehr



[ModPerl] missing POST args mystery

2001-07-06 Thread Ed Loehr

I'm stumped regarding some request object behavior in modperl, and after
searching the Guide, Google, and the list archives without success, I'm
hoping someone might offer another idea I could explore, or offer some
helpful diagnostic questions.

In a nutshell, my problem is that POSTed form key-value pairs are
intermittently not showing up in the request object inside my handler
subroutine.

I have a modperl-generated form:

<HTML>
<HEAD>
<META HTTP-EQUIV=Expires CONTENT="Tue, 01 Jan 1980 1:00:00 GMT">
<META HTTP-EQUIV=Pragma CONTENT=no-cache>
<TITLE>...</TITLE>
</HEAD>
<BODY>
<FORM METHOD=POST ACTION=postform>
...
<INPUT NAME=id TYPE=HIDDEN VALUE=123>
...
</FORM>
</BODY>
</HTML>

Upon submission, the form data eventually flows to my PerlHandler... 

sub handler {
my $r = shift;
my @argsarray = ($r->method eq 'POST' ? $r->content() : $r->args());
...
}

Now, if I examine (print) the form values retrieved from the request
object upon entry into this handler (*after* I load them into $args),
'id' is not present at all.  I must be missing something trivially
obvious to some of you.

This is running Apache/1.3.19 (Unix) mod_perl/1.25 mod_ssl/2.8.3
OpenSSL/0.9.6a.

Regards,
Ed Loehr



Re: modperl/ASP and MVC design pattern

2001-04-25 Thread ed phillips

[EMAIL PROTECTED] wrote:

  Francesco, I believe that Ian was joking, hence the yikes before the name,
  so  the above post is the documentation!
 
  Ed
 

 .. so the best environment for the MVC++ design pattern is parrot/mod_parrot :)
 http://www.oreilly.com/news/parrotstory_0401.html

 Thanks
 Francesco


Exactly!

Wasn't Ian the one responsible for the mod_parrot MVC++ API?

ed






Re: Can AxKit be used as a Template Engine?

2001-04-23 Thread ed phillips

Michael Alan Dorman wrote:

 Matt Sergeant [EMAIL PROTECTED] writes:
  It depends a *lot* on the type of content on your site. The above
  www.dorado.com is brochureware, so it's not likely to need to be
  re-styled for lighter browsers, or WebTV, or WAP, or... etc. So your
  content (I'm guessing) is pure HTML, with Mason used as a fancy way
  to do SSI, with Mason components for the title bars/menus, and so
  on. (feel free to correct me if I'm wrong).

 It is more sophisticated than that, but you're basically right.  I do
 pull some tagset-like tricks for individual pages, so it's not totally
 pure HTML, but yeah, if we wanted to do WebTV we'd be fscked.

  AxKit is just as capable of doing that sort of thing, but where it
  really shines is to provide the same content in different ways,
  because you can turn the XML based content into HTML, or WebTV HTML,
  or WML, or PDF, etc.

 Ah---well a web site that does all of that isn't what first comes to
 mind when someone talks about doing a static site---though now that
 you've explained further, I believe I understand exactly what you
 intended.

  I talk about how the current Perl templating solutions (including
  Mason) aren't suited to this kind of re-styling in my AxKit talk,
  which I'm giving at the Perl conference, so go there and come see
  the talk :-)

 Heh.  I agree entirely with this assessment---I can conceptualize a way
 to do it in Mason, but the processing overhead would be unfortunate,
 the amount of handwaving involved would be enormous, and it would
 probably be rather fragile.

  So I take back that people wouldn't be using Mason for static
  content. I was just trying to find a simple way to classify these
  tools, and to some people (I'd say most people), Mason is more on
  the dynamic content side of things, and AxKit is more on the static
  content side of things, but both tools can be used for both types of
  content.
 
  (I hate getting into these things - I wish I'd never brought up
  Mason or EmbPerl)

 Well I will say that you made an excellent point that hadn't really
 occured to me---I use XML + XSL for a lot of stuff (the DTD I use for
 my resume is a deeply reworked version of one I believe you had posted
 at one time), but not web sites, in part because I'm not currently
 obligated to worry about other devices---so I don't exactly regret
 getting you to clarify things.

 Could I suggest that a better tagline would be that AxKit is superior
 when creating easily (re-)targetable sites with mostly static content?
 It might stave off more ignorant comments.

 Mike.

Matt,

I've also found your use of "static" to describe transformable or
re-targetable (unfortunate word) content to be confusing. This
discussion helps clarify things, a little. ;-)

Ed





Re: Fast DB access

2001-04-18 Thread ed phillips

Matthew Kennedy wrote:

 I'm on several postgresql mailing lists and couldn't find a recent post
 from you complaining about 6.5.3 performance problems (not even by an
 archive search). Your benchmark is worthless until you try postgresql
 7.1. There have been two major releases of postgresql since 6.5.x (ie.
 7.0 and 7.1) and several minor ones over a total of 2-3 years. It's no
 secret that they have tremendous performance improvements over 6.5.x. So
 why did you benchmark 6.5.x?

 This is a good comparison of MySQL and PostgreSQL 7.0:

 "Open Source Databases: As The Tables Turn" --
 http://www.phpbuilder.com/columns/tim20001112.php3

  We haven't tried this one. We are doing a project on MySQL. Our
  preliminary assessment is, it's a shocker. They justify not having
  commit and rollback!! Makes us think whether they are even lower end
  than MS-Access.

 Again, checkout PostgreSQL 7.1 -- I believe "commit" and "rollback" (as
 you put it) are available. BTW, I would like to see that comment about
 MS-Access posted to pgsql-general... I dare ya. :P

 Matthew

You can scale any of these databases; Oracle, MySQL or PostgreSQL, but please research 
each one thoroughly and tune it properly before you do your benchmarking.  And, again, 
MySQL does support transactions now. Such chutzpah for them to have promoted an "atomic
operations" paradigm for so long without supporting transactions! But that discussion 
is moot now.

Please be advised that MySQL is threaded and must be tuned properly to handle many 
concurrent users on Linux. See the docs at http://www.mysql.com. The author of the PHP
Builder column did not do his research, so his results for MySQL on Linux are way off.
Happily, though, even he got some decent results from PostgreSQL 7.0.

The kernel of wisdom here: if you are going to use one of the Open Source databases,
please use the latest stable release (they improve quickly!), and either hire someone
with expertise in installing, administering, and tuning your database of choice on
your platform of choice, or do the research thoroughly yourself.

Ed




Re: Varaible scope memory under mod_perl

2001-03-14 Thread ed phillips

agh!

check the headers!


Steven Zhu wrote:

 How could I unsubscribe from [EMAIL PROTECTED]? Thank you so
 much. Steven.

  -Original Message-





Re: Not even beginning - INSTALL HELP

2001-02-27 Thread ed phillips

If you are going to upgrade gcc for RH 7.0, I recommend the
new source RPM for gcc to be found in the updates directory
on any redhat mirror site.  In fact, if you are sticking with RH
you should see about updating a number of things.

23,

Ed

"G.W. Haywood" wrote:

 Hi there,

 On Tue, 27 Feb 2001, A. Santillan Iturres wrote:

  I have Apache 1.3.12 running on a RedHat 7.0 box with perl, v5.6.0 built for
  i386-linux
  I went to install mod_perl-1.25:
  When I did:
  perl Makefile.PL
  I've got a:
  Segmentation fault (core dumped)

 Did you build your Perl yourself?  Sounds like there's a problem with
 it.  Check out the mod_perl List archives for problems with gcc (the C
 compiler) that was shipped with RedHat 7.0.  You should probably get
 that replaced to start with.  (Or use Slackware - sorry:)

 73,
 Ged.




mod_perl + multiple Oracle schemas (was RE: Edmund Mergl)

2001-01-10 Thread Ed Park

John--

Another thing you may want to look into is just doing an
"alter session set current_schema" call at the top of your mod_perl page.
This is actually significantly faster than Tim's reauthenticate solution
(about 7X, according to my benchmarks).

It has become a supported feature as of Oracle 8i. For details on what I
did, see http://www.lifespree.com/modperl/ (which is still a total mess
right now-- I'll get around to cleaning it up sometime soon, I promise!)
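For the curious, the call itself is tiny. A hedged sketch (the handle would come from Apache::DBI's cache; the subroutine and schema names here are illustrative, and the schema name must come from trusted code since it can't be a bind placeholder):

```perl
# Switch the session's default schema at the top of the page.
# Per the benchmark above, this is far cheaper than re-authenticating.
sub set_current_schema {
    my ($dbh, $schema) = @_;
    # $schema is DDL-ish and cannot be bound; only pass trusted values.
    $dbh->do("ALTER SESSION SET CURRENT_SCHEMA = $schema");
}
```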

cheers,
Ed

-Original Message-
From: John D Groenveld [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 10, 2001 5:10 PM
To: Edmund Mergl
Cc: [EMAIL PROTECTED]
Subject: Re: Edmund Mergl


Good to see you alive, well, and still coding Perl.

Months ago, about the time of the Perl conference so it may have slipped
under everyone's radar, Jeff Horn from U of Wisconsin sent you some patches
to Apache::DBI to use Oracle 8's re-authenticate function instead of
creating and caching a separate Oracle connection for each user. Did you
decide whether to incorporate them or to suggest another module name for
him to use? I wasn't  able to participate in the discussion at the time,
but I now have need for that functionality. I don't know if Jeff Horn is
still around, but I'll track him down if necessary and offer to work on it.

Also, I sent you a small patch to fix Apache::DBI warnings under Perl5.6.
I hate to be a pest, but I'm rolling out software where the installation
procedure requires the user to fetch Perl from Active State and Apache::DBI
from CPAN. I'd rather not ship my own version of yours or any CPAN module.

Thanks,
John
[EMAIL PROTECTED]




getting rid of multiple identical http requests (bad users double-clicking)

2001-01-04 Thread Ed Park

Does anyone out there have a clean, happy solution to the problem of users
jamming on links  buttons? Analyzing our access logs, it is clear that it's
relatively common for users to click 2,3,4+ times on a link if it doesn't
come up right away. This is not good for the system for obvious reasons.

I can think of a few ways around this, but I was wondering if anyone else
had come up with anything. Here are the avenues I'm exploring:
1. Implementing JavaScript disabling on the client side so that links become
'click-once' links.
2. Implement an MD5 hash of the request and store it on the server (e.g. in
a MySQL server). When a new request comes in, check the MySQL server to see
whether it matches an existing request and disallow as necessary. There
might be some sort of timeout mechanism here, e.g. don't allow identical
requests within the span of the last 20 seconds.
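Option 2 can be sketched in a few lines with Digest::MD5. This version keeps the seen-set in per-process memory, so a real deployment would need a shared store (the MySQL table you mention, or a dbm file) since each Apache child has its own memory; all names here are illustrative:

```perl
use Digest::MD5 qw(md5_hex);

# Per-process cache of recently seen request fingerprints.
my %seen;
my $WINDOW = 20;    # seconds within which a repeat counts as a double-click

sub is_duplicate {
    my ($user, $method, $uri, $args, $now) = @_;
    $now ||= time;
    my $key = md5_hex(join "\0", $user, $method, $uri, $args);
    # purge expired entries so the hash doesn't grow without bound
    delete @seen{ grep { $now - $seen{$_} > $WINDOW } keys %seen };
    return 1 if exists $seen{$key};
    $seen{$key} = $now;
    return 0;
}
```

The handler would check `is_duplicate(...)` early and return an error (or silently drop the request) when it fires.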

Has anyone else thought about this?

cheers,
Ed




Re: is morning bug still relevant?

2000-12-18 Thread ed phillips

Please use the  MySQL modules list. Responses are timely.
;-)

ed

Subscribe: mailto:[EMAIL PROTECTED]




Vivek Khera wrote:

  "SV" == Steven Vetzal [EMAIL PROTECTED] writes:

 SV Greetings,
  to say "ping doesn't work in all cases" without qualifiying why and/or
  which drivers that applies to.

 SV We've had to write our own ->ping method for the MySQL DBD
 SV driver. Our developer tried to track down a maintainer for the
 SV DBD::msql/mysql module to submit a diff, but to no avail.

 How old a version are you talking about?  In any case, according to
 CPAN, the DBD::mysql module is "owned" by

 Module id = DBD::mysql
 DESCRIPTION  Mysql Driver for DBI
 CPAN_USERID  JWIED (Jochen Wiedmann [EMAIL PROTECTED])
 CPAN_VERSION 2.0414
 CPAN_FILEJ/JW/JWIED/Msql-Mysql-modules-1.2215.tar.gz
 DSLI_STATUS  RmcO (released,mailing-list,C,object-oriented)
 INST_FILE(not installed)

 and I *know* he's responsive to that email address at least as of a
 month or so ago, as we exchanged correspondence on another matter.

 --
 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 Vivek Khera, Ph.D.Khera Communications, Inc.
 Internet: [EMAIL PROTECTED]   Rockville, MD   +1-240-453-8497
 AIM: vivekkhera Y!: vivek_khera   http://www.khera.org/~vivek/




showing mod_perl execute time in access_log

2000-12-14 Thread Ed Park

quick, obvious trick:
This is a trivial modification of Doug's original Apache::TimeIt script that
allows you to very precisely show the Apache execute time of the page.

This is particularly useful if you want to know which pages of your site you
could optimize.

Here's a question, though: does anyone know an easy way of measuring how
long apache keeps a socket to the client open, assuming that KeepAlive has
been turned off? This is relevant because I want to know how long on average
it is taking clients to receive certain pages in my application. I know that
I can approximately calculate it from bandwidth, but I would expect the
actual number to vary wildly throughout a given day due to Internet
congestion.

cheers,
Ed

---
package AccessTimer;

# USAGE:
# Just put the following line into your .conf file:
#
# PerlFixupHandler AccessTimer
#
# and use a custom Apache log (this logging piece is not at all
# mod_perl-based...
# see http://httpd.apache.org/docs/mod/mod_log_config.html)
#
# CustomLog /path/to/your/log "%h %l %u %t \"%r\" %s %b %{ELAPSED}e"
#

use strict;
use Apache::Constants qw(:common);
use Time::HiRes qw(gettimeofday tv_interval);
use vars qw($begin);

sub handler {
    my $r = shift;

    $begin = [gettimeofday];
    $r->push_handlers(PerlLogHandler => \&log);

    return OK;
}

sub log {
    my $r = shift;

    my $elapsed = tv_interval($begin);
    $r->subprocess_env('ELAPSED' => "$elapsed");
    return DECLINED;
}

1;





RE: Mod_perl tutorials

2000-12-13 Thread Ed Park

My two cents--

I really like the look of the take23 site as well, and I would be happy as a
clam if we could get modperl.org. I'd even be willing to chip in some
(money/time/effort) to see whether we could get modperl.org.

More than that, though, I think that I would really like to see take23 in
large measure replace the current perl.apache.org. I remember the first time
I looked at perl.apache.org, it was not at all clear to me that I could
build a fast database-backed web application using mod_perl. In contrast,
when you click on PHP from www.apache.org, you are taken directly to a site
that gives you the sense that there is a strong, vibrant community around
php. (BTW, I also like the look and feel of take23 significantly more than
php).

Anyways, those are my own biases. The final bias is that the advocacy site
should be hosted someplace _fast_; one of the reasons I initially avoided
PHP was that their _site_ was dog slow, and I associated that with PHP being
dog slow. Anyways, take23 is very fast for now.

cheers,
Ed




Apache::Session benchmarks

2000-12-11 Thread Ed Park

FYI-- here are some Apache::Session benchmark results. As with all
benchmarks, this may not be applicable to you.

Basically, though, the results show that you really ought to use a database
to back your session stores if you run a high-volume site.

Benchmark: This benchmark measures the time taken to do a create/read for
1000 sessions. It does not destroy sessions, i.e. it assumes a user base
that browses around arbitrarily and then just leaves (i.e. does not log out,
and so session cleanup can't easily be done).

RESULTS: I tested the following configurations:

Apache::Session::MySQL - Dual-PIII-600/512MB/Linux 2.2.14SMP: Running both
the httpd and mysqld servers on this server. Average benchtime: 2.21 seconds
(consistent)

Apache::Session::Oracle - Ran the httpd on the dual-PIII-600/512MB/Linux
2.2.14SMP, running Oracle on a separate dual PIII-500/1G (RH Linux 6.2).
Average benchtime: 3.1 seconds (consistent). (ping time between the servers:
~3ms)

Apache::Session::File - Dual-PIII-600/512MB/Linux 2.2.14SMP: Ran 4 times.
First time: ~2.2s. Second time: ~5.0s. Third time: ~8.4s. Fourth time:
~12.2s.

Apache::Session::DB_File - Dual-PIII-600/512MB/Linux 2.2.14SMP: Ran 4 times.
First time: ~20.0s. Second time: ~20.8s. Third time: ~21.9s. Fourth time:
~23.2s.

The actual benchmarking code can be found at
http://www.lifespree.com/modperl/ (warning - the site is in a terrible state
right now, mostly a scratchpad for various techniques  benchmarks)
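The harness behind numbers like these can be as small as a Time::HiRes loop. A generic sketch (the callback would do the actual Apache::Session create/read; nothing here is specific to any backend):

```perl
use Time::HiRes qw(gettimeofday tv_interval);

# Time $n iterations of an arbitrary callback; returns elapsed seconds.
sub bench {
    my ($n, $work) = @_;
    my $t0 = [gettimeofday];
    $work->($_) for 1 .. $n;
    return tv_interval($t0);
}
```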

Question: does anyone know how to pre-specify the _session_id for the
session, rather than allowing Apache::Session to set it and read it? I saw
some posts about it a while back, but no code...

cheers,
Ed




[ANNOUNCE] new site: scaling mod_perl (+tool: mod_perl + DBD::Oracle)

2000-12-08 Thread Ed Park

The enterprise mod_perl architectures idea that I posted earlier has evolved
into a slightly modified idea: a 'scaling mod_perl' site:
http://www.lifespree.com/modperl.

The point of this site will be to talk about  synthesize techniques for
scaling, monitoring, and profiling large, complicated mod_perl
architectures.

So far, I've written up a basic scaling framework, and I've posted a
particular development profiling tool that we wrote to capture, time, and
explain all SQL select queries that occur on a particular page of a mod_perl
+ DBD::Oracle application:
-http://www.lifespree.com/modperl/explain_dbitracelog.pl
-http://www.lifespree.com/modperl/DBD-Oracle-1.06-perfhack.tar.gz

Currently, I'm soliciting thoughts and code on the following subjects in
particular:
1. Performance benchmarking code. In particular, I'm looking for tools that
can read in an apache log, play it back realtime (by looking at the time
between requests in the apache log), and simulate slow  simultaneous
connections. I've started writing my own, but it would be cool if something
else out there existed.
2. Caching techniques. I know that this is a topic that has been somewhat
beaten to a pulp on this list, but it keeps coming up, and I don't know of
any place where the current best thinking on the subject has been
synthesized. I haven't used any caching techniques yet myself, but I intend
to begin caching data at the mod_perl tier in the next version of my
application, so I have a very good incentive to synthesize and benchmark
various techniques. If folks could just send me pointers to various caching
modules and code, I'll test them in a uniform environment and let folks know
what I come up with. Or, if someone has already done all that work of
testing, I'd appreciate if you could point me to the results. I'd still like
to run my own tests, though.

If folks could point me towards resources/code for these topics (as well as
any other topics you think might be relevant to the site), please let me
know. I'm offering to do the legwork required to actually test, benchmark,
and synthesize all of this stuff, and publish it on the page.

I'm also still interested in actually talking with various folks. If anyone
who has been through some significant mod_perl scaling exercise would like
to chat for 15-30 minutes to swap war stories or tactical plans, I'd love to
talk with you; send me a private email.

cheers,
Ed


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: [ANNOUNCE] new site: scaling mod_perl will be movin to the Guide

2000-12-08 Thread Ed Park

I've gotten in touch with Stas, and the 'scaling mod_perl' site will
eventually be folded into the Guide. woohoo!

I'm going to spend several weeks fleshing it out and cleaning it up before
it goes in, though.

-Ed

-Original Message-
From: Perrin Harkins [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 08, 2000 12:36 PM
To: Ed Park; [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Re: [ANNOUNCE] new site: scaling mod_perl (+tool: mod_perl +
DBD::Oracle)


 The enterprise mod_perl architectures idea that I posted earlier has
evolved
 into a slightly modified idea: a 'scaling mod_perl' site:
 http://www.lifespree.com/modperl.

 The point of this site will be to talk about  synthesize techniques for
 scaling, monitoring, and profiling large, complicated mod_perl
 architectures.

No offense, but the content you have here looks really well suited to be
part of the Guide.  It would fit nicely into the performance section.
Making it a separate site kind of fragments the documentation.

 So far, I've written up a basic scaling framework, and I've posted a
 particular development profiling tool that we wrote to capture, time, and
 explain all SQL select queries that occur on a particular page of a
mod_perl
 + DBD::Oracle application:
 -http://www.lifespree.com/modperl/explain_dbitracelog.pl
 -http://www.lifespree.com/modperl/DBD-Oracle-1.06-perfhack.tar.gz

Take a look at DBIx::Profile as well.

 1. Performance benchmarking code. In particular, I'm looking for tools
that
 can read in an apache log, play it back realtime (by looking at the time
 between requests in the apache log), and simulate slow  simultaneous
 connections. I've started writing my own, but it would be cool if
something
 else out there existed.

The mod_backhand project was developing a tool like this called Daiquiri.

 If folks could just send me pointers to various caching
 modules and code, I'll test them in a uniform environment and let folks
know
 what I come up with.

There are a bunch of discussions about this in the archives, including one
this week.  Joshua Chamas did some benchmarking on a dbm-based approach
recently.

- Perrin


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




[JOB] mod_perl folks wanted in Boston - athenahealth.com

2000-12-08 Thread Ed Park

In the spirit of all of this talk about certification, demand for mod_perl
programmers, etc., I'd just like to say that I'm looking for programmers.

More to the point, I'm looking for kickass folks who just happen to know
mod_perl. If you know mod_perl very well, great, but generally speaking, I'm
looking for folks who are just kickass hackers, know that they are kickass
hackers, and are willing to do anything to drive a problem to extinction.

Experience with mod_perl, Linux, Oracle, Solaris, Java, XML/SOAP, MQ Series,
transaction brokers, systems administration, NT, DHTML, JavaScript, etc.
etc. are all Good Things. But basically, we're looking for folks who are
itching to prove themselves and have some sort of history that indicates
that they can do it.

As a backdrop: we just raised $30 million, and we were the top story in the
latest Red Herring VC Dealflow.
http://www.redherring.com/vc/2000/1206/vc-ltr-dealflow120600.html
As you have probably gathered by now from my posts about the Scaling
mod_perl page (http://www.lifespree.com/modperl/- soon to be folded into the
Guide), I'm currently starting up a scaling mod_perl project, and I have a
lot of money and stock options to burn on good people and interesting toys.

If you're interested, send me a private email  a resume and we'll talk.

Unfortunately, you sort of have to be in the Boston area (or willing to
move) to make this work.

cheers,
Ed


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: eval statements in mod_perl

2000-12-07 Thread Ed Park

This was a problem that I had when I was first starting out with mod_perl;
i.e., it wouldn't work the first or second times through, and then it would
magically start working.

This was always caused for me by a syntax error in a library file. In your
case, it could be caused by a syntax error in a library file used somewhere
in your eval'd code. I highly suggest running
 perl -c <library file>
on all of your library files to check them for valid syntax. If all of your
library files are in the same directory,
 perl -c *
will work as well.

I'm not certain of the technical reason for this, but I believe it has
something to do with the fact that syntax errors in the libraries are not in
and of themselves considered a fatal condition for loading libraries in
mod_perl, so the second or third time around the persistent mod_perl process
thinks that it has successfully loaded the library. Obviously, some
functions in that library won't work, but you won't know that unless you
actually use them. Someone else might be able to shed more light on this.
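For what it's worth, the usual alternative to the retry loop is to slurp and eval once, then treat $@ as fatal. A sketch (error policy and sub name are up to you):

```perl
# Read the file once, eval once; if $@ is set, report it instead of retrying.
sub run_page {
    my ($path) = @_;
    open my $fh, '<', $path or die "can't open $path: $!";
    my $code = do { local $/; <$fh> };   # slurp whole file
    close $fh;
    my $result = eval $code;
    die "error running $path: $@" if $@;
    return $result;
}
```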

good luck,
Ed


-Original Message-
From: Gunther Birznieks [mailto:[EMAIL PROTECTED]]
Sent: Thursday, December 07, 2000 3:38 AM
To: Hill, David T - Belo Corporate; '[EMAIL PROTECTED]'
Subject: Re: eval statements in mod_perl


Without knowing your whole program, this could be a variety of logic
problems leading to this code. For example, perhaps $build{$nkey} is a
totally bogus value the first 2 times and hence your $evalcode is also
bogus the first two times -- and it's not a problem of eval at all!

This is unclear for the snippet.

At 10:52 AM 12/6/2000 -0600, Hill, David T - Belo Corporate wrote:
Howdy,
 I am running mod_perl and have created a handler that serves all
the
pages for our intranet.  In this handler I load perl programs from file
into
a string and run eval on the string (not in a block).  The problem is that
for any session the code doesn't work the first or second time, then it
works fine.  Is this a caching issue or compile-time vs. run-time issues?
I
am sure this is a simple fix.  What am I missing?

 Here is the nasty part (don't throw stones :)  So that we can
develop, I put the eval in a loop that tries it until it returns true or
runs 3 times.  I can't obviously leave it this way.  Any suggestions?  Here
is the relevant chunk of code:

 #  Expect perl code.  Run an eval on the code and execute it.
 my $evalcode = "";
 my $line = "";
 open (EVALFILE, $build{"$nkey"});
 while ($line = <EVALFILE>) {
 $evalcode .= $line;
 }
 my $evalresult = 0;
 my $counter=0;

 #   Temporary measure to overcome caching issue: try to
 #   run the eval code 3 times to get a true return.
 until (($evalresult) || ($counter eq 3)) {
 $evalresult = eval $evalcode;
 $counter++;
 }
 $pageHash{"Retries"} = $counter if $counter > 1;
 $r->print($@) if $@;
 close (EVALFILE);

I appreciate any and all constructive comments.

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

__
Gunther Birznieks ([EMAIL PROTECTED])
eXtropia - The Web Technology Company
http://www.extropia.com/


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




[OT]Re: mod_perl advocacy project resurrection

2000-12-06 Thread ed phillips

Aristotle from the Ars Rhetorica on money:

Money will not make you wise, but it will bring a wise man to your door.


Robin Berjon wrote:

 At 12:39 06/12/2000 -0800, brian moseley wrote:
  ActiveState has built an Perl/Python IDE out of Mozilla:
   http://www.activestate.com/Products/Komodo/index.html
 
 too bad it's windows only :/

 That's bound to change. I think AS will release it on all platforms where
 Moz/Perl/Python run when it's finished. The current release is very
 unstable anyway.

 -- robin b.
 All paid jobs absorb and degrade the mind. -- Aristotle

 -
 To unsubscribe, e-mail: [EMAIL PROTECTED]
 For additional commands, e-mail: [EMAIL PROTECTED]


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




enterprise mod_perl architectures

2000-12-05 Thread Ed Park
d VCs come
knocking with questions. In this way, it should dovetail nicely with the
mod_perl advocacy project.

I am not yet certain whether the best forum for this is this mailing list,
or whether I should try to create a private list of names for folks who are
interested. Relevant considerations include:
-The possible very off-topicness of pieces of the discussion.
-At some point, some of us may want opinions from other folks on sensitive
information (network diagrams, etc.) that Corporate won't allow us to show
to the outside world except under NDA; if all the folks on a list signed an
NDA, then we could speak freely all the time.
-At any rate, I'd like to publish any methodologies we use and put any
monitoring tools, performance benching tools, etc. into open-source. To that
end, I'll be creating a page that publishes any code we come up with and
summarizes our thoughts. I'd be happy to publish that page myself, but I
could also just add it as a page-- 'Enterprise mod_perl architectures'-- to
Matt's new site (modperl.sergeant.org).

So, I'd like to get folks' thoughts on this project. Again, I am staking out
very high ground on this project-- multimillion-dollar companies with
multimillion-dollar budgets. I'm doing this not because I'm disparaging
other companies, but because part of the reason behind doing this project is
to establish mod_perl's credibility as an enterprise web platform and to
describe some of the pitfalls and workarounds that allow mod_perl to scale
to that level. To that end, I'd like to get a list of interested parties. In
general, this should include the chief architects, CTOs, and/or senior
engineers at different shops using mod_perl. Some of those folks don't read
this list regularly, and in that case, I'd be happy to email them/call them
directly if people could just point them my way.

If any subset of folks are interested, I'd be more than happy to drive this
project forward. This is a project that really describes one of my core
responsibilities in my company right now, so I actually have a lot of time
and the resources to devote to this as part of my job.

Anyways, not to belabor the point-- I'd like y'alls input on this,
specifically:
1) What do folks think about the project in general?
2) Should we keep it on this list, or should we create a separate mailing
list for interested parties, or should we do a combination of the two?
3) Is there anyone who'd like to volunteer virtual space to host this? e.g.
ftp, web, creating a mailing list, etc.

I am not yet interested in specifics about peoples' architectures; I think
that we need to frame the general discussion and create some infrastructure
before we go into that.

cheers,
Ed


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: Apache::Registry() and strict

2000-11-07 Thread ed phillips

Ron,

This is a grievous FAQ.  Please read the guide at
http://perl.apache.org/guide

You'll find much more than this question answered.

Ed



Ron Rademaker wrote:

 Hello,

 I'm just starting with mod_perl and I'm using Apache::Registry(). The
 second line after #!/usr/bin/perl -w is use strict;
 But somehow variables I use in the script are still defined if I execute
 the script again, in one of the script I said undef $foo at the
 end, but I don't think this is the way it should be done, but it did work.
 Anyone knows what could be causing this??

 Ron Rademaker

 PS. Please CC to me because I'm not subscribed to this mailinglist




Re: Apache trouble reading in large cookie contents

2000-10-20 Thread ed phillips

Explicitly echoing Gunther, don't go there!

Use cookies (think crumbs of info) as flyweights. Significant chunks of data
need to be passed and stored in other ways.

Ed

Gunther Birznieks wrote:

 Caveat: even if you modify apache to do larger cookies, it's possible that
 there will be a set of browsers that won't support it.

 At 04:48 PM 10/20/00 -0700, ___cliff rayman___ wrote:
 i'm not an expert with this, but, a quick grep for your error in
 the apache source (mine is still 1.3.9 ) and some digging yield:
 
 ./include/httpd.h:#define DEFAULT_LIMIT_REQUEST_FIELDSIZE 8190
 
 so you're right, 8K is currently the apache limit. if you try to change
 this value in
 the source code, you will probably also have to muck with IOBUFSIZE and
 possibly other things as well.  IOBUFSIZE is 8192 and the
 DEFAULT_LIMIT_REQUEST_FIELDSIZE is set to 2 bytes below that to make
 room for the extra \r\n after the last header.
 
 looks like you'll have to take responsibility for mucking with the apache
 source, or
 sending smaller cookies and using some other techniques such as HIDDEN fields.
 
 
 --
 ___cliff [EMAIL PROTECTED]http://www.genwax.com/
 
 "Biggs, Jody" wrote:
 
   I'm having trouble when a browser sends a fair sized amount of data to
   Apache as cookies - say around 8k.
  
 
   Apache then complains (and fails the request) with
   a message of the sort:
 
   [date]  [error] [client 1.2.3.4] request failed: error reading the headers
 
   I assume this is due to a compile time directive to Apache specifying the
   maximum size of a header line.
  

 __
 Gunther Birznieks ([EMAIL PROTECTED])
 eXtropia - The Web Technology Company
 http://www.extropia.com/




Re: Forking in mod_perl?

2000-10-04 Thread ed phillips

Hi David,

Check out the guide at

http://perl.apache.org/guide/performance.html#Forking_and_Executing_Subprocess

The Eagle book also covers the C API subprocess details on page 622-631.

Let us know if the guide is unclear to you, so we can improve it.

Ed


"David E. Wheeler" wrote:

 Hi All,

 Quick question - can I fork off a process in mod_perl? I've got a piece
 of code that needs to do a lot of processing that's unrelated to what
 shows up in the browser. So I'd like to be able to fork the processing
 off and return data to the browser, letting the forked process handle
 the extra processing at its leisure. Is this doable? Is forking a good
 idea in a mod_perl environment? Might there be another way to do it?

 TIA for the help!

 David

 --
 David E. Wheeler
 Software Engineer
 Salon Internet ICQ:   15726394
 [EMAIL PROTECTED]   AIM:   dwTheory




Re: Forking in mod_perl?

2000-10-04 Thread ed phillips

I hope it is clear that you don't want to fork the whole server!

Mod_cgi goes to great pains to effectively fork a subprocess, and was, I
believe, the major impetus for the development of the C subprocess API. Its
source code is a great place to learn some of the subtleties. As the Eagle
book says, Apache is a complex beast; mod_perl gives you the power to use
the beast to your best advantage.

Now you are faced with a trade off.  Is it more expensive to
detach a subprocess, or use the child cleanup phase to do
some extra processing? I'd have to know more specifics to answer
that with any modicum of confidence.
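One shape the "detach a subprocess" option can take, as a sketch only (prefork MPM assumed; the guide's system()-plus-detach approach is another route, and the sub name here is made up):

```perl
use POSIX qw(setsid _exit);

# Fork, detach the child from Apache's session, do the slow work there.
sub run_detached {
    my ($work) = @_;
    defined(my $pid = fork) or die "fork failed: $!";
    return $pid if $pid;   # parent: go back to finishing the request
    setsid();              # child: new session, away from Apache's control
    $work->();
    _exit(0);              # skip parent's END blocks and destructors
}
```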

Cheers,

Ed


"David E. Wheeler" wrote:

 ed phillips wrote:
 
  Hi David,
 
  Check out the guide at
 
  http://perl.apache.org/guide/performance.html#Forking_and_Executing_Subprocess
 
  The Eagle book also covers the C API subprocess details on page 622-631.
 
  Let us know if the guide is unclear to you, so we can improve it.

 Yeah, it's a bit unclear. If I understand correctly, it's suggesting
 that I do a system() call and have the perl script called detach itself
 from Apache, yes? I'm not too sure I like this approach. I was hoping
 for something a little more integrated. And how much overhead are we
 talking about getting taken up by this approach?

 Using the cleanup phase, as Geoffey Young suggests, might be a bit
 nicer, but I'll have to look into how much time my processing will
 likely take, hogging up an apache fork while it finishes.

 Either way, I'll have to think about various ways to handle this stuff,
 since I'm writing it into a regular Perl module that will then be called
 from mod_perl...

 Thanks,

 David




Re: open(FH,'|qmail-inject') fails

2000-10-02 Thread ed phillips

Greg Stark wrote:

 A better plan for such systems is to have a queue in your database for
 parameters for e-mails to send. Insert a record in the database and let your
 web server continue processing.

 Have a separate process possibly on a separate machine or possibly on multiple
 machines do selects from that queue and deliver mail. I think the fastest way
 is over a single SMTP connection to the mail relay rather than forking a
 process to inject the mail.

 This keeps the very variable -- even on your own systems -- mail latency
 completely out of the critical path for web server requests. Which is really
 the key measure that dictates the requests/s you can serve.


Exactly, Greg.  This is homologous to proxy serving http requests. Ideally,
the data/text should be relayed to a separate, dedicated mail server. This
has come up repeatedly for me on performance tuning projects. If a number of
mail processes negotiating with remote hosts are running on the same machine
you are web serving from, you may, under significant load, degrade
performance.
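The enqueue half of Greg's design is just one INSERT in the request handler; a separate daemon drains the table over a single SMTP connection. A hedged sketch ($dbh is any DBI handle; table and column names are illustrative):

```perl
# Queue mail parameters instead of forking qmail-inject inside the request.
sub queue_mail {
    my ($dbh, $to, $subject, $body) = @_;
    $dbh->do(
        'INSERT INTO mail_queue (rcpt, subject, body, queued_at)
         VALUES (?, ?, ?, ?)',
        undef, $to, $subject, $body, time(),
    );
}
```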




Re: [OT] advice needed.

2000-09-29 Thread ed

Mike,

I think many developers share a similar desire to not
have projects (that leverage free software) close down
what are really generic programming techniques,
routines, classes, protocols, etc.  And further,
we'd like to contribute enhancements and documentation
based upon our work.

I'd like to find a lawyer who has experience and/or
want to pursue legal means of removing the friction
that keeps us from giving back. Part of that work would
of course involve contract writing/editing. I'm hiring.
Contact me if you are such.

It is up to you to educate your potential employers
about just how much of what you do is prior open
art and how free software can empower them.
That means the first contract has to be amended.
;-)

Be very explicit about your intentions from the get go,
and repeat yourself a few times; never assume they'll
look at the code or even closely read your written
self-description.

Ed

Michael Dearman wrote:

 Where the heck does trying to do the right thing by
 GPL (or similar) fit in, when attempting to return some improved
 OpenSource code to the community? Or however the license
 phrases it. Shouldn't these contracts address that issue
 specifically, especially when the project is _based_ on
 OpenSource/GPL'd code?

 Mike D.




Re: tracking down why a module was loaded?;

2000-09-26 Thread ed

Gunther Birznieks wrote:

 I unfortunately have to agree.
 <snip>

 And in the end, the salaries for mod_perl programmers
 are pretty high right now because of it -- so will a system really cost
 less to develop in mod_perl than in Java if Java programmers are becoming
 less expensive than mod_perl programmers?
 </snip>

Mod_perl programmers are more expensive as individuals
because mod_perl is more powerful and allows you access
to the Apache API; mod_perlers are more savvy.
One or two mod_perlers could do the
work of a java shop of ten in half the time. Still a savings.
Not to mention the hardware that goes with Java by fiat!

ed




RE: setting LD_LIBRARY_PATH via PerlSetEnv does not work

2000-08-21 Thread Ed Park

I ran into this exact same problem this weekend using:
-GNU ld 2.9.1
-DBD::Oracle 1.06
-DBI 1.14
-RH Linux 6.0
-Oracle 8i

Here's another, cleaner (I think) solution to your problem: after running
perl Makefile.PL, modify the resulting Makefile as follows:
1. search for the line LD_RUN_PATH=
2. replace it with LD_RUN_PATH=(my_oracle_home)/lib
(my_oracle_home) is, of course, the home path to your oracle installation.
In particular, the file libclntsh.so.8.0 should exist in that directory.
(If you use cpan, the build directory for DBD::Oracle should be in
~/.cpan/build/DBD-Oracle-1.06/ if you're logged in as root.)

Then, just type make install, and all should go well.

FYI, setting LD_RUN_PATH has the effect of hard-coding the path to
(my_oracle_home)/lib in the resulting Oracle.so file generated by the
DBD::Oracle so that at run-time, it doesn't have to go searching through
LD_LIBRARY_PATH or the default directories used by ld.

The reason I think this is cleaner is because this way, the Oracle directory
is not hardcoded globally into everyone's link paths, which is what ldconfig
does.
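The substitution in steps 1-2 can also be sketched programmatically. The Makefile text below is a stand-in for the real file generated by perl Makefile.PL, and the Oracle path is just an example:

```perl
use strict;
use warnings;

# Stand-in for the generated Makefile (assumed layout).
my $makefile = "CC = gcc\nLD_RUN_PATH=\nOTHERLDFLAGS =\n";
my $oracle_home = '/u01/app/oracle/product/8.1.6';   # example path

# Steps 1-2 above: find the LD_RUN_PATH= line and point it at
# (my_oracle_home)/lib.
$makefile =~ s{^LD_RUN_PATH=.*$}{LD_RUN_PATH=$oracle_home/lib}m;

print $makefile;
```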

For more information, check out the GNU man page on ld:
http://www.gnu.org/manual/ld-2.9.1/html_mono/ld.html
or an essay on LD_LIBRARY_PATH:
http://www.visi.com/~barr/ldpath.html

cheers,
Ed

-Original Message-
From: Stas Bekman [mailto:[EMAIL PROTECTED]]
Sent: Monday, August 21, 2000 6:51 AM
To: Richard Chen
Cc: Yann Ramin; [EMAIL PROTECTED]
Subject: Re: setting LD_LIBRARY_PATH via PerlSetEnv does not work


On Mon, 21 Aug 2000, Richard Chen wrote:

 It worked like a charm! If PerlSetEnv could not do it, I think
 this should be documented in the guide. I could not find any mention

done. thanks for the tip!

 about ldconfig in the modperl guide. May be I missed it somehow.

 The procedure on linux is very simple:
 # echo $ORACLE_HOME/lib >> /etc/ld.so.conf
 # ldconfig

 Thanks

 Richard

 On Sun, Aug 20, 2000 at 08:11:50PM -0700, Yann Ramin wrote:
  As far as FreeBSD goes, LD_LIBRARY_PATH is not searched for setuid
  programs (aka, Apache). This isn't a problem for CGIs since they don't
  do a setuid (and are forked off), but Apache does, and mod_perl is in
  Apache.  I think that's right anyway :)
 
  You could solve this globally by running ldconfig (I assume Linux has it,
  FreeBSD does).  You'd be looking for:
 
  ldconfig -m <your directory here>
 
  Hope that helps.
 
  Yann
 
  Richard Chen wrote:
  
   This is a redhat linux 6.2 box with perl 5.005_03, Apache 1.3.12,
   mod_perl 1.24, DBD::Oracle 1.06, DBI 1.14 and oracle 8.1.6.
   For some odd reason, in order to use DBI, I have to set
   LD_LIBRARY_PATH first. I don't think I needed to do this when I
   used oracle 7. This is fine on the command line because
   I can set it in the shell environment. For cgi scripts,
   the problem is also solved by using apache SetEnv directive. However,
   this trick does not work under modperl. I had tried PerlSetEnv
   to no avail. The message is the same as if the LD_LIBRARY_PATH is not
set:
  
   install_driver(Oracle) failed: Can't load
   '/usr/lib/perl5/site_perl/5.005/i386-linux/auto/DBD/Oracle/Oracle.so'
   for module DBD::Oracle: libclntsh.so.8.0: cannot open shared object
   file: No such file or directory at
   /usr/lib/perl5/5.00503/i386-linux/DynaLoader.pm line 169. at
   (eval 27) line 3. Perhaps a required shared library or dll isn't
   installed where expected at /usr/local/apache/perl/tmp.pl line 11
  
   Here is the section defining LD_LIBRARY_PATH under Apache::Registry:
  
   PerlModule Apache::Registry
   Alias /perl/ /usr/local/apache/perl/
   <Location /perl>
     PerlSetEnv LD_LIBRARY_PATH /u01/app/oracle/product/8.1.6/lib
     SetHandler perl-script
     PerlHandler Apache::Registry
     Options ExecCGI
     PerlSendHeader On
     allow from all
   </Location>
  
   Does anyone know why PerlSetEnv does not work in this case?
   How come SetEnv works for cgi scripts? What is the work around?
  
   Thanks for any info.
  
   Richard
 
  --
 
  
  Yann Ramin  [EMAIL PROTECTED]
  Atrus Trivalie Productions  www.redshift.com/~yramin
  Monterey High ITwww.montereyhigh.com
  ICQ 46805627
  AIM oddatrus
  Marina, CA
 
  IRM Developer   Network Toaster Developer
  SNTS Developer  KLevel Developer
 
  (yes, this .signature is way too big)
 
  "All cats die.  Socrates is dead.  Therefore Socrates is a cat."
  - The Logician
 
  THE STORY OF CREATION
 
  In the beginning there was data.  The data was without form and null,
  and darkness was upon the face of the console; and the Spirit of IBM
  was moving over the face of the market.  And DEC said, "Let there be
  registers"; and there were registers.  And DEC saw that they carried;
  and DEC seperated the data from the instructions.  DEC cal

Re: [OT] [JOB] mod_perl and Apache developers wanted

2000-06-21 Thread Ed Phillips

It is interesting and somewhat ironic that the Engineering
department at eToys is part of the open source community and culture
while their management's behavior was so disastrously misguided
and so misunderstanding of net culture and precedent.
They shot themselves in the foot pretty badly.

Would eToys have paid for the legal expenses of the Etoy group
if they weren't clued in by their Engineering department? Have
they learned a hard lesson?

Perrin is an exemplary figure, and I commiserate with him, but
some basic precedents of net culture need to be respected for the
network to function and the culture to flourish. If we had not
protested the attempted eToys domain grab, and I was one
who protested, they may have never recanted and  Etoy might
still be fighting at absurd personal cost.

Cheers,

Ed




Paul Singh wrote:

 Regardless of what eToys' intentions were, the way I see it, this was a case
 in which a billion dollar corporation (well, at least it was back then)
 filed suit against a handful of artists who had the etoy.com domain way
 before eToys came along.  eToys had no legitimate stake to the domain... and
 I don't associate legitimacy with the law... they seldom coincide.  So if
 this isn't a case of the bigger guy bullying the little guy, what is it?
 Granted, I have a distant association with the eToy crew so my opinions will
 be biased... however, even with staying to the facts and ignoring eToys'
 motivations, their actions alone reek of unfairness (at best).

 Of course, this says little of what type of work environment eToys is and
 the people that work there... but it does comment on the corporation and the
 people running it.

 But as you said, this is definitely off-topic, and I will cease further
 comment... take care.

 - jps

  -Original Message-
  From: Perrin Harkins [mailto:[EMAIL PROTECTED]]
  Sent: Friday, June 16, 2000 4:48 PM
  To: Paul Singh
  Cc: ModPerl Mailing List
  Subject: RE: [OT] [JOB] mod_perl and Apache developers wanted
 
 
  On Thu, 15 Jun 2000, Paul Singh wrote:
   While that may be true (as with many publications), I hope you're not
   denying the facts of this case
 
  The basic facts are correct: eToys received complaints from parents about
  the content their children found on the etoy.com site and, after failing
  to reach an agreement with the site's operators, filed a lawsuit involving
  trademarks which led to etoy being ordered to shut down their site by a
  judge.
 
  Slashdot's coverage ignored or underreported some aspects of the situation
  (the motivation behind the lawsuit, exploitation of the name confusion on
  the part of etoy), and reported some conjecture and pure flights of fancy
  as fact (evil intentions, scheming lawyers).  You have no idea how painful
  it is to read things like that from a source that you trust and consider
  part of your community.  I guess I should have known better though:
  Slashdot is an op/ed site.  If you want the news, you still have to read
  the New York Times (who had much more accurate coverage of the events).
 
  Anyway, I don't claim that eToys was right to take legal action, just that
  the reports about an evil empire were greatly exaggerated and that eToys
  is a good place to work, full of good people.  Anyone who doesn't believe
  me at this point probably never will, so I'm going to stop spamming the
  list about this subject and go back to spamming about mod_perl.
 
  - Perrin
 




apache.org down

2000-06-02 Thread Ed Phillips

"Hughes, Ralph" wrote:

 COOL!
 I couldn't wait...
 I built and installed mod_perl 1.24 and it fixed the problem!   Now if I can
 just get the CGI module
 to recognize my domainname .. :-)

 -Original Message-
 From: Hughes, Ralph
 Sent: Friday, June 02, 2000 2:02 PM
 To: Geoffrey Young; 'Michael Todd Glazier'; ModPerl
 Subject: RE: Segmentation Fault problem

 I'm not too good on back traces myself.
 I'm using a dynamic build of mod_perl, so I may try building the 1.24
 version next week sometime.
 I hadn't thought of changing the PerlFreshRestart parameter, it might make a
 difference...

 -Original Message-
 From: Geoffrey Young [mailto:[EMAIL PROTECTED]]
 Sent: Friday, June 02, 2000 1:11 PM
 To: Hughes, Ralph; 'Michael Todd Glazier'; ModPerl
 Subject: RE: Segmentation Fault problem

 hmmm, did you try upgrading your installation then?
 you are using a static mod_perl?
 PerlFreshRestart Off?

 I'm no good at reading backtraces, but posting that is probably the next
 step (see SUPPORT doc section on core dumps in the distribution)

 sorry I can't be of more help...

 --Geoff




was apache.org down

2000-06-02 Thread Ed Phillips

Level 3 is broken.

They know and are working on it. hmmm

Ed




Re: was apache.org down

2000-06-02 Thread Ed Phillips

Replying to myself.  It is back up, obviously. sorry for the noise



Ed Phillips wrote:

 Level 3 is broken.

 They know and are working on it. hmmm

 Ed




Re: [benchmark] DBI/preload (was Re: [RFC] improving memory mappingthru code exercising)

2000-06-02 Thread Ed Phillips

Yes, very cool Stas!

Perrin Harkins wrote:

 On Sat, 3 Jun 2000, Stas Bekman wrote:

  correction for the 3rd version (had the wrong startup), but it's almost
  the same.
 
Version Size   SharedDiff Test type

  1  3469312  2609152   860160  install_driver
  2  3481600  2605056   876544  install_driver  connect_on_init
  3  3469312  2588672   880640  preload driver
  4  3477504  2482176   995328  nothing added
  5  3481600  2469888  1011712  connect_on_init

 Cool, thanks for running the test!  I will put this information to good
 use...




RE: DBD::Oracle & Apache::DBI

2000-05-22 Thread Ed Park

Ian--

I very occasionally get these errors while using DBI and DBD::Oracle under
mod_perl. I find that it generally happens when a random, perfectly good SQL
statement causes the Oracle process to dump the connection and write the
reason to alert.log.

Try doing the following: from your oracle home, run:
 find . -name 'alert*' -print
Go to that directory, read the alert files, and look through any
corresponding trace files. The trace files contain the SQL that actually
caused the trace dump.

I find that I can usually rewrite the sql statement in such a way that it no
longer dumps core. Again, this happens _very_ rarely.

Hope this helps,
Ed

-Original Message-
From: Ian Kallen [mailto:[EMAIL PROTECTED]]
Sent: Monday, May 22, 2000 9:37 PM
To: [EMAIL PROTECTED]
Subject: DBD::Oracle & Apache::DBI



I've done everything I can think of to shore up any DB connection
flakiness but I'm still plagued by errors such as these:
DBD::Oracle::db selectcol_arrayref failed: ORA-12571: TNS:packet writer
failure
...this is only a problem under mod_perl, outside of the
mod_perl/Apache::DBI environment everything seems fine.  Once the db
connection is in this state, it's useless until the server gets a restart.

My connect strings look good and agree, I put Stas' ping method in the
DBD::Oracle::db package, set a low timeout,  called Oracle (they don't
want to hear about it).  Everything is the latest versions of
mod_perl/Apache/DBI/DBD::Oracle connecting to an Oracle 8.1.5 db on
Solaris.  Is Apache::DBI not up to it?  (it looks simple enough)

Maybe there's a better persistent connection method I should be looking
at?

--
Salon Internet  http://www.salon.com/
  Manager, Software and Systems "Livin' La Vida Unix!"
Ian Kallen [EMAIL PROTECTED] / AIM: iankallen / Fax: (415) 354-3326




pod and EmbPerl

2000-05-01 Thread Ed Park

Does anyone know whether it is possible to pod-ify an EmbPerl document?

When embedding pod directives in my EmbPerl pages and then running pod2html
on them, the pod2html interpreter returns a blank page.

thanks,
Ed




[RFI] URI escaping modules?

2000-03-28 Thread Ed Loehr

I just noticed that Apache::Util::escape_uri does not escape embedded '&'
characters as I'd expected.  What is the preferred module for escaping
'&', '?', etc. when embedded in strings?

Regards,
Ed Loehr
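One workaround (my own sketch, not a statement about Apache::Util internals) is to percent-encode everything outside the unreserved character set yourself; this is essentially what the URI::Escape module's uri_escape does by default, and it catches '&' and '?':

```perl
use strict;
use warnings;

# Percent-encode every character outside the RFC 2396 unreserved set,
# which includes '&', '?', and '='.
sub escape_uri_strict {
    my ($s) = @_;
    $s =~ s/([^A-Za-z0-9\-_.!~*'()])/sprintf('%%%02X', ord $1)/ge;
    return $s;
}

print escape_uri_strict("a&b?c=d"), "\n";   # a%26b%3Fc%3Dd
```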



Can't upgrade that kind of scalar

2000-02-12 Thread Ed Loehr

Aside from gdb, any fishing tips on how to track this fatal problem
down?

Can't upgrade that kind of scalar at XXX line NN...

Happens intermittently, often on a call to one of these (maybe the
first access of $r?):

$r->server->server_hostname()
$r->connection->remote_ip()

I've tried turning off PerlFreshRestart, have _totally_ clean output
from 'use diagnostics', reviewed The Guide, 'perldoc perldiag', FAQ,
deja.com, swarthmore, removed /o, used Carp::cluck, handled global
vars with 'use vars qw(...)'...

Config:  apache 1.3.9, mod_perl 1.21, mod_ssl 2.4.9, openssl 0.9.4,
perl 5.005_03, DBI 1.13 (no Apache::DBI), DBD::Pg 0.92, Linux
2.2.12-20smp (RH 6.1)...



$r->print delay?

2000-02-10 Thread Ed Loehr

Any ideas on why this output statement takes 15-20 seconds to
send a 120kb page to a browser on the same host?

sub send_it {
my ($r, $data) = @_;

$| = 1;  # Don't buffer anything...send it asap...
    $r->print( $data );
}   

modperl 1.21, apache/modssl 1.3.9-2.4.9...lightly loaded Linux (RH6.1)
Dual PIII 450Mhz with local netscape 4.7 client...



Re: $r->print delay?

2000-02-10 Thread Ed Loehr

Ken Williams wrote:
 
 Are you sure it's waiting?  You might try debug timestamps before and after the
 $r->print().  You might also be interested in the send_fd() method if the data
 are in a file.

Fairly certain it's waiting there.  I cut my debug timestamps out for
ease on your eyes in my earlier post, but here's one output (of many
like it) when I had the print sandwiched...

Thu Feb 10 14:41:59.053 2000 [v1.3.7.1 2227:1 ed:1]  INFO : Sending
120453 bytes to client...
Thu Feb 10 14:42:14.463 2000 [v1.3.7.1 2227:1 ed:1]  INFO : Send of
120453 bytes completed.

Re send_fd(), it's all dynamically generated data, so that's not an
option...

Other clues?

 [EMAIL PROTECTED] (Ed Loehr) wrote:
 Any ideas on why this output statement takes 15-20 seconds to
 send a 120kb page to a browser on the same host?
 
 $| = 1;  # Don't buffer anything...send it asap...
 $r->print( $data );
 
 modperl 1.21, apache/modssl 1.3.9-2.4.9...lightly loaded Linux (RH6.1)
 Dual PIII 450Mhz with local netscape 4.7 client...



Building Apache/modperl for SCO OS 5.05

2000-02-07 Thread ed hallda

Has anyone had any luck building Apache on SCO Open Server 5 with mod_perl?
 We have been unsuccessful and are hoping to find a solution.

r/

ed



Re: does ssl encrypt basic auth?

2000-02-06 Thread Ed Loehr

[EMAIL PROTECTED] wrote:
 
  Ed Loehr wrote:
 
  Is a basic authentication password, entered via a connection to an
  https/SSL server, encrypted or plain text across the wire?
 
 Encrypted - but that question really doesn't belong here.
 It has nothing to do with modperl.

Yes, some of your fellow off-topic police have already served notice
privately.  My unstated context was that mod_perl authentication was
giving me fits, and in my effort to find an alternative, I (gasp)
posted off-topic.  I'm just glad you're watching.  :(



Can't upgrade that kind of scalar (and more)

2000-02-04 Thread Ed Loehr

I've scoured deja.com, FAQs, modperl list archives at
forum.swarthmore.edu, 'perldoc mod_perl_traps', and experimented ad
nauseam for 4 days now... this modperl newbie is missing something
important...

Lasting gratitude and a check in the mail for dinner on me to any of
you who can offer any tips/help which unlock this riddle for me...

Cheers,
Ed Loehr

SYMPTOMS...
---
Spurious errors in my error_log with increasingly nasty consequences:

Can't upgrade that kind of scalar at XXX line NN...
Not a CODE reference at XXX line NN...
Modification of a read-only value attempted at XXX line NN...
Attempt to free unreferenced scalar.
Attempt to free unreferenced scalar during global destruction.
Attempt to free unreferenced scalar at XXX line NN...

Once upon a time, the server was fully functional even with these
occasional error messages (which is why I ignored them originally).
Now, they are frequent showstoppers, causing requests to fail
altogether with 500 errors and occasional segfaults...

I'm lumping these together because I suspect they are all related.
In any case, the severest of these at present seems to be the

Can't upgrade that kind of scalar at XXX line NN...

message, which causes the request to fail and seems to foul up that
child for the rest of its life.


FAILED REMEDIES...
--
- Turned off PerlFreshRestart
- Got rid of '$| = 1;'
- Got rid of #!/usr/bin/perl -w (!!)
- Check 'use diagnostics' output
- Got rid of string regex optimization flags ( $key =~ m/^xyz/o )
- Replaced use of 'apachectl restart' with stop-sleep3-startssl
- use Carp ();  local $SIG{__WARN__} = \&Carp::cluck;
- Changed all instances of global 'my $var = 0' to
  'use vars qw($var);  $var = 0;'
- Commented out Apache::Registry
(Most of these are just suggestions I found during my hunt...)

CONFIGURATION...

(detailed config dumps below)
mod_perl 1.21 (*NOT* Apache::StatINC)
mod_ssl 2.4.9
openssl 0.9.4
perl 5.005_03
DBI 1.13 (*NOT* using Apache::DBI)
DBD::Pg 0.92
Apache 1.3.9 (*VERY* lightly loaded)
Linux 2.2.12-20smp (RH 6.1), 1Gb RAM, RAID 5 (*lots* of free mem)
Dual PII 450 cpus
Using modified TicketMaster scheme from Eagle book
Using modified TicketMaster scheme from Eagle book

OBSERVATIONS...
---
I'm convinced the code referenced by the error msgs (XXX line NN) is
almost random; it's typically code that's worked flawlessly before
(sometimes my code, often not).  I suspect the line numbers in the
error msg may be screwed up.  #line did not clarify things.

If I rearrange my code, I have been able to make the error "move" to
another module (e.g., from Exporter.pm to CGI.pm).  Smells like a stack
corruption problem?  Currently, the first unsuccessful statement is:

# ($r is the usual apache request object)
return ($retval,$msg)
    unless $ticket{'ip'} eq $r->connection->remote_ip;


In -X mode, once the server process hits one of these, it can no
longer serve any modperl-generated page without a 500 error,
occasionally segfaulting in the process.

This also happens on both production and development server in
slightly different manifestations (with slightly different
httpd.conf files).

Other change factors that may or may not be related: new firewall
rules, increased number of open file handles
(echo 8192 > /proc/sys/file-max), increased load, RAM upgrade,
numerous modperl app src code changes, added more use of Time::HiRes
to other modules, new SSL certificate, and more...

Finally, totally commenting out my incarnation of the TicketMaster
scheme from the Eagle book (cookie-based passworded sessions) *seems*
to remove the problem, but it's a moving target so I'm not sure of
that yet.  Have been unable to determine what it might be within
TicketMaster that is causing the problem.

NEXT STEPS...
-
Try removing Logger.pm from Apache::Ticket*
Whittle down until minimal set produces error?
Autoload troubles?
Find/try MacEachern's Apache::Leak?  Hunting XS errors?
Apache::Vmonitor?
Relying on $_ in foreach loops?
SSL Certificate differences?
Dreaded Last Step: setup debugger and chase ...

# /usr/local/apache_ssl/bin/httpd -l
Compiled-in modules:
  http_core.c
  mod_env.c
  mod_log_config.c
  mod_mime.c
  mod_negotiation.c
  mod_status.c
  mod_include.c
  mod_autoindex.c
  mod_dir.c
  mod_cgi.c
  mod_asis.c
  mod_imap.c
  mod_actions.c
  mod_userdir.c
  mod_alias.c
  mod_access.c
  mod_auth.c
  mod_setenvif.c
  mod_ssl.c
  mod_perl.c

# /usr/local/apache_ssl/bin/httpd -V
Server version: Apache/1.3.9 (Unix)
Server built:   Dec  9 1999 11:40:44
Server's Module Magic Number: 19990320:6
Server compiled with
 -D EAPI
 -D HAVE_MMAP
 -D HAVE_SHMGET
 -D USE_SHMGET_SCOREBOARD
 -D USE_

Re: oracle : The lowdown

2000-01-20 Thread Ed Phillips

For those of you tired of this thread please excuse me, but
here is MySQL's current position statement on and discussion
about transactions:

Disclaimer: I just helped Monty write this partly in response to
some of the fruitful, to me, discussion on this list. I know
this is not crucial to mod_perl but I find the "wise men who 
are enquirers into many things" to be one of the great things
about this list, to paraphrase old Heraclitus. I learn quite
a bit about quite many things by following leads and hints here
as well as by seeing others problems.

I'd love to see your criticism of the below either here or
off the list.


Ed
-


The question is often asked, by the curious and the critical, "Why is
MySQL not a transactional database?" or "Why does MySQL not support
transactions?"

MySQL has made a conscious decision to support another paradigm for 
data integrity, "atomic operations." It is our thinking and experience 
that atomic operations offer equal or even better integrity with much 
better performance. We, nonetheless, appreciate and understand the 
transactional database paradigm and plan, in the next few releases, 
on introducing transaction safe tables on a per table basis. We will 
be giving our users the possibility to decide if they need
the speed of atomic operations or if they need to use transactional 
features in their applications. 

How does one use the features of MySQL to maintain rigorous integrity 
and how do these features compare with the transactional paradigm?

First, in the transactional paradigm, if your applications are written 
in a way that is dependent on the calling of "rollback" instead of "commit" 
in critical situations, then transactions are more convenient. Moreover, 
transactions ensure that unfinished updates or corrupting activities 
are not committed to the database; the server is given the opportunity 
to do an automatic rollback and your database is saved. 

MySQL, in almost all cases, allows you to solve potential 
problems by including simple checks before updates and by running 
simple scripts that check the databases for inconsistencies and 
automatically repair or warn if such occurs. Note that just by 
using the MySQL log or even adding one extra log, one can normally 
fix tables perfectly with no data integrity loss. 

Moreover, "fatal" transactional updates can be rewritten to be atomic.
In fact, we will go so far as to say that all integrity problems that
transactions solve can be done with LOCK TABLES or atomic updates,
ensuring that you never get an automatic abort from the database,
which is a common problem with transactional databases.
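As an illustration of the atomic-operation style described above (an in-memory model of mine, not MySQL code), the integrity check and the update happen as a single step, so there is nothing to roll back. In SQL this corresponds to something like an UPDATE whose WHERE clause carries the integrity condition:

```perl
use strict;
use warnings;

# In-memory stand-in for a table row, e.g.
#   UPDATE accounts SET balance = balance - 10
#   WHERE id = 1 AND balance >= 10;
my %row = (id => 1, balance => 25);

# Check and update as one step: the debit only happens if the
# integrity condition holds, so no rollback is ever needed.
sub atomic_debit {
    my ($row, $amount) = @_;
    return 0 unless $row->{balance} >= $amount;
    $row->{balance} -= $amount;
    return 1;
}

print atomic_debit(\%row, 10)  ? "ok\n"   : "fail\n";   # ok   (25 -> 15)
print atomic_debit(\%row, 100) ? "ok\n"   : "fail\n";   # fail (stays 15)
```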
 
Not even transactions can prevent all loss if the server goes down.
In such cases even a transactional system can lose data.  The
difference between systems lies in just how small the window is in
which they could lose data. No system is 100% secure, only "secure
enough". Even Oracle, reputed to be the safest of transactional
databases, is reported to sometimes lose data in such situations.

To be safe with MySQL you only need to have backups and have the update
logging turned on.  With this you can recover from any situation that you could
with any transactional database.  It is, of course, always good to have
backups, independent of which database you use.

The transactional paradigm has its benefits and its drawbacks. Many
users and application developers depend on the ease with which they
can code around problems where an "abort" appears or is necessary,
and they may have to do a little more work with MySQL to either think
differently or write more. If you are new to the atomic operations
paradigm, or more familiar or comfortable with transactions, do not
jump to the conclusion that MySQL has not addressed these issues.
Reliability and integrity are foremost in our minds.

Recent estimates are that there are more than 1,000,000 mysqld servers
currently running, many of which are in production environments.  We
hear very, very seldom from our users that they have lost any data,
and in almost all of those cases user error is involved. This is in
our opinion the best proof of MySQL's stability and reliability.

Lastly, in situations where integrity is of highest importance,
MySQL's current features allow for transaction-level or better
reliability and integrity.

If you lock tables with LOCK TABLES, all updates will stall until any
integrity checks are made.  If you only take a read lock (as opposed
to a write lock), then reads and inserts are still allowed to happen.
The newly inserted records will not be seen by any of the clients
that have a READ lock until they release their read locks.
With INSERT DELAYED you can queue inserts into a local queue until
the locks are released, without making the client wait for the
insert to complete.


Atomic in the sense that w

Re: modperl success story

2000-01-14 Thread Ed Phillips


The troll vanisheth!

ha!

Reminds me of the Zen story of an old fisherman in a boat on a lake
in a heavy, can't-see-your-hands fog. He bumps into another boat, and
shouts at the other guy, "Look where you're going, would you! You
almost knocked me over."  He pulls up beside the boat and is about to
give the other guy a piece of his mind, but when he looks in the
other boat, he discovers that no one else is there.

Flame trolls on mailing lists are virtual empty boats, whose only
value is the sometimes humorous apoplexy elicited in the old sea
salts on the list.


Ed



Re: APACHE_ROOT

2000-01-14 Thread Ed Phillips

Ged,

You are very entertaining. The code in question is also known as a combined
copy and substitution.

Beware if you haven't got /src on the end of your source directory!

If you don't have a match with the regexp, you'll just get a straight copy.


Ed

   X-Authentication-Warning: C2H5OH.jubileegroup.co.uk: ged owned process doing -bs
   Date: Sat, 15 Jan 2000 00:00:37 + (GMT)
   From: "G.W. Haywood" [EMAIL PROTECTED]
   Content-Type: TEXT/PLAIN; charset=US-ASCII
   Sender: [EMAIL PROTECTED]
   Precedence: bulk

   Hi there,

   On 14 Jan 2000, William P. McGonigle wrote:

Can someone explain what APACHE_ROOT is meant to be?  I'm assuming
it's somehow different thatn APACHE_SRC (which I'm defining).

   The expression

   ($APACHE_ROOT = $APACHE_SRC) =~ s,/src/?$,,;

   sets the scalar $APACHE_ROOT to be equal to the scalar $APACHE_SRC and
   then chops off any "/src" or "/src/" from the end of it.  
   

   The =~ binding operator (p27) tells perl to do the substitution
   s,/src/?$,, to the thing on left hand side of its expression.

   The parentheses (p77) mean the thing in them is a term, which has the
   highest precedence in perl so the assignment has to be done first.

   The substitution then has to be done on the result, $APACHE_ROOT and
   not $APACHE_SRC, er, obviously.

   The three commas are quotes (p41) for a substitution, presumably
   chosen because they can't easily appear in a filename.

   The pattern to match is

   /src/?$

   The question mark is a quantifier (p63), it says we can have 0 or 1
   trailing slash in the pattern we match - it's trailing at the end of
   a string because of the $ (p62).

   If our string matches, the matching bit is replaced with the bit
   between the second and third commas.  There's nothing between the
   second and third commas, so it's replaced with nothing.  Have a look
   at pages 72 to 74 especially for more about the s/// construct.

   The page numbers are from the Camel Book, second edition.  I keep it
   on my desk at all times, it stops my papers blowing around.  You will
   help yourself a lot with these things if you read chapters one and two
   five or six times this year as a kind of a penance.

   So if

   $APACHE_SRC eq  "/usr/local/apache/src/"

   or

   $APACHE_SRC eq  "/usr/local/apache/src"

   then

   $APACHE_ROOT eq "/usr/local/apache"

   after the substitution.

   I just *love* Perl's pattern matching!

   73,
   Ged.
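A runnable check of the expression Ged walks through, using the same paths as his example:

```perl
use strict;
use warnings;

my @roots;
for my $APACHE_SRC ('/usr/local/apache/src/', '/usr/local/apache/src') {
    # Copy, then substitute on the copy; $APACHE_SRC itself is untouched.
    (my $APACHE_ROOT = $APACHE_SRC) =~ s,/src/?$,,;
    push @roots, $APACHE_ROOT;
}
print "$_\n" for @roots;   # /usr/local/apache, both times
```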



Re: mysql.pm on Apache/mod_perl/perl win98

2000-01-10 Thread Ed Phillips

Hi Dave,

I only do *nix, but I think that you should not need mysql.pm if you are using
DBI/DBD. Jochen is quite helpful on the MySQL modules list; subscription
info is available at www.mysql.com.

Good Luck,

Ed



Re: Comparing arrays

2000-01-05 Thread Ed Phillips

Really Dheeraj,

This is not a mod_perl specific question, and I don't know the
all-important context into which this boilerplate code you are seeking
to elicit from the list is to be dropped.

here is a boilerplate "find me keys that are not in both hashes":

foreach (keys %hash_one) {
  push(@here_not_there, $_) unless exists $hash_two{$_};  
}

shame on you. To expiate your sins, read perldoc pages for two hours
every day for two weeks.

ed



Re: Comparing arrays

2000-01-05 Thread Ed Phillips

Cliff,

I wanted him to work for the rest of it, or at least go to another list.

It looks like he wanted two arrays, @in_hash_one_alone and
@in_hash_two_alone, so having him push to one array may confuse him.
He's better off doing a little studying, methinks.

ed
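For the record, a sketch of that two-array version (the data here is hypothetical):

```perl
use strict;
use warnings;

my %hash_one = (a => 1, b => 1, c => 1);
my %hash_two = (b => 1, d => 1);

# Keys present in one hash but not the other, in both directions.
my @in_hash_one_alone = sort grep { !exists $hash_two{$_} } keys %hash_one;
my @in_hash_two_alone = sort grep { !exists $hash_one{$_} } keys %hash_two;

print "@in_hash_one_alone\n";   # a c
print "@in_hash_two_alone\n";   # d
```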



Re: ApacheDBI vs DBI for TicketMaster

2000-01-02 Thread Ed Loehr

Edmund Mergl wrote:

   On Sun, Jan 02, 2000 at 01:48:58AM -0600, Ed Loehr wrote:
My apache children are seg faulting due to some combination of
DBI usage and the cookie-based authentication/authorization
   [...]
child seg faults.  If I comment out all DBI references in the
  
   Hm, are you connecting to your database prior to Apache's forking
 
  No.  BTW, this is all on apache 1.3.9 with mod_ssl 2.4.9 and mod_perl 1.21 on
 Redhat 6.1 (2.2.12-20smp)...


 do you use rpm's or did you compile everything by yourself ?

Compiled everything myself.  Oh, and I am also using DBD::Pg 0.92...

Does that suggest anything to anyone?

Cheers,
Ed Loehr



Embperl problem (newbie question?)

1999-12-18 Thread Ed Greenberg

Just trying out Embperl, and I discovered that (in my test of dynamic 
tables) the $maxrow and $maxcol variables are being set to defaults of 100 
and 10 respectively and then obeyed, even though the $tabmode variable is 
set to 17.  According to the documentation, these variables should only be 
obeyed when tabmode contains bits in the 64 and 4 position.

My test called for 209 rows and 11 columns, and I was flummoxed for a bit 
until I started playing around with these variables.

Am I missing something, or is it just a documentation inconsistency?

/edg

Ed Greenberg
__
Get Your Private, Free Email at http://www.hotmail.com



Re: DBI

1999-11-11 Thread Ed Phillips

This is also not a mod_perl question.

depending on where your DBD::Oracle is installed you can get away with certain 
liberties in the Oracle library department. 

Nonetheless, you should continue your inquiry on a DBI related list.

Thank you,

Ed



Re: Server Stats

1999-10-21 Thread Ed Phillips

this is like closing the gate after the horse has bolted without things
like decent locking and transactions. Although perhaps I'm mistaken and

You can rest assured that they know what they are doing. :-)

It is also worth upgrading to newer versions. The newest versions, not yet
deemed stable, no longer use ISAM, are much faster, and will allow for a
host of new features. Stay tuned.

ed



Re: Spreading the load across multiple servers (was: Server Stats)

1999-10-21 Thread Ed Phillips



I don't have any real answers - just a suggestion. What is wrong with the
classic RDBMS architecture of RAID 1 on multiple drives with MySQL - surely
it will be able to do that transparently?


Yes, RAID is very helpful with MySQL.  I spoke with Monty, the developer
of MySQL, at the open source conference in Monterey and he said that they
are currently working on replication and mirroring features. It might be
worth inquiring directly with them.


Ed