Problem compiling mod_perl and mod_dav statically at the same time...

2002-03-14 Thread simran

Hi All, 

I am trying to compile the following things together:

* apache_1.3.23
* mod_dav-1.0.3-1.3.6
* mod_perl-1.26

If I compile Apache with mod_dav OR mod_perl alone, it works fine. However,
if I compile them both in, then httpd segfaults as soon as I pass it
any request. 

The way I configure before compiling is: 

./configure --enable-module=all \
  --activate-module=src/modules/perl/libperl.a \
  --activate-module=src/modules/dav/libdav.a

-

My actual goal is to get both mod_perl and mod_dav working
properly - I don't really care about how they are linked (static is
preferable but not necessary).

If I compile only mod_perl in statically and compile mod_dav as a
DSO (.so) file, then the server starts up, but unfortunately a request like:

  OPTIONS /dav/ HTTP/1.1
  Host: my.host.com:80

gives me the following response:

  HTTP/1.1 200 OK
  Date: Thu, 14 Mar 2002 08:01:49 GMT
  Server: Apache/1.3.23 (Unix) DAV/1.0.3 mod_perl/1.26
  Content-Length: 0
  MS-Author-Via: DAV
  Allow: OPTIONS, MKCOL, PUT, LOCK
  DAV: 1,2,http://apache.org/dav/propset/fs/1
  Content-Type: text/plain

But the response I really want is one that includes the header:

Allow: OPTIONS, GET, HEAD, POST, DELETE, TRACE, PROPFIND, PROPPATCH,
COPY, MOVE, LOCK, UNLOCK

Otherwise, of course, WebDAV will not work properly... 

-
The relevant section in my httpd.conf file is:

  #
  # WebDAV stuff...
  #
  DAVLockDB /tmp/DAVLock
  DAVMinTimeout 600

  <Location /dav>
    AuthName "DAV Realm"
    AuthType Basic
    AuthUserFile /home/simran/netchant/www/pa/conf/.htpasswd

    DAV On
    Options Indexes
    AllowOverride All

    #<Limit GET POST PUT DELETE CONNECT OPTIONS PATCH PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK>
    #Order allow,deny
    #Allow from all
    # #Order deny,allow
    # #Deny from all
    #</Limit>

    <LimitExcept GET HEAD OPTIONS>
      require valid-user
    </LimitExcept>
  </Location>


--


Can anyone please help me with the following:
---
* Compiling mod_perl and mod_dav statically together and ensuring Apache
does not die on any request

or

* Compiling mod_perl statically and mod_dav dynamically, but getting
Apache to return the right Allow: headers so WebDAV actually works. 


Your help would be very much appreciated; I have been playing
with this for over 3 full days and have not gotten very far at all. 

thanks,

simran.



[ANNOUNCE] PHP::Session

2002-03-14 Thread Tatsuhiko Miyagawa

Announcing new module: PHP::Session.

This module enables you to read (and eventually write - writing is not
yet implemented) PHP4 built-in session files from Perl. You can then
share session data between PHP and Perl without changing the PHP code,
which may be hard work for us Perl hackers.

This is something you'd normally never want to do, but imagine the cases
where you have to co-work with PHP coders, or take over another company's
PHP code.
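For a feel of what such a module has to parse, PHP4 stores each session variable as `name|<serialized value>` in the session file. The sketch below is illustrative only - it is not PHP::Session's implementation, and it handles just integer and string values:

```python
import re

def parse_php_session(data: str) -> dict:
    """Parse a PHP4 session payload of the form 'name|<serialized value>'.
    Only integer (i:) and string (s:) values are handled, for brevity."""
    out, pos = {}, 0
    while pos < len(data):
        bar = data.index("|", pos)          # variable name ends at '|'
        name = data[pos:bar]
        pos = bar + 1
        if data.startswith("i:", pos):      # e.g. i:42;
            end = data.index(";", pos)
            out[name] = int(data[pos + 2:end])
            pos = end + 1
        elif data.startswith("s:", pos):    # e.g. s:6:"simran";
            m = re.match(r's:(\d+):"', data[pos:])
            length = int(m.group(1))
            start = pos + m.end()
            out[name] = data[start:start + length]
            pos = start + length + 2        # skip the closing '";'
        else:
            raise ValueError("unsupported type at offset %d" % pos)
    return out

print(parse_php_session('user|s:6:"simran";count|i:42;'))
```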



NAME
PHP::Session - read / write PHP session files

SYNOPSIS
  use PHP::Session;

  my $session = PHP::Session->new($id);

  # session id
  my $id = $session->id;

  # get/set session data
  my $foo = $session->get('foo');
  $session->set(bar => $bar);

  # remove session data
  $session->unregister('foo');

  # remove all session data
  $session->unset;

  # check if data is registered
  $session->is_registered('bar');

  # save session data (*UNIMPLEMENTED*)
  $session->save;

DESCRIPTION
PHP::Session provides a way to read / write PHP4 session files, with
which you can share your Perl application's session with PHP4.

TODO
*   saving session data into a file is UNIMPLEMENTED.

*   WDDX support, using WDDX.pm

*   Apache::Session::PHP

AUTHOR
Tatsuhiko Miyagawa [EMAIL PROTECTED]

This library is free software; you can redistribute it and/or modify it
under the same terms as Perl itself.

SEE ALSO
the WDDX manpage, the Apache::Session manpage



-- 
Tatsuhiko Miyagawa [EMAIL PROTECTED]



Re: Problem compiling mod_perl and mod_dav statically at the same time...

2002-03-14 Thread simran

Hi All, 

To all those who read the message and were about to reply, a big
thank you. 

I *think* I have it working :-) 

The problem was that the 'dav' directory (as specified in the <Location>
section of my httpd.conf) did not exist on the filesystem!!!

Once I created it, all the options seem to be available, and the Allow
header is perfect. 

cheers,

simran.



On Thu, 2002-03-14 at 19:29, simran wrote:
 [original message quoted in full - trimmed; see above]



Apache and Perl with Virtual Host

2002-03-14 Thread Matt Phelps

Forgive me if I'm posting to the wrong group. I've got Apache 1.3.22 
running several virtual webs. I can get Perl scripts to run under the 
default web but not under the others. All the webs point to the same script 
folder. If I try to run a script under a virtual web, all I get is the 
script's text displayed. Any help would be great.

Thanks

Matt





Re: Serious bug, mixing mod-perl content

2002-03-14 Thread mire


Beta contains new code and www is old code. We were calling www, but once in a
while beta would pop in.  We noticed error messages that were giving a whole
stack trace (caller), but those error messages were not present in the www code;
they were introduced as a change in the beta code. Right now we have solved the
problem by moving beta to a totally different domain. We'll run a large test
eventually. Everything seems OK - double-checked with help from my business
partner: the paths are different, the names are different, etc.


> Could you describe the actual nature of the error?  How can you tell
> that the response you're getting is from the wrong virtual host, and what
> is different about the virtual hosts' setup that causes the difference
> in responses?

-- 

Best regards,

Miroslav Madzarevic, Senior Perl Programmer
[EMAIL PROTECTED]
Mod Perl Development http://www.modperldev.com
Telephone: +381 64 1193 501



problem in recompiling

2002-03-14 Thread Parag R Naik



Hi all,

I am having a problem compiling the mod_perl 1.26 source with the Apache
1.3.22 source. The problem occurs when running make, at the following
command:

gcc -c -I../.. \
  -I/usr/local/ActivePerl-5.6/lib/5.6.1/i686-linux-thread-multi/CORE \
  -I../../os/unix -I../../include -DLINUX=22 -I/usr/include/db1 \
  -DMOD_PERL -DUSE_PERL_SSI -DUSE_REENTRANT_API -D_POSIX_C_SOURCE=199506L \
  -D_REENTRANT -fno-strict-aliasing -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 \
  -DUSE_HSREGEX -DNO_DL_NEEDED -DUSE_REENTRANT_API -D_POSIX_C_SOURCE=199506L \
  -D_REENTRANT -fno-strict-aliasing -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 \
  `../../apaci` os.c

In file included from /usr/include/sys/sem.h:27,
                 from ../../include/ap_config.h:493,
                 from os.c:6:
/usr/include/sys/ipc.h:28: warning: #warning "Files using this header must be compiled with _SVID_SOURCE or _XOPEN_SOURCE"
In file included from /usr/include/sys/sem.h:27,
                 from ../../include/ap_config.h:493,
                 from os.c:6:
/usr/include/sys/ipc.h:34: parse error before `ftok'
/usr/include/sys/ipc.h:34: warning: data definition has no type or storage class
In file included from ../../include/ap_config.h:493,
                 from os.c:6:
/usr/include/sys/sem.h:50: parse error before `__key'
make[4]: *** [os.o] Error 1
make[3]: *** [subdirs] Error 1
make[2]: *** [build-std] Error 2
make[1]: *** [build] Error 2
make: *** [apaci_httpd] Error 2

Can anybody help me out with this? The Linux I
am using is RedHat 6.22, kernel version 2.2.14.

Thanks 



Re: Memory query

2002-03-14 Thread Andrew Green

In article [EMAIL PROTECTED],
   Perrin Harkins [EMAIL PROTECTED] wrote:

> If you actually want to free the memory, you need to undef it.  The
> untie prevents it from persisting, but the memory stays allocated
> unless you undef.

OK, I think I'm probably handling this properly then, after all.
In a Registry script, I typically tie the hash to a package global, and
pass a reference to that hash to any routines in my library modules.  At
the end of the script, the hash is untied and the package global undefed.

Many thanks,
Andrew.

-- 
perl -MLWP::Simple -e 'getprint("http://www.article7.co.uk/res/japh.txt");'



RE: loss of shared memory in parent httpd

2002-03-14 Thread Bill Marrs


> It's copy-on-write.  The swap is a write-to-disk.
> There's no such thing as sharing memory between one process on disk(/swap)
> and another in memory.

Agreed.  What's interesting is that if I turn swap off and back on again, 
the sharing is restored!  So now I'm tempted to run a crontab every 30 
minutes that turns the swap off and on again, just to keep the httpds 
shared.  No Apache restart required!

Seems like a crazy thing to do, though.

> You'll also want to look into tuning your paging algorithm.

Yeah... I'll look into it.  If I had a way to tell the kernel to never swap 
out any httpd process, that would be a great solution.  The kernel is 
making a bad choice here.  By swapping, it triggers more memory usage, 
because sharing is removed across the httpd process group (and the cost is 
thus multiplied)...
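That multiplication is easy to model. A toy sketch (Python, with invented numbers - not measurements from this server) of resident memory before and after copy-on-write sharing is lost:

```python
# Toy model of why losing copy-on-write sharing multiplies memory use
# across an Apache process group. All figures are illustrative.
def total_rss_mb(shared_mb: float, private_mb: float, nprocs: int) -> float:
    """Resident memory for a process group sharing `shared_mb` of pages,
    with `private_mb` of unshared pages per process."""
    return shared_mb + private_mb * nprocs

# 8 httpd children, 20 MB of shared text/data, 5 MB private each:
print(total_rss_mb(20, 5, 8))    # sharing intact: 20 + 8*5
# If the shared pages get swapped out and come back unshared,
# every child carries its own 25 MB copy:
print(total_rss_mb(0, 25, 8))    # sharing lost: 8*25
```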

I've got MaxClients down to 8 now and it's still happening.  I think my 
best course of action may be a crontab swap flusher.

-bill




Re: Apache and Perl with Virtual Host

2002-03-14 Thread Bill Marrs

At 04:02 AM 3/14/2002, Matt Phelps wrote:
> Forgive me if I'm posting to the wrong group. Ive got apache 1.3.22
> running several virtual webs. I can get perl scripts to run under the
> default web but not in the others. All the webs point to the same script
> folder. If I try to run the script under a virtual web, all I get is text
> display. Any help would be great.

Well, I use mod_perl with VirtualHosts...  My config looks something like:

<VirtualHost gametz.com>
ServerAdmin [EMAIL PROTECTED]
DocumentRoot /home/tz/html
ServerName gametz.com
DirectoryIndex /perl/gametz.pl
# The live area
Alias /perl/ /home/tz/perl/
<Location /perl>
   AllowOverride  None
   SetHandler perl-script
   PerlHandler Apache::RegistryBB
   PerlSendHeader On
   Options +ExecCGI
</Location>
</VirtualHost>

<VirtualHost surveycentral.org>
ServerAdmin [EMAIL PROTECTED]
DocumentRoot /projects/web/survey-central
ServerName surveycentral.org
DirectoryIndex /perl/survey.pl

Alias /perl/ /projects/web/survey-central/perl/
<Location /perl>
   SetHandler perl-script
   PerlHandler Apache::RegistryBB
   PerlSendHeader On
   Options +ExecCGI
</Location>
</VirtualHost>





Re: loss of shared memory in parent httpd

2002-03-14 Thread Andreas J. Koenig

 On Thu, 14 Mar 2002 07:25:27 -0500, Bill Marrs [EMAIL PROTECTED] said:

>> It's copy-on-write.  The swap is a write-to-disk.
>> There's no such thing as sharing memory between one process on disk(/swap)
>> and another in memory.

> agreed.   What's interesting is that if I turn swap off and back on
> again, the sharing is restored!  So, now I'm tempted to run a crontab
> every 30 minutes that turns the swap off and on again, just to keep
> the httpds shared.  No Apache restart required!

Funny - I've been doing this for ages and never really knew why; you
just found the reason. Thank you! My concerns were similar to yours but
on a smaller scale, so I did not worry that much, but I am running a
swapflusher regularly.

Make sure you have a recent kernel, because all old kernels up to
2.4.12 or so were extremely unresponsive during swapoff. With current
kernels, this is much, much faster and nothing to worry about.

Let me show you the script I use for the job. No rocket science, but
it's easy to get wrong. Be careful to maintain equality of priority
among disks:

  use strict;

  $|=1;
  print "Running swapon -a, just in case...\n";
  system "swapon -a";
  print "Running swapon -s\n";
  open S, "swapon -s |";
  my(%prio);
  PARTITION: while (<S>) {
    print;
    next if /^Filename/;
    chop;
    my($f,$t,$s,$used,$p) = split;
    my $disk = $f;
    $disk =~ s/\d+$//;
    $prio{$disk} ||= 5;
    $prio{$disk}--;
    if ($used == 0) {
      print "Unused, skipping\n";
      next PARTITION;
    }
    print "Turning off\n";
    system "swapoff $f";
    print "Turning on with priority $prio{$disk}\n";
    system "swapon -p $prio{$disk} $f";
  }
  system "swapon -s";


Let me know if you see room for improvements,

Regards,
-- 
andreas



[OT]RE: loss of shared memory in parent httpd

2002-03-14 Thread Narins, Josh

Call me an idiot.

How is it even remotely possible that turning off swap restores memory
shared between processes? Is the Linux kernel going from process to process
comparing pages of memory as they re-enter RAM? Oh, those two look
identical, they'll get shared?

-Incredulous

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 14, 2002 8:24 AM
To: Bill Marrs
Cc: [EMAIL PROTECTED]
Subject: Re: loss of shared memory in parent httpd


 [quoted message trimmed; see Andreas J. Koenig's reply above]


--
This message is intended only for the personal and confidential use of the designated 
recipient(s) named above.  If you are not the intended recipient of this message you 
are hereby notified that any review, dissemination, distribution or copying of this 
message is strictly prohibited.  This communication is for information purposes only 
and should not be regarded as an offer to sell or as a solicitation of an offer to buy 
any financial product, an official confirmation of any transaction, or as an official 
statement of Lehman Brothers.  Email transmission cannot be guaranteed to be secure or 
error-free.  Therefore, we do not represent that this information is complete or 
accurate and it should not be relied upon as such.  All information is subject to 
change without notice.





Problem With DB_File Installation On Red-Hat Linux 7.1

2002-03-14 Thread James McKim

I'm trying to install DB_File on our Red-Hat Linux. 7.1 box and am 
getting an error about having 2 versions of BerkeleyDB installed. The 
log of the installation follows. Any help would be appreciated.

James

  CPAN.pm: Going to build P/PM/PMQS/DB_File-1.803.tar.gz

Parsing config.in...
Looks Good.
Checking if your kit is complete...
Looks good
Writing Makefile for DB_File
cp DB_File.pm blib/lib/DB_File.pm
AutoSplitting blib/lib/DB_File.pm (blib/lib/auto/DB_File)
cc -c -I/usr/local/BerkeleyDB/include -fno-strict-aliasing 
-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -O2   -DVERSION=\1.803\ 
-DXS_VERSION=\1.803\ -fpic 
-I/usr/local/lib/perl5/5.6.1/i686-linux/CORE -D_NOT_CORE  
-DmDB_Prefix_t=size_t -DmDB_Hash_t=u_int32_t  version.c
/usr/local/bin/perl -I/usr/local/lib/perl5/5.6.1/i686-linux 
-I/usr/local/lib/perl5/5.6.1 /usr/local/lib/perl5/5.6.1/ExtUtils/xsubpp 
-noprototypes -typemap /usr/local/lib/perl5/5.6.1/ExtUtils/typemap 
-typemap typemap DB_File.xs  DB_File.xsc  mv DB_File.xsc DB_File.c
cc -c -I/usr/local/BerkeleyDB/include -fno-strict-aliasing 
-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -O2   -DVERSION=\1.803\ 
-DXS_VERSION=\1.803\ -fpic 
-I/usr/local/lib/perl5/5.6.1/i686-linux/CORE -D_NOT_CORE  
-DmDB_Prefix_t=size_t -DmDB_Hash_t=u_int32_t  DB_File.c
Running Mkbootstrap for DB_File ()
chmod 644 DB_File.bs
rm -f blib/arch/auto/DB_File/DB_File.so
LD_RUN_PATH=/usr/lib cc  -shared -L/usr/local/lib version.o DB_File.o  
-o blib/arch/auto/DB_File/DB_File.so   -ldb 
chmod 755 blib/arch/auto/DB_File/DB_File.so
cp DB_File.bs blib/arch/auto/DB_File/DB_File.bs
chmod 644 blib/arch/auto/DB_File/DB_File.bs
  /usr/bin/make  -- OK
Running make test
PERL_DL_NONLAZY=1 /usr/local/bin/perl -Iblib/arch -Iblib/lib 
-I/usr/local/lib/perl5/5.6.1/i686-linux -I/usr/local/lib/perl5/5.6.1 -e 
'use Test::Harness qw(runtests $verbose); $verbose=0; runtests @ARGV;' 
t/*.t
t/db-btree..Can't load 'blib/arch/auto/DB_File/DB_File.so' for 
module DB_File: blib/arch/auto/DB_File/DB_File.so: undefined symbol: 
db_version at /usr/local/lib/perl5/5.6.1/i686-linux/DynaLoader.pm line 206.
 at t/db-btree.t line 23
Compilation failed in require at t/db-btree.t line 23.
BEGIN failed--compilation aborted at t/db-btree.t line 23.
t/db-btree..dubious  

Test returned status 255 (wstat 65280, 0xff00)
t/db-hash...Can't load 'blib/arch/auto/DB_File/DB_File.so' for 
module DB_File: blib/arch/auto/DB_File/DB_File.so: undefined symbol: 
db_version at /usr/local/lib/perl5/5.6.1/i686-linux/DynaLoader.pm line 206.
 at t/db-hash.t line 23
Compilation failed in require at t/db-hash.t line 23.
BEGIN failed--compilation aborted at t/db-hash.t line 23.
t/db-hash...dubious  

Test returned status 255 (wstat 65280, 0xff00)
t/db-recno..Can't load 'blib/arch/auto/DB_File/DB_File.so' for 
module DB_File: blib/arch/auto/DB_File/DB_File.so: undefined symbol: 
db_version at /usr/local/lib/perl5/5.6.1/i686-linux/DynaLoader.pm line 206.
 at t/db-recno.t line 23
Compilation failed in require at t/db-recno.t line 23.
BEGIN failed--compilation aborted at t/db-recno.t line 23.
t/db-recno..dubious  

Test returned status 255 (wstat 65280, 0xff00)
FAILED--3 test scripts could be run, alas--no output ever seen
make: *** [test_dynamic] Error 2
  /usr/bin/make test -- NOT OK
Running make install
  make test had returned bad status, won't install without force

-- 

James McKim, President
ISRG, Inc.
V: (603) 497-3015  F: (603) 497-2599
http://www.isrginc.com

Strategic use of information and human capital to improve your
bottom line is our bottom line.





RE: Problem With DB_File Installation On Red-Hat Linux 7.1 [OT]

2002-03-14 Thread Joe Breeden

I had this problem the other day, and it was a screwy problem to fix. I had
to get the latest BerkeleyDB, something like v4.0.14 (www.sleepycat.com),
and install it, then reinstall the DB_File and, I believe, Storable modules,
making sure they pointed to the new install of BerkeleyDB. Of course, while
going through all of this I screwed up rpm somehow, and other parts of my
system, so badly that I surrendered, wiped my machine (my development
desktop) and loaded RedHat 7.2. I think that while trying to figure out the
problem I uninstalled some RPMs I needed. If I had CAREFULLY followed the
instructions for installing DB_File that came with the 4.0.14 BerkeleyDB
distribution, I think I would have been OK. 

It has been a few weeks since I went through this, and I have seen a
squirrel or two since then, and it was a bad experience, so I have tried
to block the memories of that awful, awful day. I hope this helps, but I
doubt it will. Good luck. 

 -Original Message-
 From: James McKim [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, March 14, 2002 9:01 AM
 To: [EMAIL PROTECTED]
 Subject: Problem With DB_File Installation On Red-Hat Linux 7.1
 
 
 [quoted message trimmed; see James McKim's original post above]

RE: Problem With DB_File Installation On Red-Hat Linux 7.1 [OT]

2002-03-14 Thread Joe Breeden

That rings a bell. I think the problem was that a secondary file required
by the db.h for one of the versions (db3, I believe) is not part of the
RedHat install, and that relinking the db.h file didn't help. It was at
that point that I went to sleepycat.com to get the complete kit and
installed from there.

 -Original Message-
 From: Nicholas Studt [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, March 14, 2002 9:32 AM
 To: Joe Breeden
 Cc: [EMAIL PROTECTED]
 Subject: Re: Problem With DB_File Installation On Red-Hat 
 Linux 7.1 [OT]
 
 
  Joe Breeden wrote [ 2002/03/14 at 09:15:44 ]
  
  It has been a few weeks since I went through this and I have seen a
  squirrel or two since then and it was a bad experience so I 
 have tried
  to block the memories of the awful awful day. I hope this 
 helps, but I
  doubt it will. Good luck. 
 
 A much easier fix specifically for RedHat 7.1 is to correctly link
 /usr/include/db.h to the same version of the /lib that DB_File is
 picking up. RedHat has versions 1, 2, and 3 of Berkeley DB installed to
 support all of the applications. Relinking db.h, generally pointing
 it to db2/db.h (though it may be db3/db.h or db1/db.h depending on the
 rest of the stuff you have installed), will make DB_File happy.
 
 
  
   -Original Message-
   From: James McKim [mailto:[EMAIL PROTECTED]]
   Sent: Thursday, March 14, 2002 9:01 AM
   To: [EMAIL PROTECTED]
   Subject: Problem With DB_File Installation On Red-Hat Linux 7.1
   
   
   I'm trying to install DB_File on our Red-Hat Linux. 7.1 
 box and am 
   getting an error about having 2 versions of BerkeleyDB 
 installed. The 
   log of the installation follows. Any help would be appreciated.
 
  | nicholas l studt   [EMAIL PROTECTED]
  | GPG: 0EBE 38F2 342C A857 E85B 2472 B85E C538 E1E0 8808
  `---
 



Re: POST and multipart/data-form question

2002-03-14 Thread Robin Berjon

On Thursday 14 March 2002 17:12, Vuillemot, Ward W wrote:
> Now, I change nothing more than the form enctype to multipart/data-form.

I haven't looked at your sample code in detail, but as someone who got 
caught by similar problems due to silly typos, I'd like to point out that 
it's multipart/form-data and not the other way 'round.
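For the record, a correctly labelled multipart/form-data body is line-oriented and boundary-delimited. A rough Python sketch of the shape the browser produces (the boundary and field name are made up for illustration; this is not CGI.pm's parser):

```python
# Build a minimal multipart/form-data body for text fields.
# Real browsers pick a random boundary and add Content-Type per part for
# file uploads; this sketch covers only simple text fields.
def multipart_body(fields: dict, boundary: str) -> str:
    parts = []
    for name, value in fields.items():
        parts.append("--" + boundary)
        parts.append('Content-Disposition: form-data; name="%s"' % name)
        parts.append("")            # blank line separates headers from value
        parts.append(value)
    parts.append("--" + boundary + "--")   # closing boundary
    return "\r\n".join(parts) + "\r\n"

body = multipart_body({"initialList": "alpha beta"}, "XyZZy")
print(body)
```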

-- 
___
Robin Berjon [EMAIL PROTECTED] -- CTO
k n o w s c a p e : // venture knowledge agency www.knowscape.com
---
There's too much blood in my caffeine system.




POST and multipart/data-form question

2002-03-14 Thread Vuillemot, Ward W

I have searched off and on for the past 3 weeks for a solution to my
problem.  I am at wits' end...and thought I would finally ask the
mailing list.

I had a set of CGI scripts that worked without problem.  I began the process
about 4 weeks ago of moving them to mod_perl.  The suite of scripts is
handled as its own PerlHandler collection.  One of the scripts has a form
where a user can either enter data directly or indicate a file of
equivalent data.

When I use the form to POST without any enctype, and data is entered directly
into the form, things work correctly.  That is, the data is massaged and
sent back to you as a downloadable file.  Of course, this form does not handle
file uploads.

Now, I change nothing more than the form enctype to multipart/data-form.
Now, regardless of how the data is presented in the form (e.g. directly or
via file upload), the browser tries to refresh the screen with the web page
(which it should not, since the only response is to send the client a file
to download).  However, the web page does not get completely sent, and
consistently stops in the middle of the send.

I have been using the POST2GET snippet to help make the post more
persistent.  In short, my httpd.conf file looks like:

#
# **
# ** MOD PERL CHANGES **
# **
# limit POSTS so that they get processed properly
<Limit POST>
  PerlInitHandler POST2GET
</Limit>
# force reloading of modules on restart
PerlFreshRestart on
# Perl module primitive mother load on start/restart
#PerlRequire lib/perl/startup.pl
# FLOE application (mod_perl)
PerlModule Apache::DBI
PerlModule floeApp
<Location /floeApp>
  SetHandler perl-script
  PerlHandler floeApp
  PerlSendHeader On
</Location>

And the relevant two snippets of code from the script are:
## process incoming
# if submitted
my %hash = ();
my $initialList = $q->param('initialList') || '';
my $upload = $q->upload || undef;
my $fh = $upload->fh if defined($upload);
if (defined($upload) && $upload) {
    $initialList = '';
    while (<$fh>) {
        $initialList .= $_;
    }
}

## some processing is done to the POST'ed data
## and eventually. . .

## send file to client
print "Content-type: text/plain\n";
print "Content-Disposition: attachment; filename=list.txt\n\n";

foreach my $value (sort keys %$hash) {
    chomp($value);
    next unless ($value);
    print "$hash->{$value}$CRLF$value$CRLF";
}

exit;


Any ideas?  I would love to get this solved so I can get back to developing
useful scripts.  :)

Thanks!
Ward

Ward W. Vuillemot
Boeing Flight Operations Engineering
Performance Software
tel +01 206-662-8667 * fax +01 206-662-7612
[EMAIL PROTECTED]




Re: [OT]RE: loss of shared memory in parent httpd

2002-03-14 Thread Bill Marrs


> How is it even remotely possible that turning off swap restores memory
> shared between processes? Is the Linux kernel going from process to process
> comparing pages of memory as they re-enter RAM? Oh, those two look
> identical, they'll get shared?

This is a good point.  I really have no clue how the kernel deals with 
swapping/sharing, so I can only speculate.  I could imagine that it's 
possible for it to do this, if the pages are marked properly, they could be 
restored.  But, I'll admit, it seems unlikely.

...and, I had this thought before.  Maybe this apparent loss of shared 
memory is an illusion.  It appears to make the amount of memory that the 
httpds use grow very high, but perhaps it is a kind of shared-swap, and 
thus the calculation I'm using to determine overall memory usage would need 
to also factor out swap.  ...in which case, there's no problem at all.

But, I do see an albeit qualitative performance increase and CPU load 
lowering when I get the httpds to stay shared (and unswapped).  So, I think 
it does matter.

Though, if you think about it, it sort of makes sense.  Some portion of the 
shared part of the httpd is also not being used much, so it gets swapped 
out to disk.  But, if those pages really aren't being used, then there 
shouldn't be a performance hit.  If they are being used, then they'd get 
swapped back in.
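For what it's worth, on later Linux kernels (2.6.14 and up, so newer than the kernels discussed in this thread) per-process sharing can be inspected directly from /proc. A rough sketch, summing the shared-page counters:

```perl
#!/usr/bin/perl
# Sum Shared_Clean/Shared_Dirty for one pid from /proc/<pid>/smaps.
# Note: smaps only exists on 2.6.14+ kernels, so this is a modern
# diagnostic, not something available on the 2.4-era systems above.
use strict;
use warnings;

my $pid = shift || $$;
open my $fh, '<', "/proc/$pid/smaps" or die "no smaps for $pid: $!";
my $kb = 0;
while (<$fh>) {
    $kb += $1 if /^Shared_(?:Clean|Dirty):\s+(\d+)/;
}
print "$kb kB shared\n";
```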

...which sort of disproves my qualitative reasoning that swap/unshared is bad.

my head hurts, maybe I should join a kernel mailing list and see if someone 
there can help me (and if I can understand them).

-bill






[ANNOUNCE] Apache::VMonitor v0.7

2002-03-14 Thread Stas Bekman

The uploaded file

 Apache-VMonitor-0.7.tar.gz

has entered CPAN as

   file: $CPAN/authors/id/S/ST/STAS/Apache-VMonitor-0.7.tar.gz
   size: 19973 bytes
md5: 352f90fa6d40deae16a4daa80ef22d5e

Changes:

* fix a divide-by-zero error (when there is no swap used). Thanks to Bill
   Marrs [EMAIL PROTECTED].


_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/




Re: POST and multipart/data-form question

2002-03-14 Thread Hans Poo

On Thu, 14 Mar 2002 at 12:12, Vuillemot, Ward W wrote:
 I have searched off and on for the past 3 weeks for a solution to my
 problem.  I am at wits end. . .and thought I would finally ask the
 mailinglist.

 I had a set of CGI scripts that worked without problem.  I began the
 process about 4 weeks ago of moving them to mod_perl.  The suite of scripts
 are handled as their own perlHandler collection.  One of the scripts has a
 form where a user can either enter data directly, or indicate a file of
 equivalent data.

 When I use the form to POST without any enctype and if you enter directly
 into the form things work correctly.  That is, the data is massaged and
 sent back to you as downloadable file.  Of course, this form does not
 handle file uploads.

 Now, I change nothing more than the form enctype to multipart/data-form.
 Now, regardless of how the data is presented in the form (e.g. directly or
 via file upload) the browser tries to refresh the screen with the web-page
 (which it should not since its only response is to send to the client a
 file to download).  However, the web page does not get completely sent, and
 consistently stops in the middle of the send.

 I have been using the POST2GET snippet to help make the post more
 persistent.  In short, my httpd.conf file looks like:

 #
 # **
 # ** MOD PERL CHANGES **
 # **
 # limit POSTS so that they get processed properly
 <Limit POST>
   PerlInitHandler POST2GET
 </Limit>
 # force reloading of modules on restart
 PerlFreshRestart on
 # Perl module primitive mother load on start/restart
 #PerlRequire lib/perl/startup.pl
 # FLOE application (mod_perl)
 PerlModule Apache::DBI
 PerlModule floeApp
 <Location /floeApp>
   SetHandler perl-script
   PerlHandler floeApp
   PerlSendHeader On
 </Location>

 And the relevant two snippets of code from the script are:
   ## process incoming
   # if submitted
   my %hash;
   my $initialList = $q->param('initialList') || '';
   my $upload = $q->upload || undef;
   my $fh = $upload->fh if defined($upload);
   if (defined($upload) && $upload) {
       $initialList = '';
       while (<$fh>) {
           $initialList .= $_;
       }
   }

   ## some processing is done to the POST'ed data
   ## and eventually. . .

   ## send file to client
   print "Content-type: text/plain\n";
   print "Content-Disposition: attachment; filename=list.txt\n\n";

   foreach my $value (sort keys %$hash) {
       chomp($value);
       next unless ($value);
       print "$hash->{$value}$CRLF$value$CRLF";
   }

   exit;


 Any ideas?  I would love to get this solved so I can get back to developing
 useful scripts.  :)

 Thanks!
 Ward

 Ward W. Vuillemot
 Boeing Flight Operations Engineering
 Performance Software
 tel +01 206-662-8667 * fax +01 206-662-7612
 [EMAIL PROTECTED]

just to test, if you try:

perl -MCGI -e 'print CGI::start_multipart_form()'

you get

<form method="post" action="/-e" enctype="multipart/form-data">

and not:

multipart/data-form

as you wrote.

Maybe you spelled it wrong in the message, but this may be your problem.
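For clarity, a minimal upload form using the correct enctype; the action URL and field name are borrowed from the script and config earlier in the thread, so treat them as illustrative:

```html
<form method="post" action="/floeApp" enctype="multipart/form-data">
  <input type="file" name="initialList">
  <input type="submit" value="Upload">
</form>
```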

Hans





RE: POST and multipart/data-form question

2002-03-14 Thread Vuillemot, Ward W

not a typo -- just my brain switching things.  the form is correct.  

   :  -Original Message-
   :  From: Robin Berjon [mailto:[EMAIL PROTECTED]]
   :  Sent: Thursday, March 14, 2002 8:22 AM
   :  To: [EMAIL PROTECTED]
   :  Subject: Re: POST and multipart/data-form question
   :  
   :  
   :  On Thursday 14 March 2002 17:12, Vuillemot, Ward W wrote:
   :   Now, I change nothing more than the form enctype to 
   :  multipart/data-form.
   :  
   :  I haven't looked at your sample code in detail, but as 
   :  someone that got 
   :  caught on similar problems due to silly typoes I'd like 
   :  to point out that 
   :  it's multipart/form-data and not the other way 'round.
   :  
   :  -- 
   :  __
   :  _
   :  Robin Berjon [EMAIL PROTECTED] -- CTO
   :  k n o w s c a p e : // venture knowledge agency www.knowscape.com
   :  --
   :  -
   :  There's too much blood in my caffeine system.
   :  



Re: Serious bug, mixing mod-perl content

2002-03-14 Thread Perrin Harkins

mire wrote:
 Beta contains new code and www is old code. We were calling www but once a
 while beta would pop in.  We noticed error messages that were giving whole
 stack trace (caller) but those error messages were not present in www code,
 they are implemented as a change in beta code.

Are you sure you aren't just having namespace problems?  If this code is 
in a module which has the same name for both versions, you can only have 
one version loaded in each perl interpreter.

- Perrin




RE: loss of shared memory in parent httpd

2002-03-14 Thread Tom Brown

On Thu, 14 Mar 2002, Bill Marrs wrote:

 
 It's copy-on-write.  The swap is a write-to-disk.
 There's no such thing as sharing memory between one process on disk(/swap)
 and another in memory.
 
 agreed.   What's interesting is that if I turn swap off and back on again, 

what? doesn't seem to me like you are agreeing, and the original quote
doesn't make sense either (because a shared page is a shared page, it can
only be in one spot until/unless it gets copied).

a shared page is swapped to disk. It then gets swapped back in, but for
some reason the kernel seems to treat swapping a page back in as copying
the page which doesn't seem logical ... anyone here got a more
direct line with someone like Alan Cox?

That is, _unless_ you copy all the swap space back in (e.g.
swapoff)..., but that is probably a very different operation
than demand paging.

 the sharing is restored!  So, now I'm tempted to run a crontab every 30 
 minutes that  turns the swap off and on again, just to keep the httpds 
 shared.  No Apache restart required!
 
 Seems like a crazy thing to do, though.
 
 You'll also want to look into tuning your paging algorithm.
 
 Yeah... I'll look into it.  If I had a way to tell the kernel to never swap 
 out any httpd process, that would be a great solution.  The kernel is 
 making a bad choice here.  By swapping, it triggers more memory usage 
 because sharing removed on the httpd process group (thus multiplied)...

the kernel doesn't want to swap out data in any case... if it
does, it means memory pressure is reasonably high. AFAIK the kernel
would far rather drop executable code pages which it can just go
re-read ...

 
 I've got MaxClients down to 8 now and it's still happening.  I think my 
 best course of action may be a crontab swap flusher.

or reduce MaxRequestsPerChild ? Stas also has some tools for
causing children to exit early if their memory usage goes above
some limit. I'm sure it's in the guide.
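One such tool is Apache::SizeLimit. A hedged sketch of the mod_perl 1 style usage follows; the limit value is illustrative and the exact variable names should be checked against the module's own documentation for your version:

```perl
# In startup.pl (mod_perl 1 style; limit value is illustrative):
use Apache::SizeLimit;
$Apache::SizeLimit::MAX_PROCESS_SIZE = 10240;   # in kB: kill children above ~10 MB

# Then in httpd.conf, run the check after each request:
# PerlFixupHandler Apache::SizeLimit
```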

 
 -bill
 

--
[EMAIL PROTECTED]   | Courage is doing what you're afraid to do.
http://BareMetal.com/  | There can be no courage unless you're scared.
   | - Eddie Rickenbacker 




RE: Mapping files

2002-03-14 Thread Stathy G. Touloumis

Ok, I found an interim solution for my file mapping issue.  The problem I am
running into now is that the GET/POST data associated with the request is
lost . . .

Is this covered in the mod_perl cookbook?

  I am trying to map a uri to a file based on certain factors.  I
 would like
  to have this done after the 'Trans' phase when certain information is
  available.  I noticed when the original file mapping does not
 exist (what
  apache maps in it's 'Trans' phase) the 'content_type' method
 does not return
  a value.


 it shouldn't.  mod_mime needs the filename the URI maps to in
 order to determine
 the MIME type.


 I actually found an initial answer after digging through the docs.  After
 mapping the file I can create a new sub-request via lookup_file($filename)
 and take appropriate action.  So far so good . . .




Re: problems returning a hash of hashes using mod_perl

2002-03-14 Thread Garth Winter Webb

On Thu, 2002-03-14 at 10:46, [EMAIL PROTECTED] wrote:

 code:
 
 |  return %Actions::Vars::config{$conf}; |
 
-

You are not accessing the hash with the proper syntax:

%Actions::Vars::config

refers to the entire config hash, while:

$Actions::Vars::config{$conf}

will return you a value from that hash.  Notice the leading '%' has been
replaced with a '$'.  Read the 'perldsc' man page:

man perldsc
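To make the sigil distinction concrete, a small self-contained sketch (the data is hypothetical; the package-qualified name from the question is flattened to a plain lexical):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my %config = (
    alice => { colour => 'blue' },
    bob   => { colour => 'red'  },
);

# %config names the whole hash; $config{alice} names one element of it,
# which here is a hash reference.
my $subhash = $config{alice};
print ref($subhash), "\n";        # prints "HASH"
print $subhash->{colour}, "\n";   # prints "blue"
```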

G




[OT]Re: problems returning a hash of hashes using mod_perl

2002-03-14 Thread Per Einar Ellefsen

At 15:46 14.03.2002 -0300, [EMAIL PROTECTED] wrote:
I'm using mod_perl with a module which stores all the configurations, and
Embperl for displaying the webpages.

a sub in this .pm has to return a hash with the configurations

but that hash is inside another general hash called configurations, this is
because each user of the program has its own configurations

the statement I'm using to return the value is this:


code:

|  return %Actions::Vars::config{$conf}; |
-


being Actions::Vars:: the name of the package, config the general hash and
$conf the name of the subhash

remember that i need the entire subhash.. not values from it

this isn't working, and because I'm not very experienced with Perl I
searched through all the documentation I could find and never found an
example of something like this.

if you could help, I'd greatly appreciate it.


Hi,

You should read the perllol documentation. It has lots of information 
about what you're talking about. This isn't a mod_perl-specific issue.

First of all:
the subhash you are referring to is just a value of the 
%Actions::Vars::config hash. So you retrieve it like this:
$hashref = $Actions::Vars::config{$conf};

Now, that gives you a hash reference, from which you can access values 
using $hashref->{key} or $$hashref{key}.
If you want to retrieve the hash in normal form, like this:

my %conf = get_conf($conf);

You need to access all the values of the sub-hash.
So you get:
return %{$Actions::Vars::config{$conf}};
and then you can do:
my %hash = get_conf($conf);   (or whatever)
This is actually equivalent to:
my $hashref = $Actions::Vars::config{$conf};
my %hash = %$hashref;
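Putting the above together, a runnable sketch of such a get_conf (the package is flattened into a plain lexical and the data is hypothetical):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my %config = ( user1 => { lang => 'en', theme => 'dark' } );

sub get_conf {
    my ($conf) = @_;
    # Dereference the stored hashref so the caller gets a plain hash copy.
    return %{ $config{$conf} };
}

my %conf = get_conf('user1');
print "$conf{lang} $conf{theme}\n";   # prints "en dark"
```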

Again, see perllol, it'll give you insight into this matter.



-- 
Per Einar Ellefsen
[EMAIL PROTECTED]




Re: [OT]Re: problems returning a hash of hashes using mod_perl

2002-03-14 Thread Per Einar Ellefsen

At 19:53 14.03.2002 +0100, Per Einar Ellefsen wrote:
Again, see perllol, it'll give you insight into this matter.


Oops, like Garth pointed out, this is supposed to be perldsc, and not 
perllol (which gives a description of arrays of arrays, which work in a 
similar way).

--

Per Einar Ellefsen
[EMAIL PROTECTED]




Re: problems returning a hash of hashes using mod_perl

2002-03-14 Thread Ernest Lergon

[EMAIL PROTECTED] wrote:
 
 [snip]

 
 |  return %Actions::Vars::config{$conf}; |
 
-
 
Must read:

return $Actions::Vars::config{$conf};   # returns a hash reference
or
return %{ $Actions::Vars::config{$conf} || {} }; # returns a plain hash

and should have been created like this:

my %user_conf = ( foo => 1, bar => 'on' );

$Actions::Vars::config{'user'} = { %user_conf };

One more tip: always say:

use strict;
use warnings;

That should have told you what's wrong ;-))

See also

http://www.perldoc.com/perl5.6/pod/perldsc.html#Declaration-of-a-HASH-OF-HASHES

Ernest



-- 

*
* VIRTUALITAS Inc.   *  *
**  *
* European Consultant Office *  http://www.virtualitas.net  *
* Internationales Handelszentrum *   contact:Ernest Lergon  *
* Friedrichstraße 95 *mailto:[EMAIL PROTECTED] *
* 10117 Berlin / Germany *   ums:+49180528132130266 *
*




Re: problems returning a hash of hashes using mod_perl

2002-03-14 Thread FRacca


tnks a lot to all of you for the quick answers..

it now recognizes the hash I'm sending to, but it's complaining a bit about
the values, saying it can't find the values for the keys. I don't think
this will be a real problem; it must be some grammatical error or something.

tnks again

Fernando Racca



This communication is for informational purposes only. It is directed to the
addressee and may contain confidential information of Apsys SRL. It does not
constitute an offer or solicitation to buy or sell any service or product,
nor official confirmation of any transaction. Any comments or statements
made in this communication do not necessarily reflect the position of Apsys
SRL. If you received it in error, please delete it and notify the sender
immediately.





[WOT] emacs and WEBDAV

2002-03-14 Thread Rob Bloodgood

I'm running a Mason based website, and I use Emacs when I write code.
My web designers use Dreamweaver.  I've designed the site so that my web
guys have to reserve me one table cell (or more than one depending on where
in the site, but you get the point) where I put a single dispatch  component
to the dynamic content appropriately.

The problem is, concurrency.  Dreamweaver has versioning built in... but
emacs has no way to recognize it.  So when I make a fix to a file, if the
designers aren't explicitly instructed to refresh-from-the-website-via-ftp,
my changes get hosed.

DW also speaks WEBDAV natively, but emacs does not.  Emacs speaks CVS
natively, but DW does not.  DW also speaks SourceSafe shudder, but I never
took that seriously... :-)

I've been trying, in various attempts over the past two years, to come up
with a compromise between the two.  The closest I've come was somebody
mentioned a CVS emulation layer over a DAV repository... but that never came
to fruition.  And even more frustrating, I haven't managed to pick up enough
eLisp to do it myself w/ vc.el sigh.

Does anybody have any ideas for my next direction to turn?

TIA!

L8r,
Rob




Re: [WOT] emacs and WEBDAV

2002-03-14 Thread darren chamberlain

Quoting Rob Bloodgood [EMAIL PROTECTED] [Mar 14, 2002 14:30]:
 I've been trying, in various attempts over the past two years,
 to come up with a compromise between the two.  The closest I've
 come was somebody mentioned a CVS emulation layer over a DAV
 repository... but that never came to fruition.  And even more
 frustrating, I haven't managed to pick up enough eLisp to do it
 myself w/ vc.el sigh.
 
 Does anybody have any ideas for my next direction to turn?

This http://www.cvshome.org/cyclic/cvs/dev-dav.html looks
promising...

(darren)

-- 
We can't all, and some of us don't. That's all there is to it.
-- Eeyore



Re: problems returning a hash of hashes using mod_perl

2002-03-14 Thread FRacca


Actually I found out that this was the correct answer:


code:

|  return %{$Actions::Vars::config{$conf}};  |
-


tnks all for taking the time to answer

Fernando Racca








Re: [WOT] emacs and WEBDAV

2002-03-14 Thread Kee Hinckley

At 11:30 AM -0800 3/14/02, Rob Bloodgood wrote:
The problem is, concurrency.  Dreamweaver has versioning built in... but
emacs has no way to recognize it.  So when I make a fix to a file, if the
designers aren't explicitly instructed to refresh-from-the-website-via-ftp,
my changes get hosed.

Versioning, no.  Locking, yes, optionally.  (Well, I guess it can do 
versioning via SourceSafe, but not via anything else.)  I'm seriously 
hoping they'll address that in the next release.

I've been trying, in various attempts over the past two years, to come up
with a compromise between the two.  The closest I've come was somebody
mentioned a CVS emulation layer over a DAV repository... but that never came
to fruition.  And even more frustrating, I haven't managed to pick up enough
eLisp to do it myself w/ vc.el sigh.

Does anybody have any ideas for my next direction to turn?

There are WebDAV extensions under development to provide versioning. 
I suspect that eventually we'll see those supported.  But that's got 
to be a year or more down the road.

Emacs over WebDAV should work fine if you run something that supports 
WebDAV as a filesystem (e.g. OSX), but that's not going to help you 
much.

There are two options I can think of.

1. If your designers aren't making use of checkin/checkout in 
DreamWeaver, then simply make it clear to them that before they can 
save a file to the server, they have to do a sync first.  Make the 
final repository sit on CVS, and do a checkin every night.  So if 
something does go wrong you can at least pick up the previous day's 
work.

2. DreamWeaver's locking mechanism is handled by placing lock files 
on the server.  Those files have the info about who has what.  It 
ought to be possible to write an emacs extension that would use those 
files.

-- 

Kee Hinckley - Somewhere.Com, LLC
http://consulting.somewhere.com/
[EMAIL PROTECTED]

I'm not sure which upsets me more: that people are so unwilling to accept
responsibility for their own actions, or that they are so eager to regulate
everyone else's.



Re: problem in recompiling

2002-03-14 Thread Ged Haywood

Hi there,

On Thu, 14 Mar 2002, Parag R Naik wrote:

 Hi all,
 I am having a problem compiling mod_perl 1.26 src with apache 1.3.22 src.
 The problem on running make occur at the following command 
 
 gcc -c -I../.. -I/usr/local/ActivePerl-5.6/lib/5.6.1/i686-linux-thread-multi/COR

ActivePerl ?  I don't understand.  Tell me all about that.

73,
Ged.




Problem Removing Handlers

2002-03-14 Thread Hans Poo

Please Help

One of my handlers does:

$r->set_handlers( PerlInitHandler => undef);

Later in the same virtual host configuration there is another <Directory>
block covering the URL / with this handler:

PerlInitHandler "sub { my $r = shift; warn 'callback', $r->current_callback; 
warn 'this should not be called'; }"

It happens that the callback printed in the above warn is: PerlInitHandler.
Exactly what I thought was removed with: $r->set_handlers( PerlInitHandler => 
undef);

Hans



Re: Problem Removing Handlers

2002-03-14 Thread Geoffrey Young



Hans Poo wrote:

 Please Help
 
 One of my handlers does:
 
 $r->set_handlers( PerlInitHandler => undef);
 
 Later in the same virtual host configuration there is another <Directory> 
 block covering the URL / with this handler: 
 
 PerlInitHandler "sub { my $r = shift; warn 'callback', $r->current_callback; 
 warn 'this should not be called'; }" 
 
 It happens that the callback printed in the above warn is: PerlInitHandler.
 Exactly what I thought was removed with: $r->set_handlers( PerlInitHandler => 
 undef);

you probably want to make that set_handlers() call explicit, then, to whatever 
phase you really mean, since PerlInitHandler isn't a real handler but an alias 
for others:

$r->set_handlers( PerlPostReadRequestHandler => undef);
$r->set_handlers( PerlHeaderParserHandler => undef);

HTH

--Geoff




Re: performance testing - emulating real world use

2002-03-14 Thread mike808

 My experience with commercial load-testing apps is that they are 
 outrageously expensive, a pain to program, don't really scale all that 
 well, and mostly have to run on Windows with someone sitting at the 
 mouse.  There are some that work better than others, but the free stuff 
 in this areas is quite good.

Ditto. I've found it easier to hack together something with LWP's 
LWP::UserAgent and Benchmark. Particularly when load-testing an application 
that required different pre-registered users that had not performed this 
particular transaction we were load testing. (A survey with random and 
different questions - depending on the user!).

None of the macro-like testing apps could do very much with regard to that kind 
of interaction and variability in the content generated by the application we 
were load-testing. But a 30-line Perl script that simply appended the Benchmark 
results into a tab-delimited file worked great. We found about 30 instances of 
Perl running the script per WinPC ate the machine. So we only loaded 15 per PC 
during actual testing and added more distributed nodes to the test.

As an aside, the whole thing was an exercise in needing a cup of sugar and 
asking the local grocery store how much sugar they have on the shelves;
i.e. what is the point of measuring more than the 1 cup you need?
So we measured (at great expense) and determined that the entire lifetime load 
(~1yr) expected for all users on their system could be handled on the 
existing system during a lunch hour.

Mike808/

-
http://www.valuenet.net





Apache::DBI startup failure

2002-03-14 Thread Doug Silver

I can't seem to get Apache::DBI to start up properly.

Here's my startup.pl:

#!/usr/bin/perl -w
use strict;
use Apache ();
use Apache::Status ();
use Apache::DBI (); #  This *must* come before all other DBI modules!
use Apache::Registry;
use CGI ();
CGI->compile(':all');
use CGI::Carp ();
$Apache::DBI::DEBUG = 2;
Apache::DBI->connect_on_init(
    "DBI:Pg:dbname=demo;host=localhost", "demo", "",
    {
        PrintError => 1, # warn() on errors
        RaiseError => 0, # don't die on error
        AutoCommit => 0, # require transactions
    }
) or die "Cannot connect to database: $DBI::errstr";
1;

And here's what happens:
[Thu Mar 14 14:28:35 2002] [notice] Apache/1.3.22 (Unix) mod_perl/1.26 PHP/4.1.0 
mod_ssl/2.8.5 OpenSSL/0.9.6a configured -- resuming normal operations
13336 Apache::DBI PerlChildInitHandler 
13337 Apache::DBI PerlChildInitHandler 
13338 Apache::DBI PerlChildInitHandler 
13339 Apache::DBI PerlChildInitHandler 
13340 Apache::DBI PerlChildInitHandler 
[Thu Mar 14 14:28:35 2002] [notice] Accept mutex: flock (Default: flock)
[Thu Mar 14 14:28:35 2002] [notice] child pid 13338 exit signal Segmentation fault (11)
[Thu Mar 14 14:28:35 2002] [notice] child pid 13339 exit signal Segmentation fault (11)
[Thu Mar 14 14:28:35 2002] [notice] child pid 13337 exit signal Segmentation fault (11)
[Thu Mar 14 14:28:35 2002] [notice] child pid 13336 exit signal Segmentation fault (11)
[Thu Mar 14 14:28:36 2002] [notice] child pid 13340 exit signal Segmentation fault (11)

If I don't use the connect_on_init stuff, I can run a test script fine with those
exact db parameters.

Any suggestions?

Thanks!
-- 
~
Doug Silver
Network Manager
Quantified Systems, Inc
~




Re: Apache::DBI startup failure

2002-03-14 Thread Brendan W. McAdams

I've seen similar behavior with DBD::Sybase; if your SYBASE env variable
is not set or points at an invalid directory Apache starts up but begins
segging every child process over and over again.

I'm not familiar with Postgres but this might point you in the right
direction.

On Thu, 2002-03-14 at 18:09, Doug Silver wrote:
 I can't seem to get Apache::DBI to start up properly.
 
 Here's my startup.pl:
 
 #!/usr/bin/perl -w
 use strict;
 use Apache ();
 use Apache::Status ();
 use Apache::DBI ();   #  This *must* come before all other DBI modules!
 use Apache::Registry;
 use CGI ();
 CGI->compile(':all');
 use CGI::Carp ();
 $Apache::DBI::DEBUG = 2;
 Apache::DBI->connect_on_init(
     "DBI:Pg:dbname=demo;host=localhost", "demo", "",
     {
         PrintError => 1, # warn() on errors
         RaiseError => 0, # don't die on error
         AutoCommit => 0, # require transactions
     }
 ) or die "Cannot connect to database: $DBI::errstr";
 1;
 
 And here's what happens:
 [Thu Mar 14 14:28:35 2002] [notice] Apache/1.3.22 (Unix) mod_perl/1.26 PHP/4.1.0 
mod_ssl/2.8.5 OpenSSL/0.9.6a configured -- resuming normal operations
 13336 Apache::DBI PerlChildInitHandler 
 13337 Apache::DBI PerlChildInitHandler 
 13338 Apache::DBI PerlChildInitHandler 
 13339 Apache::DBI PerlChildInitHandler 
 13340 Apache::DBI PerlChildInitHandler 
 [Thu Mar 14 14:28:35 2002] [notice] Accept mutex: flock (Default: flock)
 [Thu Mar 14 14:28:35 2002] [notice] child pid 13338 exit signal Segmentation fault 
(11)
 [Thu Mar 14 14:28:35 2002] [notice] child pid 13339 exit signal Segmentation fault 
(11)
 [Thu Mar 14 14:28:35 2002] [notice] child pid 13337 exit signal Segmentation fault 
(11)
 [Thu Mar 14 14:28:35 2002] [notice] child pid 13336 exit signal Segmentation fault 
(11)
 [Thu Mar 14 14:28:36 2002] [notice] child pid 13340 exit signal Segmentation fault 
(11)
 
 If I don't use the connect_on_init stuff, I can run a test script fine with those
 exact db parameters.
 
 Any suggestions?
 
 Thanks!
 -- 
 ~
 Doug Silver
 Network Manager
 Quantified Systems, Inc
 ~
 
-- 
Brendan W. McAdams | [EMAIL PROTECTED]
Senior Applications Developer  | (646) 375-1140
TheMuniCenter, LLC | www.themunicenter.com

Always listen to experts. They'll tell you what can't be done, and why.
Then do it.
- Robert A. Heinlein





RE: [WOT] emacs and WEBDAV

2002-03-14 Thread Rob Bloodgood

 At 11:30 AM -0800 3/14/02, Rob Bloodgood wrote:
 The problem is, concurrency.  Dreamweaver has versioning built
 in... but emacs has no way to recognize it.  So when I make a fix
 to a file, if the designers aren't explicitly instructed to 
 refresh-from-the-website-via-ftp, my changes get hosed.

 Versioning, no.  Locking, yes, optionally.  (Well, I guess it can do
 versioning via SourceSafe, but not via anything else.)  I'm seriously
 hoping they'll address that in the next release.

sigh I meant locking.  Not versioning.  e-Foot in e-Mouth.

 Emacs over WebDAV should work fine if you run something that supports
 WebDAV as a filesystem (e.g. OSX), but that's not going to help you
 much.

If we're talking about LOCKING, is this statement still true?

 There are two options I can think of.

 1. If your designers aren't making use of checkin/checkout in
 DreamWeaver, then simply make it clear to them that before they can
 save a file to the server, they have to do a sync first.  Make the
 final repository sit on CVS, and do a checkin every night.  So if
 something does go wrong you can at least pick up the previous day's
 work.

That (the train-them-to-sync-first part) is what I've been forced to
do so far.  I haven't gone so far as to set up CVS for the website, though.
Thanks for the tip; I'll look into it.

 2. DreamWeaver's locking mechanism is handled by placing lock files
 on the server.  Those files have the info about who has what.  It
 ought to be possible to write an emacs extension that would use those
 files.

Certainly.  But my original message mentioned the REAL source of my
frustration: I'm pretty limited at elisp, otherwise I might have already had
this worked out. :-)

L8r,
Rob




Re: Apache::DBI startup failure

2002-03-14 Thread Doug Silver

Ok, I found it, but this has got to be some kind of bug.

This works:
Apache::DBI->connect_on_init("dbi:pg:demo", "demo");

This doesn't:
Apache::DBI->connect_on_init("dbi:Pg:demo", "demo");

That's right, putting 'dbi:pg' in lowercase made it work. I looked through
some old newsgroup stuff and saw someone using Postgres had it similar to
that.  

Here's some further debugging information for the developers:
perl -v = v5.6.1 on i386-freebsd (FreeBSD 4.4)
# pkg_info |egrep -i dbi|postgres
p5-Apache-DBI-0.88  DBI persistent connection, authentication and authorization
p5-DBD-Pg-1.01  Provides access to PostgreSQL databases through the DBI
p5-DBI-1.20 The perl5 Database Interface.  Required for DBD::* modules
postgresql-7.1.3A robust, next generation, object-relational DBMS

-doug

On 14 Mar 2002, Brendan W. McAdams wrote:

 I've seen similar behavior with DBD::Sybase; if your SYBASE env variable
 is not set or points at an invalid directory Apache starts up but begins
 segging every child process over and over again.
 
 I'm not familiar with Postgres but this might point you in the right
 direction.
 
 On Thu, 2002-03-14 at 18:09, Doug Silver wrote:
  I can't seem to get Apache::DBI to start up properly.
  
  Here's my startup.pl:
  
  #!/usr/bin/perl -w
  use strict;
  use Apache ();
  use Apache::Status ();
  use Apache::DBI (); #  This *must* come before all other DBI modules!
  use Apache::Registry;
  use CGI (); 
  CGI-compile(':all');
  use CGI::Carp ();
  $Apache::DBI::DEBUG = 2;
  Apache::DBI-connect_on_init
 (DBI:Pg:dbname=demo;host=localhost, demo, ,
{
   PrintError = 1, # warn() on errors
   RaiseError = 0, # don't die on error
   AutoCommit = 0, # require transactions
}
 )
 or die Cannot connect to database: $DBI::errstr;
  1;
  




Re: [OT]RE: loss of shared memory in parent httpd

2002-03-14 Thread Stas Bekman

Bill Marrs wrote:
 
 You actually can do this. See the mergemem project:
 http://www.complang.tuwien.ac.at/ulrich/mergemem/
 
 
 I'm interested in this, but it involves a kernel hack and the latest 
 version is from 29-Jan-1999, so I got cold feet.

It was a student project, and unless someone tells me differently it wasn't 
picked up by the community.
In any case I've mentioned this as a proof of concept. Of course I'd 
love to see a working tool too.


_
Stas Bekman JAm_pH  --   Just Another mod_perl Hacker
http://stason.org/  mod_perl Guide   http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]  http://ticketmaster.com http://apacheweek.com
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/




[Fwd: Re: Apache::DBI startup failure]

2002-03-14 Thread Brendan W. McAdams

Weird, although I bet if you had straced the apache processes you would
have seen the file-not-found error.

For some reason I recall DBD Drivers being case sensitive.
On Thu, 2002-03-14 at 20:06, Doug Silver wrote:
 Ok, I found it, but this has got to be some kind of bug.
 
 This works:
 Apache::DBI->connect_on_init("dbi:pg:demo", "demo");
 
 This doesn't:
 Apache::DBI->connect_on_init("dbi:Pg:demo", "demo");
 
 That's right, putting 'dbi:pg' in lowercase made it work. I looked through
 some old newsgroup stuff and saw someone using Postgres had it similar to
 that.  
 
 Here's some further debugging information for the developers:
 perl -v = v5.6.1 on i386-freebsd (FreeBSD 4.4)
 # pkg_info |egrep -i dbi|postgres
 p5-Apache-DBI-0.88  DBI persistent connection, authentication and authorization
 p5-DBD-Pg-1.01  Provides access to PostgreSQL databases through the DBI
 p5-DBI-1.20 The perl5 Database Interface.  Required for DBD::* modules
 postgresql-7.1.3A robust, next generation, object-relational DBMS
 
 -doug
 
 On 14 Mar 2002, Brendan W. McAdams wrote:
 
  I've seen similar behavior with DBD::Sybase; if your SYBASE env variable
  is not set or points at an invalid directory Apache starts up but begins
  segging every child process over and over again.
  
  I'm not familiar with Postgres but this might point you in the right
  direction.
  
  On Thu, 2002-03-14 at 18:09, Doug Silver wrote:
   I can't seem to get Apache::DBI to start up properly.
   
   Here's my startup.pl:
   
   #!/usr/bin/perl -w
   use strict;
   use Apache ();
   use Apache::Status ();
   use Apache::DBI ();   # This *must* come before all other DBI modules!
   use Apache::Registry;
   use CGI ();
   CGI->compile(':all');
   use CGI::Carp ();
   $Apache::DBI::DEBUG = 2;
   Apache::DBI->connect_on_init
 ("DBI:Pg:dbname=demo;host=localhost", "demo", "",
  {
PrintError => 1, # warn() on errors
RaiseError => 0, # don't die on error
AutoCommit => 0, # require transactions
  }
 )
 or die "Cannot connect to database: $DBI::errstr";
   1;
   
 
-- 
Brendan W. McAdams | [EMAIL PROTECTED]
Senior Applications Developer  | (646) 375-1140
TheMuniCenter, LLC | www.themunicenter.com

Always listen to experts. They'll tell you what can't be done, and why.
Then do it.
- Robert A. Heinlein





Looking for RPC::ONC.pm

2002-03-14 Thread Medi Montaseri

Does anyone know where I can find an ONC RPC perl package?

The only one I found is perlrpcgen-0.71a from Jake Donham,
who used to be reachable at [EMAIL PROTECTED]. However, Jake's
implementation requires an include file (rpc/svc_soc.h) that seems to
be available only on Solaris. I need a Linux version of it.

I also see lots of XML-RPC and home-made client-server RPC look-alikes.
I need this to work with an ONC RPC server written in C.

Thanks

--
-
Medi Montaseri   [EMAIL PROTECTED]
Unix Distributed Systems EngineerHTTP://www.CyberShell.com
CyberShell Engineering
-






Re: [WOT] emacs and WEBDAV

2002-03-14 Thread Tatsuhiko Miyagawa

At Thu, 14 Mar 2002 11:30:54 -0800,
Rob Bloodgood wrote:

 DW also speaks WEBDAV natively, but emacs does not.  Emacs speaks CVS

Eldav: Yet another WebDAV interface for Emacsen 
http://www.gohome.org/eldav/



-- 
Tatsuhiko Miyagawa [EMAIL PROTECTED]



cvs commit: modperl-2.0/xs/maps apr_types.map

2002-03-14 Thread stas

stas02/03/14 18:03:57

  Modified:xs/maps  apr_types.map
  Log:
  fixing the typemap for apr_interval_time_t to NV, because it's:
  typedef apr_int64_t apr_interval_time_t;
  64bit != IV, but NV
  
  Revision  ChangesPath
  1.13  +1 -1  modperl-2.0/xs/maps/apr_types.map
  
  Index: apr_types.map
  ===
  RCS file: /home/cvs/modperl-2.0/xs/maps/apr_types.map,v
  retrieving revision 1.12
  retrieving revision 1.13
  diff -u -r1.12 -r1.13
  --- apr_types.map 10 Mar 2002 00:11:50 -  1.12
  +++ apr_types.map 15 Mar 2002 02:03:57 -  1.13
   @@ -139,7 +139,7 @@
   apr_ssize_t| IV
   apr_size_t | IV
   apr_time_t | NV
  -apr_interval_time_t| IV
  +apr_interval_time_t| NV
   apr_gid_t  | IV
   apr_uid_t  | IV
   apr_off_t  | IV
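The arithmetic behind the IV-to-NV switch is easy to check: on a 32-bit Perl an IV is a 32-bit signed integer, which is too small for an apr_int64_t microsecond count, while an NV (a C double) represents integers exactly up to 2**53. A small sketch of the same reasoning in Python (the timestamp is an illustrative value, not taken from the commit):

```python
# apr_time_t / apr_interval_time_t hold microseconds in an apr_int64_t.
# An example microseconds-since-epoch value (illustrative, ~March 2002):
usec = 1_016_150_637_000_000

# Too big for a 32-bit signed IV, whose maximum is 2**31 - 1...
assert usec > 2**31 - 1

# ...but well inside a double's exact-integer range of +/- 2**53,
# so an NV stores it without loss.
assert usec < 2**53
assert int(float(usec)) == usec
print("fits exactly in an NV, overflows a 32-bit IV")
```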