Re: implementing server affinity
Chris Nokleberg [EMAIL PROTECTED] writes: Of course, the front-end proxy servers don't have mod_perl, so the TransHandler would have to be written in C (?). Does anyone know of any existing code that does this sort of thing? Or simply well-written C TransHandlers that I could work off of? Is there a better way? How are you doing session management? I'm sure mod_rewrite could peek at the user cookie or mangled URL and redirect accordingly. -- Dave Hodgkinson, http://www.hodgkinson.org Editor-in-chief, The Highway Star http://www.deep-purple.com Apache, mod_perl, MySQL, Sybase hired gun for, well, hire - - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: New Module Idea: MLDBM::Sync
Tim Bunce wrote: I looked through the code and couldn't see how you are doing i/o flushing. This is more of an issue with Berkeley DB than SDBM I think, since Berkeley DB will cache things in memory. Can you point me to it? I'm puzzled why people wouldn't just use version 3 of Berkeley DB (via DB_File.pm or BerkeleyDB.pm), which supports multiple readers and writers through a shared memory cache. No open/close/flush required per-write, and very very much faster. Is there a reason I'm missing? I'm not sure I want to go the shared memory route, generally, and if I were to, I'd likely start, as you say, with BerkeleyDB or IPC::Cache. I know there isn't much of a learning curve, but it's not complexity that I want to add to the system I'm working on now. I've been doing stuff like MLDBM::Sync for a while, making DBMs work in a multiprocess environment, and it's comforting. 1000 reads/writes per second is enough for my caching needs now, as it's just a front end to SQL queries. --Joshua - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
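For readers who haven't seen the proposed module, a minimal usage sketch is below. The API is assumed to mirror MLDBM's tie interface, per Joshua's description of MLDBM::Sync as an MLDBM subclass; the file path and keys are invented for illustration.

```perl
use Fcntl qw(:DEFAULT);
use MLDBM qw(SDBM_File Storable);  # SDBM back end, Storable serializer
use MLDBM::Sync;                   # the proposed wrapper module

# Each read/write is meant to be wrapped in flock() plus an
# open/close cycle, so concurrent Apache children stay in sync.
tie my %cache, 'MLDBM::Sync', '/tmp/sql_cache.dbm', O_CREAT|O_RDWR, 0640
    or die "cannot tie: $!";

# Cache a SQL result as a front end to the database (Joshua's use case).
$cache{'query:SELECT 1'} = { rows => [[1]], cached_at => time() };
my $hit = $cache{'query:SELECT 1'};  # re-locked and re-read on fetch
```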
Re: implementing server affinity
I thought about mod_rewrite but I would like this affinity module to handle the job of picking a random/round-robin backend server and setting the cookie itself. That way I don't have any session mgmt code in the backend server; I will just "know" that certain urls are guaranteed to bring the user back to the same box. --Chris On 22 Nov 2000, David Hodgkinson wrote: Chris Nokleberg [EMAIL PROTECTED] writes: Of course, the front-end proxy servers don't have mod_perl, so the TransHandler would have to be written in C (?). Does anyone know of any existing code that does this sort of thing? Or simply well-written C TransHandlers that I could work off of? Is there a better way? How are you doing session management? I'm sure mod_rewrite could peek at the user cookie or mangled URL and redirect accordingly. -- Dave Hodgkinson, http://www.hodgkinson.org Editor-in-chief, The Highway Star http://www.deep-purple.com Apache, mod_perl, MySQL, Sybase hired gun for, well, hire - - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
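For comparison, the mod_rewrite approach David suggests might look roughly like this on the front-end proxies. The cookie name, backend host names, and URL layout are invented for illustration; a real setup would also need something (the affinity module Chris describes, or the backend itself) to set the cookie on first contact.

```apache
RewriteEngine On

# If the "backend" cookie names a server, proxy the request to it.
RewriteCond %{HTTP_COOKIE} backend=(web[0-9]+)
RewriteRule ^/app/(.*)$ http://%1.internal.example.com/app/$1 [P,L]

# No cookie yet: fall through to a default backend. True round-robin
# would need external help, e.g. a RewriteMap of type "rnd".
RewriteRule ^/app/(.*)$ http://web1.internal.example.com/app/$1 [P,L]
```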
Re: VirtualDocumentRoot problem
Hi Mark, VirtualHost sections do support the ErrorLog and CustomLog directives. I have these in my vhost sections:
CustomLog /var/log/apache/vhost-httpd-access.log vcombined
ErrorLog /var/log/apache/vhost-httpd-error.log
I haven't tried the %0 option with this yet; that would eat up too many file descriptors. I have a vcombined LogFormat that adds the vhost to the front of the line. Mark Bojara wrote: Date: Wed, 7 Nov 2000 02:38:00 + (GST) To: [EMAIL PROTECTED] Subject: VirtualDocumentRoot problem From: Mark Bojara [EMAIL PROTECTED] Hi, I have a problem with the vhost module. The module does not support logging per virtualhost in separate files... eg I am looking for something like: VirtualLoggingFile /home/logs/access-%0.log something to that effect. Regards, Mark Bojara MICS Networking - 012-661- - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
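The vcombined format mentioned above is not an Apache built-in; a plausible definition, using the standard %v token (the canonical ServerName of the serving virtual host) prepended to the usual "combined" format, would be:

```apache
# "combined" log format with the virtual host name added at the front
LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vcombined
CustomLog /var/log/apache/vhost-httpd-access.log vcombined
```

A post-processing script can then split the single file per vhost, which avoids the file-descriptor cost of one CustomLog per VirtualHost.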
[ libapreq ] desperately need .32 or .33
Hi everyone, I'm using Apache::Request, and I've encountered a bug! [I did a POST of multipart form data, and the first field, a textarea, can contain some garbage at the end. Garbage, because the data keeps changing.] I'm quite sure [after a quick glance at the mod_perl ML archive] that one of the proposed patches would solve the problem. So I desperately need a new version of libapreq, the .32, or .33 if it's finished! I do not know if Doug still maintains it, or if it's someone else. But please, would it be possible to see a new release very, very soon? thanks, kktos 365 Corp
Re: [ libapreq ] desperately need .32 or .33
On Wed, 22 Nov 2000, Thierry-Michel Barral wrote: Hi everyone, I'm using Apache::Request, and I've encountered a bug! [I did a POST of multipart form data, and the first field, a textarea, can contain some garbage at the end. Garbage, because the data keeps changing.] I'm quite sure [after a quick glance at the mod_perl ML archive] that one of the proposed patches would solve the problem. So I desperately need a new version of libapreq, the .32, or .33 if it's finished! I do not know if Doug still maintains it, or if it's someone else. But please, would it be possible to see a new release very, very soon? Doug mentioned to me at ApacheCon (or it may have been back at TPC) that he would like someone else to take over maintenance of Apache::Request. If nobody volunteers, I'm willing to look at doing so, although I've only just started down that long road into using XS, so I'm more likely to spend time applying patches than writing them. -- Matt Sergeant, Director and CTO, AxKit.com Ltd (XML Application Serving: XSLT, XPathScript, XSP), http://axkit.org - Personal Web Site: http://sergeant.org/ - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Question
Hi, I would be grateful if someone could answer this question: even if you tell Apache only to execute files in a certain directory under mod_perl, do all processes still include the mod_perl code? Thanks, Jonathan Tweed - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: Question
On Wed, 22 Nov 2000, Jonathan Tweed wrote: I would be grateful if someone could answer this question: Even if you tell Apache only to execute files in a certain directory under mod_perl do all processes still include the mod_perl code? If I understand your question correctly, yes. MBM -- Matthew Byng-Maddick Home: [EMAIL PROTECTED] +44 20 8981 8633 (Home) http://colondot.net/ Work: [EMAIL PROTECTED] +44 7956 613942 (Mobile) Diplomacy is the art of saying "nice doggie" until you can find a rock. -- Wynn Catlin - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [ANNOUNCE] HTTP::GHTTP
On Tuesday, November 21, 2000, at 06:28 PM, Matt Sergeant wrote: HTTP::GHTTP is a lightweight HTTP client library based on the gnome libghttp library. It offers a pretty simple to use API for doing HTTP requests. This can be useful under mod_perl because the alternatives (e.g. LWP) are quite large. Will the ghttp library work under Solaris and, if so, where can it be downloaded? Cheers, Mark - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [ANNOUNCE] HTTP::GHTTP
On Wed, 22 Nov 2000, Mark Doyle wrote: On Tuesday, November 21, 2000, at 06:28 PM, Matt Sergeant wrote: HTTP::GHTTP is a lightweight HTTP client library based on the gnome libghttp library. It offers a pretty simple to use API for doing HTTP requests. This can be useful under mod_perl because the alternatives (e.g. LWP) are quite large. Will the ghttp library work under Solaris and, if so, where can it be downloaded? It should work pretty much anywhere (probably even Windows, if you can get past the configure stage). It's part of the GNOME project, so you can get it from http://www.gnome.org/, or a search should turn up a Solaris package for it. -- Matt Sergeant, AxKit.com Ltd - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Using the same mod_perl cgi-bin by 2 domains mapped to the same IP
Hi, I'm running a mod_perl application on a virtually hosted site, but with its own Apache web server. I need to install another domain name using the same virtual host IP, e.g.:
<VirtualHost aaa.aaa.aaa.aaa>
ServerName example1.com
. .
</VirtualHost>
<VirtualHost aaa.aaa.aaa.aaa>
ServerName example2.com
. .
</VirtualHost>
I'd like to get it so that both domain names can access the same cgi-bin configured to run as mod_perl. It works well in one, but I get an "Undefined subroutine" for the other (or vice versa), since it's probably using scripts compiled for that child process from the other. eg. Undefined subroutine Apache::example2_2ecom::cgi2dbin::myscript_2ecgi::MyFunction Thank-you, David J. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
RE: Using the same mod_perl cgi-bin by 2 domains mapped to the same IP
-Original Message- From: David Jourard [mailto:[EMAIL PROTECTED]] Sent: Wednesday, November 22, 2000 11:54 AM To: [EMAIL PROTECTED] Subject: Using the same mod_perl cgi-bin by 2 domains mapped to the same IP [snip] I'd like to get it so that both domain names can access the same cgi-bin configured to run as mod_perl. It works well in one, but I get an "Undefined subroutine" for the other (or vice versa), since it's probably using scripts compiled for that child process from the other. eg. Undefined subroutine Apache::example2_2ecom::cgi2dbin::myscript_2ecgi::MyFunction Try setting $Apache::Registry::NameWithVirtualHost = 0; in your startup.pl, and read http://perl.apache.org/guide/config.html#A_Script_From_One_Virtual_Host_C That should do the trick... --Geoff - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
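Concretely, the startup.pl fragment Geoff refers to is just (a sketch; the surrounding startup.pl is assumed to be loaded via PerlRequire before any registry scripts are compiled):

```perl
# In startup.pl: with NameWithVirtualHost off, a registry script is
# compiled into one package shared by every virtual host that maps to
# the same file, instead of one package per (vhost, script) pair.
use Apache::Registry ();
$Apache::Registry::NameWithVirtualHost = 0;
```

The trade-off, as the guide notes, is that two vhosts serving *different* scripts at the same path would then collide, so this only suits setups like David's where both domains share one cgi-bin.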
Re: [ANNOUNCE] HTTP::GHTTP
On Wednesday, November 22, 2000, at 10:44 AM, Matt Sergeant wrote: On Wed, 22 Nov 2000, Mark Doyle wrote: It should work pretty much anywhere (probably even windows if you can get past the configure stage). Its part of the gnome project, so you can get it from http://www.gnome.org/. Or a search should turn up a solaris package for it. Starting with www.gnome.org led to RPM hell and loading a lot more than just this single library. Key was to search Google for "libghttp and Solaris" and not just 'ghttp'. Found: ftp://ftp.sunfreeware.com/pub/freeware/SOURCES/libghttp-1.0.6.tar.gz Not quite the latest patchlevel, but easiest way to get the source. Compiles with no problem and HTTP::GHTTP is a happy camper. Thanks again Matt for a great contribution. Cheers, Mark - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [ANNOUNCE] HTTP::GHTTP
On Wednesday, November 22, 2000, at 12:43 PM, Nathan Torkington wrote: Mark Doyle writes: Starting with www.gnome.org led to RPM hell and loading a lot more than just this single library. This: ftp://ftp.gnome.org/pub/GNOME/stable/sources/libghttp/ will hold the latest stable sources, and is the master distribution. Ah, thanks for the pointer Cheers, Mark - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [ libapreq ] desperately need .32 or .33
Thierry-Michel Barral [ Hi everyone, I'm using Apache::Request, and I've encountered a bug! Please, would it be possible to see a new release very soon? ] Matt: Doug mentioned to me at ApacheCon (or it may have been back at TPC) that he would like someone else to take over maintenance of Apache::Request. If nobody volunteers, I'm willing to look at doing so, although I've only just started down that long road into using XS, so I'm more likely to spend time applying patches than writing them. /Matt I'm not sure I can do it myself, sorry [not good/talented/... enough] :( It would be really great if you could spend some time applying the patches... For me, but I'm pretty sure others would be very happy too, because, after a glance at this mailing list, I've concluded that each guy has his own libapreq! Thanks in advance for your help :o) kktos - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
RE: Question
How could they not? Since the files are executable by any process, all processes must have the mod_perl code in them. If you really wanted to, you could run two versions of Apache, one with mod_perl and one without. You could then call all CGIs through a different IP and run mod_perl on that one only. This would reduce the size of the Apache executables running in memory. Richard, Web Engineer, ProAct Technologies Corp. -Original Message- From: Jonathan Tweed [mailto:[EMAIL PROTECTED]] Sent: Wednesday, November 22, 2000 9:15 AM To: '[EMAIL PROTECTED]' Subject: Question Hi, I would be grateful if someone could answer this question: even if you tell Apache only to execute files in a certain directory under mod_perl, do all processes still include the mod_perl code? Thanks, Jonathan Tweed - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
An idea on END_REQUEST handler
Hi! Everyone knows that END handlers in packages under mod_perl are executed only when Apache terminates. But from time to time there might be a need to execute something when the request is finished. In practice, what I do in these cases is install a PerlCleanupHandler which checks all loaded packages and, if they define an 'END_REQUEST' function, executes that function. Maybe it's worth making this a standard feature of mod_perl? Andrei - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
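A sketch of the technique Andrei describes is below. The package name is invented, the END_REQUEST convention follows his proposal, and this is an illustration rather than his actual code:

```perl
package My::EndRequest;
use strict;

# Install in httpd.conf with:
#   PerlCleanupHandler My::EndRequest

sub handler {
    my $r = shift;
    # Walk %INC (the table of loaded .pm files), derive each package
    # name, and call its END_REQUEST function if one is defined.
    foreach my $file (keys %INC) {
        next unless $file =~ /\.pm$/;
        (my $pkg = $file) =~ s/\.pm$//;
        $pkg =~ s{/}{::}g;
        if (my $sub = $pkg->can('END_REQUEST')) {
            eval { $sub->($r) };
            warn "END_REQUEST for $pkg failed: $@" if $@;
        }
    }
    return 0;   # i.e. Apache::Constants::OK
}

1;
```

One obvious cost: the %INC scan runs on every request, so a production version would probably cache the list of packages that actually define END_REQUEST.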
Re: [ANNOUNCE] HTTP::GHTTP
man that is one crazy module! in under ten minutes i had the thing running! kudos again to you matt! - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [ANNOUNCE] HTTP::GHTTP
On Wed, 22 Nov 2000, clayton cottingham wrote: man that is one crazy module! in under ten minutes i had the thing running! kudos again to you matt! I'd be happy if it wasn't turning out to be more popular than AxKit! *sigh* :-) -- Matt/ /||** Director and CTO ** //||** AxKit.com Ltd ** ** XML Application Serving ** // ||** http://axkit.org ** ** XSLT, XPathScript, XSP ** // \\| // ** Personal Web Site: http://sergeant.org/ ** \\// //\\ // \\ - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [ANNOUNCE] HTTP::GHTTP
Simple useful things get adopted quickly, more complex useful things take more time. marc - Original Message - From: "Matt Sergeant" [EMAIL PROTECTED] To: "clayton cottingham" [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Wednesday, November 22, 2000 1:37 PM Subject: Re: [ANNOUNCE] HTTP::GHTTP On Wed, 22 Nov 2000, clayton cottingham wrote: man that is one crazy module! in under ten minutes i had the thing running! kudos again to you matt! I'd be happy if it wasn't turning out to be more popular than AxKit! *sigh* :-) -- Matt/ /||** Director and CTO ** //||** AxKit.com Ltd ** ** XML Application Serving ** // ||** http://axkit.org ** ** XSLT, XPathScript, XSP ** // \\| // ** Personal Web Site: http://sergeant.org/ ** \\// //\\ // \\ - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED] - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [ANNOUNCE] HTTP::GHTTP
Haha, I just got a patch for Sablotron; its parser wasn't working on the X3D XML stuff, remember? I haven't been able to get it to work, but he just resent the patch because it had some extra line feeds in it or something! He also stated that the next release should make it so Sablotron will use the real expat libs instead of the ones off their site! If you want it, I'll send it. I like AxKit, but our company is not about to migrate to it right now; our graphic designers seem to like the HTML::Template stuff more. I think it's because the XML is a little heady for the designers, and we programmers would have to massage all their work into place if we moved to an XML-based delivery system. The other thing is how different parsers and engines can parse the XML files differently or incorrectly: expat, SAX, or what have you all seem to work differently. Once the XML movement settles down into a stable realm of cross-conformance and compatibility, I think XML will take off like gangbusters! Sort of how Java was slow in the beginning and then it exploded. What I don't really want to see is what happened to the VRML/Web3D movement, where non-conformance to the standard actually set back the movement: no company could afford to make a browser, lots went bankrupt, and those that did release were non-conforming and non-compliant. No one was 'watchdogging' these issues, and what happened is that designers/programmers would have to take into account all these design issues for each plugin!! And at that time there were more VRML plugins than web browsers!! Very frustrating. In this case the open source movement has been saving VRML with FreeWRL and OpenVRML. All right, back to the grind. On Wed, 22 Nov 2000 18:37:22 +0000 (GMT), Matt Sergeant said: On Wed, 22 Nov 2000, clayton cottingham wrote: man that is one crazy module! in under ten minutes i had the thing running! kudos again to you matt! I'd be happy if it wasn't turning out to be more popular than AxKit! 
*sigh* :-) - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Apache::AuthenDBI request request
I need to access DBI table fields in the AuthenDBI/AuthzDBI database. I think a good way to do this would be to provide a new directive which would specify the table field values to be placed into the Apache notes. For example: Auth_DBI_note_field UID would add a key/value pair (field_name/value) into the Apache notes for each request accepted (not rejected) by Apache::AuthzDBI. If more than one field name directive were provided, each would then in turn have a notes key/value entry. In this way, a module downstream could retrieve the field/value pairs from the Apache notes to process its requirements. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
i need reload the scripts
I have some modules already written in my subdirectory, I have made changes to them, and I need to reload them. How do I reload the Perl scripts using mod_perl? Going through the documentation, it asked me to use PerlModule Apache::StatINC and PerlInitHandler Apache::StatINC, but I am not clear on this. Where do I have to add this? I am new to mod_perl. Can anyone help me figure this out? thanks - bari - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: i need reload the scripts
You can either restart the server, or add PerlPostReadRequestHandler Apache::StatINC to your httpd.conf file. I think even with StatINC you might have to restart your server once in a while - my own experience... - Sean On Wed, 22 Nov 2000, bari wrote: I have some modules already written in my subdirectory, I have made changes to them, and I need to reload them. How do I reload the Perl scripts using mod_perl? Going through the documentation, it asked me to use PerlModule Apache::StatINC and PerlInitHandler Apache::StatINC, but I am not clear on this. Where do I have to add this? I am new to mod_perl. Can anyone help me figure this out? thanks - bari - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
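To make the recipe bari quotes concrete: both directives go in the main body of httpd.conf (or in a file it Includes), and the server must be restarted once to pick them up; after that, changed modules are reloaded automatically.

```apache
# httpd.conf: reload any module in %INC whose file has changed on disk.
# Development use only: stat()ing every loaded module on every request
# is too slow for production.
PerlModule      Apache::StatINC
PerlInitHandler Apache::StatINC
```

Note StatINC only tracks modules loaded via %INC (use/require with a path findable in @INC); it does not help with Apache::Registry scripts themselves, which Registry already reloads on change.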
RE: i need reload the scripts
I don't have access to restart my server, so is there a dynamic way to reload scripts? I would really appreciate your help. thanks - bari -Original Message- From: Sean C. Brady [mailto:[EMAIL PROTECTED]] Sent: Wednesday, November 22, 2000 11:47 AM To: bari Cc: [EMAIL PROTECTED] Subject: Re: i need reload the scripts You can either restart the server, or add PerlPostReadRequestHandler Apache::StatINC to your httpd.conf file. I think even with StatINC you might have to restart your server once in a while - my own experience... - Sean - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: New Module Idea: MLDBM::Sync
Paul Lindner wrote: I'm puzzled why people wouldn't just use version 3 of Berkeley DB (via DB_File.pm or BerkeleyDB.pm) which supports multiple readers and writers through a shared memory cache. No open/close/flush required per-write and very very much faster. Is there a reason I'm missing? Might MLDBM::Sync work over an NFS mounted partition? That's one reason I've not used the BerkeleyDB stuff yet.. Kinda, but only in SDBM_File mode like Apache::ASP state. Kinda, because flock() doesn't work over NFS, and that other patch we worked with that called NFS locking didn't work when I load tested it. I've heard that a samba share might support file locking transparently, but have yet to test this. MLDBM::Sync uses a similar method that Apache::ASP::State does to keep data synced. In an NFS environment, whether data gets committed is a matter of chance of collision. --Joshua - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: Question
If you run the 2-apache model described in the guide (as you generally need on a busy site), you can use the locations set in ProxyPass directives to determine which requests are passed to the backend mod_perl apache and let the lightweight front end handle the others directly. Or you can use mod_rewrite to almost arbitrarily select which requests are run immediately by the front end or proxied through to the back end server. You don't have to make it visible to the outside by running the back end on a different address - it can be another port accessed only by the front end proxy. Les Mikesell [EMAIL PROTECTED] - Original Message - From: "Peiper,Richard" [EMAIL PROTECTED] To: "'Jonathan Tweed'" [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Wednesday, November 22, 2000 8:20 AM Subject: RE: Question How could they not? Since the files are executable by any process, then all processes must have the mod_perl code in it. You could if you really wanted to run 2 versions of Apache, one with mod_perl and one without. You could then call all CGI's through a different IP and then run mod_perl on that one only. This would reduce the sizes of your executables running in memory for Apache. Richard Web Engineer ProAct Technologies Corp. -Original Message- From: Jonathan Tweed [mailto:[EMAIL PROTECTED]] Sent: Wednesday, November 22, 2000 9:15 AM To: '[EMAIL PROTECTED]' Subject: Question Hi I would be grateful if someone could answer this question: Even if you tell Apache only to execute files in a certain directory under mod_perl do all processes still include the mod_perl code? Thanks Jonathan Tweed - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
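A minimal version of the front-end configuration Les describes might look like this (ports, paths, and the /perl/ prefix are illustrative, not from the thread):

```apache
# Lightweight front-end Apache on port 80. The mod_perl back end
# listens on 127.0.0.1:8001 and is never exposed to the outside.
ProxyPass        /perl/ http://127.0.0.1:8001/perl/
ProxyPassReverse /perl/ http://127.0.0.1:8001/perl/

# Everything else (images, static HTML) is served directly by the
# small front-end children, keeping the heavy mod_perl processes free.
```

ProxyPassReverse rewrites Location headers in redirects from the back end so clients never see the internal address.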
how do I restart the server
I have changed some of my scripts and I need to reload them; for that I need to restart the server. But the changes are in my subtree in the sandbox, so if I restart the server, is it going to be a problem for the other users? If not, is there any other way to reload my scripts? I have tried adding PerlModule Apache::StatINC and PerlInitHandler Apache::StatINC, but it didn't help, as I made changes to one of my .pm files. I need help. - bari - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
i am looking for mod perl tutor
Hi, I am looking for mod_perl tutors. I have pretty good knowledge of Perl but just a little about Apache modules. I need someone who could spare a day or two, with one or two hours a day, to teach me the basics. I live in Santa Clara. - bari - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [ libapreq ] desperately need .32 or .33
Matt Sergeant [EMAIL PROTECTED] writes: Doug mentioned to me at ApacheCon (or it may have been back at TPC) that he would like someone else to take over maintenance of Apache::Request. If nobody volunteers, I'm willing to look at doing so, although I've only just started down that long road into using XS, so I'm more likely to spend time applying patches than writing them. I'd be happy to help with the bulk of whatever needs fixing; however, I'm somewhat reluctant to volunteer to maintain such a critical package, since 1) I have no experience maintaining CPAN'd perl packages, 2) other than broken stuff, I wouldn't seek to change much code (especially *not* the API, but anything that reduces the memory footprint and/or increases performance would be of interest), and 3) my familiarity with XS and perlguts is cursory at best. But before anyone bites off more than they can chew, perhaps some discussion of the current bugs and future needs for libapreq should be aired out. My own problems with libapreq revolved around the multipart buffer code, and since I patched it a while back, I haven't bumped into any other snags. What are the other unresolved issues with libapreq? How much of the "undocumented" API (like $q->parms) is in use? Regards. -- Joe Schaefer [EMAIL PROTECTED] - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: trouble compiling mod_perl-1.24_01
Glad to help. Would anyone else be interested in a FreeBSD port to do this? At 02:23 PM 22/11/00 -0500, you wrote: Thanks! I finally did a dirty hack, but this sounds like a good way to fix it right. I'll have to do that as soon as I get a chance. -Jere peter brown wrote: Hi Jere, I am running FreeBSD 4.1.1 at work, and I have worked out the best way to do a static compile of mod_perl with Apache installed as a port. I followed Stas's mod_perl guide and changed it to work with FreeBSD. Steps as follows:
# cd /usr/ports/www/apache13
(edit the Makefile: add --activate-module=src/modules/perl/libperl.a in the CONFIGURE_ARGS section)
# make extract
# cd /mod_perl_src_directory/
# perl Makefile.PL \
    APACHE_SRC=/usr/ports/www/apache13/work/apache_1.3.14/src/ \
    NO_HTTPD=1 \
    USE_APACI=1 \
    PREP_HTTPD=1 \
    EVERYTHING=1
# make
# make test
# make install
# cd /usr/ports/www/apache13
# make
# make install
Works like a charm :) I am thinking of writing a port to do this and submitting it. Happy mod_perling :) At 12:07 AM 15/11/00 -0500, you wrote: I'm on a FreeBSD 3.4-RELEASE box and I've just built and tested Apache 1.3.14 from source. Then I try to build mod_perl with the following commands and get the errors below. The full text of the process can be seen at http://www.StageRigger.com/mod_perl-1.24_01.log Has anyone run into this? Suggestions? Thanks, -Jere Julian -- Jere C Julian, Network Designer/Manager, Rigger, Sound Engineer, http://www.StageRigger.com/ perl Makefile.PL APACHE_SRC=../apache_1.3.14/src DO_HTTPD=1 USE_APACI=1 make ... 
cp Server.pm ../blib/lib/Apache/Server.pm
mkdir ../blib/arch/auto/Apache/Server
mkdir ../blib/lib/auto/Apache/Server
cp Symbol.pm ../blib/lib/Apache/Symbol.pm
/usr/bin/perl -I/usr/local/lib/perl5/5.6.0/i386-freebsd -I/usr/local/lib/perl5/5.6.0 /usr/local/lib/perl5/5.6.0/ExtUtils/xsubpp -typemap /usr/local/lib/perl5/5.6.0/ExtUtils/typemap Symbol.xs > Symbol.xsc
mv Symbol.xsc Symbol.c
cc -c -I/usr/local/include -O -DVERSION=\"1.31\" -DXS_VERSION=\"1.31\" -DPIC -fpic -I/usr/local/lib/perl5/5.6.0/i386-freebsd/CORE Symbol.c
Symbol.xs: In function `XS_Apache__Symbol_cv_const_sv':
Symbol.xs:106: `na' undeclared (first use this function)
Symbol.xs:106: (Each undeclared identifier is reported only once
Symbol.xs:106: for each function it appears in.)
*** Error code 1
Stop.
*** Error code 1
Stop.
-- Jere C Julian, Network Designer/Manager, Rigger, Sound Engineer, http://www.StageRigger.com/~julianje/ - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Bug in Apache::test
It seems that when you ask it to scan for dynamic modules and produce the appropriate conf file, you can end up with something like this in there:
LoadModule setenvif_module"/usr/local/apache_mp"/libexec/mod_setenvif.so
The quotes cause a problem. Here's a patch against the latest CVS version.
--- test.pm.old Wed Nov 22 19:18:39 2000
+++ test.pm     Wed Nov 22 18:52:31 2000
@@ -148,6 +148,8 @@
     my @modules = grep /^\s*(Add|Load)Module/, @lines;
     my ($server_root) = (map /^\s*ServerRoot\s*(\S+)/, @lines);
+    $server_root =~ s/^"//;
+    $server_root =~ s/"$//;
 
     # Rewrite all modules to load from an absolute path.
     foreach (@modules) {
- To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Use Samba, not NFS [Re: New Module Idea: MLDBM::Sync]
Paul Lindner wrote: Might MLDBM::Sync work over an NFS mounted partition? That's one reason I've not used the BerkeleyDB stuff yet.. Paul, for the first time, I benchmarked concurrent Linux client write access over a Samba network share, and it worked, 0 data loss. This is as opposed to an NFS share accessed from Linux, which would see data loss due to lack of serialization of write requests. With MLDBM::Sync, I benchmarked 8 Linux clients writing to a Samba mount pointed at a WinNT PIII 450 over a 10Mbps network. For 8000 writes, I got: SDBM_File: 105 writes/sec; DB_File: 99 writes/sec [better than to local disk]. It seems the network was the bottleneck on this test, as neither client nor server CPU/disk was maxed out. The WinNT server was running at 20-25% CPU utilization during the test. As Apache::ASP $Session uses a method similar to MLDBM::Sync to flush i/o, you could then point StateDir to a Samba/CIFS share to cluster an ASP application well, with 0 data loss. My understanding is that you have a NetApp cluster which can export CIFS? I'd benchmark this heavily, obviously, to see if there are any NetApp cluster locking issues, but I'm guessing that you could likely get 200+ ASP requests per second on a 100Mbps network, which will likely far exceed your base application performance. -- Joshua _ Joshua Chamas Chamas Enterprises Inc. NodeWorks free web link monitoring Huntington Beach, CA USA http://www.nodeworks.com 1-714-625-4051 - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: [ libapreq ] desperately need .32 or .33
But before anyone bites off more than they can chew, perhaps some discussion of the current bugs and future needs for libapreq should be aired out. My own problems with libapreq revolved around the multipart buffer code, and since I patched it a while back, I haven't bumped into any other snags. I agree--the multipart buffer memory issues are the key problem with the current 'official' release. There are at least two multipart buffer fixes around though, so it would be nice to merge one of these into the CPAN distribution. - To unsubscribe, e-mail: [EMAIL PROTECTED] For additional commands, e-mail: [EMAIL PROTECTED]
Re: New Module Idea: MLDBM::Sync
On Wed, Nov 22, 2000 at 10:58:43AM +0000, Tim Bunce wrote:
> On Tue, Nov 21, 2000 at 03:00:01PM -0800, Perrin Harkins wrote:
> > On Fri, 17 Nov 2000, Joshua Chamas wrote:
> > > I'm working on a new module to be used for mod_perl style caching. I'm calling it MLDBM::Sync because it's a subclass of MLDBM that makes sure concurrent access is serialized with flock() and i/o flushing between reads and writes.
>
> I looked through the code and couldn't see how you are doing i/o flushing. This is more of an issue with Berkeley DB than SDBM, I think, since Berkeley DB will cache things in memory. Can you point me to it?
>
> I'm puzzled why people wouldn't just use version 3 of Berkeley DB (via DB_File.pm or BerkeleyDB.pm), which supports multiple readers and writers through a shared memory cache. No open/close/flush required per write, and very, very much faster. Is there a reason I'm missing?

Might MLDBM::Sync work over an NFS mounted partition? That's one reason I've not used the BerkeleyDB stuff yet..

-- Paul Lindner [EMAIL PROTECTED] Red Hat Inc.
Re: New Module Idea: MLDBM::Sync
On Wed, 22 Nov 2000, Tim Bunce wrote:
> I'm puzzled why people wouldn't just use version 3 of Berkeley DB (via DB_File.pm or BerkeleyDB.pm), which supports multiple readers and writers through a shared memory cache. No open/close/flush required per write, and very, very much faster. Is there a reason I'm missing?

There are a few. It's much harder to build than most CPAN modules, partly because of conflicts some people run into with the db library Red Hat provides. The documentation is pretty weak on how to use it with a shared memory environment. (You have to use BerkeleyDB.pm for this, incidentally; DB_File does not support it.)

We got past these problems and then ran into issues with db corruption. If Apache gets shut down with a SIGKILL (and this seems to happen fairly often when using mod_perl), the data can be corrupted in such a way that when you next try to open it, BerkeleyDB will just hang forever. Sleepycat says this is a known issue with using BerkeleyDB from Apache, and they don't have a solution for it yet. Even using their transaction mechanism does not prevent this problem. We tried lots of different things and finally reached what seems to be a solution by using database-level locks rather than page-level. We still get to open the database in ChildInit and keep it open, with all the speed benefits of the shared memory buffer.

It is definitely the fastest available way to share data between processes, but the problems we had have got me looking at other solutions again. If you do try it out, I'd be eager to hear what your experiences with it are.

- Perrin
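The database-level locking Perrin mentions corresponds to Berkeley DB's Concurrent Data Store (CDS) mode, selected with the DB_INIT_CDB environment flag. A minimal sketch of setting that up through BerkeleyDB.pm might look like the following; the filenames are invented, and the module is loaded at runtime so the sketch degrades gracefully on systems where BerkeleyDB.pm is not installed:

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# Load BerkeleyDB.pm at runtime rather than compile time, so this
# sketch can fall back cleanly where the module is absent.
my $have_bdb = eval { require BerkeleyDB; 1 };

my $out;
if ($have_bdb) {
    my $home = tempdir(CLEANUP => 1);
    # DB_INIT_CDB selects Concurrent Data Store: many readers, one
    # writer, one database-wide lock -- no page locks left behind
    # if a process is SIGKILLed mid-operation.
    my $env = BerkeleyDB::Env->new(
        -Home  => $home,
        -Flags => BerkeleyDB::DB_CREATE()
                | BerkeleyDB::DB_INIT_MPOOL()
                | BerkeleyDB::DB_INIT_CDB(),
    ) or die "env: $BerkeleyDB::Error";
    my $db = BerkeleyDB::Hash->new(
        -Filename => 'cache.db',
        -Env      => $env,
        -Flags    => BerkeleyDB::DB_CREATE(),
    ) or die "db: $BerkeleyDB::Error";
    $db->db_put('session', 'data');
    $db->db_get('session', my $val);
    $out = $val;
} else {
    $out = 'data';   # BerkeleyDB.pm not installed; skip the demo
}
print "$out\n";
```

CDS trades write concurrency for robustness: only one writer at a time, but a crashed process cannot strand fine-grained page locks the way the full locking subsystem can.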
Re: New Module Idea: MLDBM::Sync
On Wed, Nov 22, 2000 at 02:17:25PM +0300, Ruslan V. Sulakov wrote:
> Hi, Tim! I'd like to use BerkeleyDB! But have you tested it in a mod_perl environment?

Not yet, but I will be very soon. I'm sure others are using it.

> Maybe I wrote my scripts in the wrong fashion. I open $dbe and $db in mod_perl's startup.pl. Why do you think that no open/close/flush is required?

Not required *per write*. Open when the child is started and close when the child exits. (Probably best not to open in the parent. I haven't checked the docs yet.) No flush is needed, as the cache is shared and the last process to disconnect from it will flush it automatically.

> Each new Apache server generation (about once per 30 requests in my case) needs to run startup.pl. So how are changes synchronized between different Apache server processes when no flush is made? Or am I wrong? Or is synchronization between simultaneous BerkeleyDB objects done automatically through the DBEnvironment?

I believe so.

> I think this theme is very important to all developers!

Tim.
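Tim's per-child advice (open once when the child starts, close when it exits, no per-write flush) would look roughly like this in a mod_perl 1.x startup.pl. This is a hedged configuration sketch, not tested code: the package name, paths, and environment home are invented for illustration, and it assumes BerkeleyDB.pm is available:

```perl
# startup.pl -- one shared-cache handle per Apache child (sketch).
package My::Cache;
use strict;
use BerkeleyDB;

our ($env, $db);

# Runs once in each child after the fork, per Tim's advice to open
# in the child rather than the parent.
sub child_init {
    $env = BerkeleyDB::Env->new(
        -Home  => '/var/cache/bdb',        # invented path
        -Flags => DB_CREATE | DB_INIT_MPOOL | DB_INIT_CDB,
    ) or die "env: $BerkeleyDB::Error";
    $db = BerkeleyDB::Hash->new(
        -Filename => 'cache.db',
        -Env      => $env,
        -Flags    => DB_CREATE,
    ) or die "db: $BerkeleyDB::Error";
}

# Close when the child exits; the last process to disconnect
# flushes the shared memory cache, so no per-write flush is needed.
sub child_exit {
    undef $db;
    undef $env;
}

1;
```

Wired up in httpd.conf with the mod_perl 1.x child-lifecycle handlers:

    PerlRequire     /path/to/startup.pl
    PerlChildInitHandler My::Cache::child_init
    PerlChildExitHandler My::Cache::child_exit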