Caching DB queries amongst multiple httpd child processes
Does anyone have any experience in using IPC shared memory or similar in caching data amongst multiple httpd daemons? We run a largish database-dependent site, with a mysql daemon serving many hundreds of requests a minute. While we currently cache SQL query results on a per-process basis, it would be nice to share this ability across the server as a whole. I've played with IPC::Shareable and IPC::ShareLite, but both seem to be a little unreliable - unsurprising, as both modules are still under development. Our platform is a combination of FreeBSD and Solaris servers - speaking of which, has anyone taken this one step further and cached SQL results amongst multiple web servers? Thanks in advance, Peter Skipworth -- .-. | Peter Skipworth    Ph: 03 9897 1121 | | Senior Programmer   Mob: 0417 013 292 | | realestate.com.au   [EMAIL PROTECTED] | `-'
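[Editor's sketch, not production code: the raw shared-memory mechanism under discussion, using only Perl's built-in SysV IPC calls. The segment size, the fake serialized result, and the absence of locking are all simplifications; modules like IPC::ShareLite wrap exactly this plus locking and serialization.]

```perl
use strict;
use IPC::SysV qw(IPC_PRIVATE IPC_CREAT IPC_RMID);

# Create a shared segment; under Apache this would happen once in
# the parent so every child inherits the same $id after the fork.
my $size = 1024;    # arbitrary demo size
my $id = shmget(IPC_PRIVATE, $size, IPC_CREAT | 0600);
defined $id or die "shmget: $!";

# Cache a (fake) query result. Real code would serialize the rows
# with Storable and guard writers with a SysV semaphore.
my $result = "SELECT_RESULT:42";
shmwrite($id, $result, 0, length $result) or die "shmwrite: $!";

# Any process attached to the same id sees the cached value.
my $buf = '';
shmread($id, $buf, 0, $size) or die "shmread: $!";
$buf =~ s/\0+\z//;    # trim the segment's zero padding

print "$buf\n";       # prints "SELECT_RESULT:42"

shmctl($id, IPC_RMID, 0);    # remove the segment when done
```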
Re: mod_perl with Stronghold
Been running with it for almost three years now. What's the question? David M. Davisson [EMAIL PROTECTED] - Original Message - From: J. Horner [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Wednesday, February 02, 2000 7:02 AM Subject: mod_perl with Stronghold I searched the archives, but no mention of this. Does anyone have any experience with mod_perl and Stronghold? Thanks, Jon J. Horner [EMAIL PROTECTED] http://jjhorner.penguinpowered.com/ 10:00am up 8 days, 1:38, 6 users, load average: 0.04, 0.03, 0.00
Re: mod_perl with Stronghold
On Wed, 2 Feb 2000, J. Horner wrote: I searched the archives, but no mention of this. Does anyone have any experience with mod_perl and Stronghold? We run StrongHold with mod_perl/1.21 (and other stuff), without any problems. Read INSTALL.simple.stronghold to give you a start on how to build modperl for stronghold. Cheers, -- Sander van Zoest [EMAIL PROTECTED] High Geek, MP3.com, Inc.http://www.vanZoest.com/
Re: XML applications in mod_perl/apache xml project (fwd)
Hi, I'm embarking on writing a mod_perl handler that accepts XML posted from client apps/browsers and then does "stuff" with the received XML snippets. I would like to take advantage of some of the projects discussed at xml.apache.org (The Apache XML Project), but I'm not sure how they fit into the mod_perl framework. Any XML gurus on this list have any experiences, pointers, or suggestions for integrating xml.apache.org projects with a mod_perl enabled apache server? I'd like to avoid using the java parsers and jserv if possible, but it seems that some of the nicer features of Cocoon, etc. are only available in java (hence the "Cocoon is a 100% pure Java publishing" tag line :). The next major release of Embperl will step in that direction, providing functionality for doing things like Cocoon. Also I will not implement an XSL processor; I hope the Apache XML project provides its C++ implementation and they (or we) add a perl interface, so this will be usable in that context. I expect (but cannot promise) the first beta for this during March. Gerald
Re: XML applications in mod_perl/apache xml project (fwd)
i actually started writing a perl port of cocoon, but i only got a couple of classes done before it became obvious that we weren't actually going to use it at work. however, cocoon itself is pretty simple, and the project leaders are definitely interested in having a perl port. one of my biggest problems with it is that we don't have a full perl implementation of XSLT. also, because we don't have a standard perl web server interface like the servlet api, we have to couple the cocoon port very closely with the apache api. would probably be a fun project for someone with a little more time on their hands :) i can certainly pass on what little i've done. On Thu, 3 Feb 2000, Gerald Richter wrote: Hi, I'm embarking on writing a mod_perl handler that accepts XML posted from client apps/browsers and then does "stuff" with the received XML snippets. I would like to take advantage of some of the projects discussed at xml.apache.org (The Apache XML Project), but I'm not sure how they fit into the mod_perl framework. Any XML gurus on this list have any experiences, pointers, or suggestions for integrating xml.apache.org projects with a mod_perl enabled apache server? I'd like to avoid using the java parsers and jserv if possible, but it seems that some of the nicer features of Cocoon, etc. are only available in java (hence the "Cocoon is a 100% pure Java publishing" tag line :). The next major release of Embperl will step in that direction, providing functionality for doing things like Cocoon. Also I will not implement an XSL processor; I hope the Apache XML project provides its C++ implementation and they (or we) add a perl interface, so this will be usable in that context. I expect (but cannot promise) the first beta for this during March. Gerald
Re: lookup_uri and access checks
Hi there, On Wed, 2 Feb 2000, Paul J. Lucas wrote: I have code that contains the line: $r->lookup_uri( $r->param( 'pm_uri' ) )->filename; [snip] However, if I have an access restriction that forbids access to files ending in a .pm extension and the URI maps to such a filename, then I get a "client denied by server configuration" message in the error log. Are you checking the status of the subrequest? (Eagle book, p453).

    my $subr   = $r->lookup_uri( $inaccessible_file );
    my $status = $subr->status;
    unless ( $status == HTTP_OK ) {
        dont_access_the_file();
    }

73, Ged.
Re: Using network appliance Filer with modperl
Hi, We've been running a modperl environment 'on' a NetApp since Dec 1997 and wouldn't even dare to think about going back ;). We've found no gotchas. If you can afford it, I can really recommend it. The way we use it is that we store all configs, libraries and sites on the netapp. As a front-end we have 'cheap' PCs running Linux. The disks in the PCs are only used for the OS and temporary storage of logs, etc. Ronald Tim Bunce wrote: On Mon, Jan 31, 2000 at 11:16:23AM -0800, siberian wrote: Hi All- I am building a pretty in-depth architecture for our new service using ModPerl. I've done a lot of large scale/high traffic apps in modperl before but never in conjunction with a network attached file server. I am thinking that it would really make my life easy to have one central repository of code, databases and sundry files that all the servers share ( making it easy to swap out servers, add servers etc since its a central file repository that everyone just hooks into ). My question is: Has anyone experienced any 'gotchas' in putting perl code that modperl handlers use on a network attached file server like a network appliance box ( www.netapp.com )? I am assuming that there are no real issues but before I go blow a ton of cash on this thing I wanted to be sure that no one had found a problem. And, just to be balanced, has anyone _not_ found any 'gotchas' and is enjoying life with a netapp or similar NFS file serving appliance? Tim. -- Ronald F. Lens    Tel +31 (0)20 600 5700 xxLINK Internet Services    Fax +31 (0)20 600 1825
Re: Embperl/mod_perl name space problems?
The server seems to be putting the 'included' Embperl files into the same namespace. As long as you don't force Embperl to do otherwise, every Embperl document will run in its own namespace. Each page works fine if you hup the server; going from one section of the site to another results in the top nav bar being incorrectly displayed. That's if we use Apache::Registry or Apache::PerlRun. If I use Apache::PerlRun and set PerlRunOnce, everything works fine. While this gives us a decent speed increase from preloading the modules, I'd like to not have to use PerlRunOnce, and ideally be able to use Apache::Registry. In a typical template, there's Embperl code like this: [* Execute($base.'/'.$top_nav,$app); *] Try to use [- -] instead of [* *]. There are still some problems with [* *] blocks. With that change your setup should run just fine. Gerald
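[Editor's note: if I read Gerald's suggestion right, the template line quoted above would become the following — a sketch using the same $base/$top_nav/$app variables from the original template:]

```
[- Execute($base.'/'.$top_nav, $app); -]
```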
Re: RFD: comp.infosystems.www.modperl
The following message is a courtesy copy of an article that has been posted to news.groups as well. "jiminy" == jiminy [EMAIL PROTECTED] writes: jiminy Mod_perl is becoming a widely used method for developing jiminy dynamic web pages, in many ways superseding CGI. While jiminy discussion of it appears in a scattered fashion in CGI, Perl, jiminy database, server, and other newsgroups, there is no unified jiminy place to pose questions and hold discussion about mod_perl jiminy where mod_perl is the proper topic of the forum. Such a forum jiminy would benefit the community of web developers who are using jiminy mod_perl. The mod_perl mailing list (at [EMAIL PROTECTED]) does not yet have excessive traffic. Perhaps it is a bit hard to find, but the statement "there is no unified place to post questions and hold discussion about mod_perl where mod_perl is the proper topic of the forum" is a blatant lie, and therefore an insufficient precondition. If this is your only motivation to create a newsgroup, I would vote no. When the mailing list traffic gets excessive, it's time to start a newsgroup. That has not yet happened. In fact, the creation of a newsgroup would cause the *dilution* of expert eyeball time on a given issue, and would therefore be *detrimental* to the mod_perl community, because some experts would choose to read only the mailing list, and others would choose to read only the newsgroup. No. Bad idea at this point. -- Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095 [EMAIL PROTECTED] URL:http://www.stonehenge.com/merlyn/ Perl/Unix/security consulting, Technical writing, Comedy, etc. etc. See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!
Re: Caching DB queries amongst multiple httpd child processes
Peter Skipworth wrote: Does anyone have any experience in using IPC shared memory or similar in caching data amongst multiple httpd daemons? We run a largish database-dependent site, with a mysql daemon serving many hundreds of requests a minute. While we are currently caching SQL query results on a per-process basis, it would be nice to share this ability across the server as a whole. I've played with IPC::Shareable and IPC::ShareLite, but both seem to be a little unreliable - unsurprising as both modules are currently still under development. Our platform is a combination of FreeBSD and Solaris servers - speaking of which, has anyone taken this one step further and cached SQL results amongst multiple web servers? I've written my own solution to this, called Tie::MmapCache, which implements an LRU cache of data in a memory-mapped file, which can be shared by an arbitrary number of processes. Unfortunately, it's not publicly available yet, but I hope to get it released soon. -- Peter Haworth [EMAIL PROTECTED] "The warts in a language tend to be more orthogonal than the features" -- Larry Wall, at the Perl Conference 2.0
Undefined of PL_siggv in mod_perl.c
Hi, My environment is apache 1.3.9, mod_perl-1.21, perl5.005_63. When I do make to compile mod_perl-1.21 from the src directory, I get an undefined identifier PL_siggv error in the mod_perl.c module. Is anyone working with the development version of perl5, perl5.005_63? Does anyone know how to resolve the undefined symbol? Thanks for the responses, Ignasi Roca.
RE: Undefined of PL_siggv in mod_perl.c
Hi, My environment is apache 1.3.9, mod_perl-1.21, perl5.005_63. When I do make to compile mod_perl-1.21 from the src directory, I get an undefined identifier PL_siggv error in the mod_perl.c module. Is anyone working with the development version of perl5, perl5.005_63? Does anyone know how to resolve the undefined symbol? You need to grab the CVS version of mod_perl if you're using perl5.005_63 or anything > 5.00503. Grab a snapshot from perl.apache.org/from-cvs -- Eric
Re: Using network appliance Filer with modperl
On Thu, Feb 03, 2000 at 01:01:43PM +0100, R. F. Lens wrote: Hi, We've been running a modperl environment 'on' a NetApp since Dec 1997 and wouldn't even dare to think about going back ;). We've found no gotchas. If you can afford it, I can really recommend it. The way we use it is that we store all configs, libraries and sites on the netapp. As a front-end we have 'cheap' PCs running Linux. The disks in the PCs are only used for the OS and temporary storage of logs, etc. What level of web traffic are you handling 'from' the netapp? E.g., how much traffic to the netapp is there when your web site is getting peak traffic? Tim.
Re: mod_perl with Stronghold
"JH" == J Horner [EMAIL PROTECTED] writes: JH I searched the archives, but no mention of this. Does anyone have any JH experience with mod_perl and Stronghold? It works just fine like any other apache module. mod_perl has stronghold awareness for the configuration process. just be sure you can properly re-build stronghold without mod_perl first so you know whom to blame if it fails to build ;-) -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Vivek Khera, Ph.D.Khera Communications, Inc. Internet: [EMAIL PROTECTED] Rockville, MD +1-301-545-6996 PGP MIME spoken herehttp://www.kciLink.com/home/khera/
Re: cross site scripting security issue headsup
At 05:32 PM 02/02/00 -0700, Marc Slemko wrote: I thought about not putting the mod_perl specific one in there at all (ie. just the CGI.pm one, BTW about the CGI.pm example:

    use CGI ();
    $Text = "foo<b>bar";
    $URL  = "foo<b>bar.html";
    print CGI::escapeHTML($Text), "<BR>";

Sorry for being off topic, but FYI from Bugtraq about Oct 5, 1999, titled "Time to update those CGIs again": "Seems that at least some Unix versions of Netscape treat characters 0x8b and 0x9b (NOT the strings "0x8b" and "0x9b" but the characters with these ascii values) just like < and > respectively..." I never tested it, but others on Bugtraq did confirm the problem on unix versions of Netscape. Bill Moseley mailto:[EMAIL PROTECTED]
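[Editor's sketch to illustrate the Bugtraq point: `escape_html` is a made-up helper, not CGI.pm's own routine. It escapes the usual HTML metacharacters plus the two bytes the post warns about, turning them into harmless numeric entities.]

```perl
use strict;

# Hypothetical helper: like CGI::escapeHTML, but also turns the
# 0x8b/0x9b bytes into numeric character references so a broken
# browser cannot treat them as '<' and '>'.
sub escape_html {
    my $s = shift;
    my %ent = ('&' => '&amp;', '<' => '&lt;',
               '>' => '&gt;', '"' => '&quot;');
    $s =~ s/([&<>"])/$ent{$1}/g;
    $s =~ s/([\x8b\x9b])/sprintf('&#%d;', ord $1)/ge;
    return $s;
}

print escape_html("foo<b>bar\x8b"), "\n";   # foo&lt;b&gt;bar&#139;
```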
Bug in mod_perl makefile?
Hi, I've just got my apache/modperl setup to work. This little nasty took me 2 days to find. In my /usr/src directory, I had:

    - apache_1.3.3 [dir]
    - apache_1.3.9 [dir]
    - mod_perl-1.21 [dir]

and was compiling modperl/perl with:

    # perl Makefile.PL APACHE_SRC=/usr/src/apache_1.3.9 DO_HTTPD=1 USE_APACI=1 EVERYTHING=1 PERL_MARK_WHERE=1 APACI_ARGS=--enable-module=all
    # make
    # make test
    # make install

and then a make install from apache_1.3.9. This setup was consistently failing, or rather ... - if I added --enable-shared=max, apache compiled and started unless a call to mod_perl was made. If I tried to load Embperl in httpd.conf, httpd would not start, claiming that Apache::Constants was not installed/found/whatever. - I upgraded perl, recompiled perl + all the perl modules I had + mod_perl + apache again. Several recompiles and shrieks. Now, today, with a fresh head, I noticed a little message when making mod_perl, which said it was 'getting into ../apache_1.3.3'. So I renamed the directory to old.apache_1.3.3 and recompiled. Voilà! So either my APACHE_SRC argument was wrong (I swear I read it in Stas' Guide) or the Makefile was ignoring it... Sadly, I'm going on holiday tonight, so I can't [and won't] stay here to dissect Makefile.PL, but I thought it was nice to tell the list. Now if you find it was my mistake, one way or another, then please forgive me for saying the bug was yours [whoever wrote the makefile] when it was mine. ml -- -- To understand recursion, one must first understand recursion. -- -- - Martin Langhoff @ S C I M Multimedia Technology - - http://www.scim.net | God is real until - - mailto:[EMAIL PROTECTED] | declared integer -
Re: Caching DB queries amongst multiple httpd child processes
On Thu, 3 Feb 2000, Peter Skipworth wrote: Does anyone have any experience in using IPC shared memory or similar in caching data amongst multiple httpd daemons? Well, I just released IPC::Cache version 0.02 to CPAN. Two major caveats, however -- first, it is not currently in production on our site (www.eziba.com) and won't be for another week or two. Second, it relies on IPC::ShareLite as a backend. However, it does offer a very straightforward interface, and attempts to manage the memory segments and clean up after them. We are running Linux 2.2 on the front end, btw, and that may account for why ShareLite seems rather reliable for us. I'd love it if you could download it and let me know if it works for you. We are planning on maintaining this, and would appreciate any feedback you can give about how to improve it. You can get IPC::Cache here: http://www.cpan.org/modules/by-module/IPC/IPC-Cache-0.02.tar.gz -DeWitt
Re: Using network appliance Filer with modperl
Hi, if I can step in here... ;-) At 15:11 2/3/00 +, Tim Bunce wrote: As a front-end we have 'cheap' PCs running Linux. The disks in the PCs are only used for the OS and temporary storage of logs, etc. What level of web traffic are you handling 'from' the netapp? E.g., how much traffic to the netapp is there when your web site is getting peak traffic? As we put the maximum of RAM in our Linux boxes, in most cases we don't notice anything in the NetApp traffic when a site gets hit badly. For example, we host one of the Dutch national newspapers (http://www.nrc.nl) that way: because they come out with a daily edition around 4pm local time, traffic varies from about 300 Kbit/sec during the day to about 2 Mbit/sec around the time the new update becomes available. However, we can't see anything special in the NetApp traffic graph at that time: it is all being served from the front-end server RAM. Since PC RAM is cheap, we can get a lot of mileage out of our NetApp. If we look at the total graph of NetApp traffic development over the past two years, that graph has only risen about 25% from the original average traffic. However, our web traffic has quadrupled over that period, and the number of front-end servers is now about 20 instead of the original 3. And the size of the NetApp has grown from 10 Gbyte to now about 45 Gbyte of diskspace. So I guess I would argue that maximum (relatively cheap) RAM in your front-end servers is much more important than the maximum NetApp bandwidth... Elizabeth Mattijsen Tel: 020-6005700    Nieuwezijds Voorburgwal 68-70 Fax: 020-6001825    1012 SE AMSTERDAM Voor ernstige technische storingen zijn we buiten kantooruren bereikbaar: 06-29500176 of zie onze website. -- Web Ontwikkeling | Web Hosting | Web Onderhoud | Web Koppeling -- xxLINK, an Integra-Net company
Re: RFD: comp.infosystems.www.modperl
--- Randal L. Schwartz wrote: The mod-perl mailing list (at [EMAIL PROTECTED]) does not yet have excessive traffic. Perhaps it is a bit hard to find --- end of quote --- You're too kind. It's right there on the homepage. Maybe it deserves a link in the unordered list near the top, though. I too fear dilution. A good list server for such a narrow topic is much more useful, to me at least. Everyone over to news.groups... -Bill
New Bloke question - modifying canned footer (Eagle book)
Hi there, I've got the canned footer (Eagle book chapter 4) up and running, and it edits html files on the local host fine using the <Files ~ "\.html$"> directive. I was wondering how to implement the footer so that it appeared on all html files (i.e. external files), and not just local html files. Thanks for any help... Cheers the noo, Ollie
Re: New Bloke question - modifying canned footer (Eagle book)
"OHI1" == Oliver Holmes ITS 1999 [EMAIL PROTECTED] writes: OHI1 Hi there, OHI1 I've got the canned footer (Eagle book chapter 4) up and running, and it edits html files on the local host fine using the Files ~ "\.html$"... directive. OHI1 I was wondering how to implement the footer so that it appeared on all html files (ie. external files), and not just local html files. What's an "external file"? You can only alter that which you control, namely stuff served by you. If you're getting a page from a remote site, then your server is not involved so it can't do anything, can it?
Re: lookup_uri and access checks
On Thu, 3 Feb 2000, G.W. Haywood wrote: On Wed, 2 Feb 2000, Paul J. Lucas wrote: I have code that contains the line: $r->lookup_uri( $r->param( 'pm_uri' ) )->filename; [snip] However, if I have an access restriction that forbids access to files ending in a .pm extension and the URI maps to such a filename, then I get a "client denied by server configuration" message in the error log. Are you checking the status of the subrequest? (Eagle book, p453). No. Intentionally. As I wrote, I don't care what Apache says about the accessibility of the file. I *will* read it. All I want to do is suppress the "client denied by server configuration" messages in the log. - Paul
Re: Caching DB queries amongst multiple httpd child processes
At 03:33 PM 2/3/00 +1100, Peter Skipworth wrote: Does anyone have any experience in using IPC shared memory or similar in caching data amongst multiple httpd daemons? We run a largish database-dependent site, with a mysql daemon serving many hundreds of requests a minute. While we are currently caching SQL query results on a per-process basis, it would be nice to share this ability across the server as a whole. I've played with IPC::Shareable and IPC::ShareLite, but both seem to be a little unreliable - unsurprising as both modules are currently still under development. Our platform is a combination of FreeBSD and Solaris servers - speaking of which, has anyone taken this one step further and cached SQL results amongst multiple web servers? We looked at this, as we have a busy multiple web server environment and are planning to use Apache::Session + Mysql to manage session state. Although per-host caching in shared memory or whatever seemed desirable on paper, the complexities of ensuring that cache entries are not invalid due to an update on another server are major. When we set up a testbed to benchmark Mysql for this project, the time taken to retrieve or update a session state record across the network over an established connection to our Mysql host (333 MHz Sparc Ultra 5/Solaris 2.6 with lots of memory) was so small (5-7 ms, including LOCK/UNLOCK TABLE commands where needed) that we didn't pursue per-host caches any further. Clearly, YMMV depending on the hardware you have available. - Simon Thanks in advance, Peter Skipworth -- .-. | Peter Skipworth    Ph: 03 9897 1121 | | Senior Programmer   Mob: 0417 013 292 | | realestate.com.au   [EMAIL PROTECTED] | `-' - Simon Rosenthal ([EMAIL PROTECTED]) Web Systems Architect Northern Light Technology 222 Third Street, Cambridge MA 02142 Phone: (617)577-2796 : URL: http://www.northernlight.com "Northern Light - Just what you've been searching for"
Re: RFD: comp.infosystems.www.modperl
From: [EMAIL PROTECTED] (Randal L. Schwartz) Date: 03 Feb 2000 06:19:54 -0800 Subject: Re: RFD: comp.infosystems.www.modperl If this is your only motivation to create a news group, I would vote no. When the mailing list traffic gets excessive, it's time to start a newsgroup. That has not yet happened. In fact, the creation of a newsgroup would cause the *dilution* of expert eyeball time on a given issue, and would therefore be *detrimental* to the mod_perl community, because some experts would choose to read only the mailing list, and others would choose to read only the newsgroup. I have to wholeheartedly agree. I couldn't, at first glance, find the mod_perl mailing list; but once I did, the posters here proved very helpful. I believe that once more people see the mailing list and start using it, you will need to start a newsgroup - however, that doesn't mean that the 'experts' will frequent the newsgroup more than a mailing list - people tend to start flame wars more in a usenet setting... Bill Jones * Systems Programmer * http://www.fccj.org/cgi/mail?sneex (' Running - //\ Perl, Apache, MySQL, PHP3, v_/_ Ultra 10, LinuxPPC, BeOS...
Re: lookup_uri and access checks
Hi there, On Thu, 3 Feb 2000, Paul J. Lucas wrote: Are you checking the status of the subrequest? No. Intentionally. As I wrote, I don't care what Apache says about the accessibility of the file. I *will* read it. All I want to do is suppress the "client denied by server configuration" messages in the log. I suppose you could take over the logging process entirely (e.g. Eagle Book pp 360-367, p669, CPAN, and, if you're into C, p684). Then you could do what you like with the error messages, including throwing them away on a selective (hopefully a very selective) basis. But I'd urge caution in throwing away any log information, whatever it is, before a sentient being has had a look at it. Hope this helps. 73, Ged.
Re: Why I think mod_ssl should be in front-end
"TM" == Tom Mornini [EMAIL PROTECTED] writes: TM 2) Better scalability. I've head (but never benchmarked) that SSL in TMgeneral is 100 times more processor intensive than non-ssl connections. TMI want my mod_perl server running mod_perl, not mod_ssl! In a TMhigh-volume site you're going to have lots of front-end machines TMunderworked anyway, so why not let them do some SSL calculations? If you have a high volume site that uses SSL, you should really be offloading the SSL processing to dedicated cryptography hardware. -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Vivek Khera, Ph.D.Khera Communications, Inc. Internet: [EMAIL PROTECTED] Rockville, MD +1-301-545-6996 PGP MIME spoken herehttp://www.kciLink.com/home/khera/
Re: Using network appliance Filer with modperl
On Thu, 3 Feb 2000, Oleg Bartunov wrote: [snipped] Stas, are you sure DESTROY executed when children died ? I'm using ApacheDBI and have problem with DESTROY when I use finish or disconnect methods. Of course! This works for me: die.pl use MyRun; print "Content-type: text/plain\n\n"; print "hi\n"; my $obj = new MyRun; die "dying..."; print "End of program\n"; MyRun.pm package MyRun; sub new{ return bless {}, shift;} DESTROY{ print STDERR "destructor was called\n";} 1; Such simple DESTROY works for me also. The disconnect method shouldn't work as it gets overriden by Apache::DBI with NOP. However finish() is supposed to work. I'm not sure it works: sub disconnect { my $self = shift; $self-sth_finish; warn "STH finished...\n"; $self-{dbh}-disconnect; warn "DB disconnected...\n"; } sub sth_finish { my $self = shift; foreach my $sth (keys %query) { $self-{$sth}-finish; } } sub DESTROY { my $self = shift; $self-disconnect; print STDERR "DB requests:PID:",$$,':', $self-total_db_requests(),"\n"; } I never get debug messages in error log. But If I comment all calling of methods I got what I expected. sub disconnect { my $self = shift; # $self-sth_finish; warn "STH finished...\n"; # $self-{dbh}-disconnect; warn "DB disconnected...\n"; } sub DESTROY { my $self = shift; $self-disconnect; print STDERR "DB requests:PID:",$$,':', $self-total_db_requests(),"\n"; } This behaivour doesn't depends whether or not I use ApacheDBI So it's DBI. Did you try to add debug messages to DBI module? in finish and disconnect? Actually in the appropriate DBD module of your db. What are the results? Did you try to ask at the dbi-users mailing list? Please summarize back to list or me, if you have solved it. This is an important issue that has to be documented. Thanks! It seems DESTROY doesnt' executed when apache+mod_perl chidlren get killed :-) It depends on how do you kill it. kill -9 is untrappable so you cannot do a thing about it. 
If it's any other kill signal you can trap it with %SIG and call the cleanup code. Does it help? ___ Stas Bekman    mailto:[EMAIL PROTECTED]    http://www.stason.org/stas Perl,CGI,Apache,Linux,Web,Java,PC    http://www.stason.org/stas/TULARC perl.apache.org    modperl.sourcegarden.org    perlmonth.com    perl.org single o- + single o-+ = singlesheaven    http://www.singlesheaven.com
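[Editor's sketch of the %SIG idea above: the handler body and the $cleaned_up flag are placeholders. In a real mod_perl child the handler would disconnect database handles, flush caches, and then exit; here it just records that cleanup ran, so the script can show the handler fired.]

```perl
use strict;

my $cleaned_up = 0;

# Catchable signals (TERM, USR1, ...) can run cleanup code first;
# SIGKILL (kill -9) never reaches a handler.
$SIG{TERM} = sub {
    # e.g. $dbh->disconnect, flush caches, then exit, in real code
    $cleaned_up = 1;
    print STDERR "cleanup done\n";
};

kill 'TERM', $$;    # pretend someone killed this child politely
print "cleaned_up=$cleaned_up\n";
```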
Re: Why I think mod_ssl should be in front-end
On Thu, 3 Feb 2000, Vivek Khera wrote: "TM" == Tom Mornini [EMAIL PROTECTED] writes: TM 2) Better scalability. I've heard (but never benchmarked) that SSL in TM general is 100 times more processor intensive than non-ssl connections. TM I want my mod_perl server running mod_perl, not mod_ssl! In a TM high-volume site you're going to have lots of front-end machines TM underworked anyway, so why not let them do some SSL calculations? If you have a high volume site that uses SSL, you should really be offloading the SSL processing to dedicated cryptography hardware. A fairly new option, I believe, and an excellent point. That said, I think that most sites are going to pull that trigger when it becomes necessary. Putting it up-front in mod_ssl is the best option until that level of traffic demands the new hardware... -- -- Tom Mornini -- InfoMania Printing and Prepress
Re: Bug in mod_perl makefile?
On Thu, 3 Feb 2000, Martin A. Langhoff wrote: Hi, I've just got my apache/modperl setup to work. This little nasty took me 2 days to find. In my /usr/src directory, I had:

    - apache_1.3.3 [dir]
    - apache_1.3.9 [dir]
    - mod_perl-1.21 [dir]

and was compiling modperl/perl with:

    # perl Makefile.PL APACHE_SRC=/usr/src/apache_1.3.9 DO_HTTPD=1 USE_APACI=1 EVERYTHING=1 PERL_MARK_WHERE=1 APACI_ARGS=--enable-module=all
    # make
    # make test
    # make install

and then a make install from apache_1.3.9. This setup was consistently failing, or rather ... - if I added --enable-shared=max, apache compiled and started unless a call to mod_perl was made. If I tried to load Embperl in httpd.conf, httpd would not start, claiming that Apache::Constants was not installed/found/whatever. - I upgraded perl, recompiled perl + all the perl modules I had + mod_perl + apache again. Several recompiles and shrieks. Now, today, with a fresh head, I noticed a little message when making mod_perl, which said it was 'getting into ../apache_1.3.3'. So I renamed the directory to old.apache_1.3.3 and recompiled. Voilà! So either my APACHE_SRC argument was wrong (I swear I read it in Stas' Guide) or the Makefile was ignoring it... Seems to me like you forgot to make clean before rebuilding stuff. A few folks reported in the past that starting from a clean tar solved their problems. As for APACHE_SRC, the scenario you have described can only happen if you provided an invalid APACHE_SRC parameter (it should point at Apache's src subdirectory, e.g. APACHE_SRC=../apache_1.3.9/src).
Here is the relevant snippet from Makefile.PL:

    for $src_dir ($APACHE_SRC, <../apache*/src>, <../stronghold*/src>,
                  </usr/local/stronghold*/src>, "../src", "./src") {
        next unless -d $src_dir;
        next if $seen{$src_dir}++;
        next unless $vers = httpd_version($src_dir);
        unless (exists $vers_map{$vers}) {
            print STDERR "Apache version '$vers' unsupported\n";
            next;
        }
        $mft_map{$src_dir} = $vers_map{$vers};
        #print STDERR "$src_dir -> $vers_map{$vers}\n";
        push @adirs, $src_dir;
        $modified{$src_dir} = (stat($src_dir))[9];
        last if $DO_HTTPD;
    }

If $APACHE_SRC is defined correctly relative to the mod_perl source tree, the 'for' loop goes through only once. Now if you find it was my mistake, one way or another, then please forgive me for saying the bug was yours [whoever wrote the makefile] when it was mine. The bad thing is that you report a potential problem but don't stay with us to follow up. So I thought twice before replying to your email, since I feel I'm talking to the wall until you come back from your vacation... Enjoy the vacation :) ___ Stas Bekman    mailto:[EMAIL PROTECTED]    http://www.stason.org/stas Perl,CGI,Apache,Linux,Web,Java,PC    http://www.stason.org/stas/TULARC perl.apache.org    modperl.sourcegarden.org    perlmonth.com    perl.org single o- + single o-+ = singlesheaven    http://www.singlesheaven.com
Re: Why I think mod_ssl should be in front-end
"TM" == Tom Mornini [EMAIL PROTECTED] writes: If you have a high volume site that uses SSL, you should really be offloading the SSL processing to dedicated cryptography hardware. TM A fairly new option, I believe, and an excellent point. Not really. I saw these boards available at least 2 years ago, which is about half the age of the Web ;-) TM That said, I think that most sites are going to pull that trigger when it TM becomes necessary. Putting it up-front in mod_ssl is the best option until TM that level of traffic demands the new hardware... Indeed. -- =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Vivek Khera, Ph.D.Khera Communications, Inc. Internet: [EMAIL PROTECTED] Rockville, MD +1-301-545-6996 PGP MIME spoken herehttp://www.kciLink.com/home/khera/
benchmarking mod_perl
I'm sorry if this has been asked, but I haven't seen the answer. After you wonderful people pointed me in the right direction for making mod_perl with Stronghold, I'm now the proud parent of a mod_perl enabled stronghold server. How do I let my new baby stretch her wings? What good benchmarks are available? Jon J. Horner [EMAIL PROTECTED] http://jjhorner.penguinpowered.com/ 1:30pm up 9 days, 4:20, 3 users, load average: 0.00, 0.00, 0.00
Re: RFD: comp.infosystems.www.modperl
On 2/3/00 1:17 PM, Bill Jones wrote: however, that doesn't mean that the 'experts' will frequent the newsgroup more than a mailing list - people tend to start flame wars more in a usenet setting... OTOH, it's a lot easier to track and respond to particular issues/problems in a threaded setting like Usenet. On the mailing list, only one or two problems seem to be "active" at any one time. Perhaps not now, but eventually it'd be beneficial to have a newsgroup. I know Usenet is the first place I looked for mod_perl help before finding this list (and after finding, buying, and reading the Eagle book ;) -John
Re: memory leaks redux
On Sat Jan 29 13:11:25 2000 + Mike Whitaker wrote: [EMAIL PROTECTED] (Doug MacEachern) wrote: there are hints in the SUPPORT doc on how to debug such problems. there were also several "Hanging process" threads in the past weeks with more tips, search in the archives for keywords gdb, .gdbinit, curinfo if you can get more insight from those tips, we can help more. I have also seen (and reported via the Debian bugs system) a problem which I think has been observed before where HUP'ing the root httpd causes it to reload every darn PerlModule directive, and bloat accordingly (with our server, that's 3M per SIGHUP, which makes log rotation somewhat painful (when you get 3 million hits in 8 hours, you rotate those logs pretty fast!)). You can log to a pipe. -- Ričardas Čepas ~~~ ~~ PGP signature
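The log-to-a-pipe suggestion can be sketched as an httpd.conf fragment; the paths and rotation interval below are illustrative, using the rotatelogs helper that ships with Apache:

```apache
# Send the access log through a pipe so logs rotate without
# HUP'ing the server (and so without re-running every
# PerlModule directive). Paths here are examples.
TransferLog "|/usr/local/apache/bin/rotatelogs /var/log/httpd/access_log 86400"
```

With this in place, rotatelogs starts a new log file every 86400 seconds and the parent httpd never needs a SIGHUP just for log rotation.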
Re: RFD: comp.infosystems.www.modperl
"John" == John Siracusa [EMAIL PROTECTED] writes: John OTOH, it's a lot easier to track and respond to particular John issues/problems in a threaded setting like Usenet. Hmm. Get a threaded mailreader. It certainly changed my mail reading. :) -- Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095 [EMAIL PROTECTED] URL:http://www.stonehenge.com/merlyn/ Perl/Unix/security consulting, Technical writing, Comedy, etc. etc. See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!
Re: benchmarking mod_perl
"J. Horner" wrote: I'm sorry if this has been asked, but I haven't seen the answer. After you wonderful people pointed me in the right direction for making mod_perl with Stronghold, I'm now the proud parent of a mod_perl enabled stronghold server. How to I let my new baby stretch her wings? What good benchmarks are available? I regrouped the HelloWorld benchmark data at http://www.chamas.com/bench/ recently, so that it gives you a more informative comparison of the relative startup costs of web environments on a per platform basis. -- Joshua _ Joshua Chamas Chamas Enterprises Inc. NodeWorks free web link monitoring Huntington Beach, CA USA http://www.nodeworks.com1-714-625-4051
RE: Why I think mod_ssl should be in front-end
On 3. februar 2000 19:49 Tom Mornini wrote: 2) Better scalability. I've heard (but never benchmarked) that SSL in general is 100 times more processor intensive than non-ssl connections. That would have to be if you didn't cache session keys and had to set up a new symmetric key for every single connection. If you use the shared memory session caching mechanism that mod_ssl supports, then the overhead is actually rather small except for the initial connection. I want my mod_perl server running mod_perl, not mod_ssl! In a high-volume site you're going to have lots of front-end machines underworked anyway, so why not let them do some SSL calculations? That's certainly possible, but then on the other hand, why even bother to run a full-scale mod_ssl in front? You might as well just choose a small tunnel/port forwarding app like Stunnel or one of the many others mentioned at http://www.openssl.org/related/apps.html. vh Mads Toftum, QDPH -- System Designer / Developer Tele Danmark Nøglecenter - http://www.certifikat.dk/ email: [EMAIL PROTECTED] / [EMAIL PROTECTED]
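The shared-memory session cache Mads refers to is enabled with a couple of mod_ssl directives; the cache path and sizes below are illustrative:

```apache
# Cache SSL session keys in shared memory, so only a client's
# first connection pays the full asymmetric-handshake cost;
# subsequent connections resume the cached session.
SSLSessionCache        shm:/var/run/ssl_scache(512000)
SSLSessionCacheTimeout 300
```

The timeout bounds how long (in seconds) a resumed session stays valid; tune both numbers to your traffic.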
Re: RFD: comp.infosystems.www.modperl
On Thu, 3 Feb 2000, Kent Perrier wrote: "Randal L. Schwartz" wrote: "John" == John Siracusa [EMAIL PROTECTED] writes: John OTOH, it's a lot easier to track and respond to particular John issues/problems in a threaded setting like Usenet. Hmm. Get a threaded mailreader. It certainly changed my mail reading. :) Unfortunately, it appears that many of the MUAs that people use do not provide the necessary information to properly thread the discussions :( What about a mail-to-news gateway?
Re: RFD: comp.infosystems.www.modperl
Yet, strangely, *this* thread seems to have threaded nicely (at least in 'mutt'). Tim. On Thu, Feb 03, 2000 at 02:54:41PM -0600, Kent Perrier wrote: "Randal L. Schwartz" wrote: "John" == John Siracusa [EMAIL PROTECTED] writes: John OTOH, it's a lot easier to track and respond to particular John issues/problems in a threaded setting like Usenet. Hmm. Get a threaded mailreader. It certainly changed my mail reading. :) Unfortunately, it appears that many of the MUAs that people use do not provide necessary information to properly thread the discussions :( Kent -- _ _ Timothy E. Peoples |_| C o l l e c t i v e |_| Senior Consultant |_technologies _| [EMAIL PROTECTED] [] [] a pencom company Cupiditas Erodos Probitas
Performance advantages to one large, -or many small mod_perl program?
Hello, Is there any performance advantage (speed, memory consumption) to creating a single, large, mod_perl program that can handle various types of "requests" for database data (fetchUserData, fetchPaymentData, fetchSubscriptionData) as opposed to many small mod_perl scripts each related to a particular type of request? Thanks Keith
Embperl: loop control bug
Embperl (1.2.0) causes a core dump when I put in a loop control statement. For instance, in the following snippet of code, when the 'last' line is reached, the apache child dumps core.

[- $i = 0 -]
[$ while ($i < 10) $]
[+ $i +]<br>
[$ if ($i == 5) $]
[- last -]
[$ endif $]
[- $i++ -]
[$ endwhile $]

The problem occurs regardless of what looping mechanism I use (foreach, while, etc). Can someone confirm that this problem also occurs on their system, please? Regards, Christian - Christian Gilmore Senior Technical Staff Member ATT Labs IP Technology, Florham Park [EMAIL PROTECTED] http://www.research.att.com/info/cgilmore
How to tell what is using how much memory
Greetings, Is there a way to get mod_perl to tell how much memory is being used by each module and compiled scripts? -Bill
Re: lookup_uri and access checks
[EMAIL PROTECTED] (G.W. Haywood) wrote: Hi there, On Thu, 3 Feb 2000, Paul J. Lucas wrote: Are you checking the status of the subrequest? No. Intentionally. As I wrote, I don't care what Apache says about the accessibility of the file. I *will* read it. All I want to do is suppress the "client denied by server request" messages in the log. I suppose you could take over the logging process entirely (e.g. Eagle Book pp 360-367, 669, CPAN, and if you're into C 684). Then you could do what you like with the error messages, including throw them away on a selective (hopefully a very selective) basis. The other option would be to take over the access protection, allowing access if the request is coming from the right place, or contains the right secret key in the path_info, or whatever. ------ Ken Williams Last Bastion of Euclidity [EMAIL PROTECTED]The Math Forum
Re: Performance advantages to one large, -or many small mod_perl program?
Is there any performance advantage (speed, memory consumption) to creating a single, large, mod_perl program that can handle various types of "requests" for database data (fetchUserData, fetchPaymentData, fetchSubscriptionData) as opposed to many small mod_perl scripts each related to a particular type of request? Keith, I think your question can just as easily be rephrased as: "Should I use many small scripts or just a few having all the code inside or in modules?". There is almost no difference, since all the code is compiled once and cached from then on. The best technique is to write all your code in modules and have only a few lines of code in the scripts. The only possible overhead with the all-in-one approach is the need to pass control variables so the code will know which functions to call. But this overhead is negligible. The only other thing is that you will have to change the URLs to use these control variables. From my experience, the maintenance of big multi-functional applications is much easier when you don't mess with scripts, but centralize all the code in modules. It also makes you write better code, since you will have less code duplication. There are probably more benefits. The only benefit of using many small scripts that I can see is if there are some scripts that almost never get used. So with small scripts you can probably save a few MBytes... Given that there is no difference in time between loading a few big modules and a single small script (they are all cached), this question scales up to the general question of software design, which others will probably answer better than me :) ___ Stas Bekmanmailto:[EMAIL PROTECTED] http://www.stason.org/stas Perl,CGI,Apache,Linux,Web,Java,PC http://www.stason.org/stas/TULARC perl.apache.orgmodperl.sourcegarden.org perlmonth.comperl.org single o- + single o-+ = singlesheavenhttp://www.singlesheaven.com
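The control-variable dispatch described above can be sketched roughly like this; the package and function names (MyApp::DB, fetch_user_data, etc.) are made up for illustration, and a real module would of course talk to the database instead of returning placeholder strings:

```perl
# Sketch of the "thin script, fat module" layout: all the logic lives
# in one module, and a dispatch table maps a control variable (e.g.
# taken from the URL) to the function that serves that request type.
package MyApp::DB;   # hypothetical module name
use strict;

my %dispatch = (
    user         => \&fetch_user_data,
    payment      => \&fetch_payment_data,
    subscription => \&fetch_subscription_data,
);

sub handle {
    my ($what) = @_;
    my $code = $dispatch{$what}
        or return "unknown request type: $what";
    return $code->();
}

# Placeholders standing in for real database queries.
sub fetch_user_data         { "user data" }
sub fetch_payment_data      { "payment data" }
sub fetch_subscription_data { "subscription data" }

package main;

# The per-URL Registry script shrinks to essentially this one call.
print MyApp::DB::handle('user'), "\n";
```

Because the module is compiled once and cached, adding another request type costs only a new entry in the table plus its function, with no extra per-script compilation.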
Re: How to tell what is using how much memory
Is there a way to get mod_perl to tell how much memory is being used by each module and compiled scripts? Yes, Doug presented a few techniques at the last Perl Conference. I will put parts of it into a guide at some time in the future ... it's in "Part III - Debugging and Optimizing mod_perl Modules". Basically you should use these modules: B::Fathom B::Graph B::Deparse B::Size/B::TerseSize B::LexInfo/Apache::RegistryLexInfo which I'm not sure are all available from the current non-devel version of perl... Doug, may I put the slides at perl.apache.org? Some of the links are broken... Here is the readme from the package: this package contains the "advanced mod_perl tutorial" presented at the O'Reilly Open Source Conference in Monterey on Aug 22nd, 1999. when the tutorial was presented live, it was driven by a dynamic Apache module, converting .pod to .html on-the-fly and the like. in addition, the demos require having an Apache-1.3.9 and mod_perl-1.22-dev installation, along with the following modules from CPAN: B::Fathom B::Graph B::Deparse B::Size/B::TerseSize B::LexInfo/Apache::RegistryLexInfo Apache::Module/Apache::ShowRequest libwww-perl MPEG::MP3Info Xmms::Remote Apache::DB the slide generator was in no shape to make available to the public. so, the slides in this package are all static, thanks to the magic of "save-as", emacs, and perl -pi ___ Stas Bekmanmailto:[EMAIL PROTECTED] http://www.stason.org/stas Perl,CGI,Apache,Linux,Web,Java,PC http://www.stason.org/stas/TULARC perl.apache.orgmodperl.sourcegarden.org perlmonth.comperl.org single o- + single o-+ = singlesheavenhttp://www.singlesheaven.com
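For reference, the B::Size/B::TerseSize style reports mentioned above are normally reached through Apache::Status; a hedged httpd.conf sketch (the location name is arbitrary, and the extra reports only appear if the optional B::* modules are installed):

```apache
# Mount the Apache::Status handler and switch on its optional
# B::*-based memory/size reports.
<Location /perl-status>
    SetHandler  perl-script
    PerlHandler Apache::Status
</Location>
PerlSetVar StatusOptionsAll On
```

Requesting /perl-status on the server then gives a menu that includes per-package and per-subroutine size information.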
Re: Proxy example in eagle book does not work
I'm running a mod_perl proxy module similar to the one in the book. I'm curious why you say Keep-Alive is a problem? Is your concern performance because of timeout settings, or something else? I'm not exactly sure what the problem is, but when Keep-Alive is passed via LWP images are very slow to load, often taking 20/30 secs per image. Without it they load just fine. --- Jason Bodnar + [EMAIL PROTECTED] + Tivoli Systems I swear I'd forget my own head if it wasn't up my ass. -- Jason Bodnar
Registry scripts slower than handlers: Why?
On Thu, 3 Feb 2000, Joshua Chamas wrote: I regrouped the HelloWorld benchmark data at http://www.chamas.com/bench/ recently, so that it gives you a more informative comparison of the relative startup costs of web environments on a per platform basis. Registry scripts run slower according to the benchmark data here. Presuming both were preloaded at server startup, can anybody explain this slowness? Thanks, Jie
Re: Why I think mod_ssl should be in front-end
On Thu, 3 Feb 2000, Vivek Khera wrote: "TM" == Tom Mornini [EMAIL PROTECTED] writes: If you have a high volume site that uses SSL, you should really be offloading the SSL processing to dedicated cryptography hardware. TM A fairly new option, I believe, and an excellent point. Not really. I saw these boards available at least 2 years ago, which is about half the age of the Web ;-) Wow! I had no idea this was available back then. I just heard about them a few weeks ago, when Intel bought a company for $500M that had systems to do just this. Thanks for the info. I hate being wrong! :-) -- -- Tom Mornini -- InfoMania Printing and Prepress
Re: RFD: comp.infosystems.www.modperl
Hi all, On Thu, 3 Feb 2000, Tim Peoples wrote: Yet, strangely, *this* thread seems to have threaded nicely (at least in 'mutt'). Well, sort of. I use pine and the inbox default sort is as received. Which means that I seem to see most of the answers before I see the questions. Entertaining, though. 73, Ged.
Re: Registry scripts slower than handlers: Why?
Jie Gao wrote: On Thu, 3 Feb 2000, Joshua Chamas wrote: I regrouped the HelloWorld benchmark data at http://www.chamas.com/bench/ recently, so that it gives you a more informative comparison of the relative startup costs of web environments on a per platform basis. Registry scripts run slower according to the benchmark data here. Presuming both were preloaded at server startup, can anybody explain this slowness? You are talking about a stat() and some 20 lines of perl code for the Registry script rather than a couple lines of perl code for the mod_perl handler, a difference which should disappear if your programs are of any significant size. The Registry script will be compiled once for each httpd process so this performance hit is also negligible, and can be made even less by using RegistryLoader at server startup time. -- Joshua _ Joshua Chamas Chamas Enterprises Inc. NodeWorks free web link monitoring Huntington Beach, CA USA http://www.nodeworks.com1-714-625-4051
[Fwd: Registry scripts slower than handlers: Why?]
Jie Gao wrote: Registry scripts run slower according to the benchmark data here. Presuming both were preloaded at server startup, can anybody explain this slowness? Slow is relative ... still a lot faster than vanilla CGI. But Registry needs to emulate a CGI-like environment before starting to work on the script itself (even if it is preloaded) - there is extra overhead for providing a separate namespace, etc. Inside that environment, scripts can be pretty dirty - mod_perl handlers have to be written a lot cleaner than Registry CGI scripts. Quoting from http://perl.apache.org/dist/cgi_to_mod_perl.html: CGI lets you get away with sloppy programming, mod_perl does not. Why? CGI scripts have the lifetime of a single HTTP request as a separate process. When the request is over, the process goes away and everything is cleaned up for you, e.g. global variables, open files, etc. Scripts running under mod_perl have a longer lifetime, over several requests, and different scripts may be in the same process. This means you must clean up after yourself. You've heard: always 'use strict' and -w!!! It's more important under mod_perl than anywhere else; while it's not required, it is strongly recommended, and it will save you time in the long run. And, of course, clean scripts will still run under CGI! - Gerd.
RE: Embperl: loop control bug
Embperl (1.2.0) causes a core dump when I put in a loop control statement. For instance, in the following snippet of code, when the 'last' line is reached, the apache child dumps core.

[- $i = 0 -]
[$ while ($i < 10) $]
[+ $i +]<br>
[$ if ($i == 5) $]
[- last -]
[$ endif $]
[- $i++ -]
[$ endwhile $]

The problem occurs regardless of what looping mechanism I use (foreach, while, etc). Can someone confirm that this problem also occurs on their system, please? "while/endwhile" are Embperl control statements and "last" is a Perl statement. This can't work at all, because Perl doesn't know anything about Embperl's while/endwhile. This is completely handled by Embperl itself. Gerald - Gerald Richter ecos electronic communication services gmbh Internetconnect * Webserver/-design/-datenbanken * Consulting Post: Tulpenstrasse 5 D-55276 Dienheim b. Mainz E-Mail: [EMAIL PROTECTED] Voice: +49 6133 925151 WWW: http://www.ecos.de Fax: +49 6133 925152 -
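Since Perl's `last` can't see Embperl's while/endwhile, one workaround is to fold the exit condition into the loop test itself via a flag; an untested sketch of the same loop rewritten that way:

```
[- $i = 0; $done = 0 -]
[$ while ($i < 10 && !$done) $]
[+ $i +]<br>
[$ if ($i == 5) $]
[- $done = 1 -]
[$ endif $]
[- $i++ -]
[$ endwhile $]
```

Setting $done makes the while condition false on the next pass, so the loop terminates without any Perl-level loop control crossing the Embperl boundary.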
Apache::ASP
Does Apache::ASP provide access to COM objects on UNIX if a cross-platform toolkit such as MainWin is used? In an FAQ, a similar question is asked about ActiveX objects. The answer is, "Only under Win32 will developers have access to ActiveX objects through the perl Win32::OLE interface. This will remain true until there are free COM ports to the UNIX world. At this time, there is no ActiveX for the UNIX world." Since COM will be supported on UNIX via MainWin or another product, does this answer still hold true? The cross-platform toolkits being considered are: - MainSoft's MainWin - Bristol's Win/U - Software AG's EntireX Thanks, John
Re: Proxy example in eagle book does not work
Jason Bodnar wrote: On 19-Jan-00 Doug MacEachern wrote: On Fri, 14 Jan 2000, Jason Bodnar wrote: A line in the proxy example of the eagle book on page 380 does not seem to work (entirely): The line:

$r->headers_in->do(sub {$request->header(@_);});

what if you change that to:

$r->headers_in->do(sub {$request->header(@_); 1});

? That sets all the headers including Connection which is a problem if it's Keep-Alive. It probably needs to be something like:

$r->headers_in->do(sub {return 1 if $_[0] eq 'Connection'; $request->header(@_); 1});

--- Jason Bodnar + [EMAIL PROTECTED] + Tivoli Systems That boy wouldn't know the difference between the Internet and a hair net. -- Jason Bodnar I'm running a mod_perl proxy module similar to the one in the book. I'm curious why you say Keep-Alive is a problem? Is your concern performance because of timeout settings, or something else? -- Doug Kyle - Information Systems Grand Rapids Public Library "During my service in the United States Congress, I took the initiative in creating the Internet." -- Al Gore
Re: XML applications in mod_perl/apache xml project (fwd)
I'm embarking on writing a mod_perl handler that accepts XML posted from client apps/browsers and then does "stuff" with the received XML snippets. I would like to take advantage of some of the projects discussed at xml.apache.org (The Apache XML Project), but I'm not sure how they fit into the mod_perl framework. Any XML gurus on this list have any experiences, pointers, or suggestions with integrating xml.apache.org projects with a mod_perl enabled apache server? I'd like to avoid using the java parsers and jserv if possible, but it seems that some of the nicer features of Cocoon, etc. are only available in java (hence the "Cocoon is a 100% pure Java publishing" tag line :). Currently at MP3.com we are using an XML, apache, mod_perl based system that we would like to release as an open source package. It is very much performance centric, so not too many features are available. We are looking into using Xalan-C and Xerces-C (with perl xs wrappers) to accomplish pretty much what we have built in-house. I am working on adding some performance tweaks to Xalan and Xerces to make them compete against our in-house style sheet language. You might want to consider checking out my session at ApacheCon 2000, to get a better feel of what we have done. Cheers, - Sander van Zoest [EMAIL PROTECTED] High Geek(858) 623-7442 MP3.com, Inc. http://www.mp3.com/ See you at ApacheCon 2000 - Your premiere Music Service Provider (MSP)
Re: RFD: comp.infosystems.www.modperl
On Thu, 3 Feb 2000, Bill Jones wrote: [...] I have to wholeheartedly agree. I couldn't, at first glance, find the mod_perl mailing list; but once I did, the posters here proved very helpful. I believe that once more people see the mailing ... if you fail to read the file called SUPPORT when you need .. uh, ... support I would say all bets are off anyway. Actually, people should in general not post before reading the few hints in that file. :-) - ask -- ask bjoern hansen - http://www.netcetera.dk/~ask/ more than 70M impressions per day, http://valueclick.com
Re: Apache::ASP
John DiFini wrote: Does Apache::ASP provide access to COM objects on UNIX if a cross-platform toolkit such as MainWin is used? In an FAQ, a similar question is asked about ActiveX objects. The answer is, "Only under Win32 will developers have access to ActiveX objects through the perl Win32::OLE interface. This will remain true until there are free COM ports to the UNIX world. At this time, there is no ActiveX for the UNIX world." Since COM will be supported on UNIX via MainWin or another product, does this answer still hold true? The cross-platform toolkits being considered are: - MainSoft's MainWin - Bristol's Win/U - Software AG's EntireX Hey John, Apache::ASP is a perl port of ASP to Apache and mod_perl. That said, in order to provide a COM interface under UNIX, there would have to be a perl module available that implemented COM. My searching just now found no COM perl modules, but it did pull up some CORBA interfaces, which may provide the distributed-object functionality that you are looking for:

cpan> d /CORBA/
Distribution  O/OM/OMKELLOGG/CORBA-IDLtree-0.7.tar.gz
Distribution  OTAYLOR/CORBA-MICO-0.5.0.tar.gz
Distribution  OTAYLOR/CORBA-ORBit-0.3.0.tar.gz
Distribution  P/PH/PHILIPA/CORBA-IOP-IOR-0.1.tar.gz

It may be that you are really stuck on COM and would like to see it under unix with perl ... if that's the case then you might be the one to develop the cross platform glue between perl + one of the development environments listed above. Only Software AG's EntireX looks like it has a free development tool, and only on Linux, so that might be a good start. It may be possible that you could formulate your requirements such that you don't need COM. Perl is a very powerful language, and in the right hands can do anything, and may be enough for your needs in this case. -- Joshua _ Joshua Chamas Chamas Enterprises Inc. NodeWorks free web link monitoring Huntington Beach, CA USA http://www.nodeworks.com1-714-625-4051