Re: dynamic vs. mostly static data
Perrin Harkins [EMAIL PROTECTED] writes:

> On Wed, 8 Nov 2000, Marinos J. Yannikos wrote:
> > > Only if you don't already have a proxy front-end. Most large sites
> > > will need one anyway.
> > After playing around for a while with mod_proxy on a second server, I'm
> > not so convinced; we have been doing quite well without such a setup for
> > some time now, despite up to 70-80 httpd processes (with mod_perl)
> > during busy hours.
> If you can meet your performance needs without using a proxy front-end,
> then by all means avoid the extra work. If you find yourself bumping
> against MaxClients and can't easily fix the problem with more RAM, I
> recommend you give the proxy approach another look. Personally, I avoided
> it until the hardware costs of scaling without it became prohibitive.

The killer at my last place was having loads of people a long, long way away hanging on to fat Apache processes while loading GIFs, JPEGs and 20k of text. Still, that was before Stas had really got to the root of the memory-shareability thing - I'm happy now that we're looking at a couple of meg unshared per process, not tens.

--
Dave Hodgkinson, http://www.hodgkinson.org
Editor-in-chief, The Highway Star http://www.deep-purple.com
Apache, mod_perl, MySQL, Sybase hired gun for, well, hire
Re: Fast DB access
We would like to add one thing to this. Different application situations seem to require different approaches. While RDBMSs seem to support, say, 80% of these situations, there are some where we find them not good enough. We have developed an adserver which has exactly the kind of scenario that Sander has talked about: lots of similar read-only queries whose data has to be distributed across servers, and so on. RDBMSs (in our experience) don't seem suited for this.

Murali
Differentiated Software Solutions Pvt. Ltd.
176, Ground Floor, 6th Main, 2nd Block, RT Nagar
Bangalore - 560032
Phone: 91 80 3431470
www.diffs-india.com

----- Original Message -----
From: Sander van Zoest [EMAIL PROTECTED]
To: Matt Sergeant [EMAIL PROTECTED]
Cc: Differentiated Software Solutions Pvt. Ltd [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Thursday, October 12, 2000 2:35 AM
Subject: Re: Fast DB access

On Wed, 11 Oct 2000, Matt Sergeant wrote:

> > I really think that sometimes going for a flat file layout *can* be much
> > more reliable and scalable than RDBMS software. It all really depends on
> > what you plan to do with the data and what you would like to get out of
> > it.
> I think you chose the wrong words there. I think a flat file layout can be
> more performant than an RDBMS, but I don't think it's going to be more
> reliable or scalable than an RDBMS. There are far too many locking issues
> and transaction issues necessary for the terms "reliable and scalable",
> unless you're willing to spend a few years re-coding Oracle :-)

I actually think that there are times it can be all three. Notice how I said there are times it can be all three; it definitely isn't the case all the time. Neither are RDBMSs. ;-) Lots of places use databases for read-only queries. Having a database that gets lots of similar read-only queries makes it an unnecessary single point of failure. Why not use the local disk and use rsync to replicate the data around?
This way, if a machine goes down, the others still have a full copy of the content and keep on running. If you have a lot of data that you need to keep in sync, and it needs constant updating with a random assortment of different queries, then you get some real use out of an RDBMS.

I guess I am just saying that there are a gazillion ways of doing things, and each tool has something it is good at. File systems are really good at serving up read-only content, so why re-invent the wheel? It all really depends on what content you are dealing with and how you expect to query it and use it. There is a reason that table optimisation and database tuning is such a sought-after skill. Most of these things have different requirements that all depend on the type of content and its use, and they need to be considered on a case-by-case basis. You can do things terribly using Oracle and you can do things well using Oracle. The same can be said about just about everything. ;-)

--
Sander van Zoest [[EMAIL PROTECTED]]
Covalent Technologies, Inc. http://www.covalent.net/
(415) 536-5218 http://www.vanzoest.com/sander/
Re: Fast DB access
Hi,

We are returning after extensive tests of the various options suggested. First, we are not entering into the debate about well-designed DBs, whether a database can handle lots of queries, and all that. Assume that we have an app (an adserver) which DBs don't support well, i.e. fairly complex queries that have to be serviced quickly. Some of the things we've found:

1. DBD::RAM is quite slow!! We presume this is because the SQL has to be parsed every time we make a request.
2. Building the entire DB into a hash variable inside the mod_perl program is the fastest: we found it to be 25 times faster than querying a Postgres database!!
3. We have a problem rebuilding this database in RAM, even, say, every 1000 requests. We tried using dbm and found it a good compromise solution: about 8 times faster than Postgres querying.
4. Another surprising finding: we built a denormalised DB on the Linux file system itself, using the directory and file name as the key on which we wanted to search. We found that dbm was faster than this.

We're carrying out more tests to see how scalable dbm is. Hope these findings are useful to others. Thanks for all the help.

Murali
Differentiated Software Solutions Pvt. Ltd.
176, Ground Floor, 6th Main, 2nd Block, RT Nagar
Bangalore - 560032
Phone: 91 80 3431470
www.diffs-india.com

----- Original Message -----
From: Francesc Guasch [EMAIL PROTECTED]
To: Differentiated Software Solutions Pvt. Ltd [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Wednesday, October 11, 2000 1:56 PM
Subject: Re: Fast DB access

> "Differentiated Software Solutions Pvt. Ltd" wrote:
> > Hi, We have an application where we will have to service as many as 50
> > queries a second. We've discovered that most databases just cannot keep
> > pace. The only option we know is to service queries out of flat files.
> There is a DBD module: DBD::RAM. If you have enough memory, or there is
> not much data, it could be what you need.
I have also seen recently a post about a new DBD module for CSV files, in addition to DBD::CSV; try http://search.cpan.org

--
- frankie -
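For readers who want to try the dbm approach from finding 3 above, here is a minimal standalone sketch using SDBM_File, which ships with Perl. Murali doesn't say which dbm implementation was used, and the file path and keys below are invented for illustration:

```perl
use strict;
use Fcntl;
use SDBM_File;

my $dbfile = '/tmp/adserver_demo';

# Build the lookup once, e.g. during the periodic refresh.
my %build;
tie %build, 'SDBM_File', $dbfile, O_RDWR | O_CREAT, 0644
    or die "cannot tie $dbfile: $!";
$build{'banner:42'} = 'http://example.com/creative/42.gif';
untie %build;

# At request time, open read-only and fetch by key directly --
# no SQL to parse on every request, unlike the DBD::RAM case.
my %ads;
tie %ads, 'SDBM_File', $dbfile, O_RDONLY, 0644
    or die "cannot tie $dbfile: $!";
print $ads{'banner:42'}, "\n";
untie %ads;
```

Note that SDBM has a small limit on key+value size; for larger records one of the Berkeley DB back ends would be the usual choice.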
Re: Fast DB access
Hi there, On Wed, 8 Nov 2000, Differentiated Software Solutions Pvt. Ltd wrote: We are returning after extensive tests of various options suggested. Did you try different indexing mechanisms in your tests? 73, Ged.
need AuthName error
Has anybody experienced something like this in their error log?

[(date)] [error] [client (client's ip)] need AuthName: (filename)

But the pages are served without errors. Using mod_perl 1.24_01 on Apache 1.3.14 with Apache::ASP.

-- Mike
RE: need AuthName error
> From: Victor Michael Blancas [mailto:[EMAIL PROTECTED]]
> Has anybody experienced something like this in their error log?
> [(date)] [error] [client (client's ip)] need AuthName: (filename)
> But the pages are served without errors. Using mod_perl 1.24_01 on
> Apache 1.3.14 with Apache::ASP.

Yep. Same "problem" here. I've been tearing my hair out trying to figure out where that error message is coming from. No problems with Apache/1.3.12 and mod_perl/1.23.

Henrik Tougaard, FOA, Denmark.
RE: need AuthName error
> Yep. Same "problem" here. I've been tearing my hair out trying to figure
> out where that error message is coming from. No problems with
> Apache/1.3.12 and mod_perl/1.23.

The problem only appears for files handled by Apache::ASP.

-- Mike
RE: Clarification of PERL_STASH_POST_DATA
> -----Original Message-----
> From: Paul J. Lucas [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, November 07, 2000 7:42 PM
> To: [EMAIL PROTECTED]
> Subject: Clarification of PERL_STASH_POST_DATA
>
> OK, so the documentation for PERL_STASH_POST_DATA reads:
>
>     There is an experimental option for Makefile.PL called
>     PERL_STASH_POST_DATA. If you turn it on, you can get at it again
>     with $r->subprocess_env("POST_DATA"). This is not on by default
>     because it adds overhead.
>
> So I rebuilt Apache and mod_perl with PERL_STASH_POST_DATA=1 on the
> perl Makefile.PL line; however:
>
> 1. What *is* $r->subprocess_env("POST_DATA")? Just the entire POSTed
>    data squished up into a single scalar? What about file uploads?
> 2. $r->subprocess_env("POST_DATA") doesn't even seem to work. I "warn"
>    it to the log file and I get nothing there.
>
> The general problem is preserving POSTed data, including file uploads,
> for all handlers.
>
> - Paul

I don't know about PERL_STASH_POST_DATA, but Apache::RequestNotes may be able to help - it basically does cookie/get/post/upload parsing during request init and then stashes references to the data in pnotes. The result is a consistent interface to the data across all handlers (which is the exact reason this module came about). It requires Doug's libapreq and probably a few code changes, but it may be somewhat helpful...

HTH

--Geoff
RE: Apache::Util routines
> -----Original Message-----
> From: Edwin Pratomo [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, November 08, 2000 12:44 AM
> To: [EMAIL PROTECTED]
> Subject: Apache::Util routines
>
> Guys, mod_perl 1.24:
>
>     perl -M'Apache::Util qw(:all)' -e '$a = parsedate("")'
>     Undefined subroutine Apache::Util::parsedate called at -e line 1.
>
> Looks like I've been missing something?
>
> Rgds, Edwin.

I doubt that you can use Apache::Util outside of the mod_perl runtime environment... If you are seeing the same thing in your live server, then you probably didn't compile with PERL_UTIL_API=1 or EVERYTHING=1.

HTH

--Geoff
Re: Retrieve OID from newly added record
Be warned that the OID field for a PostgreSQL record may not always be a unique value. Visit the PostgreSQL Hackers list for more information. Consider creating your own unique value using a combination of time/IP/random number, or using a sequence. Although the condition rarely arises, it has bitten a few people. Don't make the same mistake I did.

Have Fun!

Tim Tompkins wrote:
> According to the DBD::Pg docs, $sth->pg_oid_status returns the OID of the
> last INSERT command. See:
> http://theoryx5.uwinnipeg.ca/CPAN/data/DBD-Pg/dbd-pg.html
>
> Thanks,
> Tim Tompkins
> --
> Programmer / Staff Engineer
> http://www.arttoday.com/
>
> ----- Original Message -----
> From: "cbell" [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Sent: Tuesday, November 07, 2000 1:28 PM
> Subject: Retrieve OID from newly added record
>
> > Hello everyone, does anyone know how to retrieve the Object Identifier
> > (OID) of a record that was just inserted into a Postgres database from
> > within Perl? These are the commands I'm using to insert the record:
> >
> >     $sth = $dbh->prepare("Insert into inventory Values ($id)");
> >     $rc = $sth->execute;
> >
> > $rc will tell me whether or not the insert was successful, but that's
> > it. If I insert records from the psql utility, the OID is returned on
> > the screen after the insert, so I know it's there. I just need to know
> > how to get it from within mod_perl. I want this number so I can insert
> > a bunch of records in another file using the OID as the key. Thanks in
> > advance!
> >
> > -- BLH www.RentZone.org
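To make the thread's advice concrete, here is a hedged sketch of both routes: reading DBD::Pg's pg_oid_status attribute after the INSERT, and the sequence-based approach the caveat above recommends. The connection details, table and sequence names are all placeholders, and this needs a live PostgreSQL to actually run:

```perl
use strict;
use DBI;

# Connection details are placeholders -- adjust for your database.
my $dbh = DBI->connect('dbi:Pg:dbname=test', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 1 });

# Route 1: the OID of the last INSERT (DBD::Pg-specific; per the
# warning above, OIDs are not guaranteed to be unique).
my $sth = $dbh->prepare('INSERT INTO inventory VALUES (?)');
$sth->execute(1001);
my $oid = $sth->{pg_oid_status};

# Route 2: draw a key you control from a sequence (assumes a
# sequence named inventory_id_seq exists).
my ($id) = $dbh->selectrow_array("SELECT nextval('inventory_id_seq')");

$dbh->disconnect;
```

With route 2 you can fetch the key before the INSERT and reuse it for the dependent records, which avoids relying on OIDs entirely.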
Re: dynamic vs. mostly static data
"PH" == Perrin Harkins [EMAIL PROTECTED] writes:

PH> against MaxClients and can't easily fix the problem with more RAM, I
PH> recommend you give the proxy approach another look. Personally, I avoided
PH> it until the hardware costs of scaling without it became prohibitive.

Also, moving all static content, mostly images, off to another server helps tremendously. You ultimately want to end up doing only mod_perl requests on the mod_perl server, and leave static content to other processes. It doesn't have to be a proxy front end doing it. See the tuning docs for details.

--
Vivek Khera, Ph.D.              Khera Communications, Inc.
Internet: [EMAIL PROTECTED]     Rockville, MD  +1-240-453-8497
AIM: vivekkhera  Y!: vivek_khera  http://www.khera.org/~vivek/
Re: Apache::Util routines
> perl -M'Apache::Util qw(:all)' -e '$a = parsedate("")'
> Undefined subroutine Apache::Util::parsedate called at -e line 1.
>
> Looks like I've been missing something?

Apache::Util XS hooks into some of Apache's C code for date handling, escaping and so forth. There's no way with the current architecture to use it in a non-Apache-runtime context; running it from within your shell won't work.

--
Salon Internet http://www.salon.com/
Manager, Software and Systems "Livin' La Vida Unix!"
Ian Kallen [EMAIL PROTECTED] / AIM: iankallen / Fax: (415) 354-3326
Re: database access
Jason Liu wrote:
> Is Apache::DBI absolutely necessary if you want to establish a persistent
> database connection per child?

No, you can write your own (it's open source, remember ;-) - but why bother? Standing on the shoulders of giants, etc.

Greg

> Thanks, Jason
>
> -----Original Message-----
> From: David Hodgkinson [mailto:[EMAIL PROTECTED]]
> Sent: Monday, November 06, 2000 5:10 AM
> To: Jason Liu
> Cc: [EMAIL PROTECTED]
> Subject: Re: database access
>
> "Jason Liu" [EMAIL PROTECTED] writes:
> > In general, how should database connections be handled between parent
> > and child processes? Can you establish database connections from within
> > a handler?
>
> Absolutely. And using Apache::DBI caches the connection handle.
>
> --
> Dave Hodgkinson, http://www.hodgkinson.org
> Editor-in-chief, The Highway Star http://www.deep-purple.com
> Apache, mod_perl, MySQL, Sybase hired gun for, well, hire
Re: Sharing vars across httpds
Perrin Harkins wrote:
> On Mon, 6 Nov 2000, Differentiated Software Solutions Pvt. Ltd wrote:
> > We want to share a variable across different httpd processes. Our
> > requirement is as follows:
> > 1. We want to define one variable (which is a large hash).
> > 2. Every httpd should be able to access this variable (read-only).
> > 3. Periodically (every hour) we would like to have another mod_perl
> >    program refresh/recreate this large hash with new values.
> > 4. After this, we want the new values in the hash to be available
> >    across httpds.
>
> If that's all you want to do, I would stay away from the notoriously slow
> and sometimes tricky IPC modules. My dirt-simple approach is to put the
> data in a file and then read it into each httpd. (No point in trying to
> load it before forking if you're going to refresh it in an hour anyway.)
> You can use Storable for your data format, which is compact and fast. To
> check for an update, just stat the file and reload the data if it has a
> newer mtime. If you don't like doing the stat every time, put a counter
> in a global variable and just stat once every 10 requests or something,
> like Apache::SizeLimit does. If your data is too big to pull into memory,
> you can use a dbm file instead.
>
> - Perrin

Have you benchmarked this vs. IPC::ShareLite? I've heard similar rumours about IPC being slow - but is this about Shareable (a pure Perl implementation) or ShareLite (a C/XS implementation)? I would be interested in any results / ideas on how to do it.

Greg
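Perrin's stat-and-reload pattern can be sketched in a few lines. This is a minimal standalone version; the file path and hash contents are invented for the example, and in a real server the nstore() call would live in the hourly refresh job, not the reader:

```perl
use strict;
use Storable qw(nstore retrieve);

my $file = '/tmp/shared_hash_demo.stor';

# The hourly refresh job writes the new hash like this:
nstore({ rate => 42, currency => 'USD' }, $file);

# Each httpd remembers the data and the mtime it was loaded at,
# and re-reads the file only when the mtime changes.
my ($data, $loaded_mtime);

sub fetch_shared {
    my $mtime = (stat $file)[9];
    if (!defined $data or $mtime > $loaded_mtime) {
        $data = retrieve($file);
        $loaded_mtime = $mtime;
    }
    return $data;
}

print fetch_shared()->{rate}, "\n";
```

The stat on every call is cheap; as Perrin notes, a per-process request counter can make it even cheaper.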
RE: conflicts between mod_perl and php4
Hmmm, maybe they're fighting over the named pipe. Maybe you can get around it with your DBI $dsn: try specifying the mysql server with the network path (as opposed to nothing or 'localhost') so it uses a TCP connection instead of the named pipe. It'll be slower if it works, but that's better than nothing.

Gaf

-----Original Message-----
From: Andreas Gietl [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, November 07, 2000 4:05 PM
To: [EMAIL PROTECTED]
Subject: conflicts between mod_perl and php4

Hi, I also posted this on the mysql list and on the phpdb list. I've got the following configuration: apache_1.3.12 with PHP and mod_perl statically linked. PHP has compiled-in mysql support and mod_perl of course uses DBI. When I use a mod_perl script that uses the DBI interface I get a segfault, because PHP somehow blocks the connection. I also tried to compile PHP with --with-mysql=/path/to/mysql and that did not solve my problem. The only thing that helped was compiling PHP without mysql, but that is an incomplete configuration for me. Do you have any suggestions?

thx
andreas

--
andreas gietl
gietl internet services
fon +49 9402 2551 fax +49 9402 2604
mobile +49 171 60 70 008
[EMAIL PROTECTED]
# The manual says the program requires #
# Windows 95 or better. So I           #
# installed Linux!                     #
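Gaf's suggestion amounts to a one-line change in the DSN. A sketch of the two forms (the host name, port, and credentials are placeholders; the actual connect is commented out since it needs DBI, DBD::mysql and a live server):

```perl
use strict;

# With no host (or 'localhost'), DBD::mysql talks to the local socket --
# the resource the two modules appear to be fighting over:
my $socket_dsn = 'dbi:mysql:database=test';

# Naming the host explicitly forces a TCP connection instead:
my $tcp_dsn = 'dbi:mysql:database=test;host=db.example.com;port=3306';

# use DBI;
# my $dbh = DBI->connect($tcp_dsn, $user, $password, { RaiseError => 1 });

print "$tcp_dsn\n";
```

As noted above, TCP will be somewhat slower than the local socket, but it sidesteps the collision.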
Re: Problem reading from STDIN
Hi there, On Tue, 7 Nov 2000, Pramod Sokke wrote: I'm not able to read anything from stdin at all. Have you got a Limit directive somewhere in the config? 73, Ged.
Re: Memory Usage
Buddy Lee Haystack [EMAIL PROTECTED] wrote: The memory consumption increases by about 1 megabyte for each child process every time I issued the USR1 signal. Which is kind of what I expected given Jens-Uwe Mager's explanation of DSO's failings with mod_perl. FWIW, I get a 240k increase in httpd size every time I send a USR1 signal to apache, and that is with a 100% static Apache/mod_perl, without even mod_so compiled in. Doing a few "ps"s during the graceful restart process, it looks like all child processes die, then the parent grows and starts respawning new children. It smells to me like the leak is in mod_perl reloading the perl modules themselves. If I trim httpd.conf down so it only loads a very minimal "hello world" handler, then the leak goes down to 64k per restart. I've gathered from this list that the 'canonical' answer to these problems is Apache::Reload, but I've been reluctant to use it because it would presumably reload the modules separately in each child, and therefore leaving them unshared. So for the moment what I do is restart apache for real every couple weeks. It's only a couple seconds of downtime... -- Roger Espel Llima, [EMAIL PROTECTED] http://www.iagora.com/~espel/index.html
PerlRun StatInc perl5_00405
I'm having trouble using PerlRun with the StatINC module. The symptom is several error log entries of the form:

[Wed Nov 8 11:12:20 2000] PerlInitHandler subroutine `Apache::StatINC::handler' : Apache::StatINC: Can't locate

The offending item in %INC appears to be the 'warnings.pm' entry defined on line 308 of PerlRun:

    BEGIN {
        if ($] < 5.006) {
            $INC{'warnings.pm'} = __FILE__;
            *warnings::unimport = sub {};
        }
    }

The value for this entry is still defined after the PerlModule Apache::PerlRun directive, but by the time the PerlHandler is invoked, $INC{'warnings.pm'} evaluates to undef (the key is still defined, just not its value). This, of course, causes StatINC to generate the above error message. We're using Bundle::Apache 1.02 and perl 5.004_05. My perl.conf file contains the following:

    PerlFreshRestart On
    PerlModule Apache::StatINC
    PerlInitHandler Apache::StatINC
    PerlSetVar StatINC_UndefOnReload On
    PerlSetVar StatINC_Debug 1
    PerlWarn On
    PerlTaintCheck On

My virtual server contains the following:

    Alias /cgi-bin /path/to/cgi-bin
    PerlModule Apache::PerlRun
    <Location /cgi-bin>
        SetHandler perl-script
        PerlHandler Apache::PerlRun
        Options +ExecCGI
        PerlSendHeader On
    </Location>

Any help here would be most appreciated. Thanks!

Chris
Re: Memory Usage
On Tue, Nov 07, 2000 at 04:26:45PM -0500, Buddy Lee Haystack wrote:
> I take it you've used DSO much more than I have, so I'm interested in any
> information in addition to that provided by the kind "G.W. Haywood" to
> the following:
>
> "What concerns me even more is the fact that I have Apache restart child
> processes after they each serve 100 requests [MaxRequestsPerChild 100].
> It's a RedHat default that is supposed to reduce memory leaks, but with
> mod_perl DSO it may actually have the opposite effect. I can easily
> increase the value, or remove it altogether. Any recommendations?"

I do not think that this is a problem: if a child dies after it has done its number of requests, it will not load and reload all DSOs; it will just be forked as a fresh copy of the master and be done with it. I think the problem only happens with the restart signal.

--
Jens-Uwe Mager
HELIOS Software GmbH
Steinriede 3
30827 Garbsen
Germany
Phone: +49 5131 709320 FAX: +49 5131 709325
Internet: [EMAIL PROTECTED]
RE: Clarification of PERL_STASH_POST_DATA
On Wed, 8 Nov 2000, Geoffrey Young wrote:
> ... Apache::RequestNotes may be able to help - it basically does
> cookie/get/post/upload parsing during request init and then stashes
> references to the data in pnotes. The result is a consistent interface
> to the data across all handlers (which is the exact reason this module
> came about)

This is /exactly/ right. The only caveat is that its API is different from Apache::Request's. It Would Be Nice(TM) if the module subclassed itself off of Apache::Request so that the Apache::Request API would Do The Right Thing(TM).

> it requires Doug's libapreq and probably a few code changes, but it may
> be somewhat helpful...

Such functionality should simply be absorbed into Apache::Request. Doug??

- Paul
Re: Sharing vars across httpds
Greg Cope wrote:
> Have you benchmarked this vs IPC::ShareLite?

Sorry, I don't have numbers for ShareLite vs. files. However, this is from DeWitt Clinton's File::Cache module docs:

    File::Cache implements an object store where data is persisted across
    processes in the filesystem. It was written to complement IPC::Cache.
    Where IPC::Cache is faster for small numbers of simple objects,
    File::Cache tends towards being more performant when caching large
    numbers of complex objects.

He posted some info to the list which you can find in the archive. IPC::Cache uses ShareLite.

- Perrin
Re: need AuthName error
Victor Michael Blancas wrote:
> Has anybody experienced something like this in their error log?
> [(date)] [error] [client (client's ip)] need AuthName: (filename)
> But the pages are served without errors. Using mod_perl 1.24_01 on
> Apache 1.3.14 with Apache::ASP.

It seems to be a feature of recent Apaches to give off this error... just set Apache's AuthName somewhere. The problem is that Apache::ASP tries to fetch auth info, if it's available, by default. Another way to go would be to turn off this default behavior and create a config setting for it, which might be the right thing, since this error is a terrible one to figure out.

--Joshua
Re: Fast DB access
"Differentiated Software Solutions Pvt. Ltd" wrote:
> 3. We have a problem rebuilding this database in RAM, even, say, every
>    1000 requests.

What problem are you having with it?

> We tried using dbm and found it a good compromise solution. We found
> that it is about 8 times faster than Postgres querying.

Some dbm implementations are faster than others. Depending on your data size, you may want to try a couple of them.

> 4. Another surprising finding: we built a denormalised DB on the Linux
>    file system itself, using the directory and file name as the key on
>    which we wanted to search. We found that dbm was faster than this.

Did you end up with a large number of files in one directory? When using the file system in this way, it's a common practice to hash the key you're using and then split that across multiple directories to prevent too many files from building up in one and slowing things down. For example:

    "my_key" -> "dHodeifehH" -> /usr/local/data/dH/odeifehH

Also, you could try using mmap for reading the files, or possibly the Cache::Mmap module.

> We're carrying out more tests to see how scalable dbm is.

If you're using read-only data, you can leave the dbm handles persistent between connections. That will speed things up. You could also look at BerkeleyDB, which has a built-in shared memory buffer and page-level locking, or IPC::MM, which offers a shared memory hash written in C with a Perl interface.

> Hope these findings are useful to others.

They are. Keep 'em coming.

- Perrin
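Perrin's hashed-directory layout can be sketched like this. The base directory is hypothetical, and MD5 merely stands in for whatever hash function you prefer; the bucket is taken from the first two hex characters of the digest so no single directory fills up:

```perl
use strict;
use Digest::MD5 qw(md5_hex);
use File::Path qw(mkpath);

# Map a key to (bucket directory, file name) via its hash.
sub key_to_path {
    my ($base, $key) = @_;
    my $digest = md5_hex($key);
    return ("$base/" . substr($digest, 0, 2), $digest);
}

my ($dir, $name) = key_to_path('/tmp/flatdb_demo', 'my_key');
mkpath($dir);

open my $fh, '>', "$dir/$name" or die "open: $!";
print $fh "value for my_key\n";
close $fh;

print "$dir/$name\n";
```

Two hex characters give 256 buckets; deeper nesting (e.g. two levels) spreads the files further if the data set is large.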
Re: Memory Usage
Actually, I did search through the archives for log rotation and memory leaks, but was not able to find any information specifically linking memory issues to log rotation. If it were not for all the helpful people on this list, I wouldn't have known about all the options I have to minimize the adverse effects of this bug. Some on this list even suggested that I was the only person suffering this issue, but evidently this is not so. Since I've just subscribed to the list, I wouldn't have any knowledge of your previous posts, which most assuredly would have helped me.

I don't think RedHat, or any other distribution, failing to provide enough documentation is the issue here. I just feel that many people could benefit from the inclusion of a warning about mod_perl under DSO Apache in the location the Apache Group maintains for this on their website, "Apache 1.3 Dynamic Shared Object (DSO) Support" [http://www.apache.org/docs/dso.html]. It certainly can't hurt, and is where most people would logically go for information. Let's face it, many people who purchase a major distribution probably aren't going to use mod_perl; consequently, the vendors may not even be aware of any issues. The knowledge base of the contributors to this list can help many people, especially on this issue, because everyone has to rotate logs sooner or later. From the responses to this particular issue we have seen that quite a few people have had the same experience, even when mod_perl is compiled statically into Apache. Addressing an issue openly, thoroughly, quickly and responsibly is what separates open source projects from our poor, closed-source brethren. If anyone feels offended or slighted by my somewhat ineloquent, or poorly considered, posts, you have my sincerest apologies. Life is too short for conflict.

I think this thread has gone on a bit too long, and sapped too much time from everyone's schedule, so I'd like to end it now so that you all can get back to more productive endeavors. I'd like to thank everyone who helped me out, especially the following: Jens-Uwe Mager for pointing out the source of the issue; Christian Gilmore for the rotation scripts; G.W. Haywood for the perls of wisdom; Gunther Birznieks for yet another option; Ričardas Čepas for the piped rotatelogs suggestion; and Roger Espel Llima for pointing out that the same issue exists with mod_perl compiled statically.

Have a great day!

Douglas Leonard wrote:
> First off, the complaint about the lack of documentation for DSO being
> experimental is a bit off-base IMO. It isn't up to the mod_perl group to
> make sure RedHat includes complete documentation in their build of
> mod_perl. Also, this issue has been talked about many times on this
> mailing list. Sometime after mod_perl 1.20 was released there was talk
> that the DSO problems had been fixed. I can remember putting out a post
> myself on exactly how to cause the process size to grow using HUP or USR1
> when using mod_perl 1.21 and 1.22-dev, in order to disprove this. It is
> always best to check one of the archives first for this kind of problem
> IMO. I find it best to do a daily staggered shutdown/restart of each
> apache server and rotate the logs via a custom script. One minute of
> downtime per server per day isn't exactly noticeable when you have a
> load balancing system set up.
>
> On Tue, 7 Nov 2000, Buddy Lee Haystack wrote:
> > Thanks, but as a RedHat [or other typical major distribution] user, I
> > would never see the documentation you mentioned below. Since DSO is
> > still experimental, would it not be an absolute necessity to include
> > that information in the location where most users are directed to look
> > for information about all things Apache?
> >
> > The first place I go to look for information is the online
> > documentation, and I know that the "experimental" nature of using DSO
> > is not mentioned anywhere in "Apache 1.3 Dynamic Shared Object (DSO)
> > Support" [http://www.apache.org/docs/dso.html]. It appears as if the
> > "experimental" nature of DSOs under Apache is fairly well removed from
> > view. Had the information you included below been clearly listed on
> > Apache's website, in the proper location, many people would have chosen
> > not to use DSO. It appears as if the Apache Group has been a little
> > less than candid about Apache's true support for DSO. IMHO there really
> > needs to be a warning in the documentation on their website explicitly
> > stating the info you've included below.
> > <snip>
>
> --
> Douglas Leonard [EMAIL PROTECTED]

-- BLH www.RentZone.org
Apache::ASP
Hi there. I'm running Apache 1.3.14 on a RedHat 6.1 machine. I've installed Apache from the source files, then every other module needed for a successful installation of the Apache::ASP module. Finally the ASP module installation worked fine (the "install test" was OK). My problem is that when I try to run the ASP examples included in the package, the server tells me there is an internal error. This happens with every other ASP page too. Could you tell me please what else I should configure? The directives in the ASP README (like PerlHandler or PerlSetVar) make httpd give error messages, and it doesn't work. Thank you very much.

Dan
Apologies for vacation message!
Hi All, Very sorry for forgetting to kill that message. That's what having a baby does to you, makes you lose your mind. :-) Sorry! Alex
RE: database access
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Greg Cope
> Sent: Tuesday, November 07, 2000 10:53 AM
> To: Jason Liu
> Cc: [EMAIL PROTECTED]
> Subject: Re: database access
>
> Jason Liu wrote:
> > Is Apache::DBI absolutely necessary if you want to establish a
> > persistent database connection per child?
>
> No, you can write your own (it's open source, remember ;-) - but why
> bother? Standing on the shoulders of giants, etc.

Well, there is a section in the mod_perl guide on the subject here:

http://perl.apache.org/guide/performance.html#Efficient_Work_with_Databases_un
Re: PerlRun StatInc perl5_00405
Hi there,

On Wed, 8 Nov 2000, Chris Strom wrote:
> The offending item in %INC appears to be the 'warnings.pm' entry defined
> on line 308 of PerlRun:
>
>     BEGIN {
>         if ($] < 5.006) {
>             $INC{'warnings.pm'} = __FILE__;
>             *warnings::unimport = sub {};
>         }
>     }

Do you even have the file 'warnings.pm' if you're using 5.004_05? I have it in a 5.6.0 installation, but not in 5.005_03. That's why there's a test for a Perl version less than 5.006 (== 5.6.0). What does perl -V say?

73, Ged.
RE: Clarification of PERL_STASH_POST_DATA
> -----Original Message-----
> From: Paul J. Lucas [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, November 08, 2000 1:22 PM
> To: [EMAIL PROTECTED]
> Subject: RE: Clarification of PERL_STASH_POST_DATA
>
> On Wed, 8 Nov 2000, Geoffrey Young wrote:
> > ... Apache::RequestNotes may be able to help - it basically does
> > cookie/get/post/upload parsing during request init and then stashes
> > references to the data in pnotes. The result is a consistent interface
> > to the data across all handlers (which is the exact reason this module
> > came about)
>
> This is /exactly/ right. The only caveat is that its API is different
> from Apache::Request.

It's not really different, just different than most people are used to :)

$r->pnotes('INPUT') contains what is returned from Apache::Request::parms() (undocumented in Apache::Request, but not in libapreq), which is an Apache::Table reference - those get() calls are really Apache::Table::get(). At any rate, hope you find it useful...

--Geoff
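Put together, a content handler consuming Apache::RequestNotes data might look roughly like this under mod_perl 1.x. This is an untested sketch, not runnable outside a configured server: it assumes Apache::RequestNotes is installed as a PerlInitHandler, libapreq is present, and the 'name' parameter is invented for the example:

```perl
package My::ShowInput;
use strict;
use Apache::Constants qw(OK);

sub handler {
    my $r = shift;

    # Apache::RequestNotes has already parsed GET/POST data into an
    # Apache::Table stashed in pnotes under 'INPUT', as described above.
    my $input = $r->pnotes('INPUT');
    my $name  = $input ? $input->get('name') : undef;

    $r->send_http_header('text/plain');
    $r->print('name=', defined $name ? $name : '(none)', "\n");
    return OK;
}

1;
```

The same $input table is visible from every phase of the request, which is the consistency the thread is after.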
RE: PerlRun StatInc perl5_00405
No, I don't have warnings.pm; it is only included in 5.6, and we have no plans on upgrading at this time. The code snippet from PerlRun is a workaround for perl versions prior to the most recent release, but it appears to be causing problems with StatINC. perl -V reports:

    Summary of my perl5 (5.0 patchlevel 4 subversion 5) configuration:

It may be that the combination of PerlRun's perl 5.6 workaround and StatINC simply will not work. If we were not using StatINC, or were using perl 5.6, I do not believe that we would be having these problems. Unfortunately, our development environment would be rather painful without StatINC and, as I said, we have no plans on upgrading from 5.004_05. For now, I've got the following hack in perl.conf:

    PerlModule Apache::StatINC
    #
    # hack to keep $INC{'warnings.pm'} defined. PerlRun includes a
    # workaround for perl 5.6 which defines $INC{'warnings.pm'} and
    # ties the warnings::unimport function to an anonymous subroutine.
    #
    PerlInitHandler "sub { delete $INC{'warnings.pm'}; $INC{'warnings.pm'} = '/usr/local/lib/perl/site_perl/Apache/PerlRun.pm'; }"
    PerlInitHandler Apache::StatINC
    PerlSetVar StatINC_UndefOnReload On
    PerlSetVar StatINC_Debug 1
    PerlWarn On
    PerlTaintCheck On

It's not the prettiest thing, but it does what's needed.

-----Original Message-----
From: G.W. Haywood [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, November 08, 2000 3:00 PM
To: Chris Strom
Cc: [EMAIL PROTECTED]
Subject: Re: PerlRun StatInc perl5_00405

> Hi there,
>
> On Wed, 8 Nov 2000, Chris Strom wrote:
> > The offending item in %INC appears to be the 'warnings.pm' entry
> > defined on line 308 of PerlRun:
> >
> >     BEGIN {
> >         if ($] < 5.006) {
> >             $INC{'warnings.pm'} = __FILE__;
> >             *warnings::unimport = sub {};
> >         }
> >     }
>
> Do you even have the file 'warnings.pm' if you're using 5.004_05? I have
> it in a 5.6.0 installation, but not in 5.005_03. That's why there's a
> test for a Perl version less than 5.006 (== 5.6.0). What does perl -V
> say?
>
> 73, Ged.
Re: Fast DB access
On Wed, Nov 08, 2000 at 10:49:00AM -0800, Perrin Harkins wrote: Also, you could try using mmap for reading the files, or possibly the Cache::Mmap module. If you do play with mmap, note that it can lose some or all of its efficiency in SMP environments, or so I've read. - Barrie
Re: Problem reading from STDIN
No, I don't have the <Limit> directive. Also, I read a couple of other similar questions on the list. While I didn't find anything specific to this question, there are a lot there that seem to indicate that the only way to access STDIN under mod_perl is to use Apache::Request. Is that right? Does this mean I have to change our legacy code to reflect this? Thanks, Pramod - Original Message - From: G.W. Haywood [EMAIL PROTECTED] To: Pramod Sokke [EMAIL PROTECTED] Cc: modperl [EMAIL PROTECTED] Sent: Wednesday, November 08, 2000 9:26 AM Subject: Re: Problem reading from STDIN Hi there, On Tue, 7 Nov 2000, Pramod Sokke wrote: I'm not able to read anything from stdin at all. Have you got a <Limit> directive somewhere in the config? 73, Ged.
Re: dynamic vs. mostly static data
Also, moving all static content, mostly images, off to another server helps tremendously. True, we had an extra thttpd for static content at one point while we were short on memory. Something else that seems to work well, although I can't really explain it, is to disable keepalive support. For some reason, the number of concurrent processes (for a single server setup) went from 70-80 to approx. 20(!), without a noticeable drop in performance or page impressions. My guess is that with such a configuration, since some httpd's are busy generating dynamic pages, those that are available for static content are usually (i.e. with a higher probability) those that just served static content and finished quickly, so the number of httpd's stays near the number of concurrent dynamic page accesses + max. number of concurrent connections. With keepalives on, httpd's need much more time for one page impression if the connection is slow, so that should explain why there are so many of them. Does this make sense? Regards, -mjy -- Marinos J. Yannikos Preisvergleich Internet Services AG, Linke Wienzeile 4/2/5, A-1060 Wien Tel/Fax: (+431) 5811609-52/-55
Re: Flakey [Named]VirtualHost support, should TieHash be used?
For some reason I usually need to start Apache 4-5 times before it actually "sticks" and starts. What do you mean by "sticks"? You mean, when it doesn't "work", Apache isn't running? What happens when you do httpd -S (should show the configured virtual hosts)? One other weirdness.. when I startup Apache, I get a bunch of "unreferenced and undefined" warnings. Those are evident in the ServerConfig.pm below. You mean the @VirtualHost = ( undef, undef, ... ) stuff? This is because of the following line: $VirtualHost{$ip}[++$#VirtualHost] = { Where you write $#VirtualHost, that's the *array* @VirtualHost; it's not the number of elements in @{ $VirtualHost{$ip} }. See the difference? Each time you increment $#VirtualHost, you It'd be possible to get the number of elements in @{ $VirtualHost{$ip} } and increment that etc. But why not just use push? push @{ $VirtualHost{$ip} }, { ... }; Much easier, yes? Same thing here: $NameVirtualHost[++$#NameVirtualHost] = $ip; push @NameVirtualHost, $ip; bye, Benjamin Trott
Re: Problem reading from STDIN
Hi there, On Wed, 8 Nov 2000, Pramod Sokke wrote: there are a lot there that seem to indicate that the only way to access STDIN under mod_perl is to use Apache::Request. Is that right? If your code is clean I'm sure you'll be able to use Apache::Registry and things should work just as if you were using CGI. See the 'Porting' section of the Guide for more info about STDIN (and grep for "use strict" while you're at it:). 73, Ged.
Re: Flakey [Named]VirtualHost support, should TieHash be used?
Benjamin Trott [EMAIL PROTECTED] 11/8/00 4:23:12 PM For some reason I usually need to start Apache 4-5 times before it actually "sticks" and starts. What do you mean by "sticks"? You mean, when it doesn't "work", Apache isn't running? Exactly. It says httpd started but, from what I can tell, it starts up really quickly, then exits. What happens when you do httpd -S (should show the configured virtual hosts)? Shows everything correctly, but it also shows: Attempt to free unreferenced scalar. Attempt to free unreferenced scalar. Attempt to free unreferenced scalar. Attempt to free unreferenced scalar. Attempt to free unreferenced scalar. Right before the VirtualHost configuration. I just noticed that when it gives ~16 of those lines, apache won't start. Less, it starts fine... One other weirdness.. when I startup Apache, I get a bunch of "unreferenced and undefined" warnings. Those are evident in the ServerConfig.pm below. You mean the @VirtualHost = ( undef, undef, ... ) stuff? Yes, that bothers me. This is because of the following line: $VirtualHost{$ip}[++$#VirtualHost] = { Where you write $#VirtualHost, that's the *array* @VirtualHost; it's not the number of elements in @{ $VirtualHost{$ip} }. See the difference? Each time you increment $#VirtualHost, you ... got cut off.. I increment the array not the elements?? It'd be possible to get the number of elements in @{ $VirtualHost{$ip} } and increment that etc. But why not just use push? push @{ $VirtualHost{$ip} }, { ... }; Much easier, yes? Same thing here: $NameVirtualHost[++$#NameVirtualHost] = $ip; push @NameVirtualHost, $ip; Hmmm, something doesn't make sense here. I originally used push, but the previous data got overwritten. I'll try it again tonight and report back.. Thank you! Alexey Zilber Director of MIS CCG.XM 498 Seventh Ave, 16th Fl New York, New York, 10018 tel 212.297.7048 fax 212.297.8939 email [EMAIL PROTECTED]
Re: PerlRun StatInc perl5_00405
here is something that occurred to me, but it is untested, and could be plain foolish. place a file named warnings.pm in your 5_00405 include path that contains the infamous: 1; this may help to quiet StatInc. Chris Strom wrote: No, I don't have warnings.pm. It is only included in 5.6. We have no plans on upgrading at this time. The code snippet from PerlRun is a workaround for perl versions prior to the most recent release, but it appears to be causing problems with StatInc. perl -V reports: Summary of my perl5 (5.0 patchlevel 4 subversion 5) configuration: It may be that the combination of PerlRun/StatInc/perl5_00405 simply will not work. If we were not using StatInc or were using perl 5.6, I do not believe that we would be having these problems. Unfortunately, our development environment would be rather painful without StatInc and, as I said, we have no plans on upgrading from 5_00405. For now, I've got the following hack in perl.conf: PerlModule Apache::StatINC # # hack to keep $INC{'warnings.pm'} defined. PerlRun includes a # workaround for perl5.6 which defines $INC{'warnings.pm'} and # ties the warnings::unimport function to an anonymous subroutine. # PerlInitHandler "sub {delete $INC{'warnings.pm'}; $INC{'warnings.pm'} = '/usr/local/lib/perl/site_perl/Apache/PerlRun.pm';}" PerlInitHandler Apache::StatINC PerlSetVar StatINC_UndefOnReload On PerlSetVar StatINC_Debug 1 PerlWarn On PerlTaintCheck On It's not the prettiest thing, but it does what's needed. -Original Message- From: G.W. Haywood [mailto:[EMAIL PROTECTED]] Sent: Wednesday, November 08, 2000 3:00 PM To: Chris Strom Cc: [EMAIL PROTECTED] Subject: Re: PerlRun StatInc perl5_00405 Hi there, On Wed, 8 Nov 2000, Chris Strom wrote: The offending item in %INC appears to be the 'warnings.pm' entry defined on line 308 of PerlRun: BEGIN { if ($] < 5.006) { $INC{'warnings.pm'} = __FILE__; *warnings::unimport = sub {}; } } Do you even have the file 'warnings.pm' if you're using 5.004_05? 
I have it in a 5.6.0 installation, but not in 5.005_03. That's why there's a test for Perl version less than 5.006 (== 5.6.0). What does perl -V say? 73, Ged.. -- ___cliff [EMAIL PROTECTED]http://www.genwax.com/
Re: Flakey [Named]VirtualHost support, should TieHash be used?
Attempt to free unreferenced scalar. Attempt to free unreferenced scalar. Attempt to free unreferenced scalar. Attempt to free unreferenced scalar. Attempt to free unreferenced scalar. Interesting. Is the number of "Attempt to free unreferenced scalar." messages the same as the number of undef elements in @VirtualHost? Where you write $#VirtualHost, that's the *array* @VirtualHost; it's not the number of elements in @{ $VirtualHost{$ip} }. See the difference? Each time you increment $#VirtualHost, you ... got cut off.. I increment the array not the elements?? Oops, sorry. I was going to say that when you increment $#VirtualHost, you increase the size of the array @VirtualHost, adding a new (undefined) element. In other words you're using an array as a glorified index counter. :) bye, Ben
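Ben's point about the array being used as a glorified index counter can be seen in a few lines of standalone Perl (variable names follow the config module under discussion, but the data is made up for the demo):

```perl
use strict;

my (%VirtualHost, @VirtualHost);

# Incrementing $#VirtualHost grows the *array* @VirtualHost with undef
# slots; it says nothing about the per-IP lists in %VirtualHost.
for my $ip (qw(10.0.0.1 10.0.0.2 10.0.0.3)) {
    $VirtualHost{$ip}[++$#VirtualHost] = { ip => $ip };
}
print scalar(@VirtualHost), " undef slots created as a side effect\n";

# The push form keeps each per-IP list dense and leaves no stray array:
my %ByIP;
for my $ip (qw(10.0.0.1 10.0.0.2 10.0.0.3)) {
    push @{ $ByIP{$ip} }, { ip => $ip };
}
print scalar(@{ $ByIP{'10.0.0.1'} }), " entry stored for 10.0.0.1\n";
```

Those leftover undef slots are exactly the `@VirtualHost = ( undef, undef, ... )` noise Alexey was seeing.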
Re: conflicts between mod_perl and php4
Andreas, compile PHP *without* mysql support. Read the PHP docs, it's there... don't worry, it'll still be able to use mysql connections... Andreas Gietl wrote: i've got the following configuration: apache_1.3.12 with php and mod_perl statically linked. Php has compiled-in mysql-support and mod_perl of course uses DBI.
Re: dynamic vs. mostly static data
On Wed, 8 Nov 2000, Marinos J. Yannikos wrote: Something else that seems to work well, although I can't really explain it, is to disable keepalive support. For some reason, the number of concurrent processes (for a single server setup) went from 70-80 to approx. 20(!), without a noticeable drop in performance or page impressions. KeepAlive will cause a connection to stay open (and a process to stay busy listening on it) for a period of time after the response is sent, even if nothing at all is happening on that connection. I think the default wait is 15 seconds. That can really add up fast when you have your MaxClients set at less than 100. KeepAlive is great on a proxy server or dedicated image server though. - Perrin
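For reference, the directives in question look like this in httpd.conf (values are illustrative, matching the defaults Perrin mentions; a heavy mod_perl-only backend is the case where turning keepalives off pays):

```apache
# Front-end proxy or dedicated image server: keepalives help
# browsers fetch many small static files over one connection.
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15      # seconds a process idles waiting for the next request

# Heavy mod_perl backend: don't let fat processes sit idle on open
# connections -- set KeepAlive Off there instead.
```

With MaxClients under 100, a 15-second idle per response is exactly how 70-80 processes end up busy doing nothing.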
problems compiling modperl under macos x pb
When I try compiling apache with modperl under macos x pb I got the following error. Both the source for apache and modperl is the latest cvs version. I do it "the flexible way" perl Makefile.PL \ APACHE_SRC=../apache-1.3/src \ DO_HTTPD=1 \ USE_APACI=1 \ PREP_HTTPD=1 \ EVERYTHING=1 \ PERL_TRACE=1 make make install which goes fine, but when it comes to apache ./configure \ --prefix=/Library/WebServer \ --enable-module=status \ --enable-module=info \ --enable-module=rewrite \ --enable-module=digest \ --enable-module=proxy \ --enable-module=unique_id \ --activate-module=src/modules/perl/libperl.a make I got the following error: === src/modules/perl cc -O3 -I/System/Library/Perl/darwin/CORE -g -pipe -pipe -fno-common -DHAS_TELLDIR_PROTOTYPE -fno-strict-aliasing -DMOD_PERL_VERSION=\"1.24_02-dev\" -DMOD_PERL_STRING_VERSION=\"mod_perl/1.24_02-dev\" -DPERL_TRACE=1 -I../.. -I/System/Library/Perl/darwin/CORE -I../../os/unix -I../../include -DDARWIN -DMOD_PERL -DUSE_PERL_SSI -g -pipe -pipe -fno-common -DHAS_TELLDIR_PROTOTYPE -fno-strict-aliasing -DUSE_HSREGEX -DUSE_EXPAT -I../../lib/expat-lite -DNO_DL_NEEDED `../../apaci` -c mod_perl.c mod_perl.c:387: syntax error, missing `;' after `)' mod_perl.c:762: syntax error, missing `;' after `)' mod_perl.c:855: syntax error, missing `;' after `)' mod_perl.c:1165: syntax error, missing `;' after `)' mod_perl.c:1479: syntax error, missing `;' after `)' mod_perl.c:1479: illegal expression, found `register' mod_perl.c:1484: illegal expression, found `char' mod_perl.c:1485: illegal expression, found `int' mod_perl.c:1486: illegal expression, found `char' make[4]: *** [mod_perl.o] Error 1 make[3]: *** [all] Error 1 make[2]: *** [subdirs] Error 1 make[1]: *** [build-std] Error 2 make: *** [build] Error 2 perl is v5.6.0 cc is 2.7.2.1 from the developer cd. Any hints? - gustav -- Gustav Kristoffer Ek, Netcetera, Brolæggerstræde 4, 1211 København K +45 33147000, +45 2045, fax +45 33146200 http://www.netcetera.dk/
Re: database access
Greg Cope wrote: Jason Liu wrote: Is Apache::DBI absolutely necessary if you want to establish persistent database connection per child? No, you can write your own (it's open source, remember ;-) but why bother - standing on the shoulders of giants etc. Greg Thanks, Jason -Original Message- From: David Hodgkinson [mailto:[EMAIL PROTECTED]] Sent: Monday, November 06, 2000 5:10 AM To: Jason Liu Cc: [EMAIL PROTECTED] Subject: Re: database access "Jason Liu" [EMAIL PROTECTED] writes: In general, how should database connections be handled between parent and child processes? Can you establish database connections from within a handler? Absolutely. And using Apache::DBI caches the connection handle. -- Dave Hodgkinson, http://www.hodgkinson.org Editor-in-chief, The Highway Star http://www.deep-purple.com Apache, mod_perl, MySQL, Sybase hired gun for, well, hire - The problem can also be resolved depending on the type of db used. Yes, you can program your own levels of persistence, but finding a database that can do this for you can be a great help. Postgres, for instance, has concurrent locking on table, row and/or column. Using its Pg module instead of DBD::Pg and DBI handles the opening and closing of the connections too ;) Making your own wrapper module to interface with DBI or Pg is a good thing to do as well. I usually use something that does all the basic statement handling and db connect strings, so then I can just do: use PGForm; # my module to go to Pg PGform($db, $user, $password, $sqlstatement); For a select it returns an array of arrays; for insert/update/delete etc. it returns ok or an error.
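For anyone who wants the standard recipe rather than rolling their own wrapper: loading Apache::DBI before anything else that uses DBI makes later DBI->connect calls with identical arguments reuse the cached per-child handle. A minimal startup.pl sketch (the DSN, user and password below are placeholders; this only runs inside Apache, not standalone):

```perl
# startup.pl -- must be loaded before any module that uses DBI,
# so Apache::DBI can override DBI->connect with its caching version.
use Apache::DBI ();
use DBI ();

# Optional: open the connection once as each child starts,
# instead of on the first request that child serves.
Apache::DBI->connect_on_init(
    "dbi:Pg:dbname=mydb",                 # placeholder DSN
    "user", "password",                   # placeholder credentials
    { RaiseError => 1, AutoCommit => 1 },
);
1;
```

Scripts then call DBI->connect as usual and get the persistent handle for free.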
Re: Apache::ASP
What's your error log say? Often your solution lies there. -- Joshua _ Joshua Chamas Chamas Enterprises Inc. NodeWorks free web link monitoring Huntington Beach, CA USA http://www.nodeworks.com1-714-625-4051 danfromtitan wrote: Hi there I'm running Apache 1.3.14 on a redhat 6.1 machine. I've installed apache from the source files, then all the other modules needed for a successful installation of the Apache::ASP module. Finally the ASP module installation worked fine (the "install test" was ok). My problem is that when I try to run the asp example included in the package, the server tells me there is an internal error. This is happening with every other asp page. Could you tell me please what else I should configure? The ones in the asp README (like PerlHandler or PerlSetVar) make the httpd give error messages and it doesn't work. Thank you very much. Dan __ Do You Yahoo!? Thousands of Stores. Millions of Products. All in one Place. http://shopping.yahoo.com/
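For comparison while reading the error log: the minimal httpd.conf stanza along the lines of the Apache::ASP README looks like this (the Global path is an example; a typo in a block like this is a common cause of 500s at startup):

```apache
<Files ~ "\.asp$">
  SetHandler perl-script
  PerlHandler Apache::ASP
  PerlSetVar Global /tmp/asp   # example state directory; must exist and be writable
</Files>
```

If the PerlHandler line itself makes httpd complain, mod_perl is likely not compiled in or not loaded before this block is parsed.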
Re: Fast DB access
Yes. The tables were indexed. Otherwise we might have seen even more spectacular results Murali - Original Message - From: G.W. Haywood [EMAIL PROTECTED] To: Differentiated Software Solutions Pvt. Ltd [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Wednesday, November 08, 2000 5:44 PM Subject: Re: Fast DB access Hi there, On Wed, 8 Nov 2000, Differentiated Software Solutions Pvt. Ltd wrote: We are returning after extensive tests of various options suggested. Did you try different indexing mechanisms in your tests? 73, Ged.
Re: Fast DB access
Hi, When we rebuild the hash in the RAM it takes too much time. Other questions, my colleagues will answer. Murali Differentiated Software Solutions Pvt. Ltd. 176, Ground Floor, 6th Main, 2nd Block, RT Nagar Bangalore - 560032 Phone : 91 80 3431470 www.diffs-india.com - Original Message - From: Perrin Harkins [EMAIL PROTECTED] To: Differentiated Software Solutions Pvt. Ltd [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Thursday, November 09, 2000 12:19 AM Subject: Re: Fast DB access "Differentiated Software Solutions Pvt. Ltd" wrote: 3. We have a problem rebuilding this database in the ram even say every 1000 requests. What problem are you having with it? We tried using dbm and found it a good compromise solution. We found that it is about 8 times faster than postgres querying. Some dbm implementations are faster than others. Depending on your data size, you may want to try a couple of them. 4. Another surprising finding was we built a denormalised db on the Linux file system itself, by using the directory and file name as the key on which we wanted to search. We found that dbm was faster than this. Did you end up with a large number of files in one directory? When using the file system in this way, it's a common practice to hash the key you're using and then split that across multiple directories to prevent too many files from building up in one and slowing things down. For example: "my_key" -> "dHodeifehH" -> /usr/local/data/dH/odeifehH Also, you could try using mmap for reading the files, or possibly the Cache::Mmap module. We're carrying out more tests to see how scalable dbm is. If you're using read-only data, you can leave the dbm handles persistent between connections. That will speed things up. You could look at BerkeleyDB, which has a built-in shared memory buffer and page-level locking. You could also try IPC::MM, which offers a shared memory hash written in C with a perl interface. Hope these findings are useful to others. They are. Keep 'em coming. 
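Perrin's hashed-directory idea can be sketched in a few lines (the base directory and the two-level split are arbitrary choices for the demo, and Digest::MD5 stands in for whatever hash function you prefer; his "dHodeifehH" hash was just an illustration):

```perl
use strict;
use Digest::MD5 qw(md5_hex);

# Hypothetical data root; substitute your own.
my $base = "/usr/local/data";

# Map a key to a file path, fanning files out over two directory
# levels so no single directory accumulates too many entries.
sub path_for_key {
    my ($key) = @_;
    my $h = md5_hex($key);   # stable 32-char hex digest of the key
    return join '/', $base, substr($h, 0, 2), substr($h, 2, 2), $h;
}

print path_for_key("my_key"), "\n";
```

With a two-hex-digit fanout you get 256 subdirectories per level, which keeps per-directory file counts small even for millions of keys.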
- Perrin
Re: Fast DB access
On Thu, 9 Nov 2000, Differentiated Software Solutions Pvt. Ltd wrote: When we rebuild the hash in the RAM it takes too much time. Did you try using Storable as the data format? It has a function to load from files which is very fast. - Perrin
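A quick sketch of what Perrin is suggesting - dump the hash once with Storable, then load the binary image instead of re-deriving it (the data and the File::Temp usage are just for the demo):

```perl
use strict;
use Storable qw(store retrieve);
use File::Temp qw(tempfile);

# Made-up stand-in for the hash that is expensive to rebuild.
my %table = (campaign1 => [1, 2, 3], campaign2 => [4, 5]);

my ($fh, $file) = tempfile(UNLINK => 1);
close $fh;

store(\%table, $file);        # one fast binary dump at build time

# In the server, loading this is far cheaper than rebuilding the hash:
my $loaded = retrieve($file);
print scalar(keys %$loaded), " keys loaded\n";
```

The rebuild step can run offline on whatever schedule suits the data, and each child just calls retrieve() when it notices a new file.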
Re: running an expect script under mod_perl
Ok, I think I figured out what was causing my problem here. After much trial and error, I figured out that this was only happening with HTTPS under IE. Copying my Apache::Registry script from where it was to this Directory made it work: <Directory "/usr/local/apache/cgi-bin"> SSLOptions +StdEnvVars </Directory> Apparently this sets up the environment correctly. Still very strange, but hopefully it'll work smoothly from now on. Wim Kerkhoff wrote: Hi Everyone, I've been bashing my head over this problem for the last couple of days, so I thought I'd post and see if anybody has experienced something similar, and what they've done to solve it. Basically, I have an Apache::Registry script that creates an expect script, then executes it and reads its output. This script works just fine from the command line. The CGI runs fine when called from Netscape (Linux or Windows). However... when the CGI is called by Internet Explorer, it hangs the CGI. I've tried lots of different things, but I'm still not sure whether mod_perl, the expect script, or my code is to blame. I have pored through the guide, the mailing list archives, man pages, and google, but didn't find anything related to this problem. 
The relevant part of the process list when the expect/cgi script hangs: 14597 ?S 0:32 /usr/local/apache/bin/httpd -DSSL 26476 ?S 0:00 \_ /usr/local/apache/bin/httpd -DSSL 26499 ?S 0:00 | \_ /usr/bin/expect /tmp/file6rqoCY.expect 26500 ttyp9S 0:00 | \_ /usr/bin/expect /tmp/file6rqoCY.expect 26501 ttyp9S 0:00 | \_ sh -c /bin/stty sane /dev/ttyp9 26502 ttyp9S 0:00 | \_ /bin/stty sane 26478 ?S 0:00 \_ /usr/local/apache/bin/httpd -DSSL 26479 ?S 0:00 \_ /usr/local/apache/bin/httpd -DSSL 26482 ?S 0:00 \_ /usr/local/apache/bin/httpd -DSSL 26483 ?S 0:00 \_ /usr/local/apache/bin/httpd -DSSL 26484 ?S 0:00 \_ /usr/local/apache/bin/httpd -DSSL 26485 ?S 0:00 \_ /usr/local/apache/bin/httpd -DSSL 26521 ?S 0:00 \_ /usr/local/apache/bin/httpd -DSSL 26529 ?S 0:00 \_ /usr/local/apache/bin/httpd -DSSL 26531 ?S 0:00 \_ /usr/local/apache/bin/httpd -DSSL 26584 ?S 0:00 \_ /usr/local/apache/bin/httpd -DSSL The expect script: (called with /usr/bin/expect /tmp/file6rqoCY.expect) spawn /usr/bin/openssl smime -sign -in /tmp/fileicFBlw -signer verisign.pem -to [EMAIL PROTECTED] -from [EMAIL PROTECTED] -subject "Testing openssl" expect "phrase:" send "thepassphrase\r\n" expect To get the output of this, I have code that looks like this: open (EXPECT, "/usr/bin/expect /tmp/file6rqoCY.expect < /dev/null |") || die $!; while (my $line = <EXPECT>) { print "got: $line"; } close (EXPECT); As far as I can tell, the CGI (when called by IE) hangs when getting input. It appears to be trying to get input from /dev/ttypX, which of course won't work because it is running in an Apache environment. Is there a way to force the TTY to /dev/null? Thanks, Wim Kerkhoff, Software Engineer Merilus Technologies, Inc. [EMAIL PROTECTED]
Re: persistent database problem
Yes - Original Message - From: Jeff Beard [EMAIL PROTECTED] To: Differentiated Software Solutions Pvt. Ltd [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Monday, October 23, 2000 7:08 PM Subject: Re: persistent database problem Are you using Apache::DBI and establishing a connection in your startup.pl? On Mon, 23 Oct 2000, Differentiated Software Solutions Pvt. Ltd wrote: Hi, I have started with one httpd; and executed the following mod-perl program from the browser. We've configured apache to have persistent DBI The idea is first time the database handle will be inactive and it will print 'INSIDE'. From the second time onwards the database handle will be active and it will print 'OUTSIDE'. This is working. But, sometimes the 'OUTSIDE' comes from the third or fourth time only. (that is it takes more than one attempt to become persistent) Why is it happening like this? Thanks Muthu S Ganesh mod-perl code is here: $rc = $dbh_pg->{Active}; print "$$: $rc\n"; if($rc eq '') { print "INSIDE\n"; $dbh_pg = DBI->connect("dbi:Pg:dbname=adcept_smg_ctrl","postgres","postgres",{RaiseError => 1}) || die $DBI::errstr; } else { print "OUTSIDE\n"; } Differentiated Software Solutions Pvt. Ltd. 176, Ground Floor, 6th Main, 2nd Block, RT Nagar Bangalore - 560032 Phone : 91 80 3431470 www.diffs-india.com -- Jeff Beard ___ Web:www.cyberxape.com Location: Boulder, CO, USA
Re: persistent database problem
Hi, To avoid this problem, we specifically started only one httpd. Murali Differentiated Software Solutions Pvt. Ltd. 176, Ground Floor, 6th Main, 2nd Block, RT Nagar Bangalore - 560032 Phone : 91 80 3431470 www.diffs-india.com - Original Message - From: John K. Sterling [EMAIL PROTECTED] To: Differentiated Software Solutions Pvt. Ltd [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Monday, October 23, 2000 1:35 PM Subject: Re: persistent database problem The db connection happens once for each child - so every time you hit a child for the first time it will open up a new connection - you probably have apache configured to start with 4 or so kids. sterling On Mon, 23 Oct 2000, Differentiated Software Solutions Pvt. Ltd wrote: Hi, I have started with one httpd; and executed the following mod-perl program from the browser. We've configured apache to have persistent DBI The idea is first time the database handle will be inactive and it will print 'INSIDE'. From the second time onwards the database handle will be active and it will print 'OUTSIDE'. This is working. But, sometimes the 'OUTSIDE' comes from the third or fourth time only. (that is it takes more than one attempt to become persistent) Why is it happening like this? Thanks Muthu S Ganesh mod-perl code is here: $rc = $dbh_pg->{Active}; print "$$: $rc\n"; if($rc eq '') { print "INSIDE\n"; $dbh_pg = DBI->connect("dbi:Pg:dbname=adcept_smg_ctrl","postgres","postgres",{RaiseError => 1}) || die $DBI::errstr; } else { print "OUTSIDE\n"; } Differentiated Software Solutions Pvt. Ltd. 176, Ground Floor, 6th Main, 2nd Block, RT Nagar Bangalore - 560032 Phone : 91 80 3431470 www.diffs-india.com
VirtualDocumentRoot problem
Hi, I have a problem with the vhost module. The module does not support logging per virtualhost in separate files... eg I am looking for something like: VirtualLoggingFile /home/logs/access-%0.log something to that effect. Regards, Mark Bojara MICS Networking - 012-661-
Re: problems compiling modperl under macos x pb
[EMAIL PROTECTED] (Gustav Kristoffer Ek) wrote: When I try compiling apache with modperl under macos x pb I got the folowing error. Both the source for apache and modperl is the latest cvs version. I couldn't get it to work either, but I don't have much experience troubleshooting this stuff: http://forum.swarthmore.edu/epigone/modperl/beeginjar Looks like you got farther than I did. My compilation got hung up in the Apache part - since OS X comes with Apache, there should be some way to get this to work, though. I think I wasn't working with Apache CVS, but with 1.3.14 or something. I got no responses from my post, so I'm not sure whether others have succeeded or found anything out. ------ Ken Williams Last Bastion of Euclidity [EMAIL PROTECTED]The Math Forum
Re: Connection Pooling / TP Monitor
On Mon, Nov 06, 2000 at 09:19:04PM -0500, Thomas A. Lowery wrote: On Mon, Nov 06, 2000 at 04:19:13PM +, Tim Bunce wrote: On Thu, Nov 02, 2000 at 10:10:09PM -0800, Perrin Harkins wrote: Tim Bunce wrote: You could have a set of apache servers that are 'pure' DBI proxy servers. That is, they POST requests containing SQL (for prepare_cached) plus bind parameter values and return responses containing the results. Basically I'm proposing that apache be used as an alternative framework for DBI::ProxyServer. Almost all the marshaling code and higher level logic is already in DBI::ProxyServer and DBD::Proxy. Shouldn't be too hard to do and you'd gain in all sorts of ways. I think this is a really good idea. The thing is, any effort put into this kind of thing right now feels like a throw away, because mod_perl 2.0 will solve the problem in the right way with real pooling of database handles (and other objects) between threads. Maybe it's time for DBD:: authors to start checking their code for thread safety? Yeap. How about an explanation of how to test a pure perl driver for thread safety and/or what types of code we need to check for or look into? I'd hope that the Apache 2 docs would include a section on thread safety and how to check/change old code. Tim.
mod_perl success story (fwd)
-- Forwarded message -- Date: Wed, 08 Nov 2000 13:27:49 -0500 From: Richard Dice [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: mod_perl success story For the http://perl.apache.org/stories/ page, consider Webpersonals: http://www.webpersonals.com/ which uses HTML::Embperl. It's very impressive. Cheers, Richard -- Richard Dice * Personal 514 816 9568 * Fax 514 816 9569 ShadNet Creator * http://shadnet.shad.ca/ * [EMAIL PROTECTED] Occasional Writer, HotWired * http://www.hotwired.com/webmonkey/ "squeeze the world 'til it's small enough to join us heel to toe" - jesus jones
cvs commit: modperl-site jobs.html
ask 00/11/08 22:09:01 Modified: . jobs.html Log: jobs'o'rama Revision Changes Path 1.46 +40 -6 modperl-site/jobs.html Index: jobs.html === RCS file: /home/cvs/modperl-site/jobs.html,v retrieving revision 1.45 retrieving revision 1.46 diff -u -r1.45 -r1.46 --- jobs.html 2000/10/30 14:52:29 1.45 +++ jobs.html 2000/11/09 06:09:00 1.46 @@ -24,6 +24,46 @@ <ul> <li> +<!-- added 20001108 - [EMAIL PROTECTED] --> +<a href="http://company.blackboard.com/careers/DisplayJob.cgi?JTID=14"> +BlackBoard</a> - Washington, DC + <li> +<!-- added 20001108 --> +<a href="http://www.bgs.sk/"> +Business Global Systems</a> - Bratislava, Slovakia + <li> +<!-- added 20001108 [EMAIL PROTECTED] --> +<a href="http://www.bgs.sk/"> +Business Global Systems</a> - Bratislava, Slovakia + <li> +<!-- added 20001108 [EMAIL PROTECTED] --> +<a href="http://globalmedia.com/about_us/jobs/tech_srsoftdev.html"> +GlobalMedia.com</a> - Vancouver, Canada + <li> +<!-- added 20001108 [EMAIL PROTECTED] --> +<a href="http://www.agoby.com/jobs.html"> +Agoby.com</a> - Redwood City, CA + <li> +<!-- added 20001108 [EMAIL PROTECTED] --> +<a href="http://www.sybeo.com/job.html"> +Sybeo Software</a> - Menlo Park, CA + <li> +<!-- added 20001108 [EMAIL PROTECTED] --> +<a href="http://www.sybeo.com/job.html"> +Sybeo Software</a> - Menlo Park, CA + <li> +<!-- added 20001108 [EMAIL PROTECTED] --> +<a href="http://spinway.com/public/company/careers.html"> +Spinway</a> - Sunnyvale, CA + <li> <!-- added 2000.10.30 [EMAIL PROTECTED] --> <a href="http://www.ecos.de/x/index.htm/jobs/r_jobs.htm"> ecos gmbh </a> - Dienheim (near Mainz), Germany @@ -257,12 +297,6 @@ <!-- added 990712 - [EMAIL PROTECTED] --> <a href="http://www.zing.com/about/jobs.html"> Zing Networks</a> - San Francisco, CA - -<li> -<!-- added 990712 - [EMAIL PROTECTED] --> -<a href="http://company.blackboard.net/company/job.html#10"> -BlackBoard</a> - Washington, DC - <li> <!-- added 990617 - [EMAIL PROTECTED] -->