Re: More stuff not working with conversion to modperl?
This seemed to hit it on the head. I really don't understand WHY this makes a difference in mod_perl and not in non-mod_perl services? I changed the count variable $i = 1 to my $i = 1 and $msgnum to my $msgnum, and it now works the way the non-mod_perl server does. Correctly. Isn't there a way to clear global variables to null after a web transaction is complete? Thanks for the tip.

John

You might want to try declaring the file handles as local *MYFILE or whatever. You have to be very careful about making global variables with mod_perl, since they have the benefit of sticking around after the web transaction is complete.

ryan

----- Original Message -----
From: John Buwa [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, July 31, 2001 5:26 AM
Subject: More stuff not working with conversion to modperl?

Hello, I am trying to finish up my scripts' conversion to mod_perl, and here is a routine where I truly do not understand why it is not working. This is the same code that is running on both the mod_perl server I am using to port and test scripts and the live non-mod_perl Apache, where it works fine. This is a routine that deletes a line from a mail file stored in this format:

|user|date|time|message|status|

If the file is, say, 20 lines and I am removing line 5 (identified by $msgnum), it is supposed to remove only line 5 and print the rest back to the user's mail file. The same code on the production machine works fine. On mod_perl I have experienced some really strange behavior, none of it correct: either it removes multiple lines from the mail file along with the line it was supposed to, OR it removes the entire mail file, and some time later a portion of the mail file reappears, garbled. Could this possibly have something to do with the flocked files? I am using:

use Fcntl ':flock'; # import LOCK_* constants

in all my scripts. Is this still valid on mod_perl servers?
if (($user_matched == 1) && ($pass_matched == 1)) {
    $i = 1;                      # init count variable
    open(MYFILE, "mail/$user");  # open user's mail box to read in
    @OLDMAIL = <MYFILE>;
    close(MYFILE);
    open(NEW, ">mail/$user");    # open new copy of user's mail box
    flock(NEW, LOCK_SH);
    seek(NEW, 0, 0);             # rewind
    foreach $line (@OLDMAIL) {
        chomp($line);
        unless ($i == $msgnum) {
            print NEW "$line\n";
        }
        $i++;                    # increment cnt
    }                            # end of foreach loop
    flock(NEW, LOCK_UN);         # release flock
    close(NEW);                  # close new mail file
}                                # end of if
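An editorial sketch of how the routine above could be written to survive Apache::Registry (untested against the poster's setup; the function name delete_message and the test file layout are mine): all variables are lexical, so nothing persists between requests, and LOCK_EX is taken instead of LOCK_SH since the file is opened for writing.

```perl
use strict;
use warnings;
use Fcntl ':flock';

# Delete line number $msgnum from the mail file at $path, keeping
# the rest. Lexical (my) variables mean no state leaks between
# Apache::Registry requests; LOCK_EX because we are writing.
sub delete_message {
    my ($path, $msgnum) = @_;

    open(my $in, '<', $path) or die "can't read $path: $!";
    my @oldmail = <$in>;
    close($in);

    open(my $out, '>', $path) or die "can't write $path: $!";
    flock($out, LOCK_EX) or die "can't lock $path: $!";
    my $i = 1;
    for my $line (@oldmail) {
        chomp $line;
        print $out "$line\n" unless $i == $msgnum;
        $i++;
    }
    flock($out, LOCK_UN);
    close($out);
}
```

A better design would also lock before reading and write to a temp file that is renamed into place, but the sketch above keeps the shape of the original routine.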
Re: Bug??
----- Original Message -----
From: Stas Bekman [EMAIL PROTECTED]
To: Chris Rodgers [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Wednesday, August 01, 2001 05:16
Subject: Re: Bug??

On Tue, 31 Jul 2001, Chris Rodgers wrote:

Thanks for that. However, I've already seen this. The problem is that I'm requesting pages at http://my.server.com/perl/blah.pl and also https://my.server.com/perl/blah.pl. Now these should be different scripts, and Apache is set up with a completely different document and perl root for the http and https servers. Unfortunately, these still get confused, even with the NameWithVirtualHost code. Hence, I thought of hacking the .pm files to include the server port as well as the name in the uniquely generated namespace. Any other ideas??

Hmm, I think you are the first one to hit this issue. Try this (untested):

--- ./lib/Apache/Registry.pm.orig Wed Aug  1 11:06:49 2001
+++ ./lib/Apache/Registry.pm Wed Aug  1 11:11:04 2001
@@ -70,7 +70,8 @@
     if ($Apache::Registry::NameWithVirtualHost && $r->server->is_virtual) {
         my $name = $r->get_server_name;
-        $script_name = join "", $name, $script_name if $name;
+        $script_name = join "", (exists $ENV{HTTPS} ? 'https' : ''),
+            $name, $script_name if $name;
     }
     # Escape everything into valid perl identifiers

Based on the earlier discussion about detecting https :) That will take care of standard http/https, but what if we have a custom client connecting on weird ports _without_ putting the port in the URL? Wouldn't it make sense to just take $ENV{SERVER_PORT} and join() _that_ to make the unique filename? That'll take care of all the weird server combinations possible, including SSL, as only one listening socket can physically bind to a port. It'll even transparently take care of weird back-end server problems: no matter how many servers seem to be on http://frontend:80, the backends MUST be unique combinations of either different hostnames or different ports...
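An untested sketch of the SERVER_PORT idea from the message above: build the unique script namespace from host plus port rather than an https flag, so http, https and any oddball backend port each get their own package. The function name and call shape here are mine, not Apache::Registry's.

```perl
# Hypothetical helper in the spirit of the patch above: join the
# server name, the listening port, and the script name into one
# string that is unique per (host, port) combination.
sub unique_script_name {
    my ($host, $port, $script_name) = @_;
    return join '', $host, $port, $script_name;
}
```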
Just my $0.02

Issac

PGP Key 0xE0FA561B - Fingerprint: 7E18 C018 D623 A57B 7F37 D902 8C84 7675 E0FA 561B
RE: Bug??
-----Original Message-----
From: Stas Bekman [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 31, 2001 11:17 PM
To: Chris Rodgers
Cc: [EMAIL PROTECTED]
Subject: Re: Bug??

On Tue, 31 Jul 2001, Chris Rodgers wrote:

Thanks for that. However, I've already seen this. The problem is that I'm requesting pages at http://my.server.com/perl/blah.pl and also https://my.server.com/perl/blah.pl. Now these should be different scripts, and Apache is set up with a completely different document and perl root for the http and https servers. Unfortunately, these still get confused, even with the NameWithVirtualHost code. Hence, I thought of hacking the .pm files to include the server port as well as the name in the uniquely generated namespace. Any other ideas??

Hmm, I think you are the first one to hit this issue. Try this (untested):

--- ./lib/Apache/Registry.pm.orig Wed Aug  1 11:06:49 2001
+++ ./lib/Apache/Registry.pm Wed Aug  1 11:11:04 2001
@@ -70,7 +70,8 @@
     if ($Apache::Registry::NameWithVirtualHost && $r->server->is_virtual) {
         my $name = $r->get_server_name;
-        $script_name = join "", $name, $script_name if $name;
+        $script_name = join "", (exists $ENV{HTTPS} ? 'https' : ''),
+            $name, $script_name if $name;
     }

of course, that won't work with PerlSetupEnv Off - maybe use $r->subprocess_env('https') instead :)

what about just moving to Apache::RegistryNG, since it subclasses PerlRun, which uses the filename and not the URL?

--Geoff
RE: More stuff not working with conversion to modperl?
-----Original Message-----
From: John Buwa [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, August 01, 2001 5:03 AM
To: [EMAIL PROTECTED]
Subject: Re: More stuff not working with conversion to modperl?

This seemed to hit it on the head. I really don't understand WHY this makes a difference in mod_perl and not non-mod_perl services?

as already (partly) suggested...

use strict;
use warnings;

read http://perl.apache.org/guide/porting.html

there are lots of things to consider when coming from a legacy CGI environment - you've hit the most common. Registry.pm requires a bit more care than other CGI environments and takes a while to get a feel for. the Guide should help...

HTH

--Geoff
RE: Apache::Reload???
Does that work under Unix only? I am on NT and it does not appear to work. Can someone clarify? Thanks, Scott

-----Original Message-----
From: Stas Bekman [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 31, 2001 9:38 PM
To: Bryan Coon
Cc: Matt Sergeant; '[EMAIL PROTECTED]'
Subject: Re: Apache::Reload???

On Tue, 31 Jul 2001, Bryan Coon wrote:

I must have missed something in setting up Apache::Reload. What I want is simple: when I make a change in my scripts, I shouldn't have to restart the Apache server... I put PerlInitHandler Apache::Reload in my httpd.conf, and added 'use Apache::Reload' to the modules that I want to be reloaded on change. But I get the following warning message in my apache logs: Apache::Reload: Can't locate MyModule.pm for every module I have added Apache::Reload to. How do I do this so it works? The docs on Reload are a bit sparse...

Your problem probably comes from the fact that @INC is reset to its original value after it gets temporarily modified in your scripts. Of course, when Apache::Reload tries to find the file to test for its modification time, it cannot find it. The solution is to extend @INC at server startup to include the directories you load files from which aren't in @INC. For example, if you have a script which loads MyTest.pm from /home/stas/myproject:

use lib qw(/home/stas/myproject);
require MyTest;

Apache::Reload won't find this file unless you alter @INC in startup.pl:

# startup.pl
use lib qw(/home/stas/myproject);

and restart the server. I'll add these notes to the guide. Matt probably wants to add these to the Apache::Reload docs as well :)

_________________________________________________________________
Stas Bekman JAm_pH -- Just Another mod_perl Hacker
http://stason.org/ mod_perl Guide http://perl.apache.org/guide
mailto:[EMAIL PROTECTED] http://apachetoday.com http://eXtropia.com/
http://singlesheaven.com http://perl.apache.org http://perlmonth.com/
Re: More stuff not working with conversion to modperl?
John Buwa wrote: Isn't there a way to clear global variables to null after a web transaction is complete?

Apache::PerlRun does that.

- Perrin
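For reference, a minimal httpd.conf stanza (paths hypothetical) that runs scripts under Apache::PerlRun, which flushes the script's namespace, globals included, after every request:

```apache
# Run legacy CGI scripts under Apache::PerlRun: the script's
# package, including its global variables, is cleaned up after
# each request, at some cost in speed versus Apache::Registry.
Alias /perl-run/ /home/httpd/perl/
<Location /perl-run>
    SetHandler perl-script
    PerlHandler Apache::PerlRun
    Options +ExecCGI
</Location>
```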
Re: Ultimate Bulletin Board? Jezuz.
To all who responded to my original post about UBB: thanks. Thanks to all the great feedback, we have decided to punt UBB and are using WWWThreads instead.

you might want to look into vBulletin; it is used on a lot of different sites, is written in PHP with a MySQL back end, and looks very similar to UBB.

yes, but as an engineer I can't condone the use of PHP, sorry...

You might want to consider WWWThreads (http://www.wwwthreads.com/). Code is simple to read/understand, and it works out of the box under Apache::Registry.

ok, so now I am using WWWThreads, and although the code is cleaner, it's still pretty ugly. There's SQL embedded all throughout the perl everywhere (who does this?! oh my god, are they on crack?), not to mention the HTML embedded all throughout the perl (are they on glue?), so it's pretty much a dog's breakfast of code, written by a script hacker who at least knew enough to write neat, well-commented rubbish. Having said all that, it's much cheaper than UBB, far superior in overall design, and DB-driven... and it works beautifully, so I can't complain too much. :-) It still makes my life miserable integrating it with an existing user database... I wish programming were like driving; people should be licensed to program.

cheerz

kyle
Software Engineer
Central Park Software
http://www.centralparksoftware.com
RE: Bug??
On Wed, 1 Aug 2001, Geoffrey Young wrote:

-----Original Message-----
From: Stas Bekman [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 31, 2001 11:17 PM
To: Chris Rodgers
Cc: [EMAIL PROTECTED]
Subject: Re: Bug??

On Tue, 31 Jul 2001, Chris Rodgers wrote:

Thanks for that. However, I've already seen this. The problem is that I'm requesting pages at http://my.server.com/perl/blah.pl and also https://my.server.com/perl/blah.pl. Now these should be different scripts, and Apache is set up with a completely different document and perl root for the http and https servers. Unfortunately, these still get confused, even with the NameWithVirtualHost code. Hence, I thought of hacking the .pm files to include the server port as well as the name in the uniquely generated namespace. Any other ideas??

Hmm, I think you are the first one to hit this issue. Try this (untested):

--- ./lib/Apache/Registry.pm.orig Wed Aug  1 11:06:49 2001
+++ ./lib/Apache/Registry.pm Wed Aug  1 11:11:04 2001
@@ -70,7 +70,8 @@
     if ($Apache::Registry::NameWithVirtualHost && $r->server->is_virtual) {
         my $name = $r->get_server_name;
-        $script_name = join "", $name, $script_name if $name;
+        $script_name = join "", (exists $ENV{HTTPS} ? 'https' : ''),
+            $name, $script_name if $name;
     }

of course, that won't work with PerlSetupEnv Off - maybe use $r->subprocess_env('https') instead :)

Are you sure? I think with PerlSetupEnv Off you don't get the usual CGI env vars set; is HTTPS one of these? In any case, $ENV{SERVER_PORT} or $r->subprocess_env('https') seem to be fine, as long as you get a unique string.

what about just moving to Apache::RegistryNG, since it subclasses PerlRun which uses the filename and not the URL?

true. Chris, does it solve your problem?
[OT] Inspired by closing comments from the UBB thread.
All,

In his closing comments about UBB, Kyle Dawkins made a statement that got me wondering. He said "there's SQL embedded all throughout the perl everywhere (who does this?! oh my god, are they on crack?)". This comment got me wondering about alternatives to embedding SQL into the code of a program. The alternatives I see are to use stored procedures, which would tie one to a certain DB server (or one would have to be proficient in many servers and write stored procedures for all server flavors, which would mean being a very busy Perl and SQL guru), or possibly storing the SQL in some sort of external file structure accessible via Storable, XML::Simple or some other means. It would be interesting to know how other people have solved that problem. Currently, we are essentially using embedded SQL in our apps.

Thanks in advance.

--Joe Breeden
Re: system()/exec() ?
I use Apache::SubProcess, but how do system/exec work for a perl script? Writing system "script.pl"; does not work; it only works if I write system "/usr/xxx/perl script.pl"; and in that manner an error occurred with the request method in the called CGI script (script.pl). Is that right?

----- Original Message -----
From: Stas Bekman [EMAIL PROTECTED]
To: Mauricio Amorim [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Tuesday, July 31, 2001 11:50 PM
Subject: Re: system()/exec() ?

On Tue, 31 Jul 2001, Mauricio Amorim wrote:

Hi, I saw a discussion in April by Mike Austin about the use of the exec and system commands with mod_perl. Does anybody know if it is possible to use the system and exec commands? I tried to use them, but the script doesn't execute and Apache displays nothing in logs/error_log.

http://perl.apache.org/guide/porting.html#Output_from_system_calls
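Part of what Mauricio sees is ordinary path resolution rather than a mod_perl quirk: system "script.pl" relies on the shell finding script.pl, and under Apache the current directory and PATH are usually not what a CGI author expects, so being explicit about the interpreter helps. A small sketch (the one-liner below stands in for a hypothetical script.pl; $^X is the path of the perl binary currently running):

```perl
# List-form system() skips the shell entirely; $^X avoids
# hard-coding an interpreter path like /usr/xxx/perl.
# In real use the second argument would be the absolute path
# to the script, e.g. '/home/httpd/cgi/script.pl'.
my $rc = system($^X, '-e', 'exit 0');
die "child failed: $?" unless $rc == 0;
```

Note that a script run this way is a plain child process: it does not receive the Apache request object, so CGI request handling inside it behaves as it would from the command line.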
Re: [OT] Inspired by closing comments from the UBB thread.
I think a lot of people's approach, including mine, is to have OO Perl modules for all database access. In my code (I use Mason), a web page only gets its data through calls like this:

my $obj = NAIC::User->new(DBH => $dbh, EMAIL => '[EMAIL PROTECTED]');
$obj->load;
my $groups_list = $obj->groups();

That way any needed SQL changes, or even ports to a new database, don't have to be done everywhere in my code.

On Wed, Aug 01, 2001 at 10:12:45AM -0500, Joe Breeden wrote: All, In his closing comments about UBB Kyle Dawkins made a statement that got me wondering. He said there's SQL embedded all throughout the Perl everywhere (who does this?! oh my god, are they on crack?). ... It would be interesting to know how other people have solved that problem. Currently, we are essentially using embedded SQL in our apps.

-- Barry Hoggard
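A sketch of the pattern Barry describes, with all SQL hidden behind an object interface. The package, tables and columns here (My::User, users, user_groups) are made up for illustration; this is not the NAIC code.

```perl
package My::User;   # hypothetical class in the style of NAIC::User
use strict;

sub new {
    my ($class, %args) = @_;
    my $self = { dbh => $args{DBH}, email => $args{EMAIL} };
    return bless $self, $class;
}

# All the SQL lives inside the class; callers never see it.
sub load {
    my $self = shift;
    ($self->{id}, $self->{name}) = $self->{dbh}->selectrow_array(
        'SELECT id, name FROM users WHERE email = ?',
        undef, $self->{email},
    );
    return $self;
}

sub groups {
    my $self = shift;
    return $self->{dbh}->selectcol_arrayref(
        'SELECT group_name FROM user_groups WHERE user_id = ?',
        undef, $self->{id},
    );
}

1;
```

Porting to a new database then means editing this one class, not every page that needs user data.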
Not embedding SQL in perl (was RE: [OT] Inspired by closing comments from the UBB thread.)
Joe Breeden [mailto:[EMAIL PROTECTED]] wrote: ... wondering about alternatives to embedding SQL into the code of a program. ... It would be interesting to know how other people have solved that problem. One approach is to use something like Ima::DBI, which I'm currently toying with. With Ima::DBI, you still embed your SQL in your perl code, but at least you put all of your SQL into a single module somewhere, and you do so in a very structured way. To access the database from the rest of your program, you call methods of your database query object. This is a lot cleaner than whipping up a query string every time you want to hit the database. It's also a lot more flexible. You could, for instance, create different database classes for different database backends and still keep the programming interface the same. Of course, you could do all this without Ima::DBI by rolling your own custom database wrapper classes. But Ima::DBI also handles some mod_perl DBI issues, such as guaranteeing one DBI statement handle per process. Michael
Re: [OT] Inspired by closing comments from the UBB thread.
All, In his closing comments about UBB Kyle Dawkins made a statement that got me wondering. He said there's SQL embedded all throughout the Perl everywhere (who does this?! oh my god, are they on crack?). This comment got me wondering about alternatives to embedding SQL into the code of a program. Alternatives I see are to use stored procedures, which would limit one to using a certain DB server (or to be proficient in many servers and write stored procedures for all server flavors, which would mean one is a very busy Perl and SQL guru), or possibly storing the embedded SQL in some sort of external file structure accessible via Storable, XML::Simple or some other means.

I, as a crackhead, do embed my SQL in my modules. I've never liked the idea of a central SQL library... too many dependencies. If I change one query in the library, I could end up breaking lots of modules using that query. I have, on occasion, placed all the SQL into a %SQL global (since it's static); then it gets shared by all the apache processes when the module loads.

Rob

--
A good magician never reveals his secret; the unbelievable trick becomes simple and obvious once it is explained. So too with UNIX.
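A sketch of Rob's %SQL-global approach (module and query names are hypothetical): the hash is populated at compile time, so the parent Apache process loads it once and the forked children share those read-only pages.

```perl
package My::Queries;   # hypothetical query module
use strict;

# Static SQL, keyed by name. Loaded when the module is compiled
# (e.g. from startup.pl), so every Apache child shares one copy.
our %SQL = (
    get_user  => 'SELECT id, name FROM users WHERE email = ?',
    get_group => 'SELECT group_name FROM groups WHERE id = ?',
);

# Tiny accessor so callers don't poke at the hash directly.
sub sql { return $SQL{ $_[0] } }

1;
```

The trade-off the thread discusses applies: one module now depends on many callers, so changing a shared query can break several of them at once.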
Re: [OT] Inspired by closing comments from the UBB thread.
I think a lot of people's approach, including mine, is to have OO Perl modules for all database access. In my code (I use Mason), a web page only gets its data through calls like this:

my $obj = NAIC::User->new(DBH => $dbh, EMAIL => '[EMAIL PROTECTED]');
$obj->load;
my $groups_list = $obj->groups();

That way any needed SQL changes, or even ports to a new database, don't have to be done everywhere in my code.

That's what I do too. I suppose this could still be called embedded SQL, though. You could put your SQL in a separate file, but I don't like that approach, because it doesn't seem like you would be changing the SQL without changing the other code very often. Having your SQL right next to where it's being used is convenient, and a HERE doc makes it easy to read.

- Perrin
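Perrin's point in code: the SQL sits right next to the call that uses it, and a HERE doc keeps it readable. The table and column names below are invented for illustration.

```perl
# A readable multi-line query via a single-quoted HERE doc
# (no interpolation, so literal $ and @ in SQL are safe).
my $sql = <<'END_SQL';
SELECT m.id, m.subject, m.sent_at
FROM   messages m
WHERE  m.user_id = ?
ORDER BY m.sent_at DESC
END_SQL

# In real code this would be executed with DBI, e.g.:
# my $rows = $dbh->selectall_arrayref($sql, undef, $user_id);
```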
Re: [OT] Inspired by closing comments from the UBB thread.
* Joe Breeden ([EMAIL PROTECTED]) [010801 10:25]:

All, In his closing comments about UBB Kyle Dawkins made a statement that got me wondering. He said there's SQL embedded all throughout the Perl everywhere (who does this?! oh my god, are they on crack?). This comment got me wondering about alternatives to embedding SQL in to the code of a program. Alternatives I see are to use stored procedures which would limit one to using a certain DB server (or to be proficient in many servers and write stored procedures for all server flavors which would mean one is a very busy Perl and SQL guru) or possibly storing the embedded SQL in some sort of external file structure accessible via storable, XML::Simple or some other means. It would be interesting to know how other people have solved that problem. Currently, we are essentially using embedded SQL in our apps.

As others have mentioned, one way would be to wrap your records in objects and have access, queries, etc. be centralized there. <plug>SPOPS (Simple Perl Object Persistence with Security) does this for you and gives you object linking and high-level database independence for free. It's on CPAN.</plug>

Chris

--
Chris Winters ([EMAIL PROTECTED])
Building enterprise-capable snack solutions since 1988.
RE: Bug??
-----Original Message-----
From: Stas Bekman [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, August 01, 2001 10:50 AM
To: Geoffrey Young
Cc: Chris Rodgers; [EMAIL PROTECTED]
Subject: RE: Bug??

[snip]

of course, that won't work with PerlSetupEnv Off - maybe use $r->subprocess_env('https') instead :)

Are you sure? I think with PerlSetupEnv Off you don't get the usual CGI env vars set; is HTTPS one of these?

my take on this is that mod_perl uses two Apache C routines, ap_add_common_vars and ap_add_cgi_vars, to set up the environment using the contents of the subprocess_env table. the former sets up things like REMOTE_PORT and translates the incoming headers into HTTP_*. the latter adds CGI-spec variables to the subprocess_env table, like GATEWAY_INTERFACE. in both cases, all that is happening is that the subprocess_env table is populated; it is up to mod_cgi (which uses an ap_* call) or mod_perl (which uses its own) to move the contents of subprocess_env into %ENV. HTTPS is actually set up by mod_ssl using subprocess_env and relies on mod_cgi or mod_perl to populate %ENV, which PerlSetupEnv Off suppresses. at any rate, my analysis may be slightly off, but in practice setting PerlSetupEnv Off hides $ENV{HTTPS}, yet it is still there in $r->subprocess_env.

--Geoff
Re: [OT] Inspired by closing comments from the UBB thread.
I'd second the original question. I've always embedded the SQL (what's the S for?) in the code; isn't that the point of the wonderful DBD::* packages? As far as modularizing database calls, there are a couple of reasons I've had problems with that: I found the methods being rewritten to handle about as many options as SQL itself (what if I want to sort differently? what if I need a slightly different statement?). My solution is to embed SQL most of the time and modularize basic calls (get_user, get_group type stuff). In addition, I'd like to rebut the original statement: "not to mention the HTML embedded all throughout the perl (are they on glue?)". What's the alternative there? Embed perl in the HTML?

On Wed, 1 Aug 2001, Barry Hoggard wrote:

I think a lot of people's approach, including mine, is to have OO Perl modules for all database access. In my code (I use Mason), a web page only gets its data through calls like this:

my $obj = NAIC::User->new(DBH => $dbh, EMAIL => '[EMAIL PROTECTED]');
$obj->load;
my $groups_list = $obj->groups();

That way any needed SQL changes, or even ports to a new database, don't have to be done everywhere in my code.

On Wed, Aug 01, 2001 at 10:12:45AM -0500, Joe Breeden wrote: All, In his closing comments about UBB Kyle Dawkins made a statement that got me wondering. He said there's SQL embedded all throughout the Perl everywhere (who does this?! oh my god, are they on crack?). ... It would be interesting to know how other people have solved that problem. Currently, we are essentially using embedded SQL in our apps.

-- Barry Hoggard
Re[2]: [OT] Inspired by closing comments from the UBB thread.
On Wednesday, August 01, 2001, Perrin Harkins wrote the following about [OT] Inspired by closing comments from the UBB thread:

ph Having your SQL right next to where it's being used is convenient,
ph and a HERE doc makes it easy to read.

Agreed. IMHO, it also makes it easier to maintain months/years down the road, when you have forgotten what the SQL (or the entire program) was supposed to do anyway, have turned the module over to a junior staff member who has never seen it before, etc, etc. But it seems to me it's a bit of a style thing, with pros and cons on each side.

Best Regards,
Mike
e-mail: [EMAIL PROTECTED]
Re: Ultimate Bulletin Board? Jezuz.
On Wed, 1 Aug 2001, Kyle Dawkins wrote: having said all that, it's much cheaper than UBB, far superior in overall design, and DB-driven... and it works beautifully, so i can't complain too much. :-) And has at least one major security hole (at least the 3.51 version did, which was the last free version). Do a search at Security Focus for details. Patching it is relatively easy. -dave /*== www.urth.org We await the New Sun ==*/
Re: [OT] Inspired by closing comments from the UBB thread.
not to mention the HTML embedded all throughout the perl (are they on glue?) What's the alternative there? Embed perl in the HTML? You could do that (Text::Template), or you could use a tool like Template Toolkit or HTML::Template. See http://perl.apache.org/features/tmpl-cmp.html for a description of the available options. - Perrin
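To make the separation concrete, here is a deliberately tiny stand-in for what those tools do (the `[% name %]` placeholder syntax is borrowed from Template Toolkit; the function itself is mine, not any module's API): the HTML lives in its own string or file, and the Perl only supplies values.

```perl
use strict;

# Minimal placeholder filler: replaces [% name %] tokens with
# values from %vars. Real templating tools (HTML::Template,
# Template Toolkit) add loops, conditionals, escaping, caching...
sub fill_template {
    my ($template, %vars) = @_;
    $template =~ s/\[%\s*(\w+)\s*%\]/exists $vars{$1} ? $vars{$1} : ''/ge;
    return $template;
}

my $html = fill_template('<h1>Hello, [% name %]!</h1>', name => 'world');
```

The point is that a designer can edit the `<h1>...</h1>` part without ever touching Perl, which is the glue-sniffing complaint from the earlier message in a nutshell.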
Re: [OT] Inspired by closing comments from the UBB thread.
I wasn't clear enough... My point was more six of one, half a dozen of the other. For a public package, keeping dependencies down to a minimum is a bonus, as is keeping performance up by not having to pre-process HTML looking for perl code. It can come down to a choice between maintainability and better performance (to whatever degree). I don't see any glue-sniffing symptoms in choosing embedded HTML in perl over embedded perl in HTML.

Jay

On Wed, 1 Aug 2001, Perrin Harkins wrote: not to mention the HTML embedded all throughout the perl (are they on glue?) What's the alternative there? Embed perl in the HTML? You could do that (Text::Template), or you could use a tool like Template Toolkit or HTML::Template. See http://perl.apache.org/features/tmpl-cmp.html for a description of the available options. - Perrin
RE: Apache::Reload???
That worked :) Ahh, at least I solved ONE problem! Thanks! Bryan The solution is to extend @INC at the server startup to include directories you load the files from which aren't in @INC. For example, if you have a script which loads MyTest.pm from /home/stas/myproject: use lib qw(/home/stas/myproject); require MyTest; Apache::Reload won't find this file, unless you alter @INC in startup.pl: startup.pl -- use lib qw(/home/stas/myproject); and restart the server
Re: [OT] Inspired by closing comments from the UBB thread.
Jay Jacobs wrote: I don't see any glue-sniffing symptoms from choosing embedded html in perl over embedded perl in html.

Unless, of course, you're the graphic artist who has been tasked with changing the look and feel of an application using embedded perl (which you, as the graphics person, probably don't know anything about) while the perl developer works on the perl portions of the code; then you might be sniffing some glue. This is the motivation for some (if not most) of the templating solutions Perrin mentioned.

--Alex
Re: [OT] Inspired by closing comments from the UBB thread.
Guys guys guys

Mixing HTML with Perl with SQL is bad and evil on every single possible level. For those who don't know how to split apart your perl from your HTML, I suggest you read some of Perrin's recent posts. There are so many ways to do it, I won't even bother with talking about them here.

As for SQL, I just wish people would expand their horizons a little and start doing a bit of reading. There are so many different ways to avoid embedding SQL in application code, and I sincerely wish programmers would THINK before just coding... it's what differentiates scripters from engineers, and I suggest everyone who embeds SQL in their perl for anything other than quick-and-dirty hacks start considering other options for the good of the programming community AND THE SANITY OF WHOEVER HAS TO MAINTAIN OR ALTER YOUR CODE. If you wish to see one enlightened approach, please read this:

http://developer.apple.com/techpubs/webobjects/DiscoveringWO/EOFArchitecture/index.html

Fine, it's Java (yuk). Fine, it's Apple (yuk). But it used to be *NeXT* and it used to be *Obj-C*, both very very fine things indeed. One of the projects I am working on right now, for example, involves an awful lot of DB access. There is not a single line of SQL in our application code. It's 100% mod_perl. This is a good thing. To be fair, if you want to talk to a DB at all, you will need SQL somewhere; what I mean by embedding SQL in perl is embedding it in *application* logic. It has no purpose there, and you might as well be using some dumbass technology like CF or PHP, because your code will be just as maintainable.

I just implore readers of this list to start thinking more as engineers and less as script kiddies. We all love mod_perl and its power and we want it to succeed. We'll only get somewhere with it if we actually make the effort to write better code. Mixing SQL and perl is not better code.

Cheers to all

kyle
Software Engineer
Central Park Software
http://www.centralparksoftware.com
One more small Apache::Reload question
First, thanks to all the great suggestions, it looks like it works fine. However, now my logs are loaded with a ton of subroutine redefined warnings (which is normal I suppose?). I can certainly live with this in a development environment, but thought I would check to see if it is expected, and if it can be turned off while still enabling Reload. Thanks! Bryan
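Whether Apache::Reload itself offers a switch for those warnings I'm not certain, but the stock Perl (5.6+) mechanism for quieting just that warning class in a module that gets re-compiled looks like this; the sub name is invented for illustration:

```perl
use strict;
use warnings;

# Disable only the "Subroutine ... redefined" warning in this
# lexical scope; all other warnings stay on.
no warnings 'redefine';

sub greet { return 'version 1' }
sub greet { return 'version 2' }   # silently replaces version 1
```

Put inside the module being reloaded, this suppresses the noise for that module only, which is usually preferable to turning warnings off globally in development.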
Re: [OT] Inspired by closing comments from the UBB thread.
As for SQL, I just wish people would expand their horizons a little and start doing a bit of reading. There are so many different ways to avoid embedding SQL in application code and I sincerely wish programmers would THINK before just coding... it's what differentiates scripters from engineers and I suggest everyone who embeds SQL in their perl for anything other than quick-and-dirty hacks start considering other options for the good of the programming community AND THE SANITY OF WHOMEVER HAS TO MAINTAIN OR ALTER YOUR CODE. If you wish to see one enlightened approach, please read this: http://developer.apple.com/techpubs/webobjects/DiscoveringWO/EOFArchitecture /index.html I appreciate your kind words about my templating posts, but I don't agree that an object-relational mapper is always the right answer for database integration. Using objects to model your data, and having the objects manage their own persistence through SQL calls is faster and easier for many things, and it allows you to do things that can't be done with an O/R mapper, like advanced SQL tuning (optimizer hints), aggregation of commonly fetched data into one query, etc. You still get encapsulation of the SQL behind the object interface, and your high-level logic doesn't need to use any SQL directly. It would really be nice if someone could write an overview of the O/R mapping tools for Perl. I know Dave Rolsky was working on one, but it's a big job and he's busy with Mason. - Perrin
Re: [VERY OT] Inspired by closing comments from the UBB thread.
On Wed, 1 Aug 2001, Kyle Dawkins wrote: Mixing HTML with Perl with SQL is bad and evil on every single possible level.

This bugged me... TMTOWTDI applies on so many levels. The right way to do something is not always the technically best way to do something. If you work in a large corporate environment with many hands in the development pot, then hey, I agree, and there should probably be a corporate document stating the guidelines and restrictions of development. If, however, you work in a two-person company where you barely have enough time to go to the bathroom, let alone think about creating your own database abstraction layer for a custom application, and maintaining code means changing a link once a month, then by all means embed away, and take the quick development path over performance or maintainability. On the other hand, if you are completely broke and work on a non-profit project and the only system you have is a P200 with 64M of memory, then you may want to think about avoiding templating systems and doing nothing but a single module with SQL and HTML embedded in the Perl. There is always more than one way to do it, and there's usually more than one right way to do it. Let's keep that in mind.

Jay
Re: [OT] Inspired by closing comments from the UBB thread.
All (and Perrin)

If you wish to see one enlightened approach, please read this: http://developer.apple.com/techpubs/webobjects/DiscoveringWO/EOFArchitecture/index.html

as I said... *ONE* enlightened approach :-) I think you'd find that EOF (the persistence framework in that example) does exactly what you speak of below. Nevertheless, I absolutely agree that the implementation is very much dependent on circumstances. I just wanted to give an example of an object layer that doesn't require any SQL... and like I said in my previous post, there are many ways to do this. Our current persistence layer uses a combination of an O/R mapper and objects that manage their own persistence.

I appreciate your kind words about my templating posts, but I don't agree that an object-relational mapper is always the right answer for database integration. Using objects to model your data, and having the objects manage their own persistence through SQL calls, is faster and easier for many things, and it allows you to do things that can't be done with an O/R mapper, like advanced SQL tuning (optimizer hints), aggregation of commonly fetched data into one query, etc. You still get encapsulation of the SQL behind the object interface, and your high-level logic doesn't need to use any SQL directly.

Concur, see above.

It would really be nice if someone could write an overview of the O/R mapping tools for Perl. I know Dave Rolsky was working on one, but it's a big job and he's busy with Mason.

I've taken a look at many of them (Tangram, a few others) and haven't been impressed with any of them. I think part of the problem is that they're all being developed in a bit of a vacuum. But let's capitalise on the interest that this thread has generated to start a push for something that we can all use. I think even the dudes who embed their SQL in perl could be made to realise the benefits if we all started using a common framework. Thoughts?
kyle Software Engineer Central Park Software http://www.centralparksoftware.com
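Perrin's self-persisting-object approach can be sketched in a few lines of Perl. This is a hypothetical class (the module name, table, and columns are invented for illustration, not code from either poster):

```perl
package My::User;
use strict;

# Hypothetical self-persisting object: every bit of SQL lives behind
# the object interface, so high-level code never sees it.
sub new {
    my ($class, %args) = @_;
    return bless { %args }, $class;
}

# Keeping the SQL in one named spot means a DBA can tune it (add
# optimizer hints, aggregate extra columns) without touching callers.
sub load_sql { "SELECT name, email FROM users WHERE id = ?" }

sub load {
    my ($self, $id) = @_;
    my ($name, $email) =
        $self->{dbh}->selectrow_array( load_sql(), undef, $id );
    @$self{qw(id name email)} = ( $id, $name, $email );
    return $self;
}

1;
```

Callers would write something like `My::User->new(dbh => $dbh)->load($id)` and never touch SQL themselves.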
Apple not yukky anymore: was [OT] Inspired by closing comments from the UBB thread.
On Wednesday, August 1, 2001, at 09:27 AM, Kyle Dawkins wrote: Fine, it's Apple (yuk). But it used to be *NeXT* and it used to be *Obj-C*, both very very fine things indeed. Hey now! Those are fighting words! :-) OS X Mach + FreeBSD Project Builder + GCC (Including Objective-C) in EVERY OS BOX CVS, SSH, Apache, Perl, etc. in EVERY OS BOX Nothing yuk about Apple anymore, at least on the software/OS side of the house! Apple = NeXT ! Thank God! Hell, in 15 or 20 years, this OS could be as enlightened as Linux. :-) -- -- Tom Mornini -- ICQ 113526784
Re: [VERY OT] Inspired by closing comments from the UBB thread.
On Wednesday, August 1, 2001, at 10:01 AM, Jay Jacobs wrote: On Wed, 1 Aug 2001, Kyle Dawkins wrote: Mixing HTML with Perl with SQL is bad and evil on every single possible level. If however you work in a two person company where you have barely enough time to go to the bathroom let alone think about creating your own database abstraction layer for a custom application and maintaining code means changing a link once a month. Then by all means embed away, and take the quick development path over performance or maintainability. This is, in my opinion, circular logic. Perhaps the reason that you barely have enough time to go to the bathroom is that you're writing the code the wrong way. :-) On the other hand, if you are completely broke and work on a non-profit project and the only system you have is a P200 with 64M of Memory, then you may want to think about avoiding templating systems, and doing nothing but a single module with embedded SQL with Perl and HTML. Assuming they're paying you anywhere near a living wage, their money would be better spent on modestly upgraded hardware than having you fumbling around with inefficient to maintain code. There is always more than one way to do it, and there's usually more than one right way to do it. Let's keep that in mind. Agreed. However, Perl + HTML + SQL isn't one of the right ways! :-) -- -- Tom Mornini -- ICQ 113526784
Re: [OT] Inspired by closing comments from the UBB thread.
At 12:50 PM -0400 8/1/01, Perrin Harkins wrote: It would really be nice if someone could write an overview of the O/R mapping tools for Perl. I know Dave Rolsky was working on one, but it's a big job and he's busy with Mason. I agree. There was a bit of discussion on this topic on this list around May 10th of this year. Dave mentioned that you could have a look at what he'd started writing a long time ago at ... http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/poop/documents/poop-comparison.pod?rev=1.2&content-type=text/vnd.viewcvs-markup One of the tools that is not mentioned in Dave's write-up (probably because it didn't exist then) is SPOPS, mentioned earlier in this thread. There is also a related mailing list at ... http://lists.sourceforge.net/lists/listinfo/poop-group -- Ray Zimmerman / e-mail: [EMAIL PROTECTED] / 428-B Phillips Hall Sr Research / phone: (607) 255-9645 / Cornell University Associate / FAX: (815) 377-3932 / Ithaca, NY 14853
RE: Not embedding SQL in perl
Joe Breeden queried: It would be interesting to know how other people have solved that problem. Currently, we are essentially using embedded SQL in our apps. I have found that stored procedures + perl module wrapper around the procs. is a nice, balanced approach. The procs. give a nice performance boost as they are precompiled into the server (we use Sybase). I believe that they are more secure, in that you aren't dynamically generating sql that might be 'hijack-able'. You are providing a discrete amount of functionality. Placing the stored procedure execution code in a perl module makes for easy/clean perl access from the rest of the app. Moving to a new db isn't too terribly difficult in that the proc names will probably remain as well as the parameters that you pass. Also, how often do you move to another database in the life of a web app anyway (at least in our corporate environment)? Regards, Dave Language shapes the way we think, and determines what we can think about. -- B. L. Whorf
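The wrapper-module pattern Dave describes might be sketched like this. The module and proc names are invented, the `exec` syntax is Sybase/MS-SQL style, and the DBI handle is assumed to come from elsewhere:

```perl
package My::DB::Procs;
use strict;

# One Perl sub per stored procedure: the rest of the application
# calls methods and never builds SQL itself.
sub new {
    my ($class, $dbh) = @_;
    return bless { dbh => $dbh }, $class;
}

# Build a Sybase-style exec statement with one placeholder per
# parameter, e.g. "exec get_account ?".
sub exec_sql {
    my ($self, $proc, $nargs) = @_;
    return "exec $proc " . join( ', ', ('?') x $nargs );
}

# Hypothetical wrapper around a proc named get_account.
sub get_account {
    my ($self, $acct_id) = @_;
    return $self->{dbh}->selectrow_hashref(
        $self->exec_sql( 'get_account', 1 ), undef, $acct_id );
}

1;
```

Moving to a new database would then mean reworking this one module, not the whole application.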
Apache 2.0 / mod_perl 2.0 / Win NT/2000 ? pipe dream ?
Hi all, We currently use (close to) the latest Apache / mod_perl environment on HP/UX. Our holding company is forcing a move to Win2k :/, but they still want to use our mod_perl apps :). I was looking for more information on mod_perl 2.0 today but didn't come up w/much. I have several questions. If you can answer or point to docs, I would be most appreciative. 1) Is moving to mod_perl 2.0 going to require large code changes (even on a *nix system)? 1a) Are there any web sites detailing the types of changes that will be required? 2) I am aware that Apache 2.0 should see a performance increase on NT. Will I be able to run my current modules in this environment? 2a) Will it be a production level environment? 2b) What will be the performance repercussions (if it will be possible at all)? 3) Is there any commercial company that would provide tech support contracts in this environment (not that I've needed it so far, but the uppers like the safety net)? 4) Is the code base stable enough that I can compile and test this out (I really can't even find a site that deals w/ mod_perl 2 in any detail - probably looking in the wrong places - I would think that perl.apache.org would mention something? - am I blind? ) Thanks for any assistance you can provide. Best Regards, Dave Webmaster MACtac IT Language shapes the way we think, and determines what we can think about. -- B. L. Whorf
Re: [VERY OT] Inspired by closing comments from the UBB thread.
Tom et al. Mixing HTML with Perl with SQL is bad and evil on every single possible level. If however you work in a two person company where you have barely enough time to go to the bathroom let alone think about creating your own database abstraction layer for a custom application and maintaining code means changing a link once a month. Then by all means embed away, and take the quick development path over performance or maintainability. This is, in my opinion, circular logic. Perhaps the reason that you barely have enough time to go to the bathroom is that you're writing the code the wrong way. :-) H AH AH AH AH HA HAHAHAHAH brilliant On the other hand, if you are completely broke and work on a non-profit project and the only system you have is a P200 with 64M of Memory, then you may want to think about avoiding templating systems, and doing nothing but a single module with embedded SQL with Perl and HTML. Assuming they're paying you anywhere near a living wage, their money would be better spent on modestly upgraded hardware than having you fumbling around with inefficient to maintain code. Tom, I couldn't have said it better myself. BTW. The project I am working on right now *is* for a small non-profit. We don't have a P200 but we have a single P3 machine doing all the work. We don't have huge fault-tolerant systems or UML models or Java Class Hierarchy posters on our walls, or a coding team in Bangalore working on our project. All this notwithstanding, I have time to go to the bathroom. I can even take reading material with me. I have been in the two-person startup before... and let me tell you, if you think that you should cut corners now, it's just going to bite you in the arse later. Just because we use free and/or open source tools to build our code, doesn't mean we can write crap. We have an obligation to do our duty to whomever we work for, and LEARN and apply that learning to our work. 
There is always more than one way to do it, and there's usually more than one right way to do it. Let's keep that in mind. Agreed. However, Perl + HTML + SQL isn't one of the right ways! :-) Couldn't agree more. Just because TMTOWTDI doesn't mean that all of those ways are equal. Most ways suck, in fact. Cheers Kyle Software Engineer Central Park Software http://www.centralparksoftware.com
RE: Not embedding SQL in perl
Homsher, Dave V. writes: Joe Breeden queried: It would be interesting to know how other people have solved that problem. Currently, we are essentially using embedded SQL in our apps. I have found that stored procedures + perl module wrapper around the procs. is a nice, balanced approach. The procs. give a nice performance boost as they are precompiled into the server (we use Sybase). They are definitely faster, and significantly so. I believe that they are more secure, in that you aren't dynamically generating sql that might be 'hijack-able'. Using RPC calls instead of language commands also improves speed, and solves the quoting problem, too. Placing the stored procedure execution code in a perl module makes for easy/clean perl access from the rest of the app. Absolutely. I've actually created configuration files for logical database requests (essentially a hash that describes the input and output of each proc) which lets me use a generic module (about 400 lines) of Sybase::CTlib code for *all* database access. Works very well, and abstracts the database layer quite nicely. Michael -- Michael Peppler - Data Migrations Inc. - http://www.mbay.net/~mpeppler [EMAIL PROTECTED] - [EMAIL PROTECTED] International Sybase User Group - http://www.isug.com
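The configuration-file idea Michael mentions might look roughly like this. The request names, proc names, and parameters are all invented, and the actual CTlib/DBI execution step is left out:

```perl
package My::DB::Generic;
use strict;

# Hypothetical config in the spirit described above: one entry per
# logical database request, naming the proc and the parameters it
# expects. In practice this would be loaded from a file.
our %REQUESTS = (
    lookup_user => { proc => 'p_lookup_user', params => ['user_id'] },
    add_order   => { proc => 'p_add_order',
                     params => ['user_id', 'item', 'qty'] },
);

# Generic dispatcher: validate the arguments against the config, then
# hand back the proc name and ordered parameter list that the
# database layer would actually execute as an RPC call.
sub build_request {
    my ($name, %args) = @_;
    my $req = $REQUESTS{$name} or die "unknown request '$name'";
    my @values;
    for my $p ( @{ $req->{params} } ) {
        die "missing parameter '$p' for '$name'"
            unless exists $args{$p};
        push @values, $args{$p};
    }
    return ( $req->{proc}, @values );
}

1;
```

One small module like this can then serve every database access in the application, which is the abstraction win Michael describes.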
Re: Not embedding SQL in perl
All Joe Breeden queried: It would be interesting to know how other people have solved that problem. Currently, we are essentially using embedded SQL in our apps. I have found that stored procedures + perl module wrapper around the procs. is a nice, balanced approach. Definitely; stored procedures are hit-and-miss in a lot of environments. Remember that a large number of people in the mod_perl world can't use 'em because they (we) use MySQL. If one wanted to emulate this behaviour with MySQL, you would essentially clone the functionality of your stored procedures using Perl + DBI inside your persistence layer. That is a perfectly viable approach too, but a lot less efficient than stored procedures (many roundtrips versus one). The procs. give a nice performance boost as they are precompiled into the server (we use Sybase). I believe that they are more secure, in that you aren't dynamically generating sql that might be 'hijack-able'. You are providing a discrete amount of functionality. Placing the stored procedure execution code in a perl module makes for easy/clean perl access from the rest of the app. Moving to a new db isn't too terribly difficult in that the proc names will probably remain as well as the parameters that you pass. Also, how often do you move to another database in the life of a web app anyway (at least in our corporate environment)? True, although I don't think it's uncommon to want to move from MySQL to Postgres, for example. I have also seen a lot of places move away from MySQL up to something like DB2 or Oracle when they get their it-all-has-to-be-spent venture capital infusion. Sigh. Kyle Software Engineer Central Park Software http://www.centralparksoftware.com
Re: Not embedding SQL in perl
I have found that stored procedures + perl module wrapper around the procs. is a nice, balanced approach. The procs. give a nice performance boost as they are precompiled into the server (we use Sybase). They are definitely faster, and significantly so. Maybe so for Sybase. In Oracle, your SQL statements get cached anyway, as long as you're using bind variables instead of just dynamically building the SQL strings. (They get cached even if you don't use bind variables, but they'll quickly overflow the cache if you keep changing them with each new value in the WHERE clause.) Using RPC calls instead of language commands also improves speed, and solves the quoting problem, too. The same goes for bind variables. - Perrin
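The caching difference Perrin describes comes down to whether the SQL text itself changes per call. A tiny illustration (table and column names invented; the DBI calls are shown only as comments):

```perl
use strict;

# Interpolating values produces a different SQL string for every
# value, so the server's statement cache fills with near-duplicates:
sub interpolated {
    my $id = shift;
    return "SELECT name FROM users WHERE id = $id";
}

# With a bind variable the SQL text is constant; only the bound value
# changes, so one cached plan serves every call:
sub with_placeholder {
    return "SELECT name FROM users WHERE id = ?";
}

# In DBI terms (not executed here):
#   my $sth = $dbh->prepare_cached( with_placeholder() );
#   $sth->execute($id);
```

Placeholders also sidestep the quoting problem, since the driver handles the value separately from the SQL text.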
RE: [OT] Inspired by closing comments from the UBB thread.
Woooie!?! I didn't expect the firestorm this post would generate. From what I hear people are either embedding SQL or writing their own utility module to essentially do something along the line of:

$s->StartDBI( DSN => 'somedsn_pointer' );
eval {
    $s->SelectSQL( NAME   => 'sql_select',
                   TABLE  => 'sometable',
                   FIELDS => ['field1', 'field2', 'field3'],
                   WHERE  => 'field1=?',
                   VALUES => $some_value_for_field1 );
    while ( my $return = $s->SQLGetArray( NAME => 'sql_select' ) ) {
        # do something with $return - maybe complete a template object?
    }
};
$s->EndDBI( DSN => 'somedsn_pointer', QUERIES => 'sql_select', RESULTS => $@ );

Where the different calls do the things hinted at in their name (i.e. StartDBI opens the DSN and connects to the database in question, SelectSQL would prepare the SQL select statement and execute it via DBI). This allows us to pass a native Perl structure which is reformatted to work with DBI. We also get back scalars, arrays, or hashes that are easy to work with. This is what we do here where I work. I still consider this embedded SQL because a change to the table or even to the server could cause the program to break in a lot of places. I think what I had in mind was some way to put this type of processing into a layer where all the SQL related items are essentially in a template file somewhere, maybe a SQL::Template type thingy. If this is something that people feel would be a worthwhile endeavor, let me know and maybe when there's a little free time in the Fall one could write a CPAN module that has this functionality. We had the conversation awhile back about adding redundant and unnecessary crap to CPAN and I want to make sure something like this would be a good thing or not. Thanks, --Joe Breeden --
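One way Joe's hypothetical SQL::Template idea might look: named statements kept in an external file (modeled here as an in-memory hash), fetched by name at run time. Everything here is invented for illustration:

```perl
package SQL::Template::Sketch;
use strict;

# Hypothetical "template file" loaded into a hash: statement names
# map to SQL text kept entirely outside the application code.
our %SQL = (
    sql_select => 'SELECT field1, field2, field3 FROM sometable WHERE field1 = ?',
    sql_delete => 'DELETE FROM sometable WHERE field1 = ?',
);

# Application code asks for a statement by name; a schema change then
# means editing the template, not hunting through the program.
sub fetch {
    my ($name) = @_;
    die "no such statement '$name'" unless exists $SQL{$name};
    return $SQL{$name};
}

1;
```

The program would then do `$dbh->prepare( SQL::Template::Sketch::fetch('sql_select') )` and never contain SQL text itself.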
Re: Not embedding SQL in perl
Perrin Harkins writes: I have found that stored procedures + perl module wrapper around the procs. is a nice, balanced approach. The procs. give a nice performance boost as they are precompiled into the server (we use Sybase). They are definitely faster, and significantly so. Maybe so for Sybase. In Oracle, your SQL statements get cached anyway, as long as you're using bind variables instead of just dynamically building the SQL strings. (They get cached even if you don't use bind variables, but they'll quickly overflow the cache if you keep changing them with each new value in the WHERE clause.) Actually I did benchmark this for Sybase, both with stored procs and with SQL statements with bind variables. The stored procs are still faster, and make it easier in a non-trivial organization (where SQL code and perl code may be worked on by different people) to separate the database logic somewhat, and give SQL developers and/or DBAs an easy way to tune SQL requests without having to touch the application code. Using RPC calls instead of language commands also improves speed, and solves the quoting problem, too. The same goes for bind variables. Agreed. Michael -- Michael Peppler - Data Migrations Inc. - http://www.mbay.net/~mpeppler [EMAIL PROTECTED] - [EMAIL PROTECTED] International Sybase User Group - http://www.isug.com
Re: [OT] Inspired by closing comments from the UBB thread.
On 01 Aug 2001 10:12:45 -0500, Joe Breeden wrote: All, In his closing comments about UBB Kyle Dawkins made a statement that got me wondering. He said there's SQL embedded all throughout the Perl everywhere (who does this?! oh my god, are they on crack?). This comment got me wondering about alternatives to embedding SQL in to the code of a program. Alternatives I see are to use stored procedures which would limit one to using a certain DB server (or to be proficient in many servers and write stored procedures for all server flavors which would mean one is a very busy Perl and SQL guru) or possibly storing the embedded SQL in some sort of external file structure accessible via storable, XML::Simple or some other means. http://axkit.org/docs/presentations/tpc2001/anydbd.axp -- Matt/ /||** Founder and CTO ** ** http://axkit.com/ ** //||** AxKit.com Ltd ** ** XML Application Serving ** // ||** http://axkit.org ** ** XSLT, XPathScript, XSP ** // \\| // ** mod_perl news and resources: http://take23.org ** \\// //\\ // \\
RE: [OT] Inspired by closing comments from the UBB thread.
Jay Jacobs wrote: I don't see any glue-sniffing symptoms from choosing embedded html in perl over embedded perl in html. Unless, of course, you're the graphic artist and you've been tasked with changing the look and feel of the application using embedded perl (which you, as the graphics person, probably don't know anything about), while the perl developer works on the perl portions of the code, then you might be sniffing some glue. This is the motivation for some (if not most) of the templating solutions Perrin mentioned. Hmmm... Mason makes this *possible*, for me: I tell my guys, make it look ANY way you like. I don't care. I don't WANT to care. Just leave me ONE <td></td>. Since I have all of my components called by a single dispatch component, all that td has to have is one line of markup. Then I tell them, here's the list of styles I'll be using in my markup. You have access to the stylesheet, make them look however you want but don't add/remove/rename any of them. Using this method, I've been able to extend the SAME CODE on two different sites w/ radically different themes. Of course, at this point, some would say XML / XSL! Try AxKit! But to be honest, I haven't gone there yet. XML, no matter how pretty the tools, is still a pain and a bother, IMHO. Dropping a couple of lines of perl in a (mostly) static HTML table/form/chart is FAR simpler than learning a new language (for the stylesheets) to implement a new paradigm (XML) that in spite of its buzzword compliance is still a hit-and-miss crapshoot against current browsers. L8r, Rob #!/usr/bin/perl -w use Disclaimer qw/:standard/;
Re: Not embedding SQL in perl
It would be interesting to know how other people have solved that problem. Currently, we are essentially using embedded SQL in our apps. I have found that stored procedures + perl module wrapper around the procs. is a nice, balanced approach. Definitely; stored procedures are hit-and-miss in a lot of environments. Remember that a large number of people in the mod_perl world can't use 'em because they (we) use MySQL. If one wanted to emulate this behavior with MySQL, you would essentially clone the functionality of your stored procedures using Perl + DBI inside your persistence layer. That is a perfectly viable approach too, but a lot less efficient than stored procedures (many roundtrips versus one). Interesting, I will be working w/MySQL in a few days on a side project of my own. We'll see how my outlook changes ;) Any recommendations? Regards, Dave Language shapes the way we think, and determines what we can think about. -- B. L. Whorf
Re: Apache 2.0 / mod_perl 2.0 / Win NT/2000 ? pipe dream ?
From: Homsher, Dave V. [EMAIL PROTECTED] Sent: Wednesday, August 01, 2001 12:32 PM Hi all, We currently use (close to) the latest Apache / mod_perl environment on HP/UX. Our holding company is forcing a move to Win2k :/, but they still want to use our mod_perl apps :). I was looking for more information on mod_perl 2.0 today but didn't come up w/much. I have several questions. If you can answer or point to docs, I would be most appreciative. 1) Is moving to mod_perl 2.0 going to require large code changes (even on a *nix system)? 1a) Are there any web sites detailing the types of changes that will be required? Depends on what you are doing, I'll let others comment. 2) I am aware that Apache 2.0 should see a performance increase on NT. Will I be able to run my current modules in this environment? Depends on how they are written. If you stay within the Apache:: space, yes. The obvious caveats about certain perl functions still apply. 2a) Will it be a production level environment? Yes, but there are still issues (multiple processes for robustness? no.) 2b) What will be the performance repercussions (if it will be possible at all)? You are hit with extra stats when the server is determining the 'correct canonical filename' since NT is case insensitive. Other than that, this will be a huge boon to mod_perl, since mod_perl on 2.0 supports threads! No more one-worker model :) 3) Is there any commercial company that would provide tech support contracts in this environment (not that I've needed it so far, but the uppers like the safety net)? Covalent (www.covalent.net) where Doug MacEachern and I both work stands strongly behind both Apache 2.0 and mod_perl. 
I would expect that upper management could feel pretty confident about Covalent support services :) 4) Is the code base stable enough that I can compile and test this out (I really can't even find a site that deals w/ mod_perl 2 in any detail - probably looking in the wrong places - I would think that perl.apache.org would mention something? - am I blind? ) I'm about to try the same thing myself ... I don't know how buildable this is on Windows yet, but I will email the list with whatever I discover. Bill
RE: One more small Apache::Reload question
Those warnings are normal, and you can use the warnings pragma to turn them off. (Although, I believe the warnings pragma only exists in Perl 5.6.0+). use warnings; no warnings qw(redefine); - Kyle -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On Behalf Of Bryan Coon Sent: Wednesday, August 01, 2001 9:36 AM To: '[EMAIL PROTECTED]' Subject: One more small Apache::Reload question First, thanks to all the great suggestions, it looks like it works fine. However, now my logs are loaded with a ton of subroutine redefined warnings (which is normal I suppose?). I can certainly live with this in a development environment, but thought I would check to see if it is expected, and if it can be turned off while still enabling Reload. Thanks! Bryan
Re: [OT] Inspired by closing comments from the UBB thread.
On Wed, 1 Aug 2001, Ray Zimmerman wrote: One of the tools that is not mentioned in Dave's write-up (probably because it didn't exist then) is SPOPS, mentioned earlier in this thread. No, I just hadn't had a chance to get around to it yet. I really need to finish that thing someday. Of course, if people want to write up their favorite system (along the lines of the ones I've already done) I could just use that and it'd be done much quicker ;) -dave /*== www.urth.org We await the New Sun ==*/
Re: [OT] Inspired by closing comments from the UBB thread.
On Wed, 1 Aug 2001, Kyle Dawkins wrote: I've taken a look at many of them (Tangram? a few others) and haven't been impressed with any of them. I think part of the problem is that they're all being developed in a bit of a vacuum. But let's capitalise on the interest that this thread has generated to start a push for something that we can all use. I think even the dudes who embed their SQL in perl could be made to realise the benefits if we all started using a common framework. Thoughts? Well, people are starting to use my tool, Alzabo (alzabo.sourceforge.net) and I'm getting feedback. More feedback about what people want is always welcome. FWIW, Alzabo gives you a reasonable amount of control over the SQL that is generated, if you need it. It doesn't yet allow optimizer hints but that will change in a future version. OTOH, if you really _need_ to get into the nitty gritty details of SQL it's hard to imagine that any abstraction layer would ever be satisfactory. -dave /*== www.urth.org We await the New Sun ==*/
RE: [OT] Inspired by closing comments from the UBB thread.
As for SQL, I just wish people would expand their horizons a little and start doing a bit of reading. There are so many different ways to avoid embedding SQL in application code and I sincerely wish programmers would THINK before just coding... it's what differentiates scripters from engineers and I suggest everyone who embeds SQL in their perl for anything other than quick-and-dirty hacks start considering other options for the good of the programming community AND THE SANITY OF WHOMEVER HAS TO MAINTAIN OR ALTER YOUR CODE. I just implore readers of this list to start thinking more as engineers and less as script kiddies. We all love mod_perl and its power and we want it to succeed. We'll only get somewhere with it if we actually make the effort to write better code. Mixing SQL and perl is not better code. WHY? WHY WHY WHY WHY Tell me why it's this horrible, glue-sniffing, script-kiddie badness to do something in a clear and simple fashion? Below is a pseudo-code handler. It talks to the database:

use strict;
use vars qw/$dbh/;

sub handler {
    my $r = shift;
    lookup_info($r);
    # ... blah...
    return OK;
}

sub lookup_info {
    my $r = shift;
    # ||= allows an already connected $dbh to skip reconnect
    $dbh ||= DBI->connect(My::dbi_connect_string(), My::dbi_pwd_fetch())
        or die DBI->errstr;
    # WARNING! amateur code ahead!!!
    my $sql_lookup_password = $dbh->prepare_cached(<<SQL);
    SELECT passwrd, pageid
    FROM siteinfo si, pages pg
    WHERE si.acctid = pg.acctid
    AND si.acctid = ?
    AND pageno = 0
SQL
    ($c_pass, $c_pid) = $dbh->selectrow_array( $sql_lookup_password, undef, $acctid );
    return undef unless defined $c_pass and $pass eq $c_pass;
    # We've confirmed the password.
    return $c_pid if !$pid or $pid eq $c_pid;
    # some more logic, maybe even another query
    return $pid;
}

Now. Tell me ONE thing that's wrong with this? 
The statement handle is clearly named ($sql_lookup_password), the query is either A) really simple or B) commented w/ SQL comments, C) if I change my schema, the query is RIGHT THERE in the only place that actually USES it. OO is an idea for cleaning up and packaging functionality. Fine. If I need it that bad, I'll code my handler as an object. But let's not forget that the underlying mechanism, no matter how fancily layered, is still a list of FUNCTION CALLS. OO has its place. ABSOLUTELY. In perl I can create an FTP connection _object_ and tell it what to do, and trust that it knows how to handle it. But in the REAL WORLD, my script is its own object, with its own guts and implementation, and the interface is: MyModule::handler. Apache knows what function to call. I can mess with the guts and the interface doesn't change. So what do I gain by adding 6 layers of indirection to something this simple? OO has its PLACE as a TOOL. It should not be a jail with LOCKED DOORS and ARMED ESCORT. (and come to think of it, any objects I use aren't cons :-) My $.02. L8r, Rob #!/usr/bin/perl -w use Disclaimer qw/:standard/;
Re: [VERY OT] Inspired by closing comments from the UBB thread.
My apologies for beating this dead horse... I am just unable to get my point across at all today. On Wed, 1 Aug 2001, Kyle Dawkins wrote: Tom et al. This is, in my opinion, circular logic. Perhaps the reason that you barely have enough time to go to the bathroom is that you're writing the code the wrong way. :-) ...my point with that scenario was that there is just too much work to spend the time writing highly maintainable code that has only the simplest of maintenance tasks. Just because we use free and/or open source tools to build our code, doesn't mean we can write crap. We have an obligation to do our duty to whomever we work for, and LEARN and apply that learning to our work. There is always more than one way to do it, and there's usually more than one right way to do it. Let's keep that in mind. Agreed. However, Perl + HTML + SQL isn't one of the right ways! :-) Couldn't agree more. Just because TMTOWTDI doesn't mean that all of those ways are equal. Most ways suck, in fact. Granted, the world is full of incompetence, but if you spent your time coding for a perfect world in every situation, you could still be working on the write-up while the next guy is collecting the check for a finished project and bidding on the next one. It might not be bad code, might be really good code, might really suck; who cares, it works, the customer is happy, and both businesses do well. The down side is some geek may have to maintain it, but they'll get to complain about crappy code and show their righteousness on a public mailing list. Don't get me wrong here, I agree with the perfect code... I'd absolutely love to see a clean solution to embedded html/perl/sql that has fast performance, fast development and easy maintainability. I wish that the technically best way always matched the right way. And us righteous developers decided how the world was run. 
But my misinterpreted point is that there are situations in which this version of perfect code has no place, even if I can't write them up in an email.
RE: One more small Apache::Reload question
However, now my logs are loaded with a ton of subroutine redefined warnings (which is normal I suppose?). I can certainly live with this in a development environment, but thought I would check to see if it is expected, and if it can be turned off while still enabling Reload. Well, first of all, you will want to turn off Apache::Reload during production. All of those stat()'s will slow down your server speed significantly, as the disk is kept busy for each request. Secondly, how is it you view your logs? I have a window running tail -f with a grep filter: tail -f /var/log/httpd/error_log | egrep -v 'redefined.at|Apache::Reload|AuthenCache' This way, I get the best of both worlds, by ignoring the noise: # use constant SIGNATURE => 'TSTAT'; Constant subroutine SIGNATURE redefined at /usr/lib/perl5/5.00503/constant.pm line 175. # One of my module's subroutines.. there are 15 of these Subroutine test_handler redefined at /etc/httpd/lib/perl/Stat/Count.pm line 315 I have AuthenCache in my filter because at LogLevel debug, Apache::AuthenCache is *noisy*!! HTH! L8r, Rob #!/usr/bin/perl -w use Disclaimer qw/:standard/;
Re: [OT] Inspired by closing comments from the UBB thread.
http://axkit.org/docs/presentations/tpc2001/anydbd.axp Is this basically a hash of SQL statements, indexed by DBD type? Or is there something more that I'm missing? (I should have gone to your TPC talk...)
RE: [OT] Inspired by closing comments from the UBB thread.
I have to agree here. Is this just a hash of SQL statements or is there more to it than that? --Joe Breeden -- -Original Message- From: Perrin Harkins [mailto:[EMAIL PROTECTED]] Sent: Wednesday, August 01, 2001 1:29 PM To: Matt Sergeant Cc: [EMAIL PROTECTED] Subject: Re: [OT] Inspired by closing comments from the UBB thread. http://axkit.org/docs/presentations/tpc2001/anydbd.axp Is this basically a hash of SQL statements, indexed by DBD type? Or is there something more that I'm missing? (I should have gone to your TPC talk...)
Re: [OT] Inspired by closing comments from the UBB thread.
On 01 Aug 2001 14:29:10 -0400, Perrin Harkins wrote: http://axkit.org/docs/presentations/tpc2001/anydbd.axp Is this basically a hash of SQL statements, indexed by DBD type? Or is there something more that I'm missing? (I should have gone to your TPC talk...) All AnyDBD does is create a class hierarchy in the namespace of your choice, based on the type of database you're connecting to. The idea being that you can create a cross database application that makes use of all database features (such as optimisations, hints, stored procs) where appropriate. You can abstract stuff away behind methods, and build up a nice layer of cross-database methods. (note I'm not saying this is the best way to do it, but the original question was what do people use, and this is what I use). It's a shame you don't have access to the code we wrote (for WebBoard Unix), as it would be a nice example to look at. -- Matt/ /||** Founder and CTO ** ** http://axkit.com/ ** //||** AxKit.com Ltd ** ** XML Application Serving ** // ||** http://axkit.org ** ** XSLT, XPathScript, XSP ** // \\| // ** mod_perl news and resources: http://take23.org ** \\// //\\ // \\
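A stripped-down imitation of the per-database class hierarchy Matt describes (this is not DBIx::AnyDBD itself; the package names and dispatch rule are invented for illustration):

```perl
package My::DB::Base;
use strict;

# The constructor blesses into a driver-specific subclass, so later
# method calls automatically get the right SQL dialect.
sub new {
    my ($class, $driver) = @_;
    my $pkg = $driver eq 'mysql' ? 'My::DB::mysql' : 'My::DB::Oracle';
    return bless {}, $pkg;
}

# Portable default: no row-limiting clause.
sub limit_clause { return '' }

package My::DB::mysql;
our @ISA = ('My::DB::Base');
sub limit_clause { my ($self, $n) = @_; return "LIMIT $n" }

package My::DB::Oracle;
our @ISA = ('My::DB::Base');
sub limit_clause { my ($self, $n) = @_; return "AND ROWNUM <= $n" }

1;
```

Higher-level methods can then be written once in the base class and call `$self->limit_clause(...)` without caring which database is underneath.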
Re: Not embedding SQL in perl
On Wed, 1 Aug 2001, Kyle Dawkins wrote: KD Definitely; stored procedures are hit-and-miss in a lot of KD environments. Remember that a large number of people in the KD mod_perl world can't use 'em because they (we) use MySQL. If one KD wanted to emulate this behaviour with MySQL, you would essentially KD clone the functionality of your stored procedures using Perl + DBI KD inside your persistence layer. That is a perfectly viable KD approach too, but a lot less efficient than stored procedures KD (many roundtrips versus one). And while we are discussing not cutting corners, those who still use MySQL should switch to a real DBMS before they even think of abstracting the SQL away from their Perl code. That people still use MySQL really shows how many lusers there are with computers that try to develop real software. I said _try_. *sigh* -- Henrik Edlund [EMAIL PROTECTED] http://www.edlund.org/ You're young, you're drunk, you're in bed, you have knives; shit happens. -- Angelina Jolie
Re: Not embedding SQL in perl
Original Message Subject: Re: Not embedding SQL in perl Date: Wed, 01 Aug 2001 15:56:00 -0400 From: kyle dawkins [EMAIL PROTECTED] To: Henrik Edlund [EMAIL PROTECTED] References: [EMAIL PROTECTED] Henrik Edlund wrote: And while we are discussing not cutting corners, those who still use MySQL should switch to a real DBMS before they even think of abstracting the SQL away from their Perl code. That people still use MySQL really shows how many lusers there are with computers that try to develop real software. I said _try_. *sigh* Henrik Not sure if you're aware of it, but that argument is pretty old. We're onto a much more interesting, new argument now. :-) Seriously though, you're right, MySQL is not a real RDBMS. No transactions, no foreign key constraints, no stored procedures. It is, however, free, and in use in a lot of places. And interestingly enough, in a way that makes the current argument even MORE important; writing SQL into your code (as per the current thread of discussion) will make it exponentially more difficult for you to move to a real RDBMS as Henrik urges you to. If you abstract DB access into a middleware layer, you will have a much, much easier time. By placing SQL into your application code, you are removing the flexibility of changing your persistence mechanism at a later date. And believe it or not, that's not as uncommon as you might think. I cite the example of wwwthreads here... it's a great BBS, runs under mod_perl, is fast, and has a DB backend. However, the source is LITTERED with SQL, and everywhere there's a line of SQL, the dude has to put an if conditional around it to check if the installation is using MySQL or something else, because MySQL has numerous features that are not found elsewhere (last inserted id, REPLACE command, LIMIT m,n)... so, twice the number of SQL statements in code that (in my opinion) should not have any SQL in it at all... It's all food for thought (I hope). 
Kyle Software Engineer Central Park Software http://www.centralparksoftware.com
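[The "if conditional around every line of SQL" problem Kyle describes looks something like this in practice. Illustrative only, with invented table and column names and an assumed `$db_type` flag:]

```perl
# The kind of per-database conditional Kyle says litters wwwthreads:
# MySQL's REPLACE and LAST_INSERT_ID() have no portable equivalent,
# so every such statement gets written twice.
my $id;
if ($db_type eq 'mysql') {
    $dbh->do('REPLACE INTO posts (id, body) VALUES (?, ?)',
             undef, $post_id, $body);
    ($id) = $dbh->selectrow_array('SELECT LAST_INSERT_ID()');
}
else {
    # elsewhere, emulate REPLACE and id generation by hand
    $dbh->do('DELETE FROM posts WHERE id = ?', undef, $post_id);
    ($id) = $dbh->selectrow_array('SELECT post_seq.NEXTVAL FROM dual');
    $dbh->do('INSERT INTO posts (id, body) VALUES (?, ?)',
             undef, $id, $body);
}
```

Pushing both branches behind one middleware method is exactly what keeps this duplication out of application code.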
Re: Not embedding SQL in perl
On Wed, 1 Aug 2001, kyle dawkins wrote: kd Not sure if you're aware of it, but that argument is pretty old. kd We're onto a much more interesting, new argument now. :-) All old arguments eventually become new again, once in a while... :-) kd Seriously though, you're right, MySQL is not a real RDBMS. No kd transactions, no foreign key constraints, no stored procedures. kd It is, however, free, and in use in a lot of places. And kd interestingly enough, in a way that makes the current argument kd even MORE important; writing SQL into your code (as per the kd current thread of discussion) will make it exponentially more kd difficult for you to move to a real RDBMS as Henrik urges you kd to. If you abstract DB access into a middleware layer, you will kd have a much, much easier time. By placing SQL into your kd application code, you are removing the flexibility of changing kd your persistence mechanism at a later date. And believe it or kd not, that's not as uncommon as you might think. Or you can make sure you do not use any of those features and write pure SQL92. I have managed so far to write one SQL statement (no if's) for what I want to do, and it works with PostgreSQL, Oracle (those two I use), and even with MySQL and others. You have to be careful and have a SQL92 definition handy, and it doesn't take much extra time. Then you get easy portability to other DBMS with DBI/DBD. (And yes, I do separate code and content, Perl and HTML, with the excellent Template Toolkit.) There are times when abstracting your SQL has a use, and times when it is overkill. If you can't write clean SQL92 (or whatever you are aiming at) then you do need to abstract yourself even more than DBI already does. I am, though, very much against the use of DBMS-specific SQL. Regards, Henrik
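[A minimal sketch of the approach Henrik describes, with invented DSN, table, and column names: one statement in plain SQL92 with DBI placeholders, which should run unchanged under DBD::Pg, DBD::Oracle, or DBD::mysql.]

```perl
use DBI;

# Portable SQL92 plus DBI placeholders - no driver-specific syntax,
# so only the DSN changes when the DBMS does.
my $dbh = DBI->connect($dsn, $db_user, $db_pass, { RaiseError => 1 });

my $sth = $dbh->prepare(
      'SELECT u.name, COUNT(*) '
    . 'FROM users u, orders o '
    . 'WHERE u.id = o.user_id AND o.placed > ? '
    . 'GROUP BY u.name'
);
$sth->execute($cutoff);

while (my ($name, $n) = $sth->fetchrow_array) {
    print "$name: $n\n";
}
$dbh->disconnect;
```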
Re: Not embedding SQL in perl
I can see your argument regarding SQL within one's code, but doesn't your argument fail to hold up if we assume that the SQL is fully compliant? In other words, if the makers of WWWThreads had stuck with standard SQL, rather than using any non-standard features of MySQL like last inserted ID, wouldn't their code be usable on Oracle, for example (assuming we changed the correct var to tell DBI we are using Oracle now)? Just trying to make sure I understand what all the fuss is about. Jon R.
What counts as a real DBMS?
On Wed, 1 Aug 2001, Henrik Edlund wrote: And while we are discussing not cutting corners, those who still use MySQL should switch to a real DBMS before they even think of abstracting the SQL away from their Perl code. That people still use MySQL really shows how many lusers there are with computers that try to develop real software. I said _try_. What would you consider to be a real DBMS? Sybase and Oracle obviously, but I actually am the hypothetical programmer with a 233MHz machine with 64 MB RAM (hey, it runs emacs fine :/) on a shoestring budget who is mostly limited to using freeware tools. What about PostgreSQL and Interbase? Do those have the features of a 'real' DBMS?
Re: Not embedding SQL in perl
Jon I can see your argument regarding SQL within one's code, but doesn't your argument fail to hold up if we assume that the SQL is fully compliant? Well, yes and no. I was citing that example as *another* reason to keep SQL out of your application-level code. If you do, as Henrik suggests, write pure SQL92, then obviously you wouldn't need to wrap all your SQL in ifs like they did with wwwthreads... you could just switch out MySQL and switch in Filemaker Pro if it supported SQL92 and had a DBD module :-). I maintain, however, that SQL embedded in application logic is evil in all but the simplest of scripts. Putting it in middleware is mandatory; I don't take issue with that. In other words, if the makers of WWWThreads had stuck with standard SQL, rather than using any non-standard features of MySQL like last inserted ID, wouldn't their code be usable on Oracle, for example (assuming we changed the correct var to tell DBI we are using Oracle now)? Sure thing. Cheers Kyle Software Engineer Central Park Software http://www.centralparksoftware.com
Re: What counts as a real DBMS?
Hi guys, On Wed, 1 Aug 2001, Philip Mak wrote: On Wed, 1 Aug 2001, Henrik Edlund wrote: And while we are discussing not cutting corners, those who still use MySQL should switch to a real DBMS before they even think of abstracting What would you consider to be a real DBMS? Guys, please stop it. There are people who have work to do. 73, Ged.
Re: What counts as a real DBMS?
At 4:27 PM -0400 8/1/01, Philip Mak wrote: What about PostgreSQL and Interbase? Do those have the features of a 'real' DBMS? I use sequences. Therefore I need a real DBMS (either that or a rock-solid way of generating UNIQUE ids). Oracle and PostgreSQL have great support for sequences. Sybase sucks from the standpoint of sequences. It's almost impossible to write a conversion script for Oracle sequences to Sybase (@@identity?) sequences without developing a tumor... Same goes for MySQL (LAST_INSERTED_ID?). There are things about Sybase and MySQL that I consider amazing. Sybase is amazingly fast, even with many thousands of connections. MySQL is blazingly fast even for large databases (so long as you're not doing any inserts). Overall, though, I consider Oracle and PostgreSQL the top in the Commercial and Free markets. Rob -- A good magician never reveals his secret; the unbelievable trick becomes simple and obvious once it is explained. So too with UNIX.
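[Rob's point about sequences not converting mechanically can be sketched like so - a hedged illustration with an assumed sequence name, hiding "give me the next unique id" behind one routine per driver:]

```perl
# Hypothetical helper: each DBD gets its own branch because the
# underlying id-generation mechanisms are genuinely different.
sub next_id {
    my ($dbh, $seq) = @_;
    my $driver = $dbh->{Driver}{Name};
    if ($driver eq 'Oracle') {
        return scalar $dbh->selectrow_array("SELECT $seq.NEXTVAL FROM dual");
    }
    if ($driver eq 'Pg') {
        return scalar $dbh->selectrow_array("SELECT NEXTVAL('$seq')");
    }
    die "don't know how to generate unique ids with DBD::$driver\n";
}
```

Callers just write `my $id = next_id($dbh, 'post_seq');` and stay portable across the drivers the helper knows about.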
Re: What counts as a real DBMS?
On Wed, 1 Aug 2001, Philip Mak wrote: PM What would you consider to be a real DBMS? Sybase and Oracle obviously, PM but I actually am the hypothetical programmer with a 233MHz machine with PM 64 MB RAM (hey, it runs emacs fine :/) on a shoestring budget who is PM mostly limited to using freeware tools. PM PM What about PostgreSQL and Interbase? Do those have the features of a PM 'real' DBMS? PostgreSQL is a real DBMS. I am involved in a non-profit project where we run PostgreSQL (and Linux, Apache, ...) on a P200 with 128 megabytes of RAM. Works great. It also worked great when we had an i486 and 32 megabytes of RAM. (http://www.ticalc.org/) I have never worked with InterBase, so someone else might have to answer whether it complies with ACID. Regards, Henrik
RE: [OT] Inspired by closing comments from the UBB thread. (fwd)
Since you asked, my opinion is that what you describe would not be useful. Primarily for the reason pointed out already by a number of people -- lack of flexibility. Most, if not all, database servers accept highly customizable performance params to a query, and most even moderately evolved applications make use of SQL queries that are significantly more complex than a single-where-clause select. At ValueClick we built a wrapper module (DB.pm :) that delivered a $dbh into the API, handling everything up to that point with minimal fuss. From that point on, some standard things were collected in a utility class, but most modules created their own $sth, usually with bind variables, with SQL statements nicely formatted in the source using a here doc ... it was highly manageable and functional, and most of all it was flexible. Not all applications are fast-developing, but my experience is that it pays to develop as if yours were ... rapid access to tweak the SQL fetching data into the application is very desirable, IMHO. The point is not that you can't abstract it all away as you show in your code below, it's that by the time you have covered all eventualities (sorts, groups, selects from multiple tables, et al.), your interface is so complicated you are basically paraphrasing the SQL in some new language of your invention. And that, if I am not mistaken, is the purpose of SQL in the first place! There is such a thing as over-abstraction, IMHO, and having played with this a lot, I have found that this type of effort would be such. Hope this helps, ~~~ Nick Tonkin On Wed, 1 Aug 2001, Joe Breeden wrote: Woooie!?! I didn't expect the firestorm this post would generate. 
From what I hear, people are either embedding SQL or writing their own utility module to essentially do something along the lines of:

    $s->StartDBI( DSN => 'somedsn_pointer' );
    eval {
        $s->SelectSQL(
            NAME   => 'sql_select',
            TABLE  => 'sometable',
            FIELDS => [ 'field1', 'field2', 'field3' ],
            WHERE  => 'field1=?',
            VALUES => $some_value_for_field1,
        );
        while ( my $return = $s->SQLGetArray( NAME => 'sql_select' ) ) {
            # do something with $return - maybe complete a template object?
        }
    };
    $s->EndDBI( DSN => 'somedsn_pointer', QUERIES => 'sql_select', RESULTS => $@ );

Where the different calls do the things hinted at in their names (i.e. StartDBI opens the DSN and connects to the database in question, SelectSQL would prepare the SQL select statement and execute it via DBI). This allows us to pass a native Perl structure which is reformatted to work with DBI. We also get back scalars, arrays, or hashes that are easy to work with. This is what we do here where I work. I still consider this embedded SQL, because a change to the table or even to the server could cause the program to break in a lot of places. I think what I had in mind was some way to put this type of processing into a layer where all the SQL-related items are essentially in a template file somewhere - maybe a SQL::Template type thingy. If this is something that people feel would be a worthwhile endeavor, let me know, and maybe when there's a little free time in the Fall someone could write a CPAN module that has this functionality. We had a conversation awhile back about adding redundant and unnecessary crap to CPAN, and I want to make sure whether something like this would be a good thing or not. Thanks, --Joe Breeden --
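[One guess at what a "SQL::Template type thingy" might look like in use. Everything here is imagined - the module, its API, and the file format are not on CPAN - but it shows the shape of the idea: statements live in a file keyed by name, and application code never contains SQL.]

```perl
use SQL::Template;   # hypothetical module, not a real CPAN distribution

# queries.sql might contain:
#   [select_by_field1]
#   SELECT field1, field2, field3 FROM sometable WHERE field1 = ?
my $tmpl = SQL::Template->new( file => 'queries.sql' );

# Code refers to the statement only by name; editing queries.sql
# changes the SQL without touching any Perl.
my $sth = $tmpl->prepare( $dbh, 'select_by_field1' );
$sth->execute($some_value_for_field1);
```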
Re: What counts as a real DBMS?
On Wed, 1 Aug 2001, Philip Mak wrote: What about PostgreSQL and Interbase? Do those have the features of a 'real' DBMS? Yes. Postgres has integrity constraints, triggers, stored procedures, a well-known extension interface, transactions, high concurrency, and all of ANSI SQL. PostgreSQL is Free. -jwb
Re: Not embedding SQL in perl
On Wed, 1 Aug 2001, Kyle Dawkins wrote: KD Definitely; stored procedures are hit-and-miss in a lot of KD environments. Remember that a large number of people in the KD mod_perl world can't use 'em because they (we) use MySQL. If one KD wanted to emulate this behaviour with MySQL, you would essentially KD clone the functionality of your stored procedures using Perl + DBI KD inside your persistence layer. That is a perfectly viable KD approach too, but a lot less efficient than stored procedures KD (many roundtrips versus one). And while we are discussing not cutting corners, those who still use MySQL should switch to a real DBMS before they even think of abstracting the SQL away from their Perl code. That people still use MySQL really shows how many lusers there are with computers that try to develop real software. I said _try_. *sigh* MySQL has its place in the database world, otherwise it would not be so widely deployed. Some tasks do not require a huge full-featured DBMS to get the job done, so why should they put that requirement on the end user? Are you under the impression that Oracle is the best db server to use for a web-based voting application? Probably not... Using MySQL is not cutting corners, it's a design decision... if MySQL suits the needs of the developers and their application, spending time switching to a real DBMS is a total waste. Ryan
Re: Not embedding SQL in perl
On Wed, 1 Aug 2001, kyle dawkins wrote: kd Well, yes and no. I was citing that example as *another* reason to keep kd SQL out of your application-level code. kd If you do, as Henrik suggests, write pure SQL92, then obviously you kd wouldn't need to wrap all your SQL in ifs like they did with kd wwwthreads... you could just switch out MySQL and switch in Filemaker kd Pro if it supported SQL92 and had a DBD module :-). I maintain, kd however, that SQL embedded in application logic is evil in all but the kd simplest of scripts. Putting it in middleware is mandatory; I don't take kd issue with that. I am not against removing redundancy and creating functions/methods for code that is used more than once, so that you don't do the same SQL query at several places in your code. But that is good programming practice within your own classes/modules. Abstracting everything to a SQL class only moves your SQL there, and probably causes severe limitations when you want to do something advanced. Maybe if you were writing a data abstraction layer and API for some other programmers, but if you have a database that you know only your script will use, writing an extra abstraction seems like overkill. I could see a use for abstraction if we were going to support several different query languages, but as long as we only use SQL, my belief is that DBI is abstraction enough to maintain DBMS interoperability. And of course only use SQL92. Someone once said that more than four (4) abstraction levels is counterproductive. I can see both sides in real life. It all comes down to what kind of application development you are doing. And writing your SQL in your main Perl code now does not make it impossible to abstract it into its own class in the future. But I have seen whole applications go under because they have been so heavily abstracted that in the end no one is even sure what happens anymore - and then of course, class/object operations in Perl 5 are not the fastest either.
Regards, Henrik
Re: Not embedding SQL in perl (was RE: [OT] Inspired by closingcomments from the UBB thread.)
On Thu, 2 Aug 2001, Gunther Birznieks wrote: When you've had your fill of wrestling over mySQL vs PostGres and stored procs versus inline SQL (I know I have long ago) You guys should definitely read the following: http://www.ambysoft.com/persistenceLayer.html One of my current coworkers turned me on to this. I have found it to be one of the best series of articles related to what it takes to abstract the database away from your object layer and the various levels at which it makes sense to do so. You may find the design a little complex, but Scott pretty explicitly states that this is what is necessary for a *large* system. You can always go down a less complex path by choice if you feel your programs aren't complex enough to need the full Persistence Layer structure he advocates. I've worked with Scott Ambler, and I could record everything Scott Ambler knows about actually developing large systems on the head of a pin, using a magic marker. That guy is a hopeless academic without the slightest clue of how to actually make software happen. Here's the brutal truth about persistence abstractions using an RDBMS backing store. At some point, your DBA is going to come up to you and tell you that your code is too slow. You need to rewrite some SQL queries to use a different index, or some sorting hints, or whatever. You will realize that you need to pass some extra information down through your abstraction layers to make it all happen. After that happens twice or thrice, you will slowly come to realize that your abstraction is really no abstraction at all: every time the schema changes, the top-level interface needs to change as well. -jwb
RE: [OT] Inspired by closing comments from the UBB thread.
Nick, Thanks for the comments. Actually, we use something like the example code now and can do selects from multiple tables (TABLES => ['table1', 'table2', 'table2 as someAlias']), can do inner and outer joins, order by clauses, binding values - just about anything we want with straight SQL. Essentially, our Database.pm delivers $dbh and the modules create their own $sth, so what we do and what you do probably isn't very far apart. I was shocked at how much response the thread generated, so I thought that maybe a solution was warranted, and I just want to give something back. I still think the solution I've outlined is not the best, but it may be a good solution for a lot of people. Thanks everyone for the comments. I can see from the responses this is something everyone deals with every day, and that I'm not alone out here wondering if my solution is the right one or not. --Joe Breeden -- Sent from my Outlook 2000 Wired Deskheld (www.microsoft.com)
Re: segfault w/ Apache 1.3.20, mod_perl 1.26
On Sun, 22 Jul 2001, Richard L. Goerwitz III wrote: I apologize if this problem has already been identified and solved. After upgrading from mod_perl 1.25 to mod_perl 1.26 I fired up an Apache server instance that uses a config file with an extensive set of Perl/Perl sections. I'm using the Perl that came with my Linux (RedHat 7.0) machine, namely 5.6.0. i can't reproduce with 5.6.1. can you post a Perl section that produces the segv?
Re: Can't load mod_perl in Solaris 8
On Fri, 13 Jul 2001, Jie Gao wrote: On Thu, 12 Jul 2001, Doug MacEachern wrote: pity perl -V does not report usebincompat5005, if you are trying to build modperl as a dso, Makefile.PL should have warned you: Your current configuration will most likely trigger core dumps, suggestions: *) Do not configure mod_perl as a DSO *) Upgrade your Perl version to 5.6.0 or higher (w/ -Ubincompat5005) *) Configure Perl with -Uusemymalloc (not recommended for performance) This is different from what I have been hearing for the past few years: Solaris' malloc is better than perl's. fyi.. Message-ID: [EMAIL PROTECTED] Date: Wed, 18 Jul 2001 11:40:07 +0100 From: Alan Burlison [EMAIL PROTECTED] To: Doug MacEachern [EMAIL PROTECTED] CC: [EMAIL PROTECTED], Alan Burlison [EMAIL PROTECTED] Subject: Re: solaris malloc Doug MacEachern wrote: seeing mixed reviews with regards to performance, README.solaris says: =head2 Malloc Issues with Perl on Solaris. Starting from Perl 5.7.1 Perl uses the Solaris malloc, since the perl malloc breaks when dealing with more than 2GB of memory, and the Solaris malloc also seems to be faster. but this message from alan says: http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2001-01/msg00465.html A bit more can be squeezed out if you use the perl malloc putting aside the 2GB limit issue, curious if there are any numbers out there on solaris malloc vs. perl malloc? An interesting question. The answer as to which is faster is 'it depends'. The answer will depend on: o Which Solaris version are you using (malloc has been changed more or less with every release) o Is perl built MT or not, and if so, how many CPUs is it using. o What is the allocation profile. And I'm sure I could think of a few other variables as well. Perl *should* be better with its own malloc, as it has been written with knowledge of the likely allocation behaviour of perl.
As an aside, a paper was presented at this year's Usenix describing the implementation of the Solaris kernel slab allocator, which is an arena-based object-caching allocator. It stores partially constructed objects, so that a malloc/free/malloc of the same object doesn't have to totally de/reinitialise the object every time. A userland port of this allocator is also described, along with some performance comparisons of other malloc implementations. The abstract is at http://www.usenix.org/event/usenix01/bonwick.html, but you need Usenix membership to download the paper. If anyone is interested, I'll try and get permission to send them a copy. The existing arena allocation in perl5 is quite similar in intent to the slab allocator, so the paper might be useful background reading for the perl6 allocator. Alan Burlison
Re: Errors when trying to use AuthAny.pm
The error log message is: [Wed Jul 11 09:04:59 2001] [error] (2)No such file or directory: access to /tools/ failed for nr2-216-196-142-76.fuse.net, reason: User not known to the underlying authentication module question is where does this error message come from? it's not from apache or mod_perl or AuthAny.pm. you must have some sort of custom auth module installed.
Re: can't start apache-1.3.20 with mod_perl and Mason
On Fri, 13 Jul 2001, Louis-David Mitterrand wrote: * On Wed, Jul 11, 2001 at 08:09:20AM -0700, Doug MacEachern wrote: On Wed, 11 Jul 2001, Louis-David Mitterrand wrote: Will I have to build a debugging-enabled libperl to get relevant information? Or is this enough to understand the problem? libperld would help, all i can tell is that something in %SIG is being caught, which normally shouldn't happen at startup. are you assigning anything to %SIG ? you could also try this to get the perl filename:line where the segv happens: (gdb) source mod_perl-x.xx/.gdbinit (gdb) curinfo Thanks again Doug for taking the time to help. Here is the output from curinfo: Program received signal SIGSEGV, Segmentation fault. 0x402b14b6 in Perl_sighandler () from /usr/lib/libperl.so.5.6 (gdb) source .gdbinit (gdb) curinfo Attempt to extract a component of a value that is not a structure pointer. (gdb) Does that help a little? nope. if you could a libperld and build mod_perl with PERL_DEBUG=1 that might help (see SUPPORT doc for howto)
Re: Prob w/make test - server doesn't warm up
On Sun, 15 Jul 2001, Joan Wang wrote: I am getting the same exact problem on RedHat7.0. I was wondering if there is a solution to this access permission problem? sounds like it; when 'make' and 'make test' are done as root, things break. The strace.out looks like this: accept(16, which means the server has indeed started and is awaiting connections, sounds like the permissions problem.
Re: can't start apache-1.3.20 with mod_perl and Mason
On Mon, 16 Jul 2001, Louis-David Mitterrand wrote: * On Wed, Jul 11, 2001 at 08:09:20AM -0700, Doug MacEachern wrote: libperld would help, all i can tell is that something in %SIG is being caught, which normally shouldn't happen at startup. are you assigning anything to %SIG ? you could also try this to get the perl filename:line where the segv happens: (gdb) source mod_perl-x.xx/.gdbinit (gdb) curinfo OK, I rebuilt a debugging libperl and here is the gdb output: hmm, this is a totally different stack trace. have you tried modperl-1.26? or try adding this to httpd.conf: PerlSetEnv PERL_DESTRUCT_LEVEL 2
Re: swapping of mod_perl parent process / mlockall()
On Mon, 16 Jul 2001, Adi Fairbank wrote: Actually, I don't want child processes to inherit the page locks across a fork. I just wanted to experiment with performance issues when only the parent process is locked in memory. (I have a theory that when the parent process swaps to disk, the swapped pages become unshared for the rest of the server's life) I was hoping you could give me a hint as to where in the source code I could call mlockall(), e.g. file mod_perl.c, line NNN.. you should just be able to create a module with a .xs something like:

    #include "EXTERN.h"
    #include "perl.h"
    #include "XSUB.h"
    #include <sys/mman.h>

    MODULE = Mlock          PACKAGE = Mlock

    int
    mlockall()
        CODE:
            RETVAL = (mlockall(MCL_CURRENT) == 0);
        OUTPUT:
            RETVAL

and call Mlock::mlockall(); in a startup script.
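[A hypothetical startup.pl fragment showing the call Doug suggests - it assumes the Mlock XS module above has been built and installed:]

```perl
# startup.pl (loaded via PerlRequire before Apache forks children):
# lock the parent's pages into memory so they can't be swapped out.
use Mlock ();

Mlock::mlockall()
    or warn "mlockall() failed: $!\n";
```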
Re: Not embedding SQL in perl (was RE: [OT] Inspired by closing comments from the UBB thread.)
At 02:44 PM 8/1/2001 -0700, Jeffrey W. Baker wrote: On Thu, 2 Aug 2001, Gunther Birznieks wrote: When you've had your fill of wrestling over mySQL vs PostGres and stored procs versus inline SQL (I know I have long ago) You guys should definitely read the following: http://www.ambysoft.com/persistenceLayer.html One of my current coworkers turned me on to this. I have found it to be one of the best series of articles on what it takes to abstract the database away from your object layer and the various levels at which it makes sense to do so. You may find the design a little complex, but Scott pretty explicitly states that this is what is necessary for a *large* system. You can always go down a less complex path by choice if you feel your programs aren't complex enough to need the full Persistence Layer structure he advocates. I've worked with Scott Ambler, and I could record everything Scott Ambler knows about actually developing large systems on the head of a pin, using a magic marker. That guy is a hopeless academic without the slightest clue of how to actually make software happen. I suppose I can't comment on your opinion as I do not personally know him. But I find his statements to be worthy (as explained further below) regardless of what you say about his real-world knowledge. So I can only imagine that he has taken in many comments from users over the years and made up his articles based on feedback, since I think this one in particular is reasonable. Although I've never had to implement all 6 or so object abstractions in the ultimate persistence layer he recommends. :) Here's the brutal truth about persistence abstractions using an RDBMS backing store. At some point, your DBA is going to come up to you and tell you that your code is too slow. You need to rewrite some SQL queries to use a different index, or some sorting hints, or whatever.
You will realize that you need to pass some extra information down through your abstraction layers to make it all happen. After that happens twice or thrice, you will slowly come to realize that your abstraction is really no abstraction at all: every time the schema changes, the top-level interface needs to change as well. I can't say that I agree. It depends on what you are coding for. Are you coding for performance, or are you coding for getting a product out there that is easy to maintain? In many cases, these two requirements are quite at odds. This thread was originally sparked by someone getting annoyed that SQL was embedded throughout the code and finding it hard to grasp how to deal with this. While it's true that the best performance comes from hand-coding the SQL, and if you hand-code the SQL it should arguably be close to the section of code that requires it, not all programs require this. In fact, very few in my experience. Those that have required speed have required it for a small subset of operations in a larger project. I strongly believe many apps can get away without having SQL embedded. I've been doing it for the last several years, and coding and maintenance time definitely improve with some persistence layer abstraction. But yes, you run the risk of having to go back and hand-code a SQL statement or two, and you run the risk of somewhat lower performance; but as Scott mentions in his article, these should be the well-documented exception, not the rule. Nick Tonkin posted a very clear and well written post a few minutes ago about embedding SQL close to the code, which may demonstrate the opposite of what I am trying to say. But on the other hand, I can understand that a company such as ValueClick really has to make sure their databases and the SQL that accesses them are completely tweaked.
So I think given speed requirements, making a HERE document and using other clean-coding techniques to keep the SQL close to the code that needs it is quite reasonable. However, in my experience... Of the things that are harder to duplicate in a persistence layer to one degree or another:

- Not all applications require transactions
- Not all applications require aggregation beyond count
- Not all applications require blinding speed (just decent speed)
- Not all applications require joins
- Not all applications require unions
- Not all applications require subselects

And even if you would argue that, taking into account a union of probabilities, an application may need at least one of the above, I have found it simply is not true. Usually when an application has a fairly complex data model it needs more than one of the above, and that's when you have to move to SQL. In other words, if the probability that an app needs each of the features above is 5%, then rather than the union of the probabilities being 5 + 5 + 5 + 5 + 5 + 5, it is really more like 8%, where the majority of that 5% is really in applications that need more than one item from the advanced SQL list above.
Re: Overwriting the Basic Password
On Wed, 18 Jul 2001, Arthur M. Kang wrote: Is there a reverse to the ($res, $password) = $r->get_basic_auth_pw function? Is there any way to globally set or reset the values that come out of $r->get_basic_auth_pw? Can I set a new password to come out? You can do it with the user ($c->user)... $r->header_in(Authorization => 'Basic ' . MIME::Base64::encode(join ':', $username, $password));
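As a self-contained sketch of the header value the reply above builds, the snippet below constructs the Basic Authorization string you would hand to $r->header_in() under mod_perl 1.x. The credentials are made up for illustration, and passing '' as the second argument to encode() is a refinement over the one-liner above: it suppresses the trailing newline MIME::Base64 otherwise appends, which would corrupt the header.

```perl
use strict;
use warnings;
use MIME::Base64 ();

# Illustrative credentials only; in a handler you would then do:
#   $r->header_in(Authorization => $value);
my ($username, $password) = ('arthur', 'secret');

# encode(..., '') sets the line terminator to empty, so no stray
# newline ends up inside the header value.
my $value = 'Basic ' . MIME::Base64::encode(join(':', $username, $password), '');
print "$value\n";  # Basic YXJ0aHVyOnNlY3JldA==
```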
Re: libapreq build error
On Tue, 24 Jul 2001, brian moseley wrote: hiya. trying to build the latest cpan version of libapreq with perl 5.6.1 + use5005threads, apache 1.3.20, mod-perl 1.25. got this error: Request.xs: In function `upload_hook': Request.xs:230: `thr' undeclared (first use in this function) try adding a dTHX; in upload_hook, and anywhere else that complains about `thr'
Re: Error with apache with mod_perl
On Mon, 30 Jul 2001, Mauricio Amorim wrote: Hi, my name is Mauricio 2) I install mod_perl 1.1.26 with the following options: cd mod_perl_1.1.26 perl Makefile.PL APACHE_SRC=../apache.1.3.20/src USE_APACI=1 USE_DSO=1 you should have seen this warning: Your Perl is uselargefiles enabled, but Apache is not, suggestions: *) Rebuild mod_perl with Makefile.PL PERL_USELARGEFILES=0 *) Rebuild Apache with CFLAGS=-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 *) Rebuild Perl with Configure -Uuselargefiles *) Let mod_perl build Apache (USE_DSO=1 instead of USE_APXS=1) try the first option.
Re: What counts as a real DBMS?
OK, my 5c. My vote is for InterBase. Why?
+ small runtime size
+ zero administration
+ FK with CASCADE
+ I think it runs on more platforms than any other DB
+ SUSPEND in stored procs
+ stored procs can be used in the FROM clause
+ can run on less-powerful PCs
Personally I've used it on Win95, Win98, WinNT and Linux.
- no LIMIT and/or TOP
I had tried both MySQL and PostgreSQL a little bit.
- the thing I hated in PostgreSQL was that you can't use VIEWs with aggregating functions because of the way they are/were implemented... don't know if it is already corrected in newer versions.
- does it support FK already?!? (PSQL)
+ InterBase and PostgreSQL are multiversioning engines, so they don't need a transaction log (anyone with experience with the MS SQL transaction log? :) )
So I think InterBase counts as a REAL DB. :) = iVAN [EMAIL PROTECTED] =
require v.s. do in modperl
I have a CGI application where I do: require 'db.pl'; where db.pl defines some functions and variables related to connecting to the database, and then executes C<$dbh = DBI->connect(...)>. I tried to convert this application to modperl, but I ran into the problem that require did not execute db.pl again the second time I called the script, so that the C<$dbh = DBI->connect(...)> line was not executed. I can get around this by changing C<require> to C<do>, but is that the correct way of doing things? It seems a waste to redefine all the subroutines and variables again. But I do need it to reinitialize $dbh when C<require 'db.pl';> is called. What should I do?
Re: Bug??
Thanks for that. However, I've already seen this. The problem is that I'm requesting pages at: http://my.server.com/perl/blah.pl and also https://my.server.com/perl/blah.pl Now these should be different scripts, and Apache is set up with completely different document and perl roots for the http and https servers. Unfortunately, these still get confused, even with the NameWithVirtualHost code. Hence, I thought of hacking the .pm files to include the server port as well as the name in the uniquely generated namespace. Any other ideas?? Yours, Chris Rodgers [EMAIL PROTECTED] On Tue, 31 Jul 2001, Stas Bekman wrote: On Tue, 31 Jul 2001, Chris Rodgers wrote: Hi, I'm running Apache with mod_perl and mod_ssl. (Apache/1.3.20 (Unix) mod_perl/1.25 mod_ssl/2.8.4 OpenSSL/0.9.5a to be precise.) I am listening on both port 80 (HTTP) and port 443 (HTTPS) and serving perl scripts. There are two separate vhosts on the two ports - i.e. entirely different websites, but only one httpd (and associated set of children). Now, this works fine, except that if I try to access a script with the same name on the http site and the https site, I sometimes get the WRONG version. I think that this is because mod_perl is only using the server name (and not the port / protocol) when it builds its table for caching scripts. Does anyone know how to fix this? I was thinking of editing all the .pm files for mod_perl around where they refer to $NameWithVirtualHost - and adding the port onto the beginning of the unique identifier which is used to form the namespace for each script. Would this work - or might it break something else??! Any hints/tips would be greatly appreciated! http://perl.apache.org/guide/multiuser.html#Virtual_Hosts_in_the_guide _ Stas Bekman JAm_pH -- Just Another mod_perl Hacker http://stason.org/ mod_perl Guide http://perl.apache.org/guide mailto:[EMAIL PROTECTED] http://apachetoday.com http://eXtropia.com/ http://singlesheaven.com http://perl.apache.org http://perlmonth.com/
Re: segfault with mod_perl, Oraperl, XML::Parser
On Mon, Jul 30, 2001 at 03:30:48PM -0400, Philip Mak wrote: On Mon, 30 Jul 2001, Scott Kister wrote: uselargefiles=define Have you tried turning off uselargefiles? I might be off track here, but recently I tried to install mod_perl on Solaris 5.8. It kept segfaulting until I turned off uselargefiles and binary compatibility with 5.00503. You could try recompiling perl with this configure line, then recompiling mod_perl and see what happens: sh Configure -des -Dcc=gcc -Ubincompat5005 -Uuselargefiles And -Uusemymalloc (or something like that) to get perl to use the system's own malloc. Tim.
Re: ODBC for Apache
On Fri, Jul 27, 2001 at 05:38:12PM -0700, Adi Fairbank wrote: Joshua Chamas wrote: Castellon, Francisco wrote: Hi I am running on Windows98SE, Apache 1.20, mod_perl 1.25, php 4.0.6, and have the latest Apache::ASP installed and have Activestate's Perl installed (build 626). I want to be able to access, Oracle, SQL and MSAccess databases as well as a couple of other Databases that support ODBC. I want to access these DBs from my apache server. What do i need?? DBD::ODBC is your ticket, running under DBI http://dbi.symbolstone.org/ You can probably install DBD::ODBC for Activestate perl with the ppm installer, as they will likely already have compiled it for you. Apache::DBI is installed with CPAN. What's the stability of DBD::ODBC like? It claims to be alpha software. Does anyone have any experience with it in a production environment? Is it at least stable with MS SQL server? The Alpha tag is a relic. I believe DBD::ODBC is fine and in widespread production use. Tim.
Re: system()/exec() ?
Hi, If I'm interpreting you correctly, you'll find that your scripts are actually executing correctly; you're simply not capturing their output, which, presumably, is what you want. The mod_perl docs mention that you can solve this by recompiling your perl installation to support sfio, but I've always found this a little extreme. My preferred solution is to use a perl module called Apache::SubProcess (search on CPAN) which redefines system()/exec() to work from within a mod_perl script. Cheers, Aaron On Tue, 31 Jul 2001, Mauricio Amorim wrote: Hi, I saw a discussion in April by Mike Austin about using the exec and system commands with mod_perl. Does anybody know if it is possible to use system and exec? I tried to use them, but the script doesn't execute and apache displays nothing in logs/error_log. Thank you
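A tiny sketch of the distinction Aaron describes: system() runs the command, but its output goes to the process's real STDOUT rather than mod_perl's tied STDOUT, so the client sees nothing. Backticks capture the output through a pipe, so printing the captured string does reach the client. The echo command here is just a stand-in for a real external program.

```perl
use strict;
use warnings;

# system("echo hello") would run fine, but its stdout bypasses the
# tied STDOUT that mod_perl sends to the browser. Capturing with
# backticks and printing avoids the sfio rebuild (or
# Apache::SubProcess) in simple cases:
my $output = `echo hello`;
print $output;  # under mod_perl this goes through the tied STDOUT
```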
Re: require v.s. do in modperl
I have a CGI application where I do: require 'db.pl'; where db.pl defines some functions and variables related to connecting to the database, and then executes C<$dbh = DBI->connect(...)>. snip I can get around this by changing C<require> to C<do>, but is that the correct way of doing things? No. Put the connect stuff in a subroutine and call it from your application. Things in the main section of a required file are only supposed to run once. - Perrin
Re: Apache::DBI Oracle LOB problem
Mmm, haven't seen it, but we use LONG instead of CLOB as the datatype for the sequence. Is there any reason to use CLOB, and does using LONG make the problem disappear? Oracle doesn't want you to use LONG anymore. It's deprecated. Questions for Steven: Have you followed all the documentation on using LOBs in DBD::Oracle? Are you sure that LongReadLen is set high enough? - Perrin
Re: require v.s. do in modperl
For what you are trying to do, you should turn it into a module. Sorry for the short post, I've gotta split... Although it's not user friendly, my more constructive hint is to type perldoc perlmod to get a quick tutorial on writing a module. At 06:56 PM 8/1/2001 -0400, Philip Mak wrote: I have a CGI application where I do: require 'db.pl'; where db.pl defines some functions and variables related to connecting to the database, and then executes C<$dbh = DBI->connect(...)>. I tried to convert this application to modperl, but I ran into the problem that require did not execute db.pl again the second time I called the script, so that the C<$dbh = DBI->connect(...)> line was not executed. I can get around this by changing C<require> to C<do>, but is that the correct way of doing things? It seems a waste to redefine all the subroutines and variables again. But I do need it to reinitialize $dbh when C<require 'db.pl';> is called. What should I do? __ Gunther Birznieks ([EMAIL PROTECTED]) eXtropia - The Open Web Technology Company http://www.eXtropia.com/
Perl on Apache
Hello all, I'm having trouble running a .cgi file on a virtual domain on my server. When I go to the file through a browser I just see the text, but it does not execute. I checked the permissions and they all are OK, so I figured maybe I don't have Perl installed properly. I'm running Red Hat 7.1. Is there any way to check the setup? Maybe in the httpd.conf file, to see if it's set up correctly? :---: Anthony Minero Creative Director PencilFight Design PENCILFIGHT.COM 2518 Lincoln Blvd. Los Angeles, CA. 90291 310.403.6599 :---:
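Seeing the raw script source in the browser usually means Apache is serving the file as plain text instead of executing it, i.e. the CGI handler isn't configured for that vhost. A minimal httpd.conf sketch (the path and extension mapping are illustrative; adjust to the vhost in question):

```apache
# Inside the <VirtualHost> block for the domain: map the .cgi
# extension to the CGI handler and permit execution in the
# directory holding the scripts.
AddHandler cgi-script .cgi
<Directory "/home/site/htdocs">
    Options +ExecCGI
</Directory>
```

After changing httpd.conf, restart Apache and check error_log if the script still doesn't run.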
Re: [OT] Inspired by closing comments from the UBB thread. (fwd)
Nicely put, Nick. There's already a Structured Query Language, and there's an easy-to-use abstraction called DBI up on CPAN. Feel free to use it in application code thusly:

my $statement = qq~ SELECT field1, field2 FROM table WHERE id = ? ~;
my $ref;
my $sth = $dbh->prepare($statement);
foreach my $question (@questions) {
    $sth->execute($question);
    $ref = $sth->fetchrow_hashref;
    $sth->finish;
    display_data($ref);
}

At the end of the day you're gonna have a $dbh somewhere and it's gotta receive some SQL to be useful. Hide it where you want to; I'll put it real close to where the data is going to be used (unless the data needs to be used from many different access points, in which case all that nasty :-) SQL goes into an OO module that understands how to provide:

my $handle = new foobar $dbh;
my $arrayref = $handle->gimme_foobar_data;

). -- Daniel Bohling NewsFactor Network The point is not that you can't abstract it all away as you show in your code below, it's that by the time you have covered all eventualities (sorts, groups, selects from multiple tables, et al.), your interface is so complicated you are basically paraphrasing the SQL in some new language of your invention. And that, if I am not mistaken, is the purpose of SQL in the first place! There is such a thing as over-abstraction, IMHO, and having played with this a lot, I have found that this type of effort would be such. Hope this helps, ~~~ Nick Tonkin On Wed, 1 Aug 2001, Joe Breeden wrote: Woooie!?! I didn't expect the firestorm this post would generate. From what I hear people are either embedding SQL or writing their own utility module to essentially do something along the lines of:

$s->StartDBI( DSN => 'somedsn_pointer' );
eval {
    $s->SelectSQL( NAME   => 'sql_select',
                   TABLE  => 'sometable',
                   FIELDS => ['field1', 'field2', 'field3'],
                   WHERE  => 'field1=?',
                   VALUES => $some_value_for_field1 );
    while ( my $return = $s->SQLGetArray( NAME => 'sql_select' ) ) {
        # do something with $return - maybe complete a template object?
    }
};
$s->EndDBI( DSN => 'somedsn_pointer', QUERIES => 'sql_select', RESULTS => $@ );
Re: [OT] Inspired by closing comments from the UBB thread.
On Wed, Aug 01, 2001 at 01:19:58PM -0500, Dave Rolsky wrote: On Wed, 1 Aug 2001, Kyle Dawkins wrote: I've taken a look at many of them (Tangram? a few others) and haven't been impressed with any of them. I think part of the problem is that they're all being developed in a bit of a vacuum. But let's capitalise on the interest that this thread has generated to start a push for something that we can all use. I think even the dudes who embed their SQL in perl could be made to realise the benefits if we all started using a common framework. Thoughts? Well, people are starting to use my tool, Alzabo (alzabo.sourceforge.net) and I'm getting feedback. More feedback about what people want is always welcome. FWIW, Alzabo gives you a reasonable amount of control over the SQL that is generated, if you need it. It doesn't yet allow optimizer hints but that will change in a future version. OTOH, if you really _need_ to get into the nitty-gritty details of SQL, it's hard to imagine that any abstraction layer would ever be satisfactory. I think DBIx::AnyDBD is a pretty good compromise. Tim.
Re: [OT] Inspired by closing comments from the UBB thread. (fwd)
On Wed, Aug 01, 2001 at 05:29:10AM -0700, Daniel wrote: Nicely put Nick. There's already a Structured Query Language, And there's an easy to use abstraction called DBI up on CPAN. Feel free to use in application code thusly: my $statement = qq~ SELECT field1, field2 FROM table WHERE id = ? ~; my $ref; my $sth = $dbh->prepare($statement); foreach my $question (@questions) { $sth->execute($question); $ref = $sth->fetchrow_hashref; $sth->finish; display_data($ref); } Umm, these days I'd write that loop as: foreach my $question (@questions) { display_data( $dbh->selectrow_arrayref($sth, undef, $question) ); } :-) Since ValueClick's been mentioned I'll point out that I now have the task of exploring how to migrate all the embedded SQL code that Nick mentioned from MySQL over to Oracle :-) [Hi Nick!] I'm not a big fan of heavy abstractions and I'm pretty comfortable with how much of the code is structured, in general. I'm hoping that a mixture of new DBD::Oracle and DBI features, possibly a DBD::Oracle::mysql subclass, and a sprinkling of DBIx::AnyDBD will prove sufficient. Combining that with using Oracle's ODBC gateway to make MySQL tables appear live within Oracle should enable a smooth migration without a sharp 'big bang' transition. Of course, all this is just theory at the moment. Tim.
Re: require v.s. do in modperl
At 07:16 PM 8/1/2001 -0400, Perrin Harkins wrote: I have a CGI application where I do: require 'db.pl'; where db.pl defines some functions and variables related to connecting to the database, and then executes C<$dbh = DBI->connect(...)>. snip I can get around this by changing C<require> to C<do>, but is that the correct way of doing things? No. Put the connect stuff in a subroutine and call it from your application. Things in the main section of a required file are only supposed to run once. I am not sure, but I don't think connect() is only supposed to run once, especially with Apache::DBI?
Re: require v.s. do in modperl
On Wed, 1 Aug 2001, Philip Mak wrote: I have a CGI application where I do: require 'db.pl'; where db.pl defines some functions and variables related to connecting to the database, and then executes C<$dbh = DBI->connect(...)>. I tried to convert this application to modperl, but I ran into the problem that require did not execute db.pl again the second time I called the script, so that the C<$dbh = DBI->connect(...)> line was not executed. I can get around this by changing C<require> to C<do>, but is that the correct way of doing things? It seems a waste to redefine all the subroutines and variables again. But I do need it to reinitialize $dbh when C<require 'db.pl';> is called. What should I do? One of the things you should do that I have not yet seen mentioned is look into using Apache::DBI so you don't have to reinitialize $dbh on every request. If you do have multiple DBs in your application you can still use cached database handles; just name them differently. - nick
Re: Prob w/make test - server doesn't warm up
Thanks for the reply. I was able to eliminate this problem by not using the PREP_HTTPD=1 option when building mod_perl. I used DO_HTTPD etc. instead, and that got rid of the problem. Doug MacEachern wrote: From: Doug MacEachern [SMTP:[EMAIL PROTECTED]] Sent: Wednesday, August 01, 2001 6:26:31 PM To: Joan Wang Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: Re: Prob w/make test - server doesn't warm up On Sun, 15 Jul 2001, Joan Wang wrote: I am getting the same exact problem on RedHat 7.0. I was wondering if there is a solution to this access permission problem? Sounds like it: when 'make && make test' are done as root, things break. The strace.out looks like this: accept(16, which means the server has indeed started and is awaiting connections; sounds like the permissions problem.
Re: require v.s. do in modperl
At 07:18 PM 8/1/2001 -0700, Nick Tonkin wrote: On Wed, 1 Aug 2001, Philip Mak wrote: I have a CGI application where I do: require 'db.pl'; where db.pl defines some functions and variables related to connecting to the database, and then executes C<$dbh = DBI->connect(...)>. I tried to convert this application to modperl, but I ran into the problem that require did not execute db.pl again the second time I called the script, so that the C<$dbh = DBI->connect(...)> line was not executed. I can get around this by changing C<require> to C<do>, but is that the correct way of doing things? It seems a waste to redefine all the subroutines and variables again. But I do need it to reinitialize $dbh when C<require 'db.pl';> is called. What should I do? One of the things you should do that I have not yet seen mentioned is look into using Apache::DBI so you don't have to reinitialize $dbh on every request. If you do have multiple DBs in your application you can still use cached database handles; just name them differently. But you should still call $dbh = connect() on every request so that Apache::DBI's magic can truly work. Otherwise ping tests and other such stuff will not work, and you may as well not use Apache::DBI at all and just use BEGIN { $dbh ||= connect() }, which, of course, is probably not working well.
Re: require v.s. do in modperl
Gunther Birznieks wrote: At 07:16 PM 8/1/2001 -0400, Perrin Harkins wrote: I have a CGI application where I do: require 'db.pl'; where db.pl defines some functions and variables related to connecting to the database, and then executes C<$dbh = DBI->connect(...)>. snip I can get around this by changing C<require> to C<do>, but is that the correct way of doing things? No. Put the connect stuff in a subroutine and call it from your application. Things in the main section of a required file are only supposed to run once. I am not sure, but I don't think connect() is only supposed to run once, especially with Apache::DBI? Right, and at the moment he has it in the main section, so it's only running once. He should move it to a sub and call it from his application so it gets run every time.
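A runnable, standalone sketch of the require-versus-do behavior this thread keeps circling: require records the file in %INC and skips it on later calls, while do re-executes it each time. The temp file below stands in for db.pl; in real code you would follow Perrin's advice and put the connect() call in a subroutine instead of the file's main section.

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# Create a stand-in for db.pl that bumps a counter each time it runs.
my $dir  = tempdir(CLEANUP => 1);
my $file = "$dir/db.pl";
open my $fh, '>', $file or die "can't write $file: $!";
print $fh 'our $count; $count++; 1;';
close $fh;

our $count = 0;
require $file;   # first require: the file runs, $count becomes 1
require $file;   # already in %INC, silently skipped - Philip's problem
do $file;        # do always re-executes: $count becomes 2
print "$count\n";
```

With Apache::DBI loaded at server startup, calling DBI->connect() from inside a subroutine on every request hands back a cached, ping-tested handle, which is the combination Nick and Gunther describe above.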
cvs commit: modperl-2.0/todo possible_new_features.txt
sbekman 01/08/01 21:38:12
Modified: pod modperl_dev.pod
          todo possible_new_features.txt
Log: document the issue with Apache::compat and CGI.pm
Revision  Changes  Path
1.31  +20 -0  modperl-2.0/pod/modperl_dev.pod

Index: modperl_dev.pod
===
RCS file: /home/cvs/modperl-2.0/pod/modperl_dev.pod,v
retrieving revision 1.30
retrieving revision 1.31
diff -u -r1.30 -r1.31
--- modperl_dev.pod 2001/07/20 01:48:11 1.30
+++ modperl_dev.pod 2001/08/02 04:38:11 1.31
@@ -580,7 +580,27 @@

 =back

+=head1 mod_perl 1.x Compatibility
+
+C<Apache::compat> provides the mod_perl 1.x compatibility feature, which
+allows C<Apache::Registry> from mod_perl 1.x to be used:
+
+  startup.pl:
+  ---
+  use Apache::compat ();
+  use lib ...; #or something to find 1.xx Apache::Registry
+
+then in I<httpd.conf>:
+
+  Alias /perl /path/to/perl/scripts
+  <Location /perl>
+      Options +ExecCGI
+      SetHandler modperl
+      PerlResponseHandler Apache::Registry
+  </Location>
+
+Notice that C<Apache::compat> has to be loaded before C<CGI.pm> if the
+latter module is used.

 =head1 Submitting Patches

1.4  +11 -0  modperl-2.0/todo/possible_new_features.txt

Index: possible_new_features.txt
===
RCS file: /home/cvs/modperl-2.0/todo/possible_new_features.txt,v
retrieving revision 1.3
retrieving revision 1.4
diff -u -r1.3 -r1.4
--- possible_new_features.txt 2001/05/08 22:26:00 1.3
+++ possible_new_features.txt 2001/08/02 04:38:12 1.4
@@ -65,6 +65,17 @@

 - core Apache::SubProcess w/ proper CORE::GLOBAL::{fork,exec} support

+- Apache::compat has to be loaded before CGI.pm; other than
+  documenting this issue, it's possible that we will add:
+
+    #ifdef MP_APACHE_COMPAT
+    modperl_require_module("Apache::compat");
+    #endif
+
+  if the MP_APACHE_COMPAT Makefile.PL option is true, but this carries a
+  performance hit, so this is just an option.

 new modules:
---
cvs commit: modperl-2.0/todo possible_new_features.txt
sbekman 01/08/01 23:11:18
Modified: todo possible_new_features.txt
Log: s/performance hit/bloat/
Revision  Changes  Path
1.5  +2 -2  modperl-2.0/todo/possible_new_features.txt

Index: possible_new_features.txt
===
RCS file: /home/cvs/modperl-2.0/todo/possible_new_features.txt,v
retrieving revision 1.4
retrieving revision 1.5
diff -u -r1.4 -r1.5
--- possible_new_features.txt 2001/08/02 04:38:12 1.4
+++ possible_new_features.txt 2001/08/02 06:11:18 1.5
@@ -72,8 +72,8 @@
     modperl_require_module("Apache::compat");
     #endif

-  if the MP_APACHE_COMPAT Makefile.PL option is true, but this carries a
-  performance hit, so this is just an option.
+  if the MP_APACHE_COMPAT Makefile.PL option is true. But this adds bloat,
+  so this is just an option to consider.

 new modules:
cvs commit: modperl-site/netcraft graph.jpg index.html input.data pseudo-graph.jpg
sbekman 01/08/01 08:25:02
Modified: netcraft graph.jpg index.html input.data pseudo-graph.jpg
Log: july updates
Revision  Changes  Path
1.11  +194 -195  modperl-site/netcraft/graph.jpg
Binary file

1.39  +2 -1  modperl-site/netcraft/index.html

Index: index.html
===
RCS file: /home/cvs/modperl-site/netcraft/index.html,v
retrieving revision 1.38
retrieving revision 1.39
diff -u -r1.38 -r1.39
--- index.html 2001/07/06 14:42:51 1.38
+++ index.html 2001/08/01 15:25:02 1.39
@@ -19,7 +19,7 @@
 <p>
 SecuritySpace provides yet
-<a href="http://www.securityspace.com/s_survey/data/man.29/apachemods.html">
+<a href="http://www.securityspace.com/s_survey/data/man.200107/apachemods.html">
 another report</a>. Make sure to click on the menu at the left to pick the latest month, since the link hardcodes the month.
 <p>
@@ -47,6 +47,7 @@
 <table cellpadding=3 border=1>
 <tr><td>Survey</td><td>hostnames</td><td>unique ip addresses</td></tr>
+<tr><td> July 2001 </td><td>2936558</td><td>281471</td></tr>
 <tr><td> June 2001 </td><td>2802093</td><td>273827</td></tr>
 <tr><td> May 2001 </td><td>2475367</td><td>265466</td></tr>
 <tr><td> April 2001 </td><td>2482288</td><td>256862</td></tr>

1.24  +1 -0  modperl-site/netcraft/input.data

Index: input.data
===
RCS file: /home/cvs/modperl-site/netcraft/input.data,v
retrieving revision 1.23
retrieving revision 1.24
diff -u -r1.23 -r1.24
--- input.data 2001/07/06 14:42:52 1.23
+++ input.data 2001/08/01 15:25:02 1.24
@@ -1,3 +1,4 @@
+July 2001    2936558  281471
 June 2001    2802093  273827
 May 2001     2475367  265466
 April 2001   2482288  256862

1.11  +51 -64  modperl-site/netcraft/pseudo-graph.jpg
Binary file
cvs commit: modperl-2.0/src/modules/perl modperl_io.c
dougm 01/08/01 09:52:41
Modified: src/modules/perl modperl_io.c
Log: better tracing of tie/untie STDIN/STDOUT
Revision  Changes  Path
1.3  +13 -7  modperl-2.0/src/modules/perl/modperl_io.c

Index: modperl_io.c
===
RCS file: /home/cvs/modperl-2.0/src/modules/perl/modperl_io.c,v
retrieving revision 1.2
retrieving revision 1.3
diff -u -r1.2 -r1.3
--- modperl_io.c 2001/07/13 17:12:12 1.2
+++ modperl_io.c 2001/08/01 16:52:40 1.3
@@ -11,14 +11,26 @@
 MP_INLINE void modperl_io_handle_untie(pTHX_ GV *handle)
 {
     sv_unmagic((SV*)handle, 'q');
+
+    MP_TRACE_g(MP_FUNC, "untie *%s(0x%lx), REFCNT=%d\n",
+               GvNAME(handle), (unsigned long)handle,
+               SvREFCNT((SV*)handle));
 }

 MP_INLINE void modperl_io_handle_tie(pTHX_ GV *handle, char *classname, void *ptr)
 {
     SV *obj = modperl_ptr2obj(aTHX_ classname, ptr);
-    modperl_io_handle_untie(aTHX_ handle);
+
+    if (mg_find((SV*)handle, 'q')) {
+        modperl_io_handle_untie(aTHX_ handle);
+    }
+
     sv_magic((SV*)handle, obj, 'q', Nullch, 0);
+
+    MP_TRACE_g(MP_FUNC, "tie *%s(0x%lx) => %s, REFCNT=%d\n",
+               GvNAME(handle), (unsigned long)handle, classname,
+               SvREFCNT((SV*)handle));
 }

 MP_INLINE int modperl_io_handle_tied(pTHX_ GV *handle, char *classname)
@@ -52,9 +64,6 @@
     IoFLUSH_off(PL_defoutgv); /* $|=0 */

-    MP_TRACE_g(MP_FUNC, "tie *STDOUT(0x%lx) => Apache::RequestRec\n",
-               (unsigned long)handle);

     TIEHANDLE(handle, r);

     return handle;
@@ -73,9 +82,6 @@
     if (TIED(handle)) {
         return handle;
     }
-
-    MP_TRACE_g(MP_FUNC, "tie *STDIN(0x%lx) => Apache::RequestRec\n",
-               (unsigned long)handle);

     TIEHANDLE(handle, r);