Re: templating system opinions (axkit?)
Jesse Erlbaum [EMAIL PROTECTED] wrote:

> It's mostly hype in my experience. And not even very useful hype, like Java or PHP, which are actually real things which people might want to use. XSLT seems to be XML geeks' answer to CSS+templating. As if CSS wasn't very successful, as if the world needed another templating system, XSLT seems to have been invented to take the creative work of designing web sites out of the hands of HTML designers, and put it in the hands of XPath programmers. You know. Programmers who are really good at both creative design and communicating with human beings. Not.
>
> Alright, pretty smarmy. But unless you just happen to have thousands of XML documents sitting around on your hard drive, XSLT is a solution in search of a problem. Most of my data is in an RDBMS -- not XML. To enhance the *need* for XSLT, some databases will now return XML. That's an interesting idea. Instead of using a mature language like Perl|Java|PHP, let's use something like XSLT to turn my data into a web page! It's new, shiny, and will solve the problem of TOO MANY people knowing the other aforementioned languages. D'oh!
>
> Too cynical? Maybe. The fact that XSLT is still discussed in serious company just bugs me. ;-)

This is a bit disorganized, but I'm trying to explain why different things have their place, at least in the work I'm doing. I am working on a project with the following simplified pipeline in an MVC environment:

    TT2 -> HTML::Mason -> AxKit -> Client

I use each of these for their strengths. I don't expect each one to do everything I need.

We want the people that know our customers the best to be the ones that provide the content for the site. These same people are not programmers. They do not like programming. They don't like being near code for fear they will mess something up. I will let them edit TT2 templates.
Since they don't like the Unix editors or CVS, I will provide (initially) a web interface for editing and a revision-controlled repository (Gestinanna::POF::Repository). The templates will produce XML so the author can concentrate on content and not worry about presentation.

We want others who understand the processes a customer can understand to be the ones writing the controllers. These are XML documents that define a state machine (StateMachine::Gestinanna) that walks a customer through a process to get something done (and applying the right XSLT can create the documentation for the state machine). But these same documents do not expose the full Perl language or the server, in the hope of having one less security hole to worry about. In fact, the applications can usually be prototyped without invoking the model or having any code run on a state/edge transition. Once the process flow is finalized, the model can be tied in. These are run in HTML::Mason and determine which template will be used to produce the XML.

The model is written as a set of Perl modules (e.g., Gestinanna::POF). The authors of these modules are trusted, usually the same people that are responsible for system security and operation. They can have full access to the server. The modules provide an OO interface to most business operations controlled by the controller.

The XML produced by the template is processed by AxKit to produce HTML, WML, or some other format usable by the customer's client. The other benefit of XSLT is that like content is treated in a consistent manner in the end document. Customers can always expect a particular content type to be in a particular format for a given document type without the person writing the content having to constantly check their work against a style sheet. If the person responsible for the layout and look of the site changes something, only the XSLT and CSS have to be changed.
Usually, only the CSS has to be changed unless there are major structural changes to the site. The look and layout of the site is done in Photoshop, not in IE. This allows someone that does know XSLT to go in and make sure the resulting HTML matches, for a wide range of browsers, what was done in Photoshop. We also tend to stick with the W3C recommendations instead of relying on proprietary features/bugs. Of course, we're also a state institution under certain legal restrictions regarding what we can do on the web.

So I'm using TT2, HTML::Mason and AxKit to work on a site using XML and XSLT. Each has its role based on personnel constraints that are outside the technical requirements of the project. Even so, it results in a highly customizable application that requires little effort at any particular point. I'm working on throwing SOAP and Jabber into the mix as well.

I haven't done any performance tuning yet. The primary focus of the application is security, then maintainability, then usability.

-- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
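For readers unfamiliar with the AxKit stage of the pipeline described above, a minimal stylesheet shows the idea: like content is matched once and rendered consistently everywhere. The element and class names here are hypothetical, not from the project being discussed.

```xml
<?xml version="1.0"?>
<!-- Hypothetical sketch: every <notice> element in the template output
     is rendered the same way site-wide; content authors never write
     the surrounding HTML themselves. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <xsl:template match="notice">
    <div class="notice">
      <h2><xsl:value-of select="@title"/></h2>
      <xsl:apply-templates/>
    </div>
  </xsl:template>

</xsl:stylesheet>
```

A layout change then means editing this one template (or the CSS it points at) rather than every document that contains a notice.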
Re: [RFC] web-messaging application for mod_perl
Adi Fairbank [EMAIL PROTECTED] wrote: On, or in the near vicinity of Tue, 15 Jul 2003 01:47:13 -0500

> Ok, I'm sold. Now I get the reason for not using such a generic name. In fact, I really like your suggestion Apache::App::Mercury. If you don't mind, I'll use that name! Do you mind?

Glad I could help. As far as I'm concerned, you are free to use the name. I don't have any particular claim to it myself.

-- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: [RFC] web-messaging application for mod_perl
Adi Fairbank [EMAIL PROTECTED] wrote: On, or in the near vicinity of Mon, 14 Jul 2003 18:49:58 +0300, Stas Bekman [EMAIL PROTECTED] has thus written:

> > Probably the best bet is to give it some cool unique name, like Apache::AdiChat and then you are all set, since you are not going to take over any future framework/namespaces...
>
> What's wrong with WebMessaging? Do you foresee that interfering with some future software in the Apache:: namespace, or is it just too generic? I thought it was a good name since it accurately describes what it is: not webmail, not instant messaging, but web messaging. (Basically, it's like those message boxes you get on a stock trading website when you login to your account.) Here are the possibilities:
>
>   1. Apache::WebMessaging
>   2. Apache::App::WebMessaging
>   3. Apache::SomeOtherUniqueName (e.g. ServerMessaging, or UserMessaging, or SystemMessaging)
>
> I personally prefer 1 or 2, so if there are no serious objections, I'll pick one of those. Let me know which you like the best.

As an aside, RFC 1178 has some ideas on host naming that might be useful here: http://www.ietf.org/rfc/rfc1178.txt?number=1178 . We're not talking about naming hosts, but the principles are similar. (I do make a suggestion on names in the penultimate paragraph.)

First, there are several things WebMessaging could mean: a Web e-mail client such as TWIG (in PHP) or SquirrelMail (I think in Perl), or a web interface for sending SMS messages to cell phones. There are protocols this can be done with: SOAP, XML-RPC, Jabber, Sun RPC, SMTP, etc. Some are more useful in certain situations than others. For customer-to-customer messaging, there are several different types: instant messaging, usually mediated via Java clients but sometimes through a reloaded web page (at least in olden times [4 years ago]), and store and forward (e.g., the WebCT internal e-mail system, whereby customers can send messages to other customers without leaving the application).
There are probably others I haven't run into yet or that I've forgotten about. From what I can see from your description and a brief look at some of the code, you are doing a small portion of what web messaging can mean: customer-to-customer, store and forward messaging. Because you don't cover all the possibilities (and it would be unreasonable to expect anyone to do so), I would discourage using such a generic name.

There are other applications on CPAN that use somewhat fanciful names that have a connection to the application:

o I've used Uttu (a Sumerian goddess of weaving) for an application framework framework and Gestinanna (... of record keeping, iirc) for a system/customer account management application. (Neither of these is `popular' or finished enough to warrant any significant attention -- I use them only as examples.)

o Dave Rolsky's used Alzabo (``The red orbs of the alzabo were something more, neither the intelligence of humankind nor the innocence of the brutes. So a fiend might look, I thought, when it had at last struggled up from the pit of some dark star.'' -- Gene Wolfe, _The Sword of the Lictor_) for an RDBMS schema management and data access system.

o Jonathan Swartz chose Mason for a component-based templating system.

There's OpenInteract, Bricolage, Tangram, AxKit, etc., all of which have names only loosely tied to what they are doing. Having unique names like these helps in several ways. First, they don't preclude others from entering the same `market,' which can be seen as part of the TMTOWTDI tradition in Perl. Second, they serve to brand the application. If you give a talk about Web Messaging, what do people expect? We're back to the survey above. On the other hand, a talk about a particular name, such as Apache::App::Mercury, might let people know more quickly what you are wanting to discuss.

Finally, you might want to change the version from 0.80pre1 to 0.80_01 -- CPAN might get confused by the first format.
-- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
ANNOUNCE: Gestinanna::POF 0.04
Gestinanna::POF is yet another persistent object framework. It supports data accessible via Alzabo, Net::LDAP, and MLDBM (or MLDBM::Sync), as well as combinations of these, using a uniform API.

Changes in this version:

o LDAP support is on par with support for the other data stores.
o The EXISTS operation is supported in searches.
o Gestinanna::POF is the basis for the revision controlled data store in Gestinanna::POF::Repository.

Work is underway on a data store module that will allow remote access to Gestinanna::POF objects via SOAP (this will include both the client and server) while still maintaining security.

The uploaded file Gestinanna-POF-0.04.tar.gz has entered CPAN as

  file: $CPAN/authors/id/J/JS/JSMITH/Gestinanna-POF-0.04.tar.gz
  size: 63535 bytes
   md5: 9320a57904fd6358dd01cf3adef16883

It is also available from the SourceForge project page: http://sf.net/projects/gestinanna/

-- James Smith [EMAIL PROTECTED], 979-862-3725 Senior Software Applications Developer, Texas A&M CIS Operating Systems Group, Unix
ANNOUNCE: Gestinanna::POF::Repository 0.01
Gestinanna::POF::Repository manages a revision controlled data store based on Gestinanna::POF::Alzabo, allowing revision-controlled documents without requiring direct filesystem access. This package uses CVS as its model but with some differences:

o Objects may not be deleted.

o If a revision greater than the currently modified one already exists, a branch is forced (e.g., if saving a modification to revision 1.2 and revision 1.3 already exists, then a branch is forced, creating a revision similar to 1.2.1.1 (or the first available branch number under 1.2)).

o This module does not track blame information -- blame tracking implementation is application-dependent.

o An object may have multiple attributes under revision control.

This requires Gestinanna::POF 0.03, also released at the same time as this package. Changes for Gestinanna::POF:

ENHANCEMENTS:
 - Added support for MLDBM::Sync to Gestinanna::POF::MLDBM.
 - Added private (unpublished) tests for Gestinanna::POF::LDAP. It should work with LDAP databases at least in read-only mode.

BUG FIXES:
 - Gestinanna::POF::Alzabo had an incorrect is_live call to see if it should insert or update. This was exposed by the Gestinanna::POF::Repository package.

The uploaded file Gestinanna-POF-Repository-0.01.tar.gz has entered CPAN as

  file: $CPAN/authors/id/J/JS/JSMITH/Gestinanna-POF-Repository-0.01.tar.gz
  size: 23973 bytes
   md5: 2781649b37773d5c31ecead6ade464f2

The uploaded file Gestinanna-POF-0.03.tar.gz has entered CPAN as

  file: $CPAN/authors/id/J/JS/JSMITH/Gestinanna-POF-0.03.tar.gz
  size: 36424 bytes
   md5: 7ed3395416ae518e76dd472f6ab34658

It may take a little while for these packages to make their way to your favorite mirror. They are also available on the SourceForge project page: http://sf.net/projects/gestinanna/

-- James Smith [EMAIL PROTECTED], 979-862-3725 Senior Software Applications Developer, Texas A&M CIS Operating Systems Group, Unix
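The forced-branch rule described above can be sketched as a small helper routine. This is hypothetical illustration code, not Gestinanna::POF::Repository's actual API: the function and the revision bookkeeping it assumes are inventions for this sketch.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch of the forced-branch rule: if the revision we
# modified already has a successor on its branch, start a new branch
# under it instead of rewriting history.
sub next_revision {
    my ($modified, $existing) = @_;   # e.g. "1.2", { "1.3" => 1, ... }
    my @parts = split /\./, $modified;
    $parts[-1]++;
    my $successor = join '.', @parts;
    return $successor unless $existing->{$successor};
    # Successor exists: branch under the modified revision, taking the
    # first free branch number (1.2 -> 1.2.1.1, then 1.2.2.1, ...).
    my $branch = 1;
    $branch++ while $existing->{"$modified.$branch.1"};
    return "$modified.$branch.1";
}

print next_revision("1.2", { "1.3" => 1 }), "\n";  # 1.2.1.1 (branch forced)
print next_revision("1.2", {}), "\n";              # 1.3 (normal successor)
```

The point of the rule is that a save can never clobber a revision someone else committed in the meantime; concurrent edits simply diverge onto branches.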
ANNOUNCE: Gestinanna::POF 0.02
Gestinanna::POF is a collection of modules providing an abstract persistent object framework intended for use by the Gestinanna application framework, though it may be used outside of that framework. Gestinanna::POF currently supports Alzabo, MLDBM, LDAP (limited testing), and aggregations of objects. Security is supported on an attribute basis instead of an object basis, providing finer granularity than most other persistent object frameworks. A rudimentary, abstract locking protocol is supported. Transaction support is still in development.

Gestinanna::POF tries to stress security over performance, so it may not perform as well as other frameworks. If you do not need attribute-level security, you will probably want to look at one of the more mature frameworks available on CPAN.

This version adds basic search functionality, e.g.:

  $cursor = $factory->find( user => (
      where => [ balance_due => qw(> 10) ]
  ) );
  while($id = $cursor->next_id) {
      ...
  }

The balance_due could be in an LDAP directory (though the LDAP code is still severely alpha quality), an MLDBM file, or an RDBMS table, or any combination of them. A new security attribute is used to manage searching ability: `search'. This version also fixes this module's author's misunderstanding/misreading of the Alzabo documentation (I hope).

Gestinanna-POF-0.02.tar.gz has entered CPAN as

  file: $CPAN/authors/id/J/JS/JSMITH/Gestinanna-POF-0.02.tar.gz
  size: 35610 bytes
   md5: 1223ccb17ee1a7b4e77989a876066d12

-- James Smith [EMAIL PROTECTED], 979-862-3725 Senior Software Applications Developer, Texas A&M CIS Operating Systems Group, Unix
ANNOUNCE: Gestinanna::POF 0.01
This is yet another persistent object framework, but designed for the Gestinanna application framework ( http://sf.net/projects/gestinanna/ ), though it can work outside that framework. Gestinanna::POF currently supports Alzabo, MLDBM, LDAP (limited testing), and aggregations of objects. Security is supported on an attribute basis instead of an object basis, providing finer granularity than most other persistent object frameworks. A rudimentary, abstract locking protocol is supported. Transaction support is still in development.

Gestinanna::POF tries to stress security over performance, so it may not perform as well as other frameworks. If you do not need attribute-level security, you will probably want to look at one of the other more mature frameworks also available on CPAN.

Gestinanna::POF::Alzabo is the basis for the forthcoming Gestinanna::POF::Repository, which manages revision controlled object collections in an RDBMS (waiting on a few more tests).

The uploaded file Gestinanna-POF-0.01.tar.gz has entered CPAN as

  file: $CPAN/authors/id/J/JS/JSMITH/Gestinanna-POF-0.01.tar.gz
  size: 29573 bytes
   md5: c0484a6516e0a3ae02fd7dfa29ef62b9

-- James Smith [EMAIL PROTECTED], 979-862-3725 Senior Software Applications Developer, Texas A&M CIS Operating Systems Group, Unix
[ANNOUNCE] Uttu 0.05
The uploaded file Uttu-0.05.tar.gz has entered CPAN as

  file: $CPAN/authors/id/J/JS/JSMITH/Uttu-0.05.tar.gz
  size: 54148 bytes
   md5: 29ac0663f8dce1037e8c52b4c20ea26e

For those feeling adventurous, Uttu (still in late-alpha/early-beta and needing a lot more documentation) is a web interface driver that provides support for web-based application frameworks and applications. Support is provided to install these from CPAN using the CPAN shell. Uttu requires Apache and mod_perl along with one of the content management systems (AxKit, HTML::Mason, or Apache::Template [TT2]).

Major changes from 0.04:

o Management of ResourcePool objects configurable from an XML file (requires XML::XPath for now).
o Better (more correct) support for AxKit.

-- James Smith [EMAIL PROTECTED], 979-862-3725 Senior Software Applications Developer, Texas A&M CIS Operating Systems Group, Unix
Re: Load balancers
Perrin Harkins [EMAIL PROTECTED] wrote:

> John Siracusa wrote:
> > But meanwhile, we're still open to alternatives. Surprisingly, there don't seem to be many (software) options. (A hardware load balancer is not an option at this time, but I'll also take any suggestions in that area :)
>
> I've always used hardware ones. I believe big/ip does everything you need. However, if I were going to use a software solution I would be looking at Linux Virtual Server, probably starting with the Red Hat offering based on it.

We're currently using a couple of big/ip switches, but don't have web servers behind them yet (using them for smtp and such at the moment). We're looking at using them or one of the switches from NetScaler (netscaler.com), which looked quite impressive. NetScaler is really built for web servers (or so it seems from our meetings with them) while big/ip is a more generic solution. Both big/ip and NetScaler allow sessions to be bound to a backend server, iirc, which can be a nice optimization (which we haven't had to take advantage of yet).

-- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: OSCON ideas - MVC talk
Nathan Torkington [EMAIL PROTECTED] wrote:

> Ask Bjoern Hansen writes:
> > On Wed, 8 Jan 2003, Perrin Harkins wrote: Like Perrin I would like feedback on the idea before putting in my proposal.
>
> I've also been asked if anyone has a wishlist of talks they'd like to see at the conference. Ideally they'd be talks I'd pay money to see but I could live with talks I'd like to see even though they're hard to justify to my boss. Feel free to brainstorm here as much as you want :-)

I've already submitted my proposal :/ But..

We've had toolkits such as HTML::Mason, AxKit, TT2, Embperl, etc., around for some time. Originally, these seem to have been developed as complete applications in and of themselves (my impression - could be wrong). But, as with anything that is well done, they are starting to be used in ways that perhaps the developers didn't foresee. For example, we now have Bricolage, OpenInteract, and a host of others (going on memory, not web pages here) that are application frameworks using HTML::Mason, AxKit, etc., as tools just as they might use File::Spec. I can't think of a way to use Bricolage or OpenInteract in the way that they use TT2 or some other toolkit, but I look forward to the day when someone figures out how to do that. :)

What I would find interesting would be some talks about what led to some of the design decisions in these frameworks. For example, why is authorization done the way it is -- what were the requirements that led to the data structures, etc.? What compromises were made (e.g., speed vs. granularity)? No one authorization system can meet the needs of all applications. The application frameworks represent a lot of the design work in creating a web application. Different applications have different needs in what the frameworks must support. Going over an existing framework in this kind of detail would be instructive for those needing to decide whether to use an existing framework (and which one, if so) or to write one from scratch.
One of the beauties of mod_perl is that it inherits the TMTOWTDI attitude of Perl. Unlike other environments, there isn't one framework, one exception structure, one authorization scheme. There are many. We can more easily fit our infrastructure to our application instead of our application to the infrastructure.

But for mod_perl to work well, developers need to be able to make educated choices. I think most people in mod_perl understand this and are well able to educate themselves when needed. But for someone new to Perl/mod_perl, the choices can be daunting (some complain that there are too many choices). A few talks along the line of educating people on what is there and why it is there might help them feel a bit more comfortable.

-- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: [RFC] Apache::LDAP
Chris Winters [EMAIL PROTECTED] wrote:

> On Sun, 2002-12-01 at 20:32, James G Smith wrote:
> > ( Actually, the name is chosen to `rhyme' with Apache::DBI. There are no dependencies on Apache or mod_perl. )
> >
> > NAME
> >     Apache::LDAP - provides persistent LDAP connections
>
> Does this overlap with the ResourcePool series of modules?

Thanks. I didn't realize that existed. A search wasn't bringing it up earlier on CPAN, but I probably wasn't using the right terms. It looks like it does a lot of what I want plus a bit more. I think I'll do more browsing of CPAN :/

-- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
[RFC] Apache::LDAP
( Actually, the name is chosen to `rhyme' with Apache::DBI. There are no dependencies on Apache or mod_perl. )

NAME
    Apache::LDAP - provides persistent LDAP connections

SYNOPSIS
    use Apache::LDAP;

    # use Net::LDAP
    my $connection = Apache::LDAP->new(
        connection => [
            # Net::LDAP->new parameters
        ],
        tls => [
            # Net::LDAP->start_tls parameters
        ],
        bind => [
            # Net::LDAP->bind parameters
        ],
    );

    # use Net::LDAPS
    my $connection = Apache::LDAPS->new(
        connection => [
            # Net::LDAPS->new parameters
        ],
        tls => [
            # Net::LDAPS->new tls parameters
        ],
        bind => [
            # Net::LDAPS->bind parameters
        ],
    );

DESCRIPTION
    This module initiates a persistent LDAP connection. The LDAP access uses the Net::LDAP family of modules. Since a usable LDAP connection usually requires two or three steps, Apache::LDAP is not a simple drop-in module in the manner of Apache::DBI. Code must be written to take advantage of Apache::LDAP.

    Unfortunately, LDAP connections are not sharable across processes. This means that connections must be made in Apache child processes, not in the startup process (i.e., during child initialization, not during server configuration).

    Connections are cached based on the class and the parameters passed to the new method. A previously cached connection is returned only if the class and parameters are identical to those used to create the connection and if the connection appears to be alive. The connection is tested by requesting the root DSA information from the server.

  Net::LDAPS Support
    Apache::LDAPS is included with Apache::LDAP to support Net::LDAPS. The parameters to new are identical. The tls parameters are included with the connection parameters when calling Net::LDAPS->new.

SUB-CLASSING
    A sub-class of Apache::LDAP may be useful if you want to use an LDAP module other than Net::LDAP.
    package My::Persistant::LDAP;

    use base qw(Apache::LDAP);

    sub make_connection {
        my($class, $params) = @_;
        # return new connection
    }

    sub test_connection {
        my($class, $connection) = @_;
        # return true if $connection is good
    }

    __END__

SEE ALSO
    the Net::LDAP manpage, the Net::LDAPS manpage.

AUTHOR
    James G. Smith [EMAIL PROTECTED]

COPYRIGHT
    Copyright (C) 2002 Texas A&M University. All Rights Reserved.

    This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

-- James Smith [EMAIL PROTECTED], 979-862-3725 Senior Software Applications Developer, Texas A&M CIS Operating Systems Group, Unix
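The sub-class stubs in the RFC above might be filled in along these lines. This is a sketch only: the method names and parameter layout follow the SYNOPSIS and SUB-CLASSING sections, but the bodies are guesses at a plausible implementation using the standard Net::LDAP calls (new, start_tls, bind, root_dse), not code from the actual module.

```perl
package My::Persistant::LDAP;
use strict;
use warnings;
use base qw(Apache::LDAP);
use Net::LDAP;

# Sketch: build a bound Net::LDAP connection from the parameters the
# caller handed to new() (layout as in the SYNOPSIS above).
sub make_connection {
    my ($class, $params) = @_;
    my $ldap = Net::LDAP->new( @{ $params->{connection} } ) or return;
    $ldap->start_tls( @{ $params->{tls} } ) if $params->{tls};
    my $mesg = $ldap->bind( @{ $params->{bind} } );
    return $mesg->code ? undef : $ldap;   # non-zero code means bind failed
}

# A cached connection is considered live if the root DSE (the "root DSA
# information" mentioned in the DESCRIPTION) is still readable.
sub test_connection {
    my ($class, $connection) = @_;
    return defined eval { $connection->root_dse };
}

1;
```

With this in place, the caching in the base class decides when make_connection and test_connection actually get called; the sub-class only supplies the transport details.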
Re: [RFC] Apache::LDAP
Per Einar Ellefsen [EMAIL PROTECTED] wrote:

> Hi James,
>
> At 02:32 02.12.2002, James G Smith wrote:
> > ( Actually, the name is chosen to `rhyme' with Apache::DBI. There are no dependencies on Apache or mod_perl. )
>
> If there is no link with Apache::DBI, I suggest that you choose a more appropriate namespace, like Persistent::LDAP

Unfortunately, that's already taken. The module on CPAN implements persistent objects with an LDAP backend (if I read the docs right). The problem is that it's generic enough to be made available without requiring a lot of misc. stuff (i.e., part of a larger package), but there's not a good place for it. At least, I'm not thinking of anything. It's also, imho, most useful in an Apache-like environment. I'll keep thinking about it.

<rant> (to no one in particular)

I've found this list to be far more helpful than the modules@perl list. I've submitted a fair number of modules to that list with little to no response. While I am thankful for those that have responded, it's never been enough to feel that there was closure. Keep in mind, this is my experience with several modules spread across three to five years (not sure when I first posted to the list). Other people have had different experiences. While I am willing to try it again, my experience makes me shy away from it like a child from a hot frying pan.

This rant is in no way meant to start flames. Please do not. I will not respond -- I am just trying to express a small part of my frustration that causes me to post RFCs to the mod_perl list instead of the modules list, even if they are on the edge (or beyond) of mod_perl OT-ness.

</rant>

-- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: AW: Apache::DBI and password security
"Faßhauer, Wolfgang, FCI3" [EMAIL PROTECTED] wrote:

> Hi,
>
> I want to build a database application based on mod_perl and Apache::DBI. The goal of Apache::DBI is to get persistent database connections using only one database user because of resource limits. The problem I see is that the password for connecting to the database is clearly readable in the perl script. Does anybody know how to hide that password? I think storing it in a file for reading by the script is not the right way (?). Thanks for help! - Wolfgang
>
> > Have you thought of running your webserver as some 'www' user? You can then make your scripts only readable by a 'dev' group which the www user and the developers are members of.
>
> Yes, that's our plan, too. But the risk still remains that someone will get a look at the script. I think there is a golden rule: Never put clear text passwords in files. Those files are stored in archives by backup, for example. There may be a lot of people (sysadmin, developer, ...) concerned with the webserver. So it's not easy to secure it.

Something we do is put the password in a file outside the document root. The script reads the file. If running with mod_perl, this can be in a file readable only by root, read during server startup (assuming the server starts up as root). Then the password can be cached in memory. If it changes, a graceful restart might be sufficient, but I haven't tried that yet -- most of our current code is PHP that we're working on replacing. The last time I played with mod_perl and graceful restarts was the early 1.2x or late 1.1x mod_perl and it didn't always work well, iirc. I think some of that has been fixed.

-- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
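The read-at-startup approach described above might look like this in a startup.pl. The file path and variable name are hypothetical; the point is only that the read happens while the server still runs as root, before it forks and drops privileges.

```perl
# startup.pl -- runs as root during server startup, before Apache forks
# its children, so the password file can be owned by root, mode 0400.
use strict;
use warnings;

our $DB_PASSWORD;   # cached in memory; children inherit it on fork

my $pw_file = '/etc/httpd/private/db-password';   # outside the docroot
open my $fh, '<', $pw_file
    or die "Cannot read $pw_file: $!";
chomp($DB_PASSWORD = <$fh>);
close $fh;

# The unprivileged 'www' user never needs read access to the file
# itself -- only the in-memory copy exists after startup.
```

The password never appears in a script under the document root and never needs to be readable by the web server's runtime user.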
Re: sending ssl certificate according to virtual host
Mathieu Jondet [EMAIL PROTECTED] wrote:

> Hi all,
>
> I'm currently working on a system where a user can create domains/subdomains through a web interface and doesn't have to interact with the httpd.conf. For this I use a unique virtual host which intercepts all client requests no matter which vhost is requested. Afterwards a handler treats the request and fetches the data from where it should be fetched. Everything is working fine, but I would like to add SSL support to the system. I want to be able to send the SSL certificate and key files for the requested virtual host. Depending on the vhost requested, I set the SSLCertificateFile and SSLCertificateKeyFile which will point to the correct SSL files for the requested vhost. Is there a way of doing this? All input appreciated, and I hope my explanations are clear enough on what I want to do.

HTTP rides on top of SSL/TLS. The SSL connection is established and certificates exchanged before any HTTP request is sent. The SSL certificate must be configured on a per-IP-address basis. You might want to look into a certificate for a wildcarded domain (e.g., *.mydomain.com) and have that handle all the subdomains. I think that's possible, but I'm not positive. We use fully qualified domain names ourselves.

-- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
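The wildcard-certificate suggestion above amounts to one certificate per listening address. A configuration sketch (the address and file paths are made up for illustration; directive names are standard mod_ssl):

```apache
# One IP address, one certificate: a wildcard cert for *.mydomain.com
# lets a single SSL virtual host answer for all the subdomains, since
# the certificate is chosen before any Host: header arrives.
Listen 192.0.2.10:443

<VirtualHost 192.0.2.10:443>
    ServerName  www.mydomain.com
    ServerAlias *.mydomain.com
    SSLEngine on
    SSLCertificateFile    /etc/httpd/ssl/wildcard.mydomain.com.crt
    SSLCertificateKeyFile /etc/httpd/ssl/wildcard.mydomain.com.key
</VirtualHost>
```

The catch-all handler described in the question can then dispatch on the Host header as before; only the certificate has to be shared across the subdomains.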
Re: OO handlers
Richard Clarke [EMAIL PROTECTED] wrote:

> Now I feel stupid. $sub->handler was supposed to be "$sub->handler". That's what you get for being impatient.

Or perhaps `sub { $sub->handler(@_) }' -- if quoting works, great, but I would fear that "$sub->handler" would stringify before push_handlers got called.

-- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
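The closure suggestion in the reply can be spelled out. A sketch, with `$r` standing for the Apache request object and `$sub` for the handler object from the thread:

```perl
# A closure sidesteps the stringification question entirely:
# push_handlers only ever sees a CODE reference, and $sub->handler
# is not resolved until the handler actually runs for a request.
$r->push_handlers( PerlHandler => sub { $sub->handler(@_) } );
```

By contrast, interpolating `$sub->handler` into a string commits to one particular spelling of the method call at registration time, which is exactly the fragility being worried about above.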
Re: Same $dbh under different pids?
Perrin Harkins [EMAIL PROTECTED] wrote:

> harm wrote:
> > On Wed, Oct 30, 2002 at 06:05:51PM +0800, Philippe M. Chiasson wrote:
> > > For the same reason that running this:
> > >
> > >   $ perl -e 'fork; { $foo = {}; print "$$:$foo\n" }'
> > >   1984:HASH(0x804c00c)
> > >   1987:HASH(0x804c00c)
> > >
> > > produces this for me, every single time I run this program. You are assuming that if (0x804c00c) is equal in different processes, they must be pointers (or references, or handles) to the same thing. And it is not the case ;-)
> >
> > Wait, isn't it the case? That number is supposed to be the location in memory.
>
> It seems like these are all pointing to the same hash. I can't explain how that would happen though, based on the code shown here.

The same address in two different applications doesn't always point to the same place in physical memory. Virtual memory address != physical memory address on most `modern' processors. This is what allows copy-on-write to work for Apache children -- all the addresses are the same, but the data is different.

-- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
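The copy-on-write point can be demonstrated directly: after a fork, parent and child report the same virtual address for a hash allocated before the fork, even once their contents diverge (a sketch; the printed addresses will vary by run):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# %h is allocated before the fork, so both processes see it at the
# same virtual address.  Copy-on-write gives each process its own
# physical page the moment one of them writes to the data.
my %h = ( who => 'parent' );

my $pid = fork;
die "fork failed: $!" unless defined $pid;

$h{who} = 'child' if $pid == 0;    # diverge in the child only
printf "%d: %s -> %s\n", $$, \%h, $h{who};

exit 0 if $pid == 0;
waitpid $pid, 0;
```

Both lines show the same HASH(0x...) reference but different values, which is exactly the "same address, different data" situation described above.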
[ANNOUNCE] StateMachine::Gestinanna 0.06
This module can be used for some of the same applications as CGI::Application. It can also be used outside the web environment. It does not handle sessions and other application-dependent data management.

This is a fairly complete object oriented approach to building state machines. Both ISA and HASA relationships are supported. No profiling has been done yet. YMMV.

Significant changes:

o Added a can() method, taking two states, to see if code exists that should be run during a transition between the two states.

o Added an overrides key for edge transition definitions so a child state machine can mask certain variables (override the data from the client, for example).

o Added a Mail Form example based on the similar example in CGI::Application so there's some basis for comparison (easier to find the right tool for the job).

The uploaded file StateMachine-Gestinanna-0.06.tar.gz has entered CPAN as

  file: $CPAN/authors/id/J/JS/JSMITH/StateMachine-Gestinanna-0.06.tar.gz
  size: 13717 bytes
   md5: efd82d1b7638699fdd53b542c4350643

Allow a few hours for it to propagate to the mirrors. It is also available at the SourceForge project: http://sf.net/projects/gestinanna/

-- James Smith [EMAIL PROTECTED], 979-862-3725 Senior Software Applications Developer, Texas A&M CIS Operating Systems Group, Unix
Re: [ANNOUNCE] StateMachine::Gestinanna 0.06
William McKee [EMAIL PROTECTED] wrote:

> On 25 Oct 2002 at 1:25, James G Smith wrote:
> > This module can be used for some of the same applications as CGI::Application. It can also be used outside the web environment. It does not handle sessions and other application-dependent data management.
>
> Hi James,
>
> I've been following your posts about Gestinanna because I currently use CGI::Application and really like it. Can you give any reasons why someone would want to use your module instead? I'm all for having choices but like to know the rationale behind the different distributions of similar modules.

No problem. Almost all (just in case there are some that aren't, especially in Acme::) modules on CPAN are there because they solve some problem. Even modules that do essentially the same thing are solving slightly different problems. I think this is the case with CGI::Application and my module. We have essentially the same problem, but slightly different requirements. I'm concerned more with certain security and OO aspects than with initial ease of use. CGI::Application is good at what it does, from what I can tell, but doesn't address those two sufficiently for me to be comfortable using it where I am working. Thus the difference in design.

I'm working on an application framework for an account/system administration web interface. I'm having to think reliability and security. At the same time, I'm having to consider ease of use for those developing applications within the framework. I chose to follow the MVC paradigm (discussed some time ago on this list) to accomplish this. The controller is basically a state machine. It looks at what comes in from the client, decides what needs to be done and what view needs to be sent back. Thus StateMachine::Gestinanna.
It manages state based on the data from the client (instead of relying on the client to tell it the next state) with minimal requirements for how the application does views or manages persistence (sessions and contexts) and client interaction. I'm also trying to develop a system that allows applications to be distributed on CPAN and installed as any module on CPAN would be installed. Uttu is the basis for that system, but the idea spilled over a little into this module and drove the development of the object-oriented features. I can write an abstract state machine that has holes in it, but has enough there to guide the development of a useful state machine. For example, we have a process for activating an account on one of our systems. It involves providing sufficient information to convince us you are who you say you are, agreeing to the terms of service, selecting usernames and a password, and confirming that you want what you selected. I would not distribute the finished state machine, but one that described the general flow and provided the edges for terms-of-service-edit and for confirm-done. The others depend on what information is required for a given system. Code would also need to be written to run when the state transitions. An example making fairly good use of inheritance is account creation. If there are several account types that are essentially the same except for some details in the types of data required, then a basic state machine can be written that manages the data required for all account types. Then a set of child classes can be developed, one per account type, with the information/code specific to that account type. These can then all be combined in one class using HASA relationships to make one account addition application. Adding an account type is then trivial. Hopefully that helps a little.
I probably should polish it up some, expand on a few things, and submit it as an article somewhere :/ -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
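(Editor's sketch: the HAS-A composition described above -- embedding one state machine in another without state-name clashes -- can be illustrated in a few lines of plain Perl. This is NOT the StateMachine::Gestinanna API; the state and event names are made up.)

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Embed a child machine's states under a prefix so its state names
# cannot clash with the parent's (HAS-A composition, illustrative only).
sub embed {
    my ($parent, $prefix, $child) = @_;
    while (my ($state, $edges) = each %$child) {
        $parent->{"$prefix.$state"} =
            { map { ($_ => "$prefix.$edges->{$_}") } keys %$edges };
    }
    return $parent;
}

# Hypothetical terms-of-service child machine and account parent machine.
my %tos = ( show => { agree => 'done' }, done => {} );
my %account = (
    start    => { begin => 'identify' },
    identify => { ok    => 'tos.show' },   # hand off to the embedded machine
);
my $machine = embed({ %account }, 'tos', \%tos);

# Walk: start -> identify -> tos.show -> tos.done
my $state = 'start';
$state = $machine->{$state}{begin};
$state = $machine->{$state}{ok};
$state = $machine->{$state}{agree};
print "$state\n";   # tos.done
```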
Re: virtualhost based variables
Alan [EMAIL PROTECTED] wrote: Greetings again. I'm trying to figure out the best/fastest/most elegant way of setting virtualhost-based variables. Basically I have three sites, and the only difference between them is the DocumentRoot ($htdocroot) and the database their data is being accessed from ($dbh). The document root should be accessible from $r. I would use Apache::DBI for persistent connections, then connect at the beginning of the request with the DBI connection parameters coming from $r->dir_config. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
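(Editor's sketch of that suggestion. The PerlSetVar names and the DSN are hypothetical, not from Alan's actual configuration; Apache::DBI transparently caches the DBI->connect behind the scenes.)

```perl
# httpd.conf -- per-virtualhost connection parameters (hypothetical names):
#
#   <VirtualHost *>
#       ServerName   site1.example.com
#       DocumentRoot /www/site1
#       PerlSetVar   DBI_DSN  dbi:mysql:site1_db
#       PerlSetVar   DBI_USER site1
#       PerlSetVar   DBI_PASS secret
#   </VirtualHost>

package My::Handler;
use strict;
use Apache::DBI ();   # load before DBI so connections are cached per-process
use DBI ();

sub handler {
    my $r = shift;
    my $docroot = $r->document_root;   # differs per virtual host
    my $dbh = DBI->connect($r->dir_config('DBI_DSN'),
                           $r->dir_config('DBI_USER'),
                           $r->dir_config('DBI_PASS'));
    # ... serve the request using $docroot and $dbh ...
}

1;
```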
Re: Change in module naming conventions
Per Einar Ellefsen [EMAIL PROTECTED] wrote: What I came to was this: http://users.skynet.be/pereinar/mod-perl/modules.txt Looks good, overall. I like the Apache::Framework:: namespace :) Some questions I got which I'm not too sure of: - I originally had Apache::Auth::Authen, ::Authz and ::Access, but Robin Berjon told me he preferred to have the 4 as top-level namespaces. What do people think? What's the difference between Apache::Auth and Apache::Authen? They both seem to have authentication handlers. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: process priorities and performance
Jim Helm [EMAIL PROTECTED] wrote: Everything I've read as an SA (for Solaris at least - though I would expect the other *nixes to be similar) was to never set a user space (non-O/S) process to less than -15. Other than that, it's another of those YMMV, measure before and after, and if it helps, great. Trying to second-guess process schedulers is a tricky business though, and you really need to intimately know how your system behaves before trying it. -----Original Message----- Alexey Zvyagin has suggested a use of Unix process priorities to improve the performance of the web services during the peak hours: Alex writes: - [snip] The CPU priorities help to handle an increased traffic on the overloaded server. - I think the key here is the fact that the system is overloaded/overcommitted. Too many processes are competing for CPU. Putting my SA hat on, I would say the processes need to be split across multiple pieces of hardware or a new machine needs to replace the current system. That's the only real long-term solution for a system in this state. The priorities as described help set a relative importance between the processes--the front-end is more important than the database with the larger server processes in the middle. This is not unreasonable. But a serious solution to the problem of an overloaded system is to put in more system. The priorities might be helpful for the few minutes/hours/days needed to get the new hardware on the floor. IMHO, a web server should be designed for the expected peak normal usage plus a fudge factor thrown in for safety and multiplied by a small integer greater than one for growth (I'm optimistic). Conclusion of my thoughts: putting in a blurb about priorities being able to set relative importance of processes is fine, but don't cast it as a solution (just as swap space is not a solution to constrained memory). It's a bit of a band-aid that can help until the problem can be fixed.
-- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: Apache::Session - What goes in session?
Jesse Erlbaum [EMAIL PROTECTED] wrote: Hi Peter -- The moral of the story: Flat files rock! ;-) If I'm using Apache::DBI so I have a persistent connection to MySQL, would it not be faster to simply use a table in MySQL? Unlikely. Even with cached database connections you are probably not going to beat the performance of going to a flat text file. Accessing files is something the OS is optimized to do. The process of issuing a SQL query, having it parsed and retrieving results is probably more time-consuming than you think. It all depends on the file structure. A linear search through a thousand records can be slower than a binary search through a million (an average of 500 compares vs. at most about 20 - hope the extra overhead of the binary search is worth the savings in comparisons). One way to think about it is this: MySQL stores its data in files. There are many layers of code between DBI and those files, each of which adds processing time. Going directly to files is far less code, and less code is most often faster code. MySQL also stores indices. As soon as you start having to store data in files and maintain indices, you might as well start using a database. The best way to be sure is to benchmark the difference yourself. Try out the Benchmark module. Quantitative data trumps anecdotal data every time. Definitely. But before you do, make sure the proper indices are created on the MySQL side. A wrong database configuration can kill any performance gain. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
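(Editor's sketch: the comparison counts above are easy to check with a few lines of Perl -- a sorted in-memory array stands in for an indexed flat file. log2(1,000,000) is about 20, so a binary search never needs more than ~20 probes.)

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Count how many probes a binary search makes over a sorted array.
sub binary_search_compares {
    my ($aref, $target) = @_;
    my ($lo, $hi, $compares) = (0, $#$aref, 0);
    while ($lo <= $hi) {
        my $mid = int(($lo + $hi) / 2);
        $compares++;
        if    ($aref->[$mid] < $target) { $lo = $mid + 1 }
        elsif ($aref->[$mid] > $target) { $hi = $mid - 1 }
        else  { return ($mid, $compares) }
    }
    return (-1, $compares);
}

my @million = (0 .. 999_999);   # a million sorted "records"
my ($idx, $n) = binary_search_compares(\@million, 314_159);
print "found at $idx in $n probes\n";   # at most ~20 probes
```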
Re: Mod_perl Application Development
Chris Winters [EMAIL PROTECTED] wrote: On Sat, 2002-08-17 at 19:31, Jonathan Lonsdale wrote: I'm curious to know how people approach application development with mod_perl in situations where there could be dozens of distinct screens/interfaces. I'm currently using the HTML::Template system. Here's a few approaches I thought of: 1. Single monolithic content handler. Could be hard to maintain. 2. Distinct content handlers each with their own Location directive. Could be a pain to maintain the server config. 3. Take a small performance hit and use an Apache::Registry script for each screen to handle the content phase. Use 'PerlSetupEnv Off', $r and Apache:: modules and don't bother being backwardly compatible with CGI. There's a separate one that's used in OpenInteract: create a single content handler that uses some sort of lookup table to map requests to handlers. This lookup table can be maintained separately from the Apache configuration and can generally be more flexible, allowing for application-level security settings, etc. Yet another of the many ways: This is similar to what I am doing with the Uttu/Gestinanna projects. Gestinanna is designed around the MVC paradigm. I have Uttu provide my database/cache creation, application configuration, and URI-handler mapping, which in this case (for web applications with a lot of screens) maps to a Mason dhandler. The dhandler makes sure the proper state machine description is in memory and then continues the state machine execution based on the information sent from the client. The state machine tells the dhandler which view to send back to the client. I have several tricks up my sleeve to allow multiple state machines to be active simultaneously in a session and for even different parts of a state machine to be active simultaneously.
I am using Template Toolkit to produce the views (since the people responsible for the views don't like code) and AxKit to generate the end result for the client (so we can support screen, tv, handheld, etc., media types as well as themes [we've had customers request this]). I haven't finished it all yet, nor have I done any profiling, so YMMV. You can see the current code at http://sourceforge.net/projects/gestinanna/ (the PerlKB project will be worked in to handle documentation -- most of the current stuff in the Gestinanna project handles dynamic content instead of static content). Btw - I am looking at some of the various CMSs for `inspiration', including OpenInteract and Bricolage. I would recommend looking at how they do things if you want to do content management. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
[ANNOUNCE] StateMachine::Gestinanna 0.05
Changes: 0.05 - Added Class::Container to make creation of new state machine types (vs. applications) easier. StateMachine::Gestinanna is a fairly simple state machine implementation that is driven by the application. It does not actually drive an application but provides hints as to what the application should do next. It is designed to be especially helpful in a Model/View/Controller web-application environment to help the controller decide which view should be used. However, it may also be used in other areas, such as traditional GUIs. StateMachine::Gestinanna supports both ISA and HASA inheritance of state transition definitions and code triggered by those transitions. This allows the development of classes of applications. Available on CPAN: http://www.cpan.org/modules/by-authors/id/J/JS/JSMITH/StateMachine-Gestinanna-0.05.tar.gz Available on SourceForge: http://sourceforge.net/project/showfiles.php?group_id=55902&release_id=105413 -- James Smith [EMAIL PROTECTED], 979-862-3725 Senior Software Applications Developer, Texas A&M CIS Operating Systems Group, Unix
[ANNOUNCE] StateMachine::Gestinanna 0.02
We now support HAS-A inheritance as well as ISA (sort of standard Perl) inheritance (see documentation for details). Now a state machine can contain other state machines without state namespace clashes. StateMachine::Gestinanna is a fairly simple state machine implementation that is driven by the application. It does not actually drive an application but provides hints as to what the application should do next. It is designed to be especially helpful in a Model/View/Controller web-application environment to help the controller decide which view should be used. However, it may also be used in other areas, such as traditional GUIs. It has been uploaded to CPAN and is also available on SourceForge at http://sourceforge.net/projects/gestinanna/ -- James Smith [EMAIL PROTECTED], 979-862-3725 Senior Software Applications Developer, Texas A&M CIS Operating Systems Group, Unix
[ANNOUNCE] StateMachine::Gestinanna 0.01
StateMachine::Gestinanna is a fairly simple state machine implementation that is driven by the application. It does not actually drive an application but provides hints as to what the application should do next. It is designed to be especially helpful in a Model/View/Controller web-application environment to help the controller decide which view should be used. However, it may also be used in other areas, such as traditional GUIs. StateMachine::Gestinanna supports inheritance (via @ISA) of state transition definitions and code triggered by those transitions. This allows the development of classes of applications. HAS-A relationships are not yet supported. The distribution is available on CPAN (soon -- has been uploaded) and at http://sourceforge.net/project/gestinanna/ -- James Smith [EMAIL PROTECTED], 979-862-3725 Senior Software Applications Developer, Texas A&M CIS Operating Systems Group, Unix
Re: [ANNOUNCE] StateMachine::Gestinanna 0.01
James G Smith [EMAIL PROTECTED] wrote: The distribution is available on CPAN (soon -- has been uploaded) and at http://sourceforge.net/project/gestinanna/ Make that http://sourceforge.net/projects/gestinanna/ -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: [ANNOUNCE] StateMachine::Gestinanna 0.01
Ron Savage [EMAIL PROTECTED] wrote: On Wed, 31 Jul 2002 12:32:51 -0500, James G Smith wrote: James G Smith [EMAIL PROTECTED] wrote: The distribution is available on CPAN (soon -- has been uploaded) and at http://sourceforge.net/project/gestinanna/ Make that http://sourceforge.net/projects/gestinanna/ There are some state machine modules in CPAN already, under the prefix DFA:: for Deterministic Finite Automata. Do we really need a separate prefix StateMachine:: for this? Actually, state machines exist under DFA::, POE::, and a host of other specialized namespaces. For the record, DFA was suggested to me when I asked for module name suggestions, because it (DFA) was already in use when I wanted to submit my module. StateMachine:: was the most recent (actually, only) suggestion made on the modules@ list outside of POE::, but POE:: doesn't fit the type of machine I need. If it's strictly a DFA, then it could go under the DFA:: namespace just as well, though NFA:: would be a more-encompassing namespace :/. It is potentially non-deterministic. http://search-beta.cpan.org/modlist/Control_Flow_Utilities http://search-beta.cpan.org/search?mode=all&query=state+machine (finds 2200+ entries), of which the following seem to be genuine state machines (from the first 500 results): Bio::Tools::StateMachine::AbstractStateMachine Bio::Tools::StateMachine::IOStateMachine CGI::MxScreen Decision::Markov DFA::Command DFA::Kleene DFA::Simple POE::NFA POE::Session Set::FA Wizard::State XML::SAX::Machines -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
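(Editor's sketch, for readers unfamiliar with the DFA:: modules being discussed: a table-driven DFA is only a few lines of Perl. This is illustrative and matches none of the CPAN modules' APIs; it accepts binary strings with an even number of 1s.)

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Transition table: $dfa{current_state}{input_symbol} = next_state.
my %dfa = (
    even => { 0 => 'even', 1 => 'odd'  },
    odd  => { 0 => 'odd',  1 => 'even' },
);

sub run_dfa {
    my ($input) = @_;
    my $state = 'even';                          # start state
    $state = $dfa{$state}{$_} for split //, $input;
    return $state eq 'even';                     # 'even' is accepting
}

print run_dfa('1001') ? "accept\n" : "reject\n";   # accept (two 1s)
print run_dfa('1011') ? "accept\n" : "reject\n";   # reject (three 1s)
```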
[ANNOUNCE] Uttu 0.03
In preparation for some other work I'm doing, I have added some features and fixed some bugs in Uttu: o Added support for AxKit as a content handler. The handler allows using either HTML::Mason or Template Toolkit as an XML provider with a simple configuration setting. o Made the core code more robust by testing for certain conditions that can raise warnings in Perl. o Moved from multiple database specifications to multiple host specifications. Now, Uttu will go through a list of hosts looking for one that allows a connection to the database. There are still some features on the todo list, such as one-time URLs, but this is enough for now. Released onto CPAN as Uttu-0.03.tar.gz and available at http://www.cpan.org/modules/by-authors/id/J/JS/JSMITH/Uttu-0.03.tar.gz http://sourceforge.net/projects/gestinanna/ Documentation and more information can be found at http://uttu.tamu.edu/ . -- James Smith [EMAIL PROTECTED], 979-862-3725 Senior Software Applications Developer, Texas A&M CIS Operating Systems Group, Unix
Re: [ANNOUNCE] Petal 0.1
Dave Rolsky [EMAIL PROTECTED] wrote: On Wed, 17 Jul 2002, Rob Nagler wrote: Petal lets me do that. If that's not of any use to you, fine. The world is full of excellent 'inline style' modules such as HTML::Mason, HTML::Embperl and Apache::ASP. These all work on the assumption that the template is written in HTML. Actually, neither Mason nor Embperl are HTML-specific these days. Mason never really was, and Embperl has become much more generic with version 2, which is in fact now simply called Embperl. Mason will probably change its name eventually as well. -nod- (As an example of a non-HTML [and potentially twisted] app:) I'm working on our next-generation administrative web application (handles some system account management and other similar things for the University). I decided early on to use the MVC paradigm because the programmers (me) are better at programming the MC part than writing the content for the V part. So, looking at the modules available on CPAN (I'm trying to make maximal use of CPAN), I decided to use the following: Mason (Controller): provides easy management of form values from the client, clean division between sections (init, once, shared, etc.), and nice inheritance. For now, Mason is called from AxKit. TT2 (View): makes it easy for non-programmers to edit XML and embed occasional references to data without having to understand the underlying object model -- views are ultimately called from Mason. I use Data::FormValidator to decide which view to use. AxKit (View): translates the XML to the output device the customer is using. Also can support themes. Allows us to internally structure content in a logical manner that may ultimately aid in building a search engine (for a document repository, for example). Also provides the site a consistent look & feel. Perl (Model): actual database manipulation is done through Perl modules. I think I am using each item in its strongest area. There is no HTML until AxKit sends it to the client.
It's also easier to throw a few more CPUs or sticks of RAM at the solution than half-a-dozen programmers that can't write anything customer-friendly or technical writers that can't deal with code. (Of course, one of my other mantras is: Always write for a web farm.) -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: mod_perl/passing session information (MVC related, maybe...)
John Siracusa [EMAIL PROTECTED] wrote: On 6/12/02 12:17 PM, Perrin Harkins wrote: James G Smith wrote: The nice thing about the context then is that customers can have multiple ones for multiple windows and they can have more than they have windows. How do you tie a context to a window? I don't see any reliable way to do it. The only way to maintain state for a window (as opposed to global state for a session) is to pass ALL the state data on every link. Nah, you could just shove a context param into all forms and links on each page, and store the actual (possibly large) context server-side, keyed by context id (and session id, see below): <a href="/foo/bar?context_id=2">...</a> ... <input type=hidden name=context_id value=2> ... Note the tiny context id. If you look up contexts using both the context id and the (cookie-stored) session id, you can get really short context ids :) Just an idea... I haven't worked this part out yet, though that is one way I thought of. This is similar to how Twig handles contexts. Another way I was thinking about was making it part of the URL. For example: https://x.y.z.edu/contextid/rest/of/url.html The session would be with a cookie. This would allow cutting and pasting of URLs for help tickets and such while preserving the context. This would also make coding easier by using relative URLs. Of course, this has all the problems of storing the session ID in the URL in the same manner. We might also have to look for links that open a new browser window and give them a new context. I'm still working out the details. I could be really evil and make the URLs 32-character hex strings that map to a context and URL combination :) Obfuscated web site with no hope of deep linking.
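(Editor's sketch of the context-id-in-the-URL-path idea. The scheme is hypothetical -- as the post says, the details were still being worked out -- but the round trip is simple: prepend the context id as the first path segment, and peel it back off on each request while the session id travels in a cookie.)

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Build a URL carrying a per-window context id as the leading path segment.
sub with_context {
    my ($context_id, $path) = @_;
    return "/$context_id$path";              # e.g. /2/rest/of/url.html
}

# Recover the context id and the real path from an incoming URI.
sub split_context {
    my ($uri) = @_;
    my ($context_id, $path) = $uri =~ m{^/(\d+)(/.*)$};
    return ($context_id, $path);
}

my $url = with_context(2, '/rest/of/url.html');
my ($ctx, $path) = split_context($url);
print "$url -> context $ctx, path $path\n";
```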
Re: separating C from V in MVC
Valerio_Valdez Paolini [EMAIL PROTECTED] wrote: Ray Zimmerman wrote: So how is everybody else handling URL mapping? On Mon, 10 Jun 2002, John Hurst wrote: In the filesystem. Directly requested .tt files are all sent to a default template handler: [...] % cat admin/proj-edit.tt [% Ctrl.DBEdit.run(ObjectType = 'Project') %] I used html pages with augmented tags parsed by a standard handler: I'm doing something similar, but using a database (and caches) for URL-to-filename mappings (usually to Mason components, searching the component root path) and then using a filter in the autohandler to change URLs of the form comp:docs/index to a URL that maps to that component. This lets me rearrange the public view of the site without moving any files and allows me to rearrange the files without changing the public view. I'm working on a framework that will use the Mason component as the controller, Perl modules as the model, and either Mason components or TT templates called from the controller as the view. The view would output XML that would then be put through AxKit or similar by the autohandler to add style information and produce HTML or whatever format we needed. The end result is that the work-code (model) is independent of interface, the controller is independent of view, and the view is somewhat (via XML) independent of look & feel. I don't have benchmarks yet to demonstrate its non-scalability. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: Generating dynamic VirtualHost and Location directives and reloading Apache configuration
Mathieu Jondet [EMAIL PROTECTED] wrote: Hi all, I'm actually working on an application for generating dynamic virtual hosts and locations in these virtual hosts from a web interface. The purpose of this application is to give a non-administrator the possibility of adding virtualhosts on the fly to a webserver without modifying httpd.conf and to add locations to a specific virtualhost. So far I've managed to do the part of generating the httpd.conf file through Perl sections which takes all the necessary data from a MySQL server. The tricky part is to update the server configuration without having to restart the server. I've looked on the web and in the archives of the mailing list and found things about using the PerlInitHandler and sending Apache a SIGUSR1 signal. I don't know how to do this as I'm quite new to mod_perl programming. I would also want to know if it won't be too slow to check the modification times of the config in the database at each client request. Do you have any other idea of doing this? What would be the most efficient way to update my configuration against a database? Thanks for answers and ideas. I'm working on the design of something similar -- right now I'm doing a config file that will (hopefully) allow configuration of Apache 2 or Apache 1.3 (Apache 2 as front SSL proxy and Apache 1.3 as backend application server) depending on which role it is playing. I still have some studying and playing to do to see if this is possible. If Apache 1.3 is in both roles, I know it is possible. [side note] Apache 2 with mod_perl should not be an expensive proxy (compared to Apache 1.3/mod_perl or even Apache 2 without mod_perl). It can be configured to have no more than one Perl interpreter, regardless of the number of threads. It might be possible to configure it with zero Perl interpreters and only use Perl during the configuration. Of course, Apache 2/mod_perl isn't quite production-quality, yet.
[/side note] Instead of doing expensive checks, the page that is used to manage the configuration information in the database can detect changes and either send the appropriate signals or provide a button the user can use when they are finished making changes to send the signals (I'm also having to make this work in a distributed environment -- i.e., a web farm). The same script can also do any checking of configuration files (run them through the appropriate Apache with the -t flag). -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: Cheap and unique
[EMAIL PROTECTED] wrote: I would have sent both to the client. The sequence would be *the* id and is guaranteed to be unique by the database (or whatever else is around that does this reliably). The idea is that by combining the random secret with the ID and sending the digest with that, the ID number can't just be incremented or fooled with. The digest isn't unique but it would keep the unique bit from being fiddled with. That said, I'm just a paranoid person regarding security (especially for my outside-of-work work at http://www.greentechnologist.org) and I wouldn't want to keep the random bits around for too long, to prevent them from being brute-forced. I'm imagining that someone with a fast computer, the ID number, and knowledge of how that combines with randomness for the digest source might be able to locate the bits just by trying a lot of them. I would expire them after a while just to prevent that from happening, by stating that if there is a 15-minute session, new random bits are generated every five minutes. New sessions would be tied to the most recent random data. The random data might be expired at the session timeout. This assumes that I'm tracking which random bits are associated with the session to verify that the digest was OK. All that means is that the randomness is valid as long as the session is still active and normally expires after a time period otherwise. Perhaps other people would get by just keeping a static secret on the server. That may be overkill for many people; it might not be for the apps I'm working with. Thanks for the clarification -- makes a lot more sense. At first glance, I think that would work. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
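(Editor's sketch of the scheme being described: signing the sequence ID with a server-side secret so the client can't increment it. An HMAC is used here as the standard construction for "digest of secret plus data"; Digest::SHA is core Perl, and the secret and token format are made up for illustration -- in the scheme above, the secret would be rotated every few minutes.)

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::SHA qw(hmac_sha256_hex);

my $secret = 'rotate-me-every-few-minutes';   # hypothetical server-side secret

# Sign an ID: the client gets "id:mac" but cannot forge a new mac.
sub sign_id {
    my ($id) = @_;
    return "$id:" . hmac_sha256_hex($id, $secret);
}

# Verify a token: returns the ID if the mac checks out, undef otherwise.
sub verify_id {
    my ($token) = @_;
    my ($id, $mac) = split /:/, $token, 2;
    return hmac_sha256_hex($id, $secret) eq $mac ? $id : undef;
}

my $token = sign_id(42);
print defined verify_id($token) ? "ok\n" : "bad\n";    # ok
(my $forged = $token) =~ s/^42/43/;                    # try to increment the ID
print defined verify_id($forged) ? "ok\n" : "bad\n";   # bad
```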
Re: SOAP and web services
Bart Frackiewicz [EMAIL PROTECTED] wrote: Hi Ric, I use mod_perl/apache/soap::lite to create an internal application server so that I can distribute processing load from the public webserver. This also allows me to expose a SOAP web service to the public if I wished to give them direct access to a method on this application server, but of course they would need a SOAP client to access this service. Do you want to just return XML to your user, or do you want to do what I do, which is internal RPC using SOAP? In my opinion a web service has another great benefit - you can separate the logic and the front ends. We have here an application running on php/html, and all the logic is inside these scripts; in case of running on another medium/language (like flash or php for plain html) you must copy all the logic - you can call this a nightmare. In this case I think that a solution provided with RPC/SOAP is a good idea, but with every article I read, I realize that this technology is still young and just experimental (e.g. php 4.x). To add to the fun :) (now that my semester is over...) I mention the following only so you are aware of the possibilities, not because I think the decision to use SOAP is wrong (I don't think it is or isn't). I am working on an application that involves inter-process communication, of which XML-RPC is an example. I've decided to break my IPC needs into two classes: many-to-one or one-to-one, and one-to-many or many-to-many. XML-RPC does okay for the first set, but can't handle the second set in any reasonably scalable manner. For that, I'm taking a look at Spread [1], which Stas mentioned in passing on this list a few weeks ago. Spread also can work reasonably well, afaik, for many-to-one or one-to-one when the connection overhead associated with XML-RPC/SOAP becomes significant (e.g., frequent requests where the return value is not very important, such as log consolidation).
SOAP is a more complicated extension of XML-RPC (don't let the `Simple' fool you - the spec is anything but simple compared to XML-RPC -- of course, XML-RPC doesn't support object-orientation). XML-RPC is actually a fairly mature technology, imho, since it is available in such a wide range of languages and implementations. The initial spec was drawn up in April 1998 [2]. The PHP 4.x implementation is new and immature [3], but the spec itself is fairly mature. That said, XML-RPC, SOAP, and Spread all have reasonably simple Perl interfaces. [1] http://www.spread.org/ [2] St. Laurent S., Johnston J. and E. Dumbill, _Programming Web Services with XML-RPC_, O'Reilly (2001). [3] http://www.php.net/manual/en/ref.xmlrpc.php -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: File::Redundant
Cahill, Earl [EMAIL PROTECTED] wrote: Just putting out a little feeler about this package I started writing last night. Wondering about its usefulness, current availability, and just overall interest. Designed for mod_perl use. Doesn't make much sense otherwise. I would think it could be useful in non-mod_perl applications as well - you give an example of a user's mailbox. With scp it might be even more fun to have around :) (/me is thinking of config files and such) transactionalizing where I can. The whole system depends on how long the dirsync takes. In my experience, dirsync is very fast. Likely I would have dirsync'ing daemon(s), dirsync'ing as fast as they can. In some best-case scenario, the most data that would ever get lost would be the time it takes to do one dirsync (usually less than a second for even very large amounts of data), and the loss would only happen if you were making changes on a dir as the dir went down. I would try to deal with boxes coming back up and keeping everything clean as best I could. What's a `very large amount of data'? Our NIS maps are on the order of 3 GB per file (64k users). Over a gigabit ethernet link, this still takes half a minute or so to copy to a remote system, at least (for NIS master-slave copies) -- this is just an example of a very large amount of data being sync'd over a network. I don't see how transferring at least 3 GB of data can be avoided (even with diffs, the bits being diff'd have to be present in the same CPU at the same time). If any of the directories being considered by your module are NFS mounted, this will be an issue. Personally, I see NFS mounting as a real possibility since that allows relatively easy maintenance of a remote copy for backup if nothing else. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: File::Redundant
Andrew McNaughton [EMAIL PROTECTED] wrote: On Thu, 25 Apr 2002, James G Smith wrote: What's a `very large amount of data'? Our NIS maps are on the order of 3 GB per file (64k users). Over a gigabit ethernet link, this still takes half a minute or so to copy to a remote system, at least (for NIS master-slave copies) -- this is just an example of a very large amount of data being sync'd over a network. I don't see how transferring at least 3 GB of data can be avoided (even with diffs, the bits being diff'd have to be present in the same CPU at the same time). rsync solves this problem with sending diffs between machines using a rolling checksum algorithm. It runs over rsh or ssh transport, and compresses the data in transfer. Yes - I forgot about that - it's been a year or so since I read the rsync docs :/ but I do remember it mentioning that now. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: Apache::Session suggested mod
Vuillemot, Ward W [EMAIL PROTECTED] wrote: Has anyone ever thought to have the table name modifiable? E.g. instead of 'sessions', you could set it to something like 'preferences' for a given instance. I wanted to maintain session information, but also preferences that are attached to a given username. I could just put the two within the same table... but as I am anal, I would rather see the data separated. I was thinking of doing it myself -- but thought it might be a worthwhile mod for the entire community. And it saves me maintaining two sets of nearly identical code... and of course, there might be good reasons NOT to do this. Ideas? Thoughts? I would love to see this, but am not sure how it would be implemented (don't want to design something without the author's input). I'm wanting to use it to track sessions and contexts -- sessions can own multiple contexts and contexts can pass from session to session. Basically, break identity apart from process. Apache::Session would be ideal for both since the storage mechanisms are identical. Unfortunately, the table name is hard-coded. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: mod_perl Cook Book
Rasoul Hajikhani [EMAIL PROTECTED] wrote: Hello folks, Has anyone purchased the mod_perl cookbook from this list? If so, what do you think of it? Is it a good buy? Appreciate feedback. Thanks in advance I enjoyed it -- I have it on my desk ready to crack open at a moment's notice. But then, I'm a bit biased since I'm mentioned in it :) It's been invaluable in writing some mod_perl handlers lately (Uttu.pm - 0.02 next week, hopefully, much improved). You'll want to either get the next print run or take a look at the errata on www.modperlcookbook.org if you run into problems. There don't seem to be a lot of large, glaring problems, just small things that can be easily overlooked. A few sample chapters are also available at that site. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: Permission conflict between mod_cgi and mod_perl
[EMAIL PROTECTED] (Randal L. Schwartz) wrote: Jim == Jim Smith [EMAIL PROTECTED] writes: Jim Basically, mod_perl can run scripts in the same manner as any other Jim unix program. Maybe we're getting hung up on details, but mod_perl is not a unix program. It's a module for Apache. Therefore, in the same manner is no longer applicable. mod_cgi forks to run processes. mod_perl doesn't fork. mod_perl can run Perl code via the embedded Perl interpreter, and this interpreter can cause a fork. But mod_perl doesn't inherently fork at all. And the distinction is important, especially in the context of this discussion (setuid with mod_perl). And the sky isn't blue, but the results are the same. mod_perl can't run scripts. Scripts can be run from mod_perl. More than that, set-uid scripts can be run from mod_perl and offer one of the better ways of doing things that require root privileges. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas AM CIS Operating Systems Group, Unix
Re: Permission conflict between mod_cgi and mod_perl
Ilya Martynov [EMAIL PROTECTED] wrote: On Mon, 25 Mar 2002 15:17:06 -0600, James G Smith [EMAIL PROTECTED] said: JS And the sky isn't blue, but the results are the same. JS mod_perl can't run scripts. JS Scripts can be run from mod_perl. JS More than that, set-uid scripts can be run from mod_perl and offer JS one of the better ways of doing things that require root privileges. The results are not the same. Basically, Apache::Registry (the handler used with mod_perl to emulate execution of scripts) just opens the file containing the script, evals it as a big subroutine, and calls that subroutine. Opening and reading a set-uid file containing a script doesn't automagically give root rights to the instance of the apache process which handles the request. I never said anything about Apache::Registry. If you are using a Perl interpreter with mod_perl, then you can fork and run a suid script. That's what I'm trying to say. Nothing more, nothing less. End of story. Even the Eagle book speaks of running scripts (though in the context of Apache::Registry), so that terminology is not original with me. I was trying to talk about something in a manner that might be somewhat understandable, but people started pointing out the trees and ignoring the forest. I will say no more on this subject. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
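To make the "fork and run a suid script" point concrete: nothing stops code running under the embedded interpreter from forking and exec'ing an external set-uid program for the one privileged operation. A minimal hedged sketch (the helper path in the usage note is hypothetical and would be site-specific):

```perl
use strict;
use warnings;

# Run an external command from within a mod_perl handler. system()
# forks the Apache child and execs the program; if that program is
# set-uid, the kernel grants *it* the elevated rights -- the server
# process itself never changes uid. Returns true on zero exit status.
sub run_external {
    my ($cmd, @args) = @_;
    my $status = system($cmd, @args);
    return $status == 0;
}
```

Usage would look like `run_external('/usr/local/sbin/suid-helper', @args)` with a small, auditable set-uid wrapper (the path and wrapper are made up here), keeping the privileged surface area far smaller than running the server itself with extra rights.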
[ANNOUNCE] Uttu 0.01 (dev) - web application driver
I finally got enough stuff done and put together that I feel ready to let someone else hammer at the code and tell me where my stupid mistakes are :) Most likely, the documentation will be poor - as usual, it lags a bit behind the code. $CPAN/authors/id/J/JS/JSMITH/Uttu-0.01.tar.gz $CPAN/authors/id/J/JS/JSMITH/Uttu-Framework-Uttu-0.01.tar.gz CPAN may need a little time to propagate files. The first is the base Uttu module. Installing it allows installation of the second to follow the familiar Perl pattern for modules (perl Makefile.PL; make; make install). The second tarball is an example of a framework to be used atop Uttu. READMEs are included. Some debugging code is still lying around, so don't be surprised by scrolling text :) Feel free to email me questions on or off the list. Uttu allows for fairly easy configuration and use of a content handler. Right now, only HTML::Mason is supported. It has been tested with virtual hosts, though not with multiple configured locations within a single host (though it is designed to work even with such a configuration). It also provides uri-to-filename translation and limited internationalization support. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: About PerlLogHandler phase
Randy J. Ray [EMAIL PROTECTED] wrote: * If I install a handler for PerlLogHandler, does the normal logging still take place? Is it a function of whether my handler returns OK, DECLINED, etc.? Not sure -- anyone want to play around a bit? * Are there ways to register other log types, in addition to the access, error, etc.? Such that people could specify those (and a format) the same way they specify the others? More to the point, so that there might be a ready file-descriptor I could print to, rather than having to deal with the open/lock/write/unlock/close cycle. Not in Apache 1.3, afaik, and probably not in 2.0. Part of it has to do with how STDERR is handled - being sent to the error log. Apache only handles two log file descriptors afaik. (well, kinda - two per virtual server) You should be able to open the file at server startup/configuration and then write to it during the logging phase, closing it at child exit time. This mirrors how Apache does it, except you'll need to do it all in Perl. I'm doing something similar to help me debug stuff since Apache doesn't open log files until after it reads the configuration file once. I haven't run into any problems. You'll probably want to set autoflush to true so Perl writes the text immediately. Hmm... /me smells an Apache::Logger module... (or something with a similar name). -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas AM CIS Operating Systems Group, Unix
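A sketch of the open-at-startup, write-during-log-phase approach described above (the module name, log format, and paths are invented for illustration; Apache's OK constant has the value 0):

```perl
package My::Logger;
use strict;
use warnings;

use constant OK => 0;   # same value as Apache::Constants' OK

my $fh;

# Call once at server startup (e.g. from a startup.pl) so every child
# inherits the open descriptor -- mirroring how Apache handles its own
# logs, which are opened before the children are forked.
sub open_log {
    my ($path) = @_;
    open $fh, '>>', $path or die "can't open $path: $!";
    select((select($fh), $| = 1)[0]);   # autoflush so each record hits disk
}

# Install as: PerlLogHandler My::Logger
sub handler {
    my ($r) = @_;
    print {$fh} scalar(localtime), ' ', $r->uri, "\n";
    return OK;
}

1;
```

Because the handle stays open for the life of the child, there's no open/lock/write/unlock/close cycle per request; appends of a single short line are effectively atomic on most unix filesystems, which is the same bet Apache itself makes.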
Re: About PerlLogHandler phase
James G Smith [EMAIL PROTECTED] wrote: Hmm... /me smells an Apache::Logger module... (or something with a similar name). Looks like Paul caught it before I did... (Apache::LogFile). -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas AM CIS Operating Systems Group, Unix
[RFC] Mason-based application framework, kindof
I'm working on putting together a site that can handle existing system management functions as well as allow for easy expansion by people that might not know how the core of the site works. In the process, I've come up with an interesting division of labor and a module. For now, I'm calling the module ZZZ until I come up with a better name. Feel free to offer suggestions :) The current perldoc for the module is available at http://moya.tamu.edu/~jgsmith/ZZZ.html This should be considered temporary (for purposes of mail archives) and should be available for a week or so. I'm looking for feedback on the feature set of the module as well as the design. I'm hoping to polish the code a bit more over the next week or so and release it with a sample framework. Part of the problem with using existing code is that we (here at TAMU) usually have to do deep modifications due to our data setup -- usernames are not unique in time, for example, and so can't be used to denote ownership of resources. But only usernames are valid for login (accompanied by password, of course). Often, it is easier to write from the ground up than try to twist everything else around it. In the process, I'm trying to get as much code releasable as possible. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas AM CIS Operating Systems Group, Unix
Storing data for conf directives
I have a module I'm calling ZZZ that provides an Apache conf directive, ZZZConf, which takes one argument - a file name. This is then given to an AppConfig object to read in the configuration file. I am storing this in the $cfg object tied to the Location section in which the directive appears. After reading in the configuration, I create an HTML::Mason::ApacheHandler object and store it also in the $cfg object. Everything is fine, and Dumping the $cfg object shows everything is there (too much output to reproduce here). When I send my first request and dump the $cfg object (returned by Apache::ModuleConfig), I get the following: Got something from ModuleConfig for ZZZ - ZZZ - ZZZ=HASH(0x718900) $VAR1 = bless( { 'config' => bless( { 'FILE' => bless( { 'DEBUG' => 0, 'PEDANTIC' => 1, 'STATE' => bless( { 'ALIAS' => {}, 'CASE' => 0, 'ERROR' => '', 'GLOBAL' => { 'ARGCOUNT' => 1, 'EXPAND' => 15, 'DEFAULT' => undef }, 'ARGS' => {}, 'ALIASES' => {}, 'EXPAND' => {}, 'DEFAULT' => {}, 'VARIABLE' => {}, 'ACTION' => {}, 'PEDANTIC' => 1, 'CREATE' => 0, 'ARGCOUNT' => {}, 'EHANDLER' => sub { "DUMMY" }, 'VALIDATE' => {} }, 'AppConfig::State' ) }, 'AppConfig::File' ), 'STATE' => $VAR1->{'config'}{'FILE'}{'STATE'} }, 'AppConfig' ), 'config_file' => 'conf/zzz.conf' }, 'ZZZ' ); Notice that there is no `ah' entry (in which I store the Mason handler object) nor file-specific information in the AppConfig object. Also missing are two other hashes that are stored in this object. It looks like there might be another configuration going on, but I can't find it -- by sprinkling warn statements throughout the code, even in the DIR_CREATE and SERVER_CREATE functions, I can't find more than one. DIR_CREATE and SERVER_CREATE are each called once, and DIR_CREATE's result is being passed to the configuration object -- just once. No merging is being done - {DIR,SERVER}_MERGE do not appear to be called (warn statements don't do anything). 
The following comes from server startup: Creating (dir) ZZZ object (ZZZ=HASH(0x72113c)) Creating (server) ZZZ object (ZZZ=HASH(0x7210e8)) Setting config for ZZZ=HASH(0x72113c) to AppConfig=HASH(0x72125c) Setting ah for ZZZ=HASH(0x72113c) to HTML::Mason::ApacheHandler=HASH(0x75c4e0) Putting the above data (the ZZZ objects) in an array and storing the index into the array in the $cfg object doesn't change the above object, so it doesn't appear to be a problem with storing the configuration data. Tying the global array of ZZZ objects and watching the activity on the array points to only one such object being configured once. The most likely suspect that I can think of is the configuration being done twice or incompletely the second time, but I don't know where else to look. Anyone have any suggestions? I'll post the code if anyone thinks they would like to take a look at it. System: Apache/1.3.22 (Unix) mod_perl/1.26 % perl -V Summary of my perl5 (revision 5.0 version 6 subversion 1) configuration: Platform: osname=solaris, osvers=2.7, archname=sun4-solaris-multi uname='sunos hex.tamu.edu 5.7 generic_106541-15 sun4u sparc sunw,ultra-5_10 ' -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
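One place worth double-checking in situations like this is the merge behavior: when a DIR_MERGE does fire, the object mod_perl hands the request is a *new* one, so any key not explicitly carried across silently disappears. A hedged skeleton of the create/merge pair (the hash layout is invented; this is not the actual ZZZ code):

```perl
package ZZZ;
use strict;
use warnings;

# Called once per container to create the per-directory config object.
sub DIR_CREATE {
    my ($class) = @_;
    return bless {}, $class;
}

# Called when two containers overlap. The returned object is what the
# request actually sees, so every key must be carried across here --
# a merge that forgets a key produces exactly the "entry vanished at
# request time" symptom.
sub DIR_MERGE {
    my ($parent, $current) = @_;
    my %merged = (%$parent, %$current);   # more specific section wins
    return bless \%merged, ref $parent;
}

1;
```

Even if the warn statements say the merges aren't being called for this particular configuration, having them behave like this makes the data survive if Apache ever does decide two sections overlap.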
Re: apache 2.0.28 and mod_perl
Bryan Henry [EMAIL PROTECTED] wrote: are there issues with running mod_perl on Apache 2.0? I have not found any complaints or warnings in any documentation. I wouldn't expect mod_perl 1.x to work with Apache 2.x. The API is completely (or pretty much so) different. mod_perl 2.0 is being written for Apache 2.0 and should work in the threaded environment that should be expected with Apache 2.x (even if not built with it -- modules for Apache 2.x might run with threading enabled depending on the platform and so should be written to expect it). However, mod_perl 2.0 and Apache 2.0 are still pretty much development code and shouldn't be expected to be as robust in production as Apache 1.3.x and mod_perl 1.26. Anyone can feel free to correct me on any of this :) -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas AM CIS Operating Systems Group, Unix
Re: push_handlers
Stathy Touloumis [EMAIL PROTECTED] wrote: For some reason the call to 'push_handlers' does not seem to register the 'handler' with mod_perl correctly when used in the code below. It seems that only a few initial requests will successfully be processed by this handler. It then just seems to be bypassed. It only works when the 'push_handlers' code is commented out and a Perl*Handler directive is added to the apache conf file. Does anyone know why this is so? Here is a snippet of code which is read in at server startup through a 'require' directive. Apache->push_handlers( PerlPostReadRequestHandler => \&handler ); sub handler { warn "Hello World\n"; } As far as I know, push_handlers only works for the current request -- that is, the handlers pushed with it are cleared at the end of the request. It would seem that doing this at startup sets up the handler, which then gets used by the children and cleared after the first request they serve. This would give you the symptoms you're seeing (each child called once, and then it disappears). Try running httpd -X to see what happens. There's also probably something in the guide about it. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
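In other words, a handler that should fire on every request in every child wants to be registered in the server configuration rather than pushed at startup. Something along these lines (the module name is invented for illustration):

```apache
# httpd.conf -- a Perl*Handler directive set here is part of the
# server configuration, so it is never cleared between requests
PerlModule My::PostRead
PerlPostReadRequestHandler My::PostRead::handler
```

push_handlers is then best reserved for its intended job: registering a handler for a *later* phase of the request currently being served.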
[ANNOUNCE] Apache::Handlers 0.02 / Perl::WhichPhase 0.01
The uploaded file Apache-Handlers-0.02.tar.gz has entered CPAN as file: $CPAN/authors/id/J/JS/JSMITH/Apache-Handlers-0.02.tar.gz size: 6720 bytes md5: 7b7174b3b60bb7258d388467e33cfbff This module allows snippets of code to be run at any of the phases during a request. It can be prettier than using the Apache->push_handlers method, and if done in the right way (either during the configuration phase or via a `use' statement) will be persistent across requests. The uploaded file Perl-WhichPhase-0.01.tar.gz has entered CPAN as file: $CPAN/authors/id/J/JS/JSMITH/Perl-WhichPhase-0.01.tar.gz size: 1926 bytes md5: fc9bd37aa54d4af8e52c86a97880cec8 This module provides tests for execution within BEGIN, END, INIT, and CHECK blocks. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
[DIGEST] mod_perl digest 2001/12/31
-- mod_perl digest December 23, 2001 - December 31, 2001 --

Recent happenings in the mod_perl world... Covering through the end of 2001 since it's only one more day and then I won't have to switch directories a lot in mutt for next week's digest :)

Features
 o mod_perl status
 o module announcements
 o mailing list highlights
 o links

mod_perl status
 o mod_perl
   - stable: 1.26 (released July 11, 2001) [1]
   - development: 1.26_01-dev [2]
 o Apache
   - stable: 1.3.22 (released October 9, 2001) [3]
   - development: 1.3.21-dev [4]
 o mod_perl 2.0
   - in development (cvs only) [?]
 o Apache 2.0
   - beta: 2.0.28 (released November 13, 2001) [5]
 o Perl
   - stable: 5.6.1 (released April 9, 2001) [6]
   - development: 5.7.2 [7]

module announcements
 o AxKit 1.5 - translates XML content with a focus on producing web content [8]
 o Cache::Mmap 0.04 - uses a memory mapped file to provide a shared cache [9]
 o Class::Trigger 0.03 - imbues packages with inheritable triggers [10]
 o mod_accel and mod_deflate 1.0.10 - provide pass-through proxying and content compression [11]

mailing list highlights
 o irc: looks like we're congregating on #modperl on irc.rhizomatic.net [12]
 o PerlEditor thread discussing alternate development environments for Unix and Windows [13]

links
 o The Apache/Perl Integration Project [14]
 o mod_perl documentation [15]
 o mod_perl modules on CPAN [16]
 o mod_perl homepage [17]
 o mod_perl news and advocacy [18]
 o mod_perl list archives
   - modperl@ [19] [20]
   - dev@ [21] [22]
   - advocacy@ [23]

happy mod_perling...

--James
[EMAIL PROTECTED]

--
[1] http://perl.apache.org/dist/
[2] http://perl.apache.org/from-cvs/modperl/
[3] http://www.apache.org/dist/httpd/
[4] http://dev.apache.org/from-cvs/apache-1.3/
[5] http://www.apache.org/dist/httpd/
[6] http://www.cpan.org/src/stable.tar.gz
[7] http://www.cpan.org/src/devel.tar.gz
[8] http://mathforum.org/epigone/modperl/plinladwil
[9] http://mathforum.org/epigone/modperl/froopherex
[10] http://mathforum.org/epigone/modperl/plespiti
[11] http://mathforum.org/epigone/modperl/merspachul
[12] http://mathforum.org/epigone/modperl/zhersunang
[13] http://mathforum.org/epigone/modperl/clehkezy
[14] http://perl.apache.org
[15] http://perl.apache.org/#docs
[16] http://www.cpan.org/modules/by-module/Apache/
[17] http://www.modperl.com
[18] http://www.take23.org
[19] http://mathforum.org/epigone/modperl/
[20] http://marc.theaimsgroup.com/?l=apache-modperl&r=1&w=2
[21] http://marc.theaimsgroup.com/?l=apache-modperl-dev&r=1&w=2
[22] http://www.mail-archive.com/dev%40perl.apache.org/
[23] http://www.mail-archive.com/advocacy@perl.apache.org/
RFC: Apache::Handlers
This is a module I am working on, but haven't debugged yet. I'm looking for comments on what the name of the module should be if Apache::Handlers is not a good name. General comments on the design are welcome as well.

NAME

Apache::Handlers

SYNOPSIS

In code:

  use Apache::Handlers qw(CLEANUP PerlCleanupHandler);

  our $global;
  our $other_global : PerlCleanupHandler;

  my $lexical : PerlLogHandler(sub { print STDERR "$lexical\n"; });

  CLEANUP {
      our $global = undef;
  };

In httpd.conf:

  PerlModule Apache::Handlers
  PerlChildInitHandler Apache::Handlers

DESCRIPTION

Apache::Handlers provides two different methods of declaring when code snippets should be run during the Apache request phase. If Attribute::Handlers is available, then attributes are defined that allow cleanup or setting of values during particular request phases. The code defined with the constructs provided by this module does not directly affect the success or failure of the request. Thus, this module does not provide a replacement for content, access, or other handlers.

BLOCK CONSTRUCTS

The following allow blocks of code to be run at the specified phase. Note that these are subroutines taking a single code reference argument and thus require a terminating semi-colon (;). They are named to be like the BEGIN, END, etc., constructs in Perl, though they are not quite at the same level in the language. If the code is seen and handled before Apache has handled a request, it will be run for each request. Otherwise, it is pushed on the handler stack, run, and then removed at the end of the request.

  ACCESS AUTHEN AUTHZ CHILDEXIT CHILDINIT CLEANUP CONTENT FIXUP
  HEADERPARSER LOG POSTREADREQUEST TRANS TYPE

ATTRIBUTES

If Attribute::Handlers is available, then the following attributes are available. If the attribute argument is a constant value (non-CODE reference), then the variable is assigned that value. Otherwise, the CODE is run and the return value is assigned to the variable.

  PerlAccessHandler PerlAuthenHandler PerlAuthzHandler
  PerlChildInitHandler PerlChildExitHandler PerlCleanupHandler
  PerlFixupHandler PerlHandler PerlHeaderParserHandler PerlLogHandler
  PerlPostReadRequestHandler PerlTransHandler PerlTypeHandler

-- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
[DIGEST] mod_perl digest 2001/12/22
-- mod_perl digest December 16, 2001 - December 22, 2001 --

Recent happenings in the mod_perl world...

Features
 o mod_perl status
 o module announcements
 o job announcements
 o available mod_perlers
 o mailing list highlights
 o links

mod_perl status
 o mod_perl
   - stable: 1.26 (released July 11, 2001) [1]
   - development: 1.26_01-dev [2]
 o Apache
   - stable: 1.3.22 (released October 9, 2001) [3]
   - development: 1.3.21-dev [4]
 o Apache 2.0
   - beta: 2.0.28 (released November 13, 2001) [5]
 o Perl
   - stable: 5.6.1 (released April 9, 2001) [6]
   - development: 5.7.2 [7]

module announcements
 o Module::Require 0.02 - provides glob and regular expression capabilities for `require'ing Perl modules (0.03 has since been released) [8]
 o Log::Dispatch::Config 0.06 - provides configuration support for Log::Dispatch [9]
 o Apache::Singleton 0.04 - a mod_perl-aware version of Class::Singleton [10]

job announcements
 o Webmaster with light programming in Chicago [11]

available mod_perlers
 o GNU software and infrastructure engineer in Northern California (Bay area) [12]

mailing list highlights
 o Report on mod_accel and mod_deflate, where two C modules are discussed [13]
 o mod_perl site redesign -- we have a winner! [14]
 o What phase am I in? or Where's the documentation? Conclusion: join the mod_perl 2.0 documentation list and help make this a better world [15]

links
 o The Apache/Perl Integration Project [16]
 o mod_perl documentation [17]
 o mod_perl modules on CPAN [18]
 o mod_perl homepage [19]
 o mod_perl news and advocacy [20]
 o mod_perl list archives
   - modperl@ [21] [22]
   - dev@ [23] [24]
   - advocacy@ [25]

happy mod_perling...

--James
[EMAIL PROTECTED]

--
[1] http://perl.apache.org/dist/
[2] http://perl.apache.org/from-cvs/modperl/
[3] http://www.apache.org/dist/httpd/
[4] http://dev.apache.org/from-cvs/apache-1.3/
[5] http://www.apache.org/dist/httpd/
[6] http://www.cpan.org/src/stable.tar.gz
[7] http://www.cpan.org/src/devel.tar.gz
[8] http://mathforum.org/epigone/modperl/phumclinhou
[9] http://mathforum.org/epigone/modperl/snobrozor
[10] http://mathforum.org/epigone/modperl/querddingkul
[11] http://mathforum.org/epigone/modperl/smonangqueld
[12] http://mathforum.org/epigone/modperl/snolkimsnar
[13] http://mathforum.org/epigone/modperl/whimpkhoodah
[14] http://mathforum.org/epigone/modperl/jandpleedwoi
[15] http://mathforum.org/epigone/modperl/zaxterdrol
[16] http://perl.apache.org
[17] http://perl.apache.org/#docs
[18] http://www.cpan.org/modules/by-module/Apache/
[19] http://www.modperl.com
[20] http://www.take23.org
[21] http://mathforum.org/epigone/modperl/
[22] http://marc.theaimsgroup.com/?l=apache-modperl&r=1&w=2
[23] http://marc.theaimsgroup.com/?l=apache-modperl-dev&r=1&w=2
[24] http://www.mail-archive.com/dev%40perl.apache.org/
[25] http://www.mail-archive.com/advocacy@perl.apache.org/
[DIGEST] mod_perl digest 2001/12/15
-- mod_perl digest December 1, 2001 - December 15, 2001 --

Recent happenings in the mod_perl world... With many thanks to Geoffrey Young for his work on this digest in the past, I will try and continue the job for a while.

Features
 o mod_perl status
 o module announcements
 o job announcements
 o mailing list highlights
 o links

mod_perl status
 o mod_perl
   - stable: 1.26 (released July 11, 2001) [1]
   - development: 1.26_01-dev [2]
 o Apache
   - stable: 1.3.22 (released October 9, 2001) [3]
   - development: 1.3.21-dev [4]
 o Apache 2.0
   - beta: 2.0.28 (released November 13, 2001) [5]
 o Perl
   - stable: 5.6.1 (released April 9, 2001) [6]
   - development: 5.7.2 [7]

module announcements
 o HTTPD::Bench::ApacheBench 0.62 - Perl interface to Apache's ab benchmarking program [8]
 o Embperl 1.3.4 - allows Perl code to be embedded in HTML pages [9]
 o Log::Dispatch::Config 0.04 - provides configuration support for Log::Dispatch [10]
 o OpenFrame 2.06 - Perl application framework [11]
 o Apache::MSIISProbes 1.08 - defends an Apache server against certain worms attacking Microsoft IIS vulnerabilities [12]
 o Cache::Cache 0.99 - successor to the File::Cache and IPC::Cache modules [13]
 o LaBrea::Tarpit 0.02 - collection daemon that caches output from LaBrea [14]
 o Module::Info 0.09 - lists the modules used in Perl code without having to run the Perl code [15]
 o Apache::CacheContent 0.12 - PerlFixupHandler class that caches dynamic content [16]

job announcements
 o web/systems programming position at Texas A&M University [17]

mailing list highlights
 o (FYI for those that might not yet have updated links) The mailing list archives formerly accessible at forum.swarthmore.edu are now at mathforum.org [18]
 o Voting for new mod_perl site design [19] [20] [21] [22]
 o Comparison of different caching schemes, in which eight different caching schemes are benchmarked and discussed [23]
 o New mod_perl Developer's Cookbook [24]

links
 o The Apache/Perl Integration Project [25]
 o mod_perl documentation [26]
 o mod_perl modules on CPAN [27]
 o mod_perl homepage [28]
 o mod_perl news and advocacy [29]
 o mod_perl jobs [30]
 o mod_perl list archives
   - modperl@ [18] [31]
   - dev@ [32] [33]
   - advocacy@ [34]

happy mod_perling...

--James
[EMAIL PROTECTED]

--
[1] http://perl.apache.org/dist/
[2] http://perl.apache.org/from-cvs/modperl/
[3] http://www.apache.org/dist/httpd/
[4] http://dev.apache.org/from-cvs/apache-1.3/
[5] http://www.apache.org/dist/httpd/
[6] http://www.cpan.org/src/stable.tar.gz
[7] http://www.cpan.org/src/devel.tar.gz
[8] http://mathforum.org/epigone/modperl/dangshoowhay
[9] http://mathforum.org/epigone/modperl/chendplerlul
[10] http://mathforum.org/epigone/modperl/snofendzoi
[11] http://mathforum.org/epigone/modperl/juntwolfu
[12] http://mathforum.org/epigone/modperl/proilorspy
[13] http://mathforum.org/epigone/modperl/dwuboxnox
[14] http://mathforum.org/epigone/modperl/khunkerthoo
[15] http://mathforum.org/epigone/modperl/waxshaxsu
[16] http://mathforum.org/epigone/modperl/whoosermro
[17] http://mathforum.org/epigone/modperl/crusniquimp
[18] http://mathforum.org/epigone/modperl/
[19] http://mathforum.org/epigone/modperl/bahflaxchou
[20] http://mathforum.org/epigone/modperl/yonbulfron
[21] http://mathforum.org/epigone/modperl/plingblandstoo
[22] http://mathforum.org/epigone/modperl/mellixlee
[23] http://mathforum.org/epigone/modperl/dwimpblelkox
[24] http://mathforum.org/epigone/modperl/steltoidand
[25] http://perl.apache.org
[26] http://perl.apache.org/#docs
[27] http://www.cpan.org/modules/by-module/Apache/
[28] http://www.modperl.com
[29] http://www.take23.org
[30] http://jobs.perl.org/
[31] http://marc.theaimsgroup.com/?l=apache-modperl&r=1&w=2
[32] http://marc.theaimsgroup.com/?l=apache-modperl-dev&r=1&w=2
[33] http://www.mail-archive.com/dev%40perl.apache.org/
[34] http://www.mail-archive.com/advocacy@perl.apache.org/
[ANNOUNCE] Module::Require
Module::Require allows regular expressions or globs to be used for loading modules. For example: require_regex qw[ DBD::.* ]; OR require_glob qw[ DBD::* ]; will both load all of the available DBD:: modules, returning a list of existing files that could not be loaded. Both functions take multiple arguments, working on each in turn and returning a combined list of failed loads. I thought this might be useful if someone wrote an application framework that imported certain modules but didn't want to know what they were beforehand. file: $CPAN/authors/id/J/JS/JSMITH/Module-Require-0.02.tar.gz size: 2362 bytes md5: 9202900c90fed83e5722575566a26eba Version 0.01 forgot to set @ISA :/ -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas AM CIS Operating Systems Group, Unix
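For the curious, the glob form can be sketched in a few lines: turn the module pattern into a path, expand it against each directory in @INC, and require every match. This is an illustration of the idea, not Module::Require's actual implementation:

```perl
use strict;
use warnings;
use File::Spec;

# Sketch of a require_glob: 'DBD::*' becomes the path pattern 'DBD/*.pm',
# which is expanded against every directory in @INC. Returns the list
# of files that existed but failed to load. (glob splits its argument
# on whitespace, which is fine for a sketch.)
sub require_glob {
    my (@patterns) = @_;
    my @failed;
    for my $pattern (@patterns) {
        (my $path = $pattern) =~ s{::}{/}g;
        for my $dir (@INC) {
            next if ref $dir;    # skip coderef/arrayref @INC hooks
            for my $file (glob File::Spec->catfile($dir, "$path.pm")) {
                push @failed, $file unless eval { require $file; 1 };
            }
        }
    }
    return @failed;
}
```

So `require_glob('DBD::*')` would pull in whatever DBD drivers happen to be installed, which is exactly the "load what's there without knowing it beforehand" use case described above.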
Re: Any good WebMail programs?
Francesco Pasqualini [EMAIL PROTECTED] wrote: IMP is probably the best but it's written in PHP www.horde.org I know this is straying into OT territory, but hopefully someone can benefit from the discussion. We used an IMP-based product for a while, but if you must customize the code, TWIG is much easier to read (and *much* more modular) - twig.screwdriver.net. It's also written in PHP. My headache might have been from an earlier version of IMP than is now available. If you are interested in getting the LDAP extensions I have for it, email me in private (https://neoweb.tamu.edu/ for anonymous access example). I hope to have a tarball public in a week. I'm tired of dealing with PHP and its problems (mainly string comparisons) so I'm working on moving it all over to Perl over the next year. But that doesn't help you now :) I can use a primer on researching WebMail programs with the following criteria: - Linux based - Free - Preferably in Perl - Modularized Authentication subsystem (i.e. could hook up adapters to check with LDAP or RDBMS, though Linux can do that also) - Apache support - IMAP support - Multi-lingual (can be a phase II) - As feature-rich as possible (can be a phase II) Please note that I'm not looking for a service, I'm looking for the software itself. Thanks -- - Medi Montaseri [EMAIL PROTECTED] Unix Distributed Systems Engineer HTTP://www.CyberShell.com CyberShell Engineering - -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: RFC: CGI vs. Apache Namespace Question
darren chamberlain [EMAIL PROTECTED] wrote: 5) Include Apache::URI2Param with the CGI::URI2Param module that gets installed along with CGI::URI2Param if Apache.pm is installed, where Apache::URI2Param calls CGI::URI2Param::uri2param. That'd be the way I would go, although I'm not sure what Makefile.PL would look like. I'd go ahead and install it regardless of whether or not Apache.pm is installed (if writing an Apache:: module) in case mod_perl gets installed after this one and the mod_perl installer doesn't realize they have to go back and re-install this module to get the Apache:: version. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas AM CIS Operating Systems Group, Unix
[JOB] web/systems programming position at TAMU
Texas A&M University just opened a position for a software applications developer. You would be working (most likely) with me developing web applications and other code to enable system functions. You would need to relocate to the Bryan/College Station area. Notice of vacancy: http://cis.tamu.edu/about/jobs/positions/241.html Job Title: Software Applications Developer Department: Computing and Information Services Salary: $36,250 - $40,250 (negotiable based on qualifications and experience) Start Date: ASAP Security Sensitive: No Major/essential duties of job: Function as a software applications developer on complex network-based projects. Provide consulting, technical support, and training to users of custom application software and technical staff. Develop interfaces to systems applications such as e-mail and directory services. Develop web-based applications that operate in large, multi-user environments. Occasional duties: Provide technical support to other CIS groups in areas of expertise. Develop and maintain documentation for applications including e-mail and directory services systems. Educational qualifications: Bachelor's degree or any equivalent combination of education and relevant experience (1 yr experience equals 1 yr education). Work experience: Two (2) years of software applications development. Experience programming in UNIX and web environments using Perl, C, C++, PHP (the more, the merrier). System administration experience is a plus. See http://cis.tamu.edu/about/jobs/ for links to Human Resources, etc. If you are interested in the job, please submit your resume and application to HR with a reference to the job number (020427). About the group: Homepage for my group: http://cis.tamu.edu/systems/opensystems/ We are a very open and affirming group about many things, including the use of open source software. Several people regularly attend LISA and other similar events. We do make use of commercial products when it seems prudent. 
-- James Smith [EMAIL PROTECTED], 979-862-3725 Texas AM CIS Operating Systems Group, Unix
Installing mod_perl 1.26 on solaris 2.7
I have the 1.26 tarball untar'd and run the following command: % find . -name Request.pm -print It prints nothing. Is there supposed to be an Apache/Request.pm file somewhere? Apache complains that it can't find it on startup (I'm trying to use HTML::Mason). -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas AM CIS Operating Systems Group, Unix
[ANNOUNCE] AI::Menu 0.01
The uploaded file AI-Menu-0.01.tar.gz has entered CPAN as file: $CPAN/authors/id/J/JS/JSMITH/AI-Menu-0.01.tar.gz size: 5587 bytes md5: 8272d6782f0cb041e27ffd50cd38ce56 readme: http://sourceforge.net/project/shownotes.php?release_id=62025 download: http://prdownloads.sourceforge.net/perlkb/AI-Menu-0.01.tar.gz This is part of the PerlKB project: http://sourceforge.net/projects/perlkb/ This is an attempt to take a graph representing arbitrary relationships between categories and functions and turn it into a tree that can be used as a menu. This should be considered experimental at this point. See the readme for a list of known issues. +-- James Smith - [EMAIL PROTECTED] | http://www.jamesmith.com/ [EMAIL PROTECTED] | http://cis.tamu.edu/systems/opensystems/ +--
Re: open-source suggestion for a knowledge-base/search engine?
Grant Babb [EMAIL PROTECTED] wrote: In our migration to open-source solutions, I have been asked to suggest a solution for our knowledge base. We have found that a well-indexed text search is really a more effective way to go, but I was hoping for some suggestions from this list on some mod_perl based projects or apache modules that might do the trick. PHP is OK too I guess, but when it comes to both fast lookups and text manipulation, mod_perl/MySQL seemed like the obvious way to go. You're welcome to look at (and contribute to :) the PerlKB project: http://sourceforge.net/projects/perlkb/ It's still in the fairly early stages right now - more work is going into the data store/management than into a web interface. If you are interested, you might also want to look at the Template Toolkit documentation list - some of the discussion there may affect the PerlKB implementation. My current work on the PerlKB involves categorization of documents and presenting that to the user - I should have more on that after next week (deadline for USENIX tech. conference submissions :). Neither of these is ready for production at this point. If you need something *now*, then I'd look elsewhere (not necessarily for non-mod_perl solutions, just not this one). -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
ANNOUNCE: PerlKB 0.04
Well, after some more code slinging, here's another cut of the Perl Knowledge Base code. I've added support for loading modules from case-insensitive file systems with no prior knowledge of the module case (pod, Pod, or POD, for example), guessing which it probably was. PerlKB::Base is the module to look at. Added initial support for SPOPS in the storage area - I haven't really worked with SPOPS before, so I don't know how well it will work. Anyone's welcome to play with it and submit patches :) It just seemed like a useful thing to have around. Started a manual after the SPOPS model. Sorry if parts resemble the SPOPS::Manual layout. Seemed like a good way to start :) Download: http://prdownloads.sourceforge.net/perlkb/PerlKB-0.04.tar.gz Changelog: http://sourceforge.net/project/shownotes.php?release_id=57542 Project page: http://sourceforge.net/projects/perlkb/ Where to next? Hopefully I can get a repository actually working :/ Then we can finally see how everything acts together. Oh... and write more documentation. +-- James Smith - [EMAIL PROTECTED] | http://www.jamesmith.com/ [EMAIL PROTECTED] | http://cis.tamu.edu/systems/opensystems/ +--
[Knowledge Base] First coherent release
I've put together a tarball of the PerlKB modules: http://prdownloads.sourceforge.net/perlkb/PerlKB-0.02.tar.gz The example/ directory has a little script that shows how the store objects work. The scripts/ directory has a perl script that starts up the PerlKB::Shell monitor (modeled in a small way after the CPAN shell). Otherwise, it doesn't do a whole lot, but it's starting to come together. Project page: http://sourceforge.net/projects/perlkb/ +-- James Smith - [EMAIL PROTECTED] | http://www.jamesmith.com/ [EMAIL PROTECTED] | http://cis.tamu.edu/systems/opensystems/ +--
[Knowledge Base] Initial storage code
I've worked up a couple of modules that illustrate the storage mechanism I have in mind. http://prdownloads.sourceforge.net/perlkb/PerlKB-Store-0.01.tar.gz This consists of PerlKB::Store and PerlKB::Store::File. An example:

#! /usr/bin/perl
use PerlKB::Store;

$t = tie %hash, 'PerlKB::Store', (
    type          => 'File',
    document_root => '/www/com/htdocs',
);
print join "\n", keys %hash;

This will print the names of all the files under my document root but not print the directories. The main reason for mentioning this is to let people see it and either suggest other storage mechanisms or even play around with creating others - PerlKB::Store::DBI, for example :) Of course, if I've made some kind of mistake, fundamental or otherwise, feel free to say so. The guts shouldn't be changing in a big way. Configuration might. The PerlKB:: namespace should be considered temporary at this point. PerlKB site: http://sourceforge.net/projects/perlkb/ -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
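The tied-hash file store above can be sketched in another language as a dict-like object over a document root (a loose Python analogue for illustration, not PerlKB::Store's code; the class name is invented):

```python
import os

class FileStore:
    """Minimal analogue of the tied hash: keys are the relative paths of
    files (not directories) under a document root."""
    def __init__(self, document_root):
        self.root = document_root

    def keys(self):
        for dirpath, _dirnames, filenames in os.walk(self.root):
            for name in filenames:
                full = os.path.join(dirpath, name)
                yield os.path.relpath(full, self.root)

    def __getitem__(self, key):
        with open(os.path.join(self.root, key)) as fh:
            return fh.read()
```

Listing `store.keys()` then plays the role of `keys %hash` in the Perl example: files appear, directories do not.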
Knowledge Base design/proposal
I have an initial *very rough* draft of a design document and project proposal for the knowledge base. Please do not consider it final. http://www.jamesmith.com/code/perlkb/ There are links to both PostScript and PDF versions, uncompressed and gzip'd. Comments are welcome on general design issues -- sensible? too over-engineered? under-engineered? I'll be updating it and adding information to the document over the next few days to a couple of weeks (e.g., still no cost analysis or work breakdown, and only a cursory overview of the issues being solved and the design itself). It will likely evolve into a printable manual as the design becomes more detailed. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
[ANNOUNCE] Config::LDAP
The uploaded file Config-LDAP-0.01.tar.gz has entered CPAN as file: $CPAN/authors/id/J/JS/JSMITH/Config-LDAP-0.01.tar.gz size: 6725 bytes md5: aa8ba7d25e8e059fe9b71ddbdb668550 Nothing too mod_perly, but LDAP and websites do seem to go together at times. This module will try to read any RFC 2252-compliant attribute type and object class configuration file. This can allow web scripts (for example) to know what to expect before sending information to an LDAP server without having to hard-code that information. This is an initial release and should be considered alpha -- not fit for production use. Thought I'd let some others play around with it and let me know what they would find useful in the interface. :) -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
[ANNOUNCE] Config::Pod 0.01
This is the first module to come out of the knowledge base project. This module allows configuration files to be written in POD, plain ol' documentation. It only looks at headers and items (=head\d and =item lines), so anything else can be included to help explain what is being configured. Should be showing up on CPAN sometime in the next few hours as Config-Pod-0.01.tar.gz in one or more of the following locations: $CPAN/modules/by-authors/id/J/JS/JSMITH/Config-Pod-0.01.tar.gz $CPAN/modules/by-module/Config/Config-Pod-0.01.tar.gz size: 2825 bytes md5: 4b67c853c00e38edd0e6ffc2fc3ce1b5 or on the SourceForge site for the Perl Knowledge Base project (if I figured out their release system): http://sourceforge.net/projects/perlkb/ -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
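The headers-and-items idea is simple enough to sketch (a rough Python illustration, not Config::Pod's parser; the config keys in the sample document are invented):

```python
import re

def parse_pod_config(text):
    """Collect only =headN and =item lines from a POD document, ignoring
    everything else, as (depth, value) pairs; =item entries get depth None."""
    entries = []
    for line in text.splitlines():
        m = re.match(r'=head(\d+)\s+(.*)', line)
        if m:
            entries.append((int(m.group(1)), m.group(2).strip()))
            continue
        m = re.match(r'=item\s+(.*)', line)
        if m:
            entries.append((None, m.group(1).strip()))
    return entries

pod = """=head1 Server

Free-form explanation that the parser skips entirely.

=item hostname example.com

=head2 Ports

=item http 80
"""
entries = parse_pod_config(pod)
```

Everything that is not a header or item is free documentation, which is the point of writing configuration in POD.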
Re: Knowledge Base for 2.0
Stas Bekman [EMAIL PROTECTED] wrote: On Sun, 16 Sep 2001, Jim wrote: I am able to provide hosting at least during development. I'll put out a URL when we get something together. That's great! Once it's working, we may start using it for all ASF projects and then it'll be easy to host it on ASF machines. Sounds good. I've just stumbled upon the SuSE knowledge base -- very nice: http://sdb.suse.de/en/sdb/html/index.html I did look at it - initial impression is very favorable. Also check http://thingy.kcilink.com/modperlguide/, which is a special version of the guide that generates a page per section rather than having huge pages. So it should be able to take a document and split it into factoids/sections while keeping them in relation, like the URL above. So when you search, each item gets searched separately, but when you view a section, you can still easily jump to the adjacent sections in the original doc. This will take a little more thought, but I don't think it's unreasonable. And we have some docs in the modperl-docs CVS repository already. Not much yet, but it's the beginning. Good for testing with :) One thing I'd like to add is that the system should be capable of scratching its database and rebuilding everything from scratch. Consider some documentation being used as a source and getting modified, or some items becoming obsolete and getting removed, or some items getting added. Somehow the system should be able to handle this. I should be able to track sufficient metadata that I can rebuild without having to have the documents around (for the indices used in the interface) or rebuild from the documents themselves (if *all* database files are wiped). This also allows for arbitrary addition/deletion of arbitrary documents. I started a project on SourceForge so I could use CVS and let others see what's going on. I'll add RFEs that are mentioned on the list at some point. http://sourceforge.net/projects/perlkb/ I'm concentrating on the backend stuff. 
Someone else can focus on the frontend(s) -- we can make the frontends subprojects just like the backend. Each can have its own subdirectory in CVS (basic future direction -- not enough backend done yet for that). I'll put together a test interface at some point - not meant to be functional, just for testing. I also haven't emailed the perl modules list yet, so the PerlKB name might change at some point (the Perl module, not the sub-project name). +-- James Smith - [EMAIL PROTECTED] | http://www.jamesmith.com/ [EMAIL PROTECTED] | http://cis.tamu.edu/systems/opensystems/ +--
Re: BOF?
Matt Sergeant [EMAIL PROTECTED] wrote: On Sat, 14 Jul 2001, brian moseley wrote: On Sat, 14 Jul 2001, Ken Williams wrote: I just noticed that there's no mod_perl BOF listed at http://conferences.oreillynet.com/cs/os2001/pub/10/bofs.html . Is one scheduled? If not, let's get one together. speaking of which. there should be an opening night piss-up, eh? somebody that knows the area should propose a place. Judging by where the hotel is, I think probably the hotel bar is going to be best. I arrive on Sunday. As do I. Just let me know where and when. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: RFC: Logging used Perl Modules (was Re: API Design Question)
Doug MacEachern [EMAIL PROTECTED] wrote: On Tue, 3 Jul 2001, James G Smith wrote: The current code I have uses %INC, but I wanted to write something like the following: sub use : immediate { # do stuff here if logging return CORE::use(@_); } You could just override CORE::GLOBAL::require. You don't need to override the import, and your version of require will be called at the same time as the 'use'. Thanks! I will see what I can do with that. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Module::Use 0.03
Yes, another day and another version...

o Decay and Grow configurations to help make the module set change with the needs of the scripts.
o The DB_FileLock logger actually works now.
o The Debug logger now sorts the modules before printing.
o Modules are automatically loaded properly.

The Decay and Grow options are useful for dropping modules out of the data store if they have not been loaded for a while. Statistics are not kept on any modules loaded prior to Module::Use. Statistics are also not kept on modules starting with `/' or [a-z] (meaning we don't automatically load pragmas). The idea behind this module is for something that could be useful in a production server to track which modules are needed and to preload the most commonly loaded modules without requiring explicit `use' statements in a startup.pl or httpd.conf. I am continuing to look at the debug modules. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
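The Decay/Grow bookkeeping might look roughly like this (a Python sketch under assumed semantics, grow on each recorded load and decay on each logging pass; Module::Use's actual parameters and logic may differ):

```python
class UsageLog:
    """Sketch of decay/grow bookkeeping: each recorded load grows a
    module's count; each pass decays all counts, so modules that stop
    being used eventually fall out of the preload set."""
    def __init__(self, grow=2, decay=1):
        self.grow, self.decay = grow, decay
        self.counts = {}

    def record(self, modules):
        # Decay everything first, dropping modules that reach zero.
        self.counts = {m: c - self.decay
                       for m, c in self.counts.items() if c - self.decay > 0}
        for m in modules:
            if m[0].isupper():      # skip pragmas (lowercase) and paths
                self.counts[m] = self.counts.get(m, 0) + self.grow

    def preload_list(self, top=10):
        return sorted(self.counts, key=self.counts.get, reverse=True)[:top]

log = UsageLog()
log.record(["CGI", "strict", "Foo::Bar"])
log.record(["CGI"])
```

After those two passes, CGI outranks Foo::Bar and `strict` was never counted, which is the preload ordering the module is after.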
Re: DSO suexecx mod_perl
Oliver [EMAIL PROTECTED] wrote: Hello, OK, but how can I govern this: in the apache configuration of ./configure I tried to add some DSOs (Dynamic Shared Objects) like this:

./configure --enable-module=most --enable-shared=max --enable-suexec --suexec-caller=apache --suexec-docroot=/home --suexec-userdir=/home --suexec-uidmin=500 --suexec-gidmin=100 --suexec-safepath=/usr/local/bin:/usr/bin:/bin --activate-module=src/modules/perl/libperl.a --enable-shared=perl

After running make and then make install, I tried to restart httpd and it failed. My question: What can I do to have most of the DSO modules for Apache and mod_perl running as suEXEC? You can't. suEXEC is used to run other programs. DSOs are not programs but shared libraries. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: RFC: Logging used Perl Modules (was Re: API Design Question)
darren chamberlain [EMAIL PROTECTED] wrote: James G Smith [EMAIL PROTECTED] said something to this effect on 07/02/2001: How would something like this do: NAME Apache::Use SYNOPSIS use Apache::Use (Logger => 'DB', File => '/www/apache/logs/modules'); DESCRIPTION Apache::Use will record the modules used over the course of the Perl interpreter's lifetime. If the logging module is able, the old logs are read and frequently used modules are automatically loaded. Note that no symbols are imported into packages. You can get this information from %INC, can't you? e.g.: Most definitely. However, you lose information about which modules are needed more often than others. There's no difference between all scripts needing CGI.pm and one script needing Foo::Bar. We also lose timing information. If 90% of the modules are loaded into the process with the last request before the child is destroyed, there's no point in loading them during the configuration phase. We can help this a little by taking snapshots of %INC at regular intervals (at the end of each request, for example). The current code I have uses %INC, but I wanted to write something like the following: sub use : immediate { # do stuff here if logging return CORE::use(@_); } -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
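The snapshot idea can be sketched as follows (Python, illustrative only; the snapshots stand in for copies of %INC taken at the end of each request):

```python
def snapshot_deltas(snapshots):
    """Given per-request snapshots of the loaded-module set, report when
    each module first appeared and in how many requests it was loaded.
    This recovers the frequency and timing information a single final
    look at %INC throws away."""
    first_seen, seen_in = {}, {}
    for i, snap in enumerate(snapshots):
        for mod in snap:
            first_seen.setdefault(mod, i)
            seen_in[mod] = seen_in.get(mod, 0) + 1
    return first_seen, seen_in

snaps = [
    {"CGI.pm"},                    # request 0
    {"CGI.pm", "Foo/Bar.pm"},      # request 1 pulls in Foo::Bar
    {"CGI.pm", "Foo/Bar.pm"},      # request 2
]
first, freq = snapshot_deltas(snaps)
```

A module first seen only in the last request before the child dies is a poor preload candidate, however it looks in the final %INC.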
Re: RFC: Logging used Perl Modules
darren chamberlain [EMAIL PROTECTED] wrote: James G Smith [EMAIL PROTECTED] said something to this effect on 07/03/2001: sub use : immediate { # do stuff here if logging return CORE::use(@_); } To go OT here, what would 'immediate' be doing here, if Perl supported it? It would be run at compile time when the compiler ran into it instead of waiting for run-time. Basically, the following invocations of foo and bar would be equivalent. The `immediate' modifier could wrap an implicit BEGIN { } around any invocation of the subroutine.

sub bar { ... }  # do something
sub foo : immediate { bar(@_); }

foo($a, $b);             # equivalent to:
BEGIN { bar($a, $b); }

This is used in FORTH to support such things as if-then-else and do-while constructs since they are just entries in the dictionary like any other definition. FORTH actually uses it to build up the entire language since there are no reserved words. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: using XML::Parser with apache/modperl ???
Christian Wattinger [EMAIL PROTECTED] wrote: hi question: is it possible (should certainly be) to use the perl XML::Parser module with apache/modperl?? I don't know about OS X, but my experience on OpenBSD is that Apache and XML::Parser must be compiled against the same expat library. Either have Apache link against the system-installed lib (which Perl most likely links against for XML::Parser) or change the RULE_EXPAT rule in src/Configuration (or similarly named file) to RULE_EXPAT=no. Details may vary a bit from system to system, but the goal is the same: Apache and XML::Parser using the same expat lib. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Module::Use 0.02 uploaded
The uploaded file Module-Use-0.02.tar.gz has entered CPAN as file: $CPAN/authors/id/J/JS/JSMITH/Module-Use-0.02.tar.gz size: 2917 bytes md5: 96c3d47fb65b1f392626b082cf6ad85d No action is required on your part Request entered by: JSMITH (James G Smith) Request entered on: Wed, 04 Jul 2001 02:42:28 GMT Request completed: Wed, 04 Jul 2001 02:42:59 GMT The name was changed from Apache::Use due to the general nature of the code. It does not depend on Apache/mod_perl to work. Feel free to test and offer suggestions. If there are data stores that you would like to see work with this, then either mention them or contribute a module for them :) +-- James Smith - [EMAIL PROTECTED] | http://www.jamesmith.com/ [EMAIL PROTECTED] | http://cis.tamu.edu/systems/opensystems/ +--
RFC: Logging used Perl Modules (was Re: API Design Question)
How would something like this do: NAME Apache::Use SYNOPSIS use Apache::Use (Logger => 'DB', File => '/www/apache/logs/modules'); DESCRIPTION Apache::Use will record the modules used over the course of the Perl interpreter's lifetime. If the logging module is able, the old logs are read and frequently used modules are automatically loaded. Note that no symbols are imported into packages. --- I really wish we had `use' as a function instead of a keyword and had an `immediate' property for subs (kind of a FORTH thing). Then we could do reference counting of `use' and `require'. If the above seems reasonable, I'll try to get a 0.01 out asap. Passing this by the modules list for comment also. The current code I have does not actually depend on Apache and mod_perl. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: SOAP, WSDL etc.
Jesús Lasso Sánchez [EMAIL PROTECTED] wrote: Hi James, I'm working on a similar project with a farm of web servers that must access data in an LDAP server. Could you explain something more about the application you use in your work, please? We're thinking of developing a little daemon with open persistent connections to the LDAP server and using this daemon as a gateway between the web servers and the LDAP, but maybe your solution will be more reliable and standard than ours. (CCing the list since this is a follow-on to the discussion there.) Well, we're still in the middle of development, and it's in PHP, but the protocol and idea are the same -- we could do it in Perl with the SOAP::Lite module (and I am getting tempted to go that direction if I can't get PHP and Apache to compile together under Irix). Basically, we're wanting to have the web farm on a private network so we can trust the hosts that make the XML-RPC requests -- they send an LDIF to the XML-RPC server along with a list of LDAP objects on which the server should lock (so we can serialize modifications across branches). The central server then uses ldapmodify to push the LDIF into LDAP (a supposedly atomic operation) and at the same time maintains an audit trail. There's really not a whole lot to it. When I get the code finished, though, I am wanting to release it. Hopefully by the end of August at the latest, since we are going production at the beginning of August. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: SOAP, WSDL etc.
ASHISH MUKHERJEE [EMAIL PROTECTED] wrote: Can anyone suggest sites which discuss practical application of SOAP/WSDL with Perl, with code snippets etc.? Or if anyone has any examples that you can forward, it will be greatly appreciated. Also, any recommended books for learning SOAP and related technology? http://xmlrpc.org/ has a lot of information on XML-RPC, which is the precursor of SOAP before the corporate world got ahold of it and made it an unreadable standard (imho). It can do a lot of what SOAP can do, but is easier to debug. http://www.soaplite.com/ is the website for the Perl SOAP::Lite module. This module supports both SOAP and XML-RPC. http://www.w3.org/TR/SOAP is the actual standards document. Take a big pot of coffee and a few XML references, and spend the night trying to slog through it. It's fairly dense. It says SOAP is lightweight, but if so, then XML-RPC has no weight at all. Ok, so I'm biased a bit, but I like simple protocols :) Haven't read any books on the subject, though I know they're out there. The SOAP::Lite module (available on CPAN) has some examples. Example applications from my job: mailstore administration and LDAP modification (so we can keep a central LDIF audit trail when using a web farm for the web interface). We're using the XML-RPC standard instead of SOAP since it is sufficient for what we're doing. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
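As a taste of how readable the XML-RPC wire format is, Python's standard-library xmlrpc.client can marshal and unmarshal a call in a couple of lines (the method name and arguments here are invented for illustration):

```python
import xmlrpc.client

# Marshal a call to a hypothetical method into its XML wire form.
payload = xmlrpc.client.dumps(("jsmith", 42), methodname="quota.set")
print(payload)

# The payload is plain, human-readable XML -- part of what makes
# XML-RPC so much easier to debug by hand than SOAP.
params, method = xmlrpc.client.loads(payload)
```

Dumping the payload and reading it by eye is a perfectly workable debugging technique, which is hard to say of a SOAP envelope.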
Re: Questions Concerning Large Web-Site
Purcell, Scott [EMAIL PROTECTED] wrote: I was hoping to hear some simple input from people who have architected good, sound sites, and was hoping for some good feedback, or some old sample code that I can study and find out how the other half live. Well, I can give some things I've tried to live by.

(1) Keep each component simple - it's a lot easier to debug.
(2) Never depend on hidden variables telling you which page you are on -- look at the data and see which page the data fits best. This increases security. I typically make one script per task and let the script figure out which page to show. Cf. TWIG, which has a single controlling script (index.php3).
(3) Build a solid foundation - it makes the user interface almost trivial and easier to manage when policies change. This usually consists of a set of libraries to do the actual work.
(4) If it's on CPAN, try to use it. Better to use someone else's work (if it fits) than reinvent the wheel. You might want to check out some of the templating packages available. If you want a framework, take a look at Mason.
(5) Even for a single developer, revision control can be nice.

These (except the CPAN part) are based on my experience with a PHP project that I am working on (yes, different language, but same rules) which consists of around 50,000 lines of code, unfinished (email and directory service management for customers based on TWIG). John's email has some good suggestions specific to Perl. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: Real Widgets and Template Languages
Gunther Birznieks [EMAIL PROTECTED] wrote: At 12:15 PM 5/28/01 -0400, Stephen Adkins wrote: The rendering of this widget as HTML requires at least the following * config information (Widget::Config) [snip] Also, will we require XML to configure? Or is this also an optional feature that you more or less want for yourself but others can choose not to use? Configuration data is read in via the Widget::Config class, and this class can be replaced with a class which reads config data from any other source, as long as it conforms to the same calling interface. I was under the impression that XML was your desired means of writing a config file. Do you have a preference to use something different? I like XML for config files; we use that in our Java stuff all the time. But Perl is one of the nicest and most flexible config file languages out there, i.e., my config file is Perl. Anyway, I think it is weird to think of configuring just widgets. Usually you configure an application and widgets are a part of that. But everyone here will have a different way of preferring to write their application config, whether it's XML or Perl, and which features of these are used (e.g., just a set of scalars, or references to hashes, or ... ?) or, in the case of XML, using attributes versus subtags... IMHO, having a configuration API is much better than requiring a particular way to do configuration. If the backend configuration is done via Perl code, then any configuration file format can be supported with an appropriate module handling it. These widget configurations will need to be flexible enough that I can construct a page with them without any knowledge of how they will look -- the configurations should be tie-able to an overall theme for the site. I've always been a champion of themes for websites. I should be able to select a configuration at run-time without a lot of trouble. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: LDAP utilities (was: the widgets thread)
Gunther Birznieks [EMAIL PROTECTED] wrote: At 10:49 AM 5/28/01 -0500, James G Smith wrote: Hmm... Something I'd like to see is a set of classes in Perl for managing LDAP. These classes would need to be generic (configurable) enough to work with any LDAP schema. They would need to provide an audit trail, transaction log, etc., that could be used to replay changes made to LDAP. They would need to be able to enforce data consistency across branches and data integrity. If no one gets to it before I do, I'll port my PHP code to Perl :) I guess it depends on what you store in LDAP. I find that LDAP is useful for simple data structures, but it can be much nicer to also have the data in a relational database so most applications can access things in one query. We use it as a University-wide directory service, but with two branches for people (unfortunately -- due to policies and other decisions beyond our control) and one for roles and organizations. We put an abstraction layer between the web account management scripts and LDAP so we can easily write scripts that allow customers to manage their entry without the scripts having to worry about enforcing policy and keeping data consistent across branches. The username in one branch must equal the username in the other branch, for example. All data policies are pushed down into the abstraction layer, making it easier to manage them. I think I would like to see a public domain LDAP management console. This is really what you are talking about, right? One of my coworkers wrote one in Perl back at Barclays Capital several years ago, but he never bothered to open source it. I suspect there are a ton of places like that which custom design something like this and then never open source it due to laziness. Not sure I understand what this would be. Could you provide a bit more detail or an example of what would be done with it? 
Oh, and locking mechanisms used must be transferable between machines -- I lock resource A on machine X and then hand off the lock to machine Y -- this code must be useful in a distributed environment (web farm) and robust enough for use in a PKI. I guess I don't understand. The locking bit sounds slightly odd to me. Wouldn't it be easier to administrate a master LDAP server and have a push model of replication? This keeps two people from claiming the same username, for example. We identify people with a 32-character uid internally and allow them to select up to three usernames. The locking prevents certain race conditions from happening. I am not sure what you mean by robust being a prerequisite for PKI? Passwords are passwords -- just that certificates are larger and the SSL protocol has already decoded it for you. The PKI stuff is dealing with the integrity of the data in the directory. We must have correct information before we can issue certificates based on that information. Allowing race conditions allows one customer to hijack the account of another and obtain a certificate that is not theirs to have. The audit trail allows us to track changes and find out who did what when. Just in case a customer complains that they didn't make a particular change to their information, or if we lose the disks (RAID and mirrored) and the backups (both on-site and off-site) with the directory databases. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: Real Widgets and Template Languages
Stephen Adkins [EMAIL PROTECTED] wrote: At 09:53 PM 5/29/2001 +0800, Gunther Birznieks wrote: At 05:17 PM 5/28/01 -0400, Stephen Adkins wrote: ...

$widget = $wc->widget('first_name');
print "First Name: ", $widget->html(), "\n";

A widget type has already been defined, so I don't see that the method to output its display should be called html(), which is, well, HTML-specific. I prefer

print "First Name: ", $widget->display(), "\n";

since widgets are components that know how to display themselves, whether it's WML or HTML or whatever. This is a philosophical design decision that I have been struggling with. The widget does indeed know that it should generate HTML, so it could have a method, $widget->display(), $widget->draw(), etc. However, this implies that the widget has the freedom to decide how it should render itself and that the caller does not need to know. This is not correct. Actually, it could be. It would allow for general templates of widgets that are not specific to any particular format (thinking of the standard template library in C++). Perhaps overloading the stringizing operator for output... :) The caller in this case has already cooked up a bunch of HTML and is counting on the widget to produce HTML which can be inserted. The widget does *not* have the freedom to render any other way. This is why I have (sort of stubbornly) stuck with the $widget->html() method despite the unanimous suggestions for a $widget->display() method. The actual output code could be controlled by a configuration option -- if the configuration is held in another object that the widget can query, then the caller will not *have* to know, though it could know, which format the widget will be rendered as. I do believe there is a place for a display() method, but it is at the controller level. This is the level at which the caller may not know what technologies are being used by the widgets. Agree... 
but I don't see why we can't push this down as low as possible so we don't have to know how we are rendering until the absolute last possible moment. 1. TECHNOLOGIES I propose that the following technologies will have supporting Controller/State/Widget combinations to make this not just another web widget library.

* CGI/HTML - a web application
* mod_perl/HTML - similar, a web application using mod_perl
* WAP/WML - driven from a WAP device
* X11 (Gtk-Perl) - an X windows application
* Curses (terminal) - a screen-oriented terminal application
* Term - a line-oriented (scrolling) terminal application
* Cmd - similar to Term, but the state must be saved between each cmd

(I know I'm stretching the paradigm a little bit here, probably beyond what is reasonable.) Stretching the paradigm can be good. My mind was going along the same direction -- figuring out an implementation is a bit of a problem at the moment, though. One of the primary design rules is to *not* fall into the least-common-denominator trap. Many cross-platform application frameworks have done this and failed. Rather, the design goal is to *enable* the widget to fully utilize the capabilities of the technical environment it is in. Provide a way to query capabilities of the underlying technology -- how big is the screen, remote vs. local, degree of user interaction available, etc. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
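The two positions in this thread, an explicit html() method versus a display() that defers to configuration, can be sketched side by side (a purely illustrative Python sketch; the class and markup are invented):

```python
class Widget:
    """A widget with explicit per-format methods (html, wml) plus a
    display() that dispatches on a format chosen by the widget's
    container, so the caller need not know the target markup."""
    def __init__(self, name, container):
        self.name = name
        self.container = container

    def html(self):
        return '<input type="text" name="%s" />' % self.name

    def wml(self):
        return '<input name="%s"/>' % self.name

    def display(self):
        # The container (the controller level) decides the output format;
        # the widget only knows how to render each format it supports.
        return getattr(self, self.container["format"])()

container = {"format": "html"}
w = Widget("first_name", container)
```

A caller that has already cooked up HTML calls html() directly; a format-agnostic controller calls display() and lets the container's configuration decide, which is roughly the compromise described above.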
Re: Real Widgets and Template Languages
[EMAIL PROTECTED] wrote: Where is this language value coming from? The widget's container. You only care about English? Then set it to EN-US and forget it. [snip] Implementation strategies can be as simple as:

sub label {
    my $self = shift;
    my $lang = shift || $self->container->language;
    if (exists $self->{'label'}{$lang}) {
        return $self->{'label'}{$lang};
    }
    return $self->{'label'}{$self->container->language('default')};
}

Something I've seen elsewhere is to have a master table of strings that the widgets can then reference. Different ways of doing this: index strings by number (Microsoft resources in executables); index strings by the string in a particular language (TWIG with English as the indexing language). This allows for sharing of strings across widgets and memory savings, always a good thing in mod_perl. It also doesn't slow the system down much, if any, compared to storing the strings in each widget with duplication. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
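A master string table indexed by the English string might look roughly like this (a Python sketch of the TWIG-style approach described above; the class name and sample strings are invented):

```python
class StringTable:
    """Master table of UI strings shared by all widgets, indexed by the
    English string; falls back to the index itself when no translation
    exists, so untranslated strings still display."""
    def __init__(self):
        self.table = {}   # english -> {lang: translation}

    def add(self, english, lang, text):
        self.table.setdefault(english, {})[lang] = text

    def label(self, english, lang, default_lang="en"):
        entry = self.table.get(english, {})
        return entry.get(lang) or entry.get(default_lang) or english

strings = StringTable()
strings.add("Save", "de", "Speichern")
```

Every widget sharing one table is where the memory saving comes from: the string "Save" is stored once, not once per widget.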
Re: FW: Apache::Session / No-Cookie-Tracking
Jonathan Hilgeman [EMAIL PROTECTED] wrote: [snip] I accidentally caught them during testing or something and got a variable on the URL line. (I substituted the domain name - it's not really cart.com) http://www.cart.com/cgi-bin/cart.cgi?cartidnum=208.144.33.190T990806951R5848E cartidnum seems to be: $IP-Address + T + Unix-TimeStamp + R + Unknown number + E By the way, the session only seems to be active until the browser completely shuts down. Any ideas? If I could identify my users on another site without using cookies at all, that would be fantastic! Be careful with using too much magic. I recently tested/evaluated a product to provide a web interface for email. It appears that it uses a combination of IP address and URL to track authenticated users. For example, if I authenticated as foo from 192.168.0.4, then as long as I was coming from 192.168.0.4, I could read foo's email, even if I was someone else logged into the machine. The proper URL would be of the form http://192.168.0.10/foo (if 192.168.0.10 were the server). While it is nice to assume one person per IP address, there are many cases when this is not true. In the product I evaluated, guessing the proper URL to read someone else's email was trivial. Going through an SSL proxy didn't mask the behavior; it just required the use of openssl's client. In the example you give, the timestamp and unknown number may make it more difficult to guess the proper information. This is a good thing. Without some information passing between the client and server that is known only to them, it is too easy to spoof the client and access a session unauthorized. There is also no way to distinguish two clients on the same machine, especially if they are the same application. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
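For what it's worth, the apparent structure of that ID can be pulled apart with a short regex (format inferred from the example above, which is exactly the problem: an attacker can infer it just as easily and only has to guess the R component):

```python
import re

# Inferred format: IP + "T" + Unix timestamp + "R" + unknown number + "E".
CARTID = re.compile(
    r'^(?P<ip>\d{1,3}(?:\.\d{1,3}){3})'
    r'T(?P<timestamp>\d+)'
    r'R(?P<random>\d+)E$'
)

def parse_cartid(cartid):
    """Split a cart session ID into its visible components, or None."""
    m = CARTID.match(cartid)
    return m.groupdict() if m else None

parsed = parse_cartid("208.144.33.190T990806951R5848E")
```

With the IP observable and the timestamp roughly known, the entire session secret collapses to that short trailing number, which illustrates the "too much magic" warning.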
Re: Concepts of Unique Tracking
Jonathan Hilgeman [EMAIL PROTECTED] wrote: Okay, after I think about it, there must be a way to identify a unique user, even if they are behind a firewall. Let's run through this process: 1) Person behind the firewall sends out a request to a web server. 2) The firewall intercepts that request, masks the person's IP address and lets the request keep going out. 3) The web server receives the request and sends back packets of data to the IP of the user, which is really the IP of the firewall now. 4) The firewall receives the packets of data first, but now must send those data packets to someone inside the firewall. 5) The packets of data MUST have some unique identifier to let the firewall know who requested the data in the first place. Now, I'm assuming that Apache has full access to these incoming packets. Therefore, it must also have access to this invisible identifier. Is it possible to extract that identifier somehow by tinkering with Apache?

No. What happens is more like this:

(1) The browser opens a socket for connecting to the remote server. This assigns a unique identifier to the TCP connection - IP + port on the client side.

(2) The browser connects to the remote server, which actually ends up connecting to the firewall. The firewall has a unique number on its side - its IP + port (80 or 443 most likely).

(3) The firewall opens a socket for connecting to the remote server. This assigns a unique identifier to the TCP connection - the firewall's public IP + port. The firewall remembers this and will transfer any data coming from the client to this connection, and any data from this connection to the client.

This is part of what is meant by a firewall which saves state information. All the information needed to connect the client and server via the firewall is kept within the firewall. Neither the client nor the server need be aware of any of it, nor, afaik, can they be aware of it without putting an http proxy on the firewall. The server sees the firewall's IP and port, not the actual client's.
This mapping will change with each new connection, which will happen whenever the keepalive timeout expires. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
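The connection tracking described above can be illustrated with a toy Perl sketch (real NAT lives in the firewall's kernel, not in Perl; the names and port numbers here are invented):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy model of a masquerading firewall's state table.  Outbound
# connections are keyed by the client's private IP:port; the firewall
# assigns each one a fresh public port and remembers the mapping.
my %nat;                 # "client_ip:client_port" => public port
my $next_port = 40000;

sub outbound {           # client opens a connection through the firewall
    my ($client_ip, $client_port) = @_;
    my $key = "$client_ip:$client_port";
    $nat{$key} ||= $next_port++;
    return $nat{$key};   # the source port the remote server sees
}

sub inbound {            # a reply arrives on one of the public ports
    my ($public_port) = @_;
    my %rev = reverse %nat;         # public port => client key
    return $rev{$public_port};      # who gets the data
}

my $port = outbound('192.168.0.4', 51230);
print "server sees the firewall's IP, source port $port\n";
print "reply is routed back to ", inbound($port), "\n";
```

The point the sketch makes is the one in the post: the mapping exists only inside the firewall, so nothing in the packets Apache sees identifies the real client.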
Re: Preventing duplicate signups
Rob Bloodgood [EMAIL PROTECTED] wrote: A really simple trick, rather than using a cookie, if you are saving state to a DB anyway: set a flag in the DB and test for its existence.

sub handler {
    my $s = session->new();
    $s->continue();
    my $flag = $s->get('flag');

Be careful of race conditions. If things are timed right, two requests could run in parallel with just enough time differential to get around this test. You may want to see if there is a way to lock on something so this code section is serialized for any particular user. If you're using a web farm, good luck. Otherwise, either shared memory or a file link should be sufficient (non-NFS).

    if ($flag) {
        # do something else
    }
    else {
        # run insert for new signup
    }
}

-- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
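The per-user serialization suggested above can be sketched with flock on a per-user lock file (single machine only, as noted - flock is unreliable over NFS; the directory and function names are hypothetical):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:flock);
use File::Temp qw(tempdir);

my $lockdir = tempdir(CLEANUP => 1);   # stand-in for a real lock directory

# Run $code while holding an exclusive lock named after the user.
# LOCK_NB makes a second simultaneous request fail instead of waiting,
# so it can be treated as a duplicate submission.
sub with_user_lock {
    my ($user, $code) = @_;
    open my $fh, '>', "$lockdir/$user.lock" or die "open: $!";
    flock($fh, LOCK_EX | LOCK_NB) or return 0;   # someone else got here first
    $code->();
    flock($fh, LOCK_UN);
    close $fh;
    return 1;
}

with_user_lock('alice', sub { print "signup processed\n" });
```

Whether you report the second request as a duplicate or make it wait (drop LOCK_NB) depends on whether a retry after the first insert is harmless.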
Re: Preventing duplicate signups
Haven't had enough time for my previous reply to make it back to me so I could reply to it... If using SQL, you might be able to do row or table locking to get around any race conditions.

Rob Bloodgood [EMAIL PROTECTED] wrote: A really simple trick, rather than using a cookie, if you are saving state to a DB anyway: set a flag in the DB and test for its existence.

sub handler {
    my $s = session->new();
    $s->continue();
    my $flag = $s->get('flag');
    if ($flag) {
        # do something else
    }
    else {
        # run insert for new signup
    }
}

-- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: Reverse engineered HTML
Paul Cotter [EMAIL PROTECTED] wrote: Does a package exist that will read an HTML document and generate an Apache::Registry cgi script? Even better if it accepts an <!--Perl tag.

That one's easy:

print <<"1HERE1";
insert HTML document here
1HERE1
__END__

But is that what you're asking to do? -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
Re: capturing subrequest output (have things changed)
Anand Raman [EMAIL PROTECTED] wrote: hi guys I just started off using modperl so excuse me if this has been answered. Is there a way to capture output from a subrequest, rather than allowing the subrequest to directly output the response to the client browser? I need to be able to parse the output of the subrequest before outputting it to the browser.

Apache 1.x does not support this -- stdout is tied to the socket, iirc. If you are into heavy wizardry, you might try redirecting the stdout file descriptor (fd 0) before the subrequest and restoring it afterwards. Note that this requires more than tying STDOUT to something else and is likely to be OS dependent to some degree. Apache 2.x has support for I/O layering, which would (hopefully) provide for this. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
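The descriptor-swapping wizardry can be sketched in plain Perl with dup-style opens (a standalone illustration only - this has not been tried inside mod_perl, and the helper name is invented):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Save the real STDOUT, point STDOUT at a temp file while the coderef
# prints, then restore it and return whatever was captured.
sub capture_stdout {
    my ($code) = @_;
    my (undef, $file) = tempfile(UNLINK => 1);
    open my $saved, '>&', \*STDOUT or die "dup: $!";       # keep the real fd
    open STDOUT, '>', $file        or die "redirect: $!";
    $code->();
    open STDOUT, '>&', $saved      or die "restore: $!";   # flushes and restores
    close $saved;
    open my $in, '<', $file or die "read: $!";
    local $/;                                              # slurp the whole file
    return <$in>;
}

my $out = capture_stdout(sub { print "subrequest output" });
print "captured: $out\n";
```

Inside Apache the complication is exactly the one named above: the server writes to the socket at the C level, so swapping Perl's STDOUT handle alone is not enough.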
Re: capturing subrequest output (have things changed)
Anand Raman [EMAIL PROTECTED] wrote: hi guys I just started off using modperl so excuse me if this has been answered. Is there a way to capture output from a subrequest.. rather than allowing the subrequest to directly output the response to the client browser. I need to be able to parse the output of the subrequest before outputting it to the browser. I'm not positive on fd 0 being stdout -- stdout, stdin, and stderr are fd 0-2, I believe. Just not sure of the ordering (I don't use them by number very often...). -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas AM CIS Operating Systems Group, Unix
Re: deploying tips running 2 apache server
"rene mendoza" [EMAIL PROTECTED] wrote: This is a multi-part message in MIME format. --=_NextPart_000_0044_01C0BF6F.A24D64B0 Content-Type: text/plain; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable i ve been reading the mod perl guide and ive learned that i dont want to = use apache child processes to serve static html or images, so i want to = implement a lightweight (only mod_dav and cgi enabled apache server, = lots of child processes) and a heavy mod perl apache server 5 or 10 = apache processes/=20 does anybody has pointers, or tips of where to start? how to configure = that only static content of a virtual host goes thru one server and = dynamic goes thru another. im using Mason in the mod_perl enabled apache is it done with mod_rewrite?, should both servers be ssl enabled? Short answer first: Rewrite rules can do almost anything the proxy module can do except cache results and provide reverse proxying. With proper configuration, the only server that should need SSL is the proxying server the client interacts with. Long answer: http://perl.apache.org/guide/strategy.html#Adding_a_Proxy_Server_in_http_Ac My most recent experience: We aren't using mod_perl (unfortunately), but we are using mod_php for our central email system (web interface for it). We found that we get a performance boost by using two Apaches. The Apache+mod_php process listens on port 80 and redirects any requests for php pages to the Apache listening on port 443 (https). A lightweight Apache (mod_rewrite, mod_proxy, mod_ssl) listens on port 443. Most requests will come to this server since only the initial request in a session will arrive at port 80. 
The following ascii art illustrates how the systems work together:

                +--------+
                | client |
                +--------+
                    |
                    | (request)
                    v
               +----------+   /html/*, /MBX/*   +---------+
               | port 443 | ------------------> | WebMail |
               +----------+                     +---------+
                    |
                    | *.php3, *.php
                    v
               +----------+
               | port 80  |
               +----------+

The server at port 443 will proxy requests - any php scripts get sent to port 80 to be filled, and any requests for /MBX/* or /html/* get sent to the WebMail installation. This setup was put in place to allow testing of WebMail without requiring people to learn a new URL. The side effect was a speedup in the server. Requests for static files (non-php files) on the site are served directly to the client. The response flows back the same way the request arrived. Except for the WebMail proxying, all the proxying can be accomplished with rewrite rules. If reverse proxy rules are not required and all the proxied material is dynamic, then mod_rewrite should be sufficient. This can allow for a lighter front end. For now, we allow 35 lightweight servers and 30 heavy servers. We used to have 38 heavy servers, but by giving up 8, we had more than enough memory left to hold 35 light ones, and the system became more responsive. Except for a few peaks at noon, this is sufficient. We serve around a terabyte of data per day with 88,500 requests for pages/day. This is about a third to a fifth of our target customer base. We will probably move the port 80 server to separate hardware and up the port 443 server count, but currently our web server, mailstore, imap and pop3 daemons run on the same machine. Configuration is fairly simple -- we have a few rewrite rules and a couple of proxy rules for the port 443 server and the usual configuration for the port 80 server. The only server requiring special configs is the proxying server. Quoting the configuration here wouldn't really help, since it really depends on the site. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
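A rough sketch of what the port-443 server's rewrite rules might look like (hostnames and paths are invented; the real config, as noted, depends on the site):

```apache
# Sketch only - hostnames and paths are examples, not the production config.
RewriteEngine On

# php scripts are proxied to the heavy server on port 80
RewriteRule ^/(.+\.php3?)$   http://backend.example.com/$1        [P,L]

# WebMail paths are proxied to the WebMail installation
RewriteRule ^/MBX/(.*)$      http://webmail.example.com/MBX/$1    [P,L]
RewriteRule ^/html/(.*)$     http://webmail.example.com/html/$1   [P,L]

# everything else (static files) is served directly by this server
```

The [P] flag hands the rewritten URL to mod_proxy, which is why mod_proxy must be compiled in even though the routing logic lives in mod_rewrite.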
Re: [OT] Client Certificate Authentification module?
[EMAIL PROTECTED] wrote: I am looking for a module that will allow me to use Client Certificates to authenticate the users. I am pretty sure I have come across this before, but I cannot find it anywhere. Anybody know where I can find this? I have searched CPAN for 'cert', 'authen' and 'client', but unless I am overlooking something there doesn't seem to be anything there.

You might want to look at mod_ssl and OpenSSL. They can mimic basic authentication with client certificates. -- James Smith [EMAIL PROTECTED], 979-862-3725 Texas A&M CIS Operating Systems Group, Unix
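A rough sketch of the mod_ssl approach (the paths and the protected Location are examples; see the mod_ssl documentation for the authoritative details):

```apache
# Sketch only - paths and the Location are examples.
SSLVerifyClient       require
SSLVerifyDepth        2
SSLCACertificateFile  /path/to/ca-bundle.crt

<Location /protected>
    # mod_ssl fakes a basic-auth username from the certificate's DN;
    # each DN goes in the AuthUserFile with the fixed crypt()ed
    # password "xxj31ZMTZzkVA" (the word "password").
    SSLOptions   +FakeBasicAuth
    AuthName     "Certificate holders only"
    AuthType     Basic
    AuthUserFile /path/to/httpd.passwd
    require      valid-user
</Location>
```

With +FakeBasicAuth the rest of the stack (including mod_perl authorization handlers) sees an ordinary REMOTE_USER, so no CPAN module is strictly required.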
Re: getting rid of multiple identical http requests (bad users double-clicking)
Stas Bekman [EMAIL PROTECTED] wrote: On Fri, 5 Jan 2001, Gunther Birznieks wrote: Sorry if this solution has been mentioned before (i didn't read the earlier parts of this thread), and I know it's not as perfect as a server-side solution... But I've also seen a lot of people use javascript to accomplish the same thing as a quick fix. Few browsers don't support javascript. Of the small number that don't, the Venn diagram overlap of browsers that don't do javascript and users with an itchy trigger finger is very small. The advantage is that it's faster than munging your own server-side code with extra logic to prevent double posting. Nothing stops users from saving the form and resubmitting it without the JS code. This may reduce the number of attempts, but it's a partial solution and won't stop determined users. Nothing dependent on the client can be considered a fail-safe solution.

I encountered this problem with some PHP pages, but the idea is the same regardless of the language. Not all pages have problems with double submissions. For example, a page that provides read-only access to data can usually be retrieved multiple times without damaging the data. It's submitting changes to data that can become the problem. I ended up locking on some identifying characteristic of the object whose data is being modified. If I can't get the lock, I send back a page to the user explaining that there probably was a double submission and everything might have gone ok. The user would need to go in and check the data to make sure.
In pseudo-perl-code:

sub get_lock {
    my ($objecttype, $objectid) = @_;
    my $r;
    my ($sec, $min, $hr, $md, $mon, $yr, $wday, $yday, $isdst) = gmtime(time);
    my $lockfile = sprintf("%s/%04d%02d%02d%02d%02d%02d-%s",
                           $objecttype, $yr + 1900, $mon + 1, $md,
                           $hr, $min, $sec, $objectid);
    for (my $n = 0; $n < 1 && !$r; $n++) {
        $r = link("$dir/$nullfile", "$dir/$lockfile-$n.lock");
    }
    return $r;
}

So, for example, if I am trying to modify an entry for a test organization in our directory service, the lock is "/var/md/dsa/shadow/www-ldif-log/roles and organizations/20010107175816-luggage-org-0.lock" given

$dir = "/var/md/dsa/shadow/www-ldif-log";
$objecttype = "roles and organizations";
$objectid = "luggage-org";

This is a specific example, but I'm sure other ways can have the same result -- basically serializing write access to individual objects, in this case in our directory service. Then double submissions don't hurt anything. Regarding the desire to not add code - never let down your guard when you are designing and programming. Paranoid people should be inherently more secure. +- James Smith - [EMAIL PROTECTED] | http://www.jamesmith.com/ [EMAIL PROTECTED] | http://sourcegarden.org/ [EMAIL PROTECTED] | http://cis.tamu.edu/systems/opensystems/ +--
Re: getting rid of multiple identical http requests (bad users double-clicking)
James G Smith [EMAIL PROTECTED] wrote: Stas Bekman [EMAIL PROTECTED] wrote: On Fri, 5 Jan 2001, Gunther Birznieks wrote: Sorry if this solution has been mentioned before (i didn't read the earlier parts of this thread), and I know it's not as perfect as a server-side solution... But I've also seen a lot of people use javascript to accomplish the same thing as a quick fix. Few browsers don't support javascript. Of the small number that don't, the Venn diagram overlap of browsers that don't do javascript and users with an itchy trigger finger is very small. The advantage is that it's faster than munging your own server-side code with extra logic to prevent double posting. Nothing stops users from saving the form and resubmitting it without the JS code. This may reduce the number of attempts, but it's a partial solution and won't stop determined users. Nothing dependent on the client can be considered a fail-safe solution. I encountered this problem with some PHP pages, but the idea is the same regardless of the language. Not all pages have problems with double submissions. For example, a page that provides read-only access to data can usually be retrieved multiple times without damaging the data. It's submitting changes to data that can become the problem. I ended up locking on some identifying characteristic of the object whose data is being modified. If I can't get the lock, I send back a page to the user explaining that there probably was a double submission and everything might have gone ok. The user would need to go in and check the data to make sure.
In pseudo-perl-code:

sub get_lock {
    my ($objecttype, $objectid) = @_;
    my $r;
    my ($sec, $min, $hr, $md, $mon, $yr, $wday, $yday, $isdst) = gmtime(time);
    my $lockfile = sprintf("%s/%04d%02d%02d%02d%02d%02d-%s",
                           $objecttype, $yr + 1900, $mon + 1, $md,
                           $hr, $min, $sec, $objectid);
    for (my $n = 0; $n < 1 && !$r; $n++) {
        $r = link("$dir/$nullfile", "$dir/$lockfile-$n.lock");
    }
    return $r;
}

So, for example, if I am trying to modify an entry for a test organization in our directory service, the lock is "/var/md/dsa/shadow/www-ldif-log/roles and organizations/20010107175816-luggage-org-0.lock" given

$dir = "/var/md/dsa/shadow/www-ldif-log";
$objecttype = "roles and organizations";
$objectid = "luggage-org";

I realized shortly after I sent this that I made a mistake... The above code gives me a good filename for creating an LDIF to feed to ldapmodify. To actually lock on an object, the code should be

sub get_lock {
    my ($objecttype, $objectid) = @_;
    my $lockfile = "$objecttype/$objectid.lock";
    return link("$dir/$nullfile", "$dir/$lockfile");
}

The resulting lockfile is "/var/md/dsa/shadow/www-ldif-log/roles and organizations/luggage-org.lock" +- James Smith - [EMAIL PROTECTED] | http://www.jamesmith.com/ [EMAIL PROTECTED] | http://sourcegarden.org/ [EMAIL PROTECTED] | http://cis.tamu.edu/systems/opensystems/ +--
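The corrected lock relies on link() being atomic. Here is a self-contained sketch of the same idea, using a temporary directory in place of the real /var/md/dsa/shadow tree:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempdir);

my $dir      = tempdir(CLEANUP => 1);  # stand-in for the real lock tree
my $nullfile = 'null';
open my $fh, '>', "$dir/$nullfile" or die "create: $!";
close $fh;

# link() is atomic, so only one request can create a given lock name;
# a second attempt fails with EEXIST and is treated as a duplicate.
sub get_lock {
    my ($objecttype, $objectid) = @_;
    mkdir "$dir/$objecttype";          # may already exist; that's fine
    return link("$dir/$nullfile", "$dir/$objecttype/$objectid.lock") ? 1 : 0;
}

sub release_lock {
    my ($objecttype, $objectid) = @_;
    unlink "$dir/$objecttype/$objectid.lock";
}

print get_lock('roles and organizations', 'luggage-org')
    ? "locked\n" : "duplicate submission\n";
```

The release_lock helper is an addition not shown in the original post; without some cleanup (or a stale-lock sweep), a crashed request would leave the object locked forever.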
Re: perl calendar application
Blue Lang [EMAIL PROTECTED] wrote: On Sat, 6 Jan 2001 [EMAIL PROTECTED] wrote: On Fri, 5 Jan 2001, Jim Serio wrote: Why not just write one to suit your needs? If you want one I'd really like to hack on a freeware version, but it'd be nice to start with one that at least had some decent scheduling features so I could use Eh, I'm prepared to take my lynching, but I'd just like to remind everyone that there's nothing at all wrong with using PHP for things like this. You'll never be a worse person for learning something new, and the overhead required to manage a php+perl enabled apache is only minimally more than managing one or the other. IMHO, it's just lame to rewrite something for which there exist dozens of good apps just because of the language in which it is written. You might as well be arguing about GPL/BSD/Artistic at that point.

I have to agree. At Texas A&M, we just went production with a combination of TWIG (in php), custom php scripts to handle directory service tasks (LDAP), php scripts creating a CGI environment for some Perl scripts (Apache is 32-bit on Irix, oracle is 64-bit...), and a smattering of tcl (mail store management), sh (kerberos), and Perl (PH management) scripts to help out when php couldn't quite do it. My rule of thumb is to use whichever language makes the task easiest. Most languages can work together. +- James Smith - [EMAIL PROTECTED] | http://www.jamesmith.com/ [EMAIL PROTECTED] | http://sourcegarden.org/ [EMAIL PROTECTED] | http://cis.tamu.edu/systems/opensystems/ +--
Re: Help with Limit in Perl
"Scott Alexander" [EMAIL PROTECTED] wrote: My final question is: Is it possible to have the name of the REMOTE_USER in the httpd.conf file? Short answer: no. Long answer: Since the httpd.conf file is read only at startup (or other well defined times, such as a HUP signal), which REMOTE_USER would you have in mind? Mod_perl feeds any information from the perl sections into the httpd configuration engine at the time the conf file is read. Any variable substitutions get made at that time. +- James Smith - [EMAIL PROTECTED] | http://www.jamesmith.com/ [EMAIL PROTECTED] | http://sourcegarden.org/ [EMAIL PROTECTED] | http://cis.tamu.edu/systems/opensystems/ +--
Re: Help with Limit in Perl
"Scott Alexander" [EMAIL PROTECTED] wrote: On 1 Jan 2001, at 11:40, James G Smith wrote: Thanks for the answer. I'm no mod_perl or apache guru but I had a feeling it didn't make sense. I'm trying to use a Apache::AuthAnyModDav to authenticate for mod_dav. I already have a Apache::AuthAny for normal authentication which works fine. I'd like to have it so a user can use Web folders in Win 98 and connect to a directory on the web server. But only their own directory. I can get it to work via authentication but that is any user can access any directory in the tree. I want that they can only access their own directory. At the moment users can upload, download to their own directory via a html interface but to add WebDav functionality would be great! Ahh... Now we get into the interesting bits of mod_perl. There is a way to change the DocumentRoot on a per-request basis. Just not a way to do this in the config files. Use the function Apache::document_root. For example: my $old_docroot = $r - document_root($new_docroot); $r - register_cleanup(sub { shift - document_root($old_docroot) } ); The new document root will be available as (the scalar) $Apache::Server::DocumentRoot . This is based on information from several months ago. I haven't actually tested it, but I think Doug said it should work back then (v. 1.23). You might be able to put this in you authentication handler. If you get something working with this, let us know. +- James Smith - [EMAIL PROTECTED] | http://www.jamesmith.com/ [EMAIL PROTECTED] | http://sourcegarden.org/ [EMAIL PROTECTED] | http://cis.tamu.edu/systems/opensystems/ +--
Re: Help with Perl in httpd.conf
"Scott Alexander" [EMAIL PROTECTED] wrote: Is the syntax still wrong or does anyone have any ideas about this? I am thinking it is. Try the following correction. Perl #!perl $Location {"/users/supervisor"} = { DAV = 'On', AllowOverride = 'None', Options = 'None', AuthName = '"Test"', AuthType = 'Basic', Auth_MySQL_Password_Table = 'users', Auth_MySQL_Username_Field = 'user', Auth_MySQL_Password_Field = 'passwd', Auth_MySQL_Encryption_Types = 'Plaintext', Auth_MYSQL = 'on', Limit = { 'GET POST' = { require = 'user supervisor', }, }, } ; __END__ /Perl Limit = { METHODS = `GET POST`, require = `user supervisor`, }, Also, you have backticks (`) in the above instead of single- quotes ('). This will result in Perl trying to execute what is enclosed. Probably not what you wanted. +- James Smith - [EMAIL PROTECTED] | http://www.jamesmith.com/ [EMAIL PROTECTED] | http://sourcegarden.org/ [EMAIL PROTECTED] | http://cis.tamu.edu/systems/opensystems/ +--
Re: Where can I find....
"Michael" [EMAIL PROTECTED] wrote: Where can I find documentation on the how to use all the values that appear in Apache::Constants The obviously do something, but what??? I figured out what OK, DECLINED do by reading the source, but what about all the rest. Are they described somewhere?? Andrew Ford's reference card lists them (not sure of the URL). As far as I can tell, they are fairly straightforward if you keep in mind that they are copies of constants used in Apache and HTTP. For example, HTTP_* constants are typically status codes in the protocol. M_* constants are HTTP methods. I don't know of anyplace that treats the constants specifically. They are usually incidental to doing something. +- James Smith - [EMAIL PROTECTED] | http://www.jamesmith.com/ [EMAIL PROTECTED] | http://sourcegarden.org/ [EMAIL PROTECTED] | http://cis.tamu.edu/systems/opensystems/ +--