Re: RFC: mod_perl 2.0 documentation project
On Tue, 7 Aug 2001, Barrie Slaymaker wrote:

> On Tue, Aug 07, 2001 at 10:16:26PM +0800, Stas Bekman wrote:
> > [Barrie, I hope you don't mind that I put it on the list, the more people
> > contribute the better the outcome :)]
>
> Not at all, I just noticed that others seem to have replied directly and
> followed suit.

yup, whoever wants to dive in, please keep the posts on the list. Don't be shy.

> > Something like this: http://perl.apache.org/guide/troubleshooting.html ?
>
> That's what made me think of it. That deals with a lot of the common
> ones; I was thinking of something more open-ended that would grow to
> contain more symptoms and (where useful) more details on how to fix
> things. For instance, including the failures found in a lot of
> systems would mean covering various templating system failures.

When creating this page, I've followed a simple rule: when somebody asks a new question and gets an answer, I ignore it, unless the answer is really interesting and covers more than the question's scope. When the same question gets repeated, I add the Q&A to the troubleshooting chapter. That way I've avoided having this page bloat. I believe it's probably not a good idea to collect all Q&As, because the content will become very hard to maintain and even harder to navigate.

> One of the great things about Perl and mod_perl is the diversity, but
> it's also daunting for newcomers to identify all the pieces and what
> might be wrong, especially in a troubleshooting situation.

Not if they use the search interface: you type in your symptom and you come up with hits. Let's say you see the error "Apache.pm failed to load!". You go to the search input, type in the symptom, and the first hit that comes up is:

http://thingy.kcilink.com/modperlguide/troubleshooting/_Apache_pm_failed_to_load_.html

Maybe the problem is in educating users how to use the data, and not in how to store it. I've never claimed that the current guide is easy to navigate, unless you read it all. But I think the search engine on the split version of the guide does a great job. I use it quite a lot myself.

> > The problem with a DB is maintenance. When it's flat, people read mostly
> > sequentially, point out and fix problems. When it's in the DB, most of the
> > items would hardly come up and it's easier to have stale data in there.
> > Especially when your knowledge base is getting big.
> >
> > It's also hard to follow the evolution of the DB, since you don't see the
> > changes like you do with flat files, as they change with CVS commits.
>
> New and significantly revised entries could be sent to the list, both
> to be proofed by area experts and to act as sort of a "FAQ a day"
> update to people learning about different sections. Don't want to
> drive list traffic up, but hey. Hmmm, wonder if a mod_perl FAQ a day
> list would make sense. Kinda your one-a-day Vitamin MP.

This is a hard issue.

A. If you create a new list, most people won't get on it, because they don't want extra traffic. I assume that this list is not only for sending the Q&A items, but for discussing them as well.

B. If you do this on the main list, people will unsubscribe because of the heavy traffic.

I guess we could have a modperl-daily-faq list a la [EMAIL PROTECTED] (which is mainly dead). But this doesn't solve the problem of where the FAQ items are to be discussed.

Another problem is how to avoid overlapping with the book/guide-like materials. In http://perl.apache.org/guide/troubleshooting.html I've solved this mostly by listing the symptoms only and linking to the portions of the guide for the explanations. Having the knowledge base disconnected from the main material will make this duplication removal and maintenance overhead very hard.

> > I think I need more convincing points to decide to make it as a DB.
>
> I think the biggest points are to make it easy to submit "articles"
> and encourage near real-time peer review, along with structured
> searching.

What's the problem with submitting articles right now? You want something to be added to the guide: submit it to the list, get peer review, get someone to store the cleaned version in the guide, then update the DB.

I still prefer to have it flat, while easily convertible to any flexible format imaginable. The idea of throwing many items into a DB simply doesn't work, because many records in this database will want to preserve some order between them. For example, look at the most disconnected items in the troubleshooting chapter. I've sorted the items into 'configuration', 'build', 'startup', 'run time'... categories, so you can easily narrow your search by jumping to the right section. I don't say you cannot do this with the DB approach, but then it gets complicated as you lose some of the flexibility.

In any case, as long as we build the knowledge base in a way that can be easily converted from one format to another without doing any manual adjustments, we can fine tune things as we go. I'd hate to find things n
Re: knowledge base - was Re: RFC: mod_perl 2.0 documentation project
On Tue, 7 Aug 2001, Jim Smith wrote:

> On Tue, Aug 07, 2001 at 10:16:26PM +0800, Stas Bekman wrote:
> > > Just some pseudo-random ideation boiling down to "let's use mod_perl
> > > to build a knowledge base" both to demonstrate its power and to serve
> > > the community.
> >
> > I like the idea!!!
>
> If we can come up with a proposal for a generic knowledge base product that
> would be useful in an IT environment, I can probably devote some of my work
> to it -- this is something I've been wanting to put together at work for
> customers and our help desk people but haven't had time yet to get it all
> together. I'll have more time in September after the students return.

go ahead and be the first to propose, Jim :)

_____________________________________________________________________
Stas Bekman              JAm_pH     --   Just Another mod_perl Hacker
http://stason.org/       mod_perl Guide  http://perl.apache.org/guide
mailto:[EMAIL PROTECTED] http://apachetoday.com http://eXtropia.com/
http://singlesheaven.com http://perl.apache.org  http://perlmonth.com/
San Diego reminiscences
As Geoff reported in his weekly report last week, the TPC5 conference was a great success. I dare to say that this was the best conference I've gone to in the last 3 years. I hope next year's conference will be of the same quality. So congratulations to the ORA folks who produced the conference and to all the speakers who provided the content (there were 275! speakers at the conference). Special thanks go to Nat, the conference king.

The mod_perl track was huge this year (I think there were about 27 hours of mod_perl content), again thanks to Nat! I hope it stays this way next year. If you went to the conference, enjoyed it and didn't thank Nat, do it now.

Most of the conference materials are available from:
ftp://ftp.ora.com/pub/conference/os2001
(again, courtesy of ORA)

Nat, I think that having the conference center isolated from the rest of the world was originally considered a big drawback, but during the conference we realized that it was a huge plus! Since people couldn't escape from the hotel, it was so easy to meet all the people you'd otherwise never get to meet if they could escape the hotel.

All those who didn't make it to the conference this time, consider coming next year. This is the sort of event that changes your perception of working in the open source community; once you meet the people you talk with on the list, the whole experience is so different...

And since I now own a cool Sony DSC-S75 camera, here are some pics from the conference and San Diego. Notice that the location is temporary, so if you want the pictures grab them now and please don't link to them.

http://stason.org/tmp/San_Diego_Jul_2001/

Enjoy!

_____________________________________________________________________
Stas Bekman              JAm_pH     --   Just Another mod_perl Hacker
http://stason.org/       mod_perl Guide  http://perl.apache.org/guide
mailto:[EMAIL PROTECTED] http://apachetoday.com http://eXtropia.com/
http://singlesheaven.com http://perl.apache.org  http://perlmonth.com/
Apache::Upload bug?
Ok, for the last couple of days I've been tracking down a problem I've been having with Apache::Upload and Image::Magick. The code consists mainly of:

    @upload = $r->upload;
    foreach my $file (@upload) {
        my $fh = $file->fh;
        # There's some .ext checking here to get $type and some renaming.
        my $i = Image::Magick->new(magick => $type);
        $err = $i->ReadImage(file => $$fh);
        $i->Set(magick => $type);
        $err = $i->WriteImage(filename => $filename);
        warn "$err" if "$err";
        undef $i;
    }

Now the main problem was that when the request was over, the temp file Apache::Upload creates ("apreq??") in the /tmp directory was getting unlink'd, but there was still an open filehandle to it. This meant that the space the image was taking up in /tmp was not being freed, yet the file wasn't showing up in the directory. And each image upload after this was creating more stale filehandles and keeping more and more drive space occupied. If Apache was restarted, the filehandles were closed and the space was freed. I assume when a child dies it also frees that space, but I'm not sure about that one.

After much tracing I found that the problem occurs in the line "my $fh = $file->fh;", where Apache::Upload dup()'s the filehandle and passes the duplicate to the Perl script. Then when the program exits, the Perl script still has an open filehandle to that file. I fixed my problem by adding a "close $$fh;" to my program, which closed the duplicated filehandle.

Now I'm not sure if this is a bug or if it's supposed to be like that, but the documentation makes it sound like it gives you the actual filehandle of the tempfile, and not a copy. I just assumed that it would be closed by Apache::Upload when the request was finished.

-Jeff Hartmann
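P.S. For anyone else who hits this, here is a minimal sketch of the loop body with the workaround in place ($type and $filename come from the .ext checking/renaming that I've elided, so treat this as an outline rather than my exact code):

    my $fh  = $file->fh;     # Apache::Upload dup()'s the spool file handle
    my $i   = Image::Magick->new(magick => $type);
    my $err = $i->ReadImage(file => $$fh);
    $i->Set(magick => $type);
    $err = $i->WriteImage(filename => $filename);
    warn "$err" if "$err";
    undef $i;
    close $$fh;              # close the dup()'ed handle so the unlinked /tmp
                             # spool file's space is actually reclaimed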
Re: Random requests in log file
At 10:24 AM 08/07/01 -0700, Nick Tonkin wrote:
>> > /r/dr
>> > /r/g3
>> > /r/sb
>> www.yahoo.com/r/dr
>> www.yahoo.com/r/sw
>
>Yes, and I have seen plenty of cases where broken web servers or web sites
>or web browsers screw up HREFs, by prepending an incorrect root uri to a
>relative link.
>
>That would be my guess, broken URLs somewhere out in space.

But why the continued hits for the wrong pages? It's like someone spidered an entire site, and then has gone back and is now testing all those HREFs against our server.

Currently mod_perl is generating a 404 page. When I block I return FORBIDDEN, but that doesn't seem to stop the requests either. They don't seem to get the message... And isn't it correct that if they request again before CLOSE_WAIT is up I'll need to spawn more servers?

If they are not sending requests in parallel, I wonder if it would be easier on my resources to really slow down responses, as long as I don't tie up too many of my processes. If they ignore FORBIDDEN maybe they will see the timeouts. Time to look at the Throttle modules, I suppose.

Bill Moseley mailto:[EMAIL PROTECTED]
RE: 2 problems with mod_perl/Apache::DBI
> startup.pl cannot be run from the command line when it
> contains apache server specific modules.

But you can put those (Apache specific) modules in your httpd.conf instead, as

    PerlModule Apache::DBI Apache::Status

and avoid compilation warnings in startup.pl. But you should clearly note this, both in startup.pl and httpd.conf, as explanatory comments. Otherwise, you *will* forget that you did this... :-)

> > However, when run under Apache
> >
> >   PerlRequire /usr/local/etc/apache/startup.pl
> >
> > [Mon Aug 6 17:33:09 2001] [error] Can't load
> > '/usr/local/lib/perl5/site_perl/5.6.1/i386-freebsd/auto/DBI/DBI.so'
> > for module DBI:
> > /usr/local/lib/perl5/site_perl/5.6.1/i386-freebsd/auto/DBI/DBI.so:
> > Undefined symbol "PL_dowarn" at
> >
> > Not sure what's up.

As far as the DBI error goes, is there a possibility that you are NOT using the same build of perl as was compiled into Apache? Try rebuilding mod_perl/apache at the same time.

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;
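P.S. If you'd rather keep everything in startup.pl, another option is to guard the Apache-specific modules so the file still runs from the command line. This is just a sketch (from memory, mod_perl sets $ENV{MOD_PERL} even while processing PerlRequire at server startup, so double-check that on your setup):

    # startup.pl -- also runnable from the command line for a syntax check
    use strict;

    BEGIN {
        if ($ENV{MOD_PERL}) {
            # These only make sense inside Apache. Load Apache::DBI
            # before any code that uses DBI so its connect() caching
            # gets installed.
            require Apache::DBI;
            require Apache::Status;
        }
    }

    use DBI ();

    1;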
Re: Want modperl-friendly web based email & website builder packages
At 4:02 PM +1000 8/7/01, Rod Butcher wrote:
>I run a community ISP on Win32. Can anybody recommend modperl-friendly
>packages to do :-

WebMail.Com (Perl downloadable) works reasonably well. It's Perl CGI, not mod_perl, and the version I have (perhaps it's been updated) doesn't support APOP (it requires a POP mailbox, it's not standalone). I seriously doubt that it has JavaScript protections built in. I use it for personal use (like relatives who need to access their mailboxes remotely).

--
Kee Hinckley - Somewhere.Com, LLC
http://consulting.somewhere.com/

I'm not sure which upsets me more: that people are so unwilling to accept responsibility for their own actions, or that they are so eager to regulate everyone else's.
Re: Random requests in log file
On Tue, 7 Aug 2001, Christof Damian wrote:

> Bill Moseley wrote:
> > Does everyone else see these? What's the deal? Are they really probes or
> > some spider run amok?
> >
> > Right now someone is looking for things like:
> >
> > /r/dr
> > /r/g3
> > /r/sb
> > /r/sw
> > /r/s/2
> > /r/a/booth
> > /r/s/pp
> > /NowPlaying
> > /mymovies/list
> > /terms
> > /ootw/1999/oarch99_index.html
>
> the first couple look like yahoo
>
> www.yahoo.com/r/dr
> www.yahoo.com/r/sw

Yes, and I have seen plenty of cases where broken web servers or web sites or web browsers screw up HREFs, by prepending an incorrect root uri to a relative link.

That would be my guess, broken URLs somewhere out in space.

- nick
Re: RFC: mod_perl 2.0 documentation project
On Tue, Aug 07, 2001 at 10:16:26PM +0800, Stas Bekman wrote:
> [Barrie, I hope you don't mind that I put it on the list, the more people
> contribute the better the outcome :)]

Not at all, I just noticed that others seem to have replied directly and followed suit.

> > Something like this: http://perl.apache.org/guide/troubleshooting.html ?

That's what made me think of it. That deals with a lot of the common ones; I was thinking of something more open-ended that would grow to contain more symptoms and (where useful) more details on how to fix things. For instance, including the failures found in a lot of systems would mean covering various templating system failures.

One of the great things about Perl and mod_perl is the diversity, but it's also daunting for newcomers to identify all the pieces and what might be wrong, especially in a troubleshooting situation.

> The problem with a DB is maintenance. When it's flat, people read mostly
> sequentially, point out and fix problems. When it's in the DB, most of the
> items would hardly come up and it's easier to have stale data in there.
> Especially when your knowledge base is getting big.
>
> It's also hard to follow the evolution of the DB, since you don't see the
> changes like you do with flat files, as they change with CVS commits.

New and significantly revised entries could be sent to the list, both to be proofed by area experts and to act as sort of a "FAQ a day" update to people learning about different sections. Don't want to drive list traffic up, but hey. Hmmm, wonder if a mod_perl FAQ a day list would make sense. Kinda your one-a-day Vitamin MP.

> I think I need more convincing points to decide to make it as a DB.

I think the biggest points are to make it easy to submit "articles" and encourage near real-time peer review, along with structured searching.

> > Hmmm, since you've already pointed out that printability is not the
> > primary goal, I wonder if we should just take AxKit and its nascent CMS
> > and start building a knowledge base? The book format is nice for
> > getting spun up to speed, but the knowledge base interface is what might
> > actually cut down on list traffic.
>
> Well, only if you don't have to work with XML directly. I sure dislike
> maintaining simple documents in XML. Since you have to use some web
> interface to edit the documents, you don't have the power of editors like
> vi/emacs, which makes the work much harder.

I don't care about the underlying format, make it a POD variant if you like.

> > I could even see a search interface on an email address, so when you
> > see a FAQ pop up on-list, a simple forward to [EMAIL PROTECTED] or
> > something would do a search and send you back a message suitable for
> > forwarding to the original poster or something.
>
> This would be a very sensitive change. You don't want AI replies ending up
> on the list, since they won't be correct all the time.

I would suspect that the reply might be a URL for the user to follow. Probably 20% of all questions on-list are answered by you or Perrin or others zinging a URL to the relevant section of the guide. Can that part of you be augmented by an AI?

> > Kinda like the IRC bot purl.
>
> Heh, I'd love to play with infobot (== purl) in the real world. I think
> Kevin said that it's ready for working outside IRC.

:-).

> Sure, there is no limit on how the third book should look. As long as it's
> manageable and useful for users. The first two books will play it strict.
> The third one is very flexible.

Agreed.

> Yup, exactly. Seems very exciting if we actually get to implement it. Then
> world domination will finally be the next chapter. :)

The winners get to write the history books ;-).

- Barrie
Re: Random requests in log file
Bill Moseley wrote:

> Does everyone else see these? What's the deal? Are they really probes or
> some spider run amok?
>
> Right now someone is looking for things like:
>
> /r/dr
> /r/g3
> /r/sb
> /r/sw
> /r/s/2
> /r/a/booth
> /r/s/pp
> /NowPlaying
> /mymovies/list
> /terms
> /ootw/1999/oarch99_index.html

the first couple look like yahoo

www.yahoo.com/r/dr
www.yahoo.com/r/sw

--
Christof Damian
Technical Director, guideguide ltd.
Random requests in log file
Hi,

We always see the normal probes for known insecure CGI scripts, and spiders keep our logs full. But lately there have been a huge number of requests for resources that are not on our server (even not counting Code Red II). It looks like someone is spidering another server, yet sending the requests to our machine -- the requests don't really look like probes for insecure scripts, rather just for files that are not and never have been on this server (or any related virtual hosts).

Does everyone else see these? What's the deal? Are they really probes or some spider run amok?

Right now someone is looking for things like:

/r/dr
/r/g3
/r/sb
/r/sw
/r/s/2
/r/a/booth
/r/s/pp
/NowPlaying
/mymovies/list
/terms
/ootw/1999/oarch99_index.html

I currently have a killfile of IP addresses and a PerlInitHandler that blocks requests, but it would be nice to automate that process. Are there any current modules that do this?

Another thing I find odd: this server has three virtual hosts. In the second and third VHs' logs I find requests for files found on the first, default, VH. I've logged the Host: header and indeed it was there. Odd.

Bill Moseley mailto:[EMAIL PROTECTED]
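P.S. For context, the handler I have now is essentially along these lines -- stripped down, and the package name and killfile path here are made up for the example:

    package My::BlockIP;
    #
    # in httpd.conf:  PerlInitHandler My::BlockIP
    #
    use strict;
    use Apache::Constants qw(OK FORBIDDEN);

    my %bad_ip;

    sub handler {
        my $r = shift;

        # load the one-IP-per-line killfile on first use in this child
        unless (%bad_ip) {
            if (open my $fh, "/usr/local/apache/conf/killfile") {
                chomp(my @ips = <$fh>);
                @bad_ip{@ips} = ();
                close $fh;
            }
        }

        return exists $bad_ip{ $r->connection->remote_ip }
            ? FORBIDDEN
            : OK;
    }

    1;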
Re: RFC: mod_perl 2.0 documentation project
[Barrie, I hope you don't mind that I put it on the list, the more people contribute the better the outcome :)]

> Hi Stas, sorry it took so long to get back to this :-/.

it's not late at all, really ;0) we have years to come to work on this project.

> Some minor feedback. I could see an additional "book", a
> troubleshooting reference (as opposed to a guide, like part VI). Many
> service organizations have large manuals that are essentially a
> compendium of failure modes and instructions for how to troubleshoot
> each one. Seems like a searchable database of error messages, etc.
> might be a boon to the community. Given some keywords or a message, you
> could find (in one query) articles in the db *and* mailing list messages
> related to them. I could see the usual knowledge-base-type "rank this
> article" feature.

Something like this: http://perl.apache.org/guide/troubleshooting.html ?

The problem with a DB is maintenance. When it's flat, people read mostly sequentially, point out and fix problems. When it's in the DB, most of the items would hardly come up and it's easier to have stale data in there. Especially when your knowledge base is getting big.

It's also hard to follow the evolution of the DB, since you don't see the changes like you do with flat files, as they change with CVS commits.

I think I need more convincing points to decide to make it as a DB.

> Hmmm, since you've already pointed out that printability is not the
> primary goal, I wonder if we should just take AxKit and its nascent CMS
> and start building a knowledge base? The book format is nice for
> getting spun up to speed, but the knowledge base interface is what might
> actually cut down on list traffic.

Well, only if you don't have to work with XML directly. I sure dislike maintaining simple documents in XML. Since you have to use some web interface to edit the documents, you don't have the power of editors like vi/emacs, which makes the work much harder.

This doesn't mean that you cannot split the flat files into items and have a parallel interface for search. In fact Matt has already done this. Since http://perl.apache.org/guide/troubleshooting.html is all simple items, it's very easy to itemize it.

Also don't forget the split version of the guide, used by the search engines:
http://perl.apache.org/guide/index.html#search

> I could even see a search interface on an email address, so when you
> see a FAQ pop up on-list, a simple forward to [EMAIL PROTECTED] or
> something would do a search and send you back a message suitable for
> forwarding to the original poster or something.

This would be a very sensitive change. You don't want AI replies ending up on the list, since they won't be correct all the time. But if you only reply to users, humans will still reply, so what's the point :)

Maybe having posts like:

  WARNING!!! THIS IS AN AUTOMATIC GUESS, IT CAN BE WRONG!

But definitely an idea to consider and explore.

> Kinda like the IRC bot purl.

Heh, I'd love to play with infobot (== purl) in the real world. I think Kevin said that it's ready for working outside IRC. One of the cool things I thought about is replacing Doug's presentation 'command server' protocol with 'infobot' loaded with mod_perl factoids. This will make the presentation even funnier, and we can actually put the bot online for others to re-use!

> The other "book" would really be a set of how-tos for getting various
> systems (templating engines, CMSs) up and running. That's probably a
> lot like your C.IV, but the "howto" format has become a meme with a lot
> of understanding in the community.

Sure, there is no limit on how the third book should look. As long as it's manageable and useful for users. The first two books will play it strict. The third one is very flexible.

> I guess I really see this as more of a "mod_perl guide plus knowledge
> base" implementation than a "three volume book set", with the
> pumpkings being "series editors" for knowledge base articles, and the
> pumpkins could easily drag in "tech editors" from the appropriate
> systems if need be.

Yup, exactly. Seems very exciting if we actually get to implement it. Then world domination will finally be the next chapter. :)

> Just some pseudo-random ideation boiling down to "let's use mod_perl
> to build a knowledge base" both to demonstrate its power and to serve
> the community.

I like the idea!!!

_____________________________________________________________________
Stas Bekman              JAm_pH     --   Just Another mod_perl Hacker
http://stason.org/       mod_perl Guide  http://perl.apache.org/guide
mailto:[EMAIL PROTECTED] http://apachetoday.com http://eXtropia.com/
http://singlesheaven.com http://perl.apache.org  http://perlmonth.com/
virus warning & apology
Folks, please delete any email with an attachment you may have just received from me; somehow my Outlook Express got a virus that makes it send crap to everybody in my Address Book.

apologies,
Rod
Re: compiling troubles on Solaris 8
>> As an aside, Solaris 8 comes with prebuilt versions of Apache
>> and mod_perl,
>
>does anyone familiar with HP-UX, AIX, or IRIX know whether this is true of
>these platforms as well?
>Whether they are DSO mod_perl or not would also be helpful.

HP doesn't consider Perl a supportable piece of software, nor Apache. Needless to say, neither comes prebuilt. The compiler that comes with the OS isn't ANSI compliant and is only good for rebuilding the kernel (but hey, you can buy HP's ANSI C compiler for only a few thousand dollars!). We ended up building a GNU development environment in order to build mod_perl. We could not get it to work as a DSO :(. Once we got everything up and running, it's a decent enough platform.

Regards,
Dave Homsher
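P.S. For anyone else stuck on HP-UX, the static (non-DSO) build that worked for us was basically the standard one -- the version numbers and paths below are from memory, so treat it as a rough sketch only:

    # gcc and GNU make built first, then:
    cd mod_perl-1.26
    perl Makefile.PL \
        APACHE_SRC=../apache_1.3.20/src \
        DO_HTTPD=1 USE_APACI=1 EVERYTHING=1
    make
    make test
    make install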