Re: Memory leak hell...
On Mon, 11 Sep 2000, Stas Bekman wrote:

> I was thinking about going through the symbol table and dumping all the
> variables, then running diff between the dumps of two requests. Be careful
> though that Devel::Peek doesn't show a complete dump of the whole
> structure, and I couldn't find the interface for changing the depth of
> data-structure traversal. I think Doug has patched Apache::Peek to make
> this configurable.
>
> Do you think it still won't work?

Yes. Because this only seems to give you access to package variables, not trapped lexicals or anything in closures.

--
Fastnet Software Ltd. High Performance Web Specialists
Providing mod_perl, XML, Sybase and Oracle solutions
Email for training and consultancy availability.
http://sergeant.org | AxKit: http://axkit.org
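The symbol-table walk under discussion can be sketched in pure Perl. This is an illustration, not Stas's actual code: it uses Data::Dumper (whose output is plain text and easy to diff) instead of Devel::Peek, and the `Demo` package and its variables are invented for the example. As noted above, a walk like this only reaches package globals — lexicals and closure variables never appear in the stash.

```perl
#!/usr/bin/perl -w
# Sketch of the "dump the symbol table, diff two requests" idea.
# Data::Dumper stands in for Devel::Peek/Apache::Peek because its
# output is diffable text; the Demo package is purely illustrative.
use strict;
use Data::Dumper;

sub dump_package {
    my $pkg = shift;
    no strict 'refs';
    local $Data::Dumper::Indent   = 1;
    local $Data::Dumper::Sortkeys = 1;
    my %dump;
    for my $name (sort keys %{"${pkg}::"}) {
        my $full = "${pkg}::${name}";
        $dump{"\$$full"} = Dumper(${$full})  if defined ${$full};
        $dump{"\@$full"} = Dumper(\@{$full}) if @{$full};
        $dump{"\%$full"} = Dumper(\%{$full}) if %{$full};
    }
    return \%dump;
}

# Take a snapshot after hit N and hit N+1, then compare.
package Demo;
our $counter = 1;
our @items   = (1, 2);
package main;

my $before = dump_package('Demo');
$Demo::counter++;                 # simulate a variable that grew between hits
push @Demo::items, 3;
my $after  = dump_package('Demo');

for my $var (sort keys %$after) {
    print "changed: $var\n"
        if !exists $before->{$var} or $before->{$var} ne $after->{$var};
}
```

Running this prints `changed: $Demo::counter` and `changed: @Demo::items` — exactly the kind of per-hit diff proposed above, and exactly silent about anything held only in a lexical.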
Re: [Mason] Problem: no-Content.
HTML::Mason is your PerlHandler in the /mason request space, so what's supposed to handle a root object "/" request? Do a simple setup for one virtual host and make sure your choices of DocumentRoot, Alias settings and comp_root settings agree before broadening the solution for multiple virtual hosts. Also, back down to Mason 0.87 until 0.89 is released.

-Ian

> Hello all,
>
> -- The Problem --
>
> When I try to retrieve http://www.clickly.com/index.html
> (Test-Site: not the real clickly.com) I get a blank return, and I
> mean a real blank content return (tried it with telnet to port 80 and
> the server only sends back the headers of the web-page?)
>
> Does anybody know what the problem is? I've tried all sorts of
> things but nothing worked.
>
> Thanks in advance,
> Guido Moonen
>
> -- The Stuff I have --
>
> * Solaris 2.6 on Sun UltraSPARC.
> * perl, v5.6.0 built for sun4-solaris
> * Server version: Apache/1.3.12 (Unix)
>   Server built: Sep 6 2000 14:51:05
> * mod_perl-1.24
> * Mason v. 0.88
>
> -- Handler.PL --
>
> << SNIP >>
> my (%parsers, %interp, %ah);
> foreach my $site (qw(www modified management)) {
>     $parsers{$site} = new HTML::Mason::Parser(
>         allow_globals => [qw($dbh %session)],
>     );
>     $interp{$site} = new HTML::Mason::Interp(
>         parser            => $parsers{$site},
>         comp_root         => "/clickly/html/$site/",
>         data_dir          => "/clickly/masonhq/$site/",
>         system_log_events => "ALL",
>     );
>     $ah{$site} = new HTML::Mason::ApacheHandler(interp => $interp{$site});
>
>     chown(scalar(getpwnam "nobody"), scalar(getgrnam "nobody"),
>           $interp{$site}->files_written);
> }
>
> sub handler {
>     my ($r) = @_;
>     my $site = $r->dir_config('site');
>     return -1 if $r->content_type && $r->content_type !~ m|^text/|i;
>     my $status = $ah{$site}->handle_request($r);
>     return $status;
> }
> << SNIP >>
>
> -- httpd.conf --
>
> << SNIP >>
> # www.clickly.com (Default)
>
>   ServerAdmin [EMAIL PROTECTED]
>   DocumentRoot /clickly/html/www
>   ServerName www.clickly.com
>   PerlSetVar site 'www'
>
>   Options Indexes FollowSymLinks MultiViews ExecCGI
>   AllowOverride None
>   Order allow,deny
>   Allow from all
>
>   Alias /mason /clickly/html/www
>
>   SetHandler perl-script
>   PerlHandler HTML::Mason
>
> << SNIP >>
>
> ==
> Guido Moonen
> Clickly.com
> Van Diemenstraat 206
> 1013 CP Amsterdam
> THE NETHERLANDS
>
> Mob: +31 6 26912345
> Tel: +31 20 6934083
> Fax: +31 20 6934866
> E-mail: [EMAIL PROTECTED]
> http://www.clickly.com
>
> Get Your Software Clickly!
> ==
>
> ___
> Mason-users mailing list
> [EMAIL PROTECTED]
> http://lists.sourceforge.net/mailman/listinfo/mason-users

--
Salon Internet http://www.salon.com/
Manager, Software and Systems "Livin' La Vida Unix!"
Ian Kallen <[EMAIL PROTECTED]> / AIM: iankallen / Fax: (415) 354-3326
Re: Eval block error trapping bug????
It was the "use CGI::Carp qw(fatalsToBrowser);". Took it out and now all is well. And an eval block will work for my purposes - mostly trapping my own SQL generation errors. I read up on $SIG{__DIE__} in the "Guide". It may become useful for me in the future. Thanks for the quick info.

Chuck

-----Original Message-----
From: Eric L. Brine <[EMAIL PROTECTED]>
To: Chuck Goehring <[EMAIL PROTECTED]>
Cc: mod perl list <[EMAIL PROTECTED]>; [EMAIL PROTECTED] <[EMAIL PROTECTED]>
Date: Friday, September 08, 2000 9:58 PM
Subject: Re: Eval block error trapping bug

>> Under mod_perl, the die() within the eval block causes the
>> program to really die.
>
> Does your program (maybe CGI.pm or something used by CGI.pm?) set
> $SIG{'__DIE__'}? IIRC, a $SIG{'__DIE__'} handler takes precedence over
> eval{}, something many consider to be a bug.
>
> If so, I'd try:
>
>   eval {
>       local $SIG{'__DIE__'};  # undefine $SIG{'__DIE__'} in this block
>       ...normal code...
>   };
>   if ($@) { ...normal code... }
>
> ELB
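The behaviour Eric describes can be shown in a standalone script: a global `$SIG{__DIE__}` handler (such as the one CGI::Carp's fatalsToBrowser installs) fires even for exceptions that `eval {}` goes on to catch, and `local $SIG{__DIE__}` inside the eval suppresses it. The handler here is a minimal stand-in for CGI::Carp's.

```perl
#!/usr/bin/perl -w
# Demonstrates that a global __DIE__ handler runs even inside eval{},
# and that local $SIG{__DIE__} disables it for one block.
use strict;

my @seen;
$SIG{__DIE__} = sub { push @seen, "handler saw: $_[0]" };

eval { die "caught anyway\n" };
# The eval caught the exception, but the global handler still ran:
print scalar(@seen), " handler call(s)\n";   # prints "1 handler call(s)"

eval {
    local $SIG{__DIE__};    # undefine the global handler in this block
    die "quietly caught\n";
};
print scalar(@seen), " handler call(s)\n";   # still "1 handler call(s)"
print "last error: $@";                      # prints "last error: quietly caught"
```

This is why removing fatalsToBrowser (or localising the handler) restores the expected eval semantics.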
Re: [ RFC ] A Session Manager module
Jules Cisek wrote:
>
> Sounds interesting.
>
> Is this module just managing the session ID or also the session data? i.e.
> is the manager capable of storing complex objects (via something like
> Storable or Data::Dumper)? Will you provide hooks for "caching and DB
> abstraction" layers so that the developer can provide the backend
> implementation?

Just managing the session ID between client and server. I prefer simple modules that are small and neat (KISS). Storage of any data is up to you - can I suggest Apache::Session? All I want to do is provide a simple module, in the Unix sense of doing one thing well. I also had the idea of creating a BB (BareBones) version that has few(er) configuration options and is very simple.

Greg

> ~J
Re: Memory leak hell...
On Sun, 10 Sep 2000, Matt Sergeant wrote:

> On Sun, 10 Sep 2000, Stas Bekman wrote:
>
> > On Sun, 10 Sep 2000, Matt Sergeant wrote:
> >
> > > For 2 days solid now I've been trying to track down a very bizarre memory
> > > leak in AxKit.
> > >
> > > I've checked everything I can think of - all circular references are now
> > > gone, all closures clean up carefully after themselves, and I've reduced
> > > the usage of some external modules. But still the processes grow.
> > >
> > > Now to the weird bit. I could track this down if it wasn't for this:
> > >
> > > The memory leak starts after the Nth hit, usually around 35. This is
> > > running under httpd -X.
> > >
> > > So it goes along very happily for 35 hits - memory usage is rock solid
> > > stable. Then after the 35th hit the memory usage starts to grow about 4k
> > > per hit. Obviously that's an impossible rate of growth to sustain for any
> > > amount of time, and soon the server is swamped.
> > >
> > > Can anyone give me _any_ help in figuring out where this might be coming
> > > from? I've tried several things, including adding the ability to display
> > > the memory usage in the debug statements (by getting the VmSize out of
> > > /proc/self/status). This is driving me absolutely mad, as you might
> > > tell. The only thing I can come close to in locating it is some remote
> > > possibility that it is in one particular section of the code, but that
> > > doesn't actually seem to be the case - sometimes the memory increase goes
> > > outside that section of code.
> > >
> > > The modules I'm using are:
> > >
> > > Time::HiRes, Apache::* (core apache stuff only), AxKit, Digest::MD5,
> > > Compress::Zlib, Fcntl, XML::Parser, XML::XPath, Unicode::Map8,
> > > Unicode::String, MIME::Base64, Storable (loaded but not used),
> > > XML::Sablotron (loaded but not used).
> > >
> > > mod_perl is 1.24 and Perl is 5.00503 (Mandrake release).
> >
> > First, Apache::VMonitor and GTop will make your debugging work much
> > easier.
>
> Not really. :-(

I meant that GTop gives you easier access to memory usage. Of course you don't need it if you read /proc directly.

> > Second, the only reason I can think of that things start happening
> > after the Nth hit is that one of the modules you use does some internal
> > accounting that goes astray after the Nth hit. Using Apache::Peek will help
> > you discover the leakage by dumping and comparing the complete package
> > variable tables. You will find some examples of using Apache::Peek in the
> > guide. This will definitely allow you to spot the offending module. Of
> > course the complete list of loaded modules can be found in /perl-status :)
>
> I don't see this example. It doesn't seem like ::Peek gives me any useful
> information at all unless I recompile Perl with DEBUGGING_MSTATS, and that
> seems like a nightmare...

I was thinking about going through the symbol table and dumping all the variables, then running diff between the dumps of two requests. Be careful though that Devel::Peek doesn't show a complete dump of the whole structure, and I couldn't find the interface for changing the depth of data-structure traversal. I think Doug has patched Apache::Peek to make this configurable.

Do you think it still won't work?

> > Apache::Leak is another thing to try, but it's not really good unless you
> > know the exact offending code. Otherwise it reports leakages for things
> > which are in fact Perl optimizations.
>
> I've tried ::Leak before and it's not very helpful. Doesn't even give you a
> clue where the variables that are leaking are located.

Yup, that's what I said :(

> > Yet another thing I'd think about is monitoring the code with GTop, and
> > running it under strace/gdb. So it goes like this:
> >
> > You run a daemon that watches the processes and prints their memory size
> > via GTop. You complete 35 hits and now run the server under the Apache::DB
> > debugger. So you go 's' by 's' (step by step) and watch the printouts of
> > the GTop daemon. Once you see the growth -- you know the offending
> > module/line - and voila... of course this should be automated...
>
> While I may now have started to hallucinate due to working on this
> non-stop for about 40 hours, this hasn't helped. The leak still moves
> around.
>
> I think maybe I have to go back to the drawing board on this one :-(

BTW, Frank's comment seems cool; I didn't think about it. What happens if you create some small variable and start making it bigger (a string?) and see whether you still get this event on hit #36. Maybe you could at least move it from #36 to #3. It's a documented fact that the data structures change between hit #1 and #2, but not #3, assuming it's the same request.

_
Stas Bekman JAm_pH -- Just Another mod_perl Hacker
http://stason.org/ mod_perl Guide http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]
Re: [ RFC ] A Session Manager module
Sounds interesting.

Is this module just managing the session ID or also the session data? i.e. is the manager capable of storing complex objects (via something like Storable or Data::Dumper)? Will you provide hooks for "caching and DB abstraction" layers so that the developer can provide the backend implementation?

~J

----- Original Message -----
From: "Greg Cope" <[EMAIL PROTECTED]>
To: "Modperl list" <[EMAIL PROTECTED]>
Sent: Sunday, September 10, 2000 2:40 PM
Subject: [ RFC ] A Session Manager module

> Dear All
>
> As some of you are aware, for the past few weeks I have been working on a
> Session Manager style module.
>
> It works (ish ;-). I know of a few issues (that may not be important
> enough to change), but it works in my development environment.
>
> What do I do with it now? I think it may fit in well as a front end to
> Apache::Session, although it needs a name. It's only around 400 lines
> (including some POD and comments).
>
> It could fit in quite well with Apache::Session (i.e. providing a session
> id, and Apache::Session does the server side storage). It may also fit
> in with other implementations such as Embperl, Apache::ASP, and Mason -
> although most of these have their own implementations.
>
> My original plan - believe it or not - was to write a short "how to"
> style tutorial on creating a mod_perl shopping cart. The idea was not to
> have another shopping cart, as there are many other better
> implementations, but to have a few reasonably easy examples of mod_perl
> in a real world type example. This was inspired by someone's post a few
> months back asking for documentation / articles etc. Well, I started and
> wrote a few modules, and went on to create a templating module (as per
> any true path to mod_perl wisdom!), this session module and a DB
> abstraction layer (and apparently I should be creating a caching module
> as well!).
>
> Well, any ideas - please let me know.
>
> Greg Cope
>
> A few details below.
>
> AIM:
>
> To manage session IDs between client and server - to get (or optionally
> set) a session id via cookies, a munged URL or path_info.
>
> Implementation:
>
> Uses a TransHandler.
> Optional configuration to alter logic / options via package scalars, e.g.
> (some but not all) in a startup.pl:
>
>     use SessionManager ();
>
>     $SessionManager::DIR_MATCH = '/foohandler';  # default is to match
>                                                  # everything of /\.html/ !
>     $SessionManager::REDIRECT  = 1;              # default is no redirect
>     $SessionManager::DEBUG     = 7;              # default debug is off
>     $SessionManager::SESSION_ID_LENGTH = 32;     # nice long ID length
>     $SessionManager::NON_MATCH = '\.gif|\.jpeg|\.jpg';  # ignore images
>
> i.e. the above will session-manage a URI matching 'foohandler'; if
> cookies are off it will redirect and set a munged URI with a session
> id length of 32 - it will also dump loads of debug info (between 3 and
> 20 lines a request), and ignore any gifs, jpegs, and jpg files within a
> URI 'foohandler'.
>
> Also:
>
>     $SessionManager::COOKIES_ONLY = 1;
>
> Will only try cookies and then stop.
>
>     $SessionManager::ARGS_ONLY = 1;
>
> Will only try ARGS (after cookies).
>
>     $SessionManager::URI_FIRST = 1;
>
> Try the (munged) URI after cookies, before ARGS; this allows changing
> the order in which things are checked.
>
>     $SessionManager::USE_ENV = 1;
>
> Instead of using pnotes entries, use environment variables.
>
> There are also a few other bits in the works for trying to set a
> cookie if they are off by redirecting, and then using a munged URI or
> ARGS if that failed - this will have TTL and DOMAIN vars that will
> allow overriding of the defaults.
>
> That's about it.
Re: Memory leak hell...
On Sun, 10 Sep 2000, Stas Bekman wrote:

> On Sun, 10 Sep 2000, Matt Sergeant wrote:
>
> > For 2 days solid now I've been trying to track down a very bizarre memory
> > leak in AxKit.
> >
> > I've checked everything I can think of - all circular references are now
> > gone, all closures clean up carefully after themselves, and I've reduced
> > the usage of some external modules. But still the processes grow.
> >
> > Now to the weird bit. I could track this down if it wasn't for this:
> >
> > The memory leak starts after the Nth hit, usually around 35. This is
> > running under httpd -X.
> >
> > So it goes along very happily for 35 hits - memory usage is rock solid
> > stable. Then after the 35th hit the memory usage starts to grow about 4k
> > per hit. Obviously that's an impossible rate of growth to sustain for any
> > amount of time, and soon the server is swamped.
> >
> > Can anyone give me _any_ help in figuring out where this might be coming
> > from? I've tried several things, including adding the ability to display
> > the memory usage in the debug statements (by getting the VmSize out of
> > /proc/self/status). This is driving me absolutely mad, as you might
> > tell. The only thing I can come close to in locating it is some remote
> > possibility that it is in one particular section of the code, but that
> > doesn't actually seem to be the case - sometimes the memory increase goes
> > outside that section of code.
> >
> > The modules I'm using are:
> >
> > Time::HiRes, Apache::* (core apache stuff only), AxKit, Digest::MD5,
> > Compress::Zlib, Fcntl, XML::Parser, XML::XPath, Unicode::Map8,
> > Unicode::String, MIME::Base64, Storable (loaded but not used),
> > XML::Sablotron (loaded but not used).
> >
> > mod_perl is 1.24 and Perl is 5.00503 (Mandrake release).
>
> First, Apache::VMonitor and GTop will make your debugging work much
> easier.

Not really. :-(

> Second, the only reason I can think of that things start happening
> after the Nth hit is that one of the modules you use does some internal
> accounting that goes astray after the Nth hit. Using Apache::Peek will help
> you discover the leakage by dumping and comparing the complete package
> variable tables. You will find some examples of using Apache::Peek in the
> guide. This will definitely allow you to spot the offending module. Of
> course the complete list of loaded modules can be found in /perl-status :)

I don't see this example. It doesn't seem like ::Peek gives me any useful information at all unless I recompile Perl with DEBUGGING_MSTATS, and that seems like a nightmare...

> Apache::Leak is another thing to try, but it's not really good unless you
> know the exact offending code. Otherwise it reports leakages for things
> which are in fact Perl optimizations.

I've tried ::Leak before and it's not very helpful. Doesn't even give you a clue where the variables that are leaking are located.

> Yet another thing I'd think about is monitoring the code with GTop, and
> running it under strace/gdb. So it goes like this:
>
> You run a daemon that watches the processes and prints their memory size
> via GTop. You complete 35 hits and now run the server under the Apache::DB
> debugger. So you go 's' by 's' (step by step) and watch the printouts of
> the GTop daemon. Once you see the growth -- you know the offending
> module/line - and voila... of course this should be automated...

While I may now have started to hallucinate due to working on this non-stop for about 40 hours, this hasn't helped. The leak still moves around.

I think maybe I have to go back to the drawing board on this one :-(

--
Fastnet Software Ltd. High Performance Web Specialists
Providing mod_perl, XML, Sybase and Oracle solutions
Email for training and consultancy availability.
http://sergeant.org | AxKit: http://axkit.org
[ RFC ] A Session Manager module
Dear All

As some of you are aware, for the past few weeks I have been working on a Session Manager style module.

It works (ish ;-). I know of a few issues (that may not be important enough to change), but it works in my development environment.

What do I do with it now? I think it may fit in well as a front end to Apache::Session, although it needs a name. It's only around 400 lines (including some POD and comments).

It could fit in quite well with Apache::Session (i.e. providing a session id, and Apache::Session does the server side storage). It may also fit in with other implementations such as Embperl, Apache::ASP, and Mason - although most of these have their own implementations.

My original plan - believe it or not - was to write a short "how to" style tutorial on creating a mod_perl shopping cart. The idea was not to have another shopping cart, as there are many other better implementations, but to have a few reasonably easy examples of mod_perl in a real world type example. This was inspired by someone's post a few months back asking for documentation / articles etc. Well, I started and wrote a few modules, and went on to create a templating module (as per any true path to mod_perl wisdom!), this session module and a DB abstraction layer (and apparently I should be creating a caching module as well!).

Well, any ideas - please let me know.

Greg Cope

A few details below.

AIM:

To manage session IDs between client and server - to get (or optionally set) a session id via cookies, a munged URL or path_info.

Implementation:

Uses a TransHandler. Optional configuration to alter logic / options via package scalars, e.g. (some but not all) in a startup.pl:

    use SessionManager ();

    $SessionManager::DIR_MATCH = '/foohandler';  # default is to match
                                                 # everything of /\.html/ !
    $SessionManager::REDIRECT  = 1;              # default is no redirect
    $SessionManager::DEBUG     = 7;              # default debug is off
    $SessionManager::SESSION_ID_LENGTH = 32;     # nice long ID length
    $SessionManager::NON_MATCH = '\.gif|\.jpeg|\.jpg';  # ignore images

i.e. the above will session-manage a URI matching 'foohandler'; if cookies are off it will redirect and set a munged URI with a session id length of 32 - it will also dump loads of debug info (between 3 and 20 lines a request), and ignore any gifs, jpegs, and jpg files within a URI 'foohandler'.

Also:

    $SessionManager::COOKIES_ONLY = 1;

Will only try cookies and then stop.

    $SessionManager::ARGS_ONLY = 1;

Will only try ARGS (after cookies).

    $SessionManager::URI_FIRST = 1;

Try the (munged) URI after cookies, before ARGS; this allows changing the order in which things are checked.

    $SessionManager::USE_ENV = 1;

Instead of using pnotes entries, use environment variables.

There are also a few other bits in the works for trying to set a cookie if they are off by redirecting, and then using a munged URI or ARGS if that failed - this will have TTL and DOMAIN vars that will allow overriding of the defaults.

That's about it.
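The extraction order described in this RFC (cookie first, then munged URI, then query-string args) can be sketched as plain functions. Everything here is illustrative, not the actual module: the `SID-` URI prefix, the `SESSION_ID` cookie name and the `session=` argument name are invented for the example. In the real module these would be wired into a PerlTransHandler, with the result stashed in `$r->pnotes`.

```perl
#!/usr/bin/perl -w
# Pure-Perl sketch of the cookie -> munged URI -> ARGS lookup order.
# Cookie name, URI prefix and arg name are hypothetical examples.
use strict;

our $SESSION_ID_LENGTH = 32;

sub id_from_cookie {
    my $header = shift || '';
    my %cookies = map { split /=/, $_, 2 } split /;\s*/, $header;
    my $id = $cookies{SESSION_ID};
    return (defined $id && length($id) == $SESSION_ID_LENGTH) ? $id : undef;
}

sub id_from_uri {
    # Munged URIs look like /SID-<hex id>/real/path; returns (id, real uri)
    # so the handler can pass the real URI on to later phases.
    my $uri = shift;
    return $uri =~ m{^/SID-([0-9a-f]{$SESSION_ID_LENGTH})(/.*)$}
         ? ($1, $2)
         : (undef, $uri);
}

sub id_from_args {
    my $args = shift || '';
    return $args =~ /(?:^|&)session=([0-9a-f]{$SESSION_ID_LENGTH})/ ? $1 : undef;
}

sub find_session_id {
    my ($cookie_header, $uri, $args) = @_;
    my $id = id_from_cookie($cookie_header);   # 1. cookie
    ($id)  = id_from_uri($uri)   unless defined $id;   # 2. munged URI
    $id    = id_from_args($args) unless defined $id;   # 3. ARGS
    return $id;
}
```

Flags like COOKIES_ONLY, ARGS_ONLY and URI_FIRST would then just reorder or skip these three probes inside find_session_id.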
Re: Memory leak hell...
Matt Sergeant <[EMAIL PROTECTED]> writes:

> Now to the weird bit. I could track this down if it wasn't for this:
>
> The memory leak starts after the Nth hit, usually around 35. This is
> running under httpd -X.
>
> So it goes along very happily for 35 hits - memory usage is rock solid
> stable. Then after the 35th hit the memory usage starts to grow about 4k
> per hit.

This could be a misleading symptom. Suppose your code always leaks 4K per hit. The process size will only start visibly growing when memory that was allocated and then freed during initialisation is exhausted. You could see this effect if the malloc pool has 140K free before the first hit.

--
Frank Cringle, [EMAIL PROTECTED]
voice: (+49 7745) 928759; fax: 928761
Re: Memory leak hell...
Matt Sergeant wrote:

> Can anyone give me _any_ help in figuring out where this might be coming
> from?

When I'm working on problems like this, there are two basic things I try. They're not rocket science, but they usually work.

The first is removing sections of code until the leak goes away. Then you get to tear your hair out wondering why that subroutine leaks, but at least you'll know where it is.

The second is to run under the debugger, with top running in strobe-light mode (refresh 0) in another window, watching for which line pushes the memory up.

You've already named the usual suspects - closures and circular refs - but some closure problems can be very subtle. There's also the memory-related stuff that Apache::Status provides, but I found it difficult to get useful info out of it.

- Perrin
Re: Memory leak hell...
On Sun, 10 Sep 2000, Matt Sergeant wrote:

> For 2 days solid now I've been trying to track down a very bizarre memory
> leak in AxKit.
>
> I've checked everything I can think of - all circular references are now
> gone, all closures clean up carefully after themselves, and I've reduced
> the usage of some external modules. But still the processes grow.
>
> Now to the weird bit. I could track this down if it wasn't for this:
>
> The memory leak starts after the Nth hit, usually around 35. This is
> running under httpd -X.
>
> So it goes along very happily for 35 hits - memory usage is rock solid
> stable. Then after the 35th hit the memory usage starts to grow about 4k
> per hit. Obviously that's an impossible rate of growth to sustain for any
> amount of time, and soon the server is swamped.
>
> Can anyone give me _any_ help in figuring out where this might be coming
> from? I've tried several things, including adding the ability to display
> the memory usage in the debug statements (by getting the VmSize out of
> /proc/self/status). This is driving me absolutely mad, as you might
> tell. The only thing I can come close to in locating it is some remote
> possibility that it is in one particular section of the code, but that
> doesn't actually seem to be the case - sometimes the memory increase goes
> outside that section of code.
>
> The modules I'm using are:
>
> Time::HiRes, Apache::* (core apache stuff only), AxKit, Digest::MD5,
> Compress::Zlib, Fcntl, XML::Parser, XML::XPath, Unicode::Map8,
> Unicode::String, MIME::Base64, Storable (loaded but not used),
> XML::Sablotron (loaded but not used).
>
> mod_perl is 1.24 and Perl is 5.00503 (Mandrake release).

First, Apache::VMonitor and GTop will make your debugging work much easier.

Second, the only reason I can think of that things start happening after the Nth hit is that one of the modules you use does some internal accounting that goes astray after the Nth hit. Using Apache::Peek will help you discover the leakage by dumping and comparing the complete package variable tables. You will find some examples of using Apache::Peek in the guide. This will definitely allow you to spot the offending module. Of course the complete list of loaded modules can be found in /perl-status :)

Apache::Leak is another thing to try, but it's not really good unless you know the exact offending code. Otherwise it reports leakages for things which are in fact Perl optimizations.

Yet another thing I'd think about is monitoring the code with GTop, and running it under strace/gdb. So it goes like this:

You run a daemon that watches the processes and prints their memory size via GTop. You complete 35 hits and now run the server under the Apache::DB debugger. So you go 's' by 's' (step by step) and watch the printouts of the GTop daemon. Once you see the growth -- you know the offending module/line - and voila... of course this should be automated...

And while you work on this it would be really, really cool to have it documented. I can imagine that you will have to write a few snippets of code to automate things... but this would be a treasure for those in the same trouble as you.

Hope this helps.

_
Stas Bekman JAm_pH -- Just Another mod_perl Hacker
http://stason.org/ mod_perl Guide http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]
http://apachetoday.com http://jazzvalley.com http://singlesheaven.com
http://perlmonth.com perl.org apache.org
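The watcher daemon described above can be sketched without GTop by polling /proc, which is Linux-specific but matches what Matt is already doing with VmSize; with GTop the read would be roughly `GTop->new->proc_mem($pid)->vsize`. The loop bound and interval are arbitrary for the example — a real watcher would loop until killed.

```perl
#!/usr/bin/perl -w
# Sketch of the memory-watcher daemon: poll a process's VmSize and
# print a line whenever it changes.  Reads /proc (Linux-specific)
# instead of GTop.  Point it at the httpd -X pid, then step through
# the request in Apache::DB and watch for the jump.
use strict;

sub vmsize_kb {
    my $pid = shift;
    open my $fh, '<', "/proc/$pid/status" or return undef;
    my ($line) = grep { /^VmSize:/ } <$fh>;
    close $fh;
    return defined $line && $line =~ /(\d+)/ ? $1 : undef;
}

my $pid  = shift || $$;        # watching ourselves is just for demonstration
my $last = 0;
for (1 .. 5) {                 # a real watcher would loop until killed
    my $kb = vmsize_kb($pid);
    last unless defined $kb;   # no /proc (non-Linux) or process gone
    printf "VmSize: %7d kB  (%+d kB)\n", $kb, $kb - $last if $kb != $last;
    $last = $kb;
    sleep 1;
}
```

Correlating a size jump in this output with the line just stepped over in the debugger is the "automated" version of the 's'-by-'s' procedure above.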
Memory leak hell...
For 2 days solid now I've been trying to track down a very bizarre memory leak in AxKit.

I've checked everything I can think of - all circular references are now gone, all closures clean up carefully after themselves, and I've reduced the usage of some external modules. But still the processes grow.

Now to the weird bit. I could track this down if it wasn't for this:

The memory leak starts after the Nth hit, usually around 35. This is running under httpd -X.

So it goes along very happily for 35 hits - memory usage is rock solid stable. Then after the 35th hit the memory usage starts to grow about 4k per hit. Obviously that's an impossible rate of growth to sustain for any amount of time, and soon the server is swamped.

Can anyone give me _any_ help in figuring out where this might be coming from? I've tried several things, including adding the ability to display the memory usage in the debug statements (by getting the VmSize out of /proc/self/status). This is driving me absolutely mad, as you might tell. The only thing I can come close to in locating it is some remote possibility that it is in one particular section of the code, but that doesn't actually seem to be the case - sometimes the memory increase goes outside that section of code.

The modules I'm using are:

Time::HiRes, Apache::* (core apache stuff only), AxKit, Digest::MD5, Compress::Zlib, Fcntl, XML::Parser, XML::XPath, Unicode::Map8, Unicode::String, MIME::Base64, Storable (loaded but not used), XML::Sablotron (loaded but not used).

mod_perl is 1.24 and Perl is 5.00503 (Mandrake release).

--
Fastnet Software Ltd. High Performance Web Specialists
Providing mod_perl, XML, Sybase and Oracle solutions
Email for training and consultancy availability.
http://sergeant.org | AxKit: http://axkit.org
Mysql+Modperl?
Hi. I installed Apache_1.3.12 and mod_perl-1.23 the flexible way, also PHP4 as a module. I am using mysql-3.22.32 (from source) and PostgreSQL, plus the perl modules DBI-1.14 and Msql-Mysql-modules-1.2214. There were some minor problems during the first install with PHP, but the second time everything ran without any errors. Apache started fine, and the PHP test phpinfo() ran too. Also HTML-Embperl-1.2.1. As it's my job to write scripts under Embperl, I tested Embperl without database connections, and everything worked nicely.

Then I made a small script to test database connections:

    [- use DBI;
       $dbh = DBI->connect("DBI:mysql:database=test", "root", "");
       $kuupaev = ($dbh->selectall_arrayref(qq{ select now() }))->[0][0];
       $dbh->disconnect; -]
    No mingi ilge jama
    [+ $kuupaev +]

I got an empty document (error message). I have included that error from the error log (below). With PostgreSQL it worked nicely. I tried different Msql-Mysql-modules, but without success. When I built mod_perl statically with Apache (the first step in the installation guide :), the script worked fine. Btw, I tried it as a pure perl script (removed all unnecessary tags); DBI:mysql worked nicely and gave the right answers.

So, it comes to my mind that one of five things could be broken:

1. Apache_1.3.12
2. mod_perl-1.23
3. Msql-Mysql-modules-1.22xx
4. DBI-1.14
5. mysql-3.22.32

The error log contains only:

    [Sat Sep 9 21:49:01 2000] [notice] child pid 25528 exit signal Segmentation fault (11)

Is there a way to build mod_perl statically (later adding Embperl) but with PHP4? The main goal is to build a hybrid PHP4 + Embperl/mod_perl environment with the mysql and postgresql databases.

Greetings,
Antti
Re: perlyear or perlmonth
> anybody know what happened to perl month, it is on release 11 from a
> decade back

It'll be back in operation soon. Probably in a few days there will be a new issue online.

_
Stas Bekman JAm_pH -- Just Another mod_perl Hacker
http://stason.org/ mod_perl Guide http://perl.apache.org/guide
mailto:[EMAIL PROTECTED]
http://apachetoday.com http://jazzvalley.com http://singlesheaven.com
http://perlmonth.com perl.org apache.org
Re: SELECT cacheing
Brian Cocks wrote:
>
> I'm wondering how much improvement this caching is over the database caching
> the parsed SQL statement and results (disk blocks)?
>
> In Oracle, if you issue a query that is cached, it doesn't need to be parsed.
> If the resulting blocks are also cached, there isn't any disk access. If the
> database is tuned, you should be able to get stuff out of cache over 90% of
> the time. I don't know what databases other than Oracle do.

Oracle is clever in this respect - and you are right, if you tune correctly you should hit the cache... But many other DBs do not have a shared execution plan / results cache. MySQL, mSQL and PostgreSQL do not. As far as I am aware, Sybase only has an execution plan cache that is per connection (I could be wrong here).

The MySQL developers have a SELECT CACHED idea, where you can define a statement as cacheable, and further calls to SELECT CACHE will return the cached results - this is all on the todo list with no fixed date.

> What are the advantages of implementing your own cache? Is there any reason
> other than speed?

Could be wrong, but no - and it has a bad point in that it introduces an added layer of complexity. I am certainly interested, as accessing a local cache should be an order of magnitude faster than asking a busy DB.

Greg Cope

>
> --
> Brian Cocks
> Senior Software Architect
> Multi-Ad Services, Inc.
> [EMAIL PROTECTED]
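The local cache being discussed can be sketched as a per-process memoization of result sets, keyed on the SQL text plus bind values, with a TTL so stale rows eventually expire. This is an illustration, not an existing module: the fetcher is passed in as a code ref so the sketch is DB-agnostic; in real use it would be something like `sub { $dbh->selectall_arrayref($sql, undef, @binds) }`.

```perl
#!/usr/bin/perl -w
# Sketch of a per-process SELECT cache: memoize result sets keyed on
# SQL text + bind values, with a TTL.  The fetcher code ref stands in
# for a real $dbh call; names and policy are illustrative.
use strict;

my %cache;   # key => { expires => epoch seconds, rows => \@rows }

sub cached_select {
    my ($fetcher, $ttl, $sql, @binds) = @_;
    my $key = join "\0", $sql, @binds;      # \0 can't appear in the SQL
    my $hit = $cache{$key};
    return $hit->{rows} if $hit && $hit->{expires} > time;
    my $rows = $fetcher->($sql, @binds);    # cache miss: ask the database
    $cache{$key} = { expires => time + $ttl, rows => $rows };
    return $rows;
}

# Usage with a stub fetcher standing in for a real database handle:
my $db_calls = 0;
my $fetch = sub { $db_calls++; [ [ "row for @_" ] ] };

cached_select($fetch, 60, "select name from t where id = ?", 1);
cached_select($fetch, 60, "select name from t where id = ?", 1);  # cache hit
cached_select($fetch, 60, "select name from t where id = ?", 2);  # new bind value

print "database hit $db_calls times\n";   # prints "database hit 2 times"
```

This also makes Greg's complexity point concrete: the cache must be invalidated (or left to the TTL) whenever the underlying tables change, which is exactly the layer the database would otherwise manage for you.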
Re: open(FH,'|qmail-inject') fails
Perrin Harkins wrote:
>
> On Fri, 8 Sep 2000, Stas Bekman wrote:
>
> > > As far as benchmarks are concerned, I'm sending one mail after having
> > > displayed the page, so it shouldn't matter much ...
> >
> > Yeah, and every time you get a 1M process fired up...
>
> Nevertheless, in benchmarks we ran we found forking qmail-inject to be
> quite a bit faster than Net::SMTP. I'd say that at least from a
> command-line script qmail-inject is a more scalable approach.

Or even better would be to use qmail-remote and avoid the queue, but that's a qmail and not a mod_perl thing.

BTW, I'm looking at this at the mo and have decided to go back to: dump the email to a DB -> a cron'd robot sends the mail to port 25 -> the mail is dealt with there. Why? Well, this is the most scalable and it's cross-platform; talking directly via pipes / forks etc. usually has a limit somewhere, and all too often I may reach it far sooner than I thought.

Greg

> - Perrin
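The fork-and-pipe injection Perrin benchmarked needs error checking on both the open and the close (a non-zero child exit only shows up at close time). A minimal sketch, with two assumptions flagged: the qmail-inject path varies by install (commonly /var/qmail/bin/qmail-inject), and the helper takes the command as a list so no shell is involved and any filter can stand in for testing.

```perl
#!/usr/bin/perl -w
# Sketch of piping a message to an injector such as qmail-inject.
# The helper takes the command as an arrayref (list-form pipe open,
# so no shell); the demo uses perl itself as a stand-in filter.
use strict;

sub send_via_pipe {
    my ($cmd, $message) = @_;
    open my $pipe, '|-', @$cmd or return 0;   # fork/exec failed
    print $pipe $message;
    close $pipe or return 0;                  # catches non-zero child exit ($?)
    return 1;
}

# Usage (hypothetical message; swap the command for
# ['/var/qmail/bin/qmail-inject'] in real use):
my $msg = "To: user\@example.com\nSubject: test\n\nhello\n";
my $ok  = send_via_pipe([$^X, '-ne', '1'], $msg);   # perl as a throwaway sink
print $ok ? "sent\n" : "failed: $?\n";
```

Greg's point still stands, though: each call forks a child per mail, which is exactly the per-request cost the queue-in-a-DB-plus-cron approach avoids.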
perlyear or perlmonth
Hello,

Anybody know what happened to Perl Month? It is on release 11 from a decade back.

Issam