Cahill, Earl wrote:
I am finishing up a sort of alpha version of Data::Fallback (my own name)
which should work very well for cache'ing just about anything locally on a
box. We are planning on using it to cache dynamically generated html
templates and images. You would ask a local perl
Corey Holzer wrote:
1. Redownloaded the source tarball for the version of Apache that I am
running on my Linux RH 7.2 box
2. Untarred the source tarball for Apache
3. Executed ./configure --with-apache-includes=the /src/includes
directory under the source dir for Apache
Between 2 and 3 you
Geoffrey Young wrote:
John Siracusa wrote:
I have something like:
<Location /foo>
SetHandler perl-script
PerlHandler My::Foo
</Location>
<Location />
SetHandler perl-script
PerlHandler My::Bar
AuthName Bar
AuthType Basic
PerlAuthenHandler My::Auth::Bar
PerlAuthzHandler
Mark Hazen wrote:
I'm sorry, I didn't explain an important component. Since I am dealing with
a few hundred requests per minute (this is what got me onto mod_perl to begin
with), using DBI's ability to write to a file would vastly overwhelm my
system
Won't capturing that much data in RAM
Medi Montaseri wrote:
Caller can also buy some content management software like Interwoven's
TeamSite
product that provides a virtual workarea, for about $300,000
It's so easy and effective to run mod_perl on developers' personal
machines, I think there's no excuse not to do it
At eToys we
Cahill, Earl wrote:
Any chance of being able to define a runaway script based on percent of
CPU or percent of memory used, as well as time in seconds? This would be
great for us. Every so often we get a script that just starts choking on
memory, and gets every process on the box swapping,
John E Leon Guerrero wrote:
in my case, we had a number of scripts that would change
STDOUT in some fashion (usually so they could set $|) but then die due to
some error before resetting STDOUT back
Interesting. One safety measure to prevent this would be to install a
cleanup handler that
A.C.Sekhar wrote:
How can I maintain the connections in perl?
Which connections? Connections to a database? A web browser?
Something else?
- Perrin
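If database connections are what's meant, the usual mod_perl answer is persistent handles via Apache::DBI. A minimal sketch (the startup.pl path is hypothetical):

```perl
# startup.pl -- loaded from httpd.conf with: PerlRequire /path/to/startup.pl
use Apache::DBI;   # must be loaded before DBI so it can intercept connect()
use DBI;

# From here on, DBI->connect() calls with identical arguments inside
# handlers reuse a cached per-child handle instead of reconnecting on
# every request; Apache::DBI also pings the handle to validate it.
```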
Andy Lester wrote:
I want my MyFilter to process EVERYTHING that Apache spits out, whether
with mod_perl, mod_php or just reading a .html file from the filesystem,
especially the mod_php stuff.
Assuming you mean you want to look at the generated content from
non-mod_perl handlers and do
Andy Lester wrote:
So, my HTML::Lint checking is only going to work on output from the
mod_perl chain.
If you aren't terribly concerned about performance, there are several
Apache::Proxy modules which should be easy to modify to put your lint
checking in. Do a search for proxy on CPAN to
Nico Erfurth wrote:
your handler could tie the output-handle (is this possible?) and run a
subrequest.
Nope, not possible. You can only do that for mod_perl requests.
- Perrin
Andrew Ho wrote:
I've been investigating other template systems to try to find similar
functionality in an existing package for a non-Tellme related project and
haven't been able to find any embedded-Perl solutions that can be called
from a .pl and still have the benefits of template caching.
F. Xavier Noria wrote:
For example, in the
hangman game in O'Reilly's book a controller would load a session from
the cookie, process the user's guess, modify the state and redirect the
request internally to the view.
It would probably be orders of magnitude faster to just call a template
Mat wrote:
Hi all,
I have the following configuration.
<Location /my>
SetHandler perl-script
PerlAccessHandler MyCheck
PerlHandler MyHomePage
</Location>
The PerlAccessHandler checks if the user cookie is valid and sets a $r->notes()
entry to pass the user id to MyHomePage
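A minimal sketch of such an access handler (mod_perl 1.x API; the cookie name and format here are assumptions, not from the original post):

```perl
package MyCheck;
use strict;
use Apache::Constants qw(OK FORBIDDEN);

sub handler {
    my $r = shift;
    # Hypothetical cookie format: "uid=<word>"
    my ($uid) = ($r->header_in('Cookie') || '') =~ /\buid=(\w+)/;
    return FORBIDDEN unless defined $uid;   # no valid cookie: deny access
    $r->notes(uid => $uid);                 # pass the user id to later phases
    return OK;                              # MyHomePage reads $r->notes('uid')
}
1;
```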
When I used CGI::SecureState it gave the client a non-versioning (more on
that later) key and stored the state information in the filesystem.
Okay, I only looked at it briefly and thought it stored the data on the
client. Your module is actually more like CGI::EncryptForm I think, but
yours
As I understand it, the session data is state which is committed to
the database on each request (possibly). It would seem to me that
instead of denormalizing the state into a separate session table, you
should just store it in a normal table.
The typical breakdown I use for this is to put
And that is what I am doing for a small project I'm working on now. In my
case, I'm not sure about the capabilities of the remote server, and I know
for sure that I don't have a database available, so session information is
saved via hidden form fields. It's primitive, but was actually a bit
I built and use a module that encodes a session hash into a number of
hidden fields with a security MD5 sum.
Sounds a lot like CGI::SecureState. Have you ever looked at it?
- Perrin
So, is there an alternative - a module that will take an image
(gif/jpeg) and generate a thumbnail from it?
The GD module seems like a good candidate. There's also the Gimp
modules.
- Perrin
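A sketch with GD (requires libgd and the GD module from CPAN; the method names are GD.pm's real API, but the scaling policy is just one choice):

```perl
use strict;
use warnings;
use GD;

# Scale a GD::Image down so its longest side is $max pixels.
sub make_thumbnail {
    my ($src, $max) = @_;
    my ($w, $h) = $src->getBounds;
    my $scale = $max / ($w > $h ? $w : $h);
    my ($tw, $th) = (int($w * $scale) || 1, int($h * $scale) || 1);
    my $thumb = GD::Image->new($tw, $th, 1);   # 1 = truecolor
    $thumb->copyResized($src, 0, 0, 0, 0, $tw, $th, $w, $h);
    return $thumb;
}

# Typical use: read a JPEG, write a 100px thumbnail back out.
# my $img   = GD::Image->newFromJpeg('photo.jpg');
# my $thumb = make_thumbnail($img, 100);
# open my $fh, '>', 'thumb.jpg' or die $!; binmode $fh;
# print $fh $thumb->jpeg(80);
```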
I have a mysterious mistaken identity problem that I have not been
able to solve.
There are two common sources of this problem. One is an ID generation
system that is not unique enough. Another is a bug in your code with
globals (see the section of the Guide about debugging with httpd -X).
The only other way I can think of to solve this is to send my module list
to this audience. Please find it, attached, with home-grown modules
deleted.
Have you tried debugging the old-fashioned way, i.e. remove things until it
works? That's your best bet. I suspect you will find that you
2. I don't think it's a global variable issue. Basically, I just grab
the cookie with $r->header_in('Cookie')
and decrypt it.
It's what you do after that that matters.
Besides, if it's global then the mistaken ID's should
be from anywhere randomly.
True, but random may not always look
When the cookie is recovered, I simply decode, uncompress, thaw, check
the digest, and thaw the inner object.
It's really a good idea to do this even when the cookie is nothing but a
session ID. A standard module for this like the one Jay mentioned would
definitely be nice.
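A minimal sketch of that kind of tamper check (Digest::MD5 ships with Perl; the secret and cookie format are illustrative, and the compress/Storable-freeze steps the posters mention are omitted here):

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

my $secret = 'server-side-secret';   # hypothetical key, kept off the client

# Append a keyed digest to a value before sending it out in a cookie.
sub sign_value {
    my ($value) = @_;
    return $value . ':' . md5_hex($secret . $value);
}

# Recover the value only if the digest still matches; undef means tampered.
sub verify_value {
    my ($signed) = @_;
    my ($value, $digest) = $signed =~ /^(.*):([0-9a-f]{32})$/s
        or return undef;
    return md5_hex($secret . $value) eq $digest ? $value : undef;
}

my $cookie = sign_value('session-id-12345');
print defined verify_value($cookie)       ? "ok\n" : "tampered\n";  # ok
print defined verify_value($cookie . '0') ? "ok\n" : "tampered\n";  # tampered
```

For production use, an HMAC construction (e.g. Digest::HMAC_MD5) resists length-extension attacks better than the plain concatenation shown here.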
My strategy for
I dunno... That sounds like a LOT of overhead for just a session ID
that's gonna result in server lookups too...
It's really not. It adds a negligible amount of time to the request. As
Jeffrey pointed out, the functions he's using are all in C and very fast.
Why verify session IDs? To make
I think the problem here is that mod_perl sets the assbackwards flag
when setting headers via send_cgi_header() (which CGI.pm does).
Is this only an issue when using CGI.pm or PerlSendHeader then? I seem to
recall having no trouble doing this from a normal handler.
- Perrin
However both applications make use of the UNIVERSAL package to create
universally accessible methods (to return the current database handle for
example) within the application.
Better to put those into a package of your own and call them with
fully-qualified names, or import them as Tatsuhiko
A list of things I've noticed:
* If you have two *different* modules which have the same name, then
either one, or the other is loaded in memory, never both. This is
dead annoying. I think Perl standard modules + CPAN modules should be
shared, other modules which are specific to a given
If the UNIVERSAL namespace is shared I
would have thought one or the other (the last one?) would get the
print_error sub and the other loses out but at some point they seem to
coexist just fine. Whilst at some other point they as expected and one
gets
the others. Any theories?
You have a
Keep in mind I tried several versions of CGI.pm. As for where the problem is
(and yes, I did hack CGI.pm and fixed it, but felt it was unnecessary to
hack CGI.pm since it wasn't at fault, and I didn't want to break other
working apps), the problem is in the read_from_client() call
where CGI.pm
Here is the part of the httpd.conf that I believe you wanted to see.
Hmmm... I don't see anything wrong with this. It seems like the problem is
simply that Apache 1.3.x is not as fast as IIS at sending static files on
NT. Not too surprising. I've been told that Apache 2 is significantly
I have Apache/mod_perl installed on an NT box, and I am allowing customers
to do downloads of High-Resolution assets. My problem is the speed of
downloads is about 1/3 slower than the same box running IIS.
Can you post your httpd.conf? Or at least the parts of it about threads and
processes?
The application's main goals:
1. Simple install.
I don't want to use cron jobs for cleanup; I think it can be a problem
for some users.
Most of the existing session stuff is written to leave cleanup to you. If
you don't want to use cron jobs, you can do it in a cleanup handler,
possibly exec'ing a
[Mon Jan 28 14:52:35 2002] [error] mkdir : No such file or directory at
/opt/gnu
/depot/perl-5.6.1/lib/site_perl/5.6.1/Cache/FileBackend.pm line 220
Looks to me like your system has no mkdir command, or it isn't in the path,
or it doesn't support an option that's needed (-p maybe?).
Maybe
It all depends on what kind of application you have. If your code is
CPU-bound these seemingly insignificant optimizations can have a very
significant influence on the overall service performance.
Do such beasts really exist? I mean, I guess they must, but I've never
seen a mod_perl
The point is that I want to develop a coding style which tries hard to
do early premature optimizations.
We've talked about this kind of thing before. My opinion is still the same
as it was: low-level speed optimization before you have a working system is
a waste of your time.
It's much
There are many web testers out there. To put it bluntly, they don't
let you write maintainable test suites. The key to maintainability is
being able to define your own domain specific language.
Have you tried webchat? You can find webchatpp on CPAN.
Gunther Birznieks writes:
the database to perform a test suite, this can get time consuming
and
entails a lot of infrastructural overhead.
We haven't found this to be the case. All our database operations are
programmed. We install the database software with an RPM, run a
program to
I'm interested to know what the opinions are of those on this list with
regards to caching objects during database write operations. I've
encountered
different views and I'm not really sure what the best approach is.
I described some of my views on this in the article on the eToys design,
Perrin Harkins writes:
To fix this, we moved to not generating anything until it was requested.
We would fetch the data the first time it was asked for, and then cache it
for future requests. (I think this corresponds to your option 2.) Of course
then you have to decide on a cache
A site I run uses a fair variety of different programs, the most common
of which are run through Apache::Registry. To cut the memory overhead,
however, less commonly used programs are run through Apache::PerlRun.
I would not expect PerlRun to use less memory than Registry.
Both the
Your system has to be swapping horribly. I bet that the ulimit for
whoever apache is running as has the memory segment set super low.
That's a possibility. I was also thinking that maybe mod_perl was built
against a different version of Perl, possibly one that has a problem
with this
What techniques do you use to insure that your application is not
vulnerable?
Usually I write applications so that they do some processing, package up a
chunk of data, and hand it to a template. With this structure, all you need
to do is HTML-escape the data structure before handing it off, or
Yes and no. XSS attacks are possible on old browsers, when the charset is
not set (something which is often the case with mod_perl apps), and when the
HTML-escaping bit does not match what certain browsers accept as markup.
Of course I set the charset, but I didn't know that might not be
print STDERR blah blah blah is going to the browser, but I am not
really worried about it too much unless it is something I should worry
about - anyone care to comment on that?
Printing error messages to the public is a potential security risk, so
you have to decide how paranoid you want to
I have a requirement to spin off a SQL loader process after a web page (a
form which is qualified and accepted) has been submitted. Does it make
sense, or more importantly, is it dangerous to apply a fork at the end of
a module such as this:
You're probably better off using a cleanup
Here is the problem: create.pl is owned by test and group test and has file
permissions 755. When the create.pl script is run it becomes owner apache
and group apache and has to create new files and directories on the machine.
All of the new files and directories then become owner apache
Under mod_perl this takes 23 seconds. Running the perl by hand (via
extracting this piece into a separate perl script) on the same data takes
less than 1 second.
Are you sure that the string you're regex'ing is the same in both cases?
Why are you using the /o modifier? CG isn't a variable,
Umm, it didn't really answer my original query, but I guess since no one
has answered it, either I didn't present it correctly or no one has an
answer to it.
Or you posted it late on Saturday night on a weekend when most US
workers have Monday off and may be travelling. Not everyone is on the
Umm, I didn't mean to offend anyone in my previous posting - I did say I
probably hadn't presented my situation properly.
No problem, I just meant don't give up so quickly.
Of course you noticed I wrote ePerl/EmbPerl/Mason?? I clubbed them
together since I assume among other things you can
I register a clean up handler to explicitly untie the session variable.
I have found that it's safer to put things in pnotes than to use globals and
cleanup handlers. We used a lot of cleanup handlers at eToys to clear
globals holding various request-specific things, and we started getting
In a Mason context, which is where I'm using it, I do this in my
top-level autohandler (ignore the main:: subroutines, they're just for
pedagogy):
<%init>
# 'local' so it's available to lower-level components
local *session;
my $dbh = ::get_dbh;
my $session_id =
Ummm yes... you know, I'm using the Template Toolkit.
Try using the Perl stash instead of the XS stash, and see if your problem
goes away.
It seems as if the
httpd child executes the processing of the template so fast that CGI.pm
has
no time to get the POST data.
I don't think so. It
Well all my modules are written in Perl. When you say some C code you mean
the C code in DBI, or CGI or Template, don't you?
Yes. That's why I suggest trying Template with the Perl stash instead of
the XS one.
- Perrin
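The switch being suggested is a one-line Template Toolkit configuration change, made before constructing the Template object (the variable is TT's documented Template::Config stash hook):

```perl
use Template;
use Template::Config;

# Fall back to the pure-Perl stash; by default TT prefers Template::Stash::XS
# when it is compiled in, and this rules the XS code out as the culprit.
$Template::Config::STASH = 'Template::Stash';

my $tt = Template->new();   # stashes created from now on are pure Perl
```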
GUI builders usually don't work for anything but the
most trivial websites that could be written in anything
and do fine.
consider struts, a popular java mvc framework. it defines
simple interfaces for things like actions and forms. does
struts (and mvc in general) work for non trivial
It's configurable so after
exceeding a threshold the client gets content from the shared memory
cache, and if a second threshold is exceeded (ok this guy is getting
REALLY irritating) then they get the 'come back later' message. They will
only get cached content if they exceed x number of
Each time, the warn is for 'blah' because the value 'test'
is never retained in $var. Is this intended behaviour?
No, that should create a closure that keeps the value of $var. Are you sure
these requests are all going to the same instance?
Weird, it's like the MIME::Types::DATA handle just
I have noticed that Yahoo uses Location: header only for redirect
responses and thought
it may be good to save half of the bandwidth and do the same, as my
particular script/server
is serving redirects mostly. So my question is how to unset Date:,
Server: and
Content-Type: response headers?
For file organization, I'm thinking of making all page modules start
with a common namespace substring (e.g. Projectname::Page) to distinguish
them from the support (model) modules
I like to name the top level modules SiteName::Control::* and the model
modules SiteName::Model::*. Calling the
I assume I'm not the only one seeing a rash of formmail spam lately.
Is THAT what it is? I have a Yahoo mail account which someone has been
sending literally thousands of messages per day to, CC'ing lots of
people on every one, and they all appear to be from some kind of
compromised form
hrm. the problem might not be the double-loading of httpd.conf then -
that's been around since, well, before most of us (I tracked that down
to apache 0.9 once through list archives)
more likely is this:
http://marc.theaimsgroup.com/?l=apache-modperl&m=100510779912574&w=2
and the other
What are the basic advantages, disadvantages, and limitations of:
(a) stuffing all this setup/framework code into a module (either a new
module or subclassing Apache::RegistryNG as you mention below),
versus,
(b) stuffing it into a handler that all requests for a large subset of
the pages on
After I set up my app (webtool.cgi) and created the
single script version (bigtool.cgi), I ran this script
on my machine and it showed that the single file was
about 10-15% faster than the multiple modules.
No offense, but your script must not have been doing much in this test.
The
I was also thinking it would only make a small
difference, but I see many perl/CGI scripts that boast
'all this functionality in a single script'
They probably don't know any better, but to me that translates to giant
bloated unorganized mess of a script.
# BEGIN MOD_PERL CONFIG
has anybody any ideas?
Apache::Resource.
PerlModule Apache::Resource
PerlSetEnv PERL_RLIMIT_AS 32:64
PerlChildInitHandler Apache::Resource
in httpd.conf, but Apache::Resource uses BSD::Resource in the end and
thus its the same as
use BSD::Resource;
setrlimit RLIMIT_AS, 3200, 6400;
The difference is that
There are many *.par pages (estimate: 70-100 when conversion is
complete),
and they all contain the following code with minor variations that
could be
made consistent (like what constants are imported, what modules are
used,
etc.). I'd like to find a way to prevent having that code (below)
What is the difference between how a BEGIN block and an anonymous block
behave in a module loaded into mod_perl?
It looks to me like you are confused about our and BEGIN. If you change
the our to a use vars I think it will fix your problems. This is not
mod_perl-specific.
Are anonymous blocks in
By load stage I mean BEGIN blocks, anonymous
subroutines in packages loaded at startup, or even named
subroutines called from startup.pl
All of those things happen during server startup, before any request has
been submitted. There is no form data at that time.
Maybe if you could explain
On Tuesday 08 January 2002 08:16 pm, Dave Morgan wrote:
I'm trying to populate select boxes (or other input types) for my HTML
pages.
An example would be a drop down list of states and/or provinces. A large
number of these are populated from lookup tables in the database and are
relatively static.
Ok, now I'm totally confused.
Have you read the documentation for Apache::PerlRun? That might help. Try
perldoc Apache::PerlRun.
1. I have the following (and ONLY the following related to modperl) in my
httpd.conf file (of course there are other regular apache directives too):
I looked at just about every template system on CPAN and came across
Text::Template. Anyone use this one?
I'd suggest you read my overview of templating options. It summarizes the
top choices for templating tools, and talks about the strengths and
weaknesses of Text::Template.
Even then, I'd avoid disk-based cache systems, instead
preferring Cache::* if it must be shared, or just global variables if
it doesn't need to be.
Cache::FileCache is disk-based, and it is the fastest of the Cache:: options
for most data sets. There was a thread a little while back about
As far as the caching goes, we have had extremely good luck with
IPC::ShareLite used to share info across mod_perl processes.
IPC::ShareLite is not as fast as some of the other options, especially when
dealing with a large data set. The disk-based options tend to be faster.
- Perrin
Does anybody know a template engine, whose templates can be edited with a
WYSIWYG editor (favourably dreamweaver) as they will look when filled
with example data?
HTML_Tree: http://homepage.mac.com/pauljlucas/software/html_tree/
What do you suggest as a good benchmark tool to use that would be
'smart' when testing a whole complete site.
For pounding a bunch of URLs, the best are ab, httperf, and http_load. If
you need something fancier that tests a complex series of actions and
responses, there are several packages
The circular reference was the only way I could think of to force an
object to be destroyed during global destruction.
What happens if you use a global?
Hmm, that may be - Mason does create more closures now than it used to.
It seems like only 'named' closures would create this problem,
The 2.0.28 proxy uses mod_rewrite. When it rewrites URLs internally to
go to a static apache server all works great!
Compare the headers sent by your static pages vs. the ones sent by your
mod_perl pages. There might be something not quite 1.1 compliant about it
that ticks off Apache 2
I have the book but I don't always have it with
me.
That chapter is actually available for free on-line at
http://www.modperl.com/.
- Perrin
Like this? (using register_cleanup instead of pnotes)
Better to use pnotes. I started out doing this kind of thing with
register_cleanup and had problems like random segfaults. I think it was
because other cleanup handlers sometimes needed access to these resources.
- Perrin
By the way, is there a perl module to do calculations with money?
There's Math::Currency.
- Perrin
He wants to mix cgi-bin mod_perl by testing all of the scripts in
cgi-bin and putting one cgi-script at a time into mod-perl folder.
A very simple way to do this is to use Location directives to add them to
PerlRun one at a time:
<Location /cgi-bin/some_scr.pl>
SetHandler perl-script
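Filled out, one such stanza might read (the PerlHandler line is an assumption based on the PerlRun suggestion above; the path is the poster's example):

```apache
<Location /cgi-bin/some_scr.pl>
    SetHandler perl-script
    PerlHandler Apache::PerlRun
    Options +ExecCGI
</Location>
```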
I've looked through the mod_perl docs and guide and am unable to find
something that I can use in a handler to figure out what the current phase
is. This seems like such an obvious thing that I can't believe it doesn't
exist. Therefore I will conclude that I'm completely blind. Anyone care
2. We will use Template-Toolkit and Apache/mod_perl. Problem is that 2
out of 3 people have never used TT or programmed mod_perl and OO Perl.
Only I've made sites this way, they've used Embperl til now. How can I
make this switch for them a little easier?
Get them all copies of the Eagle
Actually I was wondering about writing an Apache::Singleton class, that
works the same as Class::Singleton, but clears the singleton out on each
request (by using pnotes). Would anyone be interested in that?
This sounds a bit like Object::Registrar. If you do it, I'd suggest giving
it a
ALWAYS reinitialize $Your::Singleton::ETERNAL on each query!
mod_perl will *NOT* do it for you.
If you want a per-request global, use $r->pnotes() instead of a standard
perl global. Then mod_perl *WILL* do it for you.
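A minimal sketch of the pnotes idiom (mod_perl 1.x API; $session_obj stands in for whatever per-request value you hold):

```perl
# Early in the request, in any handler phase:
$r->pnotes(session => $session_obj);    # stored for this request only

# In a later phase (or a cleanup handler) of the same request:
my $session = $r->pnotes('session');    # same ref; mod_perl discards it
                                        # automatically when the request ends
```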
You might think 'ah yeah but it would be nice if
No, it's nothing like Object::Registrar. It's like Class::Singleton.
Okay, wishful thinking. I don't use Class::Singleton, but I have written my
own versions of Object::Registrar a few times to accomplish the same goal.
I don't like to make my core classes dependent on running in a mod_perl
One thing I don't quite
understand is the need to clear out a singleton. Why would a
singleton need to hold transient state?
It's good for holding something request-specific, like a user session.
If you want a per-request global, use $r->pnotes() instead of a standard
perl global. Then mod_perl *WILL* do it for you.
True. But then you are using the Apache object and your program
doesn't work as a standard CGI anymore :(
I handle this by checking for $ENV{MOD_PERL} and just using
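One way to sketch that check (the accessor name and the plain-CGI fallback are hypothetical; mod_perl sets $ENV{MOD_PERL} for you):

```perl
use strict;
use warnings;

my %cgi_fallback;   # per-process store; fine for plain CGI, which exits anyway

sub per_request_store {
    if ($ENV{MOD_PERL}) {
        require Apache;                  # only available under mod_perl
        return Apache->request->pnotes;  # cleared after every request
    }
    return \%cgi_fallback;
}

my $store = per_request_store();
$store->{user} = 'alice';                # works identically in both worlds
```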
as it stands,
the cgi structure looks like this
https://www.foo.co.za/cgi-bin/client1/index.pl
https://www.foo.co.za/cgi-bin/client2/index.pl
it would be better if it was
https://www.foo.co.za/client1
https://www.foo.co.za/client2
You can just use this in your httpd.conf:
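The configuration itself was cut off in the archive; one plausible shape (the regex and filesystem path are assumptions) maps each short client URL onto its index.pl:

```apache
# https://www.foo.co.za/client1 -> /cgi-bin/client1/index.pl
ScriptAliasMatch ^/([^/]+)/?$ /usr/local/apache/cgi-bin/$1/index.pl
```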
Apache::RequestNotes doesn't work because Apache::Registry expects to
read the POST/PUT data from STDIN.
It's important that the cgi-scripts run unmodified and without any
notice of their unnatural environment.
I don't think there's any way around the fact that you can only read the
content
One place that Rob and I still haven't found a good solution for
profiling
is trying to work out whether we should be focussing on optimising our
mod_perl code, or our IMAP config, or our MySQL DB, or our SMTP setup,
or
our daemons' code, or...
Assuming that the mod_perl app is the front-end
I am planning to host an application and its size is going to be a big one,
so I expect the number of concurrent connections to be around 2200.
To handle that, I want to perform load sharing on 3-4 servers.
If you really expect 2200 concurrent connections, you should buy dedicated
Aside from the fact I _really_ wouldn't expect that many actual, live
TCP connections at one time...
Nor would I, although we did see huge numbers of open connections during
peak times at eToys. Mostly to the image serving machines though.
I _really_ hate so-called dedicated boxes. They're
I was using Cache::SharedMemoryCache on my system. I figured, Hey, it's
RAM, right? It's gonna be WAY faster than anything disk-based.
The thing you were missing is that on an OS with an aggressively caching
filesystem (like Linux), frequently read files will end up cached in RAM
anyway.
I spoke to the technical lead at Yahoo who said mod_perl will not scale as
well as c++ when you get to their level of traffic, but for a large
ecommerce site mod_perl is fine.
According to something I once read by David Filo, Yahoo also had to tweak
the FreeBSD code because they had trouble
So I'm trying to show that mod_perl doesn't suck, and that it is, in fact,
a reasonable choice. Though within these limits it is still reasonable to
point out the development cycle, emotionally it is the least compelling
form of argument, because the investor has a hard time removing from
So our solution was caching in-process with just a hash, and using a
DBI/mysql persistent store.
In pseudo code:
sub get_stuff {
    if (!$cache{$whatever}) {
        if (!($cache{$whatever} = dbi_lookup($whatever))) {
            $cache{$whatever} = derive_data_from_original_source($whatever);
            dbi_save($cache{$whatever});
        }
    }
    return $cache{$whatever};
}
Some bug report about Apache::SizeLimit diagnostic:
I don't know about Linux and Solaris but under
FreeBSD shared memory shows some incredible numbers:
Okay, I'll ask the guy who wrote the *BSD support to look into it. I don't
have a FreeBSD system to test with.
And a recommendation -
Perrin Harkins wrote:
Try changing the call
$r->child_terminate() to Apache::exit(). If this seems to work better
for you, let me know and I'll consider changing this in a future release
of Apache::SizeLimit.
Geoff wrote:
what about
$r->headers_out->add(Connection => 'close');
I tried
You should use it in an early phase, like PerlFixupHandler. It pushes a
cleanup handler if it needs to exit. It will not exit until after the
request is done.
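Per that advice, a typical Apache::SizeLimit hookup looks like this (the threshold is illustrative; sizes are in KB):

```perl
# startup.pl
use Apache::SizeLimit;
$Apache::SizeLimit::MAX_PROCESS_SIZE = 64_000;   # ~64MB; pick your own ceiling

# httpd.conf then registers it in an early phase, as suggested above:
#   PerlFixupHandler Apache::SizeLimit
```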
I didn't know that. I think you should document it.
But anyway, I think it's better to check the size in cleanup.
I agree and I plan