RE: Installing libwww

1999-10-29 Thread Eric Cholet

Man, that's some ps program you've got here. I should run Linux
one of these days.

   PID TTY STAT   TIME COMMAND
     1  ?  S      0:05 init [2]
     2  ?  SW     0:06 (kflushd)
     3  ?  SW     0:07 (kupdate)
     4  ?  SW     0:00 (kpiod)
     5  ?  SW     0:04 (kswapd)
    82  ?  S      0:00 /sbin/syslogd
    84  ?  SW     0:00 (klogd)
    90  ?  S      0:00 /usr/sbin/inetd
   369  ?  R      0:00  \_ in.telnetd: cobra.ssu.samara.ru [vt100]
   370  p1 S      0:00  \_ -tcsh
  6580  p1 S      0:00  \_ screen
  6581  ?  R      0:00  \_ SCREEN
  6582  a0 S      0:00  \_ -/usr/bin/tcsh
  6583  a0 S      0:00  |   \_ -tcsh
  6633  a0 S      0:00  |   \_ make test
  6656  a0 S      0:00  |   \_ /usr/bin/perl t/TEST 0
  6679  a0 S      0:00  |   \_ /usr/bin/perl -w robot/ua.t
  6680  a0 Z      0:00  |   \_ (perl zombie)
  6672  a1 S      0:00  \_ -/usr/bin/tcsh
  6683  a1 S      0:00  \_ -tcsh
  6699  a1 R      0:00  \_ ps axf



Re: modperl in practice

1999-10-29 Thread Vivek Khera

 "j" == jb  [EMAIL PROTECTED] writes:

j comments or corrections most welcome.. i freely admit to not having
j enough time to read the archives of this group before posting.

Well, that would have saved you lots of effort in re-learning the
lessons discussed and documented regarding memory use, and handling
static content.

Most of what you ramble on about memory is what the performance tuning
docs are about.  Perhaps you should review them and send any additions
you may have from your own experience.  I'll be glad to incorporate
them into the distributed version.

-- 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Vivek Khera, Ph.D.                Khera Communications, Inc.
Internet: [EMAIL PROTECTED]       Rockville, MD   +1-301-545-6996
PGP & MIME spoken here            http://www.kciLink.com/home/khera/



No permission to access document after $r->internal_redirect

1999-10-29 Thread Oleg Bartunov

Hi,

I have a mod_perl handler which redirects the user to a random document,
and it works fine if I use:
 $r->header_out(Location => 'pubs.html?msg_id=' . $msg_id);
 return REDIRECT;

but if I use internal redirect:

  $r->internal_redirect($random_uri);
  return OK;

I get:

 You don't have permission to access
   http://mira.sai.msu.ru:5000/db/pubs.html?msg_id=67 on this server.

If I access directly that URL I have no problem.
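
For context, here is a condensed sketch of the two variants (package name, URI and
the msg_id value are placeholders, not the actual code from the post):

package My::Random;
use strict;
use Apache::Constants qw(OK REDIRECT);

sub handler {
    my $r = shift;
    my $msg_id = 67;                       # placeholder for the randomly chosen id

    if ($r->dir_config('UseExternal')) {
        # External redirect: the client is told to fetch the new URL itself
        $r->header_out(Location => 'pubs.html?msg_id=' . $msg_id);
        return REDIRECT;
    }

    # Internal redirect: Apache re-runs all request phases for the new URI in
    # the same child, so that URI's own access rules apply to this request too
    $r->internal_redirect("/db/pubs.html?msg_id=$msg_id");
    return OK;
}
1;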

This is a two-server configuration with a proxy frontend and a CGI backend:
port 5000 is the proxy and port 7000 is the backend.


Oleg 




_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83



hostname sanitation in PerlTransHandler or anywhere else

1999-10-29 Thread Dan Rench

We use a TransHandler to (among other things) manage name-based virtual
hosts (simply put, given the incoming Host: header plus URI, map to a file).

We (of course) sanitize the incoming URI and Host.  It works fine.
I "save" the sanitized hostname like so:

$r->header_in('Host', $host);
$r->subprocess_env('SERVER_NAME', $host);
$r->parsed_uri->hostname($host);

I used to use just the first line, but I added the other two thinking
they might fix our problems...
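
For reference, a stripped-down sketch of the sort of handler described above (the
sanitize() rule and the final host-to-file mapping are placeholders):

package My::Trans;
use strict;
use Apache::Constants qw(DECLINED);

sub handler {
    my $r = shift;
    my $host = sanitize(lc($r->header_in('Host') || ''));

    # save the cleaned-up name where later phases and scripts will look for it
    $r->header_in('Host', $host);
    $r->subprocess_env('SERVER_NAME', $host);
    $r->parsed_uri->hostname($host);

    # map Host + URI to a file here, or decline to the default translation
    return DECLINED;
}

# a deliberately naive placeholder: keep only characters valid in a hostname
sub sanitize {
    my $h = shift;
    $h =~ s/[^a-z0-9.\-]//g;
    return $h;
}
1;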


First problem (somewhat minor):

$ENV{'SERVER_NAME'} remains "unsanitized" (i.e., it's still exactly what
the client sent in the "Host:" header).  This is not a big deal because the
sanitized host gets set properly in $ENV{'HTTP_HOST'}.  Scripts can just
use that variable instead.

Second problem (bigger):

For logging, we use CLF with the virtual host name tacked on the front
of the line (using %V in the LogConfig).  Yes we have "UseCanonicalName On"
and I've read http://apache.org/docs/mod/core.html#usecanonicalname so I
know that %V and SERVER_NAME get set to whatever the client sends.
(and I can't turn it off, because then %V is always ServerName, and
suddenly no "virtual hosts").

I experimented by putting the host sanitation in a PostReadRequestHandler.
Same results.  I thought this phase was "...where you can examine HTTP
headers and change them before Apache gets a crack at them" (TPJ #9, p.6).


Here's the relevant bit of Apache code (1.3.9) in http_core.c in the
ap_get_server_name() function:

if (d->use_canonical_name == USE_CANONICAL_NAME_OFF) {
    return r->hostname ? r->hostname : r->server->server_hostname;
}

There's no $r->hostname method in mod_perl that I can find, and
unfortunately $r->server->server_hostname is read-only.

I can only think of a couple options: hack http_core.c to do what I want,
or write a custom LogHandler that uses the sanitized host.

Is there any other way?
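
If the custom LogHandler route wins out, an untested sketch might look like the
following (the log path and the line format are made up; it uses the sanitized
Host header instead of %V):

package My::VHostLog;
use strict;
use Apache::Constants qw(OK DECLINED);
use Apache::File ();

# httpd.conf:  PerlLogHandler My::VHostLog
sub handler {
    my $r = shift;
    $r = $r->last;    # log the final request in an internal-redirect chain

    my $host = $r->header_in('Host') || $r->server->server_hostname;
    my $line = sprintf qq{%s %s - - [%s] "%s" %d %s\n},
        $host,
        $r->get_remote_host,
        scalar localtime($r->request_time),
        $r->the_request,
        $r->status,
        $r->bytes_sent || '-';

    my $fh = Apache::File->new('>>/var/log/apache/vhost_access.log')
        or return DECLINED;
    print $fh $line;
    return OK;
}
1;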



PS: I'd still like to hear from anyone who is running mod_perl on
Solaris 2.5.1 with Perl 5.005_03 -- I don't want to stick with 5.004 forever.



Re: hostname sanitation in PerlTransHandler or anywhere else

1999-10-29 Thread Francesc Guasch

Dan Rench wrote:
 
 PS: I'd still like to hear from anyone who is running mod_perl on
 Solaris 2.5.1 with Perl 5.005_03 -- I don't want to stick with 5.004 forever.

If only you could upgrade to solaris 2.6. I have it running:

SunOS 5.6
This is perl, version 5.005_03 built for sun4-solaris
apache-1.3.9
mod_perl-1.21


-- 
 ^-^,-. mailto:[EMAIL PROTECTED]
 o o   )http://www.etsetb.upc.es/~frankie
  Y (_ (__(OOOo



Memory Leaks?

1999-10-29 Thread Ben Bell

Hi,

I'm using the Debian package of mod_perl (1.21) and apache 1.3.9 and
I've noticed quite nasty memory leaks on server restart. I've noticed
unresolved bug reports on the Debian pages about this. Is it a known
issue with this version?

The leak is ca. 2MB on each restart (or graceful) with my startup script
enabled (it queries a database and uses the following modules):
 Apache
 Apache::PerlSections
 Apache::DBI
 Data::Dumper
 Carp
 VI::Utils

All my vars are declared as "my".
When I disable all Perl stuff (my startup script, the perl sections etc)
I still see a memory leak, albeit a smaller one.

Can anyone shed any light on this?

Cheers,
Ben


-- 
+-Ben Bell - "A song, a perl script and the occasional silly sig.-+
  ///  email: [EMAIL PROTECTED]www: http://www.deus.net/~bjb/
  bjbDon't try to drive me crazy... 
  \_/...I'm close enough to walk. 



Re: modperl in practice

1999-10-29 Thread Leslie Mikesell

According to [EMAIL PROTECTED]:
 
 I still have resisted the squid layer (dumb
 stubbornness I think), and instead got myself another IP address on the
 same interface card, bound the smallest most light weight separate
 apache to it that I could make, and prefixed all image requests with 
 http://1.2.3.4/.. voila. that was the single biggest jump in throughput
 that I discovered.

You still have another easy jump, using either squid or the two-apache
approach.  Include mod_proxy and mod_rewrite in your lightweight
front end, and use something like:
RewriteRule ^/images/(.*)$ - [L]
to make the front end deliver static files directly, and at the end:
RewriteRule ^(.*)$ http://127.0.0.1:8080$1 [P]
to pass the rest to your mod_perl httpd, moved to port 8080.
If possible with your content, turn off authentication in
the front-end server.
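
Put together, the relevant front-end fragment might look roughly like this (the
loopback address, port and the ProxyPassReverse line are assumptions to adapt,
not a drop-in config):

# lightweight front-end httpd.conf fragment
RewriteEngine On
# serve static files (images) directly from this server
RewriteRule ^/images/(.*)$ - [L]
# everything else is proxied to the mod_perl httpd on port 8080
RewriteRule ^(.*)$ http://127.0.0.1:8080$1 [P]
# rewrite Location headers from the back end so clients keep using the front end
ProxyPassReverse / http://127.0.0.1:8080/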

.. people were connecting to the site via this link, and packet loss
 was such that retransmits and tcp stalls were keeping httpd heavies
 around for much longer than normal..

Note that with a proxy, this only keeps a lightweight httpd tied up,
assuming the page is small enough to fit in the buffers.  If you
are a busy internet site you always have some slow clients.  This
is a difficult thing to simulate in benchmark testing, though.
 
 comments or corrections most welcome.. i freely admit to not having
 enough time to read the archives of this group before posting.

I probably won't be the only one to mention this, but you might have had
a lot more time if you had, or had at least gone through the guide
at http://perl.apache.org/guide/, which covers most of the problems.

  Les Mikesell
   [EMAIL PROTECTED] 



Re: hostname sanitation in PerlTransHandler or anywhere else

1999-10-29 Thread Dan Rench


On Fri, 29 Oct 1999, I wrote:

 I can only think of a couple options: hack http_core.c to do what I want,
 or write a custom LogHandler that uses the sanitized host.

We've decided on another option: if you're sending a Host: header that
needs "sanitation," then either 1) you're trying to run some kind of
"exploit" or 2) you're using a very broken browser.  We're going to
just punt and send you a 404 right there.  The end.

BTW, it was a broken client calling itself "NetAttache/2.5" that started
this whole thing.



FW: Apache::Resource

1999-10-29 Thread Simon Miner

Ok, I've had the chance to experiment a bit more with
BSD::Resource::setrlimit function.  After I sent my last message to the
mailing list, I again accessed the Apache::Status Resource Limits page,
and this time RLIMIT_DATA was unset.  I reloaded the page several times,
and the behavior seemed sporadic -- sometimes RLIMIT_DATA was set and
sometimes not. I added a line of code to the Apache::Resource module to
print out the PID before the resource limits table, and it seems that
only one or two of the processes have RLIMIT_DATA set.  These are child
processes, too.

I got this behavior whether I installed my setrlimit-calling module as a
PerlFixupHandler or as a PerlChildInitHandler.

Can anyone tell me how to consistently set RLIMIT_DATA for all httpd
processes?
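
For comparison, a minimal child-init handler that calls setrlimit directly might
look like this (the package name and the 32/64 MB numbers are assumptions, not
recommendations):

package My::Limits;
use strict;
use BSD::Resource;            # exports setrlimit() and the RLIMIT_* constants
use Apache::Constants qw(OK);

# httpd.conf:  PerlChildInitHandler My::Limits
sub handler {
    # soft limit 32 MB, hard limit 64 MB on this child's data segment
    setrlimit(RLIMIT_DATA, 32 * 1024 * 1024, 64 * 1024 * 1024)
        or warn "setrlimit(RLIMIT_DATA) failed: $!";
    return OK;
}
1;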

Thanks.

- Simon

 -Original Message-
 From: Simon Miner [SMTP:[EMAIL PROTECTED]]
 Sent: Thursday, October 28, 1999 3:11 PM
 To:   'Vivek Khera'; Simon Miner
 Cc:   'mod_perl Mailing List'; Win Mattina
 Subject:  RE: Apache::Resource
 
 Interesting...
 
 When I use Apache::Resource as my PerlFixupHandler, Apache::Status
 reports that there is no resource limit set for RLIMIT_DATA.  However,
 when I explicitly call BSD::Resource::setrlimit in my custom
 PerlFixupHandler, Apache::Status says that the resource limits are
 set.
 So, it still appears that there's something amiss with
 Apache::Resource.
 Any ideas?
 
 - Simon
 
  -Original Message-
  From:   Vivek Khera [SMTP:[EMAIL PROTECTED]]
  Sent:   Thursday, October 28, 1999 2:37 PM
  To: Simon Miner
  Cc: 'mod_perl Mailing List'; Win Mattina
  Subject:Re: Apache::Resource
  
   "SM" == Simon Miner [EMAIL PROTECTED] writes:
  
   SM Hi:
   SM I'm having difficulty getting Apache::Resource to work.  I've tried
   SM calling the module as a PerlChildInitHandler and as a PerlFixupHandler,
   SM and I've tried telling it which resource to limit in the following ways.
  
  Working example:
  
  # protect from runaway child processes.
  # Apache::Resource loaded in startup.perl below.
  #
  # limit CPU usage in seconds
  PerlSetEnv PERL_RLIMIT_CPU 60:600
  # limit DATA segment in MB
  PerlSetEnv PERL_RLIMIT_DATA 32:64
  PerlChildInitHandler Apache::Resource
  
   # startup.perl loads all functions that we want to use within mod_perl
   # this script is run while we are still "root".
   PerlScript /home/khera/proj/newprizes/website/conf/startup.perl
  
  And my startup.perl has "use Apache::Resource".
  
   If you configure Apache::Status, it will let you review the resources
   set this way.
   
   Now, the only question I might have is: "does Solaris 2.7 support BSD
   style resource limits"?  I don't know.  That may be the problem.
  
  -- 
 
  =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
   Vivek Khera, Ph.D.                Khera Communications, Inc.
   Internet: [EMAIL PROTECTED]       Rockville, MD   +1-301-545-6996
   PGP & MIME spoken here            http://www.kciLink.com/home/khera/



Re: Generic Server

1999-10-29 Thread Rudy


I'll use POP3 as my example, although any other service (e.g. telnet, ssh, FTP, SMTP)
is equally valid.

Having apache run on a non-http port, say port 110 (POP3), could be handy.  You could 
even have POP3 running elsewhere and use the POP3 module:
 o  to proxy POP3 requests inside a firewall, or 
 o  to proxy to a POP3 running on a non-standard port (eg 10110), or
 o  to get POP3 mail from multiple accounts!
Imagine, a custom mod_perl POP3 server which grabs mail from all
your email boxes all over the net.

There are two major stopping points from being able to do this today with 
Apache/mod_perl:

 [1] POP3 clients do not send HTTP headers.  Is there already a 
 way in mod_perl to get a request before the HEADERS are parsed?

 [2] POP3 clients have 'interactive' connections.  Is there a way in 
 apache/mod_perl to read/write more info from a socket without 
 dropping the connection?


James G Smith wrote:
 
 Matt Sergeant [EMAIL PROTECTED] wrote:
 Would it be possible to have a generic server, like Apache, but not just
 for HTTP - something that could also serve up NNTP connections, FTP
 connections, etc. It seems to me at first look this should be possible.
 
 As I can see it there's a few key components to Apache:
 
 forking tcp/ip server
 file caching/sending
 header parsing
 logging
 
 Sounds a lot like inetd to me, IMHO.

Well, if you are not into performance, you could shut down apache and have a perl 
script run every time an access is made to port 80!  Obviously, there would be some 
benefits to having apache/mod_perl up and running on non-http ports (e.g. POP3).

Neil Kandalgaonkar wrote:
 
 Matt Sergeant [EMAIL PROTECTED] wrote:
  Am I completely
 wacko or is this something that potentially could be possible?
 
 Not wacko. Although it may not be desirable, at least in the way you
 envision. Say there's a bug in one of your HTTP mod_perl modules, do you
 want to lose SMTP?

You could have *two* apaches running, one on port 80 and another on port 25 (SMTP).
You would probably want to do this anyway, considering you could build your two
apaches very differently.

Rudy



Re: Generic Server

1999-10-29 Thread Matt Sergeant

On Fri, 29 Oct 1999, James G Smith wrote:
 Matt Sergeant [EMAIL PROTECTED] wrote:
 I don't think this is currently possible with the current Apache, but hear
 me out.
 
 Would it be possible to have a generic server, like Apache, but not just
 for HTTP - something that could also serve up NNTP connections, FTP
 connections, etc. It seems to me at first look this should be possible.
 
 As I can see it there's a few key components to Apache:
 
 forking tcp/ip server
 file caching/sending
 header parsing
 logging
 
 Sounds a lot like inetd to me, IMHO.

Maybe I'm wrong, but inetd is just #1 of those points. And slow too.

--
Matt/

Details: FastNet Software Ltd - XML, Perl, Databases.
Tagline: High Performance Web Solutions
Web Sites: http://come.to/fastnet http://sergeant.org
Available for Consultancy, Contracts and Training.



Re: No-Cache Question ?

1999-10-29 Thread Cliff Rayman

I use the following with regular headers, but try it with err headers to
see if it works as well:

  $r-err_header_out("Pragma","no-cache");
  $r-err_header_out("Cache-control","no-cache");

cliff rayman
genwax.com

Naren Dasu wrote:

 Hi,

 I am doing some testing with an Apache/mod_perl server.  And since it is
 still under development, I do get an occasional server error ;-) .  How can
 I set no-cache on the error pages that the server generates?

 I need to do this because the browser that I am using is very limited in
 capability; once it gets an error page, it will not try to reload the
 URL (unless the page has no-cache set), and I have to reboot the device to get it
 to flush the cache.

 thanks in advance
 naren



RE: Generic Server

1999-10-29 Thread Eric Cholet

 I'll use POP3 as my example, although any other service (eg telnet, ssh, 
FTP, SMTP) are equally valid.

 Having apache run on a non-http port, say port 110 (POP3), could be 
handy.  You could even have POP3 running elsewhere and use the POP3 module:
  o  to proxy POP3 requests inside a firewall, or
  o  to proxy to a POP3 running on a non-standard port (eg 10110), or
  o  to get POP3 mail from multiple accounts!
 Imagine, a custom mod_perl POP3 server which grabs mail from all
 your email boxes all over the net.

 There are two major stopping points from being able to do this today with 
Apache/mod_perl:

  [1] POP3 clients do not send HTTP headers.  Is there already a
  way in mod_perl to get a request before the HEADERS are parsed?

  [2] POP3 clients have 'interactive' connections.  Is there a way in
  apache/mod_perl to read/write more info from a socket without
  dropping the connection?


The whole Apache request model doesn't map at all well onto the POP3 model.
You just can't fit the POP3 dialog into the HTTP request model; it's not just
the headers.  The hook has to be at a lower level.  You have a core engine that
handles the processes/threading, and listens for and dispatches incoming
connections.  This is where new POP3 core code steps in.  You need to define
your own abstractions for POP3 concepts.  You won't have one request, but many.
The phases will be different.  Once this is implemented and has appropriate
hooks and a nice API, you can start writing mod_perl_pop3, which embeds the
Perl interpreter and lets you write POP3 perl scripts.

Actually IIRC apache 2 already has a protocol-independent core with hooks
that can hand over a connection to a specific protocol handler.  I don't know
however the extent of the support, e.g. for dispatching to modules.

  Matt Sergeant [EMAIL PROTECTED] wrote:
  Would it be possible to have a generic server, like Apache, but not 
just
  for HTTP - something that could also serve up NNTP connections, FTP
  connections, etc. It seems to me at first look this should be 
possible.
  
  As I can see it there's a few key components to Apache:
  
  forking tcp/ip server
  file caching/sending
  header parsing
  logging

I'll add to this list:

configuration
module management
generic i/o

I'm not sure I see how header parsing is a generic thing. Seems to me
different protocols handle headers in different ways, at different times.


  Not wacko. Although it may not be desirable, at least in the way you
  envision. Say there's a bug in one of your HTTP mod_perl modules, do 
you
  want to lose SMTP?

I'd want to dedicate one pool of processes to SMTP traffic, and another to
HTTP, since the resource requirements are different.

 You could have *two* apaches running, one on port 80 and another on port 
25 (SMTP).

Actually to run a useful SMTP server you pretty much have to use port 25 :-)

Now, a mod_perl enabled SMTP server would be very cool.  Forget canonical maps,
write a PerlAddyHandler!

--
Eric



Newbie Questions on Apache with embPerl

1999-10-29 Thread Melvin Bernstein

Greetings all, I have a basic Apache/Embedded Perl config. question.
Currently, on my development server (Apache/1.3.6 (Unix) Debian/GNU
mod_perl/1.19) there is only a single directory root that will
recognize ".html" as an embedded Perl file.  I wish to get rid of the
"*Logfile*  *Source only*  *Eval*" debugging links which appear atop
each embedded Perl page processed.  Can I simply create an additional
<Directory> section where "SetEnv EMBPERL_DEBUG 10477" does not
precede it, or is somehow not stipulated, without interfering with the
existing setup?  Here is a portion of the "access.conf" file in the
/etc/apache directory:

<Location /log>
    SetHandler perl-script
    PerlHandler HTML::Embperl
    Options ExecCGI
    order allow,deny
    allow from all
</Location>
SetEnv EMBPERL_DEBUG 10477
PerlSetEnv EMBPERL_VIRTLOG /log
PerlModule Apache::Registry

<Directory /zzz/www/www/aDir/>
    Options Indexes ExecCGI
    order allow,deny
    allow from all
    <Files *.*html>
        SetHandler perl-script
        PerlHandler HTML::Embperl
        Options ExecCGI
    </Files>
    <Files *.pl>
        SetHandler perl-script
        PerlHandler Apache::Registry
        Options ExecCGI
    </Files>
</Directory>
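
One thing that might work (an untested sketch; the path is a placeholder, and
whether the per-directory setting really overrides the global SetEnv here is
worth verifying) is a second <Directory> block that sets its own debug flag:

<Directory /zzz/www/www/otherDir/>
    Options Indexes ExecCGI
    order allow,deny
    allow from all
    PerlSetEnv EMBPERL_DEBUG 0
    <Files *.*html>
        SetHandler perl-script
        PerlHandler HTML::Embperl
        Options ExecCGI
    </Files>
</Directory>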

Thanks very much.



optRedirectStdout

1999-10-29 Thread randyboy

Does this option redirect stdout for every connection or (hopefully) just
ones that are tied to the embperl handler?

Just making sure.

r.



modperl in practice.

1999-10-29 Thread jb

Hi, I've just emerged from about 5 months of fairly continuous late
night development.  What started as a few Berkeley DB files and a tiny
CGI perl script has now grown to (for me) a monster size, kept largely
at bay by modperl..

I hope this story will be interesting to those who are taking the first
steps, and forgive any idiocy or simple wrong-headedness, because
I was too dumb to RTFM (or rather, RTFMLArchive), so I learned some
things the hard way, and am still learning.

The site is a not a commercial one, which is why I am free to
ramble on about it.. I am sure there are far more impressive modperl
sites now under NDAs or corporate lawyer mufflers, but without those
to read, you get mine ;)

As it stands, the site is entirely dynamically generated, and visited
by 4 to 10 thousand people a day. I wish that I could say that I read
the modperl FAQ and wrote it to handle that load from day one, but
that was not the case:

My understanding of why modperl was cool came tangentially from
Philip Greenspun on photo.net rather single-mindedly promoting, albeit
with persuasion, the AOLserver... the story that made the most impression
was the Bill Gates personal wealth clock.. getting onto the Netscape home page
for a day, he had to cope with 2 hits per second that needed code run.
Thereby proving the point neatly that a fork/exec/cgi isn't going to cut
it.  At the time, 2 hits a second seemed pretty impressive.. my god, that's
two people per SECOND.. I mean, oh what, 7,200 per hour? 72,000 during
daylight?  well anyway, I digress.

Using AOLserver wasn't an option for me, personally because I am too tired
to get back into Tcl and stub my toe 10 times a week once the programs
got so big that side-effects ruled.. I already thought Apache was the
swiss army knife of webservers, Perl 5 and CPAN were already my home, and
the fact that Slashdot used it (and modperl, as I slowly discovered)
just cemented the choice.

But just when you've climbed a peak you discover whole new and steeper
slopes ahead...

First quandary

I had no idea what traffic for my website was likely to be, and even
less idea what that translated to in terms of bandwidth.. these things
they don't teach you in CNET!  I wince to remember at one point,
misunderstanding a Malda comment about an ISDN line to their house,
I even asked in wonderment if they hosted Slashdot on 128kbps!
Nevertheless, the relationship between visitors, pages, and bandwidth
is not well understood, because asking at some high-powered places got
me no good answers, including at Philip's site as well.

Well, I figured, bandwidth is at least something you can get more of if
you need it and want to pay, so I decided to watch it, and let that
one answer itself. (more on that later).

The first attempts.

Initially I set up squid in accelerator mode, and looked at the result,
but during development, hits were few, and bugs were many, so dealing
with an extra layer of config files and logs was too much, so I put squid
behind 'in case of emergency break glass', and switched back to
straight apache/modperl.

Due to the excellent online Apache modperl book, which I think back in
June was still being put together, and particularly the performance
tuning guide, I discovered how to set up a startup.pl that gave nice
backtraces for errors (the Carp module and code fragment), plus I managed
to navigate the global variable problem, and always used strict..
to this day though, I find having a small set of global variables
that needs to be shared amongst some modules to be somewhat of a pain..
my current solution of putting them all in a hash stored in
another module is OK, but kind of annoys me for some reason.
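
For illustration, the "hash in another module" pattern usually boils down to
something like this (every name and value here is made up):

# MySite/Config.pm
package MySite::Config;
use strict;
use vars qw(%CONF);       # pre-5.6 stand-in for "our"

%CONF = (
    db_dsn    => 'dbi:mysql:mysite',
    image_url => 'http://1.2.3.4/images',
    page_size => 25,
);

1;

Handlers then say "use MySite::Config;" and read $MySite::Config::CONF{db_dsn};
fully qualifying the name keeps use strict quiet without exporting anything
into every package.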

Newbie mistakes

Watching my machine choke when a syntax error caused apache to pile up
until swap was exhausted was not fun, especially on a live server, so I
became paranoid about syntax checking.. a vimrc from dejanews
made a great perl IDE, allowing me to '!make' and get taken to each
error as I put them in, rather than in a batch at the end from the
command line.  This hack, I think, was my personal favourite of 1999
(so sue me, a vi bigot.. I can't/won't sit in emacs all day).

DB files are no way to run a war

Starting with DB files was probably the silliest decision I made.  They
work fine in testing, but when you pressure them (i.e., "go live"), they
start to fall apart.  Elaborate locking/unlocking code fragments exist,
but I don't trust them, and have the corrupted DBs to prove it.

MySQL goes with modperl like sushi and rice

I can't say enough good things about MySQL.  Of literally all the lego
I've used in this web site, this has been rock solid... Despite doing
at times ridiculously silly things with tables, rows and fields, it
has never ONCE crashed or corrupted the data.  At my place of employ
you cannot say that about any commercial database they use, and they
are actually under less stress, plus have DBAs running around keeping
them on course... Oracle may not corrupt, but I'm damn 

RE: embperl & plain text

1999-10-29 Thread Gary Shea

Hey Gerald --

I just noticed that you had in fact made an optSaveSpaces in 1.2b10.
Thanks!  I had tried to access your CVS stuff via the two methods you
mentioned, couldn't get it working, and more or less gave up until
I could figure out what I was doing wrong... didn't want to bug you
until I knew for sure where the problem was.

You'll be (less than) pleased to know I managed to SEGV HTML::Embperl
today (in normal usage, not mod_perl).  It's in cleanup, and I'm trying
to get the $HTML::Embperl::dbgShowCleanup variable to do what I think
it should so I can see what's happening.  No luck yet but... oh, it
runs fine when the script is run as a normal user, but when run under
BSDI as the same user but with a forced 'login class' (with any luck
you've never had to deal with _that_ little nightmare...), it SEGV's!
Amazing.

More later...

Gary

On Thu, 17 Jun 1999, Gerald Richter wrote:
 Hi,
 
 in epmain.c about line 764 is the following code:
 
 
 /* skip trailing whitespaces */
 while (isspace(*pAfterWS))
 pAfterWS++ ;
 
 if (nType == '+' && pAfterWS > p)
 pAfterWS-- ;
 
 
 if you delete these lines, whitespace shouldn't change. Let me know if it
 works and I make it an option in the next release
 
 Gerald
 
 
 
 ---
 Gerald Richter
 ECOS  Electronic Communication Services
 Internet - Faxabruf - Infodatenbanken
 
 E-Mail: [EMAIL PROTECTED]
 WWW:http://www.ecos.de
 Tel:+49-6133/925151
 Fax:+49-6133/925152
 Faxabruf:   +49-6133/93910100
 
 
  -Original Message-
  From: Gary Shea [mailto:[EMAIL PROTECTED]]
  Sent: Wednesday, June 16, 1999 11:07 PM
  To: [EMAIL PROTECTED]
  Subject: embperl & plain text
 
 
  Hi, I've been using Embperl with excellent results for some time
  now, but recently ran into something I haven't been able to figure
  out.  I use Embperl in a system where it generates both web pages
  and email messages in plain text.  The web pages work great.
  Using it to format email messages is, I know, not exactly what
  you had in mind (it's not HTML!), but my only alternative is
  to go back to my home-brewed pre-Embperl code, which frankly isn't
  as nice as Embperl, or to use ePerl, which I'm not that
  excited about.
 
  My problem is that Embperl seems to play fast and loose as far
  as removing and adding white space.  A page which is composed
  of text, simple variable substitutions, and substitutions which
  are the results of previous Embperl substitutions, shows almost
  (I say almost 'cause it's a computer after all...) random
  additions of leading and trailing white space, and seemingly
  random subtractions of blank lines.  Does this seem possible?
  I am pretty convinced it is really happening...
 
  I understand that an HTML-specific tool has no need to respect
  white space, but I was hoping that it might be straightforward to get
  Embperl to respect white space.
 
  Here's an example of a typical call to Embperl where the problem
  shows up.
 
  require HTML::Embperl;
 
  my $line;
   HTML::Embperl::Execute ({
   'debug' => 0,
   'escmode' => 0,
   'inputfile' => $tpl,
   'options' =>
   HTML::Embperl::optDisableChdir ()
   | HTML::Embperl::optDisableEmbperlErrorPage ()
   | HTML::Embperl::optDisableFormData ()
   | HTML::Embperl::optDisableHtmlScan ()
   | HTML::Embperl::optRawInput (),
   'output' => \$line,
   'param' => $pairs,
   });
  $line =~ s/^\s*//;
  $line =~ s/\s*$//;
 
  The s/// stuff is there to get rid of random added leading/trailing
  white space.  But the funniest part is that this exact same piece of code,
  on a different template, will somehow remove ALL blank lines from
  the code!  Except one.  Weird!
 
  I'm beginning to dig through the code now, but hints would be welcome...
 
  Thanks!
 
  Gary
 
 
 

-
Gary Shea   [EMAIL PROTECTED]
Salt Lake City  http://www.xmission.com/~shea



Handling caches, and handling persistent connections in modperl

1999-10-29 Thread Nicolas MONNET


I'm happily using mod_perl to serve content stored in a mysql database.
Basically, the content does not change really often, and it may be
important (BLOBs containing images). 

It seems that handling cache-related stuff would be a big win. For
example, the If-Modified-Since: (sp?) header and that kind of stuff.

Question: 
- does apache/modperl handle some of that stuff?
- is it useful to handle HEAD requests specially?

Next, what do I have to do to enable persistent connections on the server
(it's images in a database, so they will actually be used!)?  Do I just need
to provide a Content-Length header, and apache/modperl will automagically
reuse the current connection?
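
For reference, a rough sketch of the usual mod_perl 1.x conditional-GET idiom
(the fetch_image() lookup and the content type are placeholders):

package My::ImageHandler;
use strict;
use Apache::Constants qw(OK);
use Apache::File ();   # adds update_mtime, set_last_modified, meets_conditions

sub handler {
    my $r = shift;

    my ($mtime, $blob) = fetch_image($r->uri);   # hypothetical database lookup

    $r->content_type('image/jpeg');
    $r->update_mtime($mtime);
    $r->set_last_modified;
    $r->set_content_length(length $blob);   # a known length helps keep-alive

    # returns HTTP_NOT_MODIFIED etc. if If-Modified-Since and friends match
    my $rc = $r->meets_conditions;
    return $rc unless $rc == OK;

    $r->send_http_header;
    $r->print($blob) unless $r->header_only;     # HEAD requests get headers only
    return OK;
}
1;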

Thanks for your help!

-- 
Nicolas MONNET, Technical Director, IT-Xchange

URL:http://www.it-xchange.com
URL:mailto:[EMAIL PROTECTED]
URL:mailto:[EMAIL PROTECTED]



If-Modified-Since, never mind ...

1999-10-29 Thread Nicolas MONNET


Ok, eagle book, p120-121 ... 

-- 
Nicolas MONNET, Technical Director, IT-Xchange

URL:http://www.it-xchange.com
URL:mailto:[EMAIL PROTECTED]
URL:mailto:[EMAIL PROTECTED]



Re: Generic Server

1999-10-29 Thread Rasmus Lerdorf

 I don't think this is currently possible with the current Apache, but hear
 me out.
 
 Would it be possible to have a generic server, like Apache, but not just
 for HTTP - something that could also serve up NNTP connections, FTP
 connections, etc. It seems to me at first look this should be possible.

Apache 2.0's architecture is such that this is possible and there are
going to be other protocols supported at some point.

-Rasmus



Session state without cookies

1999-10-29 Thread Trei Brundrett

I'm reworking an existing web store CGI script to better handle shopping
carts. I'm going to use Apache::Session to manage these shopper sessions.
The store is a mixture of static HTML and CGI generated pages and I want to
maintain the session across the entire site.

The only issue I've encountered is the distinct possibility of users without
cookies.  I've searched the list archive for solutions to this problem, but
came up with no definitive answer.  The Apache::Session documentation states
that this issue is left up to the developer.  Are there any existing modules
which put the session id on the query string across both static and dynamic
pages and give you easy access to that value so you can utilize it in a
CGI?  If there isn't an existing module, does anyone have anything in
development?
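
One rough do-it-yourself approach (everything below is a sketch, not an existing
module) is to embed the session id as a leading path component, so it survives
links on static pages, and peel it off in a PerlTransHandler:

# URLs look like /s/<32-hex-digit-id>/shop/item.html
package My::SessionTrans;
use strict;
use Apache::Constants qw(DECLINED);

sub handler {
    my $r = shift;
    my $uri = $r->uri;
    if ($uri =~ s{^/s/([0-9a-f]{32})}{}) {
        $r->notes(SESSION_ID => $1);   # later handlers and scripts read this
        $r->uri($uri || '/');
    }
    return DECLINED;                   # let normal translation continue
}
1;

Pages (static or CGI-generated) would then prepend "/s/$id" to every local link;
relative links keep working because the prefix stays part of the path.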

thanks,
Trei Brundrett
-
[EMAIL PROTECTED]
http://www.mediatruck.com
Mediatruck, Inc.
-



Embperl - where are the cookies ?

1999-10-29 Thread George P. Pipkin

Hi Everybody - 

   I have been playing around with Embperl.  I have a little script that
resembles the counter test mentioned in the docs, and it appears to
run.  Problem is, no cookies get set.  And the value of the counter is
erratic.  One browser will appear to pick up the count from another. 
Then it will jump back.  Here is the script:


<html>
<h1>Test of session features</h1><hr>
[+ if($udat{counter} == 0){$udat{counter} = 1} +]
The page is requested [+ $udat{counter}++ +] occasions
since [+ $udat{date} ||= localtime +]
<br>
cookies: [+ $ENV{HTTP_COOKIE} +]
</html>
~

Incidentally, $ENV{HTTP_COOKIE} never shows any value at all.  I have the
session mechanics hooked up to a mysql database.  Here's the setup stuff
in startup.pl:

$ENV{EMBPERL_SESSION_CLASSES} = "DBIStore SysVSemaphoreLocker";
$ENV{EMBPERL_SESSION_ARGS} = "DataSource=dbi:mysql:gpp8p_casenet UserName=gpp8p Password=xxx";
use Apache::Session;
use HTML::Embperl;

And BTW, I did set up the two tables in that database

Any ideas?

- George Pipkin


-- 
***
George P. Pipkin h - (804)-245-9916
1001 Emmet St.   w - (804)-924-1329
Carruthers Hall  fax -
(804)-982-2777
Charlottesville, Va. 22903  
http://jm.acs.virginia.edu/~gpp8p/
***



Re: Embperl - where are the cookies ?

1999-10-29 Thread Owen Stenseth

George,

This problem was mentioned in a previous post because it is wrong in the
documentation (it may be fixed now). 

The setting of $ENV{EMBPERL...} variables in this case needs to be inside
a BEGIN block at the start of the script.  This is because the values of
these variables are used to set up session tracking right when the
HTML::Embperl module is used (and this happens before your EMBPERL
environment variables are set).

So put a BEGIN {} around them and you should be ready to roll.
Incidentally, if you are starting and stopping apache by hand, you
will see a message from Embperl when session tracking has been enabled.
If you do not see the message, don't waste your time looking at your test
page.
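
In other words, something along these lines in startup.pl (values copied from
the original post; treat them as placeholders):

BEGIN {
    # must run before HTML::Embperl is loaded, since Embperl reads these
    # at 'use' time to turn session tracking on
    $ENV{EMBPERL_SESSION_CLASSES} = "DBIStore SysVSemaphoreLocker";
    $ENV{EMBPERL_SESSION_ARGS}    =
        "DataSource=dbi:mysql:gpp8p_casenet UserName=gpp8p Password=xxx";
}

use Apache::Session;
use HTML::Embperl;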

Another thing: the reason things update randomly is that each apache
child is keeping a copy of what you put in $udat.  Since it is a special
Embperl variable, it will hold its value and not be cleaned up at the
end of page execution like other variables are; each time you reload, you
get a different child with a different incrementing number.

-- Owen

"George P. Pipkin" wrote:
 
 Hi Everybody -
 
I have been playing around with Embperl.  I have a little script that
 resembles the counter test mentioned in the docs, and it appears to
 run.  Problem is, no cookies get set.  And the value of the counter is
 erratic.  One browser will appear to pick up the count from another.
 Then it will jump back.  Here is the script:
 
 <html>
 <h1>Test of session features</h1><hr>
 [+ if($udat{counter} == 0){$udat{counter} = 1} +]
 The page is requested [+ $udat{counter}++ +] occasions
 since [+ $udat{date} ||= localtime +]
 <br>
 cookies: [+ $ENV{HTTP_COOKIE} +]
 </html>
 ~
 
 Incidently, $ENV{HTTP_COOKIE} never shows any value at all.  I have the
 session mechanics hooked up to a mysql database.  Here's the setup stuff
 in startup.pl:
 
 $ENV{EMBPERL_SESSION_CLASSES} = "DBIStore SysVSemaphoreLocker";
 $ENV{EMBPERL_SESSION_ARGS} = "DataSource=dbi:mysql:gpp8p_casenet UserName=gpp8p Password=xxx";
 use Apache::Session;
 use HTML::Embperl;
 
 And BTW, I did set up the two tables in that database
 
 Any ideas 
 
 - George Pipkin
 
 --
 
***
 George P. Pipkin h - (804)-245-9916
 1001 Emmet St.   w - (804)-924-1329
 Carruthers Hall  fax -
 (804)-982-2777
 Charlottesville, Va. 22903
 http://jm.acs.virginia.edu/~gpp8p/
 
***



Re: More on web application performance with DBI

1999-10-29 Thread gangadharan narayan


Hi ,

I have a perl script which connects to the
Oracle database.  I want to know if I can lock the
script, i.e. even if there are many requests to the
server for the same script there will be no
concurrency/update problems.

Also, how do I implement commit & rollbacks in a script?
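
For the commit/rollback part, the standard DBI pattern looks roughly like this
(DSN, credentials, table and SQL are made up); note that row-level locking,
rather than "locking the script", is what usually takes care of the concurrency:

#!/usr/bin/perl -w
use strict;
use DBI;

my $dbh = DBI->connect('dbi:Oracle:orcl', 'user', 'pass',
                       { RaiseError => 1, AutoCommit => 0 })
    or die $DBI::errstr;

eval {
    # SELECT ... FOR UPDATE locks the row, so concurrent requests serialize here
    my ($qty) = $dbh->selectrow_array(
        'SELECT qty FROM stock WHERE item_id = ? FOR UPDATE', undef, 42);
    $dbh->do('UPDATE stock SET qty = ? WHERE item_id = ?', undef, $qty - 1, 42);
    $dbh->commit;
};
if ($@) {
    warn "transaction failed, rolling back: $@";
    $dbh->rollback;
}
$dbh->disconnect;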

thanks for help in advance
Niel






Re: More on web application performance with DBI

1999-10-29 Thread Michael Peppler

Greg Stark wrote:
 
 
 Tim Bunce [EMAIL PROTECTED] writes:
  On Mon, Oct 18, 1999 at 07:08:09AM -0700, Michael Peppler wrote:
   Tim Bunce writes:
 On Fri, Oct 15, 1999 at 11:42:29AM +0100, Matt Sergeant wrote:
  Sadly prepare_cached doesn't always work very well - at least not with
  Sybase (and I assume MSSQL). Just a warning.

 Could you be more specific?
 
 Well I doubt it will be nearly as effective as it is on Oracle since I don't
 think Sybase supports placeholders at the database level. I believe the DBD
 driver is emulating them.

Actually not - Sybase creates a temporary stored proc for each prepared
statement, so it's equivalent to using stored procedures.

Michael



Re: More on web application performance with DBI

1999-10-29 Thread Michael Peppler

Greg Stark writes:
  Michael Peppler [EMAIL PROTECTED] writes:
  
   Greg Stark wrote:
  
   Actually not - Sybase creates a temporary stored proc for each prepared
   statement, so it's equivalent to using stored procedures.
  
  Heh neat, is that DBD::Sybase or the server that's doing that?
  And does it only work for a single statement handle or does it keep
  that procedure around in case i prepare the same statement again?

The prepared statement uses a stored proc built on the fly *if* your
SQL statement has ?-style placeholders. With Sybase you can't have
multiple statements that are active simultaneously over the same
connection, so preparing a second statement will result in DBD::Sybase 
opening a new connection (as a side note you *can't* use this feature
when AutoCommit is OFF because it would require DBD::Sybase to do
two-phase commits, which I'm not prepared to code at this point...)
The stored procs remain around for as long as the $sth is
defined/valid.

Another good point is that Sybase *knows* what types the various
parameters to a prepared statement are, so you don't need to tell it
that something is a VARCHAR or whatever (and I actually ignore those
type params to execute() and bind_param()).

That being said, Sybase is pretty fast at parsing/compiling SQL, so
using ?-style placeholders is really only useful if you're going to
call a particular statement more than a couple of times.
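
That prepare-once, execute-many pattern is just the standard DBI idiom
(connection details and table names below are made up):

use strict;
use DBI;

my $dbh = DBI->connect('dbi:Sybase:server=SYB1', 'user', 'pass',
                       { RaiseError => 1 })
    or die $DBI::errstr;

# prepared once; with DBD::Sybase this is what builds the temporary stored proc
my $sth = $dbh->prepare('SELECT name, price FROM products WHERE id = ?');

for my $id (1 .. 100) {
    $sth->execute($id);                    # reuses the same plan/proc each time
    while (my @row = $sth->fetchrow_array) {
        print join("\t", @row), "\n";
    }
}
$sth->finish;
$dbh->disconnect;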

And in general, with Sybase I always advocate using stored procs for
all access as this allows the DBA to fine tune the queries without
having to go into the perl code itself (and ensures that you don't get 
someone issuing a very sub-optimal query that brings a system to its
knees!)

Michael
-- 
Michael Peppler -||-  Data Migrations Inc.
[EMAIL PROTECTED]-||-  http://www.mbay.net/~mpeppler
Int. Sybase User Group  -||-  http://www.isug.com
Sybase on Linux mailing list: [EMAIL PROTECTED]