Re: $r->headers_out Location and Set-Cookie

2003-08-14 Thread Joe Schaefer
gerard uolaquetalestem  [EMAIL PROTECTED] writes:

[...]

 But exactly what's the difference between err_headers_out and 
 headers_out? I understand that the first is related to an error message 
 sent in the headers, but what does Apache actually do differently?

Here's a straightforward explanation of the difference-

  http://ken.coar.org/burrow/index?month=2003-07#511
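
For instance (a minimal mod_perl 1.x sketch, untested here, just to
illustrate the distinction): a cookie added to err_headers_out survives
a redirect, while one added to headers_out is only sent with a normal
2xx response.

  use Apache::Constants qw(REDIRECT);

  sub handler {
      my $r = shift;
      $r->headers_out->add('Set-Cookie' => 'normal=1');      # dropped on the redirect below
      $r->err_headers_out->add('Set-Cookie' => 'sticky=1');  # sent even with error/redirect responses
      $r->headers_out->set(Location => 'http://example.com/');
      return REDIRECT;
  }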


-- 
Joe Schaefer



Re: Set-Cookie2

2003-08-05 Thread Joe Schaefer
Stas Bekman [EMAIL PROTECTED] writes:

 Jie Gao wrote:

[...]

  Sorry. Now let me ask a question that's related to mod_perl: does
  libapreq support Set-Cookie2?
 
 libapreq has its own list, see: http://httpd.apache.org/apreq/
 
 AFAIK, apreq-2 (to be released soon) supports it, 

Yup.

 I'm not sure about apreq-1.

No, it doesn't.

-- 
Joe Schaefer



Re: MP1 - libapreq1.2 broke apache / mod_perl installation

2003-08-03 Thread Joe Schaefer
Iphigenie [EMAIL PROTECTED] writes:

[...]

 What puzzles me is that this worked perfectly on another nearly
 identical machine.
 
 Why does it want .so files? I thought that happened only if you
 installed it using the ./configure method, which I didn't.

The latest version of ExtUtils::MakeMaker tickles a bug in
libapreq's c/Makefile.PL.  There has been quite a bit of discussion
of this issue in the last week on apreq-dev, p5p, and on this list
as well.

Michael Schwern [hope I spelled it right :] posted two patches to 
the apreq-dev list designed to fix these problems.  As soon as 
enough folks decide which of his patches is best, we'll roll a 1.3 
release that fixes the problem.

[...]

 The question remains: what did i naively do wrong the first time?

Probably nothing- this is a bug in libapreq.

-- 
Joe Schaefer



Re: [mp2] Apache::Cookie

2003-07-15 Thread Joe Schaefer
Perrin Harkins [EMAIL PROTECTED] writes:

 On Tue, 2003-07-15 at 06:43, Swen Schillig wrote:
  Are there any plans to have Apache::Cookie, or does
  mp2 code always have to use CGI when cookies are needed?
 
 Apache::Cookie is part of libapreq (along with Apache::Request), so you
 should follow libapreq development for this.

Now would be an especially good time for the adventurous
to try out libapreq-2, since user feedback will accelerate
the release process.  The preliminary docs for mp2's Apache::Cookie
are online at

  http://httpd.apache.org/~joes/libapreq-2/group__XS__Cookie.html

Access to the cvs repository for httpd-apreq-2 is described
at the bottom of

  http://httpd.apache.org/apreq/
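
For the impatient, the apreq-1 style interface looks roughly like this
(a sketch from memory; the apreq-2 interface documented above may differ
in the details):

  use Apache::Cookie ();

  # $r is your Apache request object
  my $cookies = Apache::Cookie->fetch;        # hash ref of cookies sent by the client
  my $sid     = $cookies->{session} && $cookies->{session}->value;

  Apache::Cookie->new($r,
      -name    => 'session',                  # placeholder name/value
      -value   => 'some-id',
      -path    => '/',
      -expires => '+1h',
  )->bake;                                    # adds a Set-Cookie header to $r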

-- 
Joe Schaefer



Apache::Request for CGI? (was: Re: A::Registry vs. mod_perl handler philosophy)

2003-07-01 Thread Joe Schaefer
Perrin Harkins [EMAIL PROTECTED] writes:

[...]

 I'm late to the party, but here's an old post of mine that sums up my
 opinion:
 
 http://marc.theaimsgroup.com/?l=apache-modperlm=95440118003848w=2

+1.  Scripting _inside_ the server opens up possibilities that
are unimaginable to folks who are content confining themselves 
to the lowest common denominator (CGI).

That said, apreq-dev is looking for someone to adopt and 
develop the CGI port of libapreq-2/Apache::Request. I'm shooting
for a developer release of libapreq-2 before OSCON, and it'd be 
_really_ cool if we had a functional CGI port available.

-- 
Joe Schaefer



Re: [RELEASE CANDIDATE] Apache::Test 1.03-dev

2003-06-24 Thread Joe Schaefer
Rob Bloodgood [EMAIL PROTECTED] writes:

[...]

 *** root mode: changing the fs ownership to 'nobody' (99:99)
 /usr/sbin/httpd -X -d /root/.cpan/build/Apache-Test-1.03/t -f
 /root/.cpan/build/Apache-Test-1.03/t/conf/httpd 

Is the docroot (/root/.cpan/.../t/httpd.conf) inside 
a directory that the 'nobody' user can read from?  Probably
not, from the looks of it.

Try moving the Apache-Test-1.03 directory to /tmp and
see if you have better luck there.

-- 
Joe Schaefer




[ANNOUNCE] libapreq-1.2 release

2003-06-19 Thread Joe Schaefer

libapreq-1.2.tar.gz (Apache::Request) is now on CPAN:

  file: $CPAN/authors/id/J/JO/JOESUF/libapreq-1.2.tar.gz
  size: 277549 bytes
   md5: ae08726f11ca25a215d4d854d675c3ff

and at www.apache.org:

  file: http://www.apache.org/dist/httpd/libapreq-1.2.tar.gz
   sig: http://www.apache.org/dist/httpd/libapreq-1.2.tar.gz.asc
   key: http://www.apache.org/dist/httpd/KEYS

--

Changes since v1.1:

=over 4

=item 1.2 - June 19, 2003

libapreq-1.2 released.

=item 1.16 - June 19, 2003

Updated INSTALL & README documents to explain how to configure the tests.
Apache::Test v1.03 is now a prerequisite for libapreq.

=item 1.15 - June 13, 2003

Experimental support for SFIO is withdrawn.

=item 1.14 - April 30, 2003

Add documentation for subclassing Apache::Request. [Steve Hay]

=item 1.13 - April 18, 2003

Sync test suite uris with latest Apache::Test [Stas]

Patch mfd parser's Content-Disposition header to do case-insensitive
matches on name & filename keywords.  This feature allows uploads
from Nokia Series 60 phones to work as expected. [Oskari 'Okko' Ojala]

Patch Request.xs to revert to using pre-1.1 $upload->fh code for
pre-5.8 perls.  This patch may get backed out prior to 1.2's release,
unless it is known to resolve problems (zombies?) reported to occur 
with libapreq-1.1 uploads and pre-5.8 perls. [Joe]

=item 1.12 - February 27, 2003

Do better mod_perl version checking, including a test for mod_perl
2.0. [Stas]

Applied IKEBE Tomohiro's patch for handling %u strings.

=item 1.11 - February 27, 2003

Add req->nargs to C API to distinguish between query string and 
request body params.




Re: [ANNOUNCE] libapreq-1.2 release

2003-06-19 Thread Joe Schaefer
Joe Schaefer [EMAIL PROTECTED] writes:

[...]

 and at www.apache.org:
 
   file: http://www.apache.org/dist/httpd/libapreq-1.2.tar.gz
    sig: http://www.apache.org/dist/httpd/libapreq-1.2.tar.gz.asc

Correction- those urls for libapreq-1.2 should read

  http://www.apache.org/dist/httpd/libapreq/libapreq-1.2.tar.gz
  http://www.apache.org/dist/httpd/libapreq/libapreq-1.2.tar.gz.asc

-- 
Joe Schaefer



Re: mod_perl slower than expected? - Test Code!

2003-06-18 Thread Joe Schaefer
Trevor Phillips [EMAIL PROTECTED] writes:

[...]

 On my main dev box, ab gives an average of 8.8secs for the mod_perl
 run, and 7.2secs for the FastCGI run. The internal timer and printed
 output reflects these results too. 

How does the cgi/command-line version stack up?  AFAICT your test isn't 
measuring any architectural differences between modperl and FastCGI,
just how well the server's embedded perl interpreter performs relative to
perl itself.  I wonder if compiling modperl as a DSO versus compiling it
statically might explain the performance lag.

-- 
Joe Schaefer



new libapreq-1.2 release candidate 2 available

2003-06-17 Thread Joe Schaefer

Release candidate 2 of libapreq-1.2 is now available:

  http://httpd.apache.org/~joes/libapreq-1.2_rc2.tar.gz

Please give it a try and report any problems to the apreq-dev@
list.  I'd like to release this version to CPAN as soon as possible.
Thanks.

Changes since 1.1:

June 13, 2003

  Experimental support for SFIO is withdrawn.

April 30, 2003

  Add documentation for subclassing Apache::Request. [Steve Hay]

April 18, 2003

  Sync test suite uris with latest Apache::Test [Stas]

  Patch mfd parser's Content-Disposition header to do case-insensitive
  matches on name & filename keywords.  This feature allows uploads
  from Nokia Series 60 phones to work as expected. [Oskari 'Okko' Ojala]

  Patch Request.xs to revert to using pre-1.1 $upload->fh code for
  pre-5.8 perls.  This patch may get backed out prior to 1.2's release,
  unless it is known to resolve problems (zombies?) reported to occur
  with libapreq-1.1 uploads and pre-5.8 perls. [Joe]

February 27, 2003

  Do better mod_perl version checking, including a test for mod_perl
  2.0. [Stas]

  Applied IKEBE Tomohiro's patch for handling %u strings.

February 27, 2003

  Add req->nargs to C API to distinguish between query string and
  request body params.

-- 
Joe Schaefer



Re: apache::request parse() doesn't capture abort error

2003-06-14 Thread Joe Schaefer
Hector Pizarro [EMAIL PROTECTED] writes:

[...]

 If the user closes the popup in the middle of an upload, Apache::Request
 parse() isn't throwing any error, and all the following code in my module
 saves the incomplete file to the system, which of course is garbage data.
 
 Is this a bug, an incomplete feature, or is it my configuration? 

It's probably a bug in c/apache_multipart_buffer.c.  We may not
do error-checking well enough on fill_buffer().

 If parse is supposed to return an error code, which one is that? 206 =
 HTTP_PARTIAL_CONTENT?

No, that's not an error code.  Since the error here seems
to arise from ap_get_client_block, $r->status should probably
be -1.
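
In the meantime, a defensive sketch (untested) would be to call parse()
yourself and check its return value before trusting any of the uploads:

  use Apache::Constants qw(OK);

  my $apr    = Apache::Request->new($r);   # $r is the request object
  my $status = $apr->parse;                # parse the body up front
  unless ($status == OK) {
      $r->log_error("apreq parse error: $status");
      return $status;                      # don't touch the (possibly truncated) uploads
  }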

 Ok, now let's suppose that this error is fixed. 

With a patch, I hope ;-).

 I want to do several uploads from the same popup at once, so I have 5
 file boxes in the form. If the user closes the popup before the process
 ends, I'd like to save only the completed files. How can I check
 which files were uploaded correctly, and which ones are incomplete?

You could just ignore the final upload object, which has no ->next
pointer:

  for (my $u = $req->upload; $u && $u->next; $u = $u->next) {
      # do something with $u 
  }

-- 
Joe Schaefer



Re: problem building libapreq on Solaris

2003-06-13 Thread Joe Schaefer
Xavier Noria [EMAIL PROTECTED] writes:

[...]

 Looks like it's taking t/httpd instead of /usr/local/apache/bin/httpd, 
 though I entered that full path to httpd in a previous prompt.
 
 With similar settings I've just smoothly installed libapreq on Debian, 
 do you know what can be happening?

We've replaced the Apache::test suite used in 1.1 with an Apache::Test-
based one, and are planning to release it as soon as the kinks are
worked out.  Please try out the latest release candidate at

  http://httpd.apache.org/~joes/libapreq-1.2_rc2.tar.gz

and see if you have better luck with the new tests.  (You may
need to install Apache::Test from CPAN first.)

-- 
Joe Schaefer



Re: CGI.pm not processing POST within AuthenHandler

2003-06-08 Thread Joe Schaefer
Michael L. Artz [EMAIL PROTECTED] writes:

 For some reason, CGI.pm (2.93) does not seem to parse POST data within a
 PerlAuthenHandler.  For example, the following:
 
 sub handler {
     my $r = shift;
     my $q = new CGI($r);
     my @params = $q->param;
     $r->server->log->error("handler params are " . join "::", @params);
     return Apache::OK;
 }

[...]

 Apache/2.0.44 (Gentoo/Linux) mod_perl/1.99_09 Perl/v5.8.0 CGI.pm/2.93

Attempting to read POST data before the content-handler is called
is unsafe with httpd-2.  You'll probably have to wait for
Apache::Request to be ported over in order to do something like that.

-- 
Joe Schaefer



Re: [ANNOUNCE] libapreq-1.1 is out

2003-01-30 Thread Joe Schaefer
Matt Sergeant [EMAIL PROTECTED] writes:

 On 28 Jan 2003, Joe Schaefer wrote:
 
 
  libapreq-1.1 is now available on CPAN,
  and also through the Apache website at
 
http://www.apache.org/dist/httpd/libapreq/libapreq-1.1.tar.gz
 
 Failed badly to install for me. First of all it won't install via the
 CPAN shell as root because the test harness tries to deliver files
 from what becomes a root-owned directory, and it won't do that. 

The test suite was lifted directly from mod_perl.  Are you able to 
test/install mod_perl using the same approach?

 Secondly it seems to segfault my apache, so it leaves zombies lying
 around. Not sure if that's libapreq or something else though. 

It would help to know your platform details.  We tried to eliminate
all segfaults related to perl-5.8.0, but in the process we may have
introduced new ones.

-- 
Joe Schaefer



Re: [ANNOUNCE] libapreq-1.1 is out

2003-01-30 Thread Joe Schaefer
Dave Rolsky [EMAIL PROTECTED] writes:

 On Tue, 28 Jan 2003, Joe Schaefer wrote:
 
  libapreq-1.1 is now available on CPAN,
  and also through the Apache website at
 
http://www.apache.org/dist/httpd/libapreq/libapreq-1.1.tar.gz
 
 What are the differences between this version and 1.05?  The changelog
 ends at 1.05.

Other than the release date of 1.1, nothing else should have changed.
I don't think current cvs has changed since I tagged v1_1 for release.

-- 
Joe Schaefer



Re: [ANNOUNCE] libapreq-1.1 is out

2003-01-30 Thread Joe Schaefer
Matt Sergeant [EMAIL PROTECTED] writes:

[...]

  It would help to know your platform details.  We tried to eliminate
  all segfaults related to perl-5.8.0, but in the process we may have
  introduced new ones.
 
 perl 5.00503 on RH 6.2 and mod_perl 1.26 IIRC. The segfault was during 
 the file upload tests.

Thanks for the feedback.  It appears we have more work to do on the new 
upload code in Request.xs.  I think we should aim to release a new version,
quickly, that's dedicated to fixing these immediate problems with the 
1.1 release.

-- 
Joe Schaefer



Re: [ANNOUNCE] libapreq-1.1 is out

2003-01-30 Thread Joe Schaefer
Stas Bekman [EMAIL PROTECTED] writes:

 Joe, I thought you'd reverted to using the old test suite?

I did, sorry for the confusion.  The test suite in 1.1 is based 
on Apache::test, not Apache::Test.

-- 
Joe Schaefer



[ANNOUNCE] libapreq-1.1 is out

2003-01-28 Thread Joe Schaefer

libapreq-1.1 is now available on CPAN,
and also through the Apache website at

  http://www.apache.org/dist/httpd/libapreq/libapreq-1.1.tar.gz

-- 
Joe Schaefer



libapreq-1.1 non-release

2003-01-26 Thread Joe Schaefer

libapreq-1.1 was non-released on January 10, 2003.
libapreq-1.1 is NOT an official ASF release, and is 
not available on CPAN, but a non-official version is 
temporarily not non-available here:

  http://www.apache.org/~joes/libapreq-1.1.tar.gz

-- 
Joe Schaefer



Re: Timestamp for Apache::Upload uploads.

2003-01-15 Thread Joe Schaefer
Matthew Hodgson [EMAIL PROTECTED] writes:

[...]

 It seems that the question is less to do with mod_perl, and more to do with
 whether any current browsers give Last-Modified or Modification-Date or
 similar information in the MIME headers for multipart/form-data uploads.
 Whilst I had convinced myself that I'd seen this working, I'm starting to
 doubt my memory.

Not to my knowledge;  I suspect you've been stat()ing the spooled
temp file all along.
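
In other words, the timestamp you can get at comes from the server-side
spool file, not from the client.  A quick sketch (apreq-1 style, with a
placeholder field name):

  my $upload = $apr->upload('file');          # $apr is your Apache::Request object
  my $mtime  = (stat $upload->tempname)[9];   # when the spooled temp file was written,
                                              # i.e. roughly when the upload arrived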

-- 
Joe Schaefer



Re: Sticky pnotes with Apache::Registry

2003-01-05 Thread Joe Schaefer
John Heitmann [EMAIL PROTECTED] writes:

[...]

 DirectoryIndex index.pl
 
 DocumentRoot /tmp/web_directory
 
 <Directory /tmp/web_directory>
  AddHandler perl-script .pl
  PerlHandler Apache::Registry
 </Directory>

[...]

 That httpd.conf combined with the code in the previous mail placed in 
 a file named index.pl and placed in /tmp/web_directory/ will exhibit 
 the problem nicely. I see nothing in the list archives or web site 
 about mod_dir or DirectoryIndex.

mod_dir does an internal redirect, which creates a new request_rec
struct and a new request_config.  I *think* the problem is that when 
mod_perl runs its cleanup handler, it runs on the *original* 
request_rec + request_config, which doesn't have the new pnotes.
So they never get cleaned up by mod_perl's per_request_cleanup().

This might be a bug within apache itself.
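
If you need a workaround in the meantime, something like this (untested)
might do: hang the pnote on the initial request_rec, which is the one
the cleanup actually sees:

  my $r0 = $r;
  $r0 = $r0->prev while $r0->prev;    # walk back through internal redirects
  $r0 = $r0->main while $r0->main;    # and out of any subrequest
  $r0->pnotes(my_key => $value);      # my_key/$value are placeholders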
-- 
Joe Schaefer



libapreq-1.1 Release Candidate 2

2002-12-20 Thread Joe Schaefer

The apreq developers are planning a maintenance release of
libapreq-1.1.  This version does not include support for
modperl-2, but it *could* address some outstanding problems in
1.0:

  * OS X support [1]
  * perl 5.8 segfaults related to file uploads [2]

Please give the tarball at 

  http://www.apache.org/~joes/libapreq-1.1_rc2.tar.gz

a try and report comments/problems/etc. to the apreq-dev list
at [EMAIL PROTECTED]  There are special build 
instructions for OS X, so be sure to have a look at 

  http://www.apache.org/~joes/

for details on OS X support.  We'll be patching the documentation 
based on OS X user feedback.

Note:  We really could use more volunteers participating
in apreq-dev, especially folks with OS X experience.  Even though
libapreq is a small (ASF) project, there's plenty of work to be 
done- both in improving/supporting the libapreq-1.x codebase as
well as porting apreq to modperl-2.

Thanks.
--
[1] - We still need volunteers to document the OS X install.
[2] - The segfault problems with 5.8.0 are still being reported.
  The apreq developers need help tracking this problem down.



Re: When perl is not quite fast enough

2002-12-16 Thread Joe Schaefer
Stas Bekman [EMAIL PROTECTED] writes:
[...]

 I'm thinking of asking Nicolas to contribute these notes to our tutorial
 section (http://perl.apache.org/docs/tutorials/) and, if possible, to
 add some more meat to the original notes. If you remember, our evil
 plan was to host at perl.apache.org interesting tutorials relevant to
 mod_perl developers, but which are of interest to non-mod_perl
 developers too, thus bringing them to our site and hopefully getting
 them interested in mod_perl. Nicolas's doc looks like a good bite ;)

Great idea, Stas.  His comments definitely should be part of the 
mod_perl canon; those notes present *the right approach* to optimization.

-- 
Joe Schaefer



Re: ap_unescape_url can't escape %uXXXX

2002-11-28 Thread Joe Schaefer
Tatsuhiko Miyagawa [EMAIL PROTECTED] writes:

 It seems that Apache's ap_unescape_url() can't handle %u-style
 URI-escaped Unicode strings, hence Apache::Request can't either,
 while CGI.pm can.

You may want to take this issue up on [EMAIL PROTECTED]
Personally I've never seen this kind of character encoding, 
and my reading of

  Section 8 at http://www.w3.org/TR/charmod/ 
  and RFC 2718, Section 2.2.5, 

seems to indicate that this isn't a recommended practice. OTOH, IIRC the 
apache source claims to support utf8 extension(s) of www-urlencoded
ASCII, so if people really are using such encodings, supporting 
%u in ap_unescape_url shouldn't hurt server performance at all.

In any case, putting together a patch for ap_unescape_url along the lines 
of CGI::Util's utf8_chr() can't hurt :-).
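
For illustration, the decoding CGI.pm effectively performs is just a
pair of substitutions (a rough sketch, not the exact CGI::Util code):

  # decode the %uXXXX form first, then the ordinary %XX escapes
  $str =~ s/%u([0-9a-fA-F]{4})/chr(hex $1)/ge;
  $str =~ s/%([0-9a-fA-F]{2})/chr(hex $1)/ge;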

-- 
Joe Schaefer



libapreq-1.1 Release Candidate 1

2002-11-25 Thread Joe Schaefer

The apreq developers are planning a maintenance release of
libapreq-1.1.  This version does not include support for
modperl-2, but it does address some outstanding problems in
1.0:

  * OS X support
  * perl 5.8 segfaults related to file uploads.

Please give the tarball at 

  http://www.apache.org/~joes/libapreq-1.1_rc1.tar.gz

a try and report comments/problems/etc. to the apreq-dev list
at [EMAIL PROTECTED]  There are special build 
instructions for OS X, so be sure to have a look at 

  http://www.apache.org/~joes/

for details on OS X support.  We'll be patching the INSTALL & 
README documents based on OS X user feedback.

Note:  We really could use more volunteers participating
in apreq-dev, especially folks with OS X experience.  Even though
libapreq is a small (ASF) project, there's plenty of work to be 
done- both in improving/supporting the libapreq-1.x codebase as
well as porting apreq to modperl-2.

Thanks.

-- 
Joe Schaefer



Re: libapreq-1.1 Release Candidate 1

2002-11-25 Thread Joe Schaefer
Ken Williams [EMAIL PROTECTED] writes:

[...]

 I've attached the full output.

Thanks, Ken.  I looked over the result, but didn't see any
indication that you used the ./configure -> make -> make install
instructions from the web page.  Did the libtool-based build
of libapreq install successfully before you tried doing the perl
build?

-- 
Joe Schaefer



Re: libapreq-1.1 Release Candidate 1

2002-11-25 Thread Joe Schaefer
Ken Williams [EMAIL PROTECTED] writes:

[...]

 I got through ./configure -> make -> make install 
 successfully, installing to /usr/local/lib/ and 
 /usr/local/include/.  However, there doesn't seem to be a 
 ldconfig on my system ('locate' couldn't find one anywhere, 
 and I checked manually in /sbin/ and /usr/sbin/).

According to 

http://fink.sourceforge.net/doc/porting/porting.html

OS X doesn't have/need an ldconfig binary (I'll update the web page).

Try running the BUILD.sh script first.  It'll rebuild the libtool/
autoconf/automake utilities using your machine's versions, 
instead of the ones I included in the tarball (which are based on
my linux box).  Then do 
the ./configure -> make -> make install mojo before building 
the perl tests.

If you still have trouble, I think it'll help apreq-dev to see
the transcript of the full process, from sh BUILD.sh through 
make test.

-- 
Joe Schaefer



Re: UTF8 character issue with Apache::Request?

2002-09-28 Thread Joe Schaefer

[EMAIL PROTECTED] writes:

[...]

 With Kanji filename :
 Size is 0
 UPL:Content-Disposition=form-data; name=UPLOADFILE; 
 filename=.DOC
 UPL:Content-Type=application/octet-stream
 
 Without Kanji filename
 Size is 306688
 UPL:Content-Disposition=form-data; name=UPLOADFILE; 
 filename=copy.DOC
 UPL:Content-Type=application/msword
 
 Any thoughts or input would be great.

Are you certain this is a server-side problem?  The 
varying Content-Types look suspicious to me.  I'd double-check
(via tcpdump or something) that the client is actually sending 
the whole file to the server.

-- 
Joe Schaefer



Re: UTF8 character issue with Apache::Request?

2002-09-28 Thread Joe Schaefer

Peter Bi [EMAIL PROTECTED] writes:

 Please take a serious look. 

I did, and I suspect this problem is caused by the OP's client/browser 
failing to open the file with the Kanji filename, so it might be
sending an empty file with the default enctype instead.

 There were several related reports in the mailing list during the
 months: Apache::Request might not handle double-bytes or utf8
 correctly. Or it may be due to the C library.

You seem to know something about this issue.  However, this is the first 
time I've seen utf8 discussed in relation to Apache::Request on this list.
I've tried a few dozen links from google (utf8 Apache::Request), and 
I've searched the epigone archives for this list.  I wasn't able to find 
a single related report.

A reference url, a test case, or, better still, a patch, would be 
considerably more helpful than sending me on a wild goose chase.

-- 
Joe Schaefer



Re: Filehandles

2002-08-31 Thread Joe Schaefer

Stas Bekman [EMAIL PROTECTED] writes:

[...]

 perl 5.8.0 internals are thread-safe, and so is mod_perl 2.0-dev.
 
 By saying that perl is thread-safe, I mean that operations like push, =, 
 /, map, chimp, etc. are thread-safe. 
  ^

A thread-safe chimp; amazing!  Try doing *that* in Java.

-- 
Joe Schaefer



Re: ANNOUNCE: the new perl.apache.org is alive now!

2002-07-12 Thread Joe Schaefer

Stas Bekman [EMAIL PROTECTED] writes:

 This new site is a result of a collaborative effort of the mod_perl
 community. Thank you all! Without you the new site won't be as good as
 it is now.
 
 Special acknowledgements go to the following folks:
 
 Allan Juul
 Bill Moseley
 Jonathan M. Hollin
 Per Einar Ellefsen
 Thomas Klausner

The new site is outstanding!  Great work, guys!

-- 
Joe Schaefer



Re: leaks with Apache::Request?

2002-07-10 Thread Joe Schaefer

Joe Schaefer [EMAIL PROTECTED] writes:

[...]

 Somehow the assignment operator MUST be involved in the leak here.
 (You only get a leak when the *same* reference (*SV) is on both sides 
 of the assignment).

Could someone with modperl 1.2x built using a perl 5.8 release candidate 
please test this out:

<Perl>
sub Apache::Request::DESTROY { warn "DEAD: $_[0]\n" }
sub Apache::DESTROY { warn "Dead: $_[0]\n" }

use Devel::Peek;
use Apache::Request;

package Apache::test;

sub handler {
    my $r = shift;
    my $apr = Apache::Request->new($r);

    Dump($apr); # OK
    $r = $apr;  # XXX: what's going on here???
    Dump($apr); # fscked

    $r->send_http_header;
    $r->print('apr test');
    return 200;
}
1;
</Perl>

<Location /test>
  SetHandler perl-script
  PerlHandler Apache::test
</Location>



My error log for a request to /test is below.  The above
assignment on the line marked XXX is modifying $apr, which
it shouldn't.

I don't have a 5.8-RC handy to test this on, but if somebody 
else sees the same problem on 5.8, I'll file a bug report to p5p.

Thanks a lot.

--
SV = RV(0x814b154) at 0x82309b8
  REFCNT = 1
  FLAGS = (ROK)
  RV = 0x82472f4
SV = PVMG(0x8244f58) at 0x82472f4
  REFCNT = 1
  FLAGS = (OBJECT,RMG,IOK,pIOK)
  IV = 136639644
  NV = 0
  PV = 0
  MAGIC = 0x8244f80
MG_VIRTUAL = 0
MG_TYPE = '~'
MG_FLAGS = 0x02
  REFCOUNTED
MG_OBJ = 0x822dec4
  SV = RV(0x814b14c) at 0x822dec4
REFCNT = 2
FLAGS = (PADBUSY,PADMY,ROK)
RV = 0x82472dc
  SV = PVMG(0x8244f30) at 0x82472dc
REFCNT = 2
FLAGS = (OBJECT,IOK,pIOK)
IV = 136628428
NV = 0
PV = 0
STASH = 0x81420e4   Apache
MG_LEN = -1
MG_PTR = 0x824c8cc  - please notify IZ
  STASH = 0x8224a18 Apache::Request
SV = RV(0x814b154) at 0x82309b8
  REFCNT = 1
  FLAGS = (ROK)
  RV = 0x82472f4
SV = PVMG(0x8244f58) at 0x82472f4
  REFCNT = 2
  FLAGS = (OBJECT,RMG,IOK,pIOK)
  IV = 136639644
  NV = 0
  PV = 0
  MAGIC = 0x8244f80
MG_VIRTUAL = 0
MG_TYPE = '~'
MG_FLAGS = 0x02
  REFCOUNTED
MG_OBJ = 0x822dec4
  SV = RV(0x814b14c) at 0x822dec4
REFCNT = 2
FLAGS = (PADBUSY,PADMY,ROK)
RV = 0x82472f4
  SV = PVMG(0x8244f58) at 0x82472f4
REFCNT = 2

FLAGS = (OBJECT,RMG,IOK,pIOK)
IV = 136639644
NV = 0
PV = 0
MAGIC = 0x8244f80
  MG_VIRTUAL = 0
  MG_TYPE = '~'
  MG_FLAGS = 0x02
REFCOUNTED
  MG_OBJ = 0x822dec4
SV = RV(0x814b14c) at 0x822dec4
  REFCNT = 2
  FLAGS = (PADBUSY,PADMY,ROK)
  RV = 0x82472f4
  MG_LEN = -1
  MG_PTR = 0x824c8cc  - please notify IZ
STASH = 0x8224a18   Apache::Request
MG_LEN = -1
MG_PTR = 0x824c8cc  - please notify IZ
  STASH = 0x8224a18 Apache::Request
Dead: Apache=SCALAR(0x82472dc)
--

-- 
Joe Schaefer



Re: leaks with Apache::Request?

2002-07-10 Thread Joe Schaefer

[resent to modperl list; earlier copy mistakenly cc'd to p5p]

Does anyone know what's causing the Apache::Request object to
leak here?  See # XXX comment below:

<Perl>
 package Apache::test;

 sub Apache::Request::DESTROY { warn "DEAD: $_[0]\n" }
 sub Apache::DESTROY { warn "Dead: $_[0]\n" }

 use Devel::Peek;
 use Apache::Request;

 sub handler {
     my $r = shift;
     my $apr = Apache::Request->new($r);

     Dump($apr); # OK
     $r = $apr;  # XXX: what's going on here???
     Dump($apr); # fscked

     $r->send_http_header;
     $r->print('apr test');
     return 200;
 }
 1;
</Perl>

<Location /test>
   SetHandler perl-script
   PerlHandler Apache::test
</Location>


-- server error log for request to /test --

[Wed Jul 10 07:38:32 2002] [notice] Apache/1.3.27-dev (Unix) 
mod_perl/1.27_01-dev Perl/v5.8.0 configured

[...]

SV = RV(0x87aca6c) at 0x89902b8
   REFCNT = 1
   FLAGS = (PADBUSY,PADMY,ROK)
   RV = 0x8626da4
   SV = PVMG(0x8703130) at 0x8626da4
 REFCNT = 1
 FLAGS = (OBJECT,RMG,IOK,pIOK)
 IV = 148456972
 NV = 0
 PV = 0
 MAGIC = 0x8d7b2c8
   MG_VIRTUAL = 0
   MG_TYPE = PERL_MAGIC_ext(~)
   MG_FLAGS = 0x02
 REFCOUNTED
   MG_OBJ = 0x89975dc
   SV = RV(0x87aca64) at 0x89975dc
 REFCNT = 2
 FLAGS = (PADBUSY,PADMY,ROK)
 RV = 0x8626c78
 SV = PVMG(0x8703110) at 0x8626c78
   REFCNT = 2
   FLAGS = (OBJECT,IOK,pIOK)
   IV = 148451868
   NV = 0
   PV = 0
   STASH = 0x8298510 Apache
   MG_LEN = -1
   MG_PTR = 0x8d9321c  - please notify IZ
 STASH = 0x835f8ec   Apache::Request
SV = RV(0x87aca6c) at 0x89902b8
   REFCNT = 1
   FLAGS = (PADBUSY,PADMY,ROK)
   RV = 0x8626da4
   SV = PVMG(0x8703130) at 0x8626da4
 REFCNT = 2
 FLAGS = (OBJECT,RMG,IOK,pIOK)
 IV = 148456972
 NV = 0
 PV = 0
 MAGIC = 0x8d7b2c8
   MG_VIRTUAL = 0
   MG_TYPE = PERL_MAGIC_ext(~)
   MG_FLAGS = 0x02
 REFCOUNTED
   MG_OBJ = 0x89975dc
   SV = RV(0x87aca64) at 0x89975dc
 REFCNT = 2
 FLAGS = (PADBUSY,PADMY,ROK)
 RV = 0x8626da4
 SV = PVMG(0x8703130) at 0x8626da4
   REFCNT = 2
   FLAGS = (OBJECT,RMG,IOK,pIOK)
   IV = 148456972
   NV = 0
   PV = 0
   MAGIC = 0x8d7b2c8
 MG_VIRTUAL = 0
 MG_TYPE = PERL_MAGIC_ext(~)
 MG_FLAGS = 0x02
   REFCOUNTED
 MG_OBJ = 0x89975dc
 SV = RV(0x87aca64) at 0x89975dc
   REFCNT = 2
   FLAGS = (PADBUSY,PADMY,ROK)
   RV = 0x8626da4
 MG_LEN = -1
 MG_PTR = 0x8d9321c  - please notify IZ
   STASH = 0x835f8ec Apache::Request
   MG_LEN = -1
   MG_PTR = 0x8d9321c  - please notify IZ
 STASH = 0x835f8ec   Apache::Request
Dead: Apache=SCALAR(0x8626c78)





Re: leaks with Apache::Request?

2002-07-09 Thread Joe Schaefer

Dave Rolsky [EMAIL PROTECTED] writes:

 On 8 Jul 2002, Joe Schaefer wrote:

[...]

my $r = shift;
   my $apr = Apache::Request->new($r);
 
  That's not going to leak, either.  At least I hope not :-)
 
 I ended up using something like this and the leak went away.
 
 It seems to me that this might actually be a Perl bug.

I doubt it's solely perl's fault.  The problem is that you 
have to be an internals wizard to write safe XS; otherwise
you just keep your fingers crossed.  :-)

Hopefully some generous soul will post a patch to Request.xs that
prevents this problem from ever arising.

 If I do this:
 
  my $x = shift;
  $x = make_something_from($x);
 
 then it seems like the original $x should go out of scope when it is
 assigned to, so its refcount should stay at 1.
   ^^

Right, it should stay at 1.  But all bets are off when
$x has magic and make_something_from() is an XS sub :-).

-- 
Joe Schaefer



Re: leaks with Apache::Request?

2002-07-09 Thread Joe Schaefer

darren chamberlain [EMAIL PROTECTED] writes:

 * Joe Schaefer [EMAIL PROTECTED] [2002-07-09 12:47]:
  Dave Rolsky [EMAIL PROTECTED] writes:
   On 8 Jul 2002, Joe Schaefer wrote:
   If I do this:
   
my $x = shift;
$x = make_something_from($x);
   
   then it seems like the original $x should go out of scope when it is
   assigned to, so its refcount should stay at 1.
 ^^
  
  Right, it should stay at 1.  But all bets are off when
  $x has magic and make_something_from() is an XS sub :-).
 
 But the leak is not with $x, it's with what $_[0] is; $x is just a
 reference to that, and the reassignment in the second line should
 reassign $x, and decrement the ref count for what $x is pointing to at
 that point.  So, it all depends on what make_something_from() does with
 the $x's referent.

I don't think it's as simple as that-

THIS LEAKS:

  my $r = shift;
  $r = Apache::Request->new($r);

  my $r = shift;
  $r = Apache::Request->new($_) for $r;


DOES NOT LEAK:

  my $apr = Apache::Request->new(shift);

  my $r = shift;
  my $apr = $r; # $r and $apr have the same referent now right?
  $apr = Apache::Request->new($r);

  my $r = shift;
  my $apr = Apache::Request->new($r);

  my $r = shift;
  $r = Apache::Request->new($_) for map $_, $r;


Somehow the assignment operator MUST be involved in the leak here.
(You only get a leak when the *same* reference (*SV) is on both sides 
of the assignment).

-- 
Joe Schaefer



Re: leaks with Apache::Request?

2002-07-08 Thread Joe Schaefer

Dave Rolsky [EMAIL PROTECTED] writes:

[...]

 Here's some code that I think demonstrates the leak:
 
   package My::APRTest;
 
   use strict;
 
   use Apache::Request;
   sub Apache::Request::DESTROY{warn DEAD: $_[0]\n}
   sub handler
   {
   my $r = shift;
   $r = Apache::Request->new($r);
^

Write that like this, and I think your leak will
disappear:

my $r = Apache::Request->new( shift );

AFAICT, Apache::Request::new is NOT leaking here, since the 
REFCNT of its returned object IS 1.  There might be some
magic-related bug in perl that causes the assignment to bump 
$r's refcount to 2.  This MIGHT be circumventable with some better
code in Request.xs, but I really don't know how to fix it.

Until some perl guru enlightens us, as a personal rule I 
try hard to avoid expressions like

  $foo = make_something_out_of($foo);

I realize that this isn't always possible, but it often/usually
is.  Such advice would serve you well in this case; you could
even get away with this

  my $r = shift;
  my $apr = Apache::Request->new($r);

That's not going to leak, either.  At least I hope not :-)

HTH
-- 
Joe Schaefer



Re: errors installing libapreq

2002-06-26 Thread Joe Schaefer


[cross-posted to apreq-dev]

Tim Bolin [EMAIL PROTECTED] writes:

 OK, I'm at the end of my proverbial rope on this one and don't know how
 to proceed... I am trying to install libapreq for Apache::Request, and
 when I try to run make, the thing just pukes up a huge long string of
 errors like:
 
 -=-=-=-=-=-=-
 make[1]: Entering directory `/root/.cpan/build/libapreq-0.33/c'
 gcc -c -I/usr/local/httpd//include -I/usr/local/httpd//include 
 -fno-strict-aliasing -I/usr/local/include -O2 -march
 =i386 -mcpu=i686   -DVERSION=\"0.10\" -DXS_VERSION=\"0.10\" -fPIC 
 -I/usr/lib/perl5/5.6.1/i386-linux/CORE  apache_re
 quest.c
 In file included from apache_request.c:58:
 apache_request.h:38: parse error before `table'
   ^^

Your include files aren't defining the table struct.  gcc can't find 
your apache header files (maybe they're in /usr/local/apache/include ?).
Also, the typical compile lines should have a whole lot more -I flags.
In your case, modperl may not be properly installed- otherwise your
compile lines would look more like this (wrapped):

  make[1]: Entering directory `/home/joe/src/apache/cvs/httpd-apreq/c'
  cc -c -I/usr/lib/perl5/site_perl/5.005/i386-linux/auto/Apache/include 
  -I/usr/lib/perl5/site_perl/5.005/i386-linux/auto/Apache/include/modules/perl
  -I/usr/lib/perl5/site_perl/5.005/i386-linux/auto/Apache/include/include 
  -I/usr/lib/perl5/site_perl/5.005/i386-linux/auto/Apache/include/regex 
  -I/usr/lib/perl5/site_perl/5.005/i386-linux/auto/Apache/include/os/unix
  -I/usr/local/apache/include -Dbool=char -DHAS_BOOL
  -I/usr/local/include -O2 -DVERSION=\"0.10\" -DXS_VERSION=\"0.10\" 
  -fpic -I/usr/lib/perl5/5.00503/i386-linux/CORE  apache_request.c

You probably need to reinstall modperl (preferably from source) to get
libapreq to build correctly on your OS.

-- 
Joe Schaefer



Re: libapreq: could not create/open temp file

2002-06-08 Thread Joe Schaefer


 Jean-Denis Girard wrote:
  
  Everything worked flawlessly, the web site was still working, but after a 
  few days, visitors started to complain that uploads didn't work. 
  mod_perl dies with the message:
  [libapreq] could not create/open temp file
  What is really funny is that it works after rebooting the system, and 
  the error shows up later.

Where are the temp files being created, on a RAM disk or something?
Pre-1.0 apreqs had a bug that caused filehandles to leak (there's
a refcount problem in the IO parts of perl's ExtUtils/typemap), which
would eventually fill up /tmp (this is the default location of your 
spooled apreq files) until apache was restarted.


  I upgraded libapreq to 1.0, which didn't solve the problem. Next step 
  will be to upgrade APache, mod_perl, etc. but I would like some help.

In 1.0, we no longer use perl's ExtUtils/typemap for this, which should
take care of the aforementioned leak.  One other possible candidate is 
that your apache server is segfaulting after the file is received, which 
prevents apache from cleaning up the temp files.  If so, you should see
a bunch of apreq files filling up your spool directory.
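
A quick way to check for that (my own habit; adjust the glob to match
wherever your spool files actually land):

  % perl -le 'print scalar grep { -f && /apreq/ } glob "/tmp/*"'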

-- 
Joe Schaefer
[EMAIL PROTECTED]




Re: SUCCESS: libapreq working in Mac OS X

2002-02-22 Thread Joe Schaefer

John Siracusa [EMAIL PROTECTED] writes:

  On Thursday, February 14, 2002, at 04:44 PM, John Siracusa wrote:
  Next, I downloaded all the source code.  Note that I'm using Joe
  Schaefer's special versions of apache and libapreq.
  
  Will their changes be merged into the main distro soon?
 
 That's a question for Joe, I think...

For the pending 1.0 release, I'm pretty sure the answer is no.
We don't understand the problem with libapreq on OS X well enough 
to try and support it, yet.

In the interim I'll try and maintain the experimental versions.
Stas has been kind enough to work on adding a test suite to libapreq, 
which will very likely be included in the next release. Perhaps by 
then we'll have this OS X loader problem nailed as well.

OTOH, I'd like to solicit some feedback on yet another installation 
process- first making libapreq a shared library, and then linking
the Perl interfaces to *that* library (instead of libapreq.a).

Here are the steps:

  1) build mod_perl + apache as normal (do NOT use the experimental
 versions I've been providing, and be sure your modperl config 
 options include support for Apache::Table).

  2) Grab the soon-to-be-released version of libapreq-1.0 from Jim's 
 webpage:

http://www.apache.org/~jimw/libapreq-1.0.tar.gz

  3) Install libapreq.so.1.0.0 (to /usr/local/lib) using:

% ./configure
% make
% make install

  4) Now install Apache::Request and Apache::Cookie using

% perl Makefile.PL
% make
% make install

You might need to use ldconfig to tell ld.so where libapreq's
shared library is.

  5) report successes/failures using this approach on OS X to

[EMAIL PROTECTED]


Thanks again.

-- 
Joe Schaefer



Re: SUCCESS: libapreq working in Mac OS X

2002-02-22 Thread Joe Schaefer

Ged Haywood [EMAIL PROTECTED] writes:

 Hi there,
 
 On 22 Feb 2002, Joe Schaefer wrote:

3) Install libapreq.so.1.0.0 (to /usr/local/lib) using:
  
  % ./configure
  % make
  % make install
  
4) Now install Apache::Request and Apache::Cookie using
  
  % perl Makefile.PL
  % make
  % make install
 
 Should that be
 
 % make
 % su
 # make install

Sure.

-- 
Joe Schaefer



Re: inheritance and Apache::Request

2002-02-15 Thread Joe Schaefer


Paul Lindner wrote:

 If you look you'll see that the new() method is written in C.  It
 should be blessing itself into the passed in class, not using
 Apache::Request.  

You're right- except that this is exactly how the Apache class,
from which Apache::Request is derived, expects to be subclassed.  

It requires all derived classes to override new().

 My XS is a little foggy, but it appears that these lines should be
 modified to use the passed in class:

CLEANUP:
apreq_add_magic(ST(0), robj, RETVAL);

 which ends up as:

ST(0) = sv_newmortal();
sv_setref_pv(ST(0), "Apache::Request", (void*)RETVAL);

No- that code in Request.c comes from this line in the libapreq
typemap:

T_APROBJ
sv_setref_pv($arg, \"${ntype}\", (void*)$var);

$ntype is converted to Apache::Request by xsubpp. We could
pull the package info from SvPV(class),  but then the same fix 
probably should be incorporated into modperl as well.

 In any case all you need to do to get around this is define your own
 new method and call the Apache::Request object directly and store it
 as the value 'r'..  Here's an example:

 sub new {
     my ($class, $r) = @_;

     return bless { r => Apache::Request->new($r),
                  }, $class;
 }

Yes, exactly.  sv_2apreq will look for an Apache::Request object
attribute in the "r" or "_r" keys: the behavior is consistent
with the eagle's Chapter 7 section on "Subclassing the Apache
Class":

  To be successful, the new subclass must add Apache (or another
  Apache subclass) to its @ISA array.  In addition, the subclass's
  new() method must return a blessed hash reference which contains
  either an "r" or "_r" key. This key must point to a bona fide Apache
  object.

The reason this also works OK with a derived class like 
Apache::Request is that the ApacheRequest C struct has a 
request_rec pointer as its *first* attribute, so a cast from 
(ApacheRequest *) to (request_rec *) will be safe.

-- 
Joe Schaefer



Re: libapreq problem and mozilla 0.97

2002-02-06 Thread Joe Schaefer

Rob Mueller (fastmail) [EMAIL PROTECTED] writes:

 Note the extra blank line, which I think the lack of is causing the
 problem under 0.97.

I'm pretty sure we can come up with a reasonable work-around for 1.0,
but it would really help if you can:

  1) rebuild libapreq with debugging:
  % perl Makefile.PL DEFINE=-DDEBUG
  % make && make install

  2) test it again and submit both the raw upload data
  and your error log to [EMAIL PROTECTED]

Thanks a lot.
-- 
Joe Schaefer



Re: MacOSX Requests and Cookies

2002-02-03 Thread Joe Schaefer

John Siracusa [EMAIL PROTECTED] writes:

 Well, I can confirm that it still doesn't work for me... :-/  Is everyone
 using Perl 5.6.1 here?  Because somehow some of the files I downloaded had
 the string perl500503 embedded in them.  Even after search/replacing all
 that, I ended up with an httpd that pukes with the same old symbol
 conflicts when I try to start it.

Don't try to use the modperl tree that's in the apache tarball- it was 
left in there by mistake (I didn't realize make distclean won't remove
modperl from the apache source tree).  You need the modperl source 
to compile everything from scratch, starting with modperl.  When you're 
testing Apache::Cookie and Apache::Request, be sure you're not trying to
load the old versions of these packages.

Sorry for the confusion.

-- 
Joe Schaefer



Re: Apache::args vs Apache::Request speed

2002-02-01 Thread Joe Schaefer

Ian Ragsdale [EMAIL PROTECTED] writes:

 How about setting something up on SourceForge?  I know they have OS X
 environments available for compiling and testing.

apreq is an ASF project; IMO what we need now is a hero, not a 
change of venue.

[...]

  On 1/28/02 2:02 PM, Joe Schaefer [EMAIL PROTECTED] wrote:

[...]

  I hope a new release will be just around the corner, but if you want
  to test out some of the latest stuff, have a look at
  
  http://www.apache.org/~joes/


Would someone PLEASE volunteer to try to compile and test
apache+mod_perl & libapreq on OS/X using the experimental
code I posted there?  Even if you can't get it working,
ANY feedback about what happened when you tried would be 
VERY helpful.

Thanks a lot.

-- 
Joe Schaefer




Re: MacOSX Requests and Cookies

2002-02-01 Thread Joe Schaefer

Rick Frankel [EMAIL PROTECTED] writes:

 Joe- ApacheCookie_* was not being bootstrapped into the executable, so it 
 was being optimized out of the linkage.
 
 The following patch, while probably not correct (and probably the cause 
 of the silent failure), covers it.

Great -  thanks a ton!

 --- http_main.c Fri Feb  1 19:22:51 2002
 +++ http_main.c~Mon Jan 28 04:07:46 2002
 @@ -7805,12 +7805,5 @@
   {
   return ApacheRequest_new(r);
   }
 -/*RAF*/
 -#include apache_cookie.h
 -ApacheCookie *suck_in_apcookie(request_rec *r);
 -ApacheCookie *suck_in_apcookie(request_rec *r)
 -{
 -return ApacheCookie_new(r);
 -}
   #endif /* USE_APREQ */

I've incorporated your patch and uploaded it to the website.
Hopefully other OS X'ers will be able to confirm it works now.

 Also, the all-in-one compile method doesn't set up apache correctly,
 so the steps taken were:
 
 unpack everthing in the same root directory
 
 mod_perl:
 
 $ perl Makefile.PL APACHE_PREFIX=/usr/local/apache DO_HTTP=1 \
 PREP_HTTP=1 USE_APACI=1 EVERYTHING=1
 $ make
 $ make install
 
 http_apreq:
 
 $ perl Makefile.PL
 $ make
 $ make install
 
 apache:
 
 $ CFLAGS='-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64' ./configure 
 --prefix=local/apache  \
   --enable-shared=max --disable-rule=EXPAT --with-layout=Apache \
 --activate-module=src/modules/perl/libperl.a --disable-shared=perl
 
 $ make
 $ make install

You're my hero ;-)

-- 
Joe Schaefer




[OT] Re: New mod_perl name was [Re: New mod_perl Logo]

2002-01-30 Thread Joe Schaefer

___cliff rayman___ [EMAIL PROTECTED] writes:

 how about Everest?  Niagara? Multiphase? Slipstream?
 DigiServer? Pointillion? Web Mammoth? SharpWeb?
 Web Enterprise?  EnterWeb?

  % perl -wlne 'print if "delmopr" eq join "", sort /./g' dictionary
  premold
  %

Actually I just wish we could standardize on one of either
mod_perl or modperl, so I can stop bouncing emails off 
to the wrong list :)


-- 
Joe Schaefer




Re: Apache::args vs Apache::Request speed

2002-01-29 Thread Joe Schaefer

John Siracusa [EMAIL PROTECTED] writes:

 I'm cc-ing this to the Mac OS X Perl list in the hopes that someone can
 provide a test environment for you.  (I would, but my OS X box is behind a
 firewall at work.)
 
 So how about it, [EMAIL PROTECTED] folks, can any of you help get libapreq up
 and running on OS X at long last? (See message quoted below)
 

Maybe this will help:

  http://fink.sourceforge.net/doc/porting/porting.html

The Unix build of libapreq goes in 3 steps:

  1) build a static c library: libapreq.a

  2) build object files Request.o, Cookie.o

  3) link each object file against libapreq.a to
make shared libraries Request.so Cookie.so

So each file gets its own copy of libapreq, symbols and
all.  For ELF-based systems, that's not a problem since
the linking executable knows not only the library's filename,
but also the location of the desired symbol within the file  
(actually there's something called a jump table inside the 
.so that provides some necessary indirection, but that's a 
technicality that's not particularly relevant to the issue.)

AIUI, the problem with OS/X is that their linker doesn't
provide the executable with a similar addressing scheme
for unresolved symbols; it just tells the executable which 
libraries are needed to resolve them without also telling 
it where to look for a given symbol.  So because Request.bundle 
and Cookie.bundle both (accidentally?) provide resolution for 
libapreq's symbols, the executable gets confused and bails out.

If I'm right here, then removing the libapreq symbols from
Cookie.bundle and Request.bundle should do the trick.  There
are lots of ways to reorganize things in order to achieve this,
and I'm somewhat surprised that none of the OS/X folks have
made any progress in this regard.

So maybe I'm wrong, but I don't think any of us will learn the
answer until some OS/X person actually _attempts_ to fix it.

-- 
Joe Schaefer





Re: Apache::args vs Apache::Request speed

2002-01-28 Thread Joe Schaefer

Stas Bekman [EMAIL PROTECTED] writes:

 Great! Now we have an even broader benchmark. Please tell me when 1.0 is 
 released (in case I get carried away with other things and don't notice 
 the announce) and I'll make sure to update my benchmarking package, 
 re-run the benchmarks and correct the results in the guide.

Great- there's a typo or two in the handler_do sub, but they should be 
pretty obvious when you try to run it.

I hope a new release will be just around the corner, but if you want
to test out some of the latest stuff, have a look at

  http://www.apache.org/~joes/

I don't think we'll have a 1.0 that works on OS/X, but I might be able
to include a patch in the distro that will build the C api of libapreq 
directly into httpd.  This might allow OS/X to run Apache::Request and
Apache::Cookie at the same time, but that platform is unavailable to me
for testing.

-- 
Joe Schaefer




Re: Apache::args vs Apache::Request speed

2002-01-27 Thread Joe Schaefer

Stas Bekman [EMAIL PROTECTED] writes:

 Well, I've run the benchmark and it wasn't the case. Did it change 
 recently? Or do you think that the benchmark is not fair?
 
 we are talking about this item
 http://perl.apache.org/guide/performance.html#Apache_args_vs_Apache_Request

Right- param() was rewritten as XS about 6-8 months ago; since then
I've benchmarked it a few times and found param() to be a bit faster than
args().  We'll be releasing a 1.0 version of libapreq as soon as Jim 
approves of the current CVS version.  Here's what I got using it on 
your benchmark (some differences: the tests were run against localhost 
running perl 5.00503 + mod_perl 1.26 + apache 1.3.22 and using Perl 
handlers instead of Apache::RegistryLoader scripts):

Stas's strings:
  my $query = [
    join("&", map { "$_=" . 'e' x 10 } ('a'..'b')),
    join("&", map { "$_=" . 'e' x 10 } ('a'..'z')),
  ];
Joe's strings:

  %Q = qw/ one alpha two beta three gamma four delta /;
  my $query = [ 
    join("&", map "$_=$Q{$_}", keys %Q),
    join("&", map "$_=" . escape($_), %Q),
  ];

             Stas's Query       Joe's Query
             short    long      short    long

  table       124      91        119     112
  args        125      93        116     110
  do          124     103        121     118

  param       132     106        128     123
  noparse     138     136        133     131

                    REQUESTS PER SECOND

Here I used ab with concurrency = 1 to avoid complications,
but that shouldn't make a difference if we're talking subroutine
performance.  The real disappointment here is handler_table,
which would be the fastest if perl's tied hash implementation
didn't suck so badly.  IMO perl's performance for tied-variable
access is shameful, but apparently the problem is unfixable in
perl5.

HANDLERS:

sub handler_args {
    my $r = shift;
    my %args = $r->args;

    $r->send_http_header('text/plain');
    print join "\n", %args;
}

sub handler_table {
    my $r = Apache::Request->new(shift);
    my %args = %{ $r->param };

    $r->send_http_header('text/plain');
    print join "\n", %args;
}

sub handler_do {
    my $r = Apache::Request->new(shift);
    my %args; $r->param->do( sub { $args{$_[0]} = $_[1]; 1 } );

    $r->send_http_header('text/plain');
    print join "\n", %args;
}

sub handler_param {
    my $r = Apache::Request->new(shift);
    my %args = map +( $_ => $r->param($_) ), $r->param;

    $r->send_http_header('text/plain');
    print join "\n", %args;
}

sub handler_noparse {
    my $r = shift;
    $r->send_http_header('text/plain');
    print "OK";
}

-- 
Joe Schaefer




Re: performance coding project? (was: Re: When to cache)

2002-01-25 Thread Joe Schaefer

Stas Bekman [EMAIL PROTECTED] writes:

 I even have a name for the project: Speedy Code Habits  :)
 
 The point is that I want to develop a coding style which tries hard to  
 do early premature optimizations.

I disagree with the POV you seem to be taking wrt write-time 
optimizations.  IMO, there are precious few situations where
writing Perl in some prescribed style will lead to the fastest code.
What's best for one code segment is often a mediocre (or even stupid)
choice for another.  And there's often no a-priori way to predict this
without being intimate with many dirty aspects of perl's innards.

I'm not at all against divining some abstract _principles_ for
accelerating a given solution to a problem, but trying to develop a 
Speedy Style is IMO folly.  My best and most universal advice would 
be to learn XS (or better Inline) and use a language that was _designed_
for writing finely-tuned sections of code.  But that's in the
post-working-prototype stage, *not* before.

[...]

 mod_perl specific examples from the guide/book ($r->args vs 
 Apache::Request::param, etc)

Well, I've complained about that one before, and since the 
guide's text hasn't changed yet I'll try saying it again:  

  Apache::Request::param() is FASTER THAN Apache::args(),
  and unless someone wants to rewrite args() IN C, it is 
  likely to remain that way. PERIOD.

Of course, if you are satisfied using Apache::args, than it would
be silly to change styles.

YMMV
-- 
Joe Schaefer




Re: make failure for Apache::Request

2002-01-06 Thread Joe Schaefer

Stathy G. Touloumis [EMAIL PROTECTED] writes:

[...]

 usethreads=define use5005threads=define useithreads=undef

[...]

 Request.xs: In function `upload_hook':
 Request.xs:230: `thr' undeclared (first use in this function)

[...]

 Any ideas would be appreciated : )

Try experimenting with something like

  dTHX;

somewhere above line 230, and read the section on threads in
perlguts.  If you can get apreq working with use5005threads, please
submit a patch to 

  [EMAIL PROTECTED]; 

This is one of the two outstanding issues that's holding up a new 
1.0 release.  The other one is a Mac OS-X solution, but nobody's 
volunteered one so far, nor provided evidence of a successful build 
on that platform.

Thanks in advance.

-- 
Joe Schaefer




Re: Form Submit Question and Apache::Request

2001-12-16 Thread Joe Schaefer

El Capitan [EMAIL PROTECTED] writes:

 use Apache::Request;
 
 sub handler() {
   my $r = Apache::Request->new(shift);
   my @list = $r->param('multi_list') || undef;
   ^^

Scalar context is ruining your day :(
Try this instead:

    my @list = $r->param('multi_list');
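
The "|| undef" put param() in scalar context, so you only ever got a
single value back; in list context you get them all:

    my @all = $r->param('multi_list');   # list context: every selected value
    my $one = $r->param('multi_list');   # scalar context: just one value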

HTH
-- 
Joe Schaefer




Re: Q - Apache::Request-new(undef) works?

2001-12-12 Thread Joe Schaefer

darren chamberlain [EMAIL PROTECTED] writes:

 Jay Lawrence [EMAIL PROTECTED] said something to this effect on
 12/11/2001:  
  In my development I neglected to supply the Apache request
  object when I called Apache::Request-new( $r ). Actually $r
  was undef. It still works! I am just wondering if this is
  expected behaviour and if it will be supported going forward or
  was this just a fluke?
 
 The Apache instance that gets passed to Apache::Request::new
 appears not to be required:
 
 # From libapreq-0.33/Request/Request.xs:
 165 static ApacheRequest *sv_2apreq(SV *sv)
^

Unfortunately this isn't the relevant function here- the typemap for 
Apache objects is governed by sv2request_rec, which is part of
mod_perl's perl_util.c file.

-- 
Joe Schaefer




Re: Q - Apache::Request-new(undef) works?

2001-12-11 Thread Joe Schaefer

Jay Lawrence [EMAIL PROTECTED] writes:

 In my development I neglected to supply the Apache request object when
 I called Apache::Request-new( $r ). Actually $r was undef. It still
 works! I am just wondering if this is expected behaviour and if it
 will be supported going forward or was this just a fluke?

I'd say it's a fluke- $r needs to be attached somehow to the actual
request object; otherwise I think you're introducing a memory leak 
and/or possibly a segfault.  Unless Doug says otherwise, I wouldn't 
rely on this behavior for an undef'd arg to Apache::Request::new.

-- 
Joe Schaefer




Re: Can't install Apache::Request on Linux

2001-11-13 Thread Joe Schaefer

Jonathan Bennett [EMAIL PROTECTED] writes:

 Request.xs:40 mod_perl.h: No such file or directory
 
 just after attempting to compile Request.c

libapreq expects to find it using Apache::src->new->inc :

  % perl -MApache::src -wle 'print grep { s/^-I//; -f $_ . "/mod_perl.h" }
      split / /, Apache::src->new->inc'
  /usr/lib/perl5/site_perl/.../auto/Apache/include/modules/perl

 mod_perl.h is present in
 /usr/local/apache/src/modules/perl/
 
 Where do I need to stick the option to make the makefile include the 
 correct paths?
 

You might try tinkering with the INC and LIB settings in 
Request/Makefile.PL.


-- 
Joe Schaefer




Re: Excellent article on Apache/mod_perl at eToys

2001-10-27 Thread Joe Schaefer

Rob Nagler [EMAIL PROTECTED] writes:

 Gunther wrote:
  If you do not have a strongly typed system, then when you break
  apart and rebuild another part of the system, Perl may very well not
  complain when a subtle bug comes up because of the fact that it is
  not strongly typed. Whereas Java will complain quite often and
  usually early with compile time checking.
 
 I don't think there's an objective view about this.  I also think
 the it compiles, so it works attitude is dangerous.  You don't know
 it works until your unit and acceptance tests pass.  I've been in too
 many shops where the nightly build was the extent of the quality
 assurance program.

Exactly- the statically typed languages I am familiar with have a casting 
mechanism to utterly subvert compile-time type checks. While static typing
allows better compile-time optimization, it's value as a debugging 
mechanism is near the bottom of the list of advantages for engineering
a large project.  If interface guarantees are important between Perl 
objects, I'd have a look at Class::Contract.

  Compile time checking can definitely be a friend of yours especially
  when dealing with large systems. But it's also a friend that's
  judgemental (strongly typed) so he's a pain to drag along to a party
 
 To me, strongly vs weakly typed is less descriptive than statically vs
 dynamically typed.  

To me, it is utterly nondescriptive in a PHB buzzwordy way, whereas 
static vs. dynamic typing is meaningful (and what I think most people
really mean by the former).

[...]

 Here's a strong statement: Threads have no place in information
 systems.  The NYSE is run on Tandem boxes.  Tandem's OS does not have
 threads.  The NYSE can process over a billion stock transactions a
 day.  The EJB spec says you can't fire off threads in a bean.  I think
 there's a reason for the way these systems have been architected.
 
 Threads are a false economy for systems which have to scale.  As some
 people have joked, Java is Sun's way of moving E10K servers.  SMP
 doesn't scale.  As soon as you outgrow your box, you are hosed.  A
 shared memory cache doesn't work well over the wire.  In my
 experience, the only way to build large scale systems is with
 stateless, single-threaded servers.
  ^^

Could you say some more about what you mean by this?  Do you mean 
something like

  use a functional language (like Haskell or Scheme), rather 
   than an imperative language (like C, Java, Perl ...),

or are you talking more about the application's platform and design
(e.g. http://www.kegel.com/c10k.html )?

-- 
Joe Schaefer




IBM patents Template Systems?

2001-10-17 Thread Joe Schaefer

Has anyone else noticed this?

http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2Sect2=HITOFFp=1u=/netahtml/search-bool.htmlr=10f=Gl=50co1=ANDd=ft00s1=HTMLOS=HTMLRS=HTML

A casual reading seems to suggest that most mod_perl-based
templating systems do exactly what this patent will cover: 
i.e. set up a non-HTML based website where templates 
dynamically convert non-HTML files into HTML.

The patent was filed June 19, 1998: surely there must be
prior art out there?

-- 
Joe Schaefer




Re: IBM patents Template Systems?

2001-10-17 Thread Joe Schaefer

Ged Haywood [EMAIL PROTECTED] writes:

 On Wed, 17 Oct 2001, Nathan Torkington wrote:
 
  Joe Schaefer writes:
   A causal reading seems to suggest that most mod_perl-based
   templating systems do exactly what this patent will cover: 
  
  the tool generates the customized Web site without the web site
  creator writing any HTML or other programming code.
 
 An only slightly less casual reading indicates that anyone who writes
 
 use strict;
 
 or 
 
 <html>
 
 isn't at risk of violating this patent.

s/write/must write/ and I'd agree with you.  However, like the 
one-click patent, it appears to stake a dubious claim that makes 
me worry about content management systems that are too user 
friendly, or perhaps just not HTML-centric.  From

DESCRIPTION OF THE PREFERRED EMBODIMENTS

...The tool facilitates the creation of a customized Web site 
without requiring a Web site creator to write or edit HTML code

IANAL, and I'm not trying to be overly alarmist, but the 
listed embodiments in that section are to me quite sweeping 
(apparently covering dynamic, offline, and mixed content 
generation tools).  The functionality described there seemingly 
existed in commercial products like MS Frontpage 97, and still 
the patent was granted.

I also don't know how XML fits into all this ( is it legally
considered HTML, a programming language, or a data format ?),
so I'll shut up now.  I just thought people here should be 
aware of the issue so perhaps the right people can do something 
about it (if need be).

-- 
Joe Schaefer




Re: Apache::Request UPLOAD_HOOK

2001-10-08 Thread Joe Schaefer

Issac Goldstand [EMAIL PROTECTED] writes:

 The documentation on how to use this feature is a bit sketchy... 

Yes, I agree. Doc patches are always welcome.  

Comments below are from memory since I last tested this feature 
about 6 months ago.

 Can anyone explain: 1) What the variables passed to the callback
 function are (looks like the Apache::Upload object is the first, but
 what's been filled in there when the hook gets called? 

The current upload object goes there when the hook is called.

 The second looks like the current bunch of data that's been
 recieved[?], 

Right, it's the buffer from apache's ap_get_client_block,
which is usually around 2-4 KB.  The hook runs before the
buffer gets written to the underlying tempfile, but as soon 
as your hook has completed, Apache::Request will write it 
automatically.

 the third is the length, but is that the length recieved so far or the
 length recieved between the last time it was called and this time?

The length of the buffer; the same as length($buffer).

 And lastly, what can be placed in HOOK_DATA - scalar only?)

Yes, but the scalar can also be a ref to an array or hash.

 2) Is there any way of knowing how often the hook will get called? 

Not really- it's called when apache calls ap_get_client_block.

 3) Is there a specific phase of the Request that Apache::Request 
 must be called and initialized with the callback before?  

The hooks get run as the data is uploaded to the server,
which IOW is when the data is first being parsed.  This
can happen at any phase you choose, but it only happens
once per request.

 4) Are there any specific issues for using this with
 Apache::Request-instance ?

Other than (3), I don't think so- but as I said before
this is not a well-tested feature (yet :)
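
If it helps, here's a rough sketch of a hook that just tracks upload
progress (untested here; the variable names are only for illustration):

  my %progress;   # bytes seen so far, keyed by upload field name

  my $apr = Apache::Request->new($r,
      HOOK_DATA   => \%progress,             # a plain scalar or a ref
      UPLOAD_HOOK => sub {
          my ($upload, $buf, $len, $data) = @_;
          $data->{ $upload->name } += $len;  # track how much has arrived
          # Apache::Request still writes $buf to the tempfile afterwards
      },
  );

  my @params = $apr->param;   # parsing (and therefore the hook) runs here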

HTH
-- 
Joe Schaefer




Re: OT: secure mod_perl enabled apache on MacOSX 10.1

2001-10-04 Thread Joe Schaefer

John Siracusa [EMAIL PROTECTED] writes:

 P.S.- d) Apache::Request and Apache::Cookie still can't be 
 loaded simultaneously! :(

Why not?  Does it work OK with a statically-linked mod_perl?

-- 
Joe Schaefer




[OT] clp.moderated, was- Re: Backticks as fast as XS

2001-09-27 Thread Joe Schaefer

Matt Sergeant [EMAIL PROTECTED] writes:

  -Original Message-
  From: Doug MacEachern [mailto:[EMAIL PROTECTED]]
  
  i'm guessing part of the difference in your code is due to 
  fprintf having a pre-allocated buffer, whereas the SV's SvPVX 
  has not been pre-allocated and gets realloc-ed each time you 
  call sv_catpv.  have a look at the code below, fprintf is faster 
  than sv_catpvn, but if the SvPVX is preallocated,
  sv_catpvn becomes faster than fprintf:
  
  timethese(1_000, {
  fprintf   = sub { TickTest::fprintf() },
  svcat = sub { TickTest::svcat() },
  svcat_pre = sub { TickTest::svcat_pre() },
  });
  
  Benchmark: timing 1000 iterations of fprintf, svcat, svcat_pre...
 fprintf:  9 wallclock secs ( 8.72 usr +  0.00 sys =  8.72 
  CPU) @ 114.68/s (n=1000)
   svcat: 13 wallclock secs (12.82 usr +  0.00 sys = 12.82 
  CPU) @ 78.00/s (n=1000)
   svcat_pre:  2 wallclock secs ( 2.75 usr +  0.00 sys =  2.75 
  CPU) @ 363.64/s (n=1000)
 
 Very interesting. I'll try that (I wish you'd been listening to clp.mod when
 I posted - other perl gurus weren't much help).
 
 Matt.

<OT type=rant>
IMO clp.mod is a sinkhole for perl-related discussions, and like
Abigail[1], despite its noble intentions I think clp.mod fails to 
serve the Perl community in any useful way.  IME, this mailing list (and 
p5p when appropriate) is one of the best places to ask about perl 
performance-related issues (that aren't covered by Stas' guide, 
of course :).  You could also don the high rubber boots and try 
clp.misc, which IMO seems to be a bit improved since the creation 
of the beginners' [2] mailing lists.

[1]: (long, unwrapped url)
  
http://groups.google.com/groups?hl=engroup=comp.lang.perl.miscrnum=1selm=slrn820ckc.ls3.abigail%40alexandra.delanet.com

  Personally I had similar experiences with clp.moderated, 
  so I don't read it anymore either.

[2]: IIRC, they are at
  http://learn.perl.org/

 but I can't connect to it at the moment.

</OT>

-- 
Joe Schaefer




Re: IPC::Shareable, or, it's supposed to SHARE it, not make more!

2001-09-01 Thread Joe Schaefer

Rob Bloodgood [EMAIL PROTECTED] writes:

 The code in expire_old_accounts is creating a new tied ARRAYREF instead of
 replacing the value of the hash key on this line:
 
 $ACCOUNTS{'QUEUE'} = [@accounts]; #also tried \@accounts;
 
 This didn't happen w/ IPC::Shareable 0.52.  But 0.6 is apparently very
 different, and I can't make the code look like it wants, so the new
 reference is a replacement, not an autovivication.
 

Sorry, but I think you're SOL with IPC::Shareable.  Use something else
instead- I like BerkeleyDB for things like this, but you'll have to 
serialize/stringify your hash values first.

-- 
Joe Schaefer




Re: Problems installing libapreq

2001-08-15 Thread Joe Schaefer

Nick Tonkin [EMAIL PROTECTED] writes:

 I'm trying to install libapreq on a new box on which I've rolled an
 apache-mod_ssl-openssl combo. 
  ^^

Have you already built modperl into your apache server?  You should 
do that first, if you haven't done so already.  If you have, the 
include directories (-I) that apreq uses to find the header files
are listed with

  % perl -MApache::src -wle 'print Apache::src->new->inc'

 apache_request.h:5: httpd.h: No such file or directory
 apache_request.h:6: http_config.h: No such file or directory
 apache_request.h:7: http_core.h: No such file or directory
 apache_request.h:8: http_log.h: No such file or directory
 apache_request.h:9: http_main.h: No such file or directory
 apache_request.h:10: http_protocol.h: No such file or directory
 apache_request.h:11: util_script.h: No such file or directory
 *** Error code 1

Somehow they are not getting setup right.

-- 
Joe Schaefer




Re: problem installing libapreq

2001-08-14 Thread Joe Schaefer

Nick Tonkin [EMAIL PROTECTED] writes:

 I'm trying to install libapreq on a new box on which I've rolled an
 apache-mod_ssl-openssl combo. As you may know this puts apache in
 /usr/local/apachessl.
 
 I followed the hint in INSTALL and did ./configure
 --with-apache-includes=/usr/local/apachessl/include which works fine, but
 make generates an error:


./configure was removed last December; unfortunately the INSTALL
document wasn't updated to reflect this.  Try doing the 
vanilla install procedure

  % perl Makefile.PL
  % make
  % su perladmin
  % make install

on the latest version (0.33) on CPAN, or from 

  http://httpd.apache.org/apreq/

If you hit a bump somewhere, please report it to the 
apreq mailing list:

  [EMAIL PROTECTED]

Best wishes.

-- 
Joe Schaefer




Re: Apache::Upload bug?

2001-08-09 Thread Joe Schaefer

Jeffrey Hartmann [EMAIL PROTECTED] writes:

 After much tracing I found that the problem occurs in the command my
 $fh = $file-fh; where Apache::Upload dup()'s the filehandle and
 passed the duplicate to the perl script.  Then when the program exits
 the perl script still has an open filehandle to that file.  I fixed my
 problem by adding a close $$fh;  to my program which closed the
 duplicate filehandle. 

You have hit the nail on the head; the refcount for $$fh is too high.
I see the problem with apache1.3.19 + modperl 1.25 + perl 5.6.1, 
but I don't know if the same holds for earlier versions.  

It appears that the C code generated by xsubpp is the culprit, but I'm 
not exactly sure about this.

In the meantime, try inserting the following line at the top of the 
CLEANUP: section of ApacheUpload_fh in Request.xs:

  SvREFCNT_dec(SvRV(ST(0))); /* ~ line 523 in Request.xs */

It seems to do the trick for single file uploads.  Please let us
know if this fixes the problem for you.

 Now I'm not sure if this is a bug or if it's supposed to be like that, 
 but the documentation makes it sound like it gives you the actually 
 filehandle of the tempfile, and not a copy.  I just assumed that it 
 would be closed by Apache::Upload when the request was finished.

The duping behavior is desirable, but the filehandle remaining open 
is definitely a bug; and it may have been around for quite a while.

Thanks a bunch for fleshing it out.

-- 
Joe Schaefer




Re: Apache::Upload and Image::Magick problems

2001-08-06 Thread Joe Schaefer

Jeffrey Hartmann [EMAIL PROTECTED] writes:

 2).  Apache::Upload seams to delete it's temp file, however when I run df
 the memory that file used is still allocated but there are no files in the
 /tmp dir.  I've commented out all of the Image::Magick code in that block so
 that Image::Magick never uses or accesses the file and the allocated memory
 problem still exists.  So this means that I end up with 100 meg of used
 space with only half that showing up as files in the dir.  I can delete all
 the Image::Magick files in the dir, but that's only half of the space used.
 
 Now I used to use a normal /tmp dir for this, and I still had the
 Image::Magick problem, but I never noticed the Apache::Upload one until now.
 So I don't know if this problem exists without with tmpfs.  Also if I
 restart/graceful Apache it seems to release the disk space that it's
 holding.

It sounds like the apreq? file in /tmp is remove()d at the end of the
request, but the file somehow stays open until the process exits. Using
lsof or running a stack trace on httpd -X should tell you if this is 
the case.  The relevant cleanup code is the remove_tmpfile() function in 
c/apache_request.c. If the ap_pfclose() call is failing, it should show 
up in your apache logs.  

If that's not the problem, maybe there's a persistent Apache::Upload
object somewhere in your code?

[...]

  foreach my $file (@upload)
 {
 
 my $i = Image::Magick->new(magick => $type);
 $err = $i->ReadImage(file => $$fh);
 $i->Set(magick => $type);
 $err = $i->WriteImage(filename => $filename);
 warn $err if $err;
 undef $i;
 }
 

Try appending 

  undef @upload;

to make sure there are no Apache::Upload objects still floating 
around when your handler exits.

HTH
-- 
Joe Schaefer





Proxy and KeepAlives in 1.3.19

2001-06-13 Thread Joe Schaefer

Just wondering- has anybody noticed that starting with 1.3.19,
mod_proxy now maintains the browser-proxy connection status 
(provided the browser is using HTTP/1.1, which NS 4.7* doesn't)?
ICBW, but I don't think the current 1.3.* mod_proxy claims to be 
a true HTTP/1.1 proxy (yet).

In the past mod_proxy always forcibly downgraded the connection 
to HTTP/1.0 and closed the connection.

Personally, I didn't care much for the pre-1.3.19 behavior :-)
-- 
Joe Schaefer





Re: Apache growing (memory)

2001-04-25 Thread Joe Schaefer

Kurt George Gjerde [EMAIL PROTECTED] writes:

 Each time a page is downloaded the Apache service process claims more
 memory. Well, not each time but like for every 20th download the task
 manager shows Apache using 20-30K more...
 
 A test showed that reloading the same page 2000 times raised Apaches
 memory by approx 1000k.
 

Even if your script were coded perfectly, it is still possible for this
to happen in modperl.  Three main reasons are

1) the copy-on-write effect will cause memory growth (~4K per page),
2) Perl's garbage collection and overall memory management don't 
   really mesh well with modperl,
3) Some XS modules do leak, and occasionally so does perl itself :(

Personally, I would consider an average growth rate of only .5kB/hit 
absolutely wonderful :)

However, odds are that if you are just starting out with mod_perl,
your script can be written in a more memory-friendly way.  Be sure
to consult the guide for tips:

  http://perl.apache.org/guide/
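
One of the first things to check is that you preload your big modules
in the parent process, so the children can share those pages.  A
bare-bones startup.pl (the path and module list here are just an example):

  # loaded from httpd.conf with:  PerlRequire /usr/local/apache/conf/startup.pl
  use strict;
  use Apache ();
  use CGI ();
  CGI->compile(':all');   # precompile CGI.pm's autoloaded methods
  use DBI ();             # ...plus whatever other big modules your code uses
  1;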


-- 
Joe Schaefer



Re: is Apache::Request upload-fh lazy or not?

2001-04-15 Thread Joe Schaefer

"Thomas K. Burkholder" [EMAIL PROTECTED] writes:

 Hi again,
 
 Thanks to those who helped me with the install issue.  I've mostly
 resolved that, and now have a new question that seems much more
 mod-perl-ish:
 
 I have code that does essentially this:
 
 sub handler {
   my $r = shift;
   $r = Apache::Request->new($r);
   my $handle = $r->upload('filename');
   my $fh = $handle->fh;
   ...
 }
 
 I'd like to write upload code that shows progress (via a fork and
 refresh header trick - don't worry about that).  What I'm wondering is,
 by the time I get $fh above (or even by the time I'm in the handler for
 all I know) do I already have the entire file uploaded, or is it done
 lazily as reads are performed on $fh?  I'm not very good at reading this
 XS stuff, but I looked at libapreq and from that I suspect it's all done
 up front.  Is there any way to read it incrementally?
 
 Has anyone solved this problem before?  I want to provide an upload
 function for my web app that shows the user periodic progress or maybe
 even allows them to cancel it part way through.  Is this too much of a
 reach for http/mod_perl?

I've managed to do this in the past by modifying mod_proxy on the
front-end server, but I think the ability to do this from within 
Apache::Request would be a nice feature.  Recently David Prosa added 
an upload hook to the C API for libapreq, and yesterday I put together 
a patch to Request.xs that provides a Perl interface.  

It would work something like this:

  my $hook = sub {
      my ($upload, $buf, $len, $data) = @_;
      my $fh = $upload->fh;
      print $fh $buf;
      warn "HOOK_DATA = $data: wrote $len bytes for " . $upload->name;
  };

  my $apr = Apache::Request->new( Apache->request,
                                  HOOK_DATA   => "Some parameters",
                                  UPLOAD_HOOK => $hook,
                                );

  my $parm_table = $apr->param; # causes upload parsing to begin
  my $upload = $apr->upload('file');
  my $fh = $upload->fh;
  ...

In this case, $hook will be called in place of the normal internal 
(buffered) writes to a temp file.  Note that $upload->fh now
must auto-vivify a temp file if none was created by libapreq itself.
That's probably a feature, but it may require some care if the hook sub 
is poorly designed.  $upload->size will now count the size of the data
sent by the browser, which may or may not be the same as the size of
the temp file (depending on what the hook does with it).  Otherwise
upload objects should behave normally ($upload->fh still dup's and 
seek's outside of the hook sub).

I'll post the diff to the apreq-dev list after I've tested it a bit
more.  Also, please contact me or follow up to the list if anyone has 
some comments/suggestions for improving the API before committing it to
CVS.

Thanks.

-- 
Joe Schaefer



Re: is Apache::Request upload-fh lazy or not?

2001-04-15 Thread Joe Schaefer

Joe Schaefer [EMAIL PROTECTED] writes:

 Apache::Request would be a nice feature.  Recently David Prosa added 
 ^^^

Ugh- he's David *Welton*, and he's using libapreq in his mod_dtcl 
module.  Damn, that's twice I've done that to him - I'm very, very 
sorry, David :(


-- 
Joe Schaefer



Repost with typos corrected--Instance variable inheritance

2001-01-30 Thread Joe Schaefer


"Christopher L. Everett" [EMAIL PROTECTED] writes:

 sub handler ($$) {
   my ($self, $q);
 
   my $r = Apache::Request-new($q);
 
   # send headers here
 
   print $self-name;
  
$self is a string ("simian"), not an object ref; for this 
line to work, you need to make one by calling "new" somewhere,
or guard against misuse via:

print $self->name if ref $self;

HTH
-- 
Joe Schaefer



Re: Instance variable inheritance: is it just me or am I doing something dumb?

2001-01-29 Thread Joe Schaefer

"Christopher L. Everett" [EMAIL PROTECTED] writes:

 Check these modules:
 
 package simian;
 use fields qw (name);
 
 use strict; #very important!
 use Apache;
 use Apache::Request;
 use Apache::Constants qw(:common);
 
 sub new {
   my $type = shift;
   my class1 $self = fields::new(ref $type || $type);
   ^^
ITYM   simian

   $self-{name} = 'Jane';
   return $self-{name}; # error here
^

"new" should return a blessed reference ($self here); 
instead you are returning "Jane".  Any later attempts 
to dereference the result of "new" ( aka "Jane") will 
produce an error under strictures.

 You will get an error like the following:
 
 Can't use string ("Exchange::MyAccount3") as a HASH ref while "strict
 refs" in use at /usr/local/lib/perl5/site_perl/MyApp/Framework3.pm line
 245.
 
 What the heck am I doing wrong?

Check line 245 in Framework3.pm ; if it tries to dereference a 
simian object constructed from your simian::new sub, then you'll 
get an error.
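
In other words, something closer to this (sketch only):

  package simian;
  use strict;
  use fields qw(name);

  sub new {
      my $type = shift;
      my simian $self = fields::new(ref $type || $type);
      $self->{name} = 'Jane';
      return $self;    # return the blessed reference, not a field's value
  }

  sub name { my simian $self = shift; return $self->{name} }

  1;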

More free advice- "fields" are fairly unreliable in 5.005_03, 
especially regarding inherited object attributes.  (I don't know 
if they're more reliable in 5.6, since I don't view 5.6 as reliable 
enough itself :)

I'd recommend you take another approach, like Class::Contract 
or Tie::SecureHash instead.  See Conway's _Object Oriented Perl_ 
for details.


HTH
-- 
Joe Schaefer



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl withscriptsthat contain un-shared memory

2001-01-04 Thread Joe Schaefer

Roger Espel Llima [EMAIL PROTECTED] writes:

 "Jeremy Howard" [EMAIL PROTECTED] wrote:

I'm pretty sure I'm the person whose words you're quoting here,
not Jeremy's.

  A backend server can realistically handle multiple frontend requests, since
  the frontend server must stick around until the data has been delivered
  to the client (at least that's my understanding of the lingering-close
  issue that was recently discussed at length here). 
 
 I won't enter the {Fast,Speedy}-CGI debates, having never played
 with these, but the picture you're painting about delivering data to
 the clients is just a little bit too bleak.

It's a "hypothetical", and I obviously exaggerated the numbers to show
the advantage of a front/back end architecture for "comparative benchmarks" 
like these.  As you well know, the relevant issue is the percentage of time 
spent generating the content relative to the entire time spent servicing 
the request.  If you don't like seconds, rescale it to your favorite 
time window.

 With a frontend/backend mod_perl setup, the frontend server sticks
 around for a second or two as part of the lingering_close routine,
 but it doesn't have to wait for the client to finish reading all the
 data.  Fortunately enough, spoonfeeding data to slow clients is
 handled by the OS kernel.

Right- relative to the time it takes the backend to actually 
create and deliver the content to the frontend, a second or
two can be an eternity.  

Best.
-- 
Joe Schaefer




Re: Apache::Request param() problem?

2001-01-03 Thread Joe Schaefer

"James Sheridan-Peters" [EMAIL PROTECTED] writes:

 Quick summary:
 
 Pulling parameters from a POST method using Apache::Request, largely to make
 it easier to deal with multiple value variables.  The problem occurs if I
 have two variables, differentiated only by case (eg. wanthelp=something
 and wantHelp=somethingelse).
 
 Given the about pseudo-values, when using apr-param('wanthelp') OR
 apr-param('wantHelp') I get the same response, namely an array of two
 values something and somethingelse for each.
 
 Unfortunately, I have no control over what the variable names will be nor
 how many parameters they will have, so I must handle all possible cases.  Am
 I missing some parameter to make this case-sensitive?  Is there a better way
 to do this than using Apache::Request?

Apache::Request parses the request data into apache tables, which
by nature use case-insensitive keys.  I'm afraid you're SOL if you
need to differentiate case-sensitive parameter keys like "wantHelp" and
"wanthelp" with the current implementation of libapreq/Apache::Request.  
Probably CGI is your best bet for now- but you could submit a request/bug 
report to the apreq list at

[EMAIL PROTECTED]
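
In the meantime, if you really must tell them apart, CGI.pm's param()
does preserve the case of the keys.  An (untested) Apache::Registry-style
sketch:

  use strict;
  use CGI ();

  my $q = CGI->new;
  my @lower = $q->param('wanthelp');   # values for 'wanthelp' only
  my @upper = $q->param('wantHelp');   # values for 'wantHelp' only

  print $q->header('text/plain');
  print "wanthelp=@lower\nwantHelp=@upper\n";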

HTH.
-- 
Joe Schaefer



Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl withscriptsthat contain un-shared memory

2000-12-22 Thread Joe Schaefer

"Jeremy Howard" [EMAIL PROTECTED] writes:

 Perrin Harkins wrote:
  What I was saying is that it doesn't make sense for one to need fewer
  interpreters than the other to handle the same concurrency.  If you have
  10 requests at the same time, you need 10 interpreters.  There's no way
  speedycgi can do it with fewer, unless it actually makes some of them
  wait.  That could be happening, due to the fork-on-demand model, although
  your warmup round (priming the pump) should take care of that.

A backend server can realistically handle multiple frontend requests, since
the frontend server must stick around until the data has been delivered
to the client (at least that's my understanding of the lingering-close
issue that was recently discussed at length here). Hypothetically speaking,
if a "FastCGI-like"[1] backend can deliver it's content faster than the 
apache (front-end) server can "proxy" it to the client, you won't need as 
many to handle the same (front-end) traffic load.

As an extreme hypothetical example, say that over a 5 second period you
are barraged with 100 modem requests that typically would take 5s each to 
service.  This means (sans lingerd :) that at the end of your 5 second 
period, you have 100 active apache children around.

But if new requests during that 5 second interval were only received at 
20/second, and your "FastCGI-like" server could deliver the content to
apache in one second, you might only have forked 50-60 "FastCGI-like" new 
processes to handle all 100 requests (forks take a little time :).

Moreover, an MRU design allows the transient effects of a short burst 
of abnormally heavy traffic to dissipate quickly, and IMHO that's its 
chief advantage over LRU.  To return to this hypothetical, suppose 
that immediately following this short burst, we maintain a sustained 
traffic of 20 new requests per second. Since it takes 5 seconds to 
deliver the content, that amounts to a sustained concurrency level 
of 100. The "Fast-CGI like" backend may have initially reacted by forking 
50-60 processes, but with MRU only 20-30 processes will actually be 
handling the load, and this reduction would happen almost immediately 
in this hypothetical.  This means that the remaining transient 20-30 
processes could be quickly killed off or _moved to swap_ without adversely 
affecting server performance.
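
(The arithmetic behind that concurrency figure is just arrival rate
times response time; in this made-up example:

  my $arrival_rate  = 20;   # new requests per second
  my $response_time = 5;    # seconds to spoon-feed a page to a modem client
  my $concurrency   = $arrival_rate * $response_time;   # ~100 in flight

and since the backend only stays busy for the ~1 second it needs to
generate each page, that's where the 20-30 figure comes from.)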

Again, this is all purely hypothetical - I don't have benchmarks to
back it up ;)

 I don't know if Speedy fixes this, but one problem with mod_perl v1 is that
 if, for instance, a large POST request is being uploaded, this takes a whole
 perl interpreter while the transaction is occurring. This is at least one
 place where a Perl interpreter should not be needed.
 
 Of course, this could be overcome if an HTTP Accelerator is used that takes
 the whole request before passing it to a local httpd, but I don't know of
 any proxies that work this way (AFAIK they all pass the packets as they
 arrive).

I posted a patch to modproxy a few months ago that specifically 
addresses this issue.  It has a ProxyPostMax directive that changes 
it's behavior to a store-and-forward proxy for POST data (it also enabled 
keepalives on the browser-side connection if they were enabled on the 
frontend server.)

It does this by buffering the data to a temp file on the proxy before 
opening the backend socket.  It's straightforward to make it buffer to 
a portion of RAM instead- if you're interested I can post another patch 
that does this also, but it's pretty much untested.


[1] I've never used SpeedyCGI, so I've refrained from specifically discussing 
it. Also, a mod_perl backend server using Apache::Registry can be viewed as 
"FastCGI-like" for the purpose of my argument.

-- 
Joe Schaefer




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Joe Schaefer


[ Sorry for accidentally spamming people on the
  list.  I was ticked off by this "benchmark",
  and accidentally forgot to clean up the reply 
  names.  I won't let it happen again :(  ]

Matt Sergeant [EMAIL PROTECTED] writes:

 On Thu, 21 Dec 2000, Ken Williams wrote:
 
  Well then, why doesn't somebody just make an Apache directive to control how
  hits are divvied out to the children?  Something like
 
NextChild most-recent
NextChild least-recent
NextChild (blah...)
 
  but more well-considered in name.  Not sure whether a config directive
  would do it, or whether it would have to be a startup command-line
  switch.  Or maybe a directive that can only happen in a startup config
  file, not a .htaccess file.
 
 Probably nobody wants to do it because Apache 2.0 fixes this "bug".
 

KeepAlive On

:)

All kidding aside, the problem with modperl is memory consumption, 
and to use modperl seriously, you currently have to code around 
that (preloading commonly used modules like CGI, or running it in 
a frontend/backend config similar to FastCGI.)  FastCGI and modperl
are fundamentally different technologies.  Both have the ability
to accelerate CGI scripts;  however, modperl can do quite a bit
more than that. 

Claimed benchmarks that are designed to exploit this memory issue 
are quite silly, especially when the actual results are never 
revealed. It's overzealous advocacy or FUD, depending on which 
side of the fence you are sitting on.


-- 
Joe Schaefer




Re: Fwd: [speedycgi] Speedycgi scales better than mod_perl with scripts that contain un-shared memory

2000-12-21 Thread Joe Schaefer

Gunther Birznieks [EMAIL PROTECTED] writes:

 But instead he crafted an experiment to show that in this particular case 
 (and some applications do satisfy this case) SpeedyCGI has a particular 
 benefit.

And what do I have to do to repeat it? Unlearn everything in Stas'
guide?

 
 This is why people use different tools for different jobs -- because 
 architecturally they are designed for different things. SpeedyCGI is 
 designed in a different way from mod_perl. What I believe Sam is saying is 
 that there is a particular real-world scenario where SpeedyCGI likely has 
 better performance benefits to mod_perl.

Sure, and that's why some people use it.  But to say

"Speedycgi scales better than mod_perl with  scripts that contain un-shared memory"

is to me quite similar to saying

"SUV's are better than cars since they're safer to drive drunk in."

 
 Discouraging the posting of experimental information like this is where the 
 FUD will lie. This isn't an advertisement in ComputerWorld by Microsoft or 
 Oracle, it's a posting on a mailing list. Open for discussion.

Maybe I'm wrong about this, but I didn't see any mention of the 
apparatus used in his experiment.  I only saw what you posted,
and your post had only anecdotal remarks of results without
detailing any config info.

I'm all for free and open discussions because they can
point to interesting new ideas.  However, some attempt at 
full disclosure (comments on the config used are at least as 
important as anecdotal remarks about the results) is 
necessary so objective opinions can be formed.

-- 
Joe Schaefer



Re: Linux Hello World Benchmarks...

2000-12-11 Thread Joe Schaefer

Joshua Chamas [EMAIL PROTECTED] writes:

 RESULTS:
 
 [hello]# ./bench.pl -time=60
 ...
 Test Name        Test File     Hits/sec   Total Hits   Total Time   Total Bytes
 ---------------  -----------   --------   ----------   ----------   -----------
 HTML Static      hello.html    1211.6     5 hits       41.27 sec    12450249 byt
 ModPerl Handler  hello.bench   888.9      5 hits       56.25 sec    6700268 byte
[...]

IME, simple mod_perl handlers typically run around 50% as fast as 
HTML static pages.  Your hello world benchmark seems to be slightly 
misleading in this respect, since the content-length is small 
relative to the header size.

For your HTML Static, I would guess that the headers delivered are
significantly larger than the ones returned by your modperl handler.
Hence for 5 hits, there is a significant discrepancy in the 
total bytes delivered.  This skews the hits/sec numbers in the
favor of content handlers that deliver shorter headers.

To use HTML Static as a baseline comparison for dynamic content 
handlers, I think you should ensure that the headers delivered
are comparable to those that are dynamically generated.
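
A quick way to eyeball the two (sketch; the URLs are placeholders):

  use LWP::UserAgent ();
  use HTTP::Request ();

  my $ua = LWP::UserAgent->new;
  for my $url (qw(http://localhost/hello.html http://localhost/hello.bench)) {
      my $res  = $ua->request(HTTP::Request->new(GET => $url));
      my $hdrs = length $res->headers->as_string;
      my $body = length $res->content;
      print "$url: $hdrs header bytes, $body body bytes\n";
  }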

Best.
-- 
Joe Schaefer



Re: Linux Hello World Benchmarks...

2000-12-11 Thread Joe Schaefer

Joshua Chamas [EMAIL PROTECTED] writes:

 Joe Schaefer wrote:
  
  IME, simple mod_perl handlers typically run around 50% as fast as
  HTML static pages.  Your hello world benchmark seems to be slightly
  misleading in this respect, since the content-length is small
  relative to the header size.
  
 
 I'll send you my benchmark suite separately so you can
 submit your results for http://www.chamas.com/bench.  I have
 never seen modperl handler faster than static HTML.

Me neither - what I said was that modperl handlers are about twice
as slow as static pages.  That's what I meant, but I guess I didn't
make it clear.

Best.
-- 
Joe Schaefer



Re: Does mod_perl have anything to do with KeepAlive?

2000-11-28 Thread Joe Schaefer

Stas Bekman [EMAIL PROTECTED] writes:

 This is not a real world benchmark. It may generate the real world
 load, but not the real world usage. And once you realize that this
 cool speedup that you show doesn't really happen.
 

Of course not- I only posted those results to show that under certain
real conditions, it can make a real difference.  Oftentimes a single
URL will contain links to 20+ URLs on the same server (many of them
304's). If you have keepalives enabled (even if only for 1 or 2 seconds), 
the page loading wait is often noticeable for modem users.

In my particular line of work, any excess latency is intolerable.  
My customers want almost instantaneous page loads, so my servers 
are built to accomodate them.  Heavy server loads are not of 
paramount concern.

I'm sure that is quite different from what your needs are, and 
that's my point. It all depends.

 
 That's true, my point was the same -- check your numbers and see
 whether it helps or not. Definitely not with ab or any other benchmark
 tool.
 

Nothing wrong with ab - just know its limitations (or how to make it
say what you want ;).

-- 
Joe Schaefer


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: Proxy adding headers

2000-11-27 Thread Joe Schaefer

Dave Rolsky [EMAIL PROTECTED] writes:

 On Mon, 27 Nov 2000, Sander van Zoest wrote:
 
  Normally the mod_proxy code doesn't touch the headers, it simply sends
  the headers on from the remote server you are proxying too. I doubt
  it is mod_rewrite but it could be. The extra headers are probably coming
  from your remote server you are proxying to.
 
 I don't think so.  If I do a telnet to the proxy server (port 80, no
 mod_perl) I get the extra headers.  If I telnet to the mod_perl enabled
 server (port 12345), I get what I want (no extra headers).
 

Technically mod_proxy doesn't generate headers (here I'm 
taking the simple case of a non-caching http request). 
After stripping the connection header, it simply passes 
the headers it got from the backend server right on along
to r->connection via ap_proxy_send_hdr_line (or
ap_proxy_send_headers for cached files IIRC).

Moreover if your backend server calls ap_send_http_header 
during its response, it *must* return a Server and Date field 
unless your request is assbackwards. See the src for 
ap_basic_http_header in src/main/http_protocol.c.

mod_proxy will upgrade assbackwards requests to HTTP/1.0 
before passing them along to the backend server, which may 
explain why the date field shows up in your telnet experiments. 
Why not post the full output of your telnet sessions so we 
can see what is really going on?


HTH.
-- 
Joe Schaefer


-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: Proxy adding headers

2000-11-27 Thread Joe Schaefer

Dave Rolsky [EMAIL PROTECTED] writes:

 I'm afraid of C.

Don't be. (perl is C with cream and sugar added ;)  Grab
a copy of Kernighan and Ritchie's _The C Programming Language_
(it's authoritative, and all of 274 pages cover to cover).  
Then start monkeying around in src/modules/proxy/proxy_http.c 
and/or src/main/http_protocol.c and see what you can cook up.  
Tweaking and/or commenting out a few lines should do the trick.
Just be sure to back up your apache tree before you start tinkering.

Best.
-- 
Joe Schaefer

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: libapreq and file upload

2000-11-25 Thread Joe Schaefer

"John Reid" [EMAIL PROTECTED] writes:

 Hi All
 
 I am helping someone debug an application in using mod_perl on Win32. Our
 page parser handler handles fileuploads automatically using the
 Apache::Upload object. The person in question is uploading word documents
 and they are being corrupted in transit. It looks like CRLF substitution. We
 have tried all the usual binmode stuff but with no success. Can anyone shed
 some light?

Are you using a patched libapreq, or is it "stock"?  Did you do a tcpdump
on the connection to compare the transmitted file against the contents
of your Apache::Upload object? I don't think CRLF translation should be 
occurring within libapreq, but then again I don't run apache on windows
either :)

Please contact me at the email below if you're having problems with the
patch I submitted.

HTH
-- 
Joe Schaefer
[EMAIL PROTECTED]

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: [ libapreq ] desesperatly need of .32 or .33

2000-11-22 Thread Joe Schaefer

Matt Sergeant [EMAIL PROTECTED] writes:

 Doug mentioned to me at ApacheCon (or it may have been back at TPC) that
 he would like someone else to take over maintainence of
 Apache::Request. If nobody volunteers, I'm willing to look at doing so,
 although I've only just started that long road into using XS, so I'm more
 likely to spend time applying patches than writing them.

I'd be happy to help with the bulk of whatever needs fixing,
however I'm somewhat reluctant to volunteer to maintain such 
a critical package since

1) I have no experience maintaining CPAN'd perl packages,
2) other than broken stuff, I wouldn't seek to change much code 
  (especially *not* the API, but anything that reduces the memory
   footprint and/or increases performance would be of interest)
3) my familiarity with XS and perlguts is cursory at best.

But before anyone bites off more than they can chew, perhaps some 
discussion of the current bugs and future needs for libapreq should 
be aired out.

My own problems with libapreq revolved around the multipart buffer 
code, and since I patched it a while back, I haven't bumped into 
any other snags.

What are the other unresolved issues with libapreq? How much of the
"undocumented" API (like $q-parms) is in use?

Regards.
-- 
Joe Schaefer
[EMAIL PROTECTED]

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: proxy front-ends (was: Re: ApacheCon report)

2000-11-03 Thread Joe Schaefer

Gunther Birznieks [EMAIL PROTECTED] writes:

 Although I don't have much to add to the conversation, I just wanted to say 
 that this is one of the most absolutely technically enlightening posts I've 
 read on the mod_perl list in a while. It's really interesting to finally 
 clarify this once and for all.

You bet - brilliant detective/expository work going on here!  
On a side note, a while back I was trying to coerce the TUX developers 
to rework their server a little.  I've included snippets of the 
email correspondence below:



From: Joe Schaefer [EMAIL PROTECTED]
Subject: Can tux 'proxy' for the user space daemon?
Date: 06 Oct 2000 13:32:10 -0400

It would be great if TUX is someday capable of 
replacing the "reverse proxy" kludge for mod_perl.
From skimming the docs, it seems that 

TUX on port 80 +
apache on 8080

seems to fit this bill.

Question: In this setup, how does TUX behave wrt 
HTTP/1.1 keepalives to/from apache? 

Say apache is configured with mod_perl, and 
keepalives are disabled on apache.
Is TUX capable of maintaining keepalives on the 
browser - TUX connection, while maintaining a 
separate "pool" of (closed) TUX - apache connections?

If I'm way off here on how TUX works  (or will work),
please correct me!

Thanks.

==

From: Ingo Molnar [EMAIL PROTECTED]
Subject: Re: Can tux 'proxy' for the user space daemon?
Date: Sat, 7 Oct 2000 13:42:49 +0200 (CEST)

if TUX sees a request that is redirected to Apache, then all remaining
requests on the connection are redirected to Apache as well. TUX wont ever
see that connection again, the redirection works by 'trimming' all
previous input up to the request which goes to Apache, then the socket
itself is hung into Apache's listen socket, as if it came as a unique
request from the browser. This technique is completely transparent both to
Apache and to the browser. There is no mechanizm to 'bounce back' a
connection from Apache to TUX. (while connections do get bounced back and
forth between the kernel and user-space TUX modules.)

so eg. if the first 2 request within a single persistent HTTP/1.1
connection can be handled by TUX then it will be handled by TUX, and the
third (and all succeeding) requests will be redirected to Apache. Logging
will happen by TUX for the first 2 requests, and the remaining requests
will be logged by Apache.

==

From: Joe Schaefer [EMAIL PROTECTED]
Subject: Re: Can tux 'proxy' for the user space daemon?
Date: 07 Oct 2000 19:52:31 -0400

Too bad- this means that HTTP/1.1 pages generated by an apache module won't 
benefit from TUX serving the images and stylesheet links contained therein. I 
guess disabling keepalives on the apache connection is (still) the only way 
to go.

I still think it would be cool if there was some hack to make this work- 
perhaps a TUX "gateway" module could do it?  Instead of handing off a 
request directly to apache, maybe a (user-space) TUX module could hand it 
off and then return control back to TUX when the page has been delivered.
Is such a "gateway" TUX module viable?

=

From: Ingo Molnar [EMAIL PROTECTED]
Subject: Re: Can tux 'proxy' for the user space daemon?
Date: Mon, 9 Oct 2000 11:42:57 +0200 (CEST)

depends on the complexity of the module. If it's simple functionality then
it might be best to write a dedicated TUX module for it, without Apache.

but if it's too complex then the same code that is used to hand a TCP
connection over to Apache can be used by Apache to send a connection back
to TUX as well. A new branch of the TUX system-call could handle this.

Ingo




This might be worth looking in to (for linux anyway :).
-- 
Joe Schaefer



Re: Authenticated Sessions, Performance

2000-10-26 Thread Joe Schaefer

"Marinos J. Yannikos" [EMAIL PROTECTED] writes:

 Hi!
 
 I need to implement user authentication / session management for a
 relatively busy web site and I have observed the following:
 
 1) one database query per visited page is out of the question, at this time
 we can't afford the hardware to scale up to our projected 10-15 mil. page
 impressions / month (currently  2,5 mil.), so we can't simply take the
 user/password data from a cookie or through the Basic Authentication
 mechanism at each request and authenticate. Session management using an SQL
 database or even GDBM/DB is thus also not easily possible.
 
 2) caching the database lookups for authentication and sessions / user state
 would require some sort of synchronization between the apache processes for
 the case of password / user data changes, IPC::Cache turned out to be very
 slow on our FreeBSD box (so it's out)
 
 3) Ticket-based authentications seems like a high-performance solution (user
 name, password, perhaps some session state are MD5-hashed and stored in a
 cookie, no database queries needed for authentication, no state must be
 shared among apache processes), a "permanent" login can be achieved by
 combining this method with a common login cookie (1 database query per
 session). This seems to be the best solution, although the amount of session
 state that can be kept easily is limited (larger stuff can be stored in a
 database if accessed less frequently).
 

I recommend you use option 3.  It is the easiest to manage, has the best
performance, and is very scalable.  In the future, you can add additional
security by maintaining an altogether separate authentication server that
runs over SSL.  No more plaintext passwords over the wire, and the performance
hit you suffer is limited to the sign-on process. The only downside is that 
you are requiring your users to enable cookies. They'll get over it :)
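
For what it's worth, the core of the idea fits in a few lines.  This is
only a sketch - the sub names and cookie layout are made up here, and a
real version needs a strong server-side secret and an expiry policy:

  use Digest::MD5 qw(md5_hex);
  use Apache::Cookie ();

  my $secret = 'server-side secret, never sent to the client';

  sub issue_ticket {    # call this once the user/password check succeeds
      my ($r, $user) = @_;
      my $expires = time + 24*60*60;
      my $mac = md5_hex(join ':', $secret, $user, $expires);
      Apache::Cookie->new($r,
          -name  => 'TICKET',
          -value => { user => $user, expires => $expires, mac => $mac },
      )->bake;
  }

  sub check_ticket {    # e.g. from a PerlAuthenHandler; no database involved
      my ($r) = @_;
      my $c = Apache::Cookie->fetch->{TICKET} or return;
      my %t = $c->value;
      return unless $t{user} && $t{mac} && $t{expires} && $t{expires} > time;
      return unless $t{mac} eq md5_hex(join ':', $secret, $t{user}, $t{expires});
      return $t{user};
  }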

 Since a variety of mod_perl, CGI and JSP scripts need to use the
 authentication mechanism, a PerlAuthenHandler seems necessary (it sets
 $ENV{REMOTE_USER}).
 
 Has anyone of those who have faced a similar problem got any other ideas?
 Suggestions? Success stories?

The eagle book contains an excellent discussion on how to set one up. See

http://www.modperl.org/book/chapters/ch6.html

Then go out and buy it :).

-- 
Joe Schaefer

SunStar Systems, Inc.



Re: [OT] Will a cookie traverse ports in the same domain?

2000-10-19 Thread Joe Schaefer

martin langhoff [EMAIL PROTECTED] writes:

 hi,
 
   this HTTP protocol (definition and actual implementation) question is
 making me mad. Will (and should) a cookie be valid withing the same
 host/domain/subdirectory when changing PORT numbers?
 
   All my cookies have stopped working as soon as I've set my mod_perl
 apache on a high port with a proxying apache in port 80  [ see thread
 "AARRRGH! The Apache Proxy is not transparent wrt cookies!" ]
 
 martin
 

It would help if you post both the piece of httpd.conf
that has your Proxy* directives on your light server,
and the <VirtualHost>...</VirtualHost> block from the 
heavy server's config file.

Otherwise you could try running

% tcpdump -i eth0 -l -s 1500 -w - port 80 or port 8080 | strings > /tmp/http

(the flags above probably need adjustment)

on your server to examine the http headers and see where the cookies get 
dropped. Pay attention to the "Host" and "Cookie" lines.

-- 
Joe Schaefer

SunStar Systems, Inc.



Re: Fast DB access

2000-10-11 Thread Joe Schaefer

"Differentiated Software Solutions Pvt. Ltd" [EMAIL PROTECTED] writes:

 We have an application where we will have to service as high as 50
 queries a second.
 We've discovered that most database just cannot keep pace.
 
 The only option we know is to service queries out of flat files.
 Can somebody give us pointers on what modules are available to create
 flat file based database.
 Specically we want a mechanism to be able service queries which can
 return rows where values are greater than specified value.
 We are experiementing currently with dbm and DB::File. These seem to
 handle hashes quite comfortably. How do we handle these inequality
 queries.

You might look at BerkeleyDB's cursor implementation, although
50 queries / second should be do-able with mysql and optimized tables.  

Also consider cacheing the results (as others have suggested), 
if many of the queries are reused & not changed between queries.
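
For the "greater than" style queries, a BTREE file plus DB_File's seq()
cursor is the usual trick.  Roughly (sketch - the key format is up to
you, and keys compare as strings, so pad numeric keys to a fixed width):

  use DB_File;
  use Fcntl qw(O_RDWR O_CREAT);

  my %h;
  my $db = tie %h, 'DB_File', 'values.db', O_RDWR|O_CREAT, 0644, $DB_BTREE
      or die "tie: $!";

  # keys are kept sorted, so "everything >= $min" is one seek plus a scan
  my ($key, $value) = ('0000042', '');
  for (my $status = $db->seq($key, $value, R_CURSOR);
       $status == 0;
       $status = $db->seq($key, $value, R_NEXT)) {
      print "$key => $value\n";
  }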

-- 
Joe Schaefer




Re: PerlAuthenHandler advice needed.

2000-09-28 Thread Joe Schaefer

Todd Chapman [EMAIL PROTECTED] writes:

 Duh! Thanks.
 
 Now, is there any way to determine the realm the browser thinks it's
 authentication to? Is the realm stored in the Authorization header or any
 other headers?
 

I wouldn't try to use realms in any serious way- various browsers
do various things.  The only reliable way to have the browser send
different passwords to different locations is to use different 
server names.

-- 
Joe Schaefer



Re: ?? Server-push or sending multipart (multiple pages) data to clie nt

2000-09-28 Thread Joe Schaefer

"Lang, Cliff" [EMAIL PROTECTED] writes:


[...]

 request : www.mysite.com/userdir/index.html  When this request comes in and
 based on some settings in the authen-db, we need to generate not only the
 data from index.html, but also send the file www.mysite.com/core/info.html
 which would be opened in another window.

Server push has been dead for some time- IE5 doesn't even support the
x-multipart-replace header. The known IE5 "workaround" is to just send 
entire <html>..</html> bodies in sequence w/o additional header info, 
but you probably shouldn't invoke heavy browser voodoo unless you're 
seriously `charmed'.

A lesser (and safer) magic would be to insert a javascript snippet 
in each page. The javascript opens the window and makes the request for 
/core/info.html. Install an appropriate mod_perl content filter for 
"Location /userdir" with a line like

s/body/body onLoad='open("/core/info.html", "other_window")'/io;

Some variation on this might do the trick.

-- 
Joe Schaefer
[EMAIL PROTECTED]

SunStar Systems, Inc.



Re: mod_perl guide corrections

2000-09-25 Thread Joe Schaefer


Doug MacEachern [EMAIL PROTECTED] writes:

   my $args = $q->param; # hash ref
 
 you mean parms() ?   the Apache::Request::parms hash ref is tied, so
 there are still method calls, but less than calling params(), which does
 extra stuff to emulate CGI::params.  

I just looked at line 21 from Request.pm; it looks like $q->param() returns
the same thing as $q->parms does; but surely $q->parms is even better!

 parms() is going to be renamed (to something less like params()) and 
 documented as faster than using the params() wrapper, in the next release.

A new libapreq release? Great news! Here's YAP for libapreq - I added Dave Mitchell's
memmem in multipart_buffer.c for better portability, and made some minor changes
to apache_request.c to eliminate some unnecessary copying. I'd be glad to send
you a url to a production server, if you'd like to see it in action.

HTH, and thanks again.
-- 
Joe Schaefer
[EMAIL PROTECTED]

SunStar Systems, Inc.



diff -ur libapreq-0.31/c/apache_request.c libapreq/c/apache_request.c
--- libapreq-0.31/c/apache_request.c	Fri Jul  2 21:00:17 1999
+++ libapreq/c/apache_request.c	Sun Sep 24 22:10:18 2000
@@ -64,8 +64,20 @@
 for(x=0;str[x];x++) if(str[x] == '+') str[x] = ' ';
 }
 
-static int util_read(ApacheRequest *req, const char **rbuf)
+static int util_read(ApacheRequest *req, char **rbuf)
 {
+/* could make pointer array (LoL) to capture growth
+ * p[1] - [...\0]
+ * p[2] - [\0]
+ * p[3] - [..\0]
+ *
+ * would need hi-tech flushing routine (per string)
+ * also need a hi-tech reader. is it really worth the trouble?
+ *
+ *
+ *
+ */
+
 request_rec *r = req-r;
 int rc = OK;
 
@@ -84,9 +96,9 @@
 	return HTTP_REQUEST_ENTITY_TOO_LARGE;
 	}
 
-	*rbuf = ap_pcalloc(r-pool, length + 1); 
+	*rbuf = ap_pcalloc(r-pool, length + 1); /* can we improve this? */
 
-	ap_hard_timeout("[libapreq] util_read", r);
+	ap_hard_timeout("[libapreq] util_parse", r);
 
 	while ((len_read =
 		ap_get_client_block(r, buff, sizeof(buff))) > 0) {
@@ -234,56 +246,56 @@
 return req;
 }
 
-static int urlword_dlm[] = {'&', ';', 0};
+/* static int urlword_dlm[] = {'&', ';', 0}; */
 
-static char *my_urlword(pool *p, const char **line)
+char *my_urlword(ApacheRequest *req, char **line)
 {
-int i;
-
-for (i = 0; urlword_dlm[i]; i++) {
-	int stop = urlword_dlm[i];
-	char *pos = strchr(*line, stop);
-	char *res;
-
-	if (!pos) {
-	if (!urlword_dlm[i+1]) {
-		int len = strlen(*line);
-		res = ap_pstrndup(p, *line, len);
-		*line += len;
-		return res;
-	}
-	continue;
-	}
+char dlm[] = "&;"; 
+long int len;
+char *word, *dlm_ptr;
 
-	res = ap_pstrndup(p, *line, pos - *line);
+if (! *line) { return NULL; }
 
-	while (*pos == stop) {
-	++pos;
-	}
+dlm_ptr = strpbrk(*line, dlm);
 
-	*line = pos;
+if (!dlm_ptr) {
+	word = *line;
+	*line = NULL;
+	return word;
 
-	return res;
+} else {
+	*dlm_ptr = '\0';
+	word = *line;
+	*line = dlm_ptr + 1; 
+	return word;
 }
-
-return NULL;
 }
 
-static void split_to_parms(ApacheRequest *req, const char *data)
+static void split_to_parms(ApacheRequest *req, char *data)
 {
 request_rec *r = req-r; 
-const char *val;
-
-while (*data && (val = my_urlword(r->pool, data))) {
-	const char *key = ap_getword(r->pool, val, '=');
+char dlm[] = ";";
+char *word;
+char *val;
+
+do { 
+	word = my_urlword(req, data); /* modifies data */
+
+	if (!(val = strchr( word, '='))) {
+	continue; /* ignores words w/o an = sign */
+	}	
+	
+	*val = '\0';	
+	++val;
 
-	req_plustospace((char*)key);
-	ap_unescape_url((char*)key);
 	req_plustospace((char*)val);
 	ap_unescape_url((char*)val);
+	req_plustospace((char*)word);
+	ap_unescape_url((char*)word);
+	
+	ap_table_add(req-parms, word, val);
 
-	ap_table_add(req-parms, key, val);
-}
+} while ( data ) ;
 
 }
 
@@ -293,7 +305,8 @@
 int rc = OK;
 
 if (r-method_number == M_POST) { 
-	const char *data, *type;
+	char *data = NULL; 
+	const char *type;
 
 	type = ap_table_get(r-headers_in, "Content-Type");
 
@@ -304,12 +317,13 @@
 	return rc;
 	}
 	if (data) {
-	split_to_parms(req, data);
+split_to_parms(req, data);
 	}
+
 }
 
 if (r-args) {
-	split_to_parms(req, r-args);
+	split_to_parms(req, ap_pstrdup(r-pool,r-args));
 }
 
 return OK;
@@ -439,7 +453,7 @@
 }
 
 if (r-args) {
-	split_to_parms(req, r-args);
+	split_to_parms(req, ap_pstrdup(r-pool,r-args));
 }
 
 return OK;
diff -ur libapreq-0.31/c/multipart_buffer.c libapreq/c/multipart_buffer.c
--- libapreq-0.31/c/multipart_buffer.c	Sat May  1 02:44:28 1999
+++ libapreq/c/multipart_buffer.c	Fri Sep 22 02:14:25 2000
@@ -55,134 +55,180 @@
  *
  */
 
+/* JS: 6/30/2000
+ * This should fix the memory allocation issues, and handle client disconnects
+ * gracefully. Comments in the code should exp

Re: patches to mod_proxy (was: Re: mod_perl guide corrections)

2000-09-20 Thread Joe Schaefer

Roger Espel Llima [EMAIL PROTECTED] writes:

 On Tue, Sep 19, 2000 at 03:24:50PM -0400, Joe Schaefer wrote:
  On linux, the ext2 filesystem is VERY efficient at buffering filesystem 
  writes (see http://www.tux.org/lkml/#s9-12).  If the post data is small 
  ( I don't know what the default size is, but the FILE buffer for the tmpfile
  is adjustable with setvbuf) it's never written to disk.  AFAIK, the only 
  problem with this arrangement for small posts is the extra file descriptor 
  consumed by the apache process.  
 
 Yeah, I know it's fairly negligible, but I'm not sure the FILE buffer is
 the one that matters here.  If I fwrite(), rewind() and then fread()
 again, AFAIK libc's stdio still translates this into real kernel
 write(), lseek(), read()  [strace woudl be the final judge here].  From
 there, the kernel can be smart enough to not actually even touch the
 disk, but that doesn't work with e.g journaling filesystems which impose
 stronger sequential conditions on disk writes, or systems like BSD that
 do synchronous metadata updates.  And in any case, you're still doing
 extra memory copies to and from kernel space.
 
 If it was hard to do it otherwise i'd agree with you, but it sounds so
 simple to keep it in a memory buffer when it's under 16k or some similar
 limit, that I just think it's much more "obviously right" to do it that
 way.

Sounds good- thanks for the details. How about making a lower limit 
(for switching from a memory buffer to tmpfile) configurable? 

Any thoughts on what the directive should be called?

-- 
Joe Schaefer
[EMAIL PROTECTED]

SunStar Systems, Inc.



Re: I'm missing something in Apache::Cookie (fwd)

2000-09-17 Thread Joe Schaefer

"Thomas S. Brettin" [EMAIL PROTECTED] writes:

 from the looks of the code you guys posted, I would guess that
 Cookie-fetch returns a hash reference, not a hash.  Could this be the
 problem?
 

It can return either a hash or a hash reference, depending on whether or not 
you wantarray.  The problem many people encounter is that the values in the
returned hash(ref) are Apache::Cookie OBJECTS, not simply the value of the
cookie.  There's more to a cookie than just its value. 
I think it goes something like this.

%cookies = Apache::Cookie->fetch;

print ref \%cookies; # prints HASH ??
print ref $cookies{'SESSION'}; # prints Apache::Cookie ??
print $cookies{'SESSION'}->value; # prints the value of the SESSION cookie

RTFM to be sure, or run it and see what you get!

-- 
Joe Schaefer
[EMAIL PROTECTED]

SunStar Systems, Inc.



Re: mod_perl guide corrections

2000-09-17 Thread Joe Schaefer

[EMAIL PROTECTED] writes:

 What if you wanted the functionality of the fase handlers before and after
 the loading of the file..

 Could this also be accomplished by proper use of configuration statements
 in http.conf?
 Right now I do not think so, so getting the child tied up for the time
 of the upload I take for granted.
 

I'm not quite sure what you're driving at.  Let me see if I can
describe how things work now, and what I'm trying to accomplish with the 
patch.

Setup: 
   A = mod_proxy enabled front-end server; 
   keepalives enabled 
   delivers static content (images, stylesheets, etc)
   proxies dynamic content 

   B = mod_perl server; responsible for dynamic content; 
   keepalives disabled

   Z = browser 

Event:
1) Z requests a dynamic page from A.

Z -GET 1.1-> A -PROXY-> B -PROXY-> A -CLOSE-> Z

The current mod_proxy CLOSES the connection from A to Z,
even if Z requests keepalives, and A implements them.  This
is bad since subsequent requests for static content (images/stylesheets, etc.)
will require a new connection.

The patch should prevent mod_proxy from forcibly closing the 
A-Z connection.


2) Z posts form data that will ultimately be handled by B.

Z -POST-> A -PROXY-> B

Currently, mod_proxy opens the connection to B as soon as it
determines B is the ultimate destination.  As the POST data 
is read from Z to A, it is passed along directly to B.  This
will tie up both A and B if the A-Z connection is slow and/or
the post data is huge.

The patch makes mod_proxy buffer the post data in a temp file
by setting the (new) ProxyPostMax directive to a positive number.
If the Content-Length header supplied by Z is greater than this
number, mod_proxy rejects the post request.

Once the post data has been uploaded from Z-A, the patched
mod_proxy opens the connection to B and delivers the POST data
directly from the temp file.
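
To illustrate the Content-Length check in Perl terms, here's a rough
mod_perl sketch of the same idea (this is not the patch, just an
illustration; the module name and the 1MB limit are made up):

 package My::PostLimit;                # hypothetical name
 use strict;
 use Apache::Constants qw(OK);

 sub handler {                         # e.g. installed as a PerlAccessHandler on A
     my $r = shift;
     return OK unless $r->method eq 'POST';
     my $len = $r->header_in('Content-Length') || 0;
     # refuse the request before any of the body has been read
     return 413 if $len > 1_048_576;   # 413 = Request Entity Too Large
     return OK;
 }
 1;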

That's what I'm trying to accomplish with the mod_proxy patch.
I've done only minimal testing on http requests; https is NOT
implemented at all.

I'd need something like this implemented, since I use mod_perl 
for authenticating "POSTers". In my case the POST data must
be processed by the mod_perl server.

Any help/suggestions are welcome and appreciated!

-- 
Joe Schaefer
[EMAIL PROTECTED]

SunStar Systems, Inc.



mod_perl guide corrections

2000-09-14 Thread Joe Schaefer

Stas,

I was looking over the latest version of the performance
section, and I have a few suggestions/comments regarding

http://perl.apache.org/guide/performance.html

1) Your description of keep-alive performance is confusing. 
Every browser I've seen that implements keep-alives
will open at least 2 connections per server (HTTP/1.1 recommends
a max of 2, but 1.0 browsers like netscape open 3 or more).  The
browsers are usually smart enough to round-robin the requests
between connections, so there's really no sequential delay.

Furthermore, HTTP/1.1 keepalive connections can be pipelined, to
make multiple requests on each connection without waiting for each 
server response.  In any real-world implementation, keepalives 
are a clear winner over closed connections, even if they're 
only left idle for a second or two.

Since most of us put a reverse proxy between the mod_perl server
and the browser, running keepalives on the browser-proxy connection
is desirable.

I think your section on keepalives and their usefulness should include
some of the above comments.  Recently I posted a patch for mod_proxy 
to the mod_perl mailing list that enables keep-alives on the browser side;
I've also written a small Apache Perl module that does the same thing.
They also do store-and-forward on the request body (POST data), which 
addresses an issue you raised in 

http://perl.apache.org/guide/scenario.html#Buffering_Feature 
...
There is no buffering of data uploaded from the client browser to the proxy, 
thus you cannot use this technique to prevent the heavy mod_perl server from 
being tied up during a large POST such as a file upload. Falling back to mod_cgi 
seems to be the best solution for these specific scripts whose major function is 
receiving large amounts of upstream data. 
...


2) Apache::Request is better than your performance numbers indicate.

The problem I have with your comparison of Apache::args vs. Apache::Request vs. CGI
is that your benchmark code isn't fair.  You're comparing method calls against
hash-table lookups, which is apples and oranges.  To get more representative
numbers, try the following code instead

processing_with_apache_request.pl
 ---------------------------------
 use strict;
 use Apache::Request ();
 my $r = shift;
 my $q = Apache::Request->new($r);
 $r->send_http_header('text/plain');
 my $args = $q->param; # hash ref
 print join "\n", map {"$_ = ".$$args{$_} } keys %$args;

and similarly for CGI.  The numbers you get should more accurately reflect
the performance of each.
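
Here's my rough guess at the CGI.pm counterpart (untested, just to show
the shape of it; the filename is made up):

 processing_with_cgi.pl
 ----------------------
 use strict;
 use CGI ();
 my $r = shift;
 my $q = CGI->new;
 $r->send_http_header('text/plain');
 my %args = map { $_ => scalar $q->param($_) } $q->param;
 print join "\n", map {"$_ = $args{$_}" } keys %args;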

HTH
-- 
Joe Schaefer
[EMAIL PROTECTED]

SunStar Systems, Inc.