Re: mod_perl and Transfer-Encoding: chunked

2013-07-03 Thread Jim Schueler
In light of Joe Schaefer's response, I appear to be outgunned.  So, if 
nothing else, can someone please clarify whether "de-chunked" means 
re-assembled?


 -Jim

On Wed, 3 Jul 2013, Jim Schueler wrote:

Thanks for the prompt response, but this is your question, not mine.  I 
hardly need an RTFM for my trouble.


I drew my conclusions using a packet sniffer.  And as far-fetched as my 
answer may seem, it's more plausible than your theory that Apache or modperl 
is decoding a raw socket stream.


The crux of your question seems to be how the request content gets
magically re-assembled.  I don't think it was ever disassembled in the first 
place.  But if you don't like my answer, and you don't want to ignore it 
either, then please restate the question.  I can't find any definition for 
unchunked, and Wiktionary's definition of de-chunk says to "break apart a 
chunk", that is (counter-intuitively) chunk a chunk.




Second, if there's no Content-Length header then how does one know how much
data to read using $r->read?

One answer is until $r->read returns zero bytes, of course.  But, is
that guaranteed to always be the case, even for, say, pipelined requests?
My guess is yes because whatever is de-chunking the


read() is blocking.  So it never returns 0, even in a pipelined request (if no 
data is available, it simply waits).  I don't wish to discuss the merits 
here, but there is no technical imperative for a Content-Length header in 
the request.


-Jim






On Wed, 3 Jul 2013, Bill Moseley wrote:


Hi Jim,
This is the Transfer-Encoding: chunked I was writing about:

http://tools.ietf.org/html/rfc2616#section-3.6.1



On Wed, Jul 3, 2013 at 11:34 AM, Jim Schueler 
wrote:
  I played around with chunking recently in the context of media
  streaming: The client is only requesting a "chunk" of data.
   "Chunking" is how media players perform a "seek".  It was
  originally implemented for FTP transfers:  E.g, to transfer a
  large file in (say 10K) chunks.  In the case that you describe
  below, if no Content-Length is specified, that indicates "send
  the remainder".

  From what I know, a "chunk" request header is used this way to
  specify the server response.  It does not reflect anything about
  the data included in the body of the request.  So first, I would
  ask if you're confused about this request information.

  Hypothetically, some browsers might try to upload large files in
  small chunks and the "chunk" header might reflect a push
  transfer.  I don't know if "chunk" is ever used for this
  purpose.  But it would require the following characteristics:

    1.  The browser would need to originally inquire if the server is
        capable of this type of request.
    2.  Each chunk of data will arrive in a separate and independent HTTP
        request.  Not necessarily in the order they were sent.
    3.  Two or more requests may be handled by separate processes
        simultaneously that can't be written into a single destination.
    4.  Somehow the server needs to request a resend if a chunk is missing.
        Solving this problem requires an imaginative use of HTTP.

  Sounds messy.  But might be appropriate for 100M+ sized uploads.
   This *may* reflect your situation.  Can you please confirm?

  For a single process, the incoming content-length is
  unnecessary. Buffered I/O automatically knows when transmission
  is complete.  The read() argument is the buffer size, not the
  content length.  Whether you spool the buffer to disk or simply
  enlarge the buffer should be determined by your hardware
  capabilities.  This is standard IO behavior that has nothing to
  do with HTTP chunk.  Without a "Content-Length" header, after
  looping your read() operation, determine the length of the
  aggregate data and pass that to Catalyst.

  But if you're confident that the complete request spans several
  smaller (chunked) HTTP requests, you'll need to address all the
  problems I've described above, plus the problem of re-assembling
  the whole thing for Catalyst.  I don't know anything about
  Plack, maybe it can perform all this required magic.

  Otherwise, if the whole purpose of the Plack temporary file is
  to pass a file handle, you can pass a buffer as a file handle.
   Used to be IO::String, but now that functionality is built into
  the core.

  By your last paragraph, I'm really lost.  Since you're already
  passing the request as a file handle, I'm guessing that Catalyst
  

Re: mod_perl and Transfer-Encoding: chunked

2013-07-03 Thread Jim Schueler
Thanks for the prompt response, but this is your question, not mine.  I 
hardly need an RTFM for my trouble.


I drew my conclusions using a packet sniffer.  And as far-fetched as my 
answer may seem, it's more plausible than your theory that Apache or 
modperl is decoding a raw socket stream.


The crux of your question seems to be how the request content gets
magically re-assembled.  I don't think it was ever disassembled in the 
first place.  But if you don't like my answer, and you don't want to 
ignore it either, then please restate the question.  I can't find any 
definition for unchunked, and Wiktionary's definition of de-chunk says to 
"break apart a chunk", that is (counter-intuitively) chunk a chunk.




Second, if there's no Content-Length header then how does one know how much
data to read using $r->read?

One answer is until $r->read returns zero bytes, of course.  But, is
that guaranteed to always be the case, even for, say, pipelined requests?
My guess is yes because whatever is de-chunking the


read() is blocking.  So it never returns 0, even in a pipelined request (if 
no data is available, it simply waits).  I don't wish to discuss the 
merits here, but there is no technical imperative for a Content-Length 
header in the request.


 -Jim






On Wed, 3 Jul 2013, Bill Moseley wrote:


Hi Jim,
This is the Transfer-Encoding: chunked I was writing about:

http://tools.ietf.org/html/rfc2616#section-3.6.1



On Wed, Jul 3, 2013 at 11:34 AM, Jim Schueler 
wrote:
  I played around with chunking recently in the context of media
  streaming: The client is only requesting a "chunk" of data.
   "Chunking" is how media players perform a "seek".  It was
  originally implemented for FTP transfers:  E.g, to transfer a
  large file in (say 10K) chunks.  In the case that you describe
  below, if no Content-Length is specified, that indicates "send
  the remainder".

  From what I know, a "chunk" request header is used this way to
  specify the server response.  It does not reflect anything about
  the data included in the body of the request.  So first, I would
  ask if you're confused about this request information.

  Hypothetically, some browsers might try to upload large files in
  small chunks and the "chunk" header might reflect a push
  transfer.  I don't know if "chunk" is ever used for this
  purpose.  But it would require the following characteristics:

    1.  The browser would need to originally inquire if the server is
        capable of this type of request.
    2.  Each chunk of data will arrive in a separate and independent HTTP
        request.  Not necessarily in the order they were sent.
    3.  Two or more requests may be handled by separate processes
        simultaneously that can't be written into a single destination.
    4.  Somehow the server needs to request a resend if a chunk is missing.
        Solving this problem requires an imaginative use of HTTP.

  Sounds messy.  But might be appropriate for 100M+ sized uploads.
   This *may* reflect your situation.  Can you please confirm?

  For a single process, the incoming content-length is
  unnecessary. Buffered I/O automatically knows when transmission
  is complete.  The read() argument is the buffer size, not the
  content length.  Whether you spool the buffer to disk or simply
  enlarge the buffer should be determined by your hardware
  capabilities.  This is standard IO behavior that has nothing to
  do with HTTP chunk.  Without a "Content-Length" header, after
  looping your read() operation, determine the length of the
  aggregate data and pass that to Catalyst.

  But if you're confident that the complete request spans several
  smaller (chunked) HTTP requests, you'll need to address all the
  problems I've described above, plus the problem of re-assembling
  the whole thing for Catalyst.  I don't know anything about
  Plack, maybe it can perform all this required magic.

  Otherwise, if the whole purpose of the Plack temporary file is
  to pass a file handle, you can pass a buffer as a file handle.
   Used to be IO::String, but now that functionality is built into
  the core.

  By your last paragraph, I'm really lost.  Since you're already
  passing the request as a file handle, I'm guessing that Catalyst
  creates the temporary file for the *response* body.  Can you
  please clarify?  Also, what do you mean by "de-chunking"?  Is
  that the same thing as re-assembling?


  Wish I could give a better answer.  Let me know if this helps.

Re: mod_perl and Transfer-Encoding: chunked

2013-07-03 Thread Jim Schueler
I played around with chunking recently in the context of media streaming: 
The client is only requesting a "chunk" of data.  "Chunking" is how media 
players perform a "seek".  It was originally implemented for FTP 
transfers:  E.g, to transfer a large file in (say 10K) chunks.  In the 
case that you describe below, if no Content-Length is specified, that 
indicates "send the remainder".


From what I know, a "chunk" request header is used this way to specify the 
server response.  It does not reflect anything about the data included in 
the body of the request.  So first, I would ask if you're confused about 
this request information.


Hypothetically, some browsers might try to upload large files in small 
chunks and the "chunk" header might reflect a push transfer.  I don't know 
if "chunk" is ever used for this purpose.  But it would require the 
following characteristics:


  1.  The browser would need to originally inquire if the server is
  capable of this type of request.
  2.  Each chunk of data will arrive in a separate and independent HTTP
  request.  Not necessarily in the order they were sent.
  3.  Two or more requests may be handled by separate processes
  simultaneously that can't be written into a single destination.
  4.  Somehow the server needs to request a resend if a chunk is missing.
  Solving this problem requires an imaginative use of HTTP.

Sounds messy.  But might be appropriate for 100M+ sized uploads.  This 
*may* reflect your situation.  Can you please confirm?


For a single process, the incoming content-length is unnecessary. Buffered 
I/O automatically knows when transmission is complete.  The read() 
argument is the buffer size, not the content length.  Whether you spool 
the buffer to disk or simply enlarge the buffer should be determined by 
your hardware capabilities.  This is standard IO behavior that has nothing 
to do with HTTP chunk.  Without a "Content-Length" header, after looping 
your read() operation, determine the length of the aggregate data and pass 
that to Catalyst.
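
A rough sketch of that loop (untested; assumes a mod_perl 2 handler where 
$r is the request object, and the 64K buffer size is arbitrary):

  use Apache2::RequestIO ();                    # provides $r->read

  my $body = '';
  while ( ( my $len = $r->read( my $buffer, 64 * 1024 ) ) > 0 ) {
      $body .= $buffer;
  }
  my $content_length = length $body;            # hand this downstream, e.g. to Catalyst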


But if you're confident that the complete request spans several smaller 
(chunked) HTTP requests, you'll need to address all the problems I've 
described above, plus the problem of re-assembling the whole thing for 
Catalyst.  I don't know anything about Plack, maybe it can perform all 
this required magic.


Otherwise, if the whole purpose of the Plack temporary file is to pass a 
file handle, you can pass a buffer as a file handle.  Used to be 
IO::String, but now that functionality is built into the core.
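
For example, the core replacement for IO::String is just an open() on a 
scalar reference (perl 5.8 or later; the variable contents here are a 
placeholder):

  my $body = 'the buffered request data';
  open my $fh, '<', \$body or die "in-memory open failed: $!";
  # $fh now behaves like a read-only file handle over $body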


By your last paragraph, I'm really lost.  Since you're already passing the 
request as a file handle, I'm guessing that Catalyst creates the 
temporary file for the *response* body.  Can you please clarify?  Also, 
what do you mean by "de-chunking"?  Is that the same thing as 
re-assembling?


Wish I could give a better answer.  Let me know if this helps.

-Jim


On Tue, 2 Jul 2013, Bill Moseley wrote:


For requests that are chunked (Transfer-Encoding: chunked and no
Content-Length header) calling $r->read returns unchunked data from the
socket.
That's indeed handy.  Is that mod_perl doing that un-chunking or is it
Apache?

But, it leads to some questions.   

First, if $r->read reads unchunked data then why is there a
Transfer-Encoding header saying that the content is chunked?   Shouldn't
that header be removed?   How does one know if the content is chunked or
not, otherwise?

Second, if there's no Content-Length header then how does one know how much
data to read using $r->read?   

One answer is until $r->read returns zero bytes, of course.  But, is
that guaranteed to always be the case, even for, say, pipelined requests?  
My guess is yes because whatever is de-chunking the request knows to stop
after reading the last chunk, trailer and empty line.   Can anyone elaborate
on how Apache/mod_perl is doing this? 
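
For reference, a chunked body per RFC 2616 section 3.6.1 looks roughly like 
this on the wire -- hex chunk sizes, each chunk followed by CRLF, and a 
zero-size chunk plus a blank line marking the end, which is presumably what 
the de-chunking code keys on:

  4\r\n
  Wiki\r\n
  5\r\n
  pedia\r\n
  0\r\n
  \r\n

Any trailer headers would appear between the "0" chunk and the final blank 
line.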


Perhaps I'm approaching this incorrectly, but this is all a bit untidy.

I'm using Catalyst and Catalyst needs a Content-Length.  So, I have a Plack
Middleware component that creates a temporary file writing the buffer from
$r->read( my $buffer, 64 * 1024 ) until that returns zero bytes.  I pass
this file handle onto Catalyst.
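
For illustration, a minimal, untested sketch of that sort of middleware 
(the package name is hypothetical): spool psgi.input to a temp file, count 
the bytes, and hand both downstream:

  package Hypothetical::BufferBody;
  use strict;
  use warnings;
  use parent 'Plack::Middleware';
  use File::Temp ();

  sub call {
      my ( $self, $env ) = @_;

      my $fh  = File::Temp->new;                # cleaned up automatically
      my $in  = $env->{'psgi.input'};
      my $len = 0;
      while ( ( my $read = $in->read( my $buf, 64 * 1024 ) ) > 0 ) {
          print {$fh} $buf;
          $len += $read;
      }
      seek $fh, 0, 0;

      $env->{'psgi.input'}   = $fh;             # replace the input stream
      $env->{CONTENT_LENGTH} = $len;            # what Catalyst/HTTP::Body wants

      return $self->app->($env);
  }

  1;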

Then, for some content-types, Catalyst (via HTTP::Body) writes the body to
another temp file.    I don't know how Apache/mod_perl does its de-chunking,
but I can call $r->read with a huge buffer length and Apache returns that.
 So, maybe Apache is buffering to disk, too.

In other words, for each tiny chunked JSON POST or PUT I'm creating two (or
three?) temp files which doesn't seem ideal.


--
Bill Moseley
mose...@hank.org



Re: [OT] Apache::DBI

2013-05-31 Thread Jim Schueler

With regards to Apache::DBI, it is very much supported :)


No.  It is not.  What little I know of you, you seem knowledgeable and 
experienced.  But you don't seem to have read this thread.  The 
documentation says that the module will be supported by this list, and the 
facts now demonstrate otherwise.


Several contributors have responded now.  To paraphrase, they will (and I 
will and so will others) help the best they can.  But that's not what the 
documentation says.  I guess I'm just one of those whiners who expects the 
documentation to be reliable.


I followed this thread from the beginning.  I compared the original 
observations with the documentation.  And either the documentation is 
wrong or (more likely) incomplete.  I think it's reasonable to assume that 
if no one steps up to take ownership, there is no owner.  And hence the 
code is unsupported.


Early on, I tried to clarify.  Which I'll repeat:

  If the code has no significant user base and no identifiable
  owner/maintainer, use at your own risk.  Pretty much what you say
  below.

  If the code has a significant user base but no identifiable owner,
  there's a lot less risk because you can get support from other users.

  Modules with reliable owners, such as Soap::Lite, deserve the highest
  level of confidence.

Apache::DBI has no owner and therefore falls in category #2.  Or maybe 
someone will step foward, and thus category #3.  Otherwise, your comments 
below say the same thing.  Yet for some reason, you turned the big 
platitude guns on me.


By omission, there seems to be consensus on these guidelines.  All the 
quibbling revolves around my estimate that most modules fall in the first 
category.  Personally, I would prefer no one estimated that 97.5% (or 
118,000 perl modules) are still actively supported by their authors/ 
designated successors.  Because I think that claim strains credibility.



...this comes with the general open source software caveat - "Using open
source software doesn't mean someone will do *your work* for free".


I didn't originate the thread, but this response offends me.  If someone 
observes a problem with a module, is the point to discredit them instead?


So far, there seems to be a tendency to overlook the substance of the 
discussion and react defensively to outsiders (as though I haven't 
participated here for 14 or 15 years).  What's up with that?


Thanks for letting me get that off my chest.

 -Jim



On Fri, 31 May 2013, Fred Moyer wrote:


In their absence, I'd note that your post has an interesting ambiguity: Is
the number of unsupported modules 2.5% or 25%?


The 'supported' metric doesn't really translate the same in reference
to open source software as it does to commercial software. When a
commercial software product becomes unsupported (think IE6), then you
are out in the cold. You don't have the source code, so you can't fix
an issue with it, or hire someone to fix the issue. Unless you are
really good with a hex code editor and patching binary files, you're
out of luck.

With open source software like Perl, you may see statements like 'Perl
5.6 is no longer officially supported'. This means you probably won't
be able to get the P5P team to fix bugs or security issues if they
come up. Still, you have the source code, so you can fix it yourself.

CPAN is a bit more murky in that individual authors can decide to
deprecate modules, or they can drop off the face of the earth, but
widely used modules such as Apache::DBI, SOAP::Lite (maintenance
recently stewarded by yours truly) will almost always have volunteers
step up and maintain them, because those volunteers need those modules
to be functioning for their own work. In terms of a supported metric, I'd
say modules that are used by more than a few people are supported
100%.

With regards to Apache::DBI, it is very much supported :) But this
comes with the general open source software caveat - "Using open
source software doesn't mean someone will do *your work* for free". If
there's a feature that appeals to more than a couple users, or a bug
that affects more than a couple users, odds are that it will get
fixed. Features that only one user is after will likely not be
implemented by the maintainers, but patches for those features are
usually readily accepted.

On Fri, May 31, 2013 at 10:30 AM, Jim Schueler  wrote:

No apology please.  In terms of trying to qualify any of this, a larger
statistical pool is better.  And I am no authority.  My perceptions are
largely based on forum postings which causes an inherent bias.

I'd love to see this conversation continue, especially if participants
included those who commit significant resources to their technology
decisions.  In other words, people who hire and pay Perl programmers.
They're likely to be as skeptical as I am.  I've never been a cheerleader.

Re: [OT] Apache::DBI

2013-05-31 Thread Jim Schueler
No apology please.  In terms of trying to qualify any of this, a larger 
statistical pool is better.  And I am no authority.  My perceptions are 
largely based on forum postings which causes an inherent bias.


I'd love to see this conversation continue, especially if participants 
included those who commit significant resources to their technology 
decisions.  In other words, people who hire and pay Perl programmers.

They're likely to be as skeptical as I am.  I've never been a cheerleader.

In their absence, I'd note that your post has an interesting ambiguity: Is 
the number of unsupported modules 2.5% or 25%?  (For more rhetorical 
nit-picking, you probably don't use the ones that don't work :)  Also, the 
significant question seems to be whether Apache::DBI is supported or not. 
From Mr. Zheng's point-of-view (in this case, the one that matters) the 
number might be much higher.

 -Jim

On Fri, 31 May 2013, André Warnier wrote:


Just butting in, apologies.

It may not have been Jim's intention below, but I get the impression that his 
comments on CPAN are a bit harsh.


It is true that a number of modules are apparently no longer supported.  I 
have used many modules over the years, and sometimes have had problems with 
some of them (mostly not though). And when for these problematic cases, I 
have tried to get help, the results have been mixed; but the mix for me has 
been rather good. I would say that in my case, 90% of the CPAN modules I ever 
used worked out of the box.  For the 10% remaining, in 75% of the cases I did 
get help from the person advertised as the author or the maintainer, and in 
25% of cases I never got a response.
But then, as Jim himself indicated, people move on, without necessarily 
changing their email addresses.  Considering how old some of these modules 
are, I guess people also retire, or even pass away.


But the fact of the matter is that CPAN is still an incredible resource, 
unequalled in my view by any other similar module library of any other 
language anywhere. And I find it amazing that at least 90% of the modules 
which I have downloaded from CPAN and used over the last 15 years, just work, 
and moreover keep on working through many, many iterations of programs and 
perl versions, and that in fact one very rarely needs additional support for 
them.  When I compare this with other programming languages and support 
libraries, I believe we perl programmers are incredibly spoiled.


Another area where CPAN shines, is the documentation of most modules.  I 
cannot count the times where I was faced with a request in an area of which I 
knew nothing at all, and have just browsed CPAN for modules related to that 
area, just to read their documentation and get at least an idea of what this 
was all about.
In recent years, Wikipedia may slowly be becoming a runner-up, in terms of 
general information.  But when it comes down to the nitty-gritty of 
interfacing with whatever API (or lack of ditto) programmers in their most 
delirious moments might have come up with, these CPAN modules are unbeatable. 
Even if after that you decide to program your stuff in another language than 
perl, it's still useful.
(Just for fun, go into CPAN and search for "NATO" (or more pragmatically, for 
"sharepoint" e.g.)(or even, God forbid, for "Google" or "Facebook" ;-)); who 
thinks of such things ?)


So, to summarise: that some modules on CPAN would be marked as "maintained" 
or "supported" and would turn out on closer inspection not to really be 
anymore, I find this a very small price to pay for the wealth of good 
information and working code that lives there.


My sincerest thanks to CPAN and all its contributors and maintainers over the 
years (that includes you of course, Jim).  What you have done and are doing 
is of incredible benefit to many, many programmers worldwide.


André


Jim Schueler wrote:
I still use Alpine.  And they never fixed the bug where ctrl-c (to cancel a 
message) and ctrl-x (to send) are so easily confused.  Oops.  Maybe it's 
time to start using a mouse.


Having wasted so much time, I'll try to be succinct:

  Most modules on CPAN are basically throwaways and not supported at all.
  Use them at your own risk.

  There are some modules that are just obsolete.  Good intentions aside,
  the developers lost interest and moved on.  These are less risky if
  there's an established user base.

  There are some very good modules, widely used, that are fully supported
  and perfectly safe for a production environment.

Most mod_perl modules, especially the core modules, fall into that last, 
gold standard, category.  In many cases, support is transferred from one 
individual to another.  And so that commitment is documented.  But if a 
module is no longer supported, don't lie about it.  Support forums are an 
incredible resource.  But if commercial software developers similarly 
blurred this distinction, every p.o.s. would be advertising free 24x7 tech 
support.

Re: Apache::DBI

2013-05-31 Thread Jim Schueler
I still use Alpine.  And they never fixed the bug where ctrl-c (to cancel 
a message) and ctrl-x (to send) are so easily confused.  Oops.  Maybe it's 
time to start using a mouse.


Having wasted so much time, I'll try to be succinct:

  Most modules on CPAN are basically throwaways and not supported at all.
  Use them at your own risk.

  There are some modules that are just obsolete.  Good intentions aside,
  the developers lost interest and moved on.  These are less risky if
  there's an established user base.

  There are some very good modules, widely used, that are fully supported
  and perfectly safe for a production environment.

Most mod_perl modules, especially the core modules, fall into that last, 
gold standard, category.  In many cases, support is transferred from one 
individual to another.  And so that commitment is documented.  But if a 
module is no longer supported, don't lie about it.  Support forums are an 
incredible resource.  But if commercial software developers similarly 
blurred this distinction, every p.o.s. would be advertising free 24x7 tech 
support.


Apache::DBI seems like a #2 pretending to be a #3.  On the basis of your 
response, I've concluded that Apache::DBI is no longer supported and has 
been superseded by newer modules.  Especially if no one responds and 
explicitly accepts the responsibility, this seems like the most 
appropriate answer for the poster of the original thread.


I owe you a :) from a couple posts ago.  :)

 -Jim

On Fri, 31 May 2013, Perrin Harkins wrote:


Hi Jim,
I appreciate the thought, but I'm not the mod_perl list.  If you look at who
has done the most support around here recently, it's probably Torsten.
 (Thanks Torsten!)  More to the point, there are many people on the list who
know enough perl to help with a question about Apache::DBI.  It's a common
practice to point people here for support on mod_perl modules.

What are you getting at?  Is there a module that you're having trouble with
and can't get support for?

- Perrin


On Fri, May 31, 2013 at 10:56 AM, Jim Schueler 
wrote:
  There's an existing thread with an Apache::DBI question.  But
  since I want to post a separate question to this list, I decided
  to start a new thread.

  Just got done reading the Man page for Apache::DBI.  One of the
  last notes suggests that this package is obsolete (having been
  replaced by Class::DBI or DBIx::CLASS).  Beyond that is the
  following:

    Edmund Mergl was the original author of Apache::DBI. It is now
  supported
    and maintained by the modperl mailinglist, see the mod_perl
  documentation
    for instructions on how to subscribe.

  Unless Perrin Harkins agreed to take over support for this
  module, then that statement is not true.  Otherwise, out of
  respect for Perrin, I'll try to be general.

  (Aside:  Am I the only developer that comes across 'unless () {}
  else {}' constructions?)

  It seems very few distros on CPAN are actually supported.  For
  my part, I still monitor this list to support my own
  contributions from *many* years ago.  And I k





Apache::DBI

2013-05-31 Thread Jim Schueler
There's an existing thread with an Apache::DBI question.  But since I want 
to post a separate question to this list, I decided to start a new thread.


Just got done reading the Man page for Apache::DBI.  One of the last 
notes suggests that this package is obsolete (having been replaced by 
Class::DBI or DBIx::CLASS).  Beyond that is the following:


  Edmund Mergl was the original author of Apache::DBI. It is now supported
  and maintained by the modperl mailinglist, see the mod_perl documentation
  for instructions on how to subscribe.

Unless Perrin Harkins agreed to take over support for this module, then 
that statement is not true.  Otherwise, out of respect for Perrin, I'll 
try to be general.


(Aside:  Am I the only developer that comes across 'unless () {} else {}' 
constructions?)


It seems very few distros on CPAN are actually supported.  For my part, I 
still monitor this list to support my own contributions from *many* years 
ago.  And I k


Re: Apache::DBI "connection lost contact" error

2013-05-31 Thread Jim Schueler
I'm afraid I'm out of my league.  I just noticed the following comment on 
the Apache::DBI man page:


  Edmund Mergl was the original author of Apache::DBI. It is now supported
  and maintained by the modperl mailinglist, see the mod_perl documentation
  for instructions on how to subscribe.

 -Jim

On Fri, 31 May 2013, Xinhuan Zheng wrote:


I believe I am using "my" declaration rather than "local". I also tried
explicitly disconnect but still have same issue. Since it only happens in
parent/child processes, I don't know a good way to debug parent/child, nor
reproducing the same error using a simple program. Can you guys help me
with that?

Thanks,
- xinhuan

On 5/31/13 9:02 AM, "Jim Schueler"  wrote:


Perrin is right.  But fundamentally, I'd say that you're confusing
'local' and 'my' variable scoping:

   http://www.perlmonks.org/?node_id=94007

 -Jim

On Fri, 31 May 2013, Perrin Harkins wrote:


Try an explicit disconnect() call.
- Perrin


On Thu, May 30, 2013 at 7:46 PM, Xinhuan Zheng

wrote:
  The db handle is declared local and once it's out of scope, the destroy
  call will disconnect. But it appears even though the variable is out of
  scope, we still get that error. Don't know why.
  - xinhuan

  On 5/30/13 8:31 AM, "Jim Schueler" 
  wrote:

 >Did this solve your problem?
 >
 >  -Jim
 >
 >On Wed, 29 May 2013, Perrin Harkins wrote:
 >
 >> Hi,
 >> Apache::DBI is supposed to skip caching if you connect during
  startup.
 >>You
 >> should just need to disconnect your database handle after you
  finish
 >>with
 >> it.  It sounds like you're opening it and then leaving it
  open.
 >>
 >> - Perrin
 >>
 >>
 >> On Wed, May 29, 2013 at 3:24 PM, Xinhuan Zheng
 >>
 >> wrote:
 >>   Hi,
 >>
 >> I have apache 2.2.23 statically compiled with mod_perl2
  (prefork).
 >> perl binary is 5.10.1. In startup.pl file there is call
 >> Apache::DBI->connect_on_init.
 >>
 >> 
 >> use Apache::DBI;
 >> Apache::DBI->connect_on_init( $DB_DRIVER, $DB_USER,
  $DB_PASSWORD );
 >>
 >> use DBI;
 >> 
 >>
 >> I need to call DBI->connect to load some data during server
  startup
 >> stage. There is problem with this setup. Whenever apachectl
 >> startup/shutdown, we got connection error like this:
 >>
 >> DBD::Oracle::db DESTROY failed: ORA-03135: connection lost
  contact
 >> Process ID: 0
 >> Session ID: 3252 Serial number: 15131 (DBD ERROR:
  OCISessionEnd) at
 >> /usr/local/lib/perl5/site_perl/5.10.1/Apache/DBI.pm line 228.
 >>
 >> I am trying to fix this error. I think it's related to
  DBI->connect
 >> in startup.pl. My question is:
 >>  1. How do I accomplish loading data into database during
  server
 >> startup using Apache::DBI?
 >>  2. Once data is loaded during server startup, how do I
  safely destroy
 >> this database handle but not affect the children
  instantiate their
 >> database handles?
 >> Thanks in advance,
 >>
 >> Xinhuan
 >>
 >>
 >>








Re: Apache::DBI "connection lost contact" error

2013-05-31 Thread Jim Schueler
Perrin is right.  But fundamentally, I'd say that you're confusing 
'local' and 'my' variable scoping:


   http://www.perlmonks.org/?node_id=94007

 -Jim
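
To illustrate the distinction (a minimal, hypothetical example; peek() 
stands in for any other code that reads the handle):

  our $dbh = 'outer';

  sub peek { return $dbh }          # always reads the package global

  sub with_local {
      local $dbh = 'inner';         # temporarily replaces the global ...
      return peek();                # ... so peek() sees 'inner' here
  }                                 # 'outer' is restored when the sub returns

  sub with_my {
      my $dbh = 'inner';            # a brand-new lexical, invisible elsewhere
      return peek();                # peek() still sees 'outer'
  }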

On Fri, 31 May 2013, Perrin Harkins wrote:


Try an explicit disconnect() call.
- Perrin


On Thu, May 30, 2013 at 7:46 PM, Xinhuan Zheng 
wrote:
  The db handle is declared local and once it's out of scope, the destroy
  call will disconnect. But it appears even though the variable is out of
  scope, we still get that error. Don't know why.
  - xinhuan

  On 5/30/13 8:31 AM, "Jim Schueler" 
  wrote:

  >Did this solve your problem?
  >
  >  -Jim
  >
  >On Wed, 29 May 2013, Perrin Harkins wrote:
  >
  >> Hi,
  >> Apache::DBI is supposed to skip caching if you connect during
  startup.
  >>You
  >> should just need to disconnect your database handle after you
  finish
  >>with
  >> it.  It sounds like you're opening it and then leaving it
  open.
  >>
  >> - Perrin
  >>
  >>
  >> On Wed, May 29, 2013 at 3:24 PM, Xinhuan Zheng
  >>
  >> wrote:
  >>       Hi,
  >>
  >> I have apache 2.2.23 statically compiled with mod_perl2
  (prefork).
  >> perl binary is 5.10.1. In startup.pl file there is call
  >> Apache::DBI->connect_on_init.
  >>
  >> 
  >> use Apache::DBI;
  >> Apache::DBI->connect_on_init( $DB_DRIVER, $DB_USER,
  $DB_PASSWORD );
  >>
  >> use DBI;
  >> 
  >>
  >> I need to call DBI->connect to load some data during server
  startup
  >> stage. There is problem with this setup. Whenever apachectl
  >> startup/shutdown, we got connection error like this:
  >>
  >> DBD::Oracle::db DESTROY failed: ORA-03135: connection lost
  contact
  >> Process ID: 0
  >> Session ID: 3252 Serial number: 15131 (DBD ERROR:
  OCISessionEnd) at
  >> /usr/local/lib/perl5/site_perl/5.10.1/Apache/DBI.pm line 228.
  >>
  >> I am trying to fix this error. I think it's related to
  DBI->connect
  >> in startup.pl. My question is:
  >>  1. How do I accomplish loading data into database during
  server
  >>     startup using Apache::DBI?
  >>  2. Once data is loaded during server startup, how do I
  safely destroy
  >>     this database handle but not affect the children
  instantiate their
  >>     database handles?
  >> Thanks in advance,
  >>
  >> Xinhuan
  >>
  >>
  >>





Re: Apache::DBI "connection lost contact" error

2013-05-30 Thread Jim Schueler

Did this solve your problem?

 -Jim

On Wed, 29 May 2013, Perrin Harkins wrote:


Hi,
Apache::DBI is supposed to skip caching if you connect during startup.  You
should just need to disconnect your database handle after you finish with
it.  It sounds like you're opening it and then leaving it open.

- Perrin
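
A minimal startup.pl sketch of the arrangement Perrin describes (untested; 
the driver string and credentials below are placeholders):

  use Apache::DBI ();
  use DBI ();

  my ( $DB_DRIVER, $DB_USER, $DB_PASSWORD )
      = ( 'dbi:Oracle:ORCL', 'scott', 'tiger' );        # hypothetical values

  # Children connect (and cache) their own handles after the fork.
  Apache::DBI->connect_on_init( $DB_DRIVER, $DB_USER, $DB_PASSWORD );

  # Load the startup data in the parent, then disconnect explicitly so the
  # handle is not left open across the fork.
  {
      my $dbh = DBI->connect( $DB_DRIVER, $DB_USER, $DB_PASSWORD )
          or die DBI->errstr;
      # ... load whatever the server needs at startup ...
      $dbh->disconnect;
  }

  1;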


On Wed, May 29, 2013 at 3:24 PM, Xinhuan Zheng 
wrote:
  Hi,

I have apache 2.2.23 statically compiled with mod_perl2 (prefork).
perl binary is 5.10.1. In startup.pl file there is call
Apache::DBI->connect_on_init.


use Apache::DBI;
Apache::DBI->connect_on_init( $DB_DRIVER, $DB_USER, $DB_PASSWORD );

use DBI;


I need to call DBI->connect to load some data during server startup
stage. There is problem with this setup. Whenever apachectl
startup/shutdown, we got connection error like this:

DBD::Oracle::db DESTROY failed: ORA-03135: connection lost contact
Process ID: 0
Session ID: 3252 Serial number: 15131 (DBD ERROR: OCISessionEnd) at
/usr/local/lib/perl5/site_perl/5.10.1/Apache/DBI.pm line 228.

I am trying to fix this error. I think it's related to DBI->connect
in startup.pl. My question is:
 1. How do I accomplish loading data into database during server
startup using Apache::DBI?  
 2. Once data is loaded during server startup, how do I safely destroy
this database handle but not affect the children instantiating their
database handles?
Thanks in advance,

Xinhuan





Re: Apache::DBI "connection lost contact" error

2013-05-29 Thread Jim Schueler

A few questions:

  Precisely when do you get this error?  When startup.pl exits or before?

  Can you send a copy of your startup.pl file?

  You get exactly the same error on startup and shutdown?

  If PerlRequire startup.pl is commented out, do you still get errors?

  Do you get errors when a child starts or ends?  Or just the main
  process?

  Why does your error message say Process ID=0?  Are other messages
  different?  Does the error show up on the command line or in the log?

 -Jim

On Wed, 29 May 2013, Xinhuan Zheng wrote:


Hi,

I have apache 2.2.23 statically compiled with mod_perl2 (prefork). perl
binary is 5.10.1. In startup.pl file there is call
Apache::DBI->connect_on_init.


use Apache::DBI;
Apache::DBI->connect_on_init( $DB_DRIVER, $DB_USER, $DB_PASSWORD );

use DBI;


I need to call DBI->connect to load some data during server startup stage.
There is problem with this setup. Whenever apachectl startup/shutdown, we
got connection error like this:

DBD::Oracle::db DESTROY failed: ORA-03135: connection lost contact
Process ID: 0
Session ID: 3252 Serial number: 15131 (DBD ERROR: OCISessionEnd) at
/usr/local/lib/perl5/site_perl/5.10.1/Apache/DBI.pm line 228.

I am trying to fix this error. I think it's related to DBI->connect
in startup.pl. My question is:
 1. How do I accomplish loading data into database during server startup
using Apache::DBI?  
 2. Once data is loaded during server startup, how do I safely destroy this
database handle but not affect the children instantiating their database
handles?
Thanks in advance,

Xinhuan



Re: Segmentation fault on form posting

2013-05-23 Thread Jim Schueler

Here's the code I mentioned in my last post.  It's included in my distro
NoSQL::PL2SQL

#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"

#include "ppport.h"

SV* typeis ( SV* what ) ;

SV* typeis ( SV* what )
{
if ( SvIOK( what ) )
return newSVpvs( "integer" ) ;
else if ( SvNOK( what ) )
return newSVpvs( "double" ) ;
else if ( SvPOK( what ) )
return newSVpvs( "string" ) ;

return newSVpvs( "unknown" ) ;
}


MODULE = NoSQL::PL2SQL  PACKAGE = NoSQL::PL2SQL::Node

PROTOTYPES: ENABLE

SV* 
typeis( what )

SV* what



On Thu, 23 May 2013, Neil Bowers wrote:


Hi,
I've got a mod_perl handler which has been working fine for a long time, but
just recently two people have managed to trigger a seg fault under specific
circumstances.

 *  They are POSTing form data
 *  Only happens over https - doesn't happen via http (ie without SSL)
 *  A certain combination of bytes in the form seems to trigger this.
Doesn't appear to be the *number* of bytes, but can't really be sure.
 *  It only happens if the end-user is on 64-bit Windows (Win 7 only so
far), on IE9 or Chrome 26 (27 seems to be ok). Doesn't happen on Firefox
on 64bit, or on any browser on 32-bit Windows.

In my handler, if the first thing I do is print out the POST parameters,
then the segfault doesn't happen. So it smells like some kind of memory
overwrite.

This happens on combinations of:

 *  CentOS 5.5 and 6.3
 *  openssl 1.0.0d and 1.0.1e
 *  Apache 2.2.22 and 2.2.24
 *  Perl 5.12.3 and 5.16.3
 *  mod_perl 2.0.5, 2.0.7 and 2.0.8

I'll probably try 5.18, though I don't expect any change with that.

So now, some questions:

 *  Anyone seen anything like this, and have an idea where to look?
 *  Any thoughts on where to look / what else to try?
 *  What's the best approach to tracking this down? Valgrind?

I'm going to try attaching a debugger to an httpd process to see if I can
see where it's dying, though I suspect the problem may be happening earlier.
I'll have a go with valgrind after that.

Cheers,
Neil





Re: Segmentation fault on form posting

2013-05-23 Thread Jim Schueler
I also encounter this problem occasionally.  So your post is quite 
familiar.


If the first thing you do is print the parameters, what's the second 
thing?  Form posts almost always trigger external processes, databases, 
mail servers, etc.  The external process is more likely to be causing the 
fault than mod_perl.


At its heart, a perl scalar is a pretty complicated data object.  I think 
it's more likely that the scalar gets modified as the result of the print 
operation.  For example:


  sprintf "%d", $cgi{quantity} ;

To my knowledge, this statement modifies the scalar $cgi{quantity} so that 
the next operation views the scalar slightly differently.  I have a 
prototype library that examines the scalar.  If I can find it, I'll 
forward separately.
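
A quick way to watch that kind of change is the core Devel::Peek module -- 
here (with a hypothetical stand-in for the form value) a numeric operation 
adds a cached integer slot to a string scalar without touching its string 
value:

  use Devel::Peek ();

  my $quantity = "42";
  Devel::Peek::Dump( $quantity );      # string (PV) slot only
  my $n = sprintf "%d", $quantity;     # numeric context
  Devel::Peek::Dump( $quantity );      # an IV slot has now been cached as well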


Finally, there are specific operations for parsing form input.  Are you 
using a package like CGI?  Have you tried an alternative?  It's only a 
couple lines of code to roll your own.
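
For what it's worth, a bare-bones sketch of rolling your own (untested; 
application/x-www-form-urlencoded bodies only, no file uploads, and $r is 
the mod_perl request object):

  sub parse_form {
      my ( $r ) = @_;

      my $raw = '';
      while ( ( my $len = $r->read( my $buf, 8192 ) ) > 0 ) {
          $raw .= $buf;
      }

      my %param;
      for my $pair ( split /[&;]/, $raw ) {
          my ( $key, $value ) = split /=/, $pair, 2;
          for ( $key, $value ) {
              next unless defined;
              tr/+/ /;                                  # '+' means space
              s/%([0-9A-Fa-f]{2})/chr hex $1/ge;        # undo %XX escapes
          }
          $param{$key} = $value;
      }
      return %param;
  }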


-Jim

On Thu, 23 May 2013, Neil Bowers wrote:


Hi,
I've got a mod_perl handler which has been working fine for a long time, but
just recently two people have managed to trigger a seg fault under specific
circumstances.

 *  They are POSTing form data
 *  Only happens over https - doesn't happen via http (ie without SSL)
 *  A certain combination of bytes in the form seems to trigger this.
Doesn't appear to be the *number* of bytes, but can't really be sure.
 *  It only happens if the end-user is on 64-bit Windows (Win 7 only so
far), on IE9 or Chrome 26 (27 seems to be ok). Doesn't happen on Firefox
on 64bit, or on any browser on 32-bit Windows.

In my handler, if the first thing I do is print out the POST parameters,
then the segfault doesn't happen. So it smells like some kind of memory
overwrite.

This happens on combinations of:

 *  CentOS 5.5 and 6.3
 *  openssl 1.0.0d and 1.0.1e
 *  Apache 2.2.22 and 2.2.24
 *  Perl 5.12.3 and 5.16.3
 *  mod_perl 2.0.5, 2.0.7 and 2.0.8

I'll probably try 5.18, though I don't expect any change with that.

So now, some questions:

 *  Anyone seen anything like this, and have an idea where to look?
 *  Any thoughts on where to look / what else to try?
 *  What's the best approach to tracking this down? Valgrind?

I'm going to try attaching a debugger to an httpd process to see if I can
see where it's dying, though I suspect the problem may be happening earlier.
I'll have a go with valgrind after that.

Cheers,
Neil





RE: Download then display page

2013-04-30 Thread Jim Schueler

To clarify, I meant to say, "I only occasionally write handlers". :)

 -Jim

On Tue, 30 Apr 2013, Chris Faust wrote:


Thanks Jim, I'm going to give that a try and see if I can get it to work.

-Chris

-Original Message-
From: Jim Schueler [mailto:jschue...@eloquency.com]
Sent: Tuesday, April 30, 2013 2:28 PM
To: Chris Faust
Cc: modperl@perl.apache.org
Subject: RE: Download then display page

Yes, that's what I have in mind.  I only occasionally write headers.
But I envision something similar to what you've got below:

  $redirect = ... ; ## URL to the spreadsheet

  $r->content_type('text/html') ;
  $r->headers_out->set( Location => $redirect ) ;
  $r->send_http_header ;

  $r->print( $content->output ) ;
  return Apache2::Const::REDIRECT ;

Originally, I wondered about using a "multipart/mixed" response type.
I've never heard that any browser supports such a thing.  Although that
seems like a more elegant solution.

 -Jim

On Tue, 30 Apr 2013, Chris Faust wrote:


But the response should be a redirect to a URL that returns the
spreadsheet instead of a 200 OK.  I believe that the body of the
original response will be displayed until the redirect succeeds.

I'm not sure what I follow you, something like this?

$r->content_type('text/html');
print $content->output;
$r->headers_out->set(Location => $redirect); return
Apache2::Const::REDIRECT;

And the $redirect URL would then do the sending of the file itself?

Thanks!


-Original Message-
From: Jim Schueler [mailto:jschue...@eloquency.com]
Sent: Tuesday, April 30, 2013 1:53 PM
To: Chris Faust
Cc: modperl@perl.apache.org
Subject: Re: Download then display page

I believe the following will work  (never tried it though):

The request should return a 'text/html' type document that displays
the instructions.  But the response should be a redirect to a URL that
returns the spreadsheet instead of a 200 OK.  I believe that the body
of the original response will be displayed until the redirect succeeds.

In the old days, we performed this trick by using meta tag equivalents
of the response headers.  And I expect browsers will respond to actual
HTTP headers the same way.  I say "the old days" because for the last 18
years, I've relied on javascript.  But there may be reasons for not
wanting a different type of solution.

 -Jim




On Tue, 30 Apr 2013, Chris Faust wrote:



Hi,

 

I'm trying to have a form submission package up the results in a xls
file and then start the download for the user as well as present a
page where they can click on the file if the download has not already
automatically started.

 

I can do each separately but not both together, I have something like this:


 

... Make up our xls file download and put it in $output

 

$r->content_type('application/xls');

$r->err_headers_out->add('Content-Disposition' => 'attachment; filename="' .
$download_name . '"');

$r->print($output);

$content->param('set some html template vars');

$r->content_type('text/html');

print $content->output;

 

When I do the above, then I get prompted for the download but that
is it, I never get the page. Even if I reverse the order and try to
do the page
first:

 

$r->content_type('text/html');

print $content->output;

$r->content_type('application/xls');

$r->err_headers_out->add('Content-Disposition' => 'attachment; filename="' .
$download_name . '"');

$r->print($output);

$content->param('set some html template vars');

 

That still doesn't work. Probably not a mod_perl specific question
but I'm hoping someone can shed some light

 

TIA!

-Chris

 

 












RE: Download then display page

2013-04-30 Thread Jim Schueler
Yes, that's what I have in mind.  I only occasionally write headers. 
But I envision something similar to what you've got below:


  $redirect = ... ; ## URL to the spreadsheet

  $r->content_type('text/html') ;
  $r->headers_out->set( Location => $redirect ) ;
  $r->send_http_header ;

  $r->print( $content->output ) ;
  return Apache2::Const::REDIRECT ;

Originally, I wondered about using a "multipart/mixed" response type. 
I've never heard that any browser supports such a thing.  Although that 
seems like a more elegant solution.


 -Jim

On Tue, 30 Apr 2013, Chris Faust wrote:


But the response should be a redirect to a URL that returns the
spreadsheet instead of a 200 OK.  I believe that the body of the original
response will be displayed until the redirect succeeds.

I'm not sure what I follow you, something like this?

$r->content_type('text/html');
print $content->output;
$r->headers_out->set(Location => $redirect);
return Apache2::Const::REDIRECT;

And the $redirect URL would then do the sending of the file itself?

Thanks!


-Original Message-
From: Jim Schueler [mailto:jschue...@eloquency.com]
Sent: Tuesday, April 30, 2013 1:53 PM
To: Chris Faust
Cc: modperl@perl.apache.org
Subject: Re: Download then display page

I believe the following will work  (never tried it though):

The request should return a 'text/html' type document that displays the
instructions.  But the response should be a redirect to a URL that returns
the spreadsheet instead of a 200 OK.  I believe that the body of the
original response will be displayed until the redirect succeeds.

In the old days, we performed this trick by using meta tag equivalents of
the response headers.  And I expect browsers will respond to actual HTTP
headers the same way.  I say "the old days" because for the last 18 years, I've
relied on javascript.  But there may be reasons for not wanting a different
type of solution.

 -Jim




On Tue, 30 Apr 2013, Chris Faust wrote:



Hi,

 

I'm trying to have a form submission package up the results in a xls
file and then start the download for the user as well as present a
page where they can click on the file if the download has not already
automatically started.

 

I can do each separately but not both together, I have something like this:


 

... Make up our xls file download and put it in $output

 

$r->content_type('application/xls');

$r->err_headers_out->add('Content-Disposition' => 'attachment; filename="' .
$download_name . '"');

$r->print($output);

$content->param('set some html template vars');

$r->content_type('text/html');

print $content->output;

 

When I do the above, then I get prompted for the download but that is
it, I never get the page. Even if I reverse the order and try to do
the page
first:

 

$r->content_type('text/html');

print $content->output;

$r->content_type('application/xls');

$r->err_headers_out->add('Content-Disposition' => 'attachment; filename="' .
$download_name . '"');

$r->print($output);

$content->param('set some html template vars');

 

That still doesn't work. Probably not a mod_perl specific question but
I'm hoping someone can shed some light

 

TIA!

-Chris

 

 








Re: Download then display page

2013-04-30 Thread Jim Schueler

I believe the following will work  (never tried it though):

The request should return a 'text/html' type document that displays the 
instructions.  But the response should be a redirect to a URL that returns 
the spreadsheet instead of a 200 OK.  I believe that the body of the 
original response will be displayed until the redirect succeeds.


In the old days, we performed this trick by using meta tag equivalents of 
the response headers.  And I expect browsers will respond to actual HTTP 
headers the same way.  I say "the old days" because for the last 18 years, 
I've relied on javascript.  But there may be reasons for not wanting a 
different type of solution.

 -Jim
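
A bare-bones, untested sketch of the meta-tag version of that trick for a 
mod_perl 2 handler (the download URL is hypothetical):

  use Apache2::RequestRec ();
  use Apache2::RequestIO ();
  use Apache2::Const -compile => qw(OK);

  sub handler {
      my $r = shift;
      my $download = '/reports/results.xls';    # URL that serves the spreadsheet

      $r->content_type('text/html');
      $r->print( qq{<html><head>
          <meta http-equiv="refresh" content="1; url=$download">
          </head><body>
          <p>Your download should begin shortly.
          If it does not, <a href="$download">click here</a>.</p>
          </body></html>} );
      return Apache2::Const::OK;
  }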




On Tue, 30 Apr 2013, Chris Faust wrote:



Hi,

 

I'm trying to have a form submission package up the results in a xls file
and then start the download for the user as well as present a page where
they can click on the file if the download has not already automatically
started.

 

I can do each separately but not both together, I have something like this:

 

... Make up our xls file download and put it in $output

 

$r->content_type('application/xls');

$r->err_headers_out->add('Content-Disposition' => 'attachment; filename="' .
$download_name . '"');

$r->print($output);

$content->param('set some html template vars');

$r->content_type('text/html');

print $content->output;

 

When I do the above, then I get prompted for the download but that is it, I
never get the page. Even if I reverse the order and try to do the page
first:

 

$r->content_type('text/html');

print $content->output;

$r->content_type('application/xls');

$r->err_headers_out->add('Content-Disposition' => 'attachment; filename="' .
$download_name . '"');

$r->print($output);

$content->param('set some html template vars');

 

That still doesn't work. Probably not a mod_perl specific question but I'm
hoping someone can shed some light

 

TIA!

-Chris

 

 




RE: highscalability.com report

2012-04-12 Thread Jim Schueler
Chicken and egg problem.  I have posted quite a bit about the pain of 
migrating my skill set to PHP.  And doing whatever I can to stop that 
descent.  That's what makes the news so "sad", as I posted originally.

 -Jim

On Thu, 12 Apr 2012, eric.b...@barclays.com wrote:


Well, finding (good) developers is certainly an issue.

Here in NYC, it's very difficult to find proper Perl programmers as opposed to 
dabblers and scripters.

One of the larger web sites here in the city was built by a great Perl guy, but 
as they've grown and become successful, finding Perl talent has become a big 
issue.  I'm told that they're somewhere down the road to moving to PHP.


-Original Message-
From: Octavian Rasnita [mailto:orasn...@gmail.com]
Sent: Thursday, April 12, 2012 1:02 PM
To: Clinton Gormley; Jim Schueler
Cc: modperl@perl.apache.org
Subject: Re: highscalability.com report

From: "Clinton Gormley" 
Subject: Re: highscalability.com report



On Tue, 2012-04-03 at 22:50 -0400, Jim Schueler wrote:

Hope this doesn't get trapped by too many spam filters.

Sad news.  Just saw a blog

   http://www.highscalability.com/

that reports YouPorn.com switched from Perl to PHP.  Apparently

there's a

reported 10% improvement in speed, but I haven't noticed :).


I think the bigger factor in the speed improvement is probably to do
with switching from MySQL to Redis




Yeah, so there should be other reasons for moving from Perl to PHP.

What could be?

Octavian






highscalability.com report

2012-04-03 Thread Jim Schueler

Hope this doesn't get trapped by too many spam filters.

Sad news.  Just saw a blog

  http://www.highscalability.com/

that reports YouPorn.com switched from Perl to PHP.  Apparently there's a 
reported 10% improvement in speed, but I haven't noticed :).


After a couple months of total immersion, I have less inclination towards
PHP than ever.  Fight's not out of me yet.

-Jim



how does mod_perl handle client certificates

2012-02-29 Thread Jim Schueler

How does mod_perl handle client certificates using the Apache object?

Thanks!




PHP Question

2012-02-29 Thread Jim Schueler
At least one person in yesterday's discussion wondered if mod_perl might 
be obsolete given the overwhelming dominance of PHP.  I just want to share 
a few observations.


When someone asks me the difference between PHP and Perl, I usually 
respond that PHP's core API is bigger by about two orders of magnitude.  I 
estimate the core PHP API is around 10K functions.


Fundamentally, the difference is one of entropy, or irreversibility. 
It's trivial to implement the PHP API as a perl module, which, with a 
little extra parsing, could even handle the syntax differences. 
Conversely, it seems nearly impossible to map all the perl idioms to PHP 
functions.  A perl standard would accommodate the PHP framework.  PHP 
accommodates nothing else.  I'm pretty sympathetic when PHP evangelists 
discuss the advantages of standardization, but that seems beside the 
point.


The current trend is towards bigger and bigger API's.  Frameworks like 
Joomla and Drupal are falling over each other to push more and more API's 
out to their developers.  There's an obvious advantage to the publishing 
and training support businesses; it might even explain where all this 
momentum originates.  But there's bound to be a backlash.


Almost all the best practices I've learned in the past 20 years are being 
ridiculed as old-fashioned.  I might as well be evangelizing a punch card 
IDE.  I'll be the first to admit that I'm plying my trade in a backwater. 
Here, PHP is the only game in town, and I feel like some old guy crashing 
the party.


Aah!

 -Jim


Re: Registry and CGI::Carp

2012-01-27 Thread Jim Schueler
There's no question or anything resembling a request in your email.  So my 
response may waste a lot of time.


Is this your original post?
  http://www.perlmonks.org/?node_id=949773

If so, I might be able to help.

Admittedly, I can't follow the thread.  The PerlMonks responder refers to 
a function set_progname().  But I can't figure out what that refers to.


However, in the third exchange, you referenced a problem I have some 
experience with:  Apache::Registry executes the BEGIN{} block once, and 
the END{} block repeatedly.  Fundamentally, the Perl specification expects 
them to be balanced, and I'm still amazed at this shortcoming.  I wrote a 
workaround that might get you over your hurdle.  Please check out 
Apache::ChildExit.


I'm amazed that my solution wasn't generally adopted.  As you note, it 
seems like this would be a pretty common scenario.  Give it a try, and 
please let me know whether this solution gives you any traction.


Cheers!

 -Jim



On Fri, 27 Jan 2012, Brett Lee wrote:


Hi Folks,

Running several scripts under ModPerl::Registry that use CGI::Carp.  Am
seeing problems with the logging.  The message that is logged is correct,
however the name of the script that generated the event is not.

Each script contains a line similar to:

use CGI::Carp qw(name=my_script_X);

When the scripts are precompiled in startup.pl, the *same* script name is
logged for each and every script.  When scripts are not precompiled the name
is frequently correct, but it is not correct all of the time.

A post earlier to Perl Monks came back with the suggestion to extend
CGI::Carp.pm to support running under Registry.  As what I am trying to do
seems like it would be a pretty common scenario, am thinking there may be
another option.

Thanks for considering this one.
Brett



Re: cms as an apache incubator project?

2012-01-12 Thread Jim Schueler

Thanks for the quick response, Joe.

Based on the links you forwarded earlier, I understand that this 
application was written in-house by ASF operations staff.  Is that you?


Reading the rationale discussion reminds me of tooth-pulling 
conversations I've had with managers convinced that there's an 
out-of-the-box turnkey solution that *exactly* meets their business 
requirements and has no life-cycle costs.  When these managers eventually 
concede the need to hire software professionals, the conversation starts 
all over again.  Essentially:  OK, we'll invest in software development, 
then we'll release the code to the open source community, where it will be 
supported and maintained indefinitely by volunteers.  These sunny 
optimists make it hard to earn a living.


So while I'm thrilled to participate in an ASF project, I'm trying to 
establish some justification.  Some examples:


  1.  Learn best practices in application development
  2.  Develop marketable expertise in CMS technology
  3.  Support mod_perl's viability as an enterprise solution

Where I live, most of the local economy is supported by foundation and 
government grants.  (Sign of the times.)  I'm sure I know people who could 
capitalize on the right FOSS project.


Justification is always the first step in any undertaking.  And I couldn't 
find it anywhere using your links.  Is there anything else you can send 
along?


Thanks again!

Sincerely,

Jim Schueler


Re: Fw: cms as an apache incubator project?

2012-01-08 Thread Jim Schueler

Hello Joe.

I'm definitely interested and I'll take look at the links below.

What is your role in this process?  How many volunteers are you looking 
for?  Any other response?  About how much time is required?


Thanks!

Jim Schueler

On Mon, 2 Jan 2012, Joe Schaefer wrote:


FYI: mod_perl based CMS currently in use at the ASF:

http://www.apache.org/dev/cms
http://www.apache.org/dev/cmsref

Looking for a few volunteers to start an Apache project
based on it...


- Forwarded Message -

From: Joe Schaefer 
To: Apache Infrastructure 
Sent: Tuesday, December 27, 2011 1:41 PM
Subject: cms as an apache incubator project?


Way back when the cms was first being implemented
some people suggested making it a formal project.
I pushed back on that issue at the time because it
was so heavily tied into our infra that trying to
productize it made little sense to me.


As time progressed it became clear tho that I was
the only person willing and able to work on the cms
plumbing, and maybe now that there are several projects
using the cms it's time to reask the question.


Would it make sense to anyone to volunteer to work
on the cms towards making it adoptable by other orgs?
Keep in mind people can work on and adopt the current
code right now as the svn tree is public:


https://svn.apache.org/repos/infra/websites/cms


The real issue becomes whether or not this community
has the right skills and interests to help other orgs
adopt it, and to modify the software to make that

a feasible proposition.


Any takers?








Re: BerkeleyDB error

2011-01-02 Thread Jim Schueler
I wrote the module Apache::ChildExit specifically to resolve the 
incompatibility between BerkeleyDB and Apache::Registry

  http://search.cpan.org/~tqisjim/ChildExit_0-1/

 -Jim

> Subject: Re: BerkeleyDB error
> From: Perrin Harkins 
> To: "Peram, Sudhakara" 
> Cc: modperl@perl.apache.org
> Content-Type: text/plain; charset=ISO-8859-1
> 
> Hi,
> 
> > I am able to execute that perl script
> > successfully few times (less than 5 times) after every restart of Apache web
> > server. After that I am getting following BerkeleyDB error message in log
> > file of that script (i.e., run_command).
> 
> Are you using Apache::Registry for this?
> 
> It sounds as if your requests fail on the second attempt to run the
> script in a process.  If you start apache with the -X option, do they
> fail on the second request?
> 
> - Perrin
> 


PerlTransHandler

2006-04-25 Thread Jim Schueler
What happened to PerlTransHandler in mod_perl-1.29?

Jim Schueler
Motor City Interactive