Re: please comment on new art for perl.apache.org

1999-10-11 Thread Remi Fasol

-- Matt Arnold [EMAIL PROTECTED] wrote:

 http://www.novia.net/~marnold/mod_perl/sample_3/ 

i really like the camel against the sun with the
apache feather.

 sure to include any good ideas you have about
 alternate designs.)

maybe you can make it more 'desert-y'. more like
sandstone. maybe add a little red.

remi.




Re: authentication via login form

1999-10-11 Thread Ajit Deshpande

On Sun, Oct 10, 1999 at 12:34:56AM -0700, Randal L. Schwartz wrote:
  "Jeffrey" == Jeffrey W Baker [EMAIL PROTECTED] writes:
 
 Jeffrey Cookies are an acceptable way to make the browser remember
 Jeffrey something about your site.
 
 Speak for yourself.  I'd change that to "... one possible way ..." instead
 of "acceptable way", and add "... for a single session".
 
 Cookies are evil when used for long-term identification.

So basically we have a problem with cookies that persist beyond 
one browser session. Do you have any other reservations about 
using cookies?

Ajit



http headers for cache-friendly modperl site

1999-10-11 Thread Oleg Bartunov

Andreas,

sorry for bothering you :-)

I found your nice introduction to http-headers
(Apache-correct_headers-1.16) and want to ask you
some questions.

1. Do you have some on-line examples that illustrate
   cache-friendly dynamic pages?

2. I'm building a server with fully dynamic content using
   Apache, mod_perl and HTML::Mason, and I would like to implement the
   cache-friendly strategy you described. But I have a problem:
   in Russia we have several encodings for the Russian language
   (koi8-r - mostly Unix, win-1251 - mostly Windows, and several
   others). Documents are generated in the server's native encoding
   and translated to another encoding on the fly, depending on several
   parameters (the user directly specifies a port number, for example,
   or the server decides by some logic - from the User-Agent string,
   say - which encoding would be best for the user). If the user
   directly selected a port number, the URL would change, say
   http://some.host:8100/ for koi8-r and http://some.host:8101/ for
   win-1251. In such a situation there is no problem with caching on
   proxy servers, because the URLs are different. But when the server
   automagically recognizes the client's encoding, the URL stays the
   same for different encodings - just http://some.host/ - and this
   causes trouble with proxies. Suppose user1 on a Windows machine
   and user2 on Unix request the same document through the same proxy.
   If user1 was first, the proxy caches the document in win-1251
   encoding, and user2 then gets the document from the proxy, but in
   the wrong encoding. user2 could press the reload button and make
   the proxy refresh the document, now in koi8-r encoding, and so on.
   This is a pain.
   So here is my question: what is the correct way to configure the
   HTTP headers in this situation? Actually, the situation I described
   would probably be closer to yours if, for example, the server
   generated content depending on the User-Agent - no translation,
   just a different design.
 
  I see a probable solution: let the browser use its local cache, and
  when a document expires, let the proxy revalidate it from the server
  instead of serving it from its own cache. It's already a big win if
  the user can use the local cache.
  I don't know how to realize this solution within the frame of
  HTTP 1.0, and most proxies support only HTTP 1.0. In my experiments
  I used Apache's Expires and Headers modules:
ExpiresByType text/html "access plus 2 hours"
  and this is enough for browsers to use the local cache,
  but unfortunately proxy servers also cache the documents, and I get
  the problem I described above. Is it possible to separate the
  proxy's cache and the browser's cache using HTTP headers? Again,
  using HTTP 1.0?
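
For reference, here is roughly the mod_perl equivalent of the
ExpiresByType directive above (a simplified sketch; $document stands
for the already-translated page, and Cache-Control is an HTTP 1.1
header - which is exactly the limitation I am running into, since
HTTP 1.0 proxies ignore it):

    use Apache::Constants qw(OK);
    use HTTP::Date qw(time2str);

    sub handler {
        my $r = shift;
        $r->content_type('text/html');
        # browsers may serve this page from their local cache for 2 hours
        $r->header_out('Expires' => time2str(time + 2*60*60));
        # an HTTP 1.1 cache would honour this and keep the document out
        # of shared (proxy) caches; an HTTP 1.0 proxy ignores it entirely
        $r->header_out('Cache-Control' => 'private, max-age=7200');
        $r->send_http_header;
        $r->print($document);   # the translated document body (assumed)
        return OK;
    }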

I searched the mailing lists and internet resources but didn't find a
solution to my problem. I'm Cc:ing the mod_perl mailing list in the
hope that people have already found one.
 
  Best regards,

Oleg 

_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83




RE: Re: please comment on new art for perl.apache.org

1999-10-11 Thread ricarDo oliveiRa

I agree with remi. Make it a little bit more... Tuareg, y'know?

But it's great as it is!

./Ricardo

--Original Message--
From: Remi Fasol [EMAIL PROTECTED]
To: Matt Arnold[EMAIL PROTECTED], [EMAIL PROTECTED]
Sent: October 11, 1999 8:51:40 AM GMT
Subject: Re: please comment on new art for perl.apache.org


-- Matt Arnold [EMAIL PROTECTED] wrote:

 http://www.novia.net/~marnold/mod_perl/sample_3/

i really like the camel against the sun with the
apache feather.

 sure to include any good ideas you have about
 alternate designs.)

maybe you can make it more 'desert-y'. more like
sandstone. maybe add a little red.

remi.




Re: http headers for cache-friendly modperl site

1999-10-11 Thread Andreas J. Koenig

 On Mon, 11 Oct 1999 13:18:12 +0400 (MSD), Oleg Bartunov [EMAIL PROTECTED] said:

  1. Do you have some on-line examples that illustrate
 cache-friendly dynamic pages?

On www.stadtplandienst.de the graphics are served with optimal
headers, I think. The headers for HTML could be improved, though.

On the other machines where I have prepared everything to be
cache-friendly, I still have to decide on a good expiration schedule.
And, as so often without a pressing need, I haven't yet gotten around
to fine-tuning it.

  2. I'm building a server with fully dynamic content using
 Apache, mod_perl and HTML::Mason, and I would like to implement the
 cache-friendly strategy you described. But I have a problem:
 in Russia we have several encodings for the Russian language
 (koi8-r - mostly Unix, win-1251 - mostly Windows, and several
 others). Documents are generated in the server's native encoding
 and translated to another encoding on the fly, depending on several
 parameters (the user directly specifies a port number, for example,
 or the server decides by some logic - from the User-Agent string,
 say - which encoding would be best for the user). If the user
 directly selected a port number, the URL would change, say
 http://some.host:8100/ for koi8-r and http://some.host:8101/ for
 win-1251. In such a situation there is no problem with caching on
 proxy servers, because the URLs are different. But when the server
 automagically recognizes the client's encoding, the URL stays the
 same for different encodings - just http://some.host/ - and this
 causes trouble with proxies. Suppose user1 on a Windows machine
 and user2 on Unix request the same document through the same proxy.

This is exactly the same problem as with any content negotiation. If
you are using content negotiation, you *must* specify the Vary header
as described in my document. But as soon as you have a Vary header,
you are out of luck with regard to caching proxies, because squid is
unable to cache documents with a Vary header (it just expires them
immediately), and I believe there is no other proxy available that
handles Vary headers intelligently. So although you are acting
cache-friendly and correct, the currently available cache technology
isn't up to the task.
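
For completeness: from a mod_perl handler the header is a one-liner
(a sketch - list whatever request headers your negotiation actually
looks at):

    # declare that the response depends on these request headers;
    # as noted above, squid then refuses to cache the document
    $r->header_out('Vary' => 'Accept-Charset, User-Agent');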

But as a workaround you can and should work with a redirect.

1. Decide about a parameter in the querystring or in the pathinfo or
   in the path that codifies everything you would normally handle by
   interpreting an incoming header, like Accept, Accept-Encoding,
   Accept-Charset, User-Agent, etc.

2. As one of the first things it does, your program should check for
   the presence of this parameter in the requested URI.

3. If it is there, you have a unique URI and can answer in a
   cache-friendly way. If it isn't there, you code it into the
   received URI and answer with a redirect to the URI you just
   constructed.

An example: www.meta-list.net, where we roughly do the following,
where $mgr is an Apache::HeavyCGI object we created earlier and $cgi
is an Apache::Request object.

  my $acc = $cgi->param('acc');

  if (defined($acc)) {
    my $lang;
    ($mgr->{CAN_UTF8},$mgr->{CAN_GZIP},$mgr->{CAN_PNG},$mgr->{Lang}) =
        unpack "a a a a*", $acc;
  } else {
    my $utf8  = $mgr->can_utf8;
    my $gzip  = $mgr->can_gzip;
    my $png   = $mgr->can_png;
    my $lang  = $r->header_in("Accept-Language");
    my $param = $utf8 . $gzip . $png . $mgr->uri_escape($lang);
    my $redir_to;
    if ($r->method_number == M_GET) {
      my $args = $r->args;
      $redir_to = $mgr->myurl . "?acc=$param";
      $redir_to .= "&$args" if $args;
    } elsif ($r->method_number == M_POST) {
      warn "We got a POST but we are only prepared for GET!";
      return;
    }
    $r->header_out("Location",$redir_to);
    require Apache::Constants;
    my $stat = Apache::Constants::REDIRECT();
    $r->status($stat);
    $r->send_http_header;
  }

This code doesn't work exactly as posted because I simplified a few
things to illustrate the point, but I hope it helps clarify things.

-- 
andreas



RE: new for embperl...

1999-10-11 Thread Gerald Richter


 I restart the apache and delete an error log file and comment
 PerlFreshRestart out from the httpd.conf.

 Then, I get the error after I called the page like...

 [Sun Oct 10 12:47:55 1999] [error] [client 129.174.124.121] File does
 not exist: /usr/local/apache/htdocs/emperl/eg/x/loop.htm


So the redefinition only appears after a restart with PerlFreshRestart On.
This is normal and isn't a problem.

Ok, looks like your Apache searches in the wrong directory. What does your
httpd.conf look like, and which URL do you use to request the document?

Gerald



---
Gerald Richter  ecos electronic communication services gmbh
Internet - Infodatenbanken - Apache - Perl - mod_perl - Embperl

E-Mail: [EMAIL PROTECTED] Tel:+49-6133/925151
WWW:http://www.ecos.de  Fax:+49-6133/925152
---




Re: authentication via login form

1999-10-11 Thread Randal L. Schwartz

 "Jeffrey" == Jeffrey W Baker [EMAIL PROTECTED] writes:

Jeffrey Randal, how do you suppose that HTTP basic auth works?  The
Jeffrey user agent stores the username and password and transmits
Jeffrey them to the server on every request.

The difference between a cookie and a basic-auth password is that for
a basic-auth, *I* am carrying the credential (the user/password), and
the browser is merely caching it, and I have some control over that.
A cookie is its own credential and therefore non-portable.  Until
someone invents a "cookie wallet" that I can plug into each browser
I'm using at the moment, cookies for long-term auth are truly
unusable.

Jeffrey This is exactly identical to a cookie which is set to have a
Jeffrey short expiration time.  That's why I say replacing basic auth
Jeffrey with cookies is acceptable: both of them are a totally
Jeffrey inadequate way to authenticate users.

Yes, and I agree with you.  For *short term* auth, cookies are OK.
But I've seen too many apps out there that use cookies for unique ID
for long term.  Wrong.  Broken.  Busted.  Basic-auth would be way
better, although still not ideal.

-- 
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
[EMAIL PROTECTED] URL:http://www.stonehenge.com/merlyn/
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!



Re: authentication via login form

1999-10-11 Thread Randal L. Schwartz

 "John" == John D Groenveld [EMAIL PROTECTED] writes:

John Well if you're going to generate your HTML on the fly, URL mangling
John isn't too bad. HTML::Mason and probably the other embedded perl modules
John would allow you to more selectively and consistently place session id
John into your HREFs and the strip session code from the Eagle book is very
John easy to implement.
John Your options are limitless, have fun!
John John
John [EMAIL PROTECTED] 

I was actually looking at a PerlTransHandler that I'd drop into
my site-wide files that would do something like the following:

my $uri = $r->uri;
if ($uri =~ s#/@@(\d+)@@/#/#) {
  $session = $1;
  $r->uri($uri);
  $r->header(Session => $session);
}

This way, a session ID could be generated of the form

/some/path/@@123456@@/foo/bar.html

And could be universally included in *any* URL handed to the user, but
only those things that generate HTML and wish to maintain the session
would notice and re-include $session = $r->header(Session) in their
strings.
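
Fleshed out, such a handler might look like the sketch below (the
package name is made up, and I've written $r->header_in where the
snippet above says $r->header, since header_in is where later phases
would look for an incoming header):

    package Apache::SessionTrans;   # hypothetical module name
    use strict;
    use Apache::Constants qw(DECLINED);

    # httpd.conf:  PerlTransHandler Apache::SessionTrans
    sub handler {
        my $r = shift;
        my $uri = $r->uri;
        if ($uri =~ s#/@@(\d+)@@/#/#) {
            my $id = $1;
            $r->uri($uri);                  # strip the token from the URI
            $r->header_in(Session => $id);  # hand the ID to later phases
        }
        return DECLINED;   # let Apache's normal URI translation proceed
    }
    1;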

-- 
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
[EMAIL PROTECTED] URL:http://www.stonehenge.com/merlyn/
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!



Re: please comment on new art for perl.apache.org

1999-10-11 Thread Jesse Kanner



On Sun, 10 Oct 1999, Matt Arnold wrote:

I have created a new page layout/template for perl.apache.org.  You can take
a look at it at http://www.novia.net/~marnold/mod_perl/sample_3/  Please let
me know if you think it's suitable for use on perl.apache.org.  If not, how
could it be improved?

The look and feel seems OK, although I too like the idea of making it
more 'desert-ish'. I'm more curious about your plans for content
organization. The current perl.apache.org home page is a bit of a mess.
There's very little sense of organization - just a lot of text and links
scattered all over the page.

Will you have any comps of the second level pages? Which information would
go where? So far, your left hand nav seems pretty good, provided the
current home page content will fit nicely into the buckets you've
proposed.


-j-



Re: http headers for cache-friendly modperl site

1999-10-11 Thread Oleg Bartunov

On 11 Oct 1999, Andreas J. Koenig wrote:

 Date: 11 Oct 1999 12:48:54 +0200
 From: "Andreas J. Koenig" [EMAIL PROTECTED]
 To: Oleg Bartunov [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED], [EMAIL PROTECTED]
 Subject: Re: http headers for cache-friendly modperl site
 
  On Mon, 11 Oct 1999 13:18:12 +0400 (MSD), Oleg Bartunov [EMAIL PROTECTED] said:
 
   1. Do you have some on-line examples that illustrate
  cache-friendly dynamic pages?
 
 On www.stadtplandienst.de the graphics are served with optimal
 headers, I think. The headers for HTML could be improved, though.
 
 On the other machines where I have prepared everything to be
 cache-friendly, I still have to decide on a good expiration schedule.
 And, as so often without a pressing need, I haven't yet gotten around
 to fine-tuning it.

Thanks for the references. I can also recommend the mailing list
archive at http://www.progressive-comp.com/Lists/, which is a
cache-friendly site with dynamic content. It is very useful, too.


 
   2. I'm building a server with fully dynamic content using
  Apache, mod_perl and HTML::Mason, and I would like to implement the
  cache-friendly strategy you described. But I have a problem:
  in Russia we have several encodings for the Russian language
  (koi8-r - mostly Unix, win-1251 - mostly Windows, and several
  others). Documents are generated in the server's native encoding
  and translated to another encoding on the fly, depending on several
  parameters (the user directly specifies a port number, for example,
  or the server decides by some logic - from the User-Agent string,
  say - which encoding would be best for the user). If the user
  directly selected a port number, the URL would change, say
  http://some.host:8100/ for koi8-r and http://some.host:8101/ for
  win-1251. In such a situation there is no problem with caching on
  proxy servers, because the URLs are different. But when the server
  automagically recognizes the client's encoding, the URL stays the
  same for different encodings - just http://some.host/ - and this
  causes trouble with proxies. Suppose user1 on a Windows machine
  and user2 on Unix request the same document through the same proxy.
 
 This is exactly the same problem as with any content negotiation. If
 you are using content negotiation, you *must* specify the Vary header
 as described in my document. But as soon as you have a Vary header,
 you are out of luck with regard to caching proxies, because squid is
 unable to cache documents with a Vary header (it just expires them
 immediately), and I believe there is no other proxy available that
 handles Vary headers intelligently. So although you are acting
 cache-friendly and correct, the currently available cache technology
 isn't up to the task.
 
 But as a workaround you can and should work with a redirect.
 
 1. Decide about a parameter in the querystring or in the pathinfo or
in the path that codifies everything you would normally handle by
interpreting an incoming header, like Accept, Accept-Encoding,
Accept-Charset, User-Agent, etc.
 
 2. As one of the first things it does, your program should check for
the presence of this parameter in the requested URI.
 
 3. If it is there, you have a unique URI and can answer in a
cache-friendly way. If it isn't there, you code it into the
received URI and answer with a redirect to the URI you just
constructed.

Yes,

I already found a workaround that generates unique URLs (using
different port numbers) and it works quite well. There were some
problems in a two-server setup (it's easy to get a redirection loop),
but I found the right way.

Thanks for help,

Oleg

 
 An example: www.meta-list.net, where we roughly do the following,
 where $mgr is an Apache::HeavyCGI object we created earlier and $cgi
 is an Apache::Request object.
 
 my $acc = $cgi->param('acc');
 
 if (defined($acc)) {
   my $lang;
   ($mgr->{CAN_UTF8},$mgr->{CAN_GZIP},$mgr->{CAN_PNG},$mgr->{Lang}) =
       unpack "a a a a*", $acc;
 } else {
   my $utf8  = $mgr->can_utf8;
   my $gzip  = $mgr->can_gzip;
   my $png   = $mgr->can_png;
   my $lang  = $r->header_in("Accept-Language");
   my $param = $utf8 . $gzip . $png . $mgr->uri_escape($lang);
   my $redir_to;
   if ($r->method_number == M_GET) {
     my $args = $r->args;
     $redir_to = $mgr->myurl . "?acc=$param";
     $redir_to .= "&$args" if $args;
   } elsif ($r->method_number == M_POST) {
     warn "We got a POST but we are only prepared for GET!";
     return;
   }
   $r->header_out("Location",$redir_to);
   require Apache::Constants;
   my $stat = Apache::Constants::REDIRECT();
   $r->status($stat);
   $r->send_http_header;
 }
 
 This code doesn't work exactly as posted because I simplified a few
 things to illustrate the point, but I hope it helps clarify things.
 
 -- 
 andreas
 

_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical 

Re: authentication via login form

1999-10-11 Thread Dave Hodgkinson


"Jamie O'Shaughnessy" [EMAIL PROTECTED] writes:

 
 On 11 Oct 99 15:05:23 +0100, you wrote:
 
 I was actually looking at a PerlTransHandler that I'd drop into
 my site-wide files that would do something like the following:
 
  my $uri = $r->uri;
  if ($uri =~ s#/@@(\d+)@@/#/#) {
$session = $1;
$r->uri($uri);
$r->header(Session => $session);
  }
 
 This way, a session ID could be generated of the form
 
  /some/path/@@123456@@/foo/bar.html
 
 
 But isn't the problem then that if the user cuts & pastes the URL for
 someone else to use (e.g. mails it to someone), they're also then passing
 on their authentication? 
 
 Doesn't this also mean you can only have links from sessioned pages ->
 non-sessioned pages or sessioned pages -> sessioned pages, and not
 non-sessioned pages -> sessioned pages? I'd classify a non-sessioned page
 as a static HTML page.
 
 Have I missed something here?

Perhaps an MD2 or MD5 hash that has an IP and the username or some
other bumf as semi-authentication might do the trick?
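
Something like this, perhaps (a sketch - the names are made up, and
the secret of course never leaves the server):

    use Digest::MD5 qw(md5_hex);

    my $secret = 'server-side-bumf';   # known only to the server
    my $token  = md5_hex(join ':', $user, $ip, $secret);
    my $url    = "http://www.example.com/news/$token/$user/story.html";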

We've done something similar for embedding URLs into newsletter type
emails so when people click through they come to something
personalised for them. 

Still, that's only for us pushing to them, anything involving money
requires a full session login on the secure server.


-- 
David Hodgkinson, Technical Director, Sift PLChttp://www.sift.co.uk
Editor, "The Highway Star"   http://www.deep-purple.com
Dave endorses Yanagisawa saxes, Apache, Perl, Linux, MySQL, emacs, gnus



Re: authentication via login form

1999-10-11 Thread Michael Peppler

Dave Hodgkinson writes:
  
  "Jamie O'Shaughnessy" [EMAIL PROTECTED] writes:
  
   
   On 11 Oct 99 15:05:23 +0100, you wrote:
   
   I was actually looking at a PerlTransHandler that I'd drop into
   my site-wide files that would do something like the following:
   
 my $uri = $r->uri;
 if ($uri =~ s#/@@(\d+)@@/#/#) {
   $session = $1;
   $r->uri($uri);
   $r->header(Session => $session);
 }
   
   This way, a session ID could be generated of the form
   
 /some/path/@@123456@@/foo/bar.html
   
   
   But isn't the problem then that if the user cuts & pastes the URL for
   someone else to use (e.g. mails it to someone), they're also then passing
   on their authentication? 
   
   Doesn't this also mean you can only have links from sessioned pages ->
   non-sessioned pages or sessioned pages -> sessioned pages, and not
   non-sessioned pages -> sessioned pages? I'd classify a non-sessioned page
   as a static HTML page.
   
   Have I missed something here?
  
  Perhaps an MD2 or MD5 hash that has an IP and the username or some
  other bumf as semi-authentication might do the trick?

Don't use the IP address. Some proxy systems have a non-static IP
address for requests coming from the same physical client (some of
AOL's proxies work that way, if I remember correctly...)

Michael
-- 
Michael Peppler -||-  Data Migrations Inc.
[EMAIL PROTECTED]-||-  http://www.mbay.net/~mpeppler
Int. Sybase User Group  -||-  http://www.isug.com
Sybase on Linux mailing list: [EMAIL PROTECTED]



Re: authentication via login form

1999-10-11 Thread Dave Hodgkinson


Michael Peppler [EMAIL PROTECTED] writes:

 Don't use the IP address. Some proxy systems have a non-static IP
 address for requests coming from the same physical client (some of
 AOL's proxies work that way, if I remember correctly...)

"...or something..." ;-)

-- 
David Hodgkinson, Technical Director, Sift PLChttp://www.sift.co.uk
Editor, "The Highway Star"   http://www.deep-purple.com
Dave endorses Yanagisawa saxes, Apache, Perl, Linux, MySQL, emacs, gnus



Re: authentication via login form

1999-10-11 Thread James G Smith

Dave Hodgkinson [EMAIL PROTECTED] wrote:

Michael Peppler [EMAIL PROTECTED] writes:

 Don't use the IP address. Some proxy systems have a non-static IP
 address for requests coming from the same physical client (some of
  AOL's proxies work that way, if I remember correctly...)

"...or something..." ;-)

I've been trying to put together a ticket scheme for authentication here,
and one of the issues is security and knowing someone else can't copy the
ticket. I went ahead and made the IP address an optional part of the
ticket -- the user can choose strong, medium, or weak security to include
their IP, their network, or no IP information at all. Might be something
to consider -- default to something sane for some well-known networks
like AOL.
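
In code, the strong/medium/weak choice boils down to something like
this (a sketch; "medium" here keeps just the class-C network):

    my %ip_part = (
        strong => $ip,                                   # full address
        medium => join('.', (split /\./, $ip)[0 .. 2]),  # network only
        weak   => '',                                    # no IP at all
    );
    my $ticket_ip = $ip_part{$security_level};
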
-- 
James Smith [EMAIL PROTECTED], 409-862-3725
Texas A&M CIS Operating Systems Group, Unix




RE: internal_redirect and POST

1999-10-11 Thread Dmitry Beransky

My apologies for continuing this topic, but I've been thinking some more 
about this issue over the weekend.  I'm still perplexed by this seemingly 
arbitrary limitation on the number of times a request body can be read.  It 
seems that, at least theoretically, it should be possible to cache the 
result of $r->content() or even $r->read() the first time it's called and 
return the cached data on subsequent calls.  It should also be possible to 
have $r->read return the cached data even when called from an internal 
redirect (by delegating calls to $r->prev->read, etc).  As the size of a 
request body can be arbitrarily large (e.g. file uploads), perhaps it would 
be better not to have the caching behavior turned on by default, but rather 
enable it on a per-request basis.
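
Something along these lines, perhaps (purely a sketch of the idea;
pnotes lives exactly as long as the request, which is the lifetime
the cache would need):

    # read the body once, then serve the cached copy on later calls,
    # walking back through internal redirects to the initial request
    sub cached_content {
        my $r = shift;
        $r = $r->prev while $r->prev;
        unless (defined $r->pnotes('req_body')) {
            $r->pnotes('req_body', scalar $r->content);
        }
        return $r->pnotes('req_body');
    }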

Again, this is all hypothetical.  Can anyone comment on feasibility (and 
usefulness) of such a feature?

Cheers
Dmitry


  Right :) You just missed the point. I want to read content
  (using $r->content) to check what is being passed and then redirect
  that same content to another handler. That new handler should be able
  to read it from some place. I fool it into believing that the method
  is GET, not POST, so it'll try to read $r->args. But that will also
  return an empty string.

But if you set $r->args to what you read from $r->content before 
redirecting, won't that work?



Re: internal_redirect and POST

1999-10-11 Thread James G Smith

Dmitry Beransky [EMAIL PROTECTED] wrote:
My apologies for continuing this topic, but I've been thinking some more 
about this issue over the weekend.  I'm still perplexed by this seemingly 
arbitrary limitation on the number of times a request body can be read.  It 
seems that, at least theoretically, it should be possible to cache the 
result of $r->content() or even $r->read() the first time it's called and 
return the cached data on subsequent calls.  It should also be possible to 
have $r->read return the cached data even when called from an internal 
redirect (by delegating calls to $r->prev->read, etc).  As the size of a 
request body can be arbitrarily large (e.g. file uploads), perhaps it would 
be better not to have the caching behavior turned on by default, but rather 
enable it on a per-request basis.

Again, this is all hypothetical.  Can anyone comment on feasibility (and 
usefulness) of such a feature?

This would be very nice, but looking through the source code for Apache last 
week, the socket connection is closely tied to its idea of stdin for POST 
operations.  I could not find a way to present the body of the request to 
arbitrary handlers.  Hopefully Apache 2.0 can eliminate this problem with the 
layered I/O that I've heard rumors about.
-- 
James Smith [EMAIL PROTECTED], 409-862-3725
Texas A&M CIS Operating Systems Group, Unix




Logging Session IDs from environment variables

1999-10-11 Thread Clinton Gormley

Hi all

I have asked this before, but I still haven't managed to get to the
bottom of it, so I was hoping that somebody might be able to shed some
more light.

(While you're about it, have a look at the site we have just launched :
http://www.orgasmicwines.com - a mod_perl & MySQL based site.)

If I store my session ID in an environment variable (or in $r->notes), I
can see it in other Apache:: modules, but when I try to log it using the
CustomLog directive in httpd.conf, the environment variable (or
$r->notes) is blank.

Does this have anything to do with the fact that I'm setting the ENV
variable in a PerlAuthenHandler, before %ENV is set up?  And that
shouldn't affect $r->notes, should it?
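
For reference, this is the combination I would expect to work (a
sketch - CustomLog reads notes with %{...}n and the subprocess
environment with %{...}e; plain %ENV assignments never reach the log):

    # in the PerlAuthenHandler, once the session ID is known
    $r->notes(SESSION_ID => $session_id);
    $r->subprocess_env(SESSION_ID => $session_id);

    # httpd.conf
    LogFormat "%h %l %u %t \"%r\" %>s %b %{SESSION_ID}n" session
    CustomLog logs/session_log session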

Any help greatly appreciated

Many thanks

Clint



Missing ORA_OCI.al file

1999-10-11 Thread Simon Miner

Good afternoon:

We've been having some trouble with our web servers' database
connections.  About once a day, our logs will be flooded with the
following error message.

[Fri Sep 17 12:38:11 1999] [error] DBD::Oracle initialisation failed:
Can't locate auto/DBD/Oracle/ORA_OCI.al in @INC (@INC contains:
/raid/staging/modules /www/modules /www/modules
/usr/local/lib/perl5/5.00503/sun4-solaris /usr/local/lib/perl5/5.00503
/usr/local/lib/perl5/site_perl/5.005/sun4-solaris
/usr/local/lib/perl5/site_perl/5.005 . /usr/local/apache-1.3.9/
/usr/local/apache-1.3.9/lib/perl) at
/usr/local/lib/perl5/site_perl/5.005/sun4-solaris/DBD/Oracle.pm line 48.

The /raid/staging/modules and /www/modules directories have been added
to @INC via "use lib" for in-house Perl modules.

The above error occurs when users are unable to establish a connection
to our Oracle database due to a high number of TNS connections being
open at the same time.  What is this ORA_OCI.al file, and what
information does it contain?  Should DBD::Oracle have installed it when
the module was built?  It seems to me that the only logical place for
the file to be is in the
/usr/local/lib/perl5/site_perl/5.005/sun4-solaris/auto/DBD/Oracle/
directory, since that's where the Oracle.bs and Oracle.so files reside
for DBD::Oracle.  But it's not there, nor can I find it anywhere else on
our system.

Can anyone shed some light on this ORA_OCI.al file for me?

System Specifics:

Solaris 2.7
Perl 5.005_03
Apache 1.3.9
mod_perl 1.21
DBI 1.13
DBD::Oracle 1.03

Thanks in advance for your help.

- Simon


*
Simon D. Miner
Web Developer
Christian Book Distributors
http://www.christianbook.com
*



Re: authentication via login form

1999-10-11 Thread Ofer Inbar

Eugene Sotirescu [EMAIL PROTECTED] wrote:
 I'd like to authenticate users via a login form (username, password
 text fields) instead of using the standard dialog box a browser pops
 up in response to a 401 response code.

Here's what I do in an application I'm currently working on...

Application has a table of users stored locally, and along with the
usual /etc/passwd-ish sort of info stored, each user row also has a
random per-user secret.  The application also has its own secret
stored in the source code (and changed from time to time).

When a browser session comes in without appropriate authentication
cookies, they get a login screen.  When they post username and
password, check that against the locally stored user table, and if
they match, issue a set of authentication cookies.  These hold three
pieces of information:
 - the username
 - the date-time (seconds since epoch) these cookies were issued
 - an MD5 hash

The hash is of: username, per-user secret, application secret,
 application's version number, IP address of browser session, and
 time cookies were issued.
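
In code the hash boils down to something like this (a sketch with
made-up variable names; everything is server-side except the IP,
which is taken from the connection rather than from anything the
client sends):

    use Digest::MD5 qw(md5_hex);

    my $hash = md5_hex(join '/',
        $username, $per_user_secret, $app_secret,
        $app_version, $remote_ip, $issued_time);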

If a session comes in with these cookies already set, they are
checked.  Application reads the username and time from the cookies,
the IP address from the Apache request object, then looks up the
per-user secret in the local users table, and using the secret and
version number already known to it, recomputes the MD5 hash.  If this
matches the hash read from the cookie, the cookie set is valid.

If the cookie set is invalid, treat the user as if they had no cookies
and send them to the login screen.  If the cookie set is valid,
compare the cookie time to current time, and see if the difference is
greater than some configurable "expire" (say, 30 minutes).  If the
difference is less, compute a new set of cookies with a new
time, and issue them to the browser with this page.  If the difference
is greater than the expire, send them to a screen that says the login
is expired and asks the user to resubmit their username/password.

Finally, use SSL to protect the whole mess from sniffing.  SSL is a
reasonable protection against the initial username/password submission
from being sniffed.  It's also a protection against the cookie set
being copied and used by another user, along with the IP address.  In
this particular situation, the application is not for the general
public, and we can safely assume that all legitimate users will have
consistent IP addresses for the life of a browser session, and will
have cookies enabled on their browser.

The real key to using cookies this way, I think, and the one that
transfers well to a more public environment, is the expiration, which
is *not* under browser control.  Any authentication cookie set handed
out by this application is good only for a limited time, so if anyone
can "break" it it'll only do them good if they can do it within that
time period (hence attacks against SSL & MD5 are impractical).  Since
the time encoded in the cookie hash and the time it is compared
against both come from the server's clock, the client can't affect
this by futzing with its own clock.  The tradeoff between convenience
and security can be controlled by changing the expire period.

Note that since a new set of cookies is issued for *every* access, a
valid login will not actually "expire" until the user fails to load
any new pages within the application for the duration of an expire
period.  Even if you set the expire to something really short, say a
minute, a user can continue using it without resubmitting
login/password as long as they do at least one thing that causes a new
HTTP access every minute.  As soon as they step away for a minute,
when they come back they'll be expired and have to log in again at the
next access.

I didn't use any modules for this, so I don't know if modules are
available for it, but the basic cookie generation and checking code
was rather short and simple.  The login & expired-login pages and
dealing with those form submissions took some more time to write, but
that's the sort of thing you probably want to design yourself anyway.

Thorough testing of every possible path through the login code is
essential here!  Remember also that users can "submit forms" that you
never gave them.  Test every possible path through the code to make
sure you're making the right decisions about a user's authentication
status, and do not limit the testing to only those form submissions
that make sense given what forms you present the user with.  This is
easy to overlook but you don't have security without it, if anyone
ever sees your code (and it's good to assume that someone will see it).

  -- Cos (Ofer Inbar)  --  [EMAIL PROTECTED] http://www.leftbank.com/CosWeb/
  -- Exodus Professional Services -- [EMAIL PROTECTED] http://www.exodus.net/
 Now, for the Quote out of Context(TM), from Mike Haertel [EMAIL PROTECTED]
 "I think it's pretty clear that Unix is dead as a research system."



RE: please comment on new art for perl.apache.org

1999-10-11 Thread Alex Schmelkin

Hi All,

While I'm sure Matt Arnold's effort to redesign perl.apache.org is greatly
appreciated by everyone on this list, it seems to me that a bit more
preparation and interface design should actually go into the final product.
Before we redesign the mod_perl site, we need to have a more common vision
of the goals and limitations of any redesign efforts. Design is not simply
an exercise that exists to make things 'pretty' - it is a discipline to
solve communications problems, and then to make things look 'pretty'. I ran
our community's site by a 'human/computer interaction designer with more
than 5 years web interface design experience' and he spit back a few quick
questions:

1. What browsers are different percentages of your user populations using?
2. What screen resolutions are different percentages of your user
populations using?
3. What purpose will the site serve - simply providing information, or a
marketing/evangelical need? How important are each of these considerations?
4. Who is the ultimate audience? - Who needs this information, and why?
5. What does the audience already know, and what do they need to know?
6. What is the feeling or mood to be?
7. What are some of the positive attributes of the current sites as you see
them? What are some of the negative attributes?

If some of the powers that be could answer these questions to the best of
their ability, it would greatly aid any web site redesign process,
especially those who have the access logs and can give us better
insight into Question 1.  I think it's great that someone (thanks Matt) took
the initiative to get this "redesign perl.apache.org" ball rolling again,
and if we get enough responses to these questions I think we'll wind up with
something great that really improves on the usability of the current site.
I'm more than happy to compile the responses.

Thanks,
Alex

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
 Behalf Of Jesse Kanner
 Sent: Monday, October 11, 1999 11:12 AM
 To: [EMAIL PROTECTED]
 Subject: Re: please comment on new art for perl.apache.org




 On Sun, 10 Oct 1999, Matt Arnold wrote:

 I have created a new page layout/template for perl.apache.org.
 You can take
 a look at it at http://www.novia.net/~marnold/mod_perl/sample_3/
  Please let
 me know if you think it's suitable for use on perl.apache.org.
 If not, how
 could it be improved?

 The look and feel seems OK, although I too like the idea of making it
 more 'desert-ish'. I'm more curious about your plans for content
 organization. The current perl.apache.org home page is a bit of a mess.
 There's very little sense of organization - just a lot of text and links
 scattered all over the page.

 Will you have any comps of the second level pages? Which information would
 go where? So far, your left hand nav seems pretty good, provided the
 current home page content will fit nicely into the buckets you've
 proposed.


 -j-




Re: please comment on new art for perl.apache.org

1999-10-11 Thread Randal L. Schwartz

 "Jeffrey" == Jeffrey Baker [EMAIL PROTECTED] writes:

Jeffrey 1) and 2) are easy.  All of them, and all of them.  To design
Jeffrey considering anything else is suicide.

Seconded.  Properly designed HTML works in the Nokia Communicator and
on my[*] 2000 pixel Monster Screen.  It's not that hard.


[*] (Not really, but in my dreams... :)
-- 
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
[EMAIL PROTECTED] URL:http://www.stonehenge.com/merlyn/
Perl/Unix/security consulting, Technical writing, Comedy, etc. etc.
See PerlTraining.Stonehenge.com for onsite and open-enrollment Perl training!



Re: please comment on new art for perl.apache.org

1999-10-11 Thread Jeffrey Baker

Alex Schmelkin wrote:
 1. What browsers are different percentages of your user populations using?
 2. What screen resolutions are different percentages of your user
 populations using?
 3. What purpose will the site serve - simply providing information, or a
 marketing/evangelical need? How important are each of these considerations?
 4. Who is the ultimate audience? - Who needs this information, and why?
 5. What does the audience already know, and what do they need to know?
 6. What is the feeling or mood to be?
 7. What are some of the positive attributes of the current sites as you see
 them? What are some of the negative attributes?

1) and 2) are easy.  All of them, and all of them.  To design
considering anything else is suicide.

-jwb
-- 
Jeffrey W. Baker * [EMAIL PROTECTED]
Critical Path, Inc. * we handle the world's email * www.cp.net
415.808.8807



[SITE] Preparation and ideas (was: please comment on new art for perl.apache.org)

1999-10-11 Thread Robin Berjon

At 17:37 11/10/1999 -0400, Alex Schmelkin wrote:
While I'm sure Matt Arnold's effort to redesign perl.apache.org is greatly
appreciated by everyone on this list, it seems to me that a bit more
preparation and interface design should actually go into the final product.

Indeed. I know the current site quite well as I spend a lot of time there,
but the lack of easy navigation makes it a less valuable resource in that
too much thought is spent on "How do I get there ?" (No offense to whoever
is presently maintaining this site, taking care of a site can be a lot of
work and simply adding new links now and then is already much appreciated).


   1. What browsers are different percentages of your user populations using?
   2. What screen resolutions are different percentages of your user
populations using?
   3. What purpose will the site serve - simply providing information, or a
marketing/evangelical need? How important are each of these considerations?
   4. Who is the ultimate audience? - Who needs this information, and why?
   5. What does the audience already know, and what do they need to know?
   6. What is the feeling or mood to be?
   7. What are some of the positive attributes of the current sites as you see
them? What are some of the negative attributes?

These are probably the right questions, however I think we need to change
the order and put questions 3 & 4 first. If we only want to provide
information to developers, then ordering the links and sections is probably
all we want, and we can forget about the eye candy. On the other hand if we
want to use this site for evangelical needs, then we might need a bit more
design. I don't have to deal much with people questioning my choice of
mod_perl (against MS ASP, Cold Fusion, etc...) so I don't really mind too
much about the evangelical/marketing side, but from my own experience, when
one does need to convince a boss or a client, being able to point to a
"good-looking" site helps a lot.

And that influences the answers to questions 1 & 2. Of course, everyone
here will want to support all browsers and all screen sizes. But that is
hardly compatible (within the same HTML page) with a "good-looking" site
(for evangelical/marketing values of good-looking). Hence if we decide to
have a site that works for promotion purposes, a nice design plus a link to
a text-only version would probably be the best choice.

I know that means having two versions of the site, but I guess that a
simple templating system separating the content from the layout would be a
good idea anyway. I think maybe that can be done in mod_perl ;-) (though it
doesn't seem to be running on perl.apache.org, while PHP and JServ are).


Once these questions have been answered, we should come up with easy to
understand top-level sections and subsections, a design that scales well
with information growth and changes, and a simple way to update content.
All in all, it shouldn't be *too* hard if we can leverage our common
experiences.

As a side note, reading about that desert idea this morning triggered a
neuron somehow, so I quickly modified an old template of mine that hadn't
been used and uploaded it at http://www.knowscape.org/modperl/ . It has a
few flaws (eg: it was made to fit into 800x600) but they can be easily
fixed. No competition here, just a thought. It is anyway too early as yet
to make design decisions. (nb: the eagle and the feather graphics have been
modified and used *without* permission, as has the text from the present
mod_perl page. This is not meant to be a public page.). Also, if you don't
see the link with the desert idea, don't ask me ;-)

I too am glad that this ball is rolling again, I'm sure we can do something
good if we manage to avoid the all-graphics-are-evil-developer vs
why-bother-with-text?-webdesigner flamewars.




.Robin
Radioactive cats have 18 half-lives.