[HELP] How to authorize only one user to fetch a given URI?

2002-05-07 Thread Jose . auguste-etienne

Hi everybody,

I'm designing an Web interface to edit configuration data stored in an XML
file.

To ensure data integrity, I have to prevent multiple users from accessing
the URLs that allow config modification, i.e. file reading/writing.

Let's say these URLs are located under location '/admin/config/'.

I have already set up an authentication stage for user 'admin' for URLs
under location '/admin/'.

Do you have any suggestions on a way to achieve single-user authorization
for a given location?

I haven't tried file locking, as I don't know how it would behave when the
user simply exits the browser.

My idea was to set a kind of handler detecting whether user 'admin'
is already browsing some URL under '/admin/config/',
but I don't know how to achieve that.
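One way such a handler might look (a rough, untested sketch for mod_perl 1.x; the package name, lock-file path, timeout, and the use of the Cookie header as a session marker are all invented for illustration) is a PerlAccessHandler that uses a lock file's mtime as a heartbeat, so a browser that simply goes away releases the lock after a timeout:

```perl
package My::SingleEditor;
use strict;
use Apache::Constants qw(OK FORBIDDEN);

# All names and paths below are illustrative only.
my $LOCKFILE = '/var/run/admin-config.lock';
my $TIMEOUT  = 600;   # idle seconds before an abandoned lock expires

sub handler {
    my $r = shift;
    my $session = $r->header_in('Cookie') || '';   # crude session marker

    if (-e $LOCKFILE) {
        open my $fh, '<', $LOCKFILE or return FORBIDDEN;
        chomp(my $owner = <$fh> || '');
        close $fh;
        # Another session holds a fresh lock: refuse access
        if ($owner ne $session && time() - (stat $LOCKFILE)[9] < $TIMEOUT) {
            $r->log_reason('config area locked by another session');
            return FORBIDDEN;
        }
    }
    # Take (or refresh) the lock for this session
    open my $out, '>', $LOCKFILE or return FORBIDDEN;
    print $out "$session\n";
    close $out;
    return OK;
}
1;
```

Installed with `PerlAccessHandler My::SingleEditor` inside `<Location /admin/config/>`, the mtime-based expiry would also cover the exiting-browser worry: an abandoned lock simply ages out.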

FYI, I'm using Apache 1.3.22 / mod_perl 1.26 / HTML::Mason 1.04

Thanks in advance

José




Dynamic config from db?

2002-05-07 Thread Tim Burden



Hi List,

New to mod_perl, but I have a specific project in mind that will immerse
me in it.

I'm wondering if someone could comment on both the possibility of, and the
wisdom of, pulling server configs dynamically from a database WITHOUT
requiring a server restart.

Prime example: you've added a new name-based virtual host, and you've
stored all of his permissions and stuff in a MySQL database. This host has
paid to have PHP turned on in his directories, so you want to do something
like

    php_admin_flag engine on

for this host only. No problem: restart the server and have mod_perl read
the info it needs to reconfigure the server from the database. My question
is, is there a way to do this WITHOUT having to restart the server? Would
there be too much overhead to read from the database on every request? So
would there be a way to 'cache' the lookup? Are there any existing
examples I could be directed to?

Tim
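For the caching part of the question, one common pattern (a sketch only; the DSN, table, and column names here are invented) is to do the lookup in an early handler and memoize it with a time-to-live, so the database is hit once every few minutes per child rather than on every request:

```perl
package My::VHostConfig;
use strict;
use DBI;

my %CACHE;       # vhost name => { expires => epoch, conf => hashref }
my $TTL = 300;   # re-read from the database every 5 minutes

sub lookup {
    my ($host) = @_;
    my $entry = $CACHE{$host};
    return $entry->{conf} if $entry && $entry->{expires} > time();

    my $dbh  = DBI->connect('dbi:mysql:vhosts', 'user', 'pass');  # assumed DSN
    my $conf = $dbh->selectrow_hashref(
        'SELECT * FROM vhost WHERE name = ?', undef, $host);
    $dbh->disconnect;

    $CACHE{$host} = { expires => time() + $TTL, conf => $conf };
    return $conf;
}
1;
```

Note the cache lives per Apache child process, and this only covers data your own handlers consume; toggling another module's directives (like php_admin_flag) at runtime is a separate problem that this sketch does not solve.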
 


OpenInteract 1.40 released

2002-05-07 Thread Chris Winters

A new version (1.40) of OpenInteract has been released to CPAN.
OpenInteract is an extensible web application server built on
Apache, mod_perl, the Template Toolkit and SPOPS object persistence.

OpenInteract now runs on Oracle -- with one minor caveat: sessions. Many
thanks to Ben Avery <[EMAIL PROTECTED]> for patient debugging and
installs. Package configuration customization is centralized now --
package upgrades won't overwrite changes you've made. And different
session storage backends -- filesystem and SQLite -- are now available.

As always, the many other modifications in this release are listed in
the 'Changes' file.

Source (also via CPAN):
 http://prdownloads.sourceforge.net/openinteract/OpenInteract-1.40.tar.gz

Detailed changes:
 http://sourceforge.net/project/shownotes.php?release_id=88422

Thanks,

Chris

-- 
Chris Winters ([EMAIL PROTECTED])
Building enterprise-capable snack solutions since 1988.




[Fwd: ApacheCon US 2002: Call for Participation]

2002-05-07 Thread Stas Bekman

FYI

 Original Message 
Subject: ApacheCon US 2002: Call for Participation
Date: Mon, 06 May 2002 13:52:22 -0400
From: Rodent of Unusual Size <[EMAIL PROTECTED]>
Organization: The Apache Software Foundation
To: [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED], 
[EMAIL PROTECTED], 
[EMAIL PROTECTED], [EMAIL PROTECTED]
Newsgroups: 
comp.infosystems.www.servers.unix,comp.infosystems.www.servers.ms-windows,comp.infosystems.www.servers.misc,de.comm.infosystems.www.servers

-BEGIN PGP SIGNED MESSAGE-

Call for Participation: ApacheCon US 2002
=========================================
November 18-21, 2002, Las Vegas, Nevada, US

SUBMISSION DEADLINE: Friday, 31 May 2002, 17:30 EDT

Come share your knowledge of Apache software at this
educational and fun-filled gathering of Apache users,
vendors, and friends.

Apache Software Foundation members are designing the technical
program for ApacheCon US 2002, which will include over 40 sessions.

We are particularly interested in session proposals
covering:

o Apache Web server topics (installation, compilation,
   configuration, migration, Version 2.0)
o All Apache Software Foundation projects (Jakarta,
   mod_perl, Xerces, et cetera)
o scripting languages and dynamic content
   (Java, PHP, Perl, TCL, Python, XML, XSL, etc.)
o Security and eCommerce
o Performance tuning, load balancing, high availability
o tips for writing Apache Web server modules
o Technical and non-technical case studies
o new Web-related technologies

Only educational sessions related to projects of the Apache
Software Foundation or the Web in general will be considered
(commercial sales or marketing presentations won't be accepted;
please contact [EMAIL PROTECTED] should you be interested in
giving a vendor presentation).

If you would like to be a speaker at the ApacheCon US 2002
event, please go to the ApacheCon Web site, log in, and choose
the 'Submit a CFP' option from the list there:

  http://ApacheCon.Com/html/login.html

NOTE: If you were a speaker or delegate at a past ApacheCon,
please log in using the email address you used before; this
will remember your information and pre-load the CFP form for
you.  If this is your first time being involved with ApacheCon,
please create a new account.
- --
#ken 
P-)}

Ken Coar, Sanagendamgagwedweinini  http://Golux.Com/coar/
Author, developer, opinionist  http://Apache-Server.Com/

"Millennium hand and shrimp!"

-BEGIN PGP SIGNATURE-
Version: PGPfreeware 6.5.8 for non-commercial use 

iQCVAwUBPNbCv5rNPMCpn3XdAQEOIwQAoU7FRkqL7yNhzjtUcPlEeNE/+ezgsfyN
tNbt9eVqLTm/1s0kO7LK1zKG1MAckHLXF7JYHXFlD4J9TTspMDtTUUkp5HThF6Ay
C3hm7ZnrfSO+DWKbOMZ3hHZytv8rRC8MhZe2wY5Ps5qtFRt5o4QdGAja5FRNrUwM
ozPV4l2Tk2s=
=uj+B
-END PGP SIGNATURE-

-
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

-- 


__
Stas BekmanJAm_pH --> Just Another mod_perl Hacker
http://stason.org/ mod_perl Guide ---> http://perl.apache.org
mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com




Re: Cheap and unique

2002-05-07 Thread jjore

I would have sent both to the client. The sequence would be *the* id and
is guaranteed to be unique by the database (or whatever else is around
that does this reliably). The idea is that by combining the random secret
with the ID and sending the digest along with it, the ID number can't just
be incremented or fooled with. The digest isn't unique, but it would keep
the unique bit from being fiddled with.

That said, I'm just a paranoid person regarding security (especially for
my outside-of-work work at http://www.greentechnologist.org) and I
wouldn't want to keep the random bits around for too long, to prevent them
from being brute-forced. I'm imagining that someone with a fast computer,
the ID number, and knowledge of how that combines with randomness for the
digest source might be able to locate the bits just by trying a lot of
them. I would expire them after a while just to prevent that from
happening: if there is a 15-minute session, new random bits
are generated every five minutes. New sessions would be tied to the most
recent random data. The random data might be expired at the session
timeout. This assumes that I'm tracking which random bits are associated
with the session to verify that the digest was ok. All that means is that
the randomness is valid as long as the session is still active and
normally expires after a time period otherwise. Perhaps other people would
get by just keeping a static secret on the server. That may be overkill
for many people; it might not be for the apps I'm working with.

Joshua b. Jore
Domino Developer by day, political by night




James G Smith <[EMAIL PROTECTED]>
05/06/2002 01:45 PM
Please respond to JGSmith

 
To: [EMAIL PROTECTED]
cc: [EMAIL PROTECTED]
Subject:Re: Cheap and unique


[EMAIL PROTECTED] wrote:
>I've been following this conversation and I'd like to clarify whether my 
>idea (since I and others want to do this as well) would be to use an 
>incrementing counter for uniqueness. Then also store a bit of secret 
>randomness, concatenate both values together and create a digest hash. 
>That hash would be sent along with the sequence as well. This would allow 
>uniqueness and prevent guessing since the digest would have to match as 
>well. Depending on my paranoia I could either get fresh random bits each 
>time (and have a good hardware source for this then) or keep it around for 
>a bit and throw it away after a period.

I think I understand you correctly, but I'm not sure.

You mention the sequence being incremented for uniqueness and the
digest.  I think you propose to send the sequence along with the
digest (the digest containing that bit of randomness along with the
sequence), but you also mention keeping the random bits around for
only a short time, which would indicate they aren't being used to
verify the sequence, but produce the sequence via the hash.

A digest is not unique, especially with the random bit of data thrown
in.  For example, MD5 has 128 bits, but can hash any length string.
There are more than 2^128 strings that MD5 can take to 128 bits.
Therefore, MD5 does not produce a unique value, though it is a
reproducible value (the same input string will always produce the
same output string).  You can replace MD5 with MHX (my hash X) and
the number of bits with some other length and the results are still
the same -- in other words, no hash will give unique results.

The secret string concatenated with the unique number and then hashed
can be used to guarantee that the number has not been tampered with,
but the secret string would need to be constant to be able to catch
tampering.  Otherwise, how can you tell if the hash is correct?
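The constant-secret scheme described above fits in a few lines (a sketch using Digest::MD5 from the standard distribution; the secret string and token format are placeholders, and any unforgeable digest would do):

```perl
use strict;
use Digest::MD5 qw(md5_hex);

my $secret = 'some long server-side secret';   # never sent to the client

# Issue: the unique sequence id plus a digest of (id . secret)
sub issue_token {
    my ($id) = @_;
    return $id . ':' . md5_hex($id . $secret);
}

# Verify: recompute the digest and compare; returns the id or undef
sub verify_token {
    my ($token) = @_;
    my ($id, $digest) = split /:/, $token, 2;
    return md5_hex($id . $secret) eq $digest ? $id : undef;
}

my $token = issue_token(42);
die "tampered token" unless defined verify_token($token);
```

An attacker who increments the id without knowing the secret cannot produce a matching digest, which is exactly the tamper-detection property being discussed.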
-- 
James Smith <[EMAIL PROTECTED]>, 979-862-3725
Texas A&M CIS Operating Systems Group, Unix






Re: Cheap and unique

2002-05-07 Thread James G Smith

[EMAIL PROTECTED] wrote:
>I would have sent both to the client. The sequence would be *the* id and 
>is guaranteed to be unique by the database (or whatever else is around 
>that does this reliably). The idea is that by combining the random secret 
>with the ID and sending the digest along with it, the ID number can't just 
>be incremented or fooled with. The digest isn't unique, but it would keep 
>the unique bit from being fiddled with.
>
>That said, I'm just a paranoid person regarding security (especially for 
>my outside-of-work work at http://www.greentechnologist.org) and I 
>wouldn't want to keep the random bits around for too long, to prevent them 
>from being brute-forced. I'm imagining that someone with a fast computer, 
>the ID number, and knowledge of how that combines with randomness for the 
>digest source might be able to locate the bits just by trying a lot of 
>them. I would expire them after a while just to prevent that from 
>happening: if there is a 15-minute session, new random bits 
>are generated every five minutes. New sessions would be tied to the most 
>recent random data. The random data might be expired at the session 
>timeout. This assumes that I'm tracking which random bits are associated 
>with the session to verify that the digest was ok. All that means is that 
>the randomness is valid as long as the session is still active and 
>normally expires after a time period otherwise. Perhaps other people would 
>get by just keeping a static secret on the server. That may be overkill 
>for many people; it might not be for the apps I'm working with.

Thanks for the clarification -- makes a lot more sense.  At first
glance, I think that would work.
-- 
James Smith <[EMAIL PROTECTED]>, 979-862-3725
Texas A&M CIS Operating Systems Group, Unix



HTML::Entities chokes on XML::Parser strings

2002-05-07 Thread John Siracusa

I ran into this problem during mod_perl development, and I'm posting it to
this list hoping that other mod_perl developers have dealt with the same
thing and have good solutions :)

I've found that strings collected while processing XML using XML::Parser do
not play nice with the HTML::Entities module.  Here's the sample program
illustrating the problem:

#!/usr/bin/perl -w

use strict;

use HTML::Entities;
use XML::Parser;

my $buffer;

my $p = XML::Parser->new(Handlers => { Char  => \&xml_char });

my $xml = '<?xml version="1.0" encoding="iso-8859-1"?><x>' .
  chr(0xE9) . '</x>';

$p->parse($xml);

print encode_entities($buffer), "\n";

sub xml_char
{
  my($expat, $string) = @_;
  
  $buffer .= $string;
}

The output unfortunately looks like this:

&Atilde;&copy;

Which makes very little sense, since the correct entity for 0xE9 is:

&eacute;

My current work-around is to run the buffer through a (lossy!?) pack/unpack
cycle:

my $buffer2 = pack("C*", unpack("U*", $buffer));
print encode_entities($buffer2), "\n";

This works and prints:

&eacute;

I hope it is not lossy when using iso-8859-1 encoding, but I'm guessing it
will maul UTF-8 or UTF-16.  This seems like quite an evil hack.

So, what is the Right Thing to do here?  Which module, if any, is at fault?
Is there some combination of Perl Unicode-related "use" statements that will
help me here?  Has anyone else run into this problem?

-John




Re: Cheap and unique

2002-05-07 Thread Simon Oliver

> [EMAIL PROTECTED] wrote:
> >digest source might be able to locate the bits just by trying a lot of
> >them. I would expire them after a while just to prevent that from
> >happening by stating that if there is a 15 minute session, new random bits
> >are generated each five minutes.

I missed the start of this thread, but how about generating a new id (or
random bits) on every visit: on first connect the client is assigned a
session id; on subsequent connects, the previous id is verified and a new
id is generated and returned.  This makes it even harder to crack.

--
  Simon Oliver



Re: HTML::Entities chokes on XML::Parser strings

2002-05-07 Thread Paul Lindner

The output from your example looks like UTF-8 data (Ã is a
commonly seen UTF-8 escape sequence).  XML::Parser converts all
incoming text into UTF-8.  You will need to convert it back to
iso-8859-1.

My favorite is Text::Iconv

 use Text::Iconv;
 my $utf8tolatin1 = Text::Iconv->new("UTF-8", "ISO8859-1");

 my $buffer_latin1 = $utf8tolatin1->convert($buffer);


On Tue, May 07, 2002 at 10:51:10AM -0400, John Siracusa wrote:
> I ran into this problem during mod_perl development, and I'm posting it to
> this list hoping that other mod_perl developers have dealt with the same
> thing and have good solutions :)
> 
> I've found that strings collected while processing XML using XML::Parser do
> not play nice with the HTML::Entities module.  Here's the sample program
> illustrating the problem:
> 
> #!/usr/bin/perl -w
> 
> use strict;
> 
> use HTML::Entities;
> use XML::Parser;
> 
> my $buffer;
> 
> my $p = XML::Parser->new(Handlers => { Char  => \&xml_char });
> 
> my $xml = '<?xml version="1.0" encoding="iso-8859-1"?><x>' .
>   chr(0xE9) . '</x>';
> 
> $p->parse($xml);
> 
> print encode_entities($buffer), "\n";
> 
> sub xml_char
> {
>   my($expat, $string) = @_;
>   
>   $buffer .= $string;
> }
> 
> The output unfortunately looks like this:
> 
> &Atilde;&copy;
> 
> Which makes very little sense, since the correct entity for 0xE9 is:
> 
> &eacute;
> 
> My current work-around is to run the buffer through a (lossy!?) pack/unpack
> cycle:
> 
> my $buffer2 = pack("C*", unpack("U*", $buffer));
> print encode_entities($buffer2), "\n";
> 
> This works and prints:
> 
> &eacute;
> 
> I hope it is not lossy when using iso-8859-1 encoding, but I'm guessing it
> will maul UTF-8 or UTF-16.  This seems like quite an evil hack.
> 
> So, what is the Right Thing to do here?  Which module, if any, is at fault?
> Is there some combination of Perl Unicode-related "use" statements that will
> help me here?  Has anyone else run into this problem?
> 
> -John

-- 
Paul Lindner[EMAIL PROTECTED]   | | | | |  |  |  |   |   |

mod_perl Developer's Cookbook   http://www.modperlcookbook.org/
 Human Rights Declaration   http://www.unhchr.ch/udhr/



Perennial Sessions: An Object Interface?

2002-05-07 Thread Jeff

Folks,

A pol of gee's in advance - this is probably an inane question for ye
olde mod_perl gods but I'll ask it anyway to see if I get struck by the
lightning of enlightenment!

All the e dot gee's that I can find, perldoc, and guide pages show
sessions being used with a tied old hash interface - I was wondering if
there is a new-style object interface?

Something like:

  my ($session, $session_id);
  if ( exists $jar->{session} ) {
    # restore the session from server storage
    $session_id = $jar->{session}->value();
    $session = Apache::Session::File->open( $session_id,
      { isnew => 0, opened => ht_time(time()) } );
    LogFatal "Didn't find session: $session_id" unless $session;

  } else {
    # Create a new session and remember the ID in a cookie
    $session = Apache::Session::File->new( $session_id,
      { isnew => 1, newtime => ht_time(time()) } );
    $session_id = $session->{_session_id};
    my $sessionCookie = Apache::Cookie->new( $r,
      -name   => 'session',
      -value  => $session_id,
      -path   => '/',
      -domain => 'www.nowhere.com',
    );
    $sessionCookie->bake();
  }
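For comparison, the existing tied-hash interface being asked about looks roughly like this (a sketch; the directory paths are placeholders, and passing undef as the id asks Apache::Session to generate a new one):

```perl
use strict;
use Apache::Session::File;

my %session;
tie %session, 'Apache::Session::File', undef, {   # undef => create new session
    Directory     => '/tmp/sessions',
    LockDirectory => '/tmp/sessions/locks',
};

my $session_id = $session{_session_id};   # id generated by Apache::Session
$session{visits}++;                       # stored values persist across requests

untie %session;                           # flushes the session to disk
```

To restore an existing session, the cookie's id is passed in place of undef; the tie dies if the id is unknown, so it is usually wrapped in an eval.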

TIA

Jeff





Re: HTML::Entities chokes on XML::Parser strings

2002-05-07 Thread Rafael Garcia-Suarez

John Siracusa wrote:
> I ran into this problem during mod_perl development, and I'm posting it to
> this list hoping that other mod_perl developers have dealt with the same
> thing and have good solutions :)

I did ;-)

> I've found that strings collected while processing XML using XML::Parser do
> not play nice with the HTML::Entities module.  Here's the sample program
> illustrating the problem:
> 
> #!/usr/bin/perl -w
> 
> use strict;
> 
> use HTML::Entities;
> use XML::Parser;
> 
> my $buffer;
> 
> my $p = XML::Parser->new(Handlers => { Char  => \&xml_char });
> 
> my $xml = '<?xml version="1.0" encoding="iso-8859-1"?><x>' .
>   chr(0xE9) . '</x>';
> 
> $p->parse($xml);
> 
> print encode_entities($buffer), "\n";
> 
> sub xml_char
> {
>   my($expat, $string) = @_;
>   
>   $buffer .= $string;
> }
> 
> The output unfortunately looks like this:
> 
> &Atilde;&copy;
> 
> Which makes very little sense, since the correct entity for 0xE9 is:
> 
> &eacute;

That's an XML::Parser issue.
XML::Parser gives UTF-8 to your Char handler, as specified in the manpage:
"Whatever the encoding of the string in the original document,
this is given to the handler in UTF-8."

The workaround I used is to write the handler like this:

sub xml_char
{
   my ($expat) = @_;
   $buffer .= $expat->original_string;
}

By reading the original string, there's no need to convert UTF-8 back to
iso-8859-1.

> My current work-around is to run the buffer through a (lossy!?) pack/unpack
> cycle:
> 
> my $buffer2 = pack("C*", unpack("U*", $buffer));
> print encode_entities($buffer2), "\n";
> 
> This works and prints:
> 
> &eacute;
> 
> I hope it is not lossy when using iso-8859-1 encoding, but I'm guessing it
> will maul UTF-8 or UTF-16.  This seems like quite an evil hack.
> 
> So, what is the Right Thing to do here?  Which module, if any, is at fault?
> Is there some combination of Perl Unicode-related "use" statements that will
> help me here?  Has anyone else run into this problem?
> 
> -John
> 



-- 
Rafael Garcia-Suarez




Re: HTML::Entities chokes on XML::Parser strings

2002-05-07 Thread John Siracusa

On 5/7/02 10:58 AM, Paul Lindner wrote:
> The output from your example looks like UTF-8 data (Ã is a
> commonly seen UTF-8 escape sequence).  XML::Parser converts all
> incoming text into UTF-8.  You will need to convert it back to
> iso-8859-1.
> 
> My favorite is Text::Iconv
> 
>use Text::Iconv;
>$utf8tolatin1 = Text::Iconv->new("UTF-8", "ISO8859-1");
> 
>my $buffer_latin1 = $utf8tolatin1->convert($buffer);

So HTML::Entities only works with ISO8859-1 (or ASCII, presumably)?  What if
I have actual UTF-8 data?  Won't conversion to ISO8859-1 in service of
HTML::Entities result in data loss?

-John




Re: HTML::Entities chokes on XML::Parser strings

2002-05-07 Thread John Siracusa

On 5/7/02 11:06 AM, Rafael Garcia-Suarez wrote:
> The workaround I used is to write the handler like this :
> 
> sub xml_char
> {
>  my ($expat) = @_;
>  $buffer .= $expat->original_string;
> }
> 
> Reading the original string, no need to convert UTF-8 back to iso-8859-1.

Doh!  I dunno why I didn't think of that, since I've used that expat method
plenty of times before.  This seems safer than forcing a conversion from
UTF-8 to something else (although the other technique is nice to know too :)

-John




Re: HTML::Entities chokes on XML::Parser strings

2002-05-07 Thread Gisle Aas

John Siracusa <[EMAIL PROTECTED]> writes:

> On 5/7/02 10:58 AM, Paul Lindner wrote:
> > The output from your example looks like UTF-8 data (Ã is a
> > commonly seen UTF-8 escape sequence).  XML::Parser converts all
> > incoming text into UTF-8.  You will need to convert it back to
> > iso-8859-1.
> > 
> > My favorite is Text::Iconv
> > 
> >use Text::Iconv;
> >$utf8tolatin1 = Text::Iconv->new("UTF-8", "ISO8859-1");
> > 
> >my $buffer_latin1 = $utf8tolatin1->convert($buffer);
> 
> So HTML::Entities only works with ISO8859-1 (or ASCII, presumably)?

Not true.  But the unicode support in perl-5.6.x has many bugs.  With
5.8 things will be better.  It is a bad idea for XML::Parser to give
out strings with the UTF8 flag set.

Regards,
Gisle



mod_perl failing on win32 (CVS snapshots)

2002-05-07 Thread Alessandro Forghieri

Greetings.

Compiling and running the snapshots for BOTH apache and modperl (May 6th
snapshots)
on windowsNT sp6, I observe the following:

i) the tests punt on conftree.t (it goes on forever)

ii) installing and testing in a registry situation the request succeeds on
the first
invocation, but hangs on subsequent requests (either browser or CLI).
The static server runs unabashed in the meantime.

iii) Running conftree.t in isolation succeeds, so it is very likely that
(i) is a manifestation of (ii): conftree.t hangs because it is the second
invocation into mod_perl.

This experience confirms an earlier thread from a poster having the same
problem. BTW I see exactly the same when using the "gold" 2.0.35.

Cheers,
alf

P.S. Compiling the httpd snapshot on Win32 is a royal pain, with tons of
includes amiss:
os.h, mod_core.h, mod_dav.h





Re: HTML::Entities chokes on XML::Parser strings

2002-05-07 Thread John Siracusa

On 5/7/02 11:25 AM, Gisle Aas wrote:
> John Siracusa <[EMAIL PROTECTED]> writes:
>> On 5/7/02 10:58 AM, Paul Lindner wrote:
>>> The output from your example looks like UTF-8 data (Ã is a
>>> commonly seen UTF-8 escape sequence).  XML::Parser converts all
>>> incoming text into UTF-8.  You will need to convert it back to
>>> iso-8859-1.
>>> 
>>> My favorite is Text::Iconv
>>> 
>>>use Text::Iconv;
>>>$utf8tolatin1 = Text::Iconv->new("UTF-8", "ISO8859-1");
>>> 
>>>my $buffer_latin1 = $utf8tolatin1->convert($buffer);
>> 
>> So HTML::Entities only works with ISO8859-1 (or ASCII, presumably)?
> 
> Not true.  But the unicode support in perl-5.6.x has many bugs.  With
> 5.8 things will be better.  It is a bad idea for XML::Parser to give
> out strings with the UTF8 flag set.

Well, I'll let you guys figure it out (all fixed in 5.8, right? :)  In the
meantime, I guess I'll stick with the workaround(s) posted... :)

-John




Re: Cheap and unique

2002-05-07 Thread jjore

(Anyone else, is there a module that already does this?)

That misses two things: random data is not unique and random data is 
scarce.

The thread started where someone else wanted a cheap way to generate 
difficult to guess and unique session ids. It went on around how using a 
random function doesn't provide uniqueness and eventually ended up where 
we're at now (verified sequential IDs). Part of the issue is that the 
output from a digest (SHA1, MD5) or random data is not unique and cannot 
ever be expected to be. So if you want uniqueness, you should use 
something that produces values without looping - like simple iteration. 
You could use some other number series but that's just pointless since you 
don't need to keep your session IDs secret and it will just confuse the 
next person to look at the code. You also run out of numbers faster if you 
{in|de}crement by more than 1.

A lot of other smarter people can tell you why random data is scarce. Just 
accept it. /dev/urandom is not an infinite font of quality entropy. If you 
use too much then you fall back to simpler algorithms that will enter into 
loops which are highly non-random.

So what I said was: keep some random data secret for a bit, use it for the
hashes, and after a while get new random data. A malicious attacker can
attempt to brute-force your secret during the period that the secret is
still valid. Once the secret is invalidated, the attacker has to start the
key-space search over again. This is like asking distributed.net to start
over every few minutes. While distributed.net would eventually make its
way through vast keyspaces for a single secret, it can't keep up with
volatile secrets.

Josh




Simon Oliver <[EMAIL PROTECTED]>
05/07/2002 10:53 AM

 
To: [EMAIL PROTECTED]
cc: [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject:Re: Cheap and unique


> [EMAIL PROTECTED] wrote:
> >digest source might be able to locate the bits just by trying a lot of
> >them. I would expire them after a while just to prevent that from
> >happening by stating that if there is a 15 minute session, new random
> >bits are generated each five minutes.

I missed the start of this thread, but how about generating a new id (or
random bits) on every visit: on first connect the client is assigned a
session id; on subsequent connects, the previous id is verified and a new
id is generated and returned.  This makes it even harder to crack.

--
  Simon Oliver






Re: HTML::Entities chokes on XML::Parser strings

2002-05-07 Thread Paul Lindner

On Tue, May 07, 2002 at 11:13:43AM -0400, John Siracusa wrote:
> On 5/7/02 10:58 AM, Paul Lindner wrote:
> > The output from your example looks like UTF-8 data (Ã is a
> > commonly seen UTF-8 escape sequence).  XML::Parser converts all
> > incoming text into UTF-8.  You will need to convert it back to
> > iso-8859-1.
> > 
> > My favorite is Text::Iconv
> > 
> >use Text::Iconv;
> >$utf8tolatin1 = Text::Iconv->new("UTF-8", "ISO8859-1");
> > 
> >my $buffer_latin1 = $utf8tolatin1->convert($buffer);
> 
> So HTML::Entities only works with ISO8859-1 (or ASCII, presumably)?  What if
> I have actual UTF-8 data?  Won't conversion to ISO8859-1 in service of
> HTML::Entities result in data loss?

Yes, HTML::Entities is based on ISO8859-1 input only.  BTW, for better
performance in mod_perl consider using Apache::Util::escape_html()


 escape_html
   This routine replaces unsafe characters in $string
   with their entity representation.

my $esc = Apache::Util::escape_html($html);


Anyway, back to character entities..

Text::Iconv will fail if you try to convert unconvertable text, so at
least you can test for that condition (and adjust accordingly)

BasisTech sells a comprehensive unicode library called Rosette that
knows how to automatically convert to a target character set while
incorporating SGML entities for any character set.  Perhaps it's time
for an open implementation of that..

Also see http://rf.net/~james/perli18n.html for a perl i18n faq.




-- 
Paul Lindner[EMAIL PROTECTED]   | | | | |  |  |  |   |   |

mod_perl Developer's Cookbook   http://www.modperlcookbook.org/
 Human Rights Declaration   http://www.unhchr.ch/udhr/



convention on logging?

2002-05-07 Thread F . Xavier Noria

I am writing a web application that uses Apache modules and core classes
in a MVC style.  AFAICT using $r->log->debug() is the standard way to
print debug messages in Apache modules, but which would be the right way
to print debug messages in the core classes provided both types of
modules are going to run together?

-- fxn



Re: Fw: How do I determine end of request? (mod_perl 2.0)

2002-05-07 Thread Douglas Younger

Hello,
Thanks for the suggestion, but it doesn't seem to make any difference.

I tried setting:
ProxyIOBufferSize 32768
ProxyReceiveBufferSize 32768

in my httpd.conf, and it is still calling my handler several times per 
request...

I put in:
warn "Size: " . length($buffer) . "\n";

in my while ($filter->read) loop and get the following for a single page 
(page is ~11k):

Size: 1101
Size: 3109
Size: 987
Size: 4096
Size: 1697

(Before I increased my buffer size in the read it would break down the 
larger of the above into further chunks.)

I think the best way would be to somehow determine where the actual end of
the document is, so I can call $p->eof;. Because even if increasing the
various buffers worked, I don't want to make them insanely large, and I
could still end up having pages larger than the buffer, which would leave
me with problems again. I'd rather not use a solution like looking for
</html>, as I need to use this for .css and other non-html files. Also,
some of the proxied documents use SSI and may contain multiple instances
of </html>. (I tested it by checking for </html> and then calling
$p->eof;, and it does solve the problem, but as I explained this is not an
ideal solution.)
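The usual way to handle this in a mod_perl 2.0 output filter is to accumulate the chunks in the filter context and only parse once the end-of-stream flag is seen. A rough, untested sketch (the 1.99 filter API was still in flux; ctx() and seen_eos() follow the docs of the time, and rewrite_links() stands in for the HTML::Parser-based rewriting):

```perl
package My::FixLinks;
use strict;
use Apache::Filter ();                  # mod_perl 1.99-era namespace
use Apache::Const -compile => qw(OK);

sub handler {
    my $filter = shift;
    my $html = $filter->ctx || '';      # partial document from earlier calls

    while ($filter->read(my $buffer, 8192)) {
        $html .= $buffer;               # collect this invocation's chunks
    }

    if ($filter->seen_eos) {
        # End of the response: now it is safe to parse and rewrite
        $filter->print(rewrite_links($html));
    }
    else {
        $filter->ctx($html);            # stash until the next invocation
    }
    return Apache::OK;
}
1;
```

This buffers the whole response in memory, which is the trade-off for being able to parse tags that straddle chunk boundaries.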

At 11:34 PM 5/6/2002 +0200, pascal barbedor wrote:
 > hi
 >
 > you could maybe set the ProxyIOBufferSize
 > or ProxyReceiveBufferSize
 > in the front end server so that the response from the modperl server
 > would not be chunked but sent in one shot
 >
 > also static resources like gifs in server B documents could be retrieved
 > from server A only with an alias, not proxied to server B
 >
 >
 > pascal
 >
 >
 > - Original Message -
 > From: "Douglas Younger" <[EMAIL PROTECTED]>
 > To: <[EMAIL PROTECTED]>
 > Sent: Monday, May 06, 2002 10:26 PM
 > Subject: How do I determine end of request? (mod_perl 2.0)
 >
 >
 > > Hello,
 > >   I'm fairly new to using mod_perl. I've been able to find lots of
 > > resources dealing with mod_perl 1.x, but the documentation for 2.0 is
 > > rather sparse.
 > >
 > > I'm pretty sure what I need to do can only be handled by Apache 2.0 &
 > > thus I'm forced to use mod_perl 2.0... (well 1.99)
 > >
 > > I'm trying to proxy ServerB through ServerA... ok that's simple enough
 > > with mod_proxy. However, links, embedded images, etc in the proxied
 > > document end up broken if they are non-relative links (ie. start with
 > > a slash).
 > >
 > > Example: on ServerB is a document say: /sales/products.html
 > > in products.html it links to /images/logo.gif
 > > accessing /sales/products.html using ServerB everything is fine. But,
 > > if I want to proxy ServerB via ServerA... say
 > > ProxyPass /EXTERNAL http://ServerB
 > >
 > > If I goto http://ServerA/EXTERNAL/sales/products.html the embedded
 > > image /images/logo.gif is requested from ServerA.
 > >
 > > So to handle this I wanted to write a filter for ServerA to parse all
 > > pages served via Location /EXTERNAL and "fix" the links.
 > >
 > > I wrote a handler (see below) using HTML::Parser to extract the tags
 > > that would contain links and process them.
 > >
 > > It works great for the most part... however, it seems like instead of
 > > ServerA getting the entire output from ServerB, it gets it in chunks
 > > which get processed individually. This causes my handler to fail when
 > > a tag is split between 2 chunks.
 > >
 > > What I think needs to be done is to build up the document in a
 > > variable $html .= $buffer; and then call $p->parse($html) once the
 > > entire document has been received by ServerA (or maybe it's as simple
 > > as only calling $p->eof; at that point).
 > >
 > > Or is there a better way to do this? One problem I've found so far is
 > > I need to fix style sheets, but I can probably write a special handler
 > > for them once I get this problem fixed.
 > >
 > > Thanks!
 > >




Logging Perl errors to browser

2002-05-07 Thread Jeff

Folks,

How do I get my mod_perl handler's Perl errors logged to the browser
instead of to the Apache logs?

TIA
Jeff





Re: Logging Perl errors to browser

2002-05-07 Thread Geoffrey Young



Jeff wrote:

> Folks,
> 
> How do I get to log my mod_perl handler Perl errors to the browser
> instead of into the Apache logs?


see recipes 4.5 and (the more interesting but less robust) 16.6 in the cookbook

the code for each is here
   http://www.modperlcookbook.org/code/ch04/Cookbook/ErrorsToBrowser.pm
   http://www.modperlcookbook.org/code/ch16/Cookbook-DivertErrorLog-0.01.tar.gz

if you don't have the book you can see chapter 4 online at
   http://www.webreference.com/programming/perl/cookbook/

HTH

--Geoff







Re: convention on logging?

2002-05-07 Thread Perrin Harkins

F.Xavier Noria wrote:
> I am writing a web application that uses Apache modules and core classes
> in a MVC style.  AFAICT using $r->log->debug() is the standard way to
> print debug messages in Apache modules, but which would be the right way
> to print debug messages in the core classes provided both types of
> modules are going to run together?

You can use Apache->server->log().  If you want your modules to work 
outside of mod_perl, you can write a wrapper class that uses the server 
log when it sees you are in mod_perl and uses something else when you're 
not.  There are several fancy logging modules on CPAN.
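Such a wrapper can be very small. A sketch (the package name is invented; it keys off the MOD_PERL environment variable, which mod_perl sets, and falls back to STDERR elsewhere):

```perl
package My::Log;
use strict;

# Dispatch debug messages to the Apache server log under mod_perl,
# and to STDERR when running outside the server (tests, cron jobs).
sub debug {
    my (undef, $msg) = @_;
    if ($ENV{MOD_PERL}) {
        Apache->server->log->debug($msg);
    }
    else {
        print STDERR "[debug] $msg\n";
    }
}
1;
```

Core classes then call `My::Log->debug('loading config')` and work the same from a handler or from the command line.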

- Perrin