Problem with <Directory> in <Perl> sections

2001-10-29 Thread James Stalker

Hi,

I have hit a problem with the latest couple of versions of mod_perl, and I wondered if 
anyone might know a solution.

We're using Apache 1.3.22 with mod_perl 1.26, and there appears to be a problem with 
the Directory directive in <Perl> sections...
e.g.

$Directory{$DocumentRoot} = {
    Options       => '-Indexes FollowSymLinks',
    AllowOverride => 'None',
    SetHandler    => 'perl-script',
    PerlHandler   => 'Apache::EnsEMBL::SendDecPage',
    order         => 'deny,allow',
    allow         => 'from all',
};

does not work in a <Perl> section, but it works fine outside:

<Directory /mysql/ensembl/www/sanger-test/htdocs>
    Options         -Indexes FollowSymLinks
    AllowOverride   None
    SetHandler      perl-script
    PerlHandler     Apache::EnsEMBL::SendDecPage
    order           deny,allow
    allow           from all
</Directory>

/perl-status reports that the former looks as I would expect:

%Directory = (
  '/mysql/ensembl/www/server/htdocs' => {
    'Deny' => 'from all',
    'AllowOverride' => 'None',
    'Order' => 'deny,allow',
    'PerlHandler' => 'Apache::EnsEMBL::SendDecPage',
    'Options' => '-Indexes FollowSymLinks',
    'SetHandler' => 'perl-script'
  },
);

There are no errors reported, but the handler simply doesn't kick in.  We have been 
using this config for some months now, and it worked fine in 1.21.  It had stopped 
working by version 1.25.

Anybody have any ideas?

Regards,

James

-- 
James Stalker
Senior Web Developer - Project Ensembl - http://www.ensembl.org




RE: [OT] pdf creation

2001-10-29 Thread Matt Sergeant

 -Original Message-
 From: Lon Koenig [mailto:[EMAIL PROTECTED]]
 
 I apologize for the OT post, but the members of this list seem to be an 
 authoritative resource for all web/perl solutions.
 
 I'm currently bidding a project, and the client's all in favor of a 
 mod_perl solution. Phase 2 of the project requires on-the-fly pdf 
 creation.
 
 I've done page layout in other languages, so I'm not too concerned 
 about coding the thing. My question is:
 
 Does anyone have success/horror stories generating pdf files 
 under mod_perl?
 Recommendations?

About 6 months ago I had a good look at the various modules available for
PDF creation (for on-the-fly conversion of XML to PDF using AxKit/mod_perl),
and found that they all suffered from C-like interfaces (i.e. it was
$pdf->start_page, $pdf->end_page, instead of that stuff being automatic),
interface complexity (i.e. changing font and colour mirrored how the
low-level PDF format worked), and very poor support for incorporating
images. Anyway, so I wrote my own, based on pdflib (www.pdflib.com), called
PDFLib.pm. The base pdflib comes with its own interface called
pdflib_pl.pm, but again it's a C-like interface. So PDFLib.pm is more OO.
Well, I'm biased, but I think it works pretty well (though it's lacking
support for graphics primitives right now), and I use it in AxKit's AxPoint
package to create all my slideshows (see
http://217.158.50.178/docs/presentations/tpc2001/).
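
A rough idea of the calling style, from memory (so don't treat the exact
method names as gospel; check the PDFLib.pm docs before copying this):

    use PDFLib;

    # open a document, add a page of text, and write it out
    my $pdf = PDFLib->new(filename => "hello.pdf");
    $pdf->start_page;
    $pdf->set_font(face => "Helvetica", size => 12);
    $pdf->print("Hello from mod_perl");
    $pdf->finish;

With pdflib_pl.pm you would instead be juggling an opaque handle and calling
PDF_begin_page()/PDF_end_page() and friends yourself.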

One bonus about PDFLib.pm is that the underlying PDF stuff is all done in C,
so it's likely a bit faster than all the other (pure perl) options.

I've heard good stuff about PDF::Create though (but I think that's one of
the ones that didn't support images when I was looking).

Matt.

_
This message has been checked for all known viruses by Star Internet
delivered through the MessageLabs Virus Scanning Service. For further
information visit http://www.star.net.uk/stats.asp or alternatively call
Star Internet for details on the Virus Scanning Service.



RE: Apache::Compress - any caveats?

2001-10-29 Thread Matt Sergeant

 -Original Message-
 From: Ged Haywood [mailto:[EMAIL PROTECTED]]
 
 Hi there,
 
 On Wed, 24 Oct 2001, Mark Maunder wrote:
 
  I noticed that there are very few sites out there using
  Content-Encoding: gzip - in fact yahoo was the only one I could
  find. Is there a reason for this
 
 I think because many browsers claim to accept gzip encoding and then
 fail to cope with it.

Such as? I've been delivering gzip from axkit.org and take23.org for nearly
a year now, and had only one complaint (a Netscape 3 user on Solaris who
didn't have a /usr/bin/gunzip installed), so I'm curious. (Admittedly I
don't exactly have a large number of users).

_
This message has been checked for all known viruses by Star Internet
delivered through the MessageLabs Virus Scanning Service. For further
information visit http://www.star.net.uk/stats.asp or alternatively call
Star Internet for details on the Virus Scanning Service.



[OT] FW: OWASP Update

2001-10-29 Thread Matt Sergeant

Not sure if this should really be considered off topic, as it should be
required reading. Anyway, go to owasp *now*, and read all the COV's you can
get through. These should be required knowledge for any web developer, and
the site seems to have detailed the various possible vulnerabilities really
well.

http://www.owasp.org/projects/cov/index.htm

(and no, I'm not affiliated in any way - just excited to see all this stuff
explicitly detailed so succinctly).

-Original Message-
From: Mark Curphey [mailto:[EMAIL PROTECTED]]
Sent: 29 October 2001 07:40
To: [EMAIL PROTECTED]
Subject: OWASP Update


Prepare for the avalanche !

OWASP folks have been quietly authoring content for the OWASP
(http://www.owasp.org) Classes of Vulnerabilities (COV) project, and we are
pleased to say we are about to start sending DRAFT content to the list for
comment. The first 15 will be sent out tonight and others will follow this
week and next.

The Classes of Vulnerabilities (COV) project is a basic reference for much
of the work at OWASP. Its aim is to define classes of vulnerabilities that
web applications can be vulnerable to, and the attack components (AC) that
exploit these vulnerabilities. An attack on a system may be (and typically
is) composed of several components spanning multiple classes of
vulnerabilities. The COV will not catalogue individual vulnerabilities like
Nimda or ISAPI overflows. Instead it describes generic attacks on web
applications and services.

It does offer a clear definition of each attack component and a common,
unambiguous naming scheme to avoid duplication or misinterpretation through
semantics. It enables security professionals to unambiguously talk the same
language.
It does offer the building blocks to describe complicated chained attacks
as sequences of the attack components described, together with the UML
models that will be provided. UML sequence diagrams will be added after
content is finalized.

Each COV has a description and a list of associated AC's.

Each attack component will have

A Name
A Description
An Analysis
A UML Description
Link to How to Test for this Problem
Typical Countermeasures

Example
Take for example the security issues associated with the Phone Book Script
(phf). We use this example as it's well known, one of the simplest
applications (a single CGI) and well documented. The attack is usually
described by an example URL:
http://www.victim.com/cgi-bin/phf?Qalias=x%0a/bin/cat%20/etc/passwd
The script itself uses the escape_shell_cmd() function, which does not
adequately check input containing the newline character (\n). This is
described in OWASP-IV-MC-1. In practice an attacker would first determine
whether the script itself exists. This would be done by using file and
application enumeration as described in OWASP-FAE-1. If successful, an
attacker could use the result to chain one of several other attacks (the
payload), such as executing direct operating system commands
(OWASP-IV-DOSCI-1) or direct database calls (OWASP-IV-DSQLI-1).
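
To make the newline issue concrete, here is a small illustrative Perl
fragment (hypothetical code, not the real phf or escape_shell_cmd() source)
showing how a filter that strips the usual shell metacharacters but forgets
"\n" still lets a second command through:

    # Hypothetical illustration only; not the actual phf code.
    sub naive_escape {
        my $input = shift;
        $input =~ s/[;&|<>`]//g;    # common metacharacters removed, but not "\n"
        return $input;
    }

    my $alias = "x\n/bin/cat /etc/passwd";    # attacker-supplied Qalias value
    my $cmd   = "ph -m alias=" . naive_escape($alias);
    print $cmd, "\n";    # the embedded newline ends the intended command, so a
                         # shell would run "/bin/cat /etc/passwd" as a second one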

Each draft will be sent to the list with a subject heading (the OWASP name)
and a link to the web site. We had hoped to have our navigation working by
this time and each draft linked to our new style sheet, but we haven't had
time. That will be done by the end of the week.

This is an open community effort, and so we are looking for all positive
feedback that will improve the write-ups. These are first DRAFTS and we know
the English can be improved. We are most concerned now with the technical
content. Just reply to the list with your comments about the relevant
section and the feedback / discussion will be noted and, if appropriate,
incorporated. The first 14 or so DRAFTS will go out tonight and will be
finalized next Sunday night (12pm Pacific).

It seems to me that the list of issues identified as the original classes of
vulnerabilities is very black-box oriented, and we would welcome more debate
about other classes we should include, and of course people to help author
the content. Candidates are run-time issues like open APIs, SUID
programming, etc.

Kind regards,

Mark





_
This message has been checked for all known viruses by Star Internet
delivered through the MessageLabs Virus Scanning Service. For further
information visit http://www.star.net.uk/stats.asp or alternatively call
Star Internet for details on the Virus Scanning Service.

_
This message has been checked for all known viruses by Star Internet
delivered through the MessageLabs Virus Scanning Service. For further
information visit http://www.star.net.uk/stats.asp or alternatively call
Star Internet for details on the Virus Scanning Service.



Re: [OT] FW: OWASP Update

2001-10-29 Thread Jon Molin

Is it only me that gets 404 Not Found?
Both on http://www.owasp.org/projects/cov/index.htm and
http://www.owasp.org

is this the beginning of a new word? the site has been modperled :)

/jon



Matt Sergeant wrote:
 
 Not sure if this should really be considered off topic, as it should be
 required reading. Anyway, go to owasp *now*, and read all the COV's you can
 get through. These should be required knowledge for any web developer, and
 the site seems to have detailed the various possible vulnerabilities really
 well.
 
 http://www.owasp.org/projects/cov/index.htm
 
 (and no, I'm not affiliated in any way - just excited to see all this stuff
 explicitly detailed so succinctly).

 snip



Re: [OT] FW: OWASP Update

2001-10-29 Thread James Stalker

On Mon, Oct 29, 2001 at 12:07:09PM +0100, Jon Molin wrote:
 only me that get 404 Not Found ? 
 both on http://www.owasp.org/projects/cov/index.htm and
 http://www.owasp.org

No, the site has some bad javascript and it tries to load 
http://www.owasp.org/Templates/_js/default.js which gives the 404.  Try either turning 
off javascript in your browser, or using a different, more tolerant, browser.

James

 is this the beginning of a new word? the site has been modperled :)
 
 /jon
 
 
 
 Matt Sergeant wrote:
  
  Not sure if this should really be considered off topic, as it should be
  required reading. Anyway, go to owasp *now*, and read all the COV's you can
  get through. These should be required knowledge for any web developer, and
  the site seems to have detailed the various possible vulnerabilities really
  well.
  
  http://www.owasp.org/projects/cov/index.htm
  
  (and no, I'm not affiliated in any way - just excited to see all this stuff
  explicitly detailed so succinctly).
 
  snip

-- 
James Stalker
Senior Web Developer - Project Ensembl - http://www.ensembl.org



Re: [OT] pdf creation

2001-10-29 Thread Dave Baker

 Does anyone have success/horror stories generating pdf files under 
 mod_perl?
 Recommendations?


No horror stories, except trying to go about it the wrong way a few times
and ending up with multi-hundred-megabyte TIFF files as intermediate steps.

I ended up using htmldoc (http://www.easysw.com), which does HTML-to-PDF in a
breeze (as well as HTML-to-PS).  Handy if you want to pdfify something that
you've already rendered into HTML for online display.
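
If it helps, driving it from Perl is just a matter of shelling out.  A
minimal sketch (the --webpage and -f flags are from memory, so check the
htmldoc docs for your version, and the paths are made up):

    # convert an already-rendered HTML file into a PDF with htmldoc
    my $html = "/tmp/report.html";
    my $pdf  = "/tmp/report.pdf";
    system("htmldoc", "--webpage", "-f", $pdf, $html) == 0
        or die "htmldoc failed: $?";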

Dave

-- 

- Dave Baker  :  [EMAIL PROTECTED]  :  [EMAIL PROTECTED]  :  http://dsb3.com/ -
GnuPG:  1024D/D7BCA55D / 09CD D148 57DE 711E 6708  B772 0DD4 51D5 D7BC A55D




Re: New mod_perl hacker wannabe . . .

2001-10-29 Thread Louis LeBlanc

On 10/28/01 08:29 PM, Jeremy Rusnak sat at the `puter and typed:
  Just today, I finished a new module - my first from scratch - for
  handling 404 errors.  I know Apache::404 isn't a real imaginative
  name, but it works.
 
 I took a look at this, it's a good idea for smaller sites.  I would
 suggest that you figure out a way to put a rate limit on the number
 of E-mails that are sent warning the admin, though.  On my site
 we receive over a million page views a day...When something gets
 broken I don't want to have 10,000 E-mails in my inbox.
 
 In addition, there are many black hats out there who might be
 tempted to use this as an exploit.  I once had a script that told the
 user "Error blah blah... An E-mail has been sent to our support
 staff to notify them of the problem."  Of course some people
 decided to call the script thousands of times and fill up the
 hard drive on our mail server.
 
 Of course, not all sites are going to have issues like this...But
 I think if you're going to be releasing modules it might be a
 good idea to include some notes for sites that MIGHT.

Very good point.  The CodeRed and Nimda modules have a cache mechanism
(File::Cache, IIRC) that prevents repeat reports within a 24 hour
period.  Definitely a good idea.

This would also help with the likes of the Nimda and CodeRed hits, so
you would only get one per day of any of the unique URIs generating
reports.
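
Something along these lines is what I have in mind; an untested sketch, and
I'm going from memory on the File::Cache interface, so treat the constructor
arguments and the notify_admin() helper as placeholders:

    use File::Cache;

    # suppress repeat reports of the same URI for 24 hours
    my $cache = File::Cache->new({
        namespace  => 'apache404',
        expires_in => 24 * 60 * 60,
    });

    sub maybe_notify {
        my ($r, $uri) = @_;
        return if $cache->get($uri);    # already reported within the window
        $cache->set($uri, time());
        notify_admin($r, $uri);         # whatever mail routine the module uses
    }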

Now I DO see the value of opening discussion *prior* to just throwing
a module out! :)

Look for V1.01 in the next week :)

Thanks!
Lou
-- 
Louis LeBlanc   [EMAIL PROTECTED]
Fully Funded Hobbyist, KeySlapper Extrordinaire :)
http://www.keyslapper.org ԿԬ

Second Law of Final Exams:
  In your toughest final -- for the first time all year -- the most
  distractingly attractive student in the class will sit next to you.




[OT] Nimda etc (was Re: New mod_perl hacker wannabe . . .)

2001-10-29 Thread Nick Tonkin


Er, you might look at http://www.tonkinresolutions.com/MSIISProbes.pm.html
...

Always a good idea to search the mod_perl list archives, as well as put
out ideas in the present tense :)

Nick



~~~
Nick Tonkin

On Mon, 29 Oct 2001, Louis LeBlanc wrote:

 On 10/28/01 08:29 PM, Jeremy Rusnak sat at the `puter and typed:
   Just today, I finished a new module - my first from scratch - for
   handling 404 errors.  I know Apache::404 isn't a real imaginative
   name, but it works.
  
  I took a look at this, it's a good idea for smaller sites.  I would
  suggest that you figure out a way to put a rate limit on the number
  of E-mails that are sent warning the admin, though.  On my site
  we receive over a million page views a day...When something gets
  broken I don't want to have 10,000 E-mails in my inbox.
  
  In addition, there are many black hats out there who might be
  tempted to use this an exploit.  I once had a script that told the
  user Error blah blah...An E-mail has been sent to our support
  staff to notify them of the problem.  Of course some people
  decided to call the script thousands of times and fill up the
  hard drive on our mail server.
  
  Of course, not all sites are going to have issues like this...But
  I think if you're going to be releasing modules it might be a
  good idea to include some notes for sites that MIGHT.
 
 Very good point.  The CodeRed and Nimda modules have a cache mechanism
 (File::Cache, IIRC) that prevents repeat reports within a 24 hour
 period.  Definitely a good idea.
 
 This would also help with the likes of the Nimda and CodeRed hits, so
 you would only get one per day of any of the unique URIs generating
 reports.
 
 Now I DO see the value of opening discussion *prior* to just throwing
 a module out! :)
 
 Look for V1.01 in the next week :)
 
 Thanks!
 Lou
 -- 
 Louis LeBlanc   [EMAIL PROTECTED]
 Fully Funded Hobbyist, KeySlapper Extrordinaire :)
 http://www.keyslapper.org ԿԬ
 
 Second Law of Final Exams:
   In your toughest final -- for the first time all year -- the most
   distractingly attractive student in the class will sit next to you.
 
 




subroutines

2001-10-29 Thread Arshavir Grigorian


Hello All,

This might be a very obvious question to many of you, but for me it's
still somewhat unclear.

I am running Apache 1.3.19 mod_perl/1.24_01 on a RedHat 7.1 box (PC).

I have 2 versions of code running under 2 different virtual hosts. As
you probably guessed, my subroutine definitions are getting overridden,
and subroutines from a package in one tree get called by programs in the
other tree.
The mod_perl documentation suggests that by using the Apache::PerlVINC
module it is possible to load the appropriate module based upon the name
of the virtual host used.
The 2 questions I have are:

1) As my modules are not used as a PerlHandler (as is the case with the
example in the online documentation, 'PerlHandler Apache::Status'), I am
not quite sure how to configure Apache to simply reload a module when a
request is made against a specific virtual host.

Previously, I used the PerlScript directive to load the modules:

httpd.conf
----------
SetHandler perl-script
PerlHandler Apache::Registry
Options +ExecCGI -Indexes

PerlScript /path_to_cgi/main.pl
----------

main.pl
----------
use lib ('/path_to_cgi/main.pl');
use Module1;
use Module2;
...
sub subroutine1 {}
sub subroutine2 {}
----------

How can I configure it so that when a request is made against

- http://qa/cgi-bin/main.plex => /path_to_qa/main.plex is loaded (with
all underlying modules)
- http://devel/cgi-bin/main.plex => /path_to_devel/main.plex is loaded
(with all underlying modules)

2) And if it is possible to do this, are the modules cached via mod_perl
(Apache::Registry) when several consecutive requests are made against
the same vhost (say http://qa/)?

Thanks in advance!

Arshavir







RE: Apache::Compress - any caveats?

2001-10-29 Thread Ged Haywood

Hi Matt,

On Mon, 29 Oct 2001, Matt Sergeant wrote:

  -Original Message-
  From: Ged Haywood [mailto:[EMAIL PROTECTED]]
  
  On Wed, 24 Oct 2001, Mark Maunder wrote:
   I noticed that there are very few sites out there using
   Content-Encoding: gzip - in fact yahoo was the only one I could find.
  
  I think because many browsers claim to accept gzip encoding and then
  fail to cope with it.
 
 Such as?

It's second hand information - Josh had some trouble last year when we
were working on the same project, and I think he eventually gave up
with gzip because of it.  He doesn't read the mod_perl list all the
time (being a busy chap:) so I've copied him in on this and maybe
he'll give us the benefit of his experience.  Maybe he'll tell you I'm
talking through my hat too...

73,
Ged.





Re: Neo-Classical Transaction Processing (was Re: Excellent article...)

2001-10-29 Thread Perrin Harkins

 The mod_perl servers are the work horses, just like the custom
 servers.  In a classical OLTP system, the customer servers are
 stateless, that is, if a server goes down, the TM/mod_proxy server
 routes around it.  (The TM rollsback any transactions and restarts the
 entire request, which is interesting but irrelevant for this
 discussion.)  If the work servers are fully loaded, you simply add
 more hardware.  If all the servers are stateless, the system scales
 linearly, i.e. the number of servers is directly proportional to the
 number of users that can be served.

The mod_perl servers in the eToys setup were essentially stateless.  They
had some local caching of user-specific data which made it desirable to go
back to the same one if possible for the same user, but it would still work
if the user switched to another server, since all state was stored in the
central database.

The trouble here should be obvious: sooner or later it becomes hard to scale
the database.  You can cache the read-only data, but the read/write data
isn't so simple.  Theoretically, the big players like Oracle and DB2 offer
clustering solutions to deal with this, but they don't seem to get used very
often.  Other sites find ways to divide their traffic up (users 1 - n go to
this database, n - m go to that one, etc.).  However, you can usually scale
up enough just by getting a bigger box to run your database on until you
reach the realm of Yahoo and Amazon, so this doesn't become an issue for
most sites.

 If the whole multi-threaded server ever has to wait on
 a single shared resource (e.g. the garbage collector), all
 simultaneous requests in progress lose.  This doesn't happen if the
 work servers are single threaded, i.e. shared nothing.  You can
 easily model the resources required to process a request and no
 request has to wait on another, except to get CPU.

But how can you actually make a shared nothing system for a commerce web
site?  They may not be sharing local memory, but you'll need read/write
access to the same data, which means shared locking and waiting somewhere
along the line.

 The front-end servers are different.  They do no work.  Their job is
 switching requests/responses.  Some of the requests are going directly
 to the OS (e.g. get an icon) and others are going to the work servers.
 The front-end servers wait most of the time.  Here I buy into the
 the asynchronous I/O model of serving multiple requests and not the
 lightweight process model.

I do feel that servers built on non-blocking I/O would have been much more
efficient on the eToys front end.  I've seen how well thttpd can do.  But
since these boxes already scaled well beyond our needs with basic
Apache/mod_proxy, we didn't bother to look for something more complicated.
If I ran into problems with this now, I would probably look at Red Hat's Tux
server.

- Perrin




Re: Apache::Compress - any caveats?

2001-10-29 Thread Joshua Chamas

Ged Haywood wrote:
 
   I think because many browsers claim to accept gzip encoding and then
   fail to cope with it.
 
  Such as?
 
 It's second hand information - Josh had some trouble last year when we
 were working on the same project, and I think he eventually gave up
 with gzip because of it.  He doesn't read the mod_perl list all the
 time (being a busy chap:) so I've copied him in on this and maybe
 he'll give us the benefit of his experience.  Maybe he'll tell you I'm
 talking through my hat too...
 

There was one odd browser that didn't seem to deal with gzip encoding
for type text/html; it was an IE, not sure whether 4.x or 5.x, and when
set up with a proxy but not really using a proxy, it would render garbage
to the screen.  This was well over a year ago at this point when it
was seen by QA.  The compression technique was the same as that used by
Apache::Compress, where all of the data is compressed at once.
Apparently, if one tries to compress in chunks instead, that will
also result in problems with IE browsers.

Note that it wasn't I that gave up on compression for the project,
but a lack of management understanding the value of squeezing 40K
of HTML down to 5K !!  I would compress text/html output to 
netscape browsers fearlessly, and approach IE browsers more 
carefully.

--Josh
_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks Founder   Huntington Beach, CA  USA 
http://www.nodeworks.com1-714-625-4051



Re: Apache::Compress - any caveats?

2001-10-29 Thread Mark Maunder

 Ged Haywood wrote:

 There was one odd browser that didn't seem to deal with gzip encoding
 for type text/html, it was an IE not sure 4.x or 5.x, and when set
 with a proxy but not really using a proxy, it would render garbage
 to the screen.  This was well over a year ago at this point when this
 was seen by QA.  The compression technique was the same used as
 Apache::Compress, where all of the data is compressed at once.
 Apparently, if one tries to compress in chunks instead, that will
 also result in problems with IE browsers.

We've been testing with Opera, Konqueror, NS 4.7 and 6, and IE 5, 5.5 and 6,
plus AOL and Lynx, and haven't had any probs (haven't tested IE below version
5 though *gulp*).  The only real problem was NS 4.7, which freaked out when
you compressed the style sheet and the HTML (it wouldn't load the style
sheet), so we're just compressing text/html.

 Note that it wasn't I that gave up on compression for the project,
 but a lack of management understanding the value of squeezing 40K
 of HTML down to 5K !!  I would compress text/html output to
 netscape browsers fearlessly, and approach IE browsers more
 carefully.

I differ in that NS instils fear and IE seems to cause fewer migraines.
Agree on your point about management ignorance though.  Isn't bandwidth
e-commerce's biggest expense?
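
For what it's worth, the gist of what we do is roughly the following.  This
is a simplified sketch rather than our production handler, and it uses
Compress::Zlib directly instead of Apache::Compress:

    use Compress::Zlib ();

    sub should_gzip {
        my ($r) = @_;
        my $accept = $r->header_in('Accept-Encoding') || '';
        return 0 unless $accept =~ /\bgzip\b/;              # client must ask for it
        return 0 unless $r->content_type =~ m{^text/html};  # only compress HTML
        return 1;
    }

    # later, once $content holds the complete response body:
    # if (should_gzip($r)) {
    #     $r->content_encoding('gzip');
    #     $content = Compress::Zlib::memGzip($content);
    # }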





Re: ApacheBench says my site is unstable?

2001-10-29 Thread Joshua Chamas

Philip Mak wrote:

 Time taken for tests:   21.109 seconds
 Complete requests:  1000
 Failed requests:22
(Connect: 0, Length: 22, Exceptions: 0)
 Total transferred:  196578 bytes
 HTML transferred:   12714 bytes
 Requests per second:47.37
 Transfer rate:  9.31 kb/s received


If ApacheBench complains about length problems, it means
that the length of subsequent requests differs from the 
output length of the first request, so dynamic content usually
screws up ab's response in this way.

--Josh
_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks Founder   Huntington Beach, CA  USA 
http://www.nodeworks.com1-714-625-4051



Re: ApacheBench says my site is unstable?

2001-10-29 Thread Philip Mak

On Mon, 29 Oct 2001, Joshua Chamas wrote:

  Complete requests:  1000
  Failed requests:22
 (Connect: 0, Length: 22, Exceptions: 0)

 If ApacheBench complains about length problems, it means
 that the length of subsequent requests differs from the
 output length of the first request, so dynamic content usually
 screws up ab's response in this way.

I thought about that, but the test script just prints "Hello world" every
time (static length).  The failed requests got 0 bytes, according to the
access_log.

I haven't figured out this problem yet, but I'm hoping it's a problem with
ab (after all, it really shouldn't just crash with "Broken pipe" like
that...).




New user help

2001-10-29 Thread Rudi



Hi,
I'd like to join the mod_perl / apache community.
I'm having install problems that I've been trying to solve for 2 days with
no luck, so I'd like to bother you folks with a beginner question.
I'm using Debian 2.2, Apache 1.3.14 and mod_perl 1.24_1.
I've tried several times and this is as close as I can get:
a) unzip both Apache and mod_perl.
b) mod_perl first: perl Makefile.PL EVERYTHING=1, make, make install.
c) Now apache:
./configure \
    --prefix=/usr/local/apache \
    --enable-module=rewrite --enable-shared=rewrite \
    --activate-module=src/modules/perl/libperl.a --enable-shared=perl

Everything compiles OK.
However, my httpd.conf file now has "LoadModule perl_module
libexec/libperl.so", but there is no libperl.so on my hard drive.
I've run updatedb and searched, but there is no libperl.so.
As a result apache will not start (error about libperl.so).

At work I use ColdFusion, PHP and some CGI with perl. I'm looking forward
to getting into mod_perl heaps, as perl is a tool I can use for both web
development and system admin.
Thanks in advance.
Kind regards, Rudi.


Re: [OT] pdf creation

2001-10-29 Thread Mike808

Dave Baker wrote:
 I ended up using htmldoc (http://www.easysw.com) which does HTML-to-PDF in a
 breeze (as well as HTML-to-PS).

So does HTML2PS, which is also GPL'd, and written in 100% Perl. Ghostscript or
the Acrobat reader can do the PS2PDF output.
See http://www.tdb.uu.se/~jan/html2ps.html

Mike808/
-- 
perl -le $_='7284254074:0930970:H4012816';tr[0-][ BOPEN!SMUT];print



[OT] P3P policies and IE 6 (something to be aware of)

2001-10-29 Thread Mark Maunder

Just thought I'd share a problem I've found with IE 6 and sites (like
mine) that insist on cookie support.

If you use cookies on your site and you send a customer an email
containing a link to your site: if the customer's email address is at a
web-based mail service like Hotmail, IE 6's default behaviour is to disable
cookies when the customer clicks on the link and your site is opened in
frames.  This is because IE 6 considers your site to be a third party (the
frames cause this), and unless you have a compact P3P policy set up which it
approves of, it disables cookies.  A compact P3P policy is just an HTTP
header containing an abbreviated version of a full P3P policy, which is an
XML document.  Here's how you set up a compact P3P policy under mod_perl:

# This policy will make IE6 accept your cookies as a third party, but you
# should generate your own policy using one of the apps at the W3C site.
my $p3p_compact_policy =
    "CP=\"ALL DSP COR CURa ADMa DEVa TAIa PSAa PSDa IVAa IVDa CONa TELa OUR STP UNI NAV STA PRE\"";
$r->err_header_out(P3P => $p3p_compact_policy);
$r->header_out(P3P => $p3p_compact_policy);

Check out http://www.w3.org/P3P/ for the full info on P3P.
Check out
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpriv/html/ie6privacyfeature.asp

for M$ info on IE6 and cookies/privacy

Apologies for the OT post, but I'm just hoping I'll save someone else
the same trouble I just went through.

~mark




Re: New user help

2001-10-29 Thread Stephen Reppucci


Build apache first, then build mod_perl. The mod_perl install
modifies the apache tree (it asks you for a path to the apache tree
to modify, but defaults to ../apache-<latest version>).

If you're new to mod_perl, you'll want to head on over to the guide
(http://perl.apache.org/guide) for Stas' great descriptions of the
build procedure and lots of other subjects.

Also, I find it easier to build apache statically when doing a
mod_perl build.  (Others are much bolder than me... ;^)
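
For reference, the static recipe I follow looks roughly like this (from
memory; the guide has the authoritative version, and the directory names
will depend on which tarballs you unpacked):

    # unpack apache and mod_perl side by side, then:
    cd mod_perl-1.24_01
    perl Makefile.PL APACHE_SRC=../apache_1.3.14/src \
        DO_HTTPD=1 USE_APACI=1 EVERYTHING=1
    make
    make install    # builds and installs an httpd with mod_perl linked statically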

Good luck,
Steve Reppucci

On Tue, 30 Oct 2001, Rudi wrote:

 Hi,
 I'd like to join the mod_perl / apache community.
 I'm having install problems that I've been trying to solve for 2 days with no luck. 
So I'd like to bother you folks with a beginner question.
 I'm using Debian 2.2, Apache 1.3.14 and mod_perl 1.24_1.
 I've tried several times and this is as close as I can get.
 a) unzip both Apache and mod_perl.
 b) mod_perl first: perl Makefile.PL EVERYTHING=1, make, make install.
 c) Now apache :
 ./configure
 --prefix=/usr/local/apache
 --enable-module=rewrite --enable-shared=rewrite
 --activate-module=src/modules/perl/libperl.a --enable-shared=perl

 Everything compiles OK.
 However, my httpd.conf file now has LoadModule perl_module libexec/libperl.so
 But there is no libperl.so on my hard drive.
 I've run updatedb and searched but there is no libperl.so.
 As a result apache will not start (error about libperl.so).

 At work I use Coldfusion,PHP and some CGI with perl. I'm looking forward to getting 
into mod_perl heaps as perl is a tool
 I can use for both web development and system admin.
 Thanks in advance.
 Kind regards Rudi.


-- 
Steve Reppucci   [EMAIL PROTECTED] |
Logical Choice Software  http://logsoft.com/ |
=-=-=-=-=-=-=-=-=-=-  My God!  What have I done?  -=-=-=-=-=-=-=-=-=-=




Re: Neo-Classical Transaction Processing

2001-10-29 Thread Rob Nagler

Perrin Harkins writes:
 The trouble here should be obvious: sooner or later it becomes hard to scale
 the database. You can cache the read-only data, but the read/write data
 isn't so simple.

Good point.  Fortunately, the problem isn't new.  

 Theoretically, the big players like Oracle and DB2 offer clustering
 solutions to deal with this, but they don't seem to get used very
 often.

Oracle was built on an SMP assumption.  They added clustering later.
It doesn't scale well, which is probably why you haven't heard of
people using their parallel server solutions.  I don't know much about
DB2, but I'm pretty sure it assumes shared memory.  Tandem's Non-Stop
SQL is a shared nothing architecture.  It scales well, but isn't cheap
to walk in the door.

 Other sites find ways to divide their traffic up (users 1 - n go to
 this database, n - m go to that one, etc.)

Partitioning is a great way to get scalability, if you can do it.

 However, you can usually scale up enough just by getting a bigger
 box to run your database on until you reach the realm of
 Yahoo and Amazon, so this doesn't become an issue for most sites.

I agree.  This is why I think Apache/mod_perl is a great solution for
the majority of web apps.  The scaling issues supposedly being solved
by J2EE don't exist.

On another note, one of the ways to make sure your database scales
better is to keep the database as simple as possible.  I've seen a lot
of solutions which rely on stored procedures to get performance.
All this does is make the database slower and more of a bottleneck.

 But how can you actually make a shared nothing system for a commerce web
 site?  They may not be sharing local memory, but you'll need read/write
 access to the same data, which means shared locking and waiting somewhere
 along the line.

I meant shared nothing in the sense of multiprocessor architectures.
SMP (symmetric multiprocessing) relies on shared memory.  This is the
J2EE/E10K model.  Shared nothing is the Neo-Classical model.  Really
these are NUMAs (non-uniform memory architecture), because most
servers are SMPs.  Here's a classic from Stonebraker on the subject:

http://db.cs.berkeley.edu/papers/hpts85-nothing.pdf

DeWitt has a lot of papers on parallelism and distributed db design:
http://www.cs.wisc.edu/~dewitt/

Cheers,
Rob



RE: Apache::Compress - any caveats?

2001-10-29 Thread Paul G. Weiss

My bad experience was with Netscape 4.7.  The problem was if the
*first* compressed thing it saw was *not* html, e.g. if it was
Javascript when the corresponding html file was not compressed.

Once it saw compressed html, though, it could then reliably
uncompress Javascript as long as you kept the browser open.

-P

-Original Message-
From: Mark Maunder [mailto:[EMAIL PROTECTED]]
Sent: Monday, October 29, 2001 2:20 PM
To: Joshua Chamas
Cc: Ged Haywood; Matt Sergeant; [EMAIL PROTECTED]
Subject: Re: Apache::Compress - any caveats?


 Ged Haywood wrote:

 There was one odd browser that didn't seem to deal with gzip encoding
 for type text/html, it was an IE not sure 4.x or 5.x, and when set
 with a proxy but not really using a proxy, it would render garbage
 to the screen.  This was well over a year ago at this point when this
 was seen by QA.  The compression technique was the same used as
 Apache::Compress, where all of the data is compressed at once.
 Apparently, if one tries to compress in chunks instead, that will
 also result in problems with IE browsers.

We've been testing with Opera, Konqueror, NS 4.7 and 6 and IE 5, 5.5 and 6,
AOL and Lynx and haven't had any probs. (haven't tested IE below version 5
though *gulp*) The only real problem was NS 4.7 which freaked out when you
compressed the style sheet and the HTML (it wouldn't load the style sheet)
so
we're just compressing text/html.

 Note that it wasn't I that gave up on compression for the project,
 but a lack of management understanding the value of squeezing 40K
 of HTML down to 5K !!  I would compress text/html output to
 netscape browsers fearlessly, and approach IE browsers more
 carefully.

I differ in that NS instils fear and IE seems to cause fewer migraines. Agree
on
your point about management ignorance though. Isn't bandwidth e-commerce's
biggest expense?




RE: [OT] pdf creation

2001-10-29 Thread Oleg Bartunov

Matt,

do you have a plan to release PDFLib.pm ?

Oleg
On Mon, 29 Oct 2001, Matt Sergeant wrote:

  -Original Message-
  From: Lon Koenig [mailto:[EMAIL PROTECTED]]
 
  I apologize for the OT post, but the members of this list seem to be
  authoritive resource for all web/perl solutions.
 
  I'm currently bidding a project, and the client's all in favor of a
  mod_perl solution. Phase 2 of the project requires on-the-fly pdf
  creation.
 
  I've done page layout in other languages, so I'm not too concerned
  about coding the thing. My question is:
 
  Does anyone have success/horror stories generating pdf files
  under mod_perl?
  Recommendations?

 About 6 months ago I had a good look at the various modules available for
 PDF creation (for on the fly conversion of XML to PDF using AxKit/mod_perl),
 and found that they all suffered from C-like interfaces (i.e. it was
 $pdf->start_page, $pdf->end_page, instead of that stuff being automatic),
 interface complexity (i.e. changing font and colour mirrored how the
 low-level PDF format worked), and very poor support for incorporating
 images. Anyway, so I wrote my own, based on pdflib (www.pdflib.com), called
 PDFLib.pm. The base pdflib comes with its own interface called
 pdflib_pl.pm, but again it's a C-like interface. So PDFLib.pm is more OO.
 Well, I'm biased, but I think it works pretty well (though it's lacking
 support for graphics primitives right now), and I use it in AxKit's AxPoint
 package to create all my slideshows (see
 http://217.158.50.178/docs/presentations/tpc2001/).

 One bonus about PDFLib.pm is that the underlying PDF stuff is all done in C,
 so it's likely a bit faster than all the other (pure perl) options.

 I've heard good stuff about PDF::Create though (but I think that's one of
 the ones that didn't support images when I was looking).

 Matt.

 _
 This message has been checked for all known viruses by Star Internet
 delivered through the MessageLabs Virus Scanning Service. For further
 information visit http://www.star.net.uk/stats.asp or alternatively call
 Star Internet for details on the Virus Scanning Service.


Regards,
Oleg
_
Oleg Bartunov, sci.researcher, hostmaster of AstroNet,
Sternberg Astronomical Institute, Moscow University (Russia)
Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/
phone: +007(095)939-16-83, +007(095)939-23-83




RE: [OT] pdf creation

2001-10-29 Thread Matt Sergeant

It's on CPAN already.

 -Original Message-
 From: Oleg Bartunov [mailto:[EMAIL PROTECTED]]
 Sent: 29 October 2001 09:40
 To: Matt Sergeant
 Cc: 'Lon Koenig'; [EMAIL PROTECTED]
 Subject: RE: [OT] pdf creation
 
 
 Matt,
 
 do you have a plan to release PDFLib.pm ?

_
This message has been checked for all known viruses by Star Internet
delivered through the MessageLabs Virus Scanning Service. For further
information visit http://www.star.net.uk/stats.asp or alternatively call
Star Internet for details on the Virus Scanning Service.