User tracking in IE 6

2001-05-26 Thread Differentiated Software Solutions Pvt. Ltd.,



Hi,

This is related to the other thread on user tracking.
I've been reading up a bit on the upcoming IE6 release.
A couple of facts which may bite some of our software are:
a) By default, 3rd-party cookies are to be disabled in IE6.
b) Implementation of P3P in IE6.

Our software is a 3rd-party adserver which uses cookies. (We are not using them for anything privacy-related; we just find them a convenient way to track clickthroughs.)
We are planning a minor release of our software in the post-IE6 era.
We wanted thoughts on:
-- How are other people going to handle cookies with IE6? Have any of you tested it?
-- Links / material which will help us understand P3P in the context of cookies. Most links we found talk about the full policy of disclosing privacy info. We wanted more info on the compact policy for handling cookies.
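As far as we understand it so far, the compact policy is just an extra HTTP response header sent along with the cookie. A minimal mod_perl sketch of the mechanism (Apache 1.x API; the CP token list is a made-up placeholder, not a reviewed policy):

    use Apache ();

    my $r = Apache->request;

    # Send a P3P compact policy alongside the tracking cookie.
    # The token string below is only an illustration.
    $r->header_out('P3P'        => 'CP="NOI DSP COR NID"');
    $r->header_out('Set-Cookie' => 'track=abc123; path=/');
    $r->content_type('text/html');
    $r->send_http_header;
    $r->print("ad served\n");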

Thanks,

Murali

Differentiated Software Solutions Pvt. Ltd.,
90, 3rd Cross, 2nd Main, Ganga Nagar,
Bangalore - 560 032
Phone : 91 80 3631445, 3431470
Visit us at www.diffsoft.com




[OT] Fast DB access

2001-05-21 Thread Differentiated Software Solutions Pvt. Ltd.,



Hi,

This is a follow up of mails sent to this mailing 
list last month.
We were benchmarking several db access methods and 
comparing them with postgres.
Lots of people advised us to try pg 7.1 instead of 
pg 6.5.3

This turns out to be good advice as regards 
performance. (We would like to implement one application before commenting on 
stability)
We ran the same benchmark as last time. Benchmark 
is some configurable number of selects on a composite primary key.
The results are:
number of selects: 100
postgres (6.5.3): 4605 wallclock secs (858.69 usr + 115.92 sys = 974.61 CPU)
pg 7.1:           3297 wallclock secs (835.19 usr + 96.86 sys = 932.05 CPU)
mldbm:            1286 wallclock secs (1111.71 usr + 161.86 sys = 1273.57 CPU)
As you can see, pg 7.1 is 30% faster than pg 6.5.3 and 3 times slower than MLDBM.
If people are interested in the benchmark script 
itself, please write to us.
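For the curious, the core of the run is roughly the following (a sketch only; the table and column names are made up, and the real script is available on request):

    use Benchmark qw(timethis);
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=bench', 'user', 'pass',
                           { RaiseError => 1 });
    my $sth = $dbh->prepare(
        'SELECT ads FROM ad_map
          WHERE publisher = ? AND size = ? AND type = ? AND ip = ?');

    # The benchmark data: 100 publishers, 3 sizes, 4 types, 20 ip numbers.
    my @publishers = map { "pub$_" } 1 .. 100;
    my @sizes      = qw(banner button skyscraper);
    my @types      = qw(gif jpg html text);
    my @ips        = 1 .. 20;

    # Time a configurable number of selects on random values of the
    # composite primary key.
    timethis(100, sub {
        $sth->execute($publishers[rand @publishers], $sizes[rand @sizes],
                      $types[rand @types], $ips[rand @ips]);
        my $rows = $sth->fetchall_arrayref;
    });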

Thanks and Regards,

Murali & S Muthu Ganesh

Differentiated Software Solutions Pvt. Ltd.,
90, 3rd Cross, 2nd Main, Ganga Nagar,
Bangalore - 560 032
Phone : 91 80 3631445, 3431470
Visit us at www.diffsoft.com




Re: [OT] Fast DB access

2001-04-20 Thread Differentiated Software Solutions Pvt. Ltd.,

Hi,

Cees has found a bug in our benchmark. We were using rtrim in our select
statement while doing the benchmark, and this was forcing postgres to perform
a table scan.

We've corrected the code for this bug. We are reposting results without
rtrim (creating the tables with varchar).
In fact, postgres even ignores the trailing blanks with the 'char' datatype.
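To illustrate the bug (with hypothetical table and column names, assuming $dbh is a connected DBI handle), the change was of this form:

    # Before: rtrim() on the indexed columns defeats the index on the
    # composite primary key, so postgres falls back to a table scan.
    my $bad = $dbh->prepare(
        'SELECT ads FROM ad_map WHERE rtrim(publisher) = ? AND rtrim(size) = ?');

    # After: plain comparison on varchar columns can use the index.
    my $good = $dbh->prepare(
        'SELECT ads FROM ad_map WHERE publisher = ? AND size = ?');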

When we ran these benchmarks with 40 selects, we could not make out any
difference in the results.
Instead we increased the number of selects to 2000. Here are the new results
for postgres (6.5.3) and mldbm only:
postgres
18 wallclock secs ( 1.88 usr +  0.18 sys =  2.06 CPU)
mldbm
 3 wallclock secs ( 1.77 usr +  0.21 sys =  1.98 CPU)

Results still compare favourably towards MLDBM.

Summary of our learning from the benchmarks and these discussions:

a) We have to use PG 7.1. It is a major improvement over 6.5.3.
b) When we need a completely read-only high-performance data structure,
MLDBM is a good option against postgres. This is provided we are able to
cast our database in MLDBM-style data structures.
c) Generic benchmarks may not be useful; for most applications we need to
devise our own benchmark which represents critical processing requirements
and try out various options.

Thanks to all of you who have contributed to this thread.

Regards,

V Murali & S Muthu Ganesh
Differentiated Software Solutions Pvt. Ltd.,
90, 3rd Cross,2nd Main,
Ganga Nagar,
Bangalore - 560 032
Phone : 91 80 3631445, 3431470
Visit us at www.diffsoft.com

- Original Message -
From: Cees Hek [EMAIL PROTECTED]
To: Murali V [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Friday, April 20, 2001 1:45 AM
Subject: [OT] Re: Fast DB access



 On Thu, 19 Apr 2001, Murali V wrote:

  Hi,
 
   If you read the code more deeply, you'll find that the timeit is only
   wrapped around select and not around insert.
   We've written the insert code so that in the first round you can populate
   the database.
   You comment out the insert code after the first round and run the benchmark
   several times. This would only do select and time select.
 

 Hi Murali,

 OK, to start off, I was not specifically aiming my rant at you, I was
 replying to someone who had modified your code and was now comparing MySQL
 and PostgreSQL, and he was implying that the timings were for inserts and
 selects.  I took this at face value, and didn't check the code closely
 enough, which I really should have done in the first place.

  Connecting this error to an axiom that "Benchmarks are useless" is bad
  indeed. Shouldn't we be ironing out errors and running benchmarks which
  are good?

 Perhaps I should have said published benchmarks.  In your case, you are
 using benchmarks for exactly what they are intended for...  Creating a
 system that closely resembles your application and putting it through its
 paces.  What I find dangerous about publishing benchmarks is that they
 are almost always heavily swayed to a specific application, and most of
 the time they show what the user wants them to show.

 In your original message, you claim to have a bias against Postgres, and
 your benchmark shows that bias.  I however am a happy user of postgres,
 and am therefore biased towards it.  I modified your benchmark script
 slightly, and I got the following results (I have included a diff of my
 changes at the bottom):

 postgres
  0 wallclock secs ( 0.02 usr + 0.01 sys = 0.03 CPU)
 postgres
  0 wallclock secs ( 0.02 usr +  0.00 sys =  0.02 CPU)

 Whereas if I run it with your version I get the following:

 postgres
 27 wallclock secs ( 0.00 usr + 0.00 sys = 0.00 CPU)
 postgres
 27 wallclock secs ( 0.02 usr +  0.00 sys =  0.02 CPU)

 So what does that tell you about the benchmark?  That the postgres part of
 this benchmark is useless...  It may have given you the answer that you
 wanted, but it is misleading to anyone else out there.

 This is why there are always flame wars about benchmarking databases (by
 the way, I think this whole thread has been very civilized and I hope it
 stays that way).  Invariably the benchmark has missed some critical idea
 or optimization which drastically skews the results.

  Your recommendation is to pick a DB best suited to your app. But how ??
  a) Either by hiring a guru who has seen all kinds of apps with different
  DBs, who can give you the answer with which we can run.
  b) Run a benchmark on critical programs which represent your app across
  databases and find what performs best.
  I've read too much literature on DB features. All DBs have all features
  (except MySQL, which does not have commit).
  You can't make a thing out of DB literature.

 What I would recommend is exactly what you have done in this case.  Get
 access to any and all the systems that you feel may do the job for you,
 and try them out.  Browse the web for other users' experiences, but don't
 use other people's benchmarks, because the odds are good that they are
 wrong...  Create your own, or modify an existing one, and scrutinize
 exactly what it is doing.

Re: Fast DB access

2001-04-19 Thread Differentiated Software Solutions Pvt. Ltd.,

We fully support this view.

Why databases... just read this mail.
There are only 2 tracks:
a) One totally off-track, discussing Oracle.
b) The other track making us defend our benchmarks. (Wish we had not used the
word benchmark.)

People are saying either that this benchmark is bad or that all benchmarks are
useless.

I get a feeling that the point we were trying to make is going to be missed:
MLDBM is not a bad alternative to databases under specific conditions !!

S Muthu Ganesh & V Murali
Differentiated Software Solutions Pvt. Ltd.,
90, 3rd Cross,2nd Main,
Ganga Nagar,
Bangalore - 560 032
Phone : 91 80 3631445, 3431470
Visit us at www.diffsoft.com

 - Original Message -
 From: Perrin Harkins [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]; Joe Brenner [EMAIL PROTECTED]
 Sent: Thursday, April 19, 2001 4:13 AM
 Subject: Re: Fast DB access


   "Chutzpah" is an interesting way of putting it.  I've been thinking
   of them as "slimeballs in the busy of conning webkids into
   thinking they have a real RDBM product".
  
   (It isn't a moot point, because it's the same people working on
   it: human character issues are actually relevant when making
   technical decisions.)
 
  Why does discussion of databases - possibly the most boring subject on
  the planet - always degenerate into name-calling?

  MySQL is an excellent solution for a wide range of problems, as are dbm
  files and flat files.  The developers give the code away for free, and do
  not hide the fact that it doesn't support transactions.  There's no need
  for this kind of vitriol.
 
  - Perrin
 





Fw: [OT] Re: Fast DB access

2001-04-19 Thread Differentiated Software Solutions Pvt. Ltd.,

Thanks for pointing out the mistake in postgres.
Your advice makes lots of sense.
We will recreate the benchmark and post the results.

V Murali
Differentiated Software Solutions Pvt. Ltd.,
90, 3rd Cross,2nd Main,
Ganga Nagar,
Bangalore - 560 032
Phone : 91 80 3631445, 3431470
Visit us at www.diffsoft.com

 - Original Message -
 From: Cees Hek [EMAIL PROTECTED]
 To: Murali V [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Sent: Friday, April 20, 2001 1:45 AM
 Subject: [OT] Re: Fast DB access


 
  On Thu, 19 Apr 2001, Murali V wrote:
 
   Hi,
  
    If you read the code more deeply, you'll find that the timeit is only
    wrapped around select and not around insert.
    We've written the insert code so that in the first round you can populate
    the database.
    You comment out the insert code after the first round and run the benchmark
    several times. This would only do select and time select.
  
 
  Hi Murali,
 
  OK, to start off, I was not specifically aiming my rant at you, I was
  replying to someone who had modified your code and was now comparing
  MySQL and PostgreSQL, and he was implying that the timings were for
  inserts and selects.  I took this at face value, and didn't check the
  code closely enough, which I really should have done in the first place.

   Connecting this error to an axiom that "Benchmarks are useless" is bad
   indeed. Shouldn't we be ironing out errors and running benchmarks which
   are good?

  Perhaps I should have said published benchmarks.  In your case, you are
  using benchmarks for exactly what they are intended for...  Creating a
  system that closely resembles your application and putting it through
  its paces.  What I find dangerous about publishing benchmarks is that
  they are almost always heavily swayed to a specific application, and
  most of the time they show what the user wants them to show.
 
  In your original message, you claim to have a bias against Postgres, and
  your benchmark shows that bias.  I however am a happy user of postgres,
  and am therefore biased towards it.  I modified your benchmark script
  slightly, and I got the following results (I have included a diff of my
  changes at the bottom):
 
  postgres
   0 wallclock secs ( 0.02 usr + 0.01 sys = 0.03 CPU)
  postgres
   0 wallclock secs ( 0.02 usr +  0.00 sys =  0.02 CPU)
 
  Whereas if I run it with your version I get the following:
 
  postgres
  27 wallclock secs ( 0.00 usr + 0.00 sys = 0.00 CPU)
  postgres
  27 wallclock secs ( 0.02 usr +  0.00 sys =  0.02 CPU)
 
  So what does that tell you about the benchmark?  That the postgres part of
  this benchmark is useless...  It may have given you the answer that you
  wanted, but it is misleading to anyone else out there.
 
  This is why there are always flame wars about benchmarking databases (by
  the way, I think this whole thread has been very civilized and I hope it
  stays that way).  Invariably the benchmark has missed some critical idea
  or optimization which drastically skews the results.

   Your recommendation is to pick a DB best suited to your app. But how ??
   a) Either by hiring a guru who has seen all kinds of apps with
   different DBs, who can give you the answer with which we can run.
   b) Run a benchmark on critical programs which represent your app across
   databases and find what performs best.
   I've read too much literature on DB features. All DBs have all
   features (except MySQL, which does not have commit).
   You can't make a thing out of DB literature.
 
  What I would recommend is exactly what you have done in this case.  Get
  access to any and all the systems that you feel may do the job for you,
  and try them out.  Browse the web for other users' experiences, but don't
  use other people's benchmarks, because the odds are good that they are
  wrong...  Create your own, or modify an existing one, and scrutinize
  exactly what it is doing.  And if you want to share your results with
  anyone else, tell them what you chose in the end, and why.  Tell them you
  chose database x because it did this and this for you.  Don't say
  database y is a piece of crap, so we went with database x.

  But whatever you do, don't choose your database based on other people's
  benchmarks (that is all I'm trying to say, and I guess I didn't
  say it clearly enough).

  When I first read your message, I tucked it away somewhere, so I could
  reference it again in the future, because I was interested in the MLDBM
  work that you had done, and I thank you for that.  But it also made me
  think that maybe I shouldn't be using Postgres, because your results were
  so poor (only for a second or two, though :).  But I'll bet that a lot of
  people who have never used postgres before are now less likely to download
  it and try it out for themselves, because a benchmark swayed them away from
  it.  That sounds like a good closer, so I'll stop it there :-)
 
 
  Cees
 
 
  Here is the diff of my changes a

Re: Fast DB access

2001-04-18 Thread Differentiated Software Solutions Pvt. Ltd.,



Hi,

There are 4 responses to our results. We will answer them to the best of our ability.

MATT This is a very very old version of postgresql. Try it again with 7.1 for
MATT more respectable results.
Accepted. We knew this when we conducted the benchmarks.
We've had terrible experience with postgres. Firstly on performance and, more importantly, on availability.
Some of you should try pounding postgres with upwards of 25 queries a second and see the results. The postgres server will spew out error messages and shut down. Last year we had several nightouts writing code to protect postgres from an overload of queries.
I've written several mails to postgres mailing lists and even to mod_perl in desperation. The problem wasn't solved.
We'll try out 7.1. Maybe it is a major improvement over 6.5.3. I find it difficult to believe that it will improve performance by 36 times.
Here I have to add: we met one of the Oracle support people in India to find out whether Oracle would be a good alternative. He was a nice guy and told us that postgres is a thinner DB and should perform better under most circumstances. People go in for Oracle more for features and perhaps corporate support, not for performance !!

BRUCE It's more likely you are seeing hardware bottlenecks with this
BRUCE configuration
...followed by a long list of options to try.
2 replies:
a) We've monitored the CPU and memory usage. Hardly anything to write home about. If the CPU/memory were anywhere near maxing out then I would agree... except, of course, when we use postgres. Postgres is not swapping, only hogging CPU. When postgres benchmarks are running we have more than 70% of our RAM free.
Postgres almost always maxes out. Read the next point for further details.
b) We have repeated these benchmarks on a dual-CPU Pentium 3-700 with 1 GB RAM, with almost identical relative results. Postgres still performs poorly relative to the others. In these benchmarks too, postgres takes 100% of CPU while running the query !!!

CLAYTON i wanted a good benchmark for postgres and mysql
We haven't tried this one. We are doing a project on mysql. Our preliminary assessment is, it's a shocker. They justify not having commit and rollback !! Makes us think whether they are even lower-end than MS-Access.

PERRIN You might get better performance by using a combined key, hashing it, and
PERRIN splitting into directories after the first 2 characters in the key. This
PERRIN would mean 2 directories to traverse for each lookup, rather than 4. I
PERRIN believe the File::Cache module works this way, so you could steal code from
PERRIN there.
PERRIN However, dbm is a good choice for this. You may find SDBM_File faster than
PERRIN DB_File if your records are small enough for it (I think the limit is 2K per
PERRIN record).
These are good options to try. We will try them (hope we have time) and post back results.

Regards,

S Muthu Ganesh & V Murali


  - Original Message -
  From: Differentiated Software Solutions Pvt. Ltd.,
  To: [EMAIL PROTECTED]
  Sent: Tuesday, April 17, 2001 4:41 PM
  Subject: Fast DB access
  
  Hi,
  
  A few months back we asked the modperl mailing list
  about alternate methods of DB access to postgres (with the same subject). We got
  some decent alternatives. We are putting back some of the work we have done on
  this issue.

  We had a project to program an ad server. This is
  not really an OLTP application, i.e., we had a few screens where some
  data is captured. Based on this data we had to pick up an advertisement to
  serve it.
  The essence of the application is to have a highly
  scalable program to deliver ads... which means we wanted a method to be able
  to pick ads given a criteria and choose one among them.
  We had written a benchmark program, after which
  we decided to go for MLDBM for our purposes.
  Though this is not directly related to modperl,
  we are taking the liberty of posting this message. We hope you find it
  useful.

  Specification and results of the benchmark are as
  follows.

  Objective : To choose one of the alternate access
  methods for a read-only DB program

  Program logic :
  Choose a row from a table which has a composite key
  containing 4 attributes.
  The 4 attributes which we used are
  publishers, size, type and ip number.
  Given values of these 4 attributes, we get a
  list of advertisements for these attributes.
  In the live application we will choose one of these
  ads based on a weighted random number.
  For the purpose of the benchmark we want to create a
  hash or hash reference of the ads given these 4 criteria.

  Benchmark Data :
  Our benchmark data consists of 100
  publishers, 3 sizes, 4 types and 20 ip numbers, which makes it a data
  structure containing 24,000 combinations of attributes. Each combination in
  turn contains 10 advertisements.

  Benchmark alternatives :
  We have populated this data into
  a) A pure in-memory multi-level

Re: Fast DB access

2001-04-18 Thread Differentiated Software Solutions Pvt. Ltd.,



Hi,

We're continuing this discussion.

Responses to queries raised in the last 24 hours:

WIM Could you post the SQL statements used to create the tables as well?
See our posting on April 17th. Our attachments have the create table sql too.

CLAYTON [drfrog]$ perl fast_db.pl
CLAYTON postgres
CLAYTON 16 wallclock secs ( 0.05 usr + 0.00 sys = 0.05 CPU) @ 400.00/s (n=20)
CLAYTON mysql
CLAYTON  3 wallclock secs ( 0.07 usr + 0.00 sys = 0.07 CPU) @ 285.71/s (n=20)
CLAYTON postgres
CLAYTON 17 wallclock secs ( 0.06 usr + 0.00 sys = 0.06 CPU) @ 333.33/s (n=20)
CLAYTON mysql
CLAYTON  3 wallclock secs ( 0.01 usr + 0.01 sys = 0.02 CPU) @ 1000.00/s (n=20)
MATHEW Again, checkout PostgreSQL 7.1 -- I believe "commit" and "rollback" (as
MATHEW you put it) are available. BTW, I would like to see that comment about
MATHEW MS-Access posted to pgsql-general... I dare ya. :P
We were saying that mySQL is the shocker, in that they were justifying the lack of commit and rollback. We have no complaints with pg on the features front.

Several people have recommended pg 7.1.
We take this as valid feedback. We'll install and use pg 7.1.

MATHEW I'm on several postgresql mailing lists and couldn't find a recent post
MATHEW from you complaining about 6.5.3 performance problems (not even by an
MATHEW archive search). Your benchmark is worthless until you try postgresql
MATHEW 7.1. There have been two major releases of postgresql since 6.5.x (ie.
MATHEW 7.0 and 7.1) and several minor ones over a total of 2-3 years. It's no
MATHEW secret that they have tremendous performance improvements over 6.5.x. So
MATHEW why did you benchmark 6.5.x?
I've not posted anything to postgres newsgroups for a long... time. I was too cheesed off. They kept defending postgres without accepting/solving problems. Let's not go into this.

We are as of now ignoring any discussions of Oracle... etc. We would be glad to hear more suggestions on our benchmark.

Several people complain that this is not a fair test. We are not professionals in benchmarking; rather, we are software developers using benchmarks as a way of choosing among alternatives.
If people have specific suggestions on ways of improving our benchmark we will be very happy.
Also welcome are links on how to design and run these benchmarks, for amateurs like us.

Thanks and Regards,

S Muthu Ganesh & V Murali
Differentiated Software Solutions Pvt. Ltd.,
90, 3rd Cross, 2nd Main, Ganga Nagar,
Bangalore - 560 032
Phone : 91 80 3631445, 3431470
Visit us at www.diffsoft.com
- Original Message -
  From: Differentiated Software Solutions Pvt. Ltd.,
  To: [EMAIL PROTECTED]
  Sent: Tuesday, April 17, 2001 4:41 PM
  Subject: Fast DB access
  
  Hi,
  
  A few months back we asked the modperl mailing list
  about alternate methods of DB access to postgres (with the same subject). We got
  some decent alternatives. We are putting back some of the work we have done on
  this issue.

  We had a project to program an ad server. This is
  not really an OLTP application, i.e., we had a few screens where some
  data is captured. Based on this data we had to pick up an advertisement to
  serve it.
  The essence of the application is to have a highly
  scalable program to deliver ads... which means we wanted a method to be able
  to pick ads given a criteria and choose one among them.
  We had written a benchmark program, after which
  we decided to go for MLDBM for our purposes.
  Though this is not directly related to modperl,
  we are taking the liberty of posting this message. We hope you find it
  useful.

  Specification and results of the benchmark are as
  follows.

  Objective : To choose one of the alternate access
  methods for a read-only DB program

  Program logic :
  Choose a row from a table which has a composite key
  containing 4 attributes.
  The 4 attributes which we used are
  publishers, size, type and ip number.
  Given values of these 4 attributes, we get a
  list of advertisements for these attributes.
  In the live application we will choose one of these
  ads based on a weighted random number.
  For the purpose of the benchmark we want to create a
  hash or hash reference of the ads given these 4 criteria.

  Benchmark Data :
  Our benchmark data consists of 100
  publishers, 3 sizes, 4 types and 20 ip numbers, which makes it a data
  structure containing 24,000 combinations of attributes. Each combination in
  turn contains 10 advertisements.

  Benchmark alternatives :
  We have populated this data into
  a) A pure in-memory multi-level hash : Before
  starting the actual benchmark the program populates a multi-level hash... each
  of which finally points to the advertisements. Objective is to pick the
  last-level hash of advertisements.
  b) Flat file : Create a Linux directory structure
  with the same hierarchy as the attributes, i.e., the directory structure has
  publishers/sizes/types/ip numbers, where ip number
  is the file name which contains a list of ads.

Re: Fast DB access

2001-04-18 Thread Differentiated Software Solutions Pvt. Ltd.,

Hi,

If you read the code more deeply, you'll find that the timeit is only
wrapped around select and not around insert.
We've written the insert code so that in the first round you can populate
the database.
You comment out the insert code after the first round and run the benchmark
several times. This would only do select and time select.

Connecting this error to an axiom that "Benchmarks are useless" is bad
indeed. Shouldn't we be ironing out errors and running benchmarks which are
good?

Your recommendation is to pick a DB best suited to your app. But how ??
a) Either by hiring a guru who has seen all kinds of apps with different DBs,
who can give you the answer with which we can run.
b) Run a benchmark on critical programs which represent your app across
databases and find what performs best.
I've read too much literature on DB features. All DBs have all features
(except MySQL, which does not have commit).
You can't make a thing out of DB literature.

We believe that we have extracted the core of our application in this small
program. We also believe that there will be many more such applications
which will benefit from this benchmark.
Clearly, if there is a non-transactional system (a system with heavy selects
and very few updates), one can use this benchmark as a relative comparison
among different access methods.

Wake up, Cees... you can't just preside over a discussion like this :-)

 Thanks and Regards,

 S Muthu Ganesh & V Murali
Differentiated Software Solutions Pvt. Ltd.,
90, 3rd Cross,2nd Main,
Ganga Nagar,
Bangalore - 560 032
Phone : 91 80 3631445, 3431470
Visit us at www.diffsoft.com

 - Original Message -
 From: Cees Hek [EMAIL PROTECTED]
 To: Clayton Cottingham aka drfrog [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Sent: Thursday, April 19, 2001 8:08 PM
 Subject: [OT] Re: Fast DB access


  On 18 Apr 2001, Clayton Cottingham aka drfrog wrote:
 
   [drfrog]$ perl fast_db.pl
   postgres
   16 wallclock secs ( 0.05 usr + 0.00 sys =  0.05 CPU) @ 400.00/s (n=20)
   mysql
3 wallclock secs ( 0.07 usr + 0.00 sys =  0.07 CPU) @ 285.71/s (n=20)
   postgres
   17 wallclock secs ( 0.06 usr + 0.00 sys =  0.06 CPU) @ 333.33/s (n=20)
   mysql
3 wallclock secs ( 0.01 usr + 0.01 sys =  0.02 CPU) @ 1000.00/s
(n=20)
  
  
   correct me if I'm wrong, but if fast_db.pl is
   working right,
   first set is insert
   second set is select
 
  I am mad at myself for getting dragged into this, but I couldn't help
  myself...
 
  You are crippling postgreSQL by doing a tonne of inserts with a commit
  after each statement.  This completely misses the fact that postgreSQL is
  transaction based whereas MySQL is not.  Turn off AutoCommit and do a
  commit at the end of the insert loop.
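  A minimal sketch of that change, assuming a plain DBI handle (the table
  and data below are placeholders):

      # Open the connection with AutoCommit off so the inserts batch up
      # into one transaction instead of committing per statement.
      my $dbh = DBI->connect('dbi:Pg:dbname=bench', 'user', 'pass',
                             { RaiseError => 1, AutoCommit => 0 });

      my $sth  = $dbh->prepare('INSERT INTO ad_map (k, v) VALUES (?, ?)');
      my @rows = ([1, 'x'], [2, 'y']);

      for my $row (@rows) {
          $sth->execute(@$row);    # no per-row commit
      }
      $dbh->commit;                # single commit after the loop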
 
  Also, if your selects are taking just as long as your inserts then you
  must have other problems as well.  Did you set up any indices for the
  columns of your table, or is that considered "optimizing the database" and
  therefore not valid in your benchmark?
 
  Benchmarks like this are pretty much useless (actually 99% of all
  benchmarks are useless).
 
  Use the database that best fits your needs based on the features it
  supports, and the experience you have using it.  If you find your database
  is too slow, look into optimizing it because there are usually hundreds of
  things you can do to make a database faster (faster disks, more ram,
  faster CPU, fixing indices, optimizing queries, etc...).
 
  Don't pick a database because a benchmark on the web somewhere says it's
  the fastest...
 
  Sorry for the rant, I'll go back to sleep now...
 
  Cees
 
  
   find attached the modified ver of fast_db.pl
   I used to conduct this test


   comp stats
   running stock rpms from mandrake 7.2 for both
   postgresql and mysql:
   3.23.23-beta of mysql and
   7.0.2 of postgresql
  
   [drfrog@nomad desktop]$ uname -a
   Linux nomad.localdomain 2.2.18 #2 Tue Apr 17 22:55:04 PDT 2001 i686
 unknown
  
   [drfrog]$ cat /proc/meminfo
   total:   used:free:  shared: buffers:  cached:
   Mem:  257511424 170409984 87101440 24219648 96067584 44507136
   Swap: 254943232        0 254943232
   MemTotal:251476 kB
   MemFree:  85060 kB
   MemShared:23652 kB
   Buffers:  93816 kB
   Cached:   43464 kB
   SwapTotal:   248968 kB
   SwapFree:248968 kB
   [drfrog]$ cat /proc/cpuinfo
   processor : 0
   vendor_id : AuthenticAMD
   cpu family : 6
   model : 3
   model name : AMD Duron(tm) Processor
   stepping : 1
   cpu MHz : 697.535
   cache size : 64 KB
   fdiv_bug : no
   hlt_bug : no
   sep_bug : no
   f00f_bug : no
   coma_bug : no
   fpu : yes
   fpu_exception : yes
   cpuid level : 1
   wp : yes
   flags : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat
   pse36 psn mmxext mmx fxsr 3dnowext 3dnow
   bogomips : 1392.64
  
  
  
   i will recomp both the newest postgresql and mysql

   not using any optimizing techs at all; i'll post the

   config scripts i use
   On

Fast DB access

2001-04-17 Thread Differentiated Software Solutions Pvt. Ltd.,



Hi,

A few months back we asked the modperl mailing list about
alternate methods of DB access to postgres (with the same subject). We got some
decent alternatives. We are putting back some of the work we have done on this
issue.

We had a project to program an ad server. This is
not really an OLTP application, i.e., we had a few screens where some data
is captured. Based on this data we had to pick up an advertisement to serve
it.
The essence of the application is to have a highly
scalable program to deliver ads... which means we wanted a method to be able to
pick ads given a criteria and choose one among them.
We had written a benchmark program, after which we
decided to go for MLDBM for our purposes.
Though this is not directly related to modperl, we
are taking the liberty of posting this message. We hope you find it
useful.

Specification and results of the benchmark are as
follows.

Objective : To choose one of the alternate access
methods for a read-only DB program

Program logic :
Choose a row from a table which has a composite key
containing 4 attributes.
The 4 attributes which we used are
publishers, size, type and ip number.
Given values of these 4 attributes, we get a
list of advertisements for these attributes.
In the live application we will choose one of these
ads based on a weighted random number.
For the purpose of the benchmark we want to create a
hash or hash reference of the ads given these 4 criteria.

Benchmark Data :
Our benchmark data consists of 100
publishers, 3 sizes, 4 types and 20 ip numbers, which makes it a data
structure containing 24,000 combinations of attributes. Each combination in turn
contains 10 advertisements.

Benchmark alternatives :
We have populated this data into:
a) A pure in-memory multi-level hash : Before
starting the actual benchmark the program populates a multi-level hash... each
of which finally points to the advertisements. Objective is to pick the
last-level hash of advertisements.
b) Flat file : Create a Linux directory structure
with the same hierarchy as the attributes, i.e., the directory structure is
publishers/sizes/types/ip numbers, where ip number is
the file name which contains a list of ads. Objective is to pick the right file,
open this file and create a hash with the contents of the file.
c) Postgres : Create a table with composite primary
key publisher, sizes, types, ip numbers.
Objective is to pick a hash reference of all rows given an attribute
combination.
d) Storable : Store the multi-level hash into a disk
file using Storable.pm. Objective : Read the storable file into
memory and pick the last-level hash of ads.
e) MLDBM : Populate an MLDBM data structure using
MLDBM. Identical to Storable except we are using MLDBM.pm.
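For concreteness, the MLDBM lookup amounts to something like this (a sketch; the file name and lookup values are placeholders for what the attached script actually does):

    use Fcntl;
    use MLDBM qw(DB_File);   # nested values serialised through DB_File

    tie my %ads, 'MLDBM', 'ads.db', O_RDONLY, 0644
        or die "tie failed: $!";

    my ($publisher, $size, $type, $ip) = ('pub1', 'banner', 'gif', '10');

    # One top-level key per publisher; MLDBM stores the whole nested
    # structure under it, so a single fetch returns the sub-hash.
    my $by_size = $ads{$publisher};
    my $ad_list = $by_size->{$size}{$type}{$ip};   # the hash of 10 ads

    untie %ads;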

H/W : Celeron 433 with 64 MB RAM, IDE HDD, using RH 6.1, perl 5.005, Postgres 6.5.3

Benchmark run :
A benchmark run consists of accessing each of the benchmark alternatives 40 times, i.e., we generate a random combination of the 4 selection attributes 40 times, access a particular data structure and pick up the ads.
We repeat this process twice to ensure that we are getting consistent results.

Benchmark results :
hash:      0 wallclock secs ( 0.00 usr + 0.00 sys =  0.00 CPU)
flatfile:  5 wallclock secs ( 0.08 usr + 0.11 sys =  0.19 CPU)
postgres: 36 wallclock secs ( 0.04 usr + 0.01 sys =  0.05 CPU)
storable: 17 wallclock secs (16.24 usr + 0.61 sys = 16.85 CPU)
mldbm:     0 wallclock secs ( 0.08 usr + 0.08 sys =  0.16 CPU)
Benchmark interpretation :
We did not want to write this section... but we have interpreted these results and chosen mldbm.
The first option is not viable for us... we want to carry forward the values between 2 runs of the program and can't recreate the hash every time in the live app.
We had experimented with postgres earlier with disastrous results. In fact, postgres motivated us to seek alternatives.
We had a choice between flatfile and MLDBM (from the results).
It was a close one... but we found MLDBM far more compact. Users were more comfortable maintaining a system with a single file rather than a multitude of files on the file system.
We also suspect that if we add more rows, flatfile won't scale, though we've not tested it fully.

At the end we have chosen MLDBM. We have developed the software called opticlik (www.opticlik.com). It's doing just great, serving around 1.2 million ads a day. In short bursts it serves up to 100 ads a second.

We've attached our benchmark program and sql along with this mail. If any of you have time to run through the program and give us feedback it would be great. We also welcome any clarifications that may be required.

Regards,

S Muthu Ganesh & V Murali

Differentiated Software Solutions Pvt. Ltd.,
90, 3rd Cross, 2nd Main, Ganga Nagar,
Bangalore - 560 032
Phone : 91 80 3631445, 3431470
Visit us at www.diffsoft.com
 fast_db.pl
 benchmark.sql


[NQ] Newbie Question about mod_perl

2001-03-19 Thread Differentiated Software Solutions Pvt. Ltd.,




Hi,

I am now doing mod_perl programming, and I have gone through the related documents also.
I have written two mod_perl programs whose output is the same (through the browser).
I want to know what the differences between them are! If there is any difference, then what are the pros and cons of using each of them?

one.cgi

if (exists $ENV{MOD_PERL}) {
    my $r = Apache->request;
    $r->content_type("text/html");
    $r->send_http_header;
    $r->print("Hi There!");
}

two.cgi

if (exists $ENV{MOD_PERL}) {
    print "Content-Type: text/html\n\n";
    print "Hi There!";
}

Thanks,
Muthu S Ganesh



P.S. [NQ] - Newbie Question





Differentiated Software Solutions Pvt. Ltd.,
90, 3rd Cross, 2nd Main, Ganga Nagar,
Bangalore - 560 032
Phone : 91 80 3631445
Visit us at www.diffsoft.com


Debugging Apache::ASP

2000-12-06 Thread Differentiated Software Solutions Pvt. Ltd.,

Hi,

We've been using Apache::ASP over the past month.
We are having problems debugging an ASP program.
We have set the debug level to 3, but problems remain.

At debug level 3, the line in which the error occurs is displayed.
But if we have multiple statements within <% and %> tags, then the error display
gives only one line number (the line number of the first statement). We are
finding it very difficult to identify the exact line where the error has occurred.
In general we find Apache::ASP support for debugging quite poor. Are there any tools
which we are missing out on ??

How do others tackle this ??
How do people track "use of uninitialised variable" ?

Thanks,

S Muthu Ganesh

Differentiated Software Solutions Pvt. Ltd.,
176, Ground Floor, 6th Main,
2nd Block, RT Nagar,
Bangalore - 560032
Visit us at www.diffsoft.com



ASP Editor

2000-11-10 Thread Differentiated Software Solutions Pvt. Ltd



Hi,

We are using Apache::ASP. Could anybody refer us to
a decent Windows-based editor for this?

We want an editor which has syntax
highlighting for both ASP objects as well as PerlScript.

Thanks,

Murali

Differentiated Software Solutions Pvt. Ltd.
176, Ground Floor, 6th Main,
2nd Block, RT Nagar
Bangalore - 560032
Phone : 91 80 3431470
www.diffsoft.com


Re: Fast DB access

2000-11-09 Thread Differentiated Software Solutions Pvt. Ltd

Dear Tim,

As you had rightly pointed out, we have data which is not volatile. This data
gets updated once an hour by another process (a cron job). Concurrency is not
really an issue, because we are not updating the data.

We're now continuing our benchmark on some scaling issues, basically when
does dbm degenerate. We are increasing the number of entries in the dbm file to
see when it will break.

Murali
Differentiated Software Solutions Pvt. Ltd.
176, Ground Floor, 6th Main,
2nd Block, RT Nagar
Bangalore - 560032
Phone : 91 80 3431470
www.diffs-india.com
- Original Message -
From: Tim Sweetman [EMAIL PROTECTED]
To: Differentiated Software Solutions Pvt. Ltd [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, November 09, 2000 8:59 PM
Subject: Re: Fast DB access


 Hi,

 Firstly, thanks for bringing these results back to the mailing list...
 having seen this sort of problem previously, but without (IIRC) having
 done side-by-side comparisons between these various techniques, I'm keen
 to see what you find.

 "Differentiated Software Solutions Pvt. Ltd" wrote:
  2. Building the entire DB into a hash variable inside the mod_perl program
  is the fastest... we found it to be 25 times faster than querying a
  postgres database !!
  3. We have a problem rebuilding this database in the ram even say every
  1000 requests. We tried using dbm and found it a good compromise solution.
  We found that it is about 8 times faster than postgres querying.

 I assume from this that your data changes, but slowly, and you're
 getting better performance by accepting that your data be slightly out
 of date.

  4. Another surprising finding was... we built a denormalised db on the
  Linux file system itself, by using the directory and file name as the key on
  which we wanted to search. We found that dbm was faster than this.

 I'm curious about how you're dealing with the concurrency aspect with
 solutions 2-3. My guess is that, for 2, you're simply storing a hash in
 the memory, which means that each Apache child has its own copy. There
 will, every 1000 requests in that child, be the overhead of querying the
 DB & rebuilding the hash.

 3 presumably means having only _one_ DBM file. Do the CGI/mod-Perl
 processes rebuild this periodically, or is this done offline by another
 process? Do the CGI/mod-Perl processes have to wait while writes are
 going on?

 Cheers

 --
 Tim Sweetman
 A L Digital
  moving sideways ---




Re: Fast DB access

2000-11-08 Thread Differentiated Software Solutions Pvt. Ltd

We would like to add one thing to this.
Different application situations seem to require different approaches. While
RDBMSes seem to support say 80% of these situations, there are some situations
where we find them not good enough.

We have developed an adserver which has exactly the kind of scenario that
Sander has talked about. Lots of similar queries on read-only
data, having to be distributed across servers and so on... RDBMSes (in our
experience) don't seem suited for this.

Murali
Differentiated Software Solutions Pvt. Ltd.
176, Ground Floor, 6th Main,
2nd Block, RT Nagar
Bangalore - 560032
Phone : 91 80 3431470
www.diffs-india.com
- Original Message -
From: Sander van Zoest [EMAIL PROTECTED]
To: Matt Sergeant [EMAIL PROTECTED]
Cc: Differentiated Software Solutions Pvt. Ltd [EMAIL PROTECTED];
[EMAIL PROTECTED]
Sent: Thursday, October 12, 2000 2:35 AM
Subject: Re: Fast DB access


 On Wed, 11 Oct 2000, Matt Sergeant wrote:

   I really think that sometimes going for a flat file layout *can* be much
   more reliable and scalable than RDBMS software. It all really depends on
   what you plan to do with the data and what you would like to get out of
   it.
  I think you chose the wrong words there. I think a flat file layout can be
  more performant than an RDBMS, but I don't think it's going to be
  more reliable or scalable than an RDBMS. There are far too many locking
  issues and transaction issues necessary for the terms "reliable and
  scalable", unless you're willing to spend a few years re-coding Oracle :-)

 I actually think that there are times it can be all three. Notice how
 I said there are times it can be all three; it definitely isn't the case
 all the time. Neither are RDBMS. ;-)

 Lots of places use databases for read-only queries. Having a database
 that gets lots of similar queries that are read-only makes it an
 unnecessary single point of failure. Why not use the local disk and
 use rsync to replicate the data around. This way if a machine goes down,
 the others still have a full copy of the content and keep on running.

 If you have a lot of data that you need to keep in sync, which needs constant
 updating with a random assortment of different queries, then you get some real
 use out of an RDBMS.

 I guess I am just saying that there are gazillions of ways of doing things,
 and each tool has something it is good at. File systems are really good
 at serving up read-only content. So why re-invent the wheel? It all really
 depends on what content you are dealing with and how you expect to query
 it and use it.

 There is a reason that table optimisation and tuning databases is such
 a sought after skill. Most of these things require different things that
 all rely on the type of content and their use. These things need to be
 taken in consideration on a case by case basis.

 You can do things terribly using Oracle and you can do things well using
 Oracle. The same can be said about just about everything. ;-)


 --
 Sander van Zoest
[[EMAIL PROTECTED]]
 Covalent Technologies, Inc.
http://www.covalent.net/
 (415) 536-5218
http://www.vanzoest.com/sander/





Re: Fast DB access

2000-11-08 Thread Differentiated Software Solutions Pvt. Ltd

Hi,

We are returning after extensive tests of the various options suggested.

First, we are not entering into the debate about well-designed DBs and
databases being able to handle lots of queries and all that. Assume that we
have an app (an adserver) which dbs don't support well... i.e., fairly complex
queries to be serviced quickly.

Some of the things we've found are:
1. DBD::RAM is quite slow !! We presume this is because the SQLs have to be
parsed every time we make requests.
2. Building the entire DB into a hash variable inside the mod_perl program
is the fastest... we found it to be 25 times faster than querying a
postgres database !!
3. We have a problem rebuilding this database in the ram even say every
1000 requests. We tried using dbm and found it a good compromise solution.
We found that it is about 8 times faster than postgres querying.
4. Another surprising finding was... we built a denormalised db on the
Linux file system itself, by using the directory and file name as the key on
which we wanted to search. We found that dbm was faster than this.

We're carrying out more tests to see how scalable dbm is. Hope these
findings are useful to others.

Thanks for all the help.

Murali
Differentiated Software Solutions Pvt. Ltd.
176, Ground Floor, 6th Main,
2nd Block, RT Nagar
Bangalore - 560032
Phone : 91 80 3431470
www.diffs-india.com
- Original Message -
From: Francesc Guasch [EMAIL PROTECTED]
To: Differentiated Software Solutions Pvt. Ltd [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Wednesday, October 11, 2000 1:56 PM
Subject: Re: Fast DB access


  "Differentiated Software Solutions Pvt. Ltd" wrote:
 
  Hi,
 
  We have an application where we will have to service as high as 50
  queries a second.
  We've discovered that most database just cannot keep pace.
 
  The only option we know is to service queries out of flat files.

 There is a DBD module : DBD::Ram. If you got enough memory
 or there is not many data it could be what you need.

 I also have seen recently a post about a new DBD module for
 CSV files, in addition of DBD::CSV, try

 http://search.cpan.org

 --
  - frankie -




Re: Fast DB access

2000-11-08 Thread Differentiated Software Solutions Pvt. Ltd

Yes. The tables were indexed.
Otherwise we might have seen even more spectacular results.

Murali
- Original Message - 
From: G.W. Haywood [EMAIL PROTECTED]
To: Differentiated Software Solutions Pvt. Ltd [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Wednesday, November 08, 2000 5:44 PM
Subject: Re: Fast DB access


 Hi there,
 
 On Wed, 8 Nov 2000, Differentiated Software Solutions Pvt. Ltd wrote:
 
  We are returning after extensive tests of various options suggested.
 
 Did you try different indexing mechanisms in your tests?
 
 73,
 Ged.
 




Re: Fast DB access

2000-11-08 Thread Differentiated Software Solutions Pvt. Ltd

Hi,

When we rebuild the hash in RAM it takes too much time.
The other questions my colleagues will answer.

Murali
Differentiated Software Solutions Pvt. Ltd.
176, Ground Floor, 6th Main,
2nd Block, RT Nagar
Bangalore - 560032
Phone : 91 80 3431470
www.diffs-india.com

- Original Message -
From: Perrin Harkins [EMAIL PROTECTED]
To: Differentiated Software Solutions Pvt. Ltd [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, November 09, 2000 12:19 AM
Subject: Re: Fast DB access


 "Differentiated Software Solutions Pvt. Ltd" wrote:
  3. We have a problem rebuilding this database in the ram even say every
  1000 requests.

 What problem are you having with it?

  We tried using dbm and found it a good compromise solution.
  We found that it is about 8 times faster than postgres querying.

 Some dbm implementations are faster than others.  Depending on your data
 size, you may want to try a couple of them.

  4. Another surprising finding was... we built a denormalised db on the
  Linux file system itself, by using the directory and file name as the key on
  which we wanted to search. We found that dbm was faster than this.

 Did you end up with a large number of files in one directory?  When
 using the file system in this way, it's a common practice to hash the
 key you're using and then split that across multiple directories to
 prevent too many files from building up in one and slowing things down.

 For example:

 "my_key" -- "dHodeifehH" -- /usr/local/data/dH/odeifehH

 Also, you could try using mmap for reading the files, or possibly the
 Cache::Mmap module.

  We're carrying out more tests to see how scaleable is dbm.

 If you're using read-only data, you can leave the dbm handles persistent
 between connections.  That will speed things up.

 You could look at BerkeleyDB, which has a built-in shared memory buffer
 and page-level locking.

 You could also try IPC::MM, which offers a shared memory hash written in
 C with a perl interface.

  Hope these findings are useful to others.

 They are.  Keep 'em coming.

 - Perrin




Re: persistent database problem

2000-11-08 Thread Differentiated Software Solutions Pvt. Ltd

Yes
- Original Message -
From: Jeff Beard [EMAIL PROTECTED]
To: Differentiated Software Solutions Pvt. Ltd [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, October 23, 2000 7:08 PM
Subject: Re: persistent database problem


 Are you using Apache::DBI and establishing a connection in
 your startup.pl?
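 For reference, a minimal startup.pl sketch of that arrangement (the DSN
 and credentials are the ones from the code quoted below; the attribute
 hash is illustrative):

     # Load Apache::DBI before DBI so it can wrap DBI's connect(),
     # then open the handle as each child starts.
     use Apache::DBI ();

     Apache::DBI->connect_on_init(
         'dbi:Pg:dbname=adcept_smg_ctrl', 'postgres', 'postgres',
         { RaiseError => 1 },
     );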

 On Mon, 23 Oct 2000, Differentiated Software Solutions Pvt. Ltd wrote:

  Hi,
  I have started with one httpd and executed the following mod_perl
  program from the browser. We've configured apache to have persistent DBI.
  The idea is that the first time, the database handle will be inactive and it
  will print 'INSIDE'.  From the second time onwards the database handle will
  be active and it will print 'OUTSIDE'.  This is working.
  But sometimes the 'OUTSIDE' comes only from the third or fourth time
  (that is, it takes more than one attempt to become persistent). Why is it
  happening like this?
  Thanks
  Muthu S Ganesh

  mod_perl code is here:

  $rc = $dbh_pg->{Active};
  print "$$: $rc\n";
  if ($rc eq '') {
      print "INSIDE\n";
      $dbh_pg = DBI->connect("dbi:Pg:dbname=adcept_smg_ctrl", "postgres",
                             "postgres", { RaiseError => 1 })
          || die $DBI::errstr;
  }
  else {
      print "OUTSIDE\n";
  }
 
 
  Differentiated Software Solutions Pvt. Ltd.
  176, Ground Floor, 6th Main,
  2nd Block, RT Nagar
  Bangalore - 560032
  Phone : 91 80 3431470
  www.diffs-india.com
 

 --
 Jeff Beard
 ___
 Web:www.cyberxape.com
 Location:   Boulder, CO, USA






Re: persistent database problem

2000-11-08 Thread Differentiated Software Solutions Pvt. Ltd

Hi,

To avoid this problem, we specifically started only one httpd.

Murali

Differentiated Software Solutions Pvt. Ltd.
176, Ground Floor, 6th Main,
2nd Block, RT Nagar
Bangalore - 560032
Phone : 91 80 3431470
www.diffs-india.com
- Original Message -
From: John K. Sterling [EMAIL PROTECTED]
To: Differentiated Software Solutions Pvt. Ltd [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, October 23, 2000 1:35 PM
Subject: Re: persistent database problem



 The db connection happens once for each child - so every time you hit a
 child for the first time it will open up a new connection - you probably
 have apache configured to start with 4 or so kids.

 sterling

  On Mon, 23 Oct 2000,
 Differentiated Software Solutions Pvt. Ltd wrote:

  Hi,
  I have started with one httpd and executed the following mod_perl
  program from the browser. We've configured apache to have persistent DBI.
  The idea is that the first time, the database handle will be inactive and it
  will print 'INSIDE'.  From the second time onwards the database handle will
  be active and it will print 'OUTSIDE'.  This is working.
  But sometimes the 'OUTSIDE' comes only from the third or fourth time
  (that is, it takes more than one attempt to become persistent). Why is it
  happening like this?
  Thanks
  Muthu S Ganesh

  mod_perl code is here:

  $rc = $dbh_pg->{Active};
  print "$$: $rc\n";
  if ($rc eq '') {
      print "INSIDE\n";
      $dbh_pg = DBI->connect("dbi:Pg:dbname=adcept_smg_ctrl", "postgres",
                             "postgres", { RaiseError => 1 })
          || die $DBI::errstr;
  }
  else {
      print "OUTSIDE\n";
  }
 
 
  Differentiated Software Solutions Pvt. Ltd.
  176, Ground Floor, 6th Main,
  2nd Block, RT Nagar
  Bangalore - 560032
  Phone : 91 80 3431470
  www.diffs-india.com
 





Sharing vars across httpds

2000-11-06 Thread Differentiated Software Solutions Pvt. Ltd



Hi,

We want to share a variable across different httpd processes.
Our requirement is as follows :

1. We want to define one variable (which is a large hash).
2. Every httpd should be able to access this variable (read-only).
3. Periodically (every hour) we would like to have another mod_perl program refresh/recreate this large hash with new values.
4. After this, we want the new values in the hash to be available across httpds.

How do we do this ??
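One pattern consistent with the dbm findings posted elsewhere in these threads would be a read-only dbm file which an hourly job rebuilds and renames into place; a sketch (file names hypothetical):

    use Fcntl;
    use MLDBM qw(DB_File);   # nested hash values serialised via DB_File

    # In each request handler: re-tie read-only (cheap), so every child
    # always sees the file most recently renamed into place.
    tie my %shared, 'MLDBM', '/var/data/shared.db', O_RDONLY, 0644
        or die "tie failed: $!";
    my $value = $shared{'some_key'};
    untie %shared;

    # In the hourly rebuild program: write a fresh copy, then rename()
    # it over the old file -- rename is atomic within a filesystem.
    tie my %fresh, 'MLDBM', '/var/data/shared.db.new',
        O_RDWR | O_CREAT, 0644 or die "tie failed: $!";
    $fresh{'some_key'} = { refreshed => time() };   # ... new values ...
    untie %fresh;
    rename '/var/data/shared.db.new', '/var/data/shared.db'
        or die "rename failed: $!";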
Thanks for helping us.

Regards,

Murali

Differentiated Software Solutions Pvt. Ltd.
176, Ground Floor, 6th Main,
2nd Block, RT Nagar
Bangalore - 560032
Phone : 91 80 3431470
www.diffs-india.com


Fast DB access

2000-10-11 Thread Differentiated Software Solutions Pvt. Ltd



Hi,

We have an application where we will have to service as many as 50 queries a second.
We've discovered that most databases just cannot keep pace.

The only option we know is to service queries out of flat files.
Can somebody give us pointers on what modules are available to create a flat-file based database?
Specifically, we want a mechanism to be able to service queries which can return rows where values are greater than a specified value.
We are currently experimenting with dbm and DB_File. These seem to handle hashes quite comfortably. How do we handle these inequality queries?
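On the inequality part: DB_File's BTREE mode keeps keys in sorted order, which makes greater-than scans possible; a sketch (file name and key format hypothetical, assuming keys that sort lexically):

    use DB_File;
    use Fcntl;

    tie my %h, 'DB_File', 'values.db', O_RDONLY, 0644, $DB_BTREE
        or die "tie failed: $!";

    # seq() with R_CURSOR positions the cursor at the first key >= the
    # given value; R_NEXT then walks the rest of the range in order.
    my $db = tied %h;
    my ($key, $val) = ('0100', '');
    for (my $status = $db->seq($key, $val, R_CURSOR);
         $status == 0;
         $status = $db->seq($key, $val, R_NEXT)) {
        print "$key => $val\n";
    }

    untie %h;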

Thanks,

Murali

Differentiated Software Solutions Pvt. Ltd.
176, Ground Floor, 6th Main,
2nd Block, RT Nagar
Bangalore - 560032
Phone : 91 80 3431470
www.diffs-india.com


Re: hi all

2000-10-11 Thread Differentiated Software Solutions Pvt. Ltd

Hi,

We had a similar problem with a postgres db.
We had a large query running to 3 kb, and the query ran forever without ever
getting completed.
We solved this by breaking the query into parts and executing each part
separately... i.e., by creating a hash of the output of one step and filtering
it into the next step, and so on...
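Roughly, the pattern was the following (a sketch with made-up queries and bind values; $dbh is assumed to be a connected DBI handle):

    my ($val_a, $val_b) = ('a1', 'b1');

    # Step 1: run the first, simpler part and build a lookup hash.
    my %seen;
    my $sth1 = $dbh->prepare('SELECT id FROM big_table WHERE col_a = ?');
    $sth1->execute($val_a);
    while (my ($id) = $sth1->fetchrow_array) { $seen{$id} = 1 }

    # Step 2: run the next part and filter its rows through the hash,
    # instead of asking the database to join everything in one giant query.
    my $sth2 = $dbh->prepare('SELECT id, payload FROM other_table WHERE col_b = ?');
    $sth2->execute($val_b);
    while (my ($id, $payload) = $sth2->fetchrow_array) {
        next unless $seen{$id};
        # ... process the surviving rows ...
    }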

Hope this helps.

Murali

- Original Message -
From: Rajesh Mathachan [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, October 11, 2000 2:34 PM
Subject: hi all


 hi all,
 we have a query which goes to 7kb, and we use mysql and php; the server
 is literally crashing when we do the process.
 what is the other alternative for me?
 The site is a quiz site.
 regards
 rajesh mathachan

 --
 QuantumLink Communications, Bombay, India






Re: internal_redirect

2000-09-06 Thread Differentiated Software Solutions Pvt. Ltd

Hi,

We changed the code as you had given. Still we get the same message.

Lots of others have told us that we can only run it under mod_perl. Fine. We
realise this. When we ran it under mod_perl we got this message in the Apache
error log. Hence we ran it under perl.

We feel that there is some basic thing we are missing. It seems as if,
when perl tries to link up to Apache.pm, it is not able to recognize the
method "request". We are using Apache 1.3.6, perl ver 5.005 and mod_perl
version 1.21.
Is there a problem with these versions?
Should we enable anything while compiling mod_perl?
Thanks for any help.

Muthu Ganesh

ps. I'm sorry if we have offended anybody. It's not our intention to cook up
syntax !! We are making sincere attempts to understand why something is not
working. If somebody feels that these questions are below their level, then
please ignore the same.

- Original Message -
From: Ken Williams [EMAIL PROTECTED]
To: Differentiated Software Solutions Pvt. Ltd [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Tuesday, September 05, 2000 8:01 PM
Subject: Re: internal_redirect


 [EMAIL PROTECTED] (Differentiated Software Solutions Pvt. Ltd) wrote:
 We corrected R to r. The problem still remains.
 We ran this program as a standalone perl program and even this bombs. Code
 as follows.

 #!/usr/bin/perl
 my $r;
 use Apache ();

 Apache->request($r);

 $r->internal_redirect('hello.html');

 Error message : Can't locate object method "request" via package "Apache" at
 ../test1.pl line 5.


 As others have mentioned, you can't run this code standalone without
 using some tricks (though they're not very tricky).  But you've got a
 different problem.  According to your code, $r is never assigned to, so
 it should fail with a different error than you're seeing anyway.  You
 want something like this:

#!/usr/bin/perl
use Apache ();

my $r = Apache->request;

$r->internal_redirect('/path/to/hello.html');


   ------
   Ken Williams Last Bastion of Euclidity
   [EMAIL PROTECTED]The Math Forum





internal_redirect

2000-09-05 Thread Differentiated Software Solutions Pvt. Ltd



Hi,

The following code is not working.

use Apache;

Apache->Request->internal_redirect('http://192.168.1.2/smg/html/adcept_logo.gif');

The error is:

Can't locate object method "Request" via package "Apache" at ./test.cgi line 5.

Thanks for your solution.

Bye
Muthu S Ganesh



Differentiated Software Solutions Pvt. Ltd.
176, Ground Floor, 6th Main,
2nd Block, RT Nagar
Bangalore - 560032
Phone : 91 80 3431470
www.diffs-india.com


Re: internal_redirect

2000-09-05 Thread Differentiated Software Solutions Pvt. Ltd

Hi,

We corrected R to r. Problem still remains.
We ran this program as a standalone perl program and even this bombs. Code
as follows.

#!/usr/bin/perl
my $r;
use Apache ();

Apache->request($r);

$r->internal_redirect('hello.html');

Error message : Can't locate object method "request" via package "Apache" at
./test1.pl line 5.

Help please.

Muthu Ganesh
- Original Message -
From: Rob Tanner [EMAIL PROTECTED]
To: Ken Williams [EMAIL PROTECTED]; Differentiated Software
Solutions Pvt. Ltd [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Tuesday, September 05, 2000 12:10 PM
Subject: Re: internal_redirect




 --On 09/05/00 01:23:04 -0500 Ken Williams [EMAIL PROTECTED]
 wrote:

  [EMAIL PROTECTED] (Differentiated Software Solutions Pvt. Ltd) wrote:
  Hi,
 
 The following code is not working.
 
  use Apache;
 
  Apache->Request->internal_redirect('http://192.168.1.2/smg/html/adcept_logo.gif');
 
  The error is:
 
  Can't locate object method "Request" via package "Apache" at
  ./test.cgi line 5.
 
  It's a lowercase 'r'.   Apache->request->...

 That's only half the problem.  Internal redirects are just that.  They
 don't include http://server-name.  They are always internal to the
 immediate server.

 Apache->request->internal_redirect('/smg/html/adcept_logo.gif'); is the
 correct form.  But if 192.168.1.2 isn't this server, then a full
 redirect is required.

 -- Rob

_ _ _ _   __ _ _ _ _
   /\_\_\_\_\/\_\ /\_\_\_\_\_\
  /\/_/_/_/_/   /\/_/ \/_/_/_/_/_/  QUIDQUID LATINE DICTUM SIT,
 /\/_/__\/_/ __/\/_//\/_/  PROFUNDUM VIDITUR
/\/_/_/_/_/ /\_\  /\/_//\/_/
   /\/_/ \/_/  /\/_/_/\/_//\/_/ (Whatever is said in Latin
   \/_/  \/_/  \/_/_/_/_/ \/_/  appears profound)

   Rob Tanner
   McMinnville, Oregon
   [EMAIL PROTECTED]
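
To put the two forms side by side, a rough mod_perl 1.x sketch (the
handler and URIs are only illustrative):

use Apache ();
use Apache::Constants qw(OK REDIRECT);

sub handler {
    my $r = Apache->request;

    if ($r->uri eq '/logo') {
        # Internal redirect: a path on *this* server -- no scheme, no host.
        $r->internal_redirect('/smg/html/adcept_logo.gif');
        return OK;
    }

    # Full (external) redirect: needed when the target is another server.
    $r->header_out(Location => 'http://192.168.1.2/smg/html/adcept_logo.gif');
    return REDIRECT;
}

1;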




internal_redirect

2000-09-04 Thread Differentiated Software Solutions Pvt. Ltd



Hi,

The following code is not working.

use Apache;

Apache->Request->internal_redirect('http://192.168.1.2/smg/html/adcept_logo.gif');

The error is:

Can't locate object method "Request" via package 
"Apache" at ./test.cgi line 5.

Thanks for your solution.

Bye
Muthu S Ganesh



Differentiated Software Solutions Pvt. Ltd.
176, Ground Floor, 6th Main,
2nd Block, RT Nagar
Bangalore - 560032
Phone : 91 80 3431470
www.diffs-india.com


Adding values to Session file

2000-08-18 Thread Differentiated Software Solutions Pvt. Ltd



Hi,

We have a site where we create a session file on login and tie some values.
After a few page visits we want to add more values to the session file,
again using tie.

We find that only the first set of values gets added. Subsequent values do
not get added to this file.
Can somebody tell us what the problem is?

Regards,

Murali

Differentiated Software Solutions Pvt. Ltd.
176, Ground Floor, 6th Main,
2nd Block, RT Nagar
Bangalore - 560032
Phone : 91 80 3431470
www.diffs-india.com


Returning a GIF without Location

2000-07-13 Thread Differentiated Software Solutions Pvt. Ltd



Hi,

We have a mod_perl program which, based on certain parameters, returns the
gif which is to be displayed. We use
print "Location: http://192.168.1.2/smg/images/logo.gif";

This means the browser makes 2 HTTP requests:
1. To the CGI program, to request the location of the GIF
2. To the web server, requesting the GIF itself

We want to combine both into a single HTTP request-response sequence,
i.e., is it possible to read and return the gif file from within a CGI
program itself? If so, how?
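
Something along these lines is what we are after (the path is made up; it
is plain CGI, so it should behave the same under Apache::Registry):

#!/usr/bin/perl
# Pick the gif based on the parameters, then send it directly.
my $gif = '/usr/local/apache/htdocs/smg/images/logo.gif';

open(GIF, "< $gif") or die "can't open $gif: $!";
binmode GIF;                          # gif data is binary
my $data = do { local $/; <GIF> };    # slurp the whole file
close GIF;

print "Content-Type: image/gif\r\n";
print "Content-Length: ", length($data), "\r\n\r\n";
binmode STDOUT;
print $data;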

Thanks for the help.

Regards,

S Muthu Ganesh

Differentiated Software Solutions Pvt. Ltd.
176, Ground Floor, 6th Main,
2nd Block, RT Nagar
Bangalore - 560032
Phone : 91 80 3431470
www.diffs-india.com


Modifying of Apache::Session File

2000-05-31 Thread Differentiated Software Solutions Pvt. Ltd.

Hi,

We've got an application where on initial login we're creating the session
file.
Subsequently, we want to add more hash values into this session file.

Immediately after creation, if we add values to the session file, these
values get stored.
After a few pages we tried to modify the existing session file by:
first, tie-ing the values to a session hash;
second, modifying the session hash.

At the point of modifying the session, the program just hangs and waits
indefinitely.
Can anybody help us out with this problem?
Can anybody help us out with this problem.

Murali

Differentiated Software Solutions Pvt. Ltd.,
176, Gr. Floor, 6th Main
2nd Block RT Nagar
Bangalore - 560 032
India
Ph: 91 80 3431470
email : diffs@vsnl.com
http://www.diffs-india.com




Re: Modifying of Apache::Session File

2000-05-31 Thread Differentiated Software Solutions Pvt. Ltd.

Hi,

We've solved the problem. I don't know whether this is the way.
We untie every time before we tie again, and then change the hash.
This seems to work. Is this the correct way of modifying the contents?

Our session hash is not global. (I hope the session object and the session
hash are the same thing.)
The session hash is local to the functions.

We are using a file to store the values.
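
In code, what we do now looks roughly like this (assuming
Apache::Session::File; the directories and id are made up):

use Apache::Session::File;

my $session_id = 'abc123';    # id created at login (made up here)
my %session;

tie %session, 'Apache::Session::File', $session_id,
    { Directory => '/tmp/sessions', LockDirectory => '/tmp/sessionlocks' };
$session{last_page} = '/reports';    # the first change is saved fine
untie %session;                      # release the file and its lock

# Later, before changing the hash again, tie afresh:
tie %session, 'Apache::Session::File', $session_id,
    { Directory => '/tmp/sessions', LockDirectory => '/tmp/sessionlocks' };
$session{report_count} = 5;          # the second change now sticks
untie %session;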

Murali

-Original Message-
From: Jeffrey W. Baker [EMAIL PROTECTED]
To: Differentiated Software Solutions Pvt. Ltd. [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED] [EMAIL PROTECTED]
Date: 01 June 2000 09:12
Subject: Re: Modifying of Apache::Session File


On Wed, 31 May 2000, Differentiated Software Solutions Pvt. Ltd. wrote:

 Hi,

 We've got an application where on initial login we're creating the session
 file.
 Subsequently, we want to add more hash values into this session file.

 Immediately after creation if we add values to the session file, these
 values get stored.
 After a few pages we tried to modify the existing session file, by
 First, tie-ing the values to a session hash
 Second, Modifying the session hash.

 At the point of modifying the session, the program just hangs and waits
 indefinitely.
 Can anybody help us out with this problem?

You must have leaked some session objects, and now you are holding stale
locks.  It is a frequent problem.  If you are using a global for the
session object, don't do that.  Also don't make a circular reference to
the session object.

-jwb




Re: speed up/load balancing of session-based sites

2000-05-10 Thread Differentiated Software Solutions Pvt. Ltd.

Hi,

Pardon my ignorance, but what is Storable?

Murali
-Original Message-
From: Rodney Broom [EMAIL PROTECTED]
To: Perrin Harkins [EMAIL PROTECTED]; Jeremy Howard
[EMAIL PROTECTED]
Cc: [EMAIL PROTECTED] [EMAIL PROTECTED]
Date: 10 May 2000 13:13
Subject: Re: speed up/load balancing of session-based sites


  Murali said:
   a) NFS mount a server which will store all session data

Just a note, NFS in specific can be very problematic. It takes some real
tuning to get it just right. As for distributed data: session data ~should~
be small, under a kB, so you could move it around in almost any fashion you
like and still be pretty efficient.

On that, you can use Storable to push around really compact/complete Perl
data structures.
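
For example (a tiny sketch; the data is made up):

use Storable qw(freeze thaw);

my %session = ( user => 'murali', hits => 42 );

# freeze() packs the structure into a compact byte string...
my $packed = freeze(\%session);

# ...which you can write to a file, a DB column or a socket, and
# thaw() rebuilds the structure on the other side.
my $copy = thaw($packed);
print $copy->{user}, "\n";    # prints "murali"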

Rodney




Re: speed up/load balancing of session-based sites

2000-05-09 Thread Differentiated Software Solutions Pvt. Ltd.

Hi,

Reading through this interesting posting, I want to focus back on tying
session data across a network.

We are facing a similar problem on our service, which is currently based on
a single server. We've enabled the app so that it is mirrored on a periodic
basis across a range of servers. Any server can service the first call, but
we would like to ensure that the subsequent calls are serviced by the same
server that answered the first http call. If not, then I have to store what
the user had done in the previous calls, and this should be accessible to
all the servers.

As I understand from this discussion, we have 2 methods, both involving
creating a session-server which will store all session data:
a) NFS mount a server which will store all session data
b) Have a DB in this server which stores this data; through a network,
connect to the DB and retrieve the info (a rough sketch follows).
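
(Assuming Apache::Session::Postgres for (b); the DSN and data here are made
up, just to show the shape:)

use Apache::Session::Postgres;

my $session_id = undef;    # undef asks Apache::Session to create a new id
my %session;
tie %session, 'Apache::Session::Postgres', $session_id, {
    DataSource => 'dbi:Pg:dbname=sessions;host=session-server',
    UserName   => 'www',
    Password   => 'secret',
    Commit     => 1,       # commit after each write
};

$session{cart} = [ 'item1', 'item2' ];   # now visible to every web server
untie %session;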

My questions:
a) Is an NFS mount unreliable, especially if the number of hits on each
server will go up to 20 per second? Does this place a limit on the number
of servers which can be connected to the session server?
b) There's lots of chat about Oracle connections being slow. Is this the
general user experience? We've got our app written using Postgres. We had
problems (quite severe) in the initial days. We've somewhat stabilized by
getting to know all the quirks of postgres. For a very high-hit site, is it
wise to switch to Oracle?
We've solved lots of performance problems, but still find Pg reliability a
bit questionable.
c) How are Oracle connections across a network?
Other than having a common session server (which will also hit limits if
there are too many site-servers), are there any other ways to have a really
scalable solution to this problem?
Aren't cookies much more scalable than the session server?

Thanks,

Murali

-Original Message-
From: Leon Brocard [EMAIL PROTECTED]
To: 'Jeffrey W. Baker' [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED] [EMAIL PROTECTED]
Date: 09 May 2000 16:54
Subject: RE: speed up/load balancing of session-based sites


 -Original Message-
 From: Jeffrey W. Baker [mailto:[EMAIL PROTECTED]]
 Sent: Monday, May 08, 2000 9:19 PM
 To: Leslie Mikesell
 Cc: Jeffrey W. Baker; Greg Stark; [EMAIL PROTECTED]
 Subject: Re: speed up/load balancing of session-based sites


 On Mon, 8 May 2000, Leslie Mikesell wrote:

  According to Jeffrey W. Baker:
 
   I keep meaning to write this up as an Apache:: module, but it's pretty
   trivial to cons up an application-specific version. The only thing this
   doesn't provide is a way to deal with large data structures. But
   generally if the application is big enough to need such data structures
   you have a real database from which you can reconstruct the data on each
   request, just store the state information in the cookie.

  Your post does a significant amount of hand waving regarding people's
  requirements for their websites.  I try to keep an open mind when giving
  advice and realize that people all have different needs.  That's why I
  prefixed my advice with "On my sites..."

 Can anyone quantify this a bit?

  On my sites, I use the session as a general purpose data sink.  I find
  that I can significantly improve user experience by keeping things in
  the session related to the user-site interaction.  These session objects
  contain way more information than could be stuffed into a cookie, even
  if I assumed that all of my users had cookies turned on.  Note also that
  sending a large cookie can significantly increase the size of the
  request.  That's bad for modem users.

  Your site may be different.  In fact, it had better be! :)

 Have you timed your session object retrieval and the cleanup code
 that becomes necessary with server-session data compared to
 letting the client send back (via cookies or URL) everything you
 need to reconstruct the necessary state without keeping temporary
 session variables on the server?  There must be some size where
 the data values are as easy to pass as the session key, and some
 size where it becomes slower and more cumbersome.  Has anyone
 pinned down the size where a server-side lookup starts to win?
jwb wrote:

 I have really extensive benchmarks for every part of
 Apache::Session.  These will be released with version 1.5,
 which also includes more than fifty new unit tests.

Cool. Strict benchmarking and testing is severely lacking in general
in Perl modules.

Apache::Session rocks; however, the name doesn't describe the functionality
of the module (it has nothing to do with Apache). Are there any plans
to change it to "Persistent::Session" or some other name? I'm sure
people are overlooking it because of this.

Leon
--
Leon Brocard   |   perl "programmer"   |   [EMAIL PROTECTED]





Mod perl training material

2000-05-09 Thread Differentiated Software Solutions Pvt. Ltd.

Hi,

We're devising some training material for new people in our firm.
Can anybody suggest a site which gives some decent exercises in CGI/Perl
and mod_perl? Something like a project which can be completed in 2 weeks,
at the end of which they'll have the hang of all the basics. I would also
like them to use DBI in this period.

Thanks

Murali

Differentiated Software Solutions Pvt. Ltd.,
176, Gr. Floor, 6th Main
2nd Block RT Nagar
Bangalore - 560 032
India
Ph: 91 80 3431470
email : diffs@vsnl.com
http://www.diffs-india.com




Re: Implementing security in CGI

2000-04-20 Thread Differentiated Software Solutions Pvt. Ltd.

Hi,

Persistent cookies were the dilemma I was in.

I also found that there are persistent and non-persistent cookies. I wrote
some test Javascript programs and found that we can have cookies which
die after the browser exits. Would this be a good option?
Another nagging doubt: is this the way the world implements security with
CGI, or am I missing something?

Thanks a lot for informing me about this news group. I'll keep tabs on this
group.

Murali
-Original Message-
From: Jeff Beard [EMAIL PROTECTED]
To: Differentiated Software Solutions Pvt. Ltd. [EMAIL PROTECTED];
[EMAIL PROTECTED] [EMAIL PROTECTED]
Date: 20 April 2000 10:55
Subject: Re: Implementing security in CGI


This is a question for comp.infosystems.www.authoring.cgi.

But since I'm here...

I would check for the cookie every time a request is made. If you use
Apache::Session there will be a separate session data store from the user
data, which is probably what you really want. Apache::Session will allow
you to associate whatever data you like with the session id within its own
schema.

If the browser is closed, the cookie will remain. You can have a logout
feature, but there will always be a significant percentage of users that
won't bother. So limit the life of the cookie with the time value and
periodically cull stale sessions on the server; a rough illustration follows.
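
Roughly, with CGI::Cookie (the cookie name, id and lifetime here are only
illustrative):

use CGI::Cookie;

my $session_id = 'abc123';    # made-up id from Apache::Session

# Issue the session cookie with a limited life; leaving out -expires
# gives a cookie that dies when the browser exits instead.
my $cookie = CGI::Cookie->new(
    -name    => 'SESSION_ID',
    -value   => $session_id,
    -expires => '+8h',
    -path    => '/',
);
print "Set-Cookie: $cookie\r\n";

# On later requests, read it back:
my %cookies = CGI::Cookie->fetch;
my $sid = $cookies{SESSION_ID} ? $cookies{SESSION_ID}->value : undef;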

--Jeff


At 05:21 PM 4/19/00, Differentiated Software Solutions Pvt. Ltd. wrote:
Hi,

My question is much more basic than that. I wanted to validate my design
ideas on programmatic security.
I would like somebody to go through the following and tell me whether I'm on
the right track.

The idea I had was: at the time of login, I generate the session id, which I
write to the cookie.
I have also tied the user's login profile to this session_id.
Every other screen checks for the cookie's existence, reads back the
session_id and gets the user's profile. I hope I'm right till then.
When the user signs out, we can delete the tied file.
Now any person who has access to the same browser will still have to login
to get to the inner pages.

If the browser is killed without signing out of the system, even then
there's no problem.
The next person who gets access to the browser and tries to access any inner
page will not be able to, because the cookie with the session-id does not
exist.

Am I right ??? Please help.

Thanks,

Murali

-Original Message-
From: Gunther Birznieks [EMAIL PROTECTED]
To: [EMAIL PROTECTED] [EMAIL PROTECTED]
Date: 19 April 2000 18:44
Subject: Re: Implementing security in CGI


 Apache::Session could be useful. But the session key that is generated is
 arguably not necessarily the most secure that it could be. But it is
 pretty good.

 I'm probably opening up a can of worms by saying this.

 The MD5 hash itself is relatively secure as hashes go (although SHA hash
 space could be better). But you are relying on underlying system variables
 to determine what is put into MD5 hashing in the first place -- and this
 data is not necessarily the most random -- process ID, time, memory
 address of the created hash, etc. are a bit deterministic. rand() might
 be good if it is on a system that plugs natively into a good entropy
 generator on that machine.

 To get a better key, you probably end up spending more time pulling
 relatively random data sources together, so key generation itself would
 be slow -- a computational tradeoff. Depends on how "secure" you really
 want to be. For most situations, Apache::Session's key generator will
 work fine.

 It also depends how long you intend the sessions to be active. Will they
 become a "token" that is used in lieu of authentication once the user has
 entered a username and password or issued a digital client certificate to
 your web site? Or will it be used after the user registers for a month+
 to allow them to get back into your site without remembering a password.

 -- Gunther
 
 At 01:34 PM 4/19/00 +0530, Differentiated Software Solutions Pvt. Ltd.
wrote:
 Hi,
 
 We are having a site which is programmed with perl/CGI.
 To enter the site we have a login and password, after which some reports
 are displayed.
 
 I know that using cookies it is possible to secure the site.
 Can somebody guide me on how to design and implement cookie-based
 security? Sites and books on the same will be greatly appreciated.
 
 Would Apache::Session be useful for this ??
 
 Thanks for the help,
 
 Murali
 
 Differentiated Software Solutions Pvt. Ltd.,
 176, Gr. Floor, 6th Main
 2nd Block RT Nagar
 Bangalore - 560 032
 India
 Ph: 91 80 3431470
 email : diffs@vsnl.com
 http://www.diffs-india.com
 
 
 __
 Gunther Birznieks ([EMAIL PROTECTED])
 Extropia - The Web Technology Company
 http://www.extropia.com/

Re: Implementing security in CGI

2000-04-19 Thread Differentiated Software Solutions Pvt. Ltd.

Hi,

My question is much more basic than that. I wanted to validate my design
ideas on programmatic security.
I would like somebody to go through the following and tell me whether I'm on
the right track.

The idea I had was: at the time of login, I generate the session id, which I
write to the cookie.
I have also tied the user's login profile to this session_id.
Every other screen checks for the cookie's existence, reads back the
session_id and gets the user's profile. I hope I'm right till then.
When the user signs out, we can delete the tied file.
Now any person who has access to the same browser will still have to login
to get to the inner pages.

If the browser is killed without signing out of the system, even then
there's no problem.
The next person who gets access to the browser and tries to access any inner
page will not be able to, because the cookie with the session-id does not
exist.

Am I right ??? Please help.
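
To make the flow concrete, a rough sketch (assuming Apache::Session::File
and CGI::Cookie; the paths and field names are made up):

use CGI;
use CGI::Cookie;
use Apache::Session::File;

my $q       = CGI->new;
my %cookies = CGI::Cookie->fetch;
my %session;

if ($cookies{SESSION_ID}) {
    # Inner page: the cookie exists, so read the profile back.
    tie %session, 'Apache::Session::File', $cookies{SESSION_ID}->value,
        { Directory => '/tmp/sessions', LockDirectory => '/tmp/sessionlocks' };
} else {
    # Login: create a fresh session, store the profile, send the cookie.
    # (A real inner page would bounce to the login form here instead.)
    tie %session, 'Apache::Session::File', undef,
        { Directory => '/tmp/sessions', LockDirectory => '/tmp/sessionlocks' };
    $session{login} = $q->param('login');
    my $c = CGI::Cookie->new(-name  => 'SESSION_ID',
                             -value => $session{_session_id});
    print "Set-Cookie: $c\r\n";
}

print "Content-Type: text/html\r\n\r\n";
print "Welcome, $session{login}\n";
untie %session;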

Thanks,

Murali

-Original Message-
From: Gunther Birznieks [EMAIL PROTECTED]
To: [EMAIL PROTECTED] [EMAIL PROTECTED]
Date: 19 April 2000 18:44
Subject: Re: Implementing security in CGI


Apache::Session could be useful. But the session key that is generated is
arguably not necessarily the most secure that it could be. But it is pretty
good.

I'm probably opening up a can of worms by saying this.

The MD5 hash itself is relatively secure as hashes go (although SHA hash
space could be better). But you are relying on underlying system variables
to determine what is put into MD5 hashing in the first place -- and this
data is not necessarily the most random-- process ID, time, memory address
of the created hash, etc. are a bit deterministic. rand() might be good
if it is on a system that plugs natively into a good entropy generator on
that machine.

To get a better key, you probably end up spending more time pulling
relatively random data sources together so key generation itself would be
slow-- a computational tradeoff. Depends on how "secure" you really want to
be. For most situations,  Apache::Session's key generator will work fine.

It also depends how long you intend the sessions to be active. Will they
become a "token" that is used in lieu of authentication once the user has
entered a username and password or issued a digital client certificate to
your web site? Or will it be used after the user registers for a month+ to
allow them to get back into your site without remembering a password.

-- Gunther

At 01:34 PM 4/19/00 +0530, Differentiated Software Solutions Pvt. Ltd.
wrote:
Hi,

We are having a site which is programmed with perl/CGI.
To enter the site we have a login and password, after which some reports
are displayed.

I know that using cookies it is possible to secure the site.
Can somebody guide me on how to design and implement cookie-based security?
Sites and books on the same will be greatly appreciated.

Would Apache::Session be useful for this ??

Thanks for the help,

Murali

Differentiated Software Solutions Pvt. Ltd.,
176, Gr. Floor, 6th Main
2nd Block RT Nagar
Bangalore - 560 032
India
Ph: 91 80 3431470
email : diffs@vsnl.com
http://www.diffs-india.com


__
Gunther Birznieks ([EMAIL PROTECTED])
Extropia - The Web Technology Company
http://www.extropia.com/




Where to get Apache::Session

2000-04-18 Thread Differentiated Software Solutions Pvt. Ltd.

Hi,

Can somebody please tell me where to download the Apache::Session module from?

Thanks a lot

Murali

Differentiated Software Solutions Pvt. Ltd.,
176, Gr. Floor, 6th Main
2nd Block RT Nagar
Bangalore - 560 032
India
Ph: 91 80 3431470
email : diffs@vsnl.com
http://www.diffs-india.com