Re: Persistent data across processes

2001-06-26 Thread Joachim Zobel

At 14:54 25.06.2001 -0700, you wrote:
> Hi all,
>
> I'd like a way to store complex data structures across Apache processes.
> I've looked at Apache::DBI for an example: my tests say that it has to
> create a new dbh for every process. I've looked at IPC::Shareable, but it
> has to copy data, meaning that my data structures can only be so complex.

We are using Data::Dumper to store the data structures as TEXT in a MySQL
table, from which they can be retrieved by id.

This is pretty fast for low to medium traffic and has the advantage that
the data is persistent (restarting Apache won't zap the carts) and human
readable (debugging is easy).
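
A minimal sketch of that approach, assuming a hypothetical carts table
(id INT PRIMARY KEY, data TEXT) and made-up connection details:

   use strict;
   use DBI;
   use Data::Dumper;

   my $dbh = DBI->connect('dbi:mysql:shop', 'user', 'password',
                          { RaiseError => 1 });

   sub store_cart {
       my ($id, $cart) = @_;
       local $Data::Dumper::Purity = 1;   # dump code that survives self-references
       my $text = Data::Dumper->Dump([$cart], ['cart']);
       $dbh->do('REPLACE INTO carts (id, data) VALUES (?, ?)',
                undef, $id, $text);
   }

   sub fetch_cart {
       my ($id) = @_;
       my ($text) = $dbh->selectrow_array(
           'SELECT data FROM carts WHERE id = ?', undef, $id);
       return undef unless defined $text;
       my $cart;
       eval "$text; 1" or die "cart $id would not eval: $@";  # revives $cart
       return $cart;
   }

Note that fetch_cart evals whatever is in the table, so the stored data has
to be trusted; Storable would avoid the eval, at the cost of losing the
human-readable TEXT column.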

Hth,
Joachim

--
... a race of inventive dwarfs who can be hired for anything.
- Bertolt Brecht, Life of Galileo




Re: Persistent data across processes

2001-06-26 Thread darren chamberlain

Rodney Broom [EMAIL PROTECTED] said something to this effect on 06/25/2001:
> Hi all,
>
> I'd like a way to store complex data structures across Apache
> processes. I've looked at Apache::DBI for an example: my tests
> say that it has to create a new dbh for every process. I've
> looked at IPC::Shareable, but it has to copy data, meaning that
> my data structures can only be so complex.
>
> Thoughts?

Apache::Session, currently at 1.53. Here's an excerpt from the perldoc:

 Sharing data between Apache processes

 When you share data between Apache processes, you need to decide
 on a session ID number ahead of time and make sure that an object
 with that ID number is in your object store before starting your
 Apache.  How you accomplish that is your own business.  I use the
 session ID 1.  Here is a short program in which we use
 Apache::Session to store our database access information.

   use Apache;
   use Apache::Session::File;
   use DBI;

   use strict;

   my %global_data;

   eval {
       tie %global_data, 'Apache::Session::File', 1,
           {Directory => '/tmp/sessiondata'};
   };
   if ($@) {
       die "Global data is not accessible: $@";
   }

   my $dbh = DBI->connect($global_data{datasource},
       $global_data{username}, $global_data{password})
       || die $DBI::errstr;

   undef %global_data;

   #program continues...

 As shown in this example, you should undef or untie your session
 hash as soon as you are done with it.  This will free up any
 locks associated with your process.

Is this what you are looking for?

(darren)

-- 
Make no laws whatever concerning speech, and speech will be free; so soon
as you make a declaration on paper that speech shall be free, you will have
a hundred lawyers proving that freedom does not mean abuse, nor liberty
license; and they will define and define freedom out of existence.  
-- Voltairine de Cleyre



Re: Persistent data across processes

2001-06-26 Thread Olivier Poitrey

I'm working on two modules that can help you do this job. Their names are
Apache::SharedMem and Apache::Cache. You can find them at:

ftp://ftp.rhapsodyk.net/pub/devel/perl/

Please report bugs to me.
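
A rough sketch of the intended use, going from the module's docs at the
above URL; the method names here (new/set/get/lock/unlock) are my reading
of those docs, so double-check them against the module's own perldoc:

   use Apache::SharedMem qw(:lock);

   # one shared store per server, backed by IPC shared memory
   my $share = Apache::SharedMem->new or die $Apache::SharedMem::ERROR;

   # explicit lock so the read-modify-write below is atomic
   $share->lock(LOCK_EX);
   my $hits = $share->get('hits') || 0;
   $share->set(hits => $hits + 1);
   $share->unlock;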

- Original Message -
From: Rodney Broom [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, June 25, 2001 11:54 PM
Subject: Persistent data across processes


Hi all,

I'd like a way to store complex data structures across Apache processes.
I've looked at Apache::DBI for an example: my tests say that it has to
create a new dbh for every process. I've looked at IPC::Shareable, but it
has to copy data, meaning that my data structures can only be so complex.

Thoughts?

---
Rodney Broom
Programmer: Desert.Net






Re: Persistent data across processes

2001-06-26 Thread Joshua Chamas

> Rodney Broom wrote:
>
> Hi all,
>
> I'd like a way to store complex data structures across Apache
> processes. I've looked at Apache::DBI for an example: my tests say
> that it has to create a new dbh for every process. I've looked at
> IPC::Shareable, but it has to copy data, meaning that my data
> structures can only be so complex.
 

If you like MLDBM, I created MLDBM::Sync for use in Apache-like
environments.  MLDBM::Sync creates a file-locking wrapper around
underlying DBMs like DB_File, GDBM_File, or SDBM_File.
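
A minimal usage sketch along the lines of the module's synopsis (the file
path and keys are just examples):

   use MLDBM::Sync;
   use MLDBM qw(MLDBM::Sync::SDBM_File Storable);  # wrapped SDBM_File takes values > 1024 bytes
   use Fcntl qw(:DEFAULT);

   my %cache;
   my $sync_obj = tie %cache, 'MLDBM::Sync', '/tmp/syncdbm',
       O_CREAT|O_RDWR, 0640;

   # nested structures serialize transparently through MLDBM
   $cache{cart_42} = { items => [ 'socks' ], total => 9.99 };

   # explicit lock makes a read-modify-write atomic
   $sync_obj->Lock;
   $cache{counter} = ($cache{counter} || 0) + 1;
   $sync_obj->UnLock;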

Try bench/bench_sync.pl on your platform for some comparison numbers.
Below are the numbers I get on my platform, a dual PIII-450 running
Linux:

--Josh
_
Joshua Chamas   Chamas Enterprises Inc.
NodeWorks - Web Link Checking  Huntington Beach, CA  USA 
http://www.nodeworks.com1-714-625-4051

=== INSERT OF 50 BYTE RECORDS ===
  Time for 100 writes + 100 reads for  SDBM_File                 0.15 seconds     12288 bytes
  Time for 100 writes + 100 reads for  MLDBM::Sync::SDBM_File    0.17 seconds     12288 bytes
  Time for 100 writes + 100 reads for  GDBM_File                 3.30 seconds     18066 bytes
  Time for 100 writes + 100 reads for  DB_File                   4.32 seconds     20480 bytes

=== INSERT OF 500 BYTE RECORDS ===
  Time for 100 writes + 100 reads for  SDBM_File                 0.18 seconds    771072 bytes
  Time for 100 writes + 100 reads for  MLDBM::Sync::SDBM_File    0.58 seconds    110592 bytes
  Time for 100 writes + 100 reads for  GDBM_File                 3.42 seconds     63472 bytes
  Time for 100 writes + 100 reads for  DB_File                   4.32 seconds     81920 bytes

=== INSERT OF 5000 BYTE RECORDS ===
 (skipping test for SDBM_File 1024 byte limit)
  Time for 100 writes + 100 reads for  MLDBM::Sync::SDBM_File    1.39 seconds   1850368 bytes
  Time for 100 writes + 100 reads for  GDBM_File                 4.63 seconds    832400 bytes
  Time for 100 writes + 100 reads for  DB_File                   5.73 seconds    839680 bytes

=== INSERT OF 20000 BYTE RECORDS ===
 (skipping test for SDBM_File 1024 byte limit)
  Time for 100 writes + 100 reads for  MLDBM::Sync::SDBM_File    4.83 seconds   8304640 bytes
  Time for 100 writes + 100 reads for  GDBM_File                 4.65 seconds   2063912 bytes
  Time for 100 writes + 100 reads for  DB_File                   6.48 seconds   2068480 bytes

=== INSERT OF 50000 BYTE RECORDS ===
 (skipping test for SDBM_File 1024 byte limit)
  Time for 100 writes + 100 reads for  MLDBM::Sync::SDBM_File   12.86 seconds  16192512 bytes
  Time for 100 writes + 100 reads for  GDBM_File                 5.68 seconds   5337944 bytes
  Time for 100 writes + 100 reads for  DB_File                   6.87 seconds   5345280 bytes



Persistent data across processes

2001-06-25 Thread Rodney Broom



Hi all,

I'd like a way to store complex data structures across Apache processes.
I've looked at Apache::DBI for an example: my tests say that it has to
create a new dbh for every process. I've looked at IPC::Shareable, but it
has to copy data, meaning that my data structures can only be so complex.

Thoughts?

---
Rodney Broom
Programmer: Desert.Net