Re: mod_perl shared memory with MM

2001-03-12 Thread Sean Chittenden

Sorry for taking a while to get back to this, road trips
can be good at interrupting the flow of life.

It depends on the application.  I typically use a few
instances of open() for the sake of simplicity, but I have also had
decent luck with IPC::Open(2|3).  The only problems I've had with
either were an OS-specific bug on Linux (the pipe was newline
buffering and dropping all characters over 1023; I moved to FreeBSD and
the problem went away).

Words of wisdom: start slow, because debugging over a pipe can
be a headache (understatement).  Simple additions + simple debugging =
good thing(tm).  I've spent too many afternoons/nights ripping apart
these kinds of programs only to find a small typo, and then
reconstructing a much larger query/response set of programs.  -sc

PS You also want to attach the program listening on the named
pipe to something like DJB's daemontools
(http://cr.yp.to/daemontools.html) to prevent new requests from
blocking if the listener dies: bad thing(tm).
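The query/response style Sean describes can be sketched with the core IPC::Open2 module; here `cat` stands in for the session-lookup daemon, and the query format and session ID are invented for illustration (a real setup would talk to a named pipe instead, but the line-based protocol is the same):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IPC::Open2;

# Spawn a child and grab both ends of its stdin/stdout.
# 'cat' simply echoes each line back, standing in for a real
# session server that would answer queries.
my $pid = open2(my $reader, my $writer, 'cat')
    or die "open2 failed: $!";

print {$writer} "GET session:12345\n";   # one-line query
close $writer;                           # signal end-of-request
                                         # (a long-lived server would
                                         # keep the pipe open instead)
my $reply = <$reader>;                   # one-line reply
waitpid $pid, 0;

print "got: $reply";
```

Closing the write end before reading avoids the stdio-buffering deadlock the perlipc man page warns about; with a line-buffered server on the other end you could keep the handle open for many round trips.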

On Wed, Feb 28, 2001 at 10:23:06PM -0500, Adi Fairbank wrote:
 
 Sean,
 
 Yeah, I was thinking about something like that at first, but I've never played
 with named pipes, and it didn't sound too safe after reading the perlipc man
 page.  What do you use, Perl open() calls, IPC::Open2/3, IPC::ChildSafe, or
 something else?  How stable has it been for you?  I just didn't like all those
 warnings in the IPC::Open2 and perlipc man pages.
 
 -Adi
 
 Sean Chittenden wrote:
  
  The night of Fat Tuesday no less...  that didn't help any
  either.  ::sigh::
  
  Here's one possibility that I've done in the past because I
  needed mod_perl sessions to be able to talk with non-mod_perl
  programs.  I set up a named bi-directional pipe that let you write a
  query to it for session information, and it wrote back with whatever
  you were looking for.  Given that this needed to support Perl, Java,
  and C, it worked _very_ well and was extremely fast.  Something you
  may also want to consider because it keeps your session information
  outside of Apache (in case of a restart of Apache, or a desire to
  synchronize session information across multiple hosts).
  
  -sc
 
 

-- 
Sean Chittenden[EMAIL PROTECTED]



Re: mod_perl shared memory with MM

2001-03-11 Thread Christian Jaeger

At 22:23 -0500 on 10.3.2001, DeWitt Clinton wrote:
On Sat, Mar 10, 2001 at 04:35:02PM -0800, Perrin Harkins wrote:
   Christian Jaeger wrote:
   Yes, it uses a separate file for each variable. This also solves
   locking: each variable has its own file lock.

  You should take a look at DeWitt Clinton's Cache::FileCache module,
  announced on this list.  It might make sense to merge your work into
  that module, which is the next generation of the popular File::Cache
  module.

Yes!  I'm actively looking for additional developers for the Perl
Cache project.  I'd love new implementations of the Cache interface.
Cache::BerkeleyDBCache would be wonderful.  Check out:

   http://sourceforge.net/projects/perl-cache/

For what it is worth, I don't explicitly lock.  I do atomic writes
instead, and have yet to hear anyone report a problem in the year the
code has been public.
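For readers unfamiliar with the trick: lock-free atomic updates of this kind are usually done by writing to a temporary file and then rename()ing it over the target, since rename() is atomic within a POSIX filesystem. A generic sketch (not Cache::FileCache's actual code; the file name is invented):

```perl
use strict;
use warnings;

# Write $data to $path atomically: concurrent readers see either
# the complete old contents or the complete new ones, never a
# half-written file.  No locks are needed.
sub atomic_write {
    my ($path, $data) = @_;
    # The temp file lives in the same directory, hence on the same
    # filesystem, which is what makes rename() atomic here.
    my $tmp = "$path.tmp.$$";
    open my $fh, '>', $tmp or die "open $tmp: $!";
    print {$fh} $data;
    close $fh          or die "close: $!";
    rename $tmp, $path or die "rename: $!";
}

atomic_write('cache_entry.dat', "session data\n");
```

The trade-off, as the following discussion brings out, is that this protects single writes but not read-modify-write sequences.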


I've looked at Cache::FileCache now and think it's (currently) not 
possible to use for IPC::FsSharevars:

I really miss locking capabilities. Imagine a script that reads a 
value at the beginning of a request and writes it back at the end of 
the request. If the value is not locked during this time, another 
instance can read the same value and then write back another change, 
which is then overwritten by the first instance.
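The race Christian describes is exactly what holding an exclusive flock() across the whole read-modify-write sequence prevents; a condensed, generic sketch (the counter file is invented, and this is not IPC::FsSharevars' actual code):

```perl
use strict;
use warnings;
use Fcntl qw(:flock O_RDWR O_CREAT);

# Increment a counter safely: the exclusive lock is held across
# the read, the modification, and the write-back, so two processes
# cannot interleave and lose an update.
sub locked_increment {
    my ($path) = @_;
    sysopen my $fh, $path, O_RDWR | O_CREAT or die "open $path: $!";
    flock $fh, LOCK_EX or die "flock: $!";
    my $val = <$fh>;
    $val = 0 unless defined $val;
    seek $fh, 0, 0;
    truncate $fh, 0;
    print {$fh} $val + 1, "\n";
    close $fh;    # closing the handle releases the lock
    return $val + 1;
}
```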

IPC::FsSharevars even goes one step further: instead of locking 
everything for a particular session, it only locks individual 
variables. So you can say "I use the variables $foo and %bar from 
session 12345 and will write %bar back", in which case %bar of 
session 12345 is locked until it is written back, while $foo and @baz 
remain unlocked and may be read (and written) by other instances. 
:-) Such behaviour is useful if you have framesets where a browser 
may request several frames of the same session in parallel (you can 
see an example on http://testwww.ethz.ch: click on 'Suche', then on 
the submit button; the two frames that appear are executed in parallel 
and access different session variables), or for handling session-
independent (global) data.

One thing to be careful about in such situations is deadlock. 
IPC::FsSharevars prevents deadlocks by acquiring all needed locks at 
the same time (this is done by first requesting a general session 
lock and then trying to lock all needed variable container files; if 
that fails, the session lock is freed again and the process waits for 
a Unix signal indicating a change in the locking state). Acquiring all 
locks at the same time is more efficient than always acquiring locks 
in the same order.
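The all-or-nothing strategy can be approximated with non-blocking flock() attempts: try every lock, and if any single one fails, release everything and retry later. A sketch under those assumptions (this is not IPC::FsSharevars' actual implementation):

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

# Try to take an exclusive lock on every file in @paths at once.
# Returns the open handles (which hold the locks) on success, or an
# empty list if any single lock could not be taken; in that case the
# handles already collected go out of scope and are unlocked, so no
# partially-locked state is left behind to cause deadlock.
sub try_lock_all {
    my (@paths) = @_;
    my @held;
    for my $path (@paths) {
        open my $fh, '>>', $path or die "open $path: $!";
        unless (flock $fh, LOCK_EX | LOCK_NB) {
            return ();    # drop @held => all locks released
        }
        push @held, $fh;
    }
    return @held;    # caller keeps these handles for the lock's lifetime
}
```

A caller that gets an empty list back would sleep or wait for a signal, as described above, and then retry the whole set.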


BTW some questions/suggestions for DeWitt:
- why don't you use 'real' constants for $SUCCESS and the like? (use constant)
- you probably should either append the userid of the process to 
/tmp/FileCache or make this folder globally writeable (and set the 
sticky flag). Otherwise other users get a permission error.
- why don't you use Storable.pm? It should be much faster than Data::Dumper

I have some preliminary benchmark code -- only good for relative
benchmarking, but it is a start.  I'd be happy to post the results
here if people are interested.

Could you send me the code? Then I'll look into benchmarking my module too.




[OT] Re: mod_perl shared memory with MM

2001-03-11 Thread DeWitt Clinton

On Sun, Mar 11, 2001 at 03:33:12PM +0100, Christian Jaeger wrote:

 I've looked at Cache::FileCache now and think it's (currently) not 
 possible to use for IPC::FsSharevars:
 
 I really miss locking capabilities. Imagine a script that reads a 
 value at the beginning of a request and writes it back at the end of 
 the request. If the value is not locked during this time, another 
 instance can read the same value and then write back another change, 
 which is then overwritten by the first instance.


I'm very intrigued by your thinking on locking.  I had never
considered the transaction based approach to caching you are referring
to.  I'll take this up privately with you, because we've strayed far
off the mod_perl topic, although I find it fascinating.



 - why don't you use 'real' constants for $SUCCESS and the like? (use
 constant)

Two reasons, mostly historical, and not necessarily good ones.

One, I benchmarked some code once that required high performance, and
the use of constants was just slightly slower.

Two, I like the syntax $hash{$CONSTANT}.  If I remember correctly,
$hash{CONSTANT} didn't work.  This may have changed in newer versions
of Perl.

Obviously those are *very* small issues, and so it is mostly by habit
that I don't use constant.  I would consider changing, but it would
mean asking everyone using the code to change too, because they
currently import and use the constants as Exported scalars.

Do you know of a very important reason to break compatibility and
force the switch?  I'm not opposed to switching if I have to, but I'd
like to minimize the impact on the users.



 - you probably should either append the userid of the process to 
 /tmp/FileCache or make this folder globally writeable (and set the 
 sticky flag). Otherwise other users get a permission error.

As of version 0.03, the cache directories, but not the cache entries,
are globally writable by default.  Users can override this by changing
the 'directory_umask' option, or keep data private altogether by
changing the 'cache_root'.  What version did you test with?  There may
be a bug in there.



 - why don't you use Storable.pm? It should be much faster than Data::Dumper

The TODO contains "Replace Data::Dumper with Storable (maybe)".  :) 

The old File::Cache module used Storable, btw.

It will be trivial to port the new Cache::FileCache to use Storable.
I simply wanted to wait until I had the benchmarking code so I could
be sure that Storable was faster.  Actually, I'm not 100% sure that I
expect Storable to be faster than Data::Dumper.  If Data::Dumper
turns out to be about equally fast, then I'll stay with it, because it
is available on all Perl installations, I believe.

Do you know if Storable is definitely faster?  If you have benchmarks
then I am more than happy to switch now.  Or do you know of a reason,
feature-wise, that I should switch?  Again, it is trivial to do so.
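A quick relative comparison is easy to get with the core Benchmark module; the nested structure below is arbitrary stand-in data, and real session images may behave differently:

```perl
use strict;
use warnings;
use Benchmark qw(timethese);
use Storable qw(freeze thaw);
use Data::Dumper;

# A nested structure roughly shaped like a session: hash of arrays.
my $session = { map { "key$_" => [ 1 .. 20 ] } 1 .. 50 };

$Data::Dumper::Purity = 1;   # safe output for self-referential data

# Round-trip (serialize + deserialize) each way 1000 times.
timethese(1000, {
    storable => sub { my $copy = thaw(freeze($session)) },
    dumper   => sub { my $copy = eval Dumper($session) },
});
```

Note that the Data::Dumper round trip pays for a string eval on every thaw, which is where Storable usually wins.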



 I have some preliminary benchmark code -- only good for relative
 benchmarking, but it is a start.  I'd be happy to post the results
 here if people are interested.
 
 Could you send me the code? Then I'll look into benchmarking my
 module too.

I checked it in as Cache::CacheBenchmark.  It isn't good code, nor
does it necessarily work just yet.  I simply checked it in while I was
in the middle of working on it.  I'm turning it into a real
benchmarking class for the cache, and hopefully that will help you a
little bit.


Cheers,

-DeWitt





Re: [OT] Re: mod_perl shared memory with MM

2001-03-11 Thread Perrin Harkins

 I'm very intrigued by your thinking on locking.  I had never
 considered the transaction based approach to caching you are referring
 to.  I'll take this up privately with you, because we've strayed far
 off the mod_perl topic, although I find it fascinating.

One more suggestion before you take this off the list: it's nice to have
both.  There are uses for explicit locking (I remember Randal saying he
wished File::Cache had some locking support), but most people will be happy
with atomic updates, and that's usually faster.  Gunther's eXtropia stuff
supports various locking options, and you can read some of the reasoning
behind it in the docs at
http://new.extropia.com/development/webware2/webware2.html.  (See chapters
13 and 18.)

  - why don't you use 'real' constants for $SUCCESS and the like? (use
  constant)

 Two reasons, mostly historical, and not necessarily good ones.

 One, I benchmarked some code once that required high performance, and
 the use of constants was just slightly slower.

Ick.

 Two, I like the syntax $hash{$CONSTANT}.  If I remember correctly,
 $hash{CONSTANT} didn't work.  This may have changed in newer versions
 of Perl.

No, the use of constants as hash keys or in interpolated strings still
doesn't work.  I tried the constant module in my last project, and I found
it to be more trouble than it was worth.  It's annoying to have to write
things like $hash{CONSTANT()} or "string @{[CONSTANT]}".
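For anyone following along, here is a minimal, self-contained sketch of the two styles and the workarounds a `use constant` constant needs in hash keys and strings (all names are invented):

```perl
use strict;
use warnings;
use constant SUCCESS => 'success';   # the constant-sub style

our $SUCCESS = 'success';            # the exported-scalar style

my %status;
$status{$SUCCESS}  = 1;   # a scalar indexes and interpolates naturally
$status{+SUCCESS}  = 2;   # unary plus forces the constant call...
$status{SUCCESS()} = 3;   # ...as does an explicit call (same key again)
$status{'SUCCESS'} = 4;   # a bareword SUCCESS key would mean this string!

# In strings: "$SUCCESS" interpolates, "SUCCESS" does not, so the
# constant needs the @{[ ... ]} trick.
my $msg = "state: @{[ SUCCESS ]}";
```

After this runs, the keys 'success' (value 3, the last constant write) and 'SUCCESS' (value 4) coexist, which is exactly the foot-gun being discussed.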

 Do you know if Storable is definitely faster?

It is, and it's now part of the standard distribution.
http://www.astray.com/pipermail/foo/2000-August/000169.html

- Perrin




Re: [OT] Re: mod_perl shared memory with MM

2001-03-11 Thread Greg Cope

DeWitt Clinton wrote:
 
 On Sun, Mar 11, 2001 at 03:33:12PM +0100, Christian Jaeger wrote:
 
  I've looked at Cache::FileCache now and think it's (currently) not
  possible to use for IPC::FsSharevars:
 
  I really miss locking capabilities. Imagine a script that reads a
  value at the beginning of a request and writes it back at the end of
  the request. If it's not locked during this time, another instance
  can read the same value and then write another change back which is
  then overwritten by the first instance.
 
 I'm very intrigued by your thinking on locking.  I had never
 considered the transaction based approach to caching you are referring
 to.  I'll take this up privately with you, because we've strayed far
 off the mod_perl topic, although I find it fascinating.
 
  - why don't you use 'real' constants for $SUCCESS and the like? (use
  constant)
 
 Two reasons, mostly historical, and not necessarily good ones.
 
 One, I benchmarked some code once that required high performance, and
 the use of constants was just slightly slower.
 
 Two, I like the syntax $hash{$CONSTANT}.  If I remember correctly,
 $hash{CONSTANT} didn't work.  This may have changed in newer versions
 of Perl.
 
 Obviously those are *very* small issues, and so it is mostly by habit
 that I don't use constant.  I would consider changing, but it would
 mean asking everyone using the code to change too, because they
 currently import and use the constants as Exported scalars.
 
 Do you know of a very important reason to break compatibility and
 force the switch?  I'm not opposed to switching if I have to, but I'd
 like to minimize the impact on the users.
 
  - you probably should either append the userid of the process to
  /tmp/FileCache or make this folder globally writeable (and set the
  sticky flag). Otherwise other users get a permission error.
 
 As of version 0.03, the cache directories, but not the cache entries,
 are globally writable by default.  Users can override this by changing
 the 'directory_umask' option, or keep data private altogether by
 changing the 'cache_root'.  What version did you test with?  There may
 be a bug in there.
 
  - why don't you use Storable.pm? It should be much faster than Data::Dumper
 
 The TODO contains "Replace Data::Dumper with Storable (maybe)".  :)
 
 The old File::Cache module used Storable, btw.
 
 It will be trivial to port the new Cache::FileCache to use Storable.
 I simply wanted to wait until I had the benchmarking code so I could
 be sure that Storable was faster.  Actually, I'm not 100% sure that I
 expect Storable to be faster than Data::Dumper.  If Data::Dumper
 turns out to be about equally fast, then I'll stay with it, because it
 is available on all Perl installations, I believe.
 
 Do you know if Storable is definitely faster?  If you have benchmarks
 then I am more than happy to switch now.  Or do you know of a reason,
 feature-wise, that I should switch?  Again, it is trivial to do so.

I've found it to be around 5-10% faster on simple stuff, in some
benchmarking I did around a year ago.

Can I ask why you are not using IPC::ShareLite (as it's pure C and
apparently much faster than IPC::Shareable; I've never benchmarked
IPC::Shareable myself, as I've only used IPC::ShareLite)?

Greg

 
  I have some preliminary benchmark code -- only good for relative
  benchmarking, but it is a start.  I'd be happy to post the results
  here if people are interested.
 
  Could you send me the code? Then I'll look into benchmarking my
  module too.
 
 I checked it in as Cache::CacheBenchmark.  It isn't good code, nor
 does it necessarily work just yet.  I simply checked it in while I was
 in the middle of working on it.  I'm turning it into a real
 benchmarking class for the cache, and hopefully that will help you a
 little bit.
 
 Cheers,
 
 -DeWitt



Re: [OT] Re: mod_perl shared memory with MM

2001-03-11 Thread Perrin Harkins

 Can I ask why you are not using IPC::ShareLite (as it's pure C and
 apparently much faster than IPC::Shareable; I've never benchmarked
 IPC::Shareable myself, as I've only used IPC::ShareLite)?

Full circle back to the original topic...
IPC::MM is implemented in C and offers an actual hash interface backed by a
BTree in shared memory.  IPC::ShareLite only works for individual scalars.

It wouldn't surprise me if a filesystem approach were faster than either of
these on Linux, because of the aggressive caching.

- Perrin




Re: [OT] Re: mod_perl shared memory with MM

2001-03-11 Thread Greg Cope

Perrin Harkins wrote:
 
  Can I ask why you are not using IPC::ShareLite (as it's pure C and
  apparently much faster than IPC::Shareable; I've never benchmarked
  IPC::Shareable myself, as I've only used IPC::ShareLite)?
 
 Full circle back to the original topic...
 IPC::MM is implemented in C and offers an actual hash interface backed by a
 BTree in shared memory.  IPC::ShareLite only works for individual scalars.
 

Not tried that one!

I've used the obvious IPC::ShareLite plus Storable to serialise hashes.

 It wouldn't surprise me if a filesystem approach were faster than either of
 these on Linux, because of the aggressive caching.

It would be an interesting benchmark... although it may only be a
performance win on a lightly loaded machine, the assumption being that
the stat'ing is fast on a lightly loaded system with fast, understressed
disks.  I could be completely wrong here, though ;-).

Has anyone used the file system approach on a RAM disk ?

Greg


 
 - Perrin



Re: mod_perl shared memory with MM

2001-03-10 Thread Perrin Harkins

On Sat, 10 Mar 2001, Christian Jaeger wrote:
 For all of you trying to share session information efficiently, my 
 IPC::FsSharevars module might be the right thing. I wrote it after 
 having considered all the other solutions. It uses the file system 
 directly (no BDB/etc. overhead) and provides sophisticated locking 
 (even different variables from the same session can be written at the 
 same time).

Sounds very interesting.  Does it use a multi-file approach like
File::Cache?  Have you actually benchmarked it against BerkeleyDB?  It's
hard to beat BDB because it uses a shared memory buffer, but theoretically
the file system buffer could do it since that's managed by the kernel.

- Perrin




Re: mod_perl shared memory with MM

2001-03-10 Thread Christian Jaeger

At 0:23 -0800 on 10.3.2001, Perrin Harkins wrote:
On Sat, 10 Mar 2001, Christian Jaeger wrote:
  For all of you trying to share session information efficently my
  IPC::FsSharevars module might be the right thing. I wrote it after
  having considered all the other solutions. It uses the file system
  directly (no BDB/etc. overhead) and provides sophisticated locking
  (even different variables from the same session can be written at the
  same time).

Sounds very interesting.  Does it use a multi-file approach like
File::Cache?  Have you actually benchmarked it against BerkeleyDB?  It's
hard to beat BDB because it uses a shared memory buffer, but theoretically
the file system buffer could do it since that's managed by the kernel.

Yes, it uses a separate file for each variable. This also solves 
locking: each variable has its own file lock.

It's a bit difficult to write a real-world benchmark. I've tried to 
use DB_File before, but it was very slow when doing a sync after every 
write, as is recommended in various documentation to make it 
multiprocess-safe. What do you mean by BerkeleyDB, something 
different than DB_File?

Currently I don't use Mmap (are there no cross-platform issues using 
that?); that might speed it up a bit more.

Christian.



Re: mod_perl shared memory with MM

2001-03-10 Thread Perrin Harkins

Christian Jaeger wrote:
 Yes, it uses a separate file for each variable. This also solves
 locking: each variable has its own file lock.

You should take a look at DeWitt Clinton's Cache::FileCache module,
announced on this list.  It might make sense to merge your work into
that module, which is the next generation of the popular File::Cache
module.

 It's a bit difficult to write a real-world benchmark.

It certainly is.  Benchmarking all of the options is something that I've
always wanted to do and never find enough time for.

 I've tried to
 use DB_File before, but it was very slow when doing a sync after every
 write, as is recommended in various documentation to make it
 multiprocess-safe. What do you mean by BerkeleyDB, something
 different than DB_File?

BerkeleyDB.pm is an interface to later versions of the Berkeley DB
library.  It has a shared memory cache, and does not require syncing or
opening and closing of files on every access.  It has built-in locking,
which can be configured to work at the page level, allowing multiple
simultaneous writers.
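A minimal sketch of that interface follows; the environment supplies the shared memory pool and the locking subsystem. The paths, flags, and key are illustrative only (here using Concurrent Data Store locking, which is simpler than full page-level transactions):

```perl
use strict;
use warnings;
use BerkeleyDB;

# The environment holds the shared memory cache (DB_INIT_MPOOL) and
# Concurrent Data Store locking (DB_INIT_CDB): many readers, one
# writer, with no explicit lock calls in application code.
mkdir '/tmp/bdb-env', 0777 unless -d '/tmp/bdb-env';
my $env = BerkeleyDB::Env->new(
    -Home  => '/tmp/bdb-env',
    -Flags => DB_CREATE | DB_INIT_MPOOL | DB_INIT_CDB,
) or die "Env: $BerkeleyDB::Error";

tie my %session, 'BerkeleyDB::Hash',
    -Filename => 'sessions.db',
    -Flags    => DB_CREATE,
    -Env      => $env
  or die "Hash: $BerkeleyDB::Error";

# Reads and writes go through the shared cache; no per-access sync.
$session{12345} = 'serialized session data';
```

Unlike DB_File, the handle stays open across requests and the cache absorbs repeated access, which is the performance difference Perrin is pointing at.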

 Currently I don't use Mmap (are there no cross platform issues using
 that?), that might speed it up a bit more.

That would be a nice option.  Take a look at Cache::Mmap before you
start.

- Perrin



Re: mod_perl shared memory with MM

2001-03-10 Thread DeWitt Clinton

On Sat, Mar 10, 2001 at 04:35:02PM -0800, Perrin Harkins wrote:
 Christian Jaeger wrote:
  Yes, it uses a separate file for each variable. This also solves locking:
  each variable has its own file lock.
 
 You should take a look at DeWitt Clinton's Cache::FileCache module,
 announced on this list.  It might make sense to merge your work into
 that module, which is the next generation of the popular File::Cache
 module.

Yes!  I'm actively looking for additional developers for the Perl
Cache project.  I'd love new implementations of the Cache interface.
Cache::BerkeleyDBCache would be wonderful.  Check out:
  
  http://sourceforge.net/projects/perl-cache/

For what it is worth, I don't explicitly lock.  I do atomic writes
instead, and have yet to hear anyone report a problem in the year the
code has been public.


  It's a bit difficult to write a real-world benchmark.
 
 It certainly is.  Benchmarking all of the options is something that I've
 always wanted to do and never find enough time for.

I have some preliminary benchmark code -- only good for relative
benchmarking, but it is a start.  I'd be happy to post the results
here if people are interested.

-DeWitt



Re: mod_perl shared memory with MM

2001-03-10 Thread Perrin Harkins

 I have some preliminary benchmark code -- only good for relative
 benchmarking, but it is a start.  I'd be happy to post the results
 here if people are interested.

Please do.
- Perrin




Re: mod_perl shared memory with MM

2001-03-09 Thread Christian Jaeger

For all of you trying to share session information efficiently, my 
IPC::FsSharevars module might be the right thing. I wrote it after 
having considered all the other solutions. It uses the file system 
directly (no BDB/etc. overhead) and provides sophisticated locking 
(even different variables from the same session can be written at the 
same time). I wrote it for my FastCGI-based web app framework (Eile), 
but it should be usable for mod_perl things as well (I'm awaiting 
patches and suggestions in case it is not). It has not seen very much 
real-world testing yet.

You may find the manpage on 
http://testwww.ethz.ch/perldoc/IPC/FsSharevars.pm and the module (no 
Makefile.PL yet) under http://testwww.ethz.ch/eile/download/ .

Cheers
Christian.



Re: mod_perl shared memory with MM

2001-03-05 Thread Alexander Farber (EED)

Adi Fairbank wrote:
 Yeah, I was thinking about something like that at first, but I've never played
 with named pipes, and it didn't sound too safe after reading the perlipc man
 page.  What do you use, Perl open() calls, IPC::Open2/3, IPC::ChildSafe, or

IPC::ChildSafe is a good module (I use it here to access ClearCase), but 
it probably won't help you exchange any data between Apache children.



Re: mod_perl shared memory with MM

2001-02-28 Thread Adi Fairbank

Sean Chittenden wrote:
 
   Is there a way you can do that without using Storable?
 
  Right after I sent the message, I was thinking to myself that same
  question... If I extended IPC::MM, how could I get it to be any
  faster than Storable already is?
 
 You can also read in the data you want in a startup.pl file
 and put the info in a hash in global memory
 (e.g. %MyApp::datastruct) that gets shared through forking (copy on
 write, not read, right?).  If the data is read-only, and only a certain
 size, this option has worked _very_ well for me in the past.  -sc
 

Yeah, I do use that method for all my read-only data, but by definition the
persistent session cache is *not* read-only... it gets changed on pretty much
every request.

-Adi




Re: mod_perl shared memory with MM

2001-02-28 Thread Joshua Chamas

Adi Fairbank wrote:
 
 I am trying to squeeze more performance out of my persistent session cache.  In
 my application, the Storable image size of my sessions can grow upwards of
 100-200K.  It can take on the order of 200ms for Storable to deserialize and
 serialize this on my (lousy) hardware.
 

It's a different approach, but I use simple MLDBM + SDBM_File 
when possible, as it's really fast for small records, but it has
that 1024-byte limit per record!  I am releasing a wrapper
to CPAN ( on its way now ) called MLDBM::Sync that handles
concurrent locking & I/O flushing for you.  One advantage
of this approach is that your session state will persist
through a server reboot if it's written to disk.

I also wrote a wrapper for SDBM_File called MLDBM::Sync::SDBM_File
that overcomes the 1024-byte limit per record.  The numbers below
are from a benchmark on my dual PIII 450, Linux 2.2.14,
SCSI RAID-1 ext2 fs mounted async.  The benchmark can be found
in the MLDBM::Sync package in the bench directory once it makes
it to CPAN.

With MLDBM ( perldoc MLDBM ) you can use Storable or the
XS Data::Dumper method for serialization, as well as 
various DBMs.

--Josh
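The MLDBM usage Joshua describes looks roughly like this (SDBM_File back end, Storable serializer; the file name and data are invented). The one gotcha worth knowing: a nested value must be fetched, modified, and assigned back whole, since in-place mutation of the deserialized copy is lost:

```perl
use strict;
use warnings;
use MLDBM qw(SDBM_File Storable);   # DBM back end, then serializer
use Fcntl qw(O_RDWR O_CREAT);

tie my %db, 'MLDBM', 'sessions', O_RDWR | O_CREAT, 0640
    or die "tie: $!";

# Store a nested structure; MLDBM serializes it transparently.
$db{12345} = { user => 'adi', cart => [ 1, 2, 3 ] };

# To modify: fetch, change, and store the WHOLE value back.
my $s = $db{12345};
push @{ $s->{cart} }, 4;
$db{12345} = $s;
```

MLDBM::Sync wraps exactly this kind of tie and adds the locking and flushing between concurrent processes.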

=== INSERT OF 50 BYTE RECORDS ===
 Time for 100 write/read's for  SDBM_File               0.12 seconds    12288 bytes
 Time for 100 write/read's for  MLDBM::Sync::SDBM_File  0.14 seconds    12288 bytes
 Time for 100 write/read's for  GDBM_File               2.07 seconds    18066 bytes
 Time for 100 write/read's for  DB_File                 2.48 seconds    20480 bytes

=== INSERT OF 500 BYTE RECORDS ===
 Time for 100 write/read's for  SDBM_File               0.21 seconds   658432 bytes
 Time for 100 write/read's for  MLDBM::Sync::SDBM_File  0.51 seconds   135168 bytes
 Time for 100 write/read's for  GDBM_File               2.29 seconds    63472 bytes
 Time for 100 write/read's for  DB_File                 2.44 seconds   114688 bytes

=== INSERT OF 5000 BYTE RECORDS ===
(skipping test for SDBM_File 1024 byte limit)
 Time for 100 write/read's for  MLDBM::Sync::SDBM_File  1.30 seconds  2101248 bytes
 Time for 100 write/read's for  GDBM_File               2.55 seconds   832400 bytes
 Time for 100 write/read's for  DB_File                 3.27 seconds   839680 bytes

=== INSERT OF 2 BYTE RECORDS ===
(skipping test for SDBM_File 1024 byte limit)
 Time for 100 write/read's for  MLDBM::Sync::SDBM_File  4.54 seconds  13162496 bytes
 Time for 100 write/read's for  GDBM_File               5.39 seconds   2063912 bytes
 Time for 100 write/read's for  DB_File                 4.79 seconds   2068480 bytes



Re: mod_perl shared memory with MM

2001-02-28 Thread Sean Chittenden

  Is there a way you can do that without using Storable?
 
 Right after I sent the message, I was thinking to myself that same
 question... If I extended IPC::MM, how could I get it to be any
 faster than Storable already is?

You can also read in the data you want in a startup.pl file
and put the info in a hash in global memory
(e.g. %MyApp::datastruct) that gets shared through forking (copy on
write, not read, right?).  If the data is read-only, and only a certain
size, this option has worked _very_ well for me in the past.  -sc
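Sean's preload-before-fork approach looks roughly like this in a startup.pl; the package name and loader sub are hypothetical stand-ins:

```perl
# startup.pl -- executed once by the parent Apache process, so the
# data below is shared with every child via copy-on-write pages.
package MyApp::Data;    # hypothetical package name
use strict;
use warnings;

our %lookup;    # read-only after startup: children must never write it

sub load_lookup_table {
    # Stand-in for reading a config file or database at server startup.
    return (en => 'English', de => 'German', fr => 'French');
}

%lookup = load_lookup_table();

1;
```

Handlers then read `$MyApp::Data::lookup{...}` directly; as long as no child modifies the hash, the kernel never copies the pages, which is why this only suits read-only data (exactly Adi's objection below for a mutable session cache).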

-- 
Sean Chittenden[EMAIL PROTECTED]
C665 A17F 9A56 286C 5CFB  1DEA 9F4F 5CEF 1EDD FAAD



Re: mod_perl shared memory with MM

2001-02-28 Thread Sean Chittenden

The night of Fat Tuesday no less...  that didn't help any
either.  ::sigh::

Here's one possibility that I've done in the past because I
needed mod_perl sessions to be able to talk with non-mod_perl
programs.  I set up a named bi-directional pipe that let you write a
query to it for session information, and it wrote back with whatever
you were looking for.  Given that this needed to support Perl, Java,
and C, it worked _very_ well and was extremely fast.  Something you
may also want to consider because it keeps your session information
outside of Apache (in case of a restart of Apache, or a desire to
synchronize session information across multiple hosts).

-sc

On Wed, Feb 28, 2001 at 09:25:45PM -0500, Adi Fairbank wrote:
 
 It's ok, I do that a lot, too.  Usually right after I click "Send" is when I
 realize I forgot something or didn't think it through all the way. :)
 
 Sean Chittenden wrote:
  
  Hmm... yeah, whoops.  I suppose that's what I get for sending
  email that late.  :~) -sc
 
 

-- 
Sean Chittenden[EMAIL PROTECTED]
C665 A17F 9A56 286C 5CFB  1DEA 9F4F 5CEF 1EDD FAAD



Re: mod_perl shared memory with MM

2001-02-28 Thread Adi Fairbank

Sean,

Yeah, I was thinking about something like that at first, but I've never played
with named pipes, and it didn't sound too safe after reading the perlipc man
page.  What do you use, Perl open() calls, IPC::Open2/3, IPC::ChildSafe, or
something else?  How stable has it been for you?  I just didn't like all those
warnings in the IPC::Open2 and perlipc man pages.

-Adi

Sean Chittenden wrote:
 
 The night of Fat Tuesday no less...  that didn't help any
 either.  ::sigh::
 
 Here's one possibility that I've done in the past becuase I
 needed mod_perl sessions to be able to talk with non-mod_perl
 programs.  I setup a named bi-directional pipe that let you write a
 query to it for session information, and it wrote back with whatever
 you were looking for.  Given that this needed to support perl, java,
 and c, it worked _very_ well and was extremely fast.  Something you
 may also want to consider because it keeps your session information
 outside of apache (incase of restart of apache, or desire to
 synchronize session information across multiple hosts).
 
 -sc





mod_perl shared memory with MM

2001-02-27 Thread Adi Fairbank

I am trying to squeeze more performance out of my persistent session cache.  In
my application, the Storable image size of my sessions can grow upwards of
100-200K.  It can take on the order of 200ms for Storable to deserialize and
serialize this on my (lousy) hardware.

I'm looking at RSE's MM and the Perl module IPC::MM as a persistent session
cache.  Right now IPC::MM doesn't support multi-dimensional Perl data
structures, nor blessed references, so I will have to extend it to support
these.

My question is: is anyone else using IPC::MM under mod_perl? .. would you if it
supported multi-dimensional Perl data?

My other question is: since this will be somewhat moot once Apache 2.0 +
mod_perl 2.0 are stable, is it worth the effort?  What's the ETA on mod_perl
2.0?  Should I spend my effort helping with that instead?

Any comments appreciated,
-Adi




Re: mod_perl shared memory with MM

2001-02-27 Thread Perrin Harkins

Adi Fairbank wrote:
 
 I am trying to squeeze more performance out of my persistent session cache.  In
 my application, the Storable image size of my sessions can grow upwards of
 100-200K.  It can take on the order of 200ms for Storable to deserialize and
 serialize this on my (lousy) hardware.
 
 I'm looking at RSE's MM and the Perl module IPC::MM as a persistent session
 cache.  Right now IPC::MM doesn't support multi-dimensional Perl data
 structures, nor blessed references, so I will have to extend it to support
 these.

Is there a way you can do that without using Storable?  If not, maybe
you should look at partitioning your data more, so that only the parts
you really need for a given request are loaded and saved.

I'm pleased to see people using IPC::MM, since I bugged Arthur to put it
on CPAN.  However, if it doesn't work for you there are other options
such as BerkeleyDB (not DB_File) which should provide a similar level of
performance.

- Perrin



Re: mod_perl shared memory with MM

2001-02-27 Thread Adi Fairbank

Perrin Harkins wrote:
 
 Adi Fairbank wrote:
 
  I am trying to squeeze more performance out of my persistent session cache.  In
  my application, the Storable image size of my sessions can grow upwards of
  100-200K.  It can take on the order of 200ms for Storable to deserialize and
  serialize this on my (lousy) hardware.
 
  I'm looking at RSE's MM and the Perl module IPC::MM as a persistent session
  cache.  Right now IPC::MM doesn't support multi-dimensional Perl data
  structures, nor blessed references, so I will have to extend it to support
  these.
 
 Is there a way you can do that without using Storable?

Right after I sent the message, I was thinking to myself that same question...
If I extended IPC::MM, how could I get it to be any faster than Storable already
is?

Basically what I came up with off the top of my head was to try to map each Perl
hash to a mm_hash and each Perl array to a mm_btree_table, all the way down
through the multi-level data structure.  Every time you add a hashref to your
tied IPC::MM hash, it would create a new mm_hash and store the reference to that
child in the parent.  Ditto for arrayrefs, but use mm_btree_table.

If this is possible, then you could operate on the guts of a deep data structure
without completely serializing and deserializing it every time.

 If not, maybe
 you should look at partitioning your data more, so that only the parts
 you really need for a given request are loaded and saved.

Good idea!  That would save a lot of time, and would be easy to do with my
design.  Silly I didn't think of that.

 
 I'm pleased to see people using IPC::MM, since I bugged Arthur to put it
 on CPAN.  However, if it doesn't work for you there are other options
 such as BerkeleyDB (not DB_File) which should provide a similar level of
 performance.

Thanks.. I'll look at BerkeleyDB.

-Adi