Making a wild guess here - most RDBMSs won't like it if you throw thousands
of queries per second at 500 tables. Can this be done? Yes, but most setups
aren't tuned to handle that kind of load.
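
For what it's worth, a lot of that tuning on the Perl side comes down to
reusing connections and prepared statements rather than paying setup cost
per query. A rough sketch with DBI (the DSN, credentials, and table are
made up for illustration); under mod_perl, loading Apache::DBI makes
connect() return a cached per-process handle:

    #!/usr/bin/env perl
    use strict;
    use warnings;
    use DBI;
    # use Apache::DBI;  # under mod_perl, caches connections per process

    # Hypothetical DSN/credentials; with Apache::DBI loaded, connect()
    # with identical arguments returns the cached handle instead of
    # opening a new connection on every request.
    my $dbh = DBI->connect(
        'dbi:Pg:dbname=appdb;host=localhost',
        'appuser', 'secret',
        { RaiseError => 1, AutoCommit => 1 },
    );

    # prepare_cached() reuses the statement handle across calls, so a
    # hot query is parsed and planned once per process, not once per
    # execute.
    my $sth = $dbh->prepare_cached(
        'SELECT id, name FROM customers WHERE account_id = ?'
    );
    $sth->execute(42);
    while ( my $row = $sth->fetchrow_hashref ) {
        print "$row->{id}\t$row->{name}\n";
    }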

If I were doing something like this, I can imagine quite a few places in my
current code that would fall apart, and none of them would have anything to
do with mod_perl or forking/threading. Again, I have absolutely no idea
what John has implemented, so it is quite possible it is a mod_perl and
forking issue, but that cannot be generalized for everyone.
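
One cheap way to settle where the time actually goes (and whether mod_perl
is even involved) is DBI's built-in profiler. A sketch, again with made-up
connection details; the '!Statement' path aggregates wall-clock time per
distinct SQL statement, which would surface something like Vincent's one
slow query out of 120:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:Pg:dbname=appdb', 'appuser', 'secret',
        { RaiseError => 1 } );

    # Aggregate elapsed time per distinct statement text; DBI writes
    # the report to STDERR when the handle is destroyed / the program
    # exits.
    $dbh->{Profile} = '!Statement';

    # ... run the normal workload here ...

The same profile can be enabled without touching the code at all, by
running the script with DBI_PROFILE='!Statement' set in the environment.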

On Mon, Dec 21, 2020 at 1:27 AM Vincent Veyron <vv.li...@wanadoo.fr> wrote:

>
> [You forgot to cc the list]
>
> On Sun, 20 Dec 2020 23:16:03 -0500
> John Dunlap <j...@lariat.co> wrote:
>
> > We run 20 customers on a single box and our database has approximately
> > 500 tables. We run hundreds or thousands of queries per second.
> >
>
> 500 tables is a lot more than what I typically handle. I'm sure it
> complicates things.
>
> But see this post by James Smith in a recent thread:
>
>
> http://mail-archives.apache.org/mod_mbox/perl-modperl/202008.mbox/ajax/%3Cef383804cf394c53b48258531891d12b%40sanger.ac.uk%3E
>
> Easier to read in this archive:
>
> http://mail-archives.apache.org/mod_mbox/perl-modperl/202008.mbox/browser
>
> I also remember a post by a Chinese guy who handled a database of the
> same order of size, in which he wrote that he had compared several
> frameworks and mod_perl was the fastest; but that was something like 10
> years ago, and I can't find it anymore.
>
> So I'm not sure how mod_perl could handle that kind of load if it were
> horribly inefficient?
>
> (I forgot to say in my previous post that over 50% of the time used by my
> script is spent on the _one_ query out of 120 that writes a smallish
> session hash to disk)
>
> --
>                                         Kind regards, Vincent Veyron
>
> https://compta.libremen.com
> Free software for double-entry general accounting
>
