Hey,

I saw this a while ago and it looks promising; however, I do have my
fair share of concerns, namely:

1. We should be able to define some sort of TTL. What happens if no request
comes in for over an hour? Granted, this is *really* unlikely.
2. Why not implement a mechanism similar to `fork` for these new requests,
so that we duplicate the memory of the process and can spawn new instances
much faster? (See the sketch after this list.)
3. How many instances will be kept in (I suppose) a pool for answering the
requests?
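
To make point 2 concrete, here is a rough sketch of the kind of pre-fork
model I have in mind (the pool size, the bootstrap stand-in, and the fake
request loop are all just placeholders):

    <?php
    // Bootstrap once in the parent, then fork workers that inherit the
    // initialised memory via copy-on-write, so spawning is cheap.
    $workers = 4; // placeholder pool size (also my question 3)
    $pool = [];

    // Imagine the expensive framework bootstrap happening here, once.
    $app = ['routes' => 'compiled routes', 'config' => 'parsed config'];

    for ($i = 0; $i < $workers; $i++) {
        $pid = pcntl_fork();
        if ($pid === -1) {
            fwrite(STDERR, "fork failed\n");
            exit(1);
        }
        if ($pid === 0) {
            // Child: $app is already in memory, no re-bootstrap needed.
            // A real worker would accept and serve requests in a loop here.
            echo 'worker ' . getmypid() . " ready\n";
            sleep(1); // stand-in for the request loop
            exit(0);
        }
        $pool[] = $pid;
    }

    // Parent: reap children; respawn and TTL handling (question 1) would
    // live here.
    foreach ($pool as $pid) {
        pcntl_waitpid($pid, $status);
    }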

Regarding database connections, almost all adapters support some form of
connection pooling, so the database connection example you mention is a bit
obsolete, in my opinion. Furthermore, how will we deal with database
timeouts, resource exhaustion, and the many other problems that might arise
when certain processes are stalled for too long?
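
To illustrate the pooling point: PDO, for example, already supports
persistent connections that are reused across requests instead of being
re-opened every time (DSN and credentials below are placeholders):

    <?php
    // A persistent PDO connection is kept open by the process and reused
    // across requests; DSN and credentials are placeholders.
    $pdo = new PDO(
        'mysql:host=localhost;dbname=app',
        'user',
        'secret',
        [
            PDO::ATTR_PERSISTENT => true,  // reuse the underlying connection
            PDO::ATTR_TIMEOUT    => 5,     // guard against hanging on a dead server
            PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
        ]
    );
    var_dump($pdo->query('SELECT 1')->fetchColumn());

Timeouts and error handling still need an answer for long-lived processes,
but the pooling itself is not something we need to solve.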

Just to see what we should do about the aforementioned issues, I propose
that you create a basic implementation first, so that benchmarks can at
least show us how much of a difference this makes. As far as I can see, the
difference will be negligible, since most modern frameworks already let you
generate PHP files that get pre-compiled by OPCache, and thus already give
amazing performance.
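
To be clear about what I mean by generating PHP files: the usual pattern is
to dump an expensive result (routes, config) into a plain PHP file with
var_export() and include it, so OPCache serves it from shared memory on
every later request. The file name and route table here are made up:

    <?php
    $cacheFile = __DIR__ . '/routes.cache.php'; // placeholder path

    if (!is_file($cacheFile)) {
        // Expensive step: build the route table once...
        $routes = [
            '/users' => 'UserController@index',
            '/posts' => 'PostController@index',
        ];
        // ...and dump it as plain PHP so OPCache compiles and caches it.
        file_put_contents(
            $cacheFile,
            '<?php return ' . var_export($routes, true) . ';'
        );
    }

    // On subsequent requests this is an opcode-cache hit, not a rebuild.
    $routes = require $cacheFile;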

Also, extensions like APCu often go unnoticed and already help greatly with
storing, e.g., pre-compiled routes, if you do not want to generate PHP files.
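
Same idea without generating files, kept in APCu's shared memory instead
(key name, TTL, and the route table are placeholders):

    <?php
    // Fetch the compiled routes from APCu; rebuild and store them on a miss.
    $routes = apcu_fetch('routes', $hit);

    if (!$hit) {
        $routes = ['/users' => 'UserController@index']; // expensive step, once per TTL
        apcu_store('routes', $routes, 3600);            // TTL in seconds
    }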

For now, this will require some more discussion.

Kind regards,
Harm Smits
