Hello,

I'm not sure this is the right place to post this message, so please
redirect me if I'm wrong.

Here is the problem:

Many of us run PHP across multiple backend servers, and we all know we
need to synchronise the PHP sources. Usually we do that with rsync; some of
us run all backends off an NFS mount.

-> The usual problem is that developers like to see their changes live,
and we don't want to let them touch the holy rsync script.

Here is the idea:

If we had an option in php.ini, or a new localcache:// wrapper, we could
get all require / include / require_once / include_once calls to make a
local copy of the needed files and then require / include them as usual.

Here is an example:

require("/path/to/file.php"); // the /path/to/file.php is on an nfs mount.

require should:
1: check whether /localcopy/path/to/file.php exists;
2: if it exists and is not too old (we can define what that means later),
just require it as it does now;
3: if the file does not exist or is too old, refresh it from
/path/to/file.php and then require it, as sketched below.
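
A rough userland sketch of those three steps (the /localcopy root and the
mtime comparison for "too old" are just placeholders; the real thing would
hook require/include inside the engine itself):

<?php
// Rough sketch only: /localcopy and the mtime test are placeholders.
function localcache_require($path)
{
    $local = '/localcopy' . $path;

    // steps 1 + 2: local copy exists and is not older than the original
    // (note: filemtime() on $path still stats NFS; a TTL on the local
    // copy would avoid touching NFS here entirely)
    $fresh = file_exists($local)
          && filemtime($local) >= filemtime($path);

    // step 3: refresh the local copy from the NFS mount
    if (!$fresh) {
        @mkdir(dirname($local), 0755, true);
        copy($path, $local);
    }

    require $local;
}

localcache_require('/path/to/file.php'); // actually runs from /localcopy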


Result:
1: all scripts run unmodified, but from /localcopy/;
2: NFS is hardly loaded at all and won't be a bottleneck;
3: developers can change their files and don't need to touch the "holy
rsync";
4: all the backend servers auto-synchronise and stay in sync.


We could also imagine fetching the files over HTTP instead of from NFS (at
step 3); that would remove the need for NFS entirely.
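
Step 3 would then just copy over HTTP (the host name here is invented, and
allow_url_fopen must be enabled for PHP's http:// stream wrapper):

// step 3 over HTTP instead of NFS; the host name is invented
copy('http://code-master.example.com/path/to/file.php', $local);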

This would let us run large backends without the synchronisation issue,
and it would be a huge performance boost over NFS.
I don't think it would disturb opcode caches like APC either: if the copy
happens early enough, the cache won't notice that the file was refreshed
from "remote".

It's just an idea, but I'm sure it could be really useful for big farms.

Cb
