I am using nsv arrays to hold session data across multiple page accesses,
and to hold some relatively static widely-shared database data to avoid
unnecessary database reads.  At any point there will probably be several
hundred or more nsv arrays, each holding a few kilobytes, but I haven't
spent time building a good model, so that may be a low estimate.  I'm
running under Red Hat Linux 7.2 with 512MB of RAM, OpenACS, and PostgreSQL.  My
site will deliver interactive training, involving light to modest traffic
with an average of ten or fifteen database reads/writes per page
delivered.  I want to be sure the system delivers rapid response time to our
users.
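For concreteness, the kind of usage I mean is roughly this (the array and
key names are made up, just to illustrate the pattern):

```tcl
# Store session attributes in a per-session nsv array.  AOLserver's
# nsv_* commands are shared across all connection threads.
nsv_set session.$session_id user_name "dave"
nsv_set session.$session_id last_page "/training/module3"

# Read an attribute back on a later page access, guarding against a
# missing key with nsv_exists.
if {[nsv_exists session.$session_id user_name]} {
    set name [nsv_get session.$session_id user_name]
}
```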

Without having read the nsv code, but guessing how it's probably
implemented, I am making some assumptions about performance characteristics.
Am I far off base?
- Since Linux provides virtual memory, there is no practical risk of
running out of memory and crashing as long as I apply some restraint in my
nsv usage.
- But of course, if the system has to do a lot of page swapping, performance
will degrade.
- If I want to purge stale or unaccessed data in the nsv arrays, I have to
do it myself.
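The kind of purge I have in mind is a scheduled proc that sweeps arrays by
a timestamp I maintain myself.  An untested sketch (ns_schedule_proc,
nsv_names, and nsv_unset are standard commands; the "touched" key and the
session.* naming convention are my own):

```tcl
# Sweep session nsv arrays that haven't been touched in an hour.
# Each session array is assumed to keep a "touched" timestamp key,
# updated (via clock seconds) on every page access.
proc purge_stale_sessions {} {
    set now [clock seconds]
    foreach arr [nsv_names "session.*"] {
        if {[nsv_exists $arr touched]
                && $now - [nsv_get $arr touched] > 3600} {
            nsv_unset $arr   ;# drop the whole stale array
        }
    }
}

# Run the sweep every ten minutes in a background thread.
ns_schedule_proc -thread 600 purge_stale_sessions
```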

I have read how to set the nsvbuckets parameter and monitor lock contention
with mutex monitoring, and can experiment with that.
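For reference, nsvbuckets goes in the server's tcl section of the config
file; something like the following (16 is just an example value to
experiment with, and I believe the default is 8):

```tcl
# More buckets means finer-grained locking, so less mutex
# contention when many threads touch nsv arrays concurrently.
ns_section "ns/server/${servername}/tcl"
ns_param nsvbuckets 16
```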

Is there any way to monitor the amount of space consumed by nsv arrays at
any point in time?  What should I do to monitor the performance impact of my
nsv use (other than the mutex monitoring), and perhaps to trigger more
aggressive purges - or rewrites :( - when needed?
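The best I've come up with so far for measuring space is to walk the
arrays and total their serialized size, e.g. (a rough approximation only,
since it ignores Tcl's per-entry hash-table overhead):

```tcl
# Rough estimate of bytes held in all nsv arrays: serialize each
# array to a key/value list with [nsv_array get] and sum the
# string lengths.
proc nsv_space_report {} {
    set total 0
    foreach arr [nsv_names] {
        incr total [string length [nsv_array get $arr]]
    }
    return $total
}
```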

I realize that nscache provides similar facilities.  There is so much else
on my plate, and my facility with Linux internals is so limited, I hesitate
to go down the path of installing it.  Would it provide a major improvement,
so that I should invest that time and energy?
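From my reading of the docs (I have not tried this), nscache's main appeal
over raw nsvs is a built-in size limit with automatic eviction, which would
replace my hand-rolled purging; usage appears to look roughly like:

```tcl
# Create a cache with a 2MB cap; old entries are evicted
# automatically when the cache fills.
ns_cache create widget_cache -size [expr {2 * 1024 * 1024}]

# ns_cache eval returns the cached value for the key, running the
# script (here an OpenACS db_list call) only on a cache miss.
set widgets [ns_cache eval widget_cache all_widgets {
    db_list get_widgets "select name from widgets"
}]
```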

Thanks!

Dave
