Everything has a limit. Sometimes it's just so ridiculously high that it might as well be infinite. That is not the case with open files. I'm not sure whether MySQL has its own limit or whether it is constrained by the operating system, as it is with file size. The maximum number of open files is a function of the OS, not the file system. Under Unix (or Linux), this is a setting you can raise from the default (4K on Linux, I think). Under Windows, I'm not sure what the limit is or how easy it is to change, but I'm pretty sure the default is quite high. Even so, older versions of MS Word would hit the open file limit because of the way Word handled the undo history. Think about that: one user with one document open, potentially hitting the open file limit. Regardless, the maximum is still a reasonably finite number, 65K I think, and it may be higher on 64-bit systems. Don't forget that Unix/Linux also sets per-user limits, which you would have to change as well.

Your design is most definitely not scalable. Every user adds another 45 tables. If you get up to 500 users, that's over 22,000 tables. The number of file handles used will be even higher, at least double, since MySQL also needs file handles for the indexes (with MyISAM, each table is three files on disk: the .frm definition, the .MYD data file, and the .MYI index file). That's not even counting the MySQL system tables. Even if the OS could handle that many file handles (on top of the ones the OS uses itself), the system would probably slow to a crawl trying to keep track of all those locks and constantly flushing and loading files from the cache.
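A quick back-of-the-envelope calculation makes the growth concrete. This sketch assumes MyISAM storage (three files per table) and ignores MySQL's own system tables:

```python
# Rough file-count estimate for the one-database-per-user design.
# Assumes MyISAM, where each table is three files on disk:
# .frm (definition), .MYD (data), .MYI (indexes).
TABLES_PER_USER = 45
FILES_PER_TABLE = 3  # .frm + .MYD + .MYI

for users in (100, 500, 1000):
    tables = users * TABLES_PER_USER
    files = tables * FILES_PER_TABLE
    print(f"{users:>5} users -> {tables:>6} tables -> {files:>6} files")
```

At 500 users that is already tens of thousands of files competing for a handle cache and table locks.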

I think you are doing this the wrong way.
You won't be able to keep things running smoothly as you add users.
It is a problem having that many tables.

A rule I always try to follow is that the number of users should have no effect on your underlying design. Adding a user should not change your design; in your case it does, because your whole schema is replicated every time you add one user. You should implement row-level security instead. MySQL doesn't support it natively, so you would need to do it in your front end. It's not an easy task, but it gives you a scalable system. If you don't want to go this route, you could use a database that supports row-level security, such as Oracle. To stick with open source, you may want to look into something like Ingres.
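The front-end row-level security idea looks roughly like this: one shared schema with a user_id column on every table, and every query the application issues is scoped to one user. This is only a sketch; the table and column names are hypothetical, and sqlite3 is used just so the example is self-contained (with MySQL you would apply the same pattern through a MySQL client library):

```python
# Sketch of front-end row-level security: one shared schema instead of
# one database per user. Table/column names here are hypothetical;
# sqlite3 stands in for MySQL so the example runs on its own.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (user_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                 [(1, 10.0), (1, 25.0), (2, 99.0)])

def fetch_invoices(conn, user_id):
    """Every query the front end issues is filtered to one user's rows."""
    return conn.execute(
        "SELECT amount FROM invoices WHERE user_id = ?", (user_id,)
    ).fetchall()

print(fetch_invoices(conn, 1))  # user 1 sees only user 1's rows
```

The key point is that adding user 501 is just new rows, not 45 new tables.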

As a stopgap measure, you might try closing your connections to MySQL as you loop through the user databases. MySQL may then release the file handles for the "disconnected" databases; otherwise I think it keeps them open for your entire session.
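The loop pattern would look something like this. It is only a sketch of the idea; the `connect` and `run_queries` callables are hypothetical placeholders for your MySQL client library (e.g. MySQLdb) and your actual queries:

```python
# Sketch of the stopgap: open a connection per user database, run the
# queries, and close it before moving on, so file handles can be
# released as you go instead of accumulating across the whole loop.
# connect() and run_queries() are hypothetical stand-ins.
def process_all_user_dbs(db_names, connect, run_queries):
    results = {}
    for name in db_names:
        conn = connect(name)        # e.g. MySQLdb.connect(db=name)
        try:
            results[name] = run_queries(conn)
        finally:
            conn.close()            # release this DB's handles now
    return results
```

Whether the server actually frees the underlying table file handles promptly may also depend on its table cache settings, so treat this as a mitigation, not a fix.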

You may want to look into these Unix/Linux tools:
the lsof command
/proc/sys/fs/file-nr
/proc/sys/fs/file-max
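For example (the /proc entries are standard on Linux; exact values and availability of lsof vary by distribution):

```shell
# Per-process open-file limit for the current shell and its children:
ulimit -n

# System-wide: handles allocated / free / maximum:
cat /proc/sys/fs/file-nr

# System-wide maximum alone:
cat /proc/sys/fs/file-max

# To count handles a running mysqld actually holds (requires lsof
# installed and a running server), something like:
#   lsof -p "$(pidof mysqld)" | wc -l
```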


On Nov 22, 2004, at 5:52 AM, Andreas Karlsson wrote:

Hi.

I have a database structure of about 45 tables, and a separate database for
each of the 100+ users.
Everything has been working nicely for quite some time.


When I wrote a function that looped through all the DBs and ran some
queries, MySQL hung and I got "too many open files" in my .err-file.
(Mysql 4.0.22, trustix linux, reiserfs, mysql:open-files-limit at 1024)

I am trying different ways of raising the limit, and that shouldn't be a
problem.


My question is if I am doing this the wrong way?

Will I be able to keep my structure, reach 500 users and keep things running
smoothly?


Is it a problem to have that many databases?

Thanx!

/Andreas




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577




