Hi,

I'm in the process of setting up a new database server that will run on Red Hat Linux. The machine will be dual-processor with 4 GB of RAM and about 16 GB of disk.
The machine is going to be used purely with InnoDB tables and will have a few very large tables acting as cache data. The amount of data I want to store will be between 2 and 4 GB to start with, but it might grow larger.

I've been reading a lot about how to set up InnoDB and have come across the 2 GB limit problem. There are actually two problems here:

1. From reading many articles, Linux may or may not support files larger than 2 GB.
2. There is a problem with glibc whereby a process may become unstable if it allocates more than 2 GB.

The first isn't really a blocker, since I can simply use two data files of 2 GB each, but I would still like to get around it if possible.

The second is where I'm stuck. The InnoDB configuration page gives a nice formula for calculating how much memory the server will use. It also gives an example configuration, but that example exceeds the 2 GB limit even with only 200 concurrent connections. I really need to get the connections up to something like 1000 without going over the limit. What configuration can be used, and how can this be achieved? (A rough sketch of the arithmetic I have in mind is at the end of this message.)

Additionally, I have read that each Linux thread has a 2 MB stack, and this is taken into account in the formula. As I understand it, this can be changed by altering a #define somewhere, recompiling the kernel, and then recompiling the MySQL server.

Any input would be greatly appreciated.

Best Regards,

Marvin Wright
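P.S. To make the numbers concrete, here is a rough sketch of the kind of my.cnf I have in mind, based on my reading of the formula on the InnoDB configuration page, which as I understand it is roughly:

    innodb_buffer_pool_size + key_buffer
      + max_connections * (sort_buffer + record_buffer)
      + max_connections * (per-thread stack)

All of the values below are illustrative guesses on my part, not tested settings, and the exact variable names (key_buffer vs. key_buffer_size, record_buffer vs. read_buffer_size, etc.) should be checked against the manual for whichever server version is used:

    [mysqld]
    max_connections                 = 1000

    # global buffers, allocated once
    innodb_buffer_pool_size         = 1000M
    innodb_additional_mem_pool_size = 8M
    innodb_log_buffer_size          = 8M
    # key_buffer only matters for MyISAM indexes
    key_buffer                      = 64M
    # global total: roughly 1080 MB

    # per-connection buffers, multiplied by max_connections
    sort_buffer                     = 256K
    record_buffer                   = 256K
    # explicitly request a small per-thread stack instead of the 2 MB default
    thread_stack                    = 256K
    # per-connection total: about 768 KB x 1000 connections = roughly 750 MB

    # grand total: roughly 1080 MB + 750 MB = about 1.8 GB, under the 2 GB limit

If the per-thread stack really is the 2 MB glibc default I mentioned, then the stack term alone would be 1000 x 2 MB = 2000 MB, which nearly fills the 2 GB limit before any buffers are counted at all - hence my question about reducing it.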