Hi,

On Monday, December 6, 2004, at 04:15 PM, Klaus Berkling wrote:

I am beginning to use MySQL's clustering abilities for a large record-keeping solution.

I have installed 4.1.7 with the clustering components. The ndbd and ndb_mgmd processes are running. I can create the database and tables using the ndb engine.

I have started to import our data. I gather from the manual that tables are stored in RAM. I am trying to import a database with 11 tables and about 7 million rows. If I follow the math in the manual, one row will use 32 KB, so I would need 224 GB of RAM.

You are misreading how the 32 KB page size works: a single page can hold multiple rows. Here is roughly how much memory you would need:


16 bytes overhead per row + 460 bytes per row (taken from 3 GB / 7,000,000) = 476 bytes per row

Each 32 KB page carries 128 bytes of overhead, so you should get (32,768 - 128) / 476 = ~68 rows per page.

7,000,000 / 68 = ~102,942 pages

102,942 * 128 = 13,176,576 bytes, or ~13.2 MB of overhead at the page level.

13.2 MB + (460 + 16) * 7,000,000 = ~3.3 GB of data * NoOfReplicas

So you can see it isn't much more than your regular tables use. With the actual table schema, the estimate could be much closer.
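
If it helps to play with the numbers, here is the same arithmetic as a small Python sketch. The 3 GB and 7,000,000-row figures are yours; the 32 KB page size and the 128-byte and 16-byte overheads are the constants from the manual:

    # Rough NDB DataMemory estimate, per replica.
    PAGE_SIZE = 32 * 1024     # 32 KB pages
    PAGE_OVERHEAD = 128       # bytes of overhead per page
    ROW_OVERHEAD = 16         # bytes of overhead per row

    total_data = 3 * 1024**3  # ~3 GB of existing data
    num_rows = 7_000_000

    avg_row = total_data // num_rows + ROW_OVERHEAD         # ~476 bytes
    rows_per_page = (PAGE_SIZE - PAGE_OVERHEAD) // avg_row  # ~68
    pages = -(-num_rows // rows_per_page)                   # ceiling division
    total = pages * PAGE_OVERHEAD + avg_row * num_rows      # ~3.35e9 bytes

    print(f"~{total / 1e9:.2f} GB per replica")

Multiply the result by NoOfReplicas to get the cluster-wide total.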


Does this make sense, or am I way off?

In version 4.0, the data length of my largest table is 2,671,788,032 (I assume that is bytes; I used SHOW TABLE STATUS), about 2.5 GB. So my entire version 4.0 database is about 3 GB in size.

In general it won't be much larger than that. Also keep in mind that the data is split across all of the data nodes in the cluster, so even if you needed 15 GB of RAM, you could do it with 4 machines that have 4 GB each.
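
For example, a minimal config.ini along those lines might look like the sketch below. The host names and memory figures are made up for illustration, and you would want to check the manual for the parameters I've left out (DataDir and so on):

    [NDBD DEFAULT]
    NoOfReplicas=2
    # Per-node data storage; leave headroom under the 4 GB of physical RAM.
    DataMemory=3000M
    # Hash indexes are accounted for separately.
    IndexMemory=512M

    [NDB_MGMD]
    HostName=mgm.example.com

    [NDBD]
    HostName=ndb1.example.com

    [NDBD]
    HostName=ndb2.example.com

    [NDBD]
    HostName=ndb3.example.com

    [NDBD]
    HostName=ndb4.example.com

    [MYSQLD]

Note that with NoOfReplicas=2 the cluster stores every row twice, so the usable data size is roughly half of the combined DataMemory.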


Regards,

Harrison

--
Harrison C. Fisk, Trainer and Consultant
MySQL AB, www.mysql.com

Are you MySQL certified? www.mysql.com/certification




