Give this more thought. I think you have more options than the two you
proposed.

With really large tables, you can collect data in them for a fixed time
period (say, monthly), then run a batch job that moves that period's data
to an archive table and removes it from the live table.
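A minimal sketch of that monthly archive-and-purge batch. This uses Python's
stdlib sqlite3 purely for illustration (in practice it would be SQL run
against the MySQL server), and the table and column names (samples,
samples_archive, collected_at) are hypothetical:

```python
import sqlite3

# In-memory database for illustration only; schema is hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE samples (collected_at TEXT, device TEXT, value INTEGER)")
cur.execute("CREATE TABLE samples_archive (collected_at TEXT, device TEXT, value INTEGER)")
cur.executemany(
    "INSERT INTO samples VALUES (?, ?, ?)",
    [("2003-09-14", "router-1", 10),   # previous month: should be archived
     ("2003-10-15", "router-1", 20)],  # current month: stays in the live table
)

# The batch: copy one month's rows to the archive table, then delete
# them from the live table.
month = "2003-09%"
cur.execute("INSERT INTO samples_archive SELECT * FROM samples WHERE collected_at LIKE ?", (month,))
cur.execute("DELETE FROM samples WHERE collected_at LIKE ?", (month,))
conn.commit()

print(cur.execute("SELECT COUNT(*) FROM samples").fetchone()[0])          # live rows left
print(cur.execute("SELECT COUNT(*) FROM samples_archive").fetchone()[0])  # archived rows
```

The point of the pattern is that the live table never grows past one
period's worth of rows, so inserts and index maintenance stay fast.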

Try making a staging table that records the data daily and is flushed
nightly (or monthly) into a main repository.
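The staging-table idea, sketched the same way (again sqlite3 stand-in,
hypothetical staging/repository schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical schema: a small staging table takes the day's inserts;
# the main repository holds the accumulated history.
cur.execute("CREATE TABLE staging (collected_at TEXT, device TEXT, value INTEGER)")
cur.execute("CREATE TABLE repository (collected_at TEXT, device TEXT, value INTEGER)")
cur.executemany("INSERT INTO staging VALUES (?, ?, ?)",
                [("2003-10-15 05:00", "router-1", 10),
                 ("2003-10-15 05:05", "router-2", 20)])

# Nightly flush: append the day's rows to the repository, then empty
# the staging table so tomorrow's inserts hit a small table again.
cur.execute("INSERT INTO repository SELECT * FROM staging")
cur.execute("DELETE FROM staging")
conn.commit()
```

The win is the same as above: the table taking the high-rate inserts
stays tiny, while the big repository is only touched once a night.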

Your architecture will be successful as long as you understand what you're
going to do with the data - reports, exports, maintenance (update/delete),
etc.

Regards,
Adam

-----Original Message-----
From: Mahesh Tailor [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 15, 2003 5:05 PM
To: [EMAIL PROTECTED]
Subject: DB Design


New to the list . . .

Running MySQL Server 3.23.58-1.72 on RedHat Enterprise AS.  System has
four 3GHz processors and 6GB RAM.

I need some advice on what would be the best way to approach this problem.

This system is using snmpcollect to collect network statistics from
about 1500 devices.  The collections are configured to get data every
5-30 minutes depending on the collection type.  Given this I am
collecting approximately 170K records per hour.  I have to keep this
collected data for at least 365 days.  This works out to approximately
1.50B records/year. After setting up the database, each record is 42
bytes [which would yield, if my math is correct, a database of
approximately 62GB].
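As a quick sanity check on those figures (my arithmetic, not part of the
original mail), the totals do work out:

```python
# Back-of-the-envelope check of the volumes quoted above.
records_per_hour = 170_000
hours_per_year = 24 * 365
records_per_year = records_per_hour * hours_per_year  # ~1.49 billion

bytes_per_record = 42
size_gb = records_per_year * bytes_per_record / 1e9   # ~62.5 GB

print(records_per_year)
print(round(size_gb, 1))
```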

So my question is: is it better to create one database with one table, or
one database with many tables?  If I use the many-tables option, I will
have about 1500 tables.  Or is it better to create 1500 databases with one
table each?

BTW, I tried the 1-DB-1-table approach and the server came to a crawl.

Thanks for any opinions.

Mahesh



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]

