>I think it's always a bad idea to create complex setups like you are 
>contemplating.

I'd agree. They are very hard to maintain if you need to change things in the 
future.

>see any reason to use varchar, that creates variable length records 
>which you want to avoid if you can. Especially with the number of 
>records you are compiling.

Duly noted. I've simplified the database a lot, but I will need variable-length 
records for the finished solution; the database would just get too huge, too 
soon, otherwise.
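To put rough numbers on that tradeoff (the average column lengths below are guesses, not measurements, and MyISAM's per-row overhead is ignored):

```python
# Back-of-envelope storage estimate for member_log at 3 billion rows.
# Assumed averages (not measured): ~12-byte usernames, ~60-byte descriptions.
ROWS = 3_000_000_000

# Fixed-length rows: 3-byte MEDIUMINT id + CHAR(30) username + CHAR(200) description
fixed_row = 3 + 30 + 200

# Dynamic rows: VARCHAR stores only the bytes used, plus a 1-byte length prefix
avg_username, avg_description = 12, 60
dynamic_row = 3 + (1 + avg_username) + (1 + avg_description)

fixed_gb = ROWS * fixed_row / 1024**3
dynamic_gb = ROWS * dynamic_row / 1024**3
print(f"fixed rows:   ~{fixed_gb:.0f} GB")
print(f"dynamic rows: ~{dynamic_gb:.0f} GB")
```

Under those assumptions the fixed-length layout costs roughly three times the disk of the dynamic one, which is the "too huge, too soon" problem in a nutshell.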

>You didn't mention what table type you are using. I'm pretty sure you 
>would want to use InnoDB if you are going to have lots of updates going 
>on.

I have my current system using MyISAM dynamic-row tables. It's all in just one 
database at the moment, and it's running incredibly efficiently and fast at the 
current database size of 500 MB. I've slammed it with ApacheBench and it can 
handle 60,000,000 updates and 20,000,000 selects a day before it starts to slow 
down. That would be more than good enough, if I could just reproduce it across 
multiple servers as the service grows.

>I would also try a few other general system changes before playing with 
>you structure. Try finding out where you bottleneck is (disk, memory, 
>cpu). It may be that you just need to load up the machine with RAM to 
>get some nice performance.

Oh, the bottleneck is clear: it's the 4 GB maximum table size running on Unix. 
I have to work around that somehow.
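For what it's worth, here is a sketch of the application-side routing that splitting member_log by username would need. The host names and shard count are placeholders; the only real requirement is that the hash is stable across application servers (so not Python's built-in hash(), which varies between processes):

```python
import hashlib

# Hypothetical shard map: each host would carry the full member table
# plus one quarter of member_log.
SHARDS = [
    "db-a.example.com",
    "db-b.example.com",
    "db-c.example.com",
    "db-d.example.com",
]

def shard_for(username: str) -> str:
    """Map a username to the server holding its member_log rows.

    Uses MD5 so every application server computes the same mapping.
    """
    digest = hashlib.md5(username.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(SHARDS)
    return SHARDS[index]
```

One caveat with plain modulo hashing: adding a server e later reshuffles most usernames onto different shards. A range- or lookup-table-based split (username prefix, or a small directory table mapping users to shards) keeps growth incremental, which matches the "just add server e" plan better.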

6/27/03 1:31:43 PM, Brent Baisley wrote:

>I think it's always a bad idea to create complex setups like you are 
>contemplating. I would try everything else first. For instance, I don't 
>see any reason to use varchar, that creates variable length records 
>which you want to avoid if you can. Especially with the number of 
>records you are compiling.
>You didn't mention what table type you are using. I'm pretty sure you 
>would want to use InnoDB if you are going to have lots of updates going 
>on.
>I would also try a few other general system changes before playing with 
>you structure. Try finding out where you bottleneck is (disk, memory, 
>cpu). It may be that you just need to load up the machine with RAM to 
>get some nice performance.
>
>On Thursday, June 26, 2003, at 03:33 PM, Aodhan Cullen wrote:
>
>> I've got an interesting problem for you all. I'd love to hear what you 
>> think.
>>
>> I've simplified the database design of my web application to its root 
>> problem,
>> so it'll be very easy to see my difficulty, and if you're feeling 
>> generous
>> possibly offer a solution.
>>
>> I have two tables.
>>
>> member table (circa 1,000,000 rows - 100 meg - easily fit in a single 
>> table)
>> username varchar(30) pk
>> password varchar(30)
>> settings varchar(30)
>>
>>
>> member_log table (circa 3,000,000,000 rows - a few 100 gigs - a few 
>> 100 very
>> simple updates and selects a second, with some quite long selects 
>> every minute
>> or so - when the update is being done it needs to select the settings 
>> for the
>> user from the member table before it does the update to the member_log 
>> table)
>>
>> logid medint pk
>> fk_username varchar(30) fk
>> description varchar(200)
>>
>> My read/update ratio would be something along the lines of 1:3, 3 
>> updates for
>> every read. So it is highly unusual, and more or less rules 
>> replication out of
>> the picture.
>>
>> Now ehm … what's the most efficient way of doing this?
>>
>> What I would like to do is:
>>
>> Is have a copy of the member table on every server, then break up the 
>> member_log
>> based on the username, and spread it across multiple servers.
>>
>> database server a
>>
>> full member table
>> 1/4 member_log table
>>
>> database server b
>>
>> full member table
>> 1/4 member_log table
>>
>> database server c
>>
>> full member table
>> 1/4 member_log table
>>
>> database server d
>>
>> full member table
>> 1/4 member_log table
>>
>> In the future, if the servers start to slow down then I'll just add
>>
>> database server e
>>
>> full member table
>> member_log table
>>
>> Well that's what I'd like to do, but I don't know how to do this.
>>
>> My main problem is keeping the full member table in sync.
>>
>> I can't use replication because my read/update ratio just isn't right 
>> for it.
>> And I only want to keep one table in sync, not the whole database.
>>
>> So i don't know what to do. How do I do this, and do this efficently?
>>
>> Any ideas anyone?
>>
>> regards,
>>
>> Aodhan.
>>
>>
>>
>> -- 
>> MySQL General Mailing List
>> For list archives: http://lists.mysql.com/mysql
>> To unsubscribe:    
>> http://lists.mysql.com/[EMAIL PROTECTED]
>>
>>
>-- 
>Brent Baisley
>Systems Architect
>Landover Associates, Inc.
>Search & Advisory Services for Advanced Technology Environments
>p: 212.759.6400/800.759.0577




