> adding an integer 'userid' to member table and using it as a foreign key
> in member_log table instead of username.

You're right. The main reason I'm doing this is for legacy reasons.
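If I do migrate later, I imagine the change would look roughly like this (a sketch only; the table and column names follow Joseph's suggestion, not my actual schema):

```sql
-- Add a surrogate integer key to the member table.
-- AUTO_INCREMENT requires an index, hence the UNIQUE KEY in the same statement.
ALTER TABLE member
    ADD COLUMN userid INT UNSIGNED NOT NULL AUTO_INCREMENT,
    ADD UNIQUE KEY (userid);

-- member_log then carries a 4-byte userid instead of a 30-byte
-- username, shrinking both data and index sizes.
ALTER TABLE member_log
    ADD COLUMN fk_userid INT UNSIGNED NOT NULL;

-- Backfill using a multi-table UPDATE (MySQL 4.0.4+).
UPDATE member_log ml, member m
   SET ml.fk_userid = m.userid
 WHERE ml.fk_username = m.username;
```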

> Also, I don't know what you mean by 'medint' but if it is a mediumint,
> you will be in trouble since its range is [-8388608..8388607].

The primary key on that table is made up of the username and the logid; sorry, I 
should have made that clearer. Each user will only have a maximum of about a 
million rows, and logid isn't an auto-incrementing column.
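So the rough shape of the table is something like this (a sketch based on the types above, not the exact DDL):

```sql
CREATE TABLE member_log (
    fk_username VARCHAR(30)        NOT NULL,
    -- logid is a per-user counter; even signed MEDIUMINT tops out at
    -- 8,388,607, comfortably above ~1M rows per user.
    logid       MEDIUMINT UNSIGNED NOT NULL,
    description VARCHAR(200),
    PRIMARY KEY (fk_username, logid)
);
```

The range problem Joseph raised would only bite if logid were global across all users.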

>database: check 'replicate-do-table' option.

Thanks, I'd never seen that option before.
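For the archive, I gather the filter goes in each slave's my.cnf, something like this (database name is just a placeholder):

```ini
# Slave-side replication filter: only apply changes to the member table,
# ignoring everything else replicated from the master.
[mysqld]
replicate-do-table = mydb.member
```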

I'm really going to take another look at the whole structure before I decide 
what to do.

6/27/03 10:07:16 AM, Joseph Bueno wrote:

>Sorry to reply to myself but after reading your post again,
>I think you can use replication to maintain member table in sync:
>it is possible to restrict replication to some tables within a
>database: check 'replicate-do-table' option.
>
>Hope this helps
>Joseph Bueno
>
>Joseph Bueno wrote:
>> I don't have direct answers to your questions but you should consider
>> adding an integer 'userid' to member table and using it as a foreign key
>> in member_log table instead of username. It will make selects and joins
>> faster, data and index sizes smaller.
>> Also, I don't know what you mean by 'medint' but if it is a mediumint,
>> you will be in trouble since its range is [-8388608..8388607].
>> You should use at least an "unsigned int" and maybe an "unsigned
>> bigint" if you suspect that you will have more than 4 billion rows.
>> 
>> Hope this helps
>> Joseph Bueno
>> 
>> Aodhan Cullen wrote:
>> 
>>> I've got an interesting problem for you all. I'd love to hear what you 
>>> think.
>>>
>>> I've simplified the database design of my web application to its root 
>>> problem, so it'll be very easy to see my difficulty, and if you're 
>>> feeling generous possibly offer a solution.
>>>
>>> I have two tables.
>>>
>>> member table (circa 1,000,000 rows - 100 meg - easily fit in a single 
>>> table)
>>> username varchar(30) pk
>>> password varchar(30)
>>> settings varchar(30)
>>>
>>>
>>> member_log table (circa 3,000,000,000 rows - a few hundred gigs - a few 
>>> hundred very simple updates and selects a second, with some quite long 
>>> selects every minute or so - each update needs to select the user's 
>>> settings from the member table before it updates the member_log table)
>>>
>>> logid medint pk
>>> fk_username varchar(30) fk
>>> description varchar(200)
>>>
>>> My read/update ratio would be something along the lines of 1:3, 3 
>>> updates for every read. So it is highly unusual, and more or less 
>>> rules replication out of the picture.
>>>
>>> Now, ehm, what's the most efficient way of doing this?
>>>
>>> What I would like to do is have a copy of the member table on every 
>>> server, then break up the member_log table based on the username and 
>>> spread it across multiple servers.
>>>
>>> database server a
>>>
>>> full member table
>>> 1/4 member_log table
>>>
>>> database server b
>>>
>>> full member table
>>> 1/4 member_log table
>>>
>>> database server c
>>>
>>> full member table
>>> 1/4 member_log table
>>>
>>> database server d
>>>
>>> full member table
>>> 1/4 member_log table
>>>
>>> In the future, if the servers start to slow down then I'll just add
>>>
>>> database server e
>>>
>>> full member table
>>> member_log table
>>>
>>> Well, that's what I'd like to do, but I don't know how to do it.
>>>
>>> My main problem is keeping the full member table in sync.
>>>
>>> I can't use replication because my read/update ratio just isn't right 
>>> for it. And I only want to keep one table in sync, not the whole 
>>> database.
>>>
>>> So I don't know what to do. How do I do this, and do it efficiently?
>>>
>>> Any ideas anyone?
>>>
>>> regards,
>>>
>>> Aodhan.
>>>
>>>
>>>
>> 
>> 
>> 
>
>
>
>



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
