John Hicks wrote:
>I don't see a question here.
>
>But that won't stop me from giving a little advice :)
>
>It is generally more important to keep things simple (by not
>denormalizing) than to try to optimize performance by complicating
>things significantly.
>
>Moreover, I can't see how combining several columns into one will
>improve performance. I would think it will slow things down whenever you
>have to retrieve data, particularly if you query against anything in column 3.
>
>And now you say you want to save disk space by compressing the field
>separators in your combined column?
>
>Forget it all! Go back to a fully normalized design. If you have
>problems, post them here.
>
>

I somewhat disagree with what you said regarding denormalization, but believe 
me when I say that I have experienced a 90% performance improvement with it.

As I said before, my table has over 20 million rows; if it were normalized, 
that number would be around 20 billion, since it is a 1:N relation.
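
To make the layout concrete, here is roughly what the two designs look like 
(the table and column names below are invented for the example):

  -- Normalized: one row per (key, item) pair -- around 20 billion rows
  CREATE TABLE detail (
      main_id  INT UNSIGNED NOT NULL,
      item_id  INT UNSIGNED NOT NULL,
      item_val INT UNSIGNED NOT NULL,
      PRIMARY KEY (main_id, item_id)
  );

  -- Denormalized: one row per key, with the pairs packed into a single
  -- column as 'item#value;item#value;'
  CREATE TABLE main (
      id       INT UNSIGNED NOT NULL PRIMARY KEY,
      column_1 VARCHAR(50)  NOT NULL,
      column_denormalized MEDIUMTEXT NOT NULL
  );
  -- e.g. row (1, 'Test', '1#20202;5#1000101;')

A lookup by primary key then fetches everything in one row, and the 
application splits the packed column on ';' and '#'.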

Of course I don't make any selections based on column 3, only on the 
table's keys.

Forget that!!! Going back to normalization would not be viable for me. I need 
a response time below 0.01 sec (and I've been achieving less than that).

However, I would like to make better use of this column's space, since I use 
only two characters as separators.

Here's my question: is there any way I could minimize that? Is there any 
specific character that would occupy less space?
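
(For concreteness: I am assuming a single-byte character set such as latin1, 
where LENGTH() returns bytes and every printable ASCII character, '#' and ';' 
included, costs exactly one byte; e.g.:

  -- LENGTH() counts bytes; in latin1 every ASCII character is 1 byte,
  -- so each item#value pair carries 2 bytes of separator overhead
  SELECT LENGTH('1#20202;5#1000101;');   -- 18
  SELECT LENGTH('#'), LENGTH(';');       -- 1, 1

so any other single-character separator would seem to take the same space.)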

Once again, thank you very much.

==============
Best regards,
Jan Gomes - [EMAIL PROTECTED]

>Jan Gomes wrote:
>> Hi guys,
>>
>> I needed to denormalize my table to obtain high performance, but I want to
>> make the best use of the space.
>>
>> I joined two columns (from the JOIN) into one column, using two separators
>> (# and ;).
>>
>> Example:
>> ID | column_1 | column_denormalized
>> 1  | Test     | 1#20202;5#1000101;
>>
>> Is there some method to minimize the space (disk space) required for these
>> separators? Like some character that I can use to minimize the table size?
>>
>> PS: The table has 20,000,000 rows with 2 GB data length.

