Re: Unique Index efficiency query

2003-11-27 Thread Chris Elsworth
On Wed, Nov 26, 2003 at 06:44:57PM -0600, Matt W wrote:
> Hi Chris,
> 
> It doesn't take MySQL any more or less time to update a unique index
> than a non-unique one. :-)

Ah, a nice simple answer to a long-drawn-out question :) Thanks Matt,
that's just the reassurance I was after; I didn't want inserts to suddenly
slow down by a factor of 10. I realise they'll slow down slightly with
another index to update, but never mind.
Now I can go and give MySQL half an hour's work creating this index
:)
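
For reference, the statement in question would look something like the
following; the key name fid_segment is only an illustration, not
something quoted from the thread:

ALTER TABLE MessageIDs
  ADD UNIQUE fid_segment (mid_fileid, mid_segment);

On a 40-million-row MyISAM table, ALTER TABLE copies the table and
rebuilds its indexes, which is where the half hour goes.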

-- 
Chris




Re: Unique Index efficiency query

2003-11-26 Thread Matt W
Hi Chris,

It doesn't take MySQL any more or less time to update a unique index
than a non-unique one. :-)


Hope that helps.


Matt


- Original Message -
From: "Chris Elsworth"
Sent: Wednesday, November 26, 2003 12:14 PM
Subject: Unique Index efficiency query


> Hello,
>
> Let me just outline the table in question first.
> I have a rather large (40,000,000 rows) table as follows:
>
>Table: MessageIDs
> Create Table: CREATE TABLE `MessageIDs` (
>   `mid_msgid` char(96) NOT NULL default '',
>   `mid_fileid` int(10) unsigned NOT NULL default '0',
>   `mid_segment` smallint(5) unsigned NOT NULL default '0',
>   `mid_date` int(10) unsigned NOT NULL default '0',
>   `mid_bytes` mediumint(8) unsigned NOT NULL default '0',
>   KEY `fid_msgid` (`mid_fileid`,`mid_msgid`(5)),
>   KEY `fid_bytes` (`mid_fileid`,`mid_bytes`),
>   KEY `mid_date` (`mid_date`)
> ) TYPE=MyISAM PACK_KEYS=1 DELAY_KEY_WRITE=1
>
>
> Index details:
> mysql> show indexes from MessageIDs;
> +------------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
> | Table      | Non_unique | Key_name  | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
> +------------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
> | MessageIDs |          1 | fid_msgid |            1 | mid_fileid  | A         |     1114302 |     NULL | NULL   |      | BTREE      |         |
> | MessageIDs |          1 | fid_msgid |            2 | mid_msgid   | A         |    20057449 |        5 | NULL   |      | BTREE      |         |
> | MessageIDs |          1 | fid_bytes |            1 | mid_fileid  | A         |     1114302 |     NULL | NULL   |      | BTREE      |         |
> | MessageIDs |          1 | fid_bytes |            2 | mid_bytes   | A         |    40114898 |     NULL | NULL   |      | BTREE      |         |
> | MessageIDs |          1 | mid_date  |            1 | mid_date    | A         |     1744126 |     NULL | NULL   |      | BTREE      |         |
> +------------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
>
> Now, what I want to do with this table is create a unique index on
> (mid_fileid, mid_segment).
>
> How does MySQL deal with ensuring that a unique index doesn't end up
> non-unique with a table this large? Is making this index going to
> slow down my inserts proportionally as the table grows? Would I be
> better off making it a non-unique index, and doing a SELECT to ensure
> I'm not inserting a duplicate?
>
> Thanks for any tips,
>
> --
> Chris





Unique Index efficiency query

2003-11-26 Thread Chris Elsworth
Hello,

Let me just outline the table in question first.
I have a rather large (40,000,000 rows) table as follows:

   Table: MessageIDs
Create Table: CREATE TABLE `MessageIDs` (
  `mid_msgid` char(96) NOT NULL default '',
  `mid_fileid` int(10) unsigned NOT NULL default '0',
  `mid_segment` smallint(5) unsigned NOT NULL default '0',
  `mid_date` int(10) unsigned NOT NULL default '0',
  `mid_bytes` mediumint(8) unsigned NOT NULL default '0',
  KEY `fid_msgid` (`mid_fileid`,`mid_msgid`(5)),
  KEY `fid_bytes` (`mid_fileid`,`mid_bytes`),
  KEY `mid_date` (`mid_date`)
) TYPE=MyISAM PACK_KEYS=1 DELAY_KEY_WRITE=1


Index details:
mysql> show indexes from MessageIDs;
+------------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table      | Non_unique | Key_name  | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+------------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| MessageIDs |          1 | fid_msgid |            1 | mid_fileid  | A         |     1114302 |     NULL | NULL   |      | BTREE      |         |
| MessageIDs |          1 | fid_msgid |            2 | mid_msgid   | A         |    20057449 |        5 | NULL   |      | BTREE      |         |
| MessageIDs |          1 | fid_bytes |            1 | mid_fileid  | A         |     1114302 |     NULL | NULL   |      | BTREE      |         |
| MessageIDs |          1 | fid_bytes |            2 | mid_bytes   | A         |    40114898 |     NULL | NULL   |      | BTREE      |         |
| MessageIDs |          1 | mid_date  |            1 | mid_date    | A         |     1744126 |     NULL | NULL   |      | BTREE      |         |
+------------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+

Now, what I want to do with this table is create a unique index on
(mid_fileid, mid_segment).

How does MySQL deal with ensuring that a unique index doesn't end up
non-unique with a table this large? Is making this index going to
slow down my inserts proportionally as the table grows? Would I be
better off making it a non-unique index, and doing a SELECT to ensure
I'm not inserting a duplicate?

Thanks for any tips,

-- 
Chris
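
For comparison, the two approaches weighed above could be sketched
roughly as follows; the key name, the sample values and the message-id
are invented for illustration, and INSERT IGNORE only does its job once
a unique key actually exists on (mid_fileid, mid_segment):

-- Option 1: let a unique key enforce it and silently skip duplicates.
ALTER TABLE MessageIDs
  ADD UNIQUE fid_segment (mid_fileid, mid_segment);

INSERT IGNORE INTO MessageIDs
  (mid_msgid, mid_fileid, mid_segment, mid_date, mid_bytes)
VALUES ('<example-msgid>', 12345, 1, 1069891200, 350000);

-- Option 2: keep the key non-unique and check by hand first.
-- Two round trips, and racy unless the table is locked around the pair.
SELECT COUNT(*) FROM MessageIDs
 WHERE mid_fileid = 12345 AND mid_segment = 1;
-- ...then INSERT only if the count came back 0.

Either way the duplicate check amounts to a key lookup in the same
B-tree the insert has to descend anyway, which is the intuition behind
Matt's answer above.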




Re: Efficiency Query

2002-10-14 Thread Insanely Great

Greetings...

MySQL can handle data hundreds of times larger than that very efficiently, so
don't worry about handling 500 MB.

My database holds 8 GB of data and it performs very well.

So don't worry.

Rgds
Insane

- Original Message -
From: "Alberto Ruiz Cristina" <[EMAIL PROTECTED]>
To: "mysql" <[EMAIL PROTECTED]>
Sent: Monday, October 14, 2002 6:27 PM
Subject: Efficiency Query


> Hi all
>
> I am new to MySQL and I am thinking about using it to manage a
> database, which would have an approximate size of 500 MB. It is made
> up of vectors.
>
> The trouble is that I am worried about the ability of MySQL to handle
> that amount of data.
>
> Do you know of any article or study about MySQL efficiency? If not,
> maybe you could just tell me about your own experience.
>
> Thank you in advance!
>
> Alberto Ruiz
> Spain
>
>






Re: Efficiency Query

2002-10-14 Thread Michael Zimmermann

On Monday, 14 October 2002 at 14:57, Alberto Ruiz Cristina wrote:
> Hi all
>
> I am new to MySQL and I am thinking about using it to manage a
> database, which would have an approximate size of 500 MB. It is made
> up of vectors.

Greetingz,

from my experience with several databases since the 1970s,
efficiency - especially the efficiency of relational databases -
depends primarily on the design of your data and programs.
It depends first of all on what you want to do with the data:
in addition to what data you have, how many updates and
what kind of updates you are doing, how many queries and what
kind of queries you expect, how many clients are simultaneously
accessing the database and what they are doing...

Any paper or book about relational database design and its
tuning will probably help you; MySQL is not "special" in
that regard.

Then the next important step will be tuning your
database together with your applications (or under a simulated
load which represents your environment). This may require
some changes in the layout of your data and tables too,
but that comes second, because the major faults are often
made in the design of the application algorithms together
with the design of the database. A design flaw can degrade
your performance by several orders of magnitude (and can
be difficult to fix), while the basic database tuning
thingamajigs (like adding or dropping an index) are
relatively cheap to do later.


> The trouble is that I am worried about the ability of
> MySQL to handle that amount of data.

The amount of data is not a problem, but how you
access and update the data might be. For example:
if you want to cut a hyperplane or space slice
(sorry, I don't know the exact English term for that)
through your vector world, which orientation
will these hyperplanes have? Will they be normal
to one of your dimensions, etc.?
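
To make that concrete with a purely hypothetical layout - nothing below
comes from Alberto's actual schema - if each vector is a row with
explicit coordinate columns, a slice normal to one axis is just a
restriction on that column, and it only stays cheap if that column is
indexed:

CREATE TABLE vectors (
  vec_id INT UNSIGNED NOT NULL,
  x FLOAT NOT NULL,
  y FLOAT NOT NULL,
  z FLOAT NOT NULL,
  PRIMARY KEY (vec_id),
  KEY idx_z (z)
) TYPE=MyISAM;

-- A thin slice normal to the z axis: resolved via idx_z,
-- but a full table scan if that index is missing.
SELECT vec_id, x, y
  FROM vectors
 WHERE z BETWEEN 0.49 AND 0.51;

A slice normal to x or y would need its own index, which is the point
here: the access pattern, not the 500 MB, drives the design.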

Just storing 500 MB of vector data might even be
best done in a binary file without using any
database at all: one large matrix which is read
(with appropriate locking) into virtual memory,
processed, and perhaps written out again.

If your problem is a good fit for a relational database,
then MySQL can do the job. If not, then other
RDBMSs will probably have the same problems with it.

Get the idea?


Greetings
Michael
-- 
Michael Zimmermann  (http://vegaa.de)





Re: Efficiency Query

2002-10-14 Thread Iikka Meriläinen

On Mon, 14 Oct 2002, Alberto Ruiz Cristina wrote:

> Hi all
>
> I am new to MySQL and I am thinking about using it to manage a
> database, which would have an approximate size of 500 MB. It is made
> up of vectors.
>
> The trouble is that I am worried about the ability of MySQL to handle
> that amount of data.

Hi,

Honestly speaking, 500 MB of almost _any_ data is nothing for MySQL. It can
handle even several hundred gigabytes of data, assuming it is well
organized and the queries are optimized.

> Do you know of any article or study about MySQL efficiency? If not,
> maybe you could just tell me about your own experience.

My own experience is limited to fairly small systems, but I imagine that, for
example, Mr. Jeremy Zawodny (an active participant on this list) could tell
you more about using MySQL in _really_ heavily loaded systems. See any of his
signatures for examples... ;-) Quite impressive numbers there.

Anyway, MySQL efficiency is awesome. If you don't really require transaction
support in your application, consider using MyISAM as your table handler.
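
As a small illustration (the table and column names are invented), the
handler is simply the table type given at creation time, and an existing
table can be converted later with ALTER TABLE:

CREATE TABLE vector_data (
  id INT UNSIGNED NOT NULL,
  payload BLOB NOT NULL
) TYPE=MyISAM;

ALTER TABLE vector_data TYPE=MyISAM;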

Best regards,
Iikka


*****************************
* Iikka Meriläinen          *
* E-mail: [EMAIL PROTECTED] *
* Vaala, Finland            *
*****************************






Efficiency Query

2002-10-14 Thread Alberto Ruiz Cristina

Hi all

I am new to MySQL and I am thinking about using it to manage a
database, which would have an approximate size of 500 MB. It is made
up of vectors.

The trouble is that I am worried about the ability of MySQL to handle
that amount of data.

Do you know of any article or study about MySQL efficiency? If not,
maybe you could just tell me about your own experience.

Thank you in advance!

Alberto Ruiz
Spain

