Repairing a table

2007-02-02 Thread Ian Barnes

Hi,

I'm having another problem with a large db. I have a 160GB drive dedicated
to the database partition, and one database in particular is taking up the
space. My .MYD is 78GB and my .MYI is 34GB. That's fine, I have a couple of
GB left, but whenever I try to do anything on it the disk fills up and I have
to run REPAIR TABLE tablename QUICK to get it going again, and that takes
forever since it's quite large.

Is there any other way that I could repair the indexes quickly, without
having to wait up to 48 hours for the index rebuild? Also, if I run out of
disk space, is there any way that I can get it working without truncating
the data?
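
One approach worth sketching - an assumption on my part, not something
already covered in this thread - is to rebuild the indexes offline with
myisamchk, pointing its temporary files at a disk that still has free space
and giving it a large sort buffer so the repair can sort keys instead of
rebuilding them through the key cache. The tmpdir path, the .MYI path and
the buffer size below are placeholders:

# run with mysqld stopped, or with the table flushed and locked first
myisamchk --recover --quick --sort_buffer_size=256M \
  --tmpdir=/path/on/another/disk /var/db/mysql/dbname/tablename.MYI

With --quick it only rebuilds the index file and leaves the .MYD alone,
which matches the REPAIR TABLE ... QUICK you are running now.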

Thanks a lot
Ian


Questions about delete and optimize

2007-02-01 Thread Ian Barnes

Hi,

We are looking at various methods that we can use to effectively and
efficiently delete lots of rows from a database and then optimize it. Our
main concern is disk space - the partition we are working with is only 12GB,
and our databases vary in size from 1GB (not a problem) to 11GB. In the
example below I will use one whose .MYD is 6.5GB and whose .MYI is 2.7GB.
There are around 28,900,000 rows in the table.

Once a month we run an automated program that deletes rows older than X
months and then we attempt to optimize the table in question. The delete
query we use is:
DELETE FROM table WHERE date < (current_date - interval 2 month). Now my
questions surrounding this are:

1.) Is it quicker to do a query where we say something like: DELETE FROM
table WHERE date <= '2006-11-01' instead of date < (current_date - interval 2 month)?
2.) Does the current way we do it use a tmp table that is written to disk?

Then we run the simple optimize command, OPTIMIZE TABLE tablename, and that
is normally where we run into the problem: mysql tries to create a tmp file
while optimizing, runs out of space, and then corrupts the main table. We
need to run the optimize because after deleting all those rows, the space
isn't freed up until we do. So my other question is: can we do the optimize
a different way, or is there some way that we can insert and delete rows
that would require less optimization?
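
For what it is worth, a sketch of the delete variants from question 1 (the
table and column names come from the description above, the literal cutoff
date is just an illustrative value, and whether any of this avoids the
tmp-table behaviour is an open question):

-- cutoff computed by the server each time:
DELETE FROM tablename WHERE date < (CURRENT_DATE - INTERVAL 2 MONTH);
-- cutoff precomputed by the calling program:
DELETE FROM tablename WHERE date < '2006-12-01';
-- deleting in smaller batches keeps each statement's work smaller;
-- repeat until no rows are affected:
DELETE FROM tablename WHERE date < '2006-12-01' LIMIT 100000;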

Thanks in advance,
Ian


Doing sums if certain conditions are true

2006-08-29 Thread Ian Barnes

Hi,

I am trying to build a query that does a sum only if a certain condition is met.
For example:

I have a table with four fields: code1, code2, duration and bytes. My current
query looks something like this: SELECT code1 as code, sum(bytes) as bin,
max(bytes) as min, avg(bytes) as ain, count(bytes) as cin, sum(duration) as
dur from data group by code; which returns something like this:

+------+--------------+-----------+-----------+----------+-------------+
| code | bin          | min       | ain       | cin      | dur         |
+------+--------------+-----------+-----------+----------+-------------+
| NONE | 103939170759 | 485089817 | 3739.1827 | 27797297 | 11681839027 |
+------+--------------+-----------+-----------+----------+-------------+

Now, what I need to do is exclude certain info from the above NONE entry if
code2 is equal to something. So for example (in PHP terms):

if(code == 'NONE') {
    if(code2 == 'DENIED') { continue; }
    else {
        bin += bytes;
        if(bytes < min) { min = bytes; }
        cin++;
        dur += duration;
    }
}

After that I could work out the average by dividing bin / cin for what the
report calls ain. Is there any way of achieving this in the SQL query,
because the above is a hugely tedious way of doing it? I know MySQL has an
IF() function, but I have no idea how to implement what I want to achieve
above using it.
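
For the record, here is a sketch of how that could look with conditional
aggregates, mirroring the aggregates in the query above (table and column
names are taken from it; untested). IF() inside SUM()/MAX()/AVG() skips the
unwanted rows, and summing a boolean expression counts the rows where it is
true:

SELECT code1 AS code,
       SUM(IF(code1 = 'NONE' AND code2 = 'DENIED', 0, bytes))    AS bin,
       MAX(IF(code1 = 'NONE' AND code2 = 'DENIED', NULL, bytes)) AS min,
       AVG(IF(code1 = 'NONE' AND code2 = 'DENIED', NULL, bytes)) AS ain,
       SUM(NOT (code1 = 'NONE' AND code2 = 'DENIED'))            AS cin,
       SUM(IF(code1 = 'NONE' AND code2 = 'DENIED', 0, duration)) AS dur
FROM data
GROUP BY code1;

MAX() and AVG() ignore NULLs, so feeding them NULL for the excluded rows
leaves every other group's figures unchanged.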

Thanks in advance.
Ian


Re-importing a mysqldump file

2006-06-25 Thread Ian Barnes
Hi,

 

I need to automatically re-import a mysqldump file, but when importing it I
need to set a certain field to a specific value for every imported row. For
example, my table looks like this:

Id, Name, Value, Serverid

Now, on the remote server, name and value get exported, and when I re-import
the dump here, I need id to auto-increment and serverid to be set to
something that I specify, depending on which file I'm importing.

Is this possible? Or would the best way be to import the dumped file into a
temp table and then select out of the temp table into my correct table?
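
In case it helps, a rough sketch of that temp-table route (everything here
is an assumption: the real table is called mytable in database mydb, the
dump is dump.sql, and 42 stands in for the server id of the file being
imported):

-- 1. load the dump into a scratch database so it cannot touch the real table
CREATE DATABASE import_tmp;
-- shell: mysql import_tmp < dump.sql
-- 2. copy across, letting Id auto-increment and stamping Serverid per file
INSERT INTO mydb.mytable (Name, Value, Serverid)
SELECT Name, Value, 42 FROM import_tmp.mytable;
DROP DATABASE import_tmp;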

 

Thanks for any help!

Ian



RE: Re-importing a mysqldump file

2006-06-25 Thread Ian Barnes
Hi,

No, unfortunately not...

Cheers
Ian

 -Original Message-
 From: John Meyer [mailto:[EMAIL PROTECTED]
 Sent: 25 June 2006 05:41 PM
 To: mysql@lists.mysql.com
 Subject: Re: Re-importing a mysqldump file
 
 Ian Barnes wrote:
  Is this possible? Or would the best way be to import the dumped file
 into a
  temp table and then select out of the temp table into my correct table ?
 
 
 
 Anyway to use a trigger?
 
 --
 Online library -- http://pueblonative.110mb.com
 126 books and counting.
 



RE: Importing 3Gb File

2006-06-11 Thread Ian Barnes
Hi,

This is all I could see just before it happened:

mysql> show processlist;
+----+------+-----------+---------+---------+------+----------------+----------------------------------+
| Id | User | Host      | db      | Command | Time | State          | Info                             |
+----+------+-----------+---------+---------+------+----------------+----------------------------------+
| 11 | root | localhost | testing | Query   | 0    | creating table | CREATE TABLE `upgrade_history` (
  `upgrade_id` int(10) NOT NULL auto_increment,
  `upgrade_vers                   |
| 12 | root | localhost |         | Query   | 0    |                | show processlist                 |
+----+------+-----------+---------+---------+------+----------------+----------------------------------+
2 rows in set (0.00 sec)


After that, the following state persisted for 60 seconds and then it timed
out:

mysql> show processlist;
+----+------+-----------+---------+---------+------+-------+------------------+
| Id | User | Host      | db      | Command | Time | State | Info             |
+----+------+-----------+---------+---------+------+-------+------------------+
| 11 | root | localhost | testing | Sleep   | 0    |       |                  |
| 12 | root | localhost |         | Query   | 0    |       | show processlist |
+----+------+-----------+---------+---------+------+-------+------------------+
2 rows in set (0.00 sec)


I have put more RAM in (I now have 768MB, as well as 3GB of swap).

Thanks for the suggestions!

Cheers
Ian


 -Original Message-
 From: mos [mailto:[EMAIL PROTECTED]
 Sent: 09 June 2006 07:15 PM
 To: mysql@lists.mysql.com
 Subject: Re: Importing 3Gb File
 
 At 10:20 AM 6/8/2006, you wrote:
 Hi,
 
 I am trying to import a 3.2Gb sql dump file back into my sql server
 (4.1.12)
 and im coming across the following error:
 
 mysql: Out of memory (Needed 178723240 bytes)
 mysql: Out of memory (Needed 178719144 bytes)
 
 That error comes up after about 30 minutes worth of import and I would
 guess
 about half way through the import.
 
 What does Show Processlist say it's doing just before the crash? I've had
 problems with Load Data on a very large table (500 million rows) because the
 machine did not have enough memory to build the index. The data was loaded
 just fine; it's building the index that hung it out to dry, because that
 eats up memory like crazy. How much memory do you have on your machine? The
 cheapest solution might be to go out and get a few more GB of RAM.
 
 Mike
 
 
 

RE: Importing 3Gb File

2006-06-11 Thread Ian Barnes
Hi,

Yes - I don't actually know whether I have very large BLOBs, but the
possibility exists and is quite likely.

I am running FreeBSD, so I don't have the ulimit program; the only program I
have is called limits, and this is what I get when running it:

[EMAIL PROTECTED] /home # limits
Resource limits (current):
  cputime          infinity secs
  filesize         infinity kb
  datasize           524288 kb
  stacksize           65536 kb
  coredumpsize     infinity kb
  memoryuse        infinity kb
  memorylocked     infinity kb
  maxprocesses         5446
  openfiles           10893
  sbsize           infinity bytes
  vmemoryuse       infinity kb
[EMAIL PROTECTED] /home #

I have upped the RAM in the machine from 512MB to 768MB, so I think I should
have enough RAM now.
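
As an aside - and this is an assumption on my part, not something
established in the thread - the datasize line above caps each process at
512MB, which looks like the kind of per-process limit Jeremy describes
below, and ulimit is usually a shell builtin rather than a separate program.
A minimal sketch of raising it in the shell that runs the import (or in the
script that starts mysqld):

# sh/bash builtin; the value is otherwise given in kilobytes
ulimit -d unlimited   # may also need the hard limit raised (login.conf / kern.maxdsiz)
mysql -f -p < alldb.sql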

Any other ideas?

Thanks,
Ian

 -Original Message-
 From: Jeremy Cole [mailto:[EMAIL PROTECTED]
 Sent: 09 June 2006 05:01 PM
 To: Ian Barnes
 Cc: mysql@lists.mysql.com
 Subject: Re: Importing 3Gb File
 
 Hi Ian,
 
  I am trying to import a 3.2Gb sql dump file back into my sql server
 (4.1.12)
  and im coming across the following error:
 
  mysql: Out of memory (Needed 178723240 bytes)
  mysql: Out of memory (Needed 178719144 bytes)
 
 That error message comes from a single attempt to allocate about 178MB at
 one time.  Do you have large BLOBs in your data?
 
 This error message means that mysqld is being denied memory by the OS,
 either because you are actually out of memory, or because your ulimit
 has been reached (more likely).
 
 Check your ulimits for your system with ulimit -a, and adjust them if
 necessary in the mysql.server or mysqld_safe script (both run as
 root, so they can adjust ulimits upwards).
 
 Regards,
 
 Jeremy
 
 --
 Jeremy Cole
 MySQL Geek, Yahoo! Inc.
 Desk: 408 349 5104
 



Importing 3Gb File

2006-06-08 Thread Ian Barnes
Hi,

I am trying to import a 3.2GB SQL dump file back into my MySQL server (4.1.12)
and I'm coming across the following error:

mysql: Out of memory (Needed 178723240 bytes)
mysql: Out of memory (Needed 178719144 bytes)

That error comes up after about 30 minutes' worth of importing, and I would
guess about half way through. The file in question is a mysqldump
--all-databases file from another server that I'm trying to import onto my
desktop machine. I have tried to alter the my.cnf file a bit, and this is
what it looks like:

[client]
#password   = your_password
port= 3306
socket  = /tmp/mysql.sock

[mysqld]
port= 3306
socket  = /tmp/mysql.sock
skip-locking
key_buffer = 64M
max_allowed_packet = 8M
table_cache = 512
sort_buffer_size = 8M
net_buffer_length = 8M
myisam_sort_buffer_size = 45M
set-variable=max_connections=300

# Replication Master Server (default)
# binary logging is required for replication
#log-bin

# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id   = 1

#bdb_cache_size = 4M
#bdb_max_lock = 1

# Uncomment the following if you are using InnoDB tables
#innodb_data_home_dir = /var/db/mysql/
#innodb_data_file_path = ibdata1:10M:autoextend
#innodb_log_group_home_dir = /var/db/mysql/
#innodb_log_arch_dir = /var/db/mysql/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
#innodb_buffer_pool_size = 16M
#innodb_additional_mem_pool_size = 2M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 5M
#innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates

[isamchk]
key_buffer = 10M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 10M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout


Does anyone have any advice as to what I could change to make it import and
not break half way through? The command I'm running to import is:
mysql -n -f -p < alldb.sql

Thanks in advance,
Ian





FW: Importing 3Gb File

2006-06-08 Thread Ian Barnes
Sorry, forgot to send to the list as well. My reply is at the bottom.

 -Original Message-
 From: Ian Barnes [mailto:[EMAIL PROTECTED]
 Sent: 08 June 2006 09:58 PM
 To: 'Kishore Jalleda'
 Subject: RE: Importing 3Gb File
 
 
 
  -Original Message-
  From: Kishore Jalleda [mailto:[EMAIL PROTECTED]
  Sent: 08 June 2006 06:18 PM
  To: Ian Barnes
  Cc: mysql@lists.mysql.com
  Subject: Re: Importing 3Gb File
 
  
  
 
  Assuming that you have dumped your databases using mysqldump, what options
  did you give to mysqldump? As of 4.1, --opt is enabled by default, and this
  enables the --quick option, which basically forces mysqldump to retrieve
  one row at a time instead of buffering the whole table into memory and then
  writing out the result.

  So if you have the --quick option enabled in mysqldump, you should not be
  getting the out-of-memory errors. Also, I see you are using the -n option
  with the mysql command-line client, which does not buffer SQL statements
  into memory before flushing them; but if the dump itself consists of large
  rows of table data flushed into one large SQL statement, then the mysql
  client would still treat it as one query, so I am sure you have to change
  the way you dump your tables...

  Kishore Jalleda
  http://kjalleda.googlepages.com/projects
 
 Hi Kishore,
 
 Thanks for the info!
 
 I don't know how the file was dumped (I know it was via mysqldump), but I
 assume it was the default dump method (mysqldump -uuser -p --all-databases
 > alldb.sql).

 I have tried running it with the -q option and it still fails with that
 message. (mysql -f -q -p < /home/iandb.sql)
 
 Any other ideas?
 
 Cheers
 Ian
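
Purely as a sketch of the "change the way you dump your tables" suggestion
above (these are standard mysqldump options, but whether they cure this
particular failure is an assumption): --quick avoids buffering whole tables
on the dumping side, and --skip-extended-insert writes one INSERT per row,
so no single statement grows to hundreds of megabytes, at the cost of a
slower import:

mysqldump -uuser -p --all-databases --quick --skip-extended-insert > alldb.sql
mysql -f -p < alldb.sql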





Repairing table problem

2006-06-08 Thread Ian Barnes
Hi,

 

When I try to repair a table that has been marked as crashed, I get the
following errors:

 

| db.table | repair | info   | Key 1 - Found wrong stored record at 885548           |
| db.table | repair | error  | Not enough memory for blob at 885648 (need 842150190) |
| db.table | repair | info   | Key 1 - Found wrong stored record at 885548           |
| db.table | repair | error  | Not enough memory for blob at 885648 (need 842150190) |
| db.table | repair | status | Operation failed                                       |

 

Any idea how to fix it? There is more than enough space in the db directory.
Should I increase the memory that MySQL is allowed to use? 
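
One possible avenue - an assumption, not something already suggested in this
thread - is to repair the table offline with myisamchk and larger repair
buffers, since REPAIR TABLE inside the server is bounded by settings such as
myisam_sort_buffer_size. The path and buffer sizes below are placeholders:

# run with mysqld stopped, or with the table flushed and locked first
myisamchk --recover --key_buffer_size=256M --sort_buffer_size=256M \
  /var/db/mysql/dbname/tablename.MYI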

 

I am running MySQL 5.0.18

 

Thanks,

Ian



Query Help

2006-01-24 Thread Ian Barnes
Hi,

This is my current query, which works in MySQL 4 but not in 5. It's from
Mambo, but I'm trying to modify it because they don't officially support
MySQL 5 yet.

The original query:
SELECT c.*, g.name AS groupname, cc.name, u.name AS editor, f.content_id AS
frontpage, s.title AS section_name, v.name AS author FROM mos_content AS c,
mos_categories AS cc, mos_sections AS s LEFT JOIN mos_groups AS g ON g.id =
c.access LEFT JOIN mos_users AS u ON u.id = c.checked_out LEFT JOIN
mos_users AS v ON v.id = c.created_by LEFT JOIN mos_content_frontpage AS f
ON f.content_id = c.id WHERE c.state = 0 AND c.catid=cc.id AND
cc.section=s.id AND s.scope='content' AND c.sectionid='1' ORDER BY
cc.ordering, cc.title, c.ordering LIMIT 0,10

My modified version:
SELECT c.*, g.name AS groupname, cc.name, u.name AS editor, f.content_id AS
frontpage, s.title AS section_name, v.name AS author FROM mos_content c,
mos_categories cc, mos_sections s INNER JOIN mos_groups g ON (g.id =
c.access) LEFT JOIN mos_users u ON (u.id = c.checked_out) LEFT JOIN
mos_users v ON (v.id = c.created_by) LEFT JOIN mos_content_frontpage f ON
(f.content_id = c.id) WHERE c.state = 0 AND c.catid=cc.id AND
cc.section=s.id AND s.scope='content' AND c.sectionid='1' ORDER BY
cc.ordering, cc.title, c.ordering LIMIT 0,10;

The error I get for both queries is: Unknown column 'c.access' in 'on clause'

Thanks and sorry for the stupid question.

Cheers
Ian






RE: Query Help

2006-01-24 Thread Ian Barnes
Hi,

It now looks like this and still doesn't work; it complains about exactly
the same thing.

SELECT c.*, g.name AS groupname, cc.name, u.name AS editor, f.content_id AS
frontpage, s.title AS section_name, v.name AS author FROM mos_content c,
mos_categories cc, mos_sections s INNER JOIN mos_groups g ON (g.id =
c.access) INNER JOIN mos_users u ON (u.id = c.checked_out) INNER JOIN
mos_users v ON (v.id = c.created_by) INNER JOIN mos_content_frontpage f ON
(f.content_id = c.id) WHERE c.state = 0 AND c.catid=cc.id AND
cc.section=s.id AND s.scope='content' AND c.sectionid='1' ORDER BY
cc.ordering, cc.title, c.ordering LIMIT 0,10;

Thanks,
Ian

-Original Message-
From: gerald_clark [mailto:[EMAIL PROTECTED] 
Sent: 24 January 2006 09:50 PM
To: Ian Barnes
Cc: mysql@lists.mysql.com
Subject: Re: Query Help


Replace all your comma joins with INNER JOIN syntax.
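
A sketch of what that rewrite could look like for the query quoted in this
thread (untested; the ON conditions for the former comma joins are lifted
from the original WHERE clause):

SELECT c.*, g.name AS groupname, cc.name, u.name AS editor,
       f.content_id AS frontpage, s.title AS section_name, v.name AS author
FROM mos_content AS c
INNER JOIN mos_categories AS cc ON cc.id = c.catid
INNER JOIN mos_sections AS s ON s.id = cc.section
LEFT JOIN mos_groups AS g ON g.id = c.access
LEFT JOIN mos_users AS u ON u.id = c.checked_out
LEFT JOIN mos_users AS v ON v.id = c.created_by
LEFT JOIN mos_content_frontpage AS f ON f.content_id = c.id
WHERE c.state = 0
  AND s.scope = 'content'
  AND c.sectionid = '1'
ORDER BY cc.ordering, cc.title, c.ordering
LIMIT 0, 10;

The background: as of MySQL 5.0.12 the comma operator binds less tightly
than JOIN, so in the original statement the ON clauses can no longer see the
c alias; making every join explicit (or parenthesising the comma-joined
tables) avoids the 'Unknown column ... in on clause' error.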
