RE: Need to store a Guid as an Id

2004-09-24 Thread Michael McTernan
Hi,

 I was able to get the binary storage I needed in a TinyBlob
 but I can't set this to a primary key.

Can be done:

mysql> CREATE TABLE tb
    -> (
    ->   id TINYBLOB NOT NULL,
    ->   PRIMARY KEY (id(255))
    -> );
Query OK, 0 rows affected (0.00 sec)

You need to say how many characters you want the index to cover; the max is
255 I think - check the docs.
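
Incidentally, a GUID is only 16 bytes in raw binary form (36 characters in
the usual text form), so the index prefix can be much shorter than 255.
Untested, but something like this sketch (table name made up) ought to do:

CREATE TABLE guid_tb
(
  id TINYBLOB NOT NULL,
  PRIMARY KEY (id(16))  -- 16 bytes covers a raw binary GUID
);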

Regards,

Mike

-Original Message-
From: Daniel Cummings [mailto:[EMAIL PROTECTED]
Sent: 23 September 2004 20:27
To: MySql List
Subject: Need to store a Guid as an Id


MySql doesn't support Guids, but we were attempting to use a VarChar.  We set
the VarChar to binary, but from what I'm reading the binary setting doesn't
affect storage, just sorting and evaluations.



I was able to get the binary storage I needed in a TinyBlob but I can't set
this to a primary key.



Is there a work around for this?



TIA



Dan








RE: Upgrading MySQL. Caveats or Cautions anyone?

2004-09-24 Thread Michael McTernan
Hi,

I found that the Intel C++ version needed some libraries which weren't on my
RH9.0 system (this was when upgrading to 4.1.4g):

  libcprts.so.5
  libcxa.so.5
  libunwind.so.5

This seems to have been reported many times e.g.
http://bugs.mysql.com/bug.php?id=4408

The fix was to get the libs from:

ftp://ftp.mysql.com/pub/mysql/download/icc-8.0-libs.tar.gz
MD5 checksum: 0970bbfb9fb076dfe995b3176bf90fad

Then extract them and put them somewhere sensible (I chose /usr/local/lib)
and then run ldconfig to find them.  Without these libs none of the mysql
apps would start.  I guess 4.1.5g may have packaged the libs with the
tarball, I don't know.

Thanks,

Mike

-Original Message-
From: Tim Johnson [mailto:[EMAIL PROTECTED]
Sent: 18 September 2004 03:43
To: [EMAIL PROTECTED]
Subject: Upgrading MySQL. Caveats or Cautions anyone?


Hello:
Am currently using Ver 11.18 Distrib 3.23.58
on Linux 2.4.20-20.9. (RH 9.0).

I'd like to upgrade to the current version. I would
like to know if there are any cautions to observe,
or any preparatory steps to take.

If I look at the following URL:
http://dev.mysql.com/downloads/mysql/4.0.html

I see some things that confuse me a bit:
I see this label:

Linux (x86, glibc-2.2, static, gcc)
## Well, I've got glibc-2.3

And I see this label:
Linux (x86, glibc-2.3, dynamic, Intel C++ Compiler)
## Well, glibc-2.3 looks good, *but* I don't have
## the Intel C++ Compiler. Just gcc (GCC) 3.2.2

Comments and pointers to docs would be appreciated.

Thanks much
tim
--
Tim Johnson [EMAIL PROTECTED]
  http://www.alaska-internet-solutions.com











DELETE IGNORE fails for foreign key references with InnoDb

2004-09-23 Thread Michael McTernan
Hi there,

I'm finding that DELETE IGNORE doesn't actually ignore all errors when using
InnoDb and trying to remove rows that would result in a foreign key error.
I've checked the docs and think that what I'm doing should work, but it
doesn't - I'm using server 4.1.4-gamma:

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 820 to server version: 4.1.4-gamma-standard-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> DROP TABLE IF EXISTS b,a;
Query OK, 0 rows affected (0.02 sec)

mysql> CREATE TABLE a
    -> (
    ->   id INTEGER NOT NULL AUTO_INCREMENT,
    ->   PRIMARY KEY(id)
    -> )
    -> ENGINE=InnoDb;
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE TABLE b
    -> (
    ->   aid INTEGER NOT NULL,
    ->   PRIMARY KEY(aid),
    ->   FOREIGN KEY (aid) REFERENCES a(id)
    -> )
    -> ENGINE=InnoDb;
Query OK, 0 rows affected (0.02 sec)

mysql> INSERT INTO a() VALUES();
Query OK, 1 row affected (0.29 sec)

mysql> INSERT INTO b SELECT * FROM a;
Query OK, 1 row affected (0.08 sec)
Records: 1  Duplicates: 0  Warnings: 0

mysql> INSERT INTO a(id) VALUES(1024);
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM a;
+------+
| id   |
+------+
|    1 |
| 1024 |
+------+
2 rows in set (0.00 sec)

mysql> SELECT * FROM b;
+-----+
| aid |
+-----+
|   1 |
+-----+
1 row in set (0.00 sec)

mysql> DELETE IGNORE FROM a;
ERROR 1105 (HY000): Unknown error
mysql>

I'm actually being lazy and just trying to mop up rows that are not
referenced anymore.  I would use ON DELETE CASCADE, except that the
constraint is pointing the wrong way around in my application (I may have to
fix this).
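
Incidentally, a multi-table DELETE (available from 4.0) can do the mop-up
directly, touching only rows with no child reference - a sketch (untested)
against the tables above:

DELETE a.* FROM a
LEFT JOIN b ON b.aid = a.id
WHERE b.aid IS NULL;

That sidesteps the foreign key error entirely, since referenced rows are
never deleted.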

Does anyone know if DELETE IGNORE should work?  Will it work in later
releases of 4.1?  I don't see any reference to this problem in the
4.1.5-gamma changelog.

Thanks,

Mike







RE: DROP TEMPORARY TABLE and implicit commits

2004-06-18 Thread Michael McTernan
Hi,

I guess since it is documented, it is a new feature - I agree with the
principle of not backporting it.

Many thanks for the reply - can't wait for 4.1 to mature :-)

Thanks,

Mike

 -Original Message-
 From: Heikki Tuuri [mailto:[EMAIL PROTECTED]
 Sent: 15 June 2004 13:00
 To: Mysql
 Subject: Re: DROP TEMPORARY TABLE and implicit commits


 Michael,

 - Original Message -
 From: Michael McTernan [EMAIL PROTECTED]
 To: Mysql [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Sent: Monday, June 14, 2004 9:40 PM
 Subject: DROP TEMPORARY TABLE and implicit commits


  Hi there,
 
  I'm using MySQL 4.0.18 and finding that DROP TEMPORARY TABLE is
 performing
  an implicit commit, as is documented.
 
  The problem is that I have a complex query in a Java function,
 and I need
 to
  call it a number of times to get the overall result, which I also want
 to do as a
  single transaction so as not to leave data inconsistent at any point.
 
  The implicit commit is causing me a problem here, the only solution to
 which
  I can think is to use 4.1.0 (which is alpha) or to do some
 horrible table
  name mangling for the temp table and just accumulate a lot of
 data for the
  duration of the transaction.
 
  Does anyone know if it is planned to back port the fix in 4.1.0 onto
 4.0.21
  or later?

 I am sorry, 4.0 is frozen from new features. The backport will
 probably not
 happen.

  From 4.1.0 changelog:
 
   - DROP TEMPORARY TABLE now drops only temporary tables and doesn't end
  transactions.
 
  Thanks,
 
  Mike

 Best regards,

 Heikki
 Innobase Oy
 InnoDB - transactions, row level locking, and foreign keys for MySQL
 InnoDB Hot Backup - a hot backup tool for InnoDB which also backs
 up MyISAM
 tables
 http://www.innodb.com/order.php

 Order MySQL support from http://www.mysql.com/support/index.html













RE: Mysqld stalls

2004-06-18 Thread Michael McTernan
Dear Mark,

You should be tweaking your mailer such that your mails originate from
[EMAIL PROTECTED], and not from my own email address.

Thanks,

Mike

 -Original Message-
 From: Michael McTernan
 Sent: 27 May 2004 10:00
 To: [EMAIL PROTECTED]
 Subject: Mysqld stalls


 I've been trying to fine tune my mysqld settings in my.cnf on a very busy
 server. It seems to be doing fine, as in the server loading. BUT every so
 often I see the number of processes spike and then it seems mysqld is
 unresponsive through httpd and I end up having to restart mysqld to get it
 going again.

 What settings should I be tweaking ?

 Mark Susol













RE: MySQL4 and phpBB

2004-06-18 Thread Michael McTernan
Dear Mark,

 Is anyone experiencing problems with MySQL 4.0.17-max (other
 versions) and a
 lot of I/O type errors on large databases or especially running phpBB?

This can be fixed by sending emails from your own email address
([EMAIL PROTECTED]), and not mine.

Thanks,

Mike

 -Original Message-
 From: Michael McTernan
 Sent: 19 April 2004 10:22
 To: [EMAIL PROTECTED]
 Subject: MySQL4 and phpBB


 I have recently setup a new server running EnsimPro 3.7 (Fedora) and
 upgraded the MySQL to 4.0.17-max per their tech notes.

 Now, I'm having troubles with tables related to my busiest site
 becoming corrupt. The site in question uses phpBB. We've checked the hard
 drive out and don't think we have an issue with it. We did map out any bad
 blocks on install. The corruption is only with this one site.

 My sys admin believes this may be some I/O problem at the kernel
 level with
 this version of MySQL. The phpBB has been very stable for years even on a
 RaQ4 running RH6.2 and MySQL 4.0.15..albeit that server could
 barely handle
 the load. But since moving to this new install I'm struggling
 with problems..

 Is anyone experiencing problems with MySQL 4.0.17-max (other
 versions) and a
 lot of I/O type errors on large databases or especially running phpBB?

 Yes I'm asking on Ensim & phpBB support sites for help as well,
 but this may
 also be a compile/module issue with MySQL and the server.


 Mark Súsol
 ---
 u l t i m a t e - CreativeMedia
 Web | Print | CD Media | eCommerce
 www.ultimatecreativemedia.com
 Ph: 301-668-0588











RE: PHP + MySQL Not Playing Nice Any More

2004-06-18 Thread Michael McTernan
Dear Mark,

Please set your from address to [EMAIL PROTECTED], and don't
use my email address.

Thanks,

Mike

 -Original Message-
 From: Michael McTernan
 Sent: 16 April 2004 10:21
 To: Joseph A. Nagy, Jr.; MySQL General
 Subject: Re: PHP + MySQL Not Playing Nice Any More


 On 4/15/04 11:46 PM, Joseph A. Nagy, Jr.
 [EMAIL PROTECTED] wrote:

  So I'm using PHP and MySQL to serve up a game and all is going
 well  until
  today. The first problem came when for some reason the game was sending
  apache as the username to access the db (which is not what I have in the
  dbconnect file) and output some errors. I checked the page it was
  complaining about and all was good so I use phpmyadmin to login
 and admin my
  db but now even phpmyadmin won't let me in and I know I haven't
 touched the
  config file since I first set it up.
 
  Does MySQL do this very often or is it a PHP error and not a
 MySQL one and
  if this is a PHP error where do I look to fix it?

 This seems more like a change was made in your hosting environment,
 unrelated to php or mysql. I've seen this happen when sites were
 moved into
 safe-mode environments.


 Mark Súsol
 ---
 u l t i m a t e - CreativeMedia
 Web | Print | CD Media | eCommerce
 www.ultimatecreativemedia.com
 Ph: 301-668-0588











RE: Dumping mass inserts?

2004-06-18 Thread Michael McTernan
Dear Mark,

 And when I go to move them to the new db, I do use the mysqldump command
 since I am dumping to the new db?

You need to correctly setup your mailer such that the from address is
[EMAIL PROTECTED], and not my email address.

Thanks,

Mike


 -Original Message-
 From: Michael McTernan
 Sent: 09 April 2004 18:30
 To: James E Hicks III; [EMAIL PROTECTED]
 Subject: Re: Dumping mass inserts?


  Ah..I see now. I was still using mysqldump..instead of mysql. Yes I'm
  trying to import the data from backup, into an empty table
 since the data
  was corrupt.
 
  Looks like you are on the right track now! :)
 
  James

 Ok now I want to dump tables with names starting with phpads so I can
 move them to another db. How is the mysqldump command run then?

 And when I go to move them to the new db, I do use the mysqldump command
 since I am dumping to the new db?

 Mark











RE: Corruption and my.cnf

2004-06-18 Thread Michael McTernan
Dear Mark,

I think your emailer has also experienced corruption since your from address
is actually mine.  Please set it to your own email address, such as
[EMAIL PROTECTED]

Thanks,

Mike

 -Original Message-
 From: Michael McTernan
 Sent: 09 April 2004 12:18
 To: [EMAIL PROTECTED]
 Subject: Re: Corruption and my.cnf


  I've experienced more corruption lately on my main site since I
 moved to my
  own server running 4.0.17max. The site is very busy (60GB a
 month) and the
  tables are large. I didn't have this level of problems on the
 rental server.
 
  What are the variables to look into regarding why my tables are getting
  corrupt? Is this a my.cnf issue? Is this a memory issue? Is this a hard
  drive issue? Is this a too many connections issue?

 Looking at my dmesg output..

 end_request: I/O error, dev 03:42 (hdb), sector 52228450
 hdb: read_intr: status=0x59 { DriveReady SeekComplete DataRequest Error }
 hdb: read_intr: error=0x40 { UncorrectableError },
 LBAsect=56276830, high=3,
 low=5945182, sector=52228450

 Is it possible this is related to my MySQL table corruption issues? There
 are more of these in the file.


 Mark Súsol
 ---
 u l t i m a t e - CreativeMedia
 Web | Print | CD Media | eCommerce
 www.ultimatecreativemedia.com
 Ph: 301-668-0588











RE: Using PHP to copy tables

2004-06-18 Thread Michael McTernan
Dear Mark,

You also seem to have used my email address before.  Please correctly
configure your mailer such that the from address is correctly reported as
[EMAIL PROTECTED], and not mine.

Thanks,

Mike

 -Original Message-
 From: Michael McTernan
 Sent: 08 April 2004 18:36
 To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Subject: Re: Using PHP to copy tables


  Is there an easy way to create an HTML page that will copy
 selected tables
  to backup copies on the same server? I want to create an administration
  page for my client to be able to backup their database
 whenever they see
  fit. But, I can't give them direct access to the MySQL server and don't
  want them backing up to their hard drive. I prefer to simply copy the
  tables to backup versions on the server so that if problems
 arise, I can
  log into the server and simply copy the backups to the originals.

 Here is what I have used before...but this might be for you to
 run as a cron
 task and be transparent to them.

 http://www.silisoftware.com/scripts/index.php?scriptname=backupDB


 Mark Súsol
 ---
 u l t i m a t e - CreativeMedia
 Web | Print | CD Media | eCommerce
 www.ultimatecreativemedia.com
 Ph: 301-668-0588











RE: Too Many Connections

2004-06-18 Thread Michael McTernan
Dear Mark,

The best way to fix this is by correctly setting your from address in your
mailer to [EMAIL PROTECTED]

Thanks,

Mike

 -Original Message-
 From: Michael McTernan
 Sent: 08 April 2004 10:33
 To: [EMAIL PROTECTED]
 Subject: Too Many Connections


 What is the best way to diagnose the root cause of this error?
 What scripts
 are doing the connecting and totalling them up?

 Warning: mysql_connect(): User ultimated has already more than
 'max_user_connections' active connections

 I have a very active phpBB but I'm on a new server and it's not pulling a
 server loading over 0.5. I ran some data before (crontab php
 script gathered
 the info for me every 5 minutes for several weeks) and the
 problem happened
 before related to server loading..not necessarily how many users I had on
 that site posting. That was an older Cobalt RaQ4. I seemed to be having a
 lot of search bots accessing the site then.

 [mysqld]
 set-variable = max_connections=512
 set-variable = max_user_connections=200
 set-variable = key_buffer=64M
 set-variable = table_cache=256
 set-variable = sort_buffer=4M
 set-variable = wait_timeout=300

 I've only had this problem this week, its run 3 weeks fine. I do have a
 corrupted MYI file according to myisamchk.


 Mark Súsol
 ---
 u l t i m a t e - CreativeMedia
 Web | Print | CD Media | eCommerce
 www.ultimatecreativemedia.com
 Ph: 301-668-0588











RE: GROUP BY across UNION

2004-06-18 Thread Michael McTernan
Hi John,

Depending on the size of your datasets, you could merge the data into a
TEMPORARY table and then compute from there?  If the temp table is small
enough it will fit in RAM as a heap table, and will probably be more
efficient than fetching all the results and computing them in code.  Of
course, if the dataset is large enough, the temporary table will hit the
disc, and then it will be very inefficient.
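
Something like this sketch is what I have in mind (untested, using the
table names from your post; whether a plain AVG over the merged rows is the
right statistic depends on your data):

CREATE TEMPORARY TABLE tmp_merged TYPE=HEAP
  SELECT id_field, avg FROM table_a;
INSERT INTO tmp_merged
  SELECT id_field, avg FROM table_b;
SELECT id_field, AVG(avg) FROM tmp_merged GROUP BY id_field;
DROP TABLE tmp_merged;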

Thanks,

Mike

 -Original Message-
 From: John McCaskey [mailto:[EMAIL PROTECTED]
 Sent: 24 February 2004 17:08
 To: [EMAIL PROTECTED]
 Subject: RE: GROUP BY across UNION


 Nope, this would yield a 'Column: 'avg' in field list is ambiguous'
 error.  I'm still trying to work out a better way of doing this.  I also
 need to get standard deviations now, and the method I mentioned in my
 original post doesn't even work for that.  So now I'm left with actually
 getting all the values and computing them in code.  Very suboptimal.

 John A. McCaskey


 -Original Message-
 From: Ligaya Turmelle [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, February 18, 2004 6:30 PM
 To: [EMAIL PROTECTED]
 Subject: Re: GROUP BY across UNION


 I am pretty new myself but can't you do it with a join? Like SELECT
 AVG(avg) FROM table_a, table_b GROUP BY id_field.

 Respectfully,
 Ligaya Turmelle

 John McCaskey [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]
 First of all I am using mysql 4.0.18 and I am not free to upgrade to 4.1
 or 5.0.



 I have two tables:



 table_a, and table_b these two tables have the same structure:

 CREATE TABLE table_a (
   id_field mediumint(8) unsigned NOT NULL,
   avg float default NULL
 );



 What I want to do is get the combined avg across table_a and table_b for
 each id value.  So I tried doing

 (SELECT AVG(avg) FROM table_a)

 UNION

 (SELECT AVG(avg) FROM table_b)

 GROUP BY id_field;



 This however doesn't work.  I can see why, thinking about how a union
 should work mathematically, but I'm left with no apparent way to directly
 get the avg across two tables.  I know that I can take the avg from each
 along with the row count and then do a weighted average using those
 values, but I'd prefer not to have to do that.  Any suggestions, or am I
 stuck doing individual group bys on each table, unioning the results and
 then doing weighted averages on the matching pairs?



 Here is what I'm talking about doing with the weighted averages incase
 it is unclear:

 (SELECT AVG(avg) AS avg_a, COUNT(id_field) AS count_a FROM table_a GROUP
 BY id_field)

 UNION

 (SELECT AVG(avg) AS avg_b, COUNT(id_field) AS count_b FROM table_b GROUP
 BY id_field);



 Then I would match up the results and compute total_avg = (avg_a*count_a
 + avg_b*count_b)/(count_a+count_b).  This is not nearly as clean as I
 would like.







 John A. McCaskey


















DROP TEMPORARY TABLE and implicit commits

2004-06-14 Thread Michael McTernan
Hi there,

I'm using MySQL 4.0.18 and finding that DROP TEMPORARY TABLE is performing
an implicit commit, as is documented.

The problem is that I have a complex query in a Java function, and I need to
call it a number of times to get the overall result, which I also want to do as a
single transaction so as not to leave data inconsistent at any point.

The implicit commit is causing me a problem here, the only solution to which
I can think is to use 4.1.0 (which is alpha) or to do some horrible table
name mangling for the temp table and just accumulate a lot of data for the
duration of the transaction.
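
The name mangling I have in mind would look roughly like this (a sketch
only - the unique suffix would be generated in the Java code):

SET AUTOCOMMIT=0;
BEGIN;
CREATE TEMPORARY TABLE tmp_work_42 (id INT NOT NULL);  -- '42' per call
-- ... complex query using tmp_work_42 ...
COMMIT;
DROP TEMPORARY TABLE tmp_work_42;  -- only after the commit

If I remember right, CREATE TEMPORARY TABLE doesn't end a transaction - it
is only the DROP that does - so deferring the DROP until after the COMMIT
keeps the transaction intact.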

Does anyone know if it is planned to back port the fix in 4.1.0 onto 4.0.21
or later?

From 4.1.0 changelog:

 - DROP TEMPORARY TABLE now drops only temporary tables and doesn't end
transactions.

Thanks,

Mike






RE: Error dropping databases [in InnoDb]

2004-04-30 Thread Michael McTernan
Many Thanks :)

Mike

 -Original Message-
 From: Heikki Tuuri [mailto:[EMAIL PROTECTED]
 Sent: 30 April 2004 11:56
 To: Michael McTernan; Tom O'Neil
 Cc: [EMAIL PROTECTED]
 Subject: Re: Error dropping databases [in InnoDb]


 Michael,

 Marko has fixed this to 4.0.19.

 Best regards,

 Heikki Tuuri
 Innobase Oy
 Foreign keys, transactions, and row level locking for MySQL
 InnoDB Hot Backup - a hot backup tool for InnoDB which also backs
 up MyISAM
 tables
 http://www.innodb.com/order.php

 Order MySQL technical support from https://order.mysql.com/

 - Original Message -
 From: Michael McTernan [EMAIL PROTECTED]
 To: Tom O'Neil [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Sent: Thursday, April 29, 2004 12:40 PM
 Subject: RE: Error dropping databases [in InnoDb]


  Hi,
 
  I have found this too.  I think it is as a result of a change introduced
 in
  4.0.18. From http://www.innodb.com/ibman.php#InnoDB.history :
 
  MySQL/InnoDB-4.0.18, February 13, 2004
   - Do not allow dropping a table referenced by a FOREIGN KEY constraint,
  unless the user does SET FOREIGN_KEY_CHECKS=0. The error message here is
  somewhat misleading 'Cannot delete or update a parent row...',
 and must be
  changed in a future version 4.1.x.
 
  It's a good change, but I'm guessing that InnoDb is handling a drop
 database
  first as if all tables are being dropped in some arbitrary
 order and hence
  not respecting foreign keys.
 
  One solution is to disable foreign key checks before the drop, and then
  re-enable them afterwards, or alternatively explicitly drop all
 tables in
  the correct order before dropping the database.
 
  I wonder if there are any plans to fix this in future versions
  of InnoDb?
 
  Thanks,
 
  Mike
 
   -Original Message-
   From: Tom O'Neil [mailto:[EMAIL PROTECTED]
   Sent: 25 March 2004 22:07
   To: [EMAIL PROTECTED]
   Subject: Error dropping databases
  
  
    I am trying to drop several InnoDB databases and am unable to do
   so. I login to the command line client and enter drop
   database [db_name]; and then get this error:
  
   ERROR 1217: Cannot delete or update a parent row: a
   foreign key constraint fails
  
   Why am I unable to drop these databases?
  
   Tom
  
   =
  
  
  
  
  
 










RE: Error dropping databases [in InnoDb]

2004-04-29 Thread Michael McTernan
Hi,

I have found this too.  I think it is as a result of a change introduced in
4.0.18. From http://www.innodb.com/ibman.php#InnoDB.history :

MySQL/InnoDB-4.0.18, February 13, 2004
 - Do not allow dropping a table referenced by a FOREIGN KEY constraint,
unless the user does SET FOREIGN_KEY_CHECKS=0. The error message here is
somewhat misleading 'Cannot delete or update a parent row...', and must be
changed in a future version 4.1.x.

It's a good change, but I'm guessing that InnoDb is handling a drop database
first as if all tables are being dropped in some arbitrary order and hence
not respecting foreign keys.

One solution is to disable foreign key checks before the drop, and then
re-enable them afterwards, or alternatively explicitly drop all tables in
the correct order before dropping the database.
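
The first workaround is just, e.g. (with 'mydb' standing in for whichever
database is being dropped):

SET FOREIGN_KEY_CHECKS=0;
DROP DATABASE mydb;
SET FOREIGN_KEY_CHECKS=1;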

I wonder if there are any plans to fix this in future versions of InnoDb?

Thanks,

Mike

 -Original Message-
 From: Tom O'Neil [mailto:[EMAIL PROTECTED]
 Sent: 25 March 2004 22:07
 To: [EMAIL PROTECTED]
 Subject: Error dropping databases


 I am trying to drop several InnoDB databases and am unable to do
 so. I login to the command line client and enter drop
 database [db_name]; and then get this error:

 ERROR 1217: Cannot delete or update a parent row: a
 foreign key constraint fails

 Why am I unable to drop these databases?

 Tom

 =










RE: Gripe with MySQL

2004-04-20 Thread Michael McTernan
Hi,

ENUM columns can also trip you up if you insert a value outside of the ENUM;
an empty string is inserted instead.  This is documented behaviour
(mysql.com seems to be going slowly though, so I can't dig out a reference
right now), even if it is undesired in some cases.
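
A quick illustration of the trap (my own made-up example):

CREATE TABLE enum_trap
(
  size ENUM('small','large') NOT NULL
);
INSERT INTO enum_trap VALUES ('medium');  -- accepted without an error...
SELECT size = '' FROM enum_trap;          -- ...but '' is what got stored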

Thanks,

Mike

 -Original Message-
 From: news [mailto:[EMAIL PROTECTED] Behalf Of Stormblade
 Sent: 20 April 2004 01:42
 To: [EMAIL PROTECTED]
 Subject: Gripe with MySQL


 Ok. Love MySQL and I will be using it for my personal use and recommending
 it to clients as a lower cost alternative. I've only been using it for a
 very short time but there is one major gripe I have with it and I
 believe it's
 just a design thing.

 MySQL seems to put the burden of error checking and such on the client.

 - All fields have a default value even when I don't tell it to?
 - Allow Null will only result in an error if I explicitly try to set the
 field to null.

 These are the two things that I really dislike. I think it's a poor design
 to rely on clients only for error checking. MySQL supports foreign keys.
 This is good because the database handles referential integrity. But it
 shouldn't stop there. I should also be able to tell the database not to
 allow a field to be empty/null and it should not put anything in
 there that
 I have not told it to.

 One scenario I can think of is this. My company uses MySQL as its
 database. Different departments implement various interfaces to this
 database. All it would take is ONE client to have ONE bad SQL and although
 the insert works (Thanks to default values being put in) the data is not
 valid.

 I've only been working with MySQL for a little bit so this is
 just my first
 impressions. I'll be very happy to be told I'm wrong or that
 future updates
 (5.0 perhaps) will change some of the things I've mentioned.

 Relying on clients for database integrity is a bad idea in my experience.
 --
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 Stormblade (Shaolin Code Warrior)
 Software Developer (15+ Years Programming exp.)

 My System: http://www.anandtech.com/mysystemrig.html?rigid=1683
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-










RE: Gripe with MySQL

2004-04-20 Thread Michael McTernan
Donny,

I think you misunderstand the point of my comment, and possibly Stormblade's
too.

The point is that there are some things that are misleading, and it would be
kinder for MySQL to report errors than to silently do something unexpected
and continue without warning - at least if you run rm -rf /* you'll notice
pretty quickly that everything has disappeared and learn not to do it
again :D

 If you use a client like PHPMyadmin or one of the other 80
 million that are
 around you won't have to worry about error checking because they have
 already done it for you.

Using MySQL + JDBC to make a very custom app, sorry.

 if you don't have error checking in yourself, that's your problem
 not mysql's problem.

True, but still doesn't help when someone inexperienced with MySQL makes a
mistake and buggers the database... again.  I don't see why you object to
making things easier to use, even if it is only activated by a -n00b option
in mysqld_safe.

Thanks,

Mike

 -Original Message-
 From: Donny Simonton [mailto:[EMAIL PROTECTED]
 Sent: 20 April 2004 17:37
 To: 'Michael McTernan'; 'Stormblade'
 Cc: [EMAIL PROTECTED]
 Subject: RE: Gripe with MySQL


 Everybody should remember as well, if you run rm -rf /*.* on your
 server you
 will delete everything from your server, but linux will stay
 running.  Even
 though that's not documented either.

 If you use a client like PHPMyadmin or one of the other 80
 million that are
 around you won't have to worry about error checking because they have
 already done it for you.  Now as far as your clients/customers,
 if you don't
 have error checking in yourself, that's your problem not mysql's problem.

 Donny

  -Original Message-
  From: Michael McTernan [mailto:[EMAIL PROTECTED]
  Sent: Tuesday, April 20, 2004 9:04 AM
  To: Stormblade
  Cc: [EMAIL PROTECTED]
  Subject: RE: Gripe with MySQL
 
  Hi,
 
  ENUM columns can also trip you up if you insert a value outside of the
  ENUM;
  an empty string is inserted instead.  This is documented behaviour
  (mysql.com seems to be going slowly though, so can't dig a
 reference right
  now), even if it is undesired in some cases.
 
  Thanks,
 
  Mike
 
   -Original Message-
   From: news [mailto:[EMAIL PROTECTED] Behalf Of Stormblade
   Sent: 20 April 2004 01:42
   To: [EMAIL PROTECTED]
   Subject: Gripe with MySQL
  
  
   Ok. Love MySQL and I will be using it for my personal use and
  recommending
   it to clients as a lower cost alternative. I've only been
 using it for a
   very short time but there is one major gripe I have with it and I
   believe it's
   just a design thing.
  
   MySQL seems to put the burden of error checking and such on
 the client.
  
   - All fields have a default value even when I don't tell it to?
   - Allow Null will only result in an error if I explicitly try
 to set the
   field to null.
  
   These are the two things that I really dislike. I think it's a poor
  design
   to rely on clients only for error checking. MySQL supports
 foreign keys.
   This is good because the database handles referential
 integrity. But it
   shouldn't stop there. I should also be able to tell the
 database not to
   allow a field to be empty/null and it should not put anything in
   there that
   I have not told it to.
  
   One scenario I can think of is this. My company uses MySQL as its
   database. Different departments implement various interfaces to this
   database. All it would take is ONE client to have ONE bad SQL and
  although
   the insert works (Thanks to default values being put in) the
 data is not
   valid.
  
   I've only been working with MySQL for a little bit so this is
   just my first
   impressions. I'll be very happy to be told I'm wrong or that
   future updates
   (5.0 perhaps) will change some of the things I've mentioned.
  
   Relying on clients for database integrity is a bad idea in my
  experience.
   --
  
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
   Stormblade (Shaolin Code Warrior)
   Software Developer (15+ Years Programming exp.)
  
   My System: http://www.anandtech.com/mysystemrig.html?rigid=1683
  
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
  
  
  
  
  
 
 









RE: Multiple concurrent transactions per connection

2004-02-17 Thread Michael McTernan
Hi there,

 Given this model, each thread is obviously going to want to have
 multiple transactions outstanding. Is this something that might be added
 to MySQL in future or am I totally overestimating the expense of using
 one thread per connection?

I'm guessing you mean that each thread will want to have a number of
different transactions active at one time.  Depending on how your
application works, you could nest transactions on one connection through
careful use of savepoints and rollbacks to savepoints (under InnoDb from
4.0.14).  This won't help overlapping transactions that don't nest neatly
though.
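
Roughly, the savepoint approach looks like this (a sketch only):

BEGIN;
-- ... outer transaction work ...
SAVEPOINT inner_step;
-- ... nested piece of work ...
ROLLBACK TO SAVEPOINT inner_step;  -- undo just the nested piece...
COMMIT;                            -- ...or keep it and commit the lot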

Alternatively you can use connection pooling and have it such that you use a
connection per transaction.  You could make it such that once all the
connections were taken, your application blocked the start of further
transactions.

I think you should be careful though.  While a good design will help
performance, you might end up finding that the benefits of your design are
lost since disk IO or some other larger factor proves to be a much bigger
bottleneck!

Thanks,

Mike






RE: InnoDb Table Performance problem

2004-02-12 Thread Michael McTernan
Hi,

 SELECT COUNT(*) for InnoDB tables is a known problem...  The table
 handler (for InnoDB) has to do a table scan to count all rows...  This
 particular case is optimized with MyISAM ...

Sure.  But why is the tablescan ~100 times faster for the table without the
BLOB column?

This goes back to my original assumption: I thought the BLOB contents
would be stored outside of the normal column data, meaning that you have to
do an additional seek to retrieve a BLOB value, but that a BLOB column
doesn't inflate the data over which a table scan has to iterate and so
doesn't impact tablescan performance.  Is this wrong?  Or is this different
for InnoDB vs MyISAM?

Of course, I'm aiming not to use table scans at all though :)

Thanks,

Mike

 -Original Message-
 From: Benoit St-Jean [mailto:[EMAIL PROTECTED]
 Sent: 11 February 2004 22:47
 To: Michael McTernan
 Cc: Mysql
 Subject: Re: InnoDb Table Performance problem


 Michael McTernan wrote:

 Hi there,
 
 I'm making something similar to a file revision control system, and using
 MySQL on Linux as the database to drive it.  Almost all my tables are
 InnoDB, and generally it is going very well, with the exception
 of one table
 that is always very slow.
 
 This table holds the files within the database.  It is defined
 as follows:
 
 CREATE TABLE `files` (
   `id` int(10) unsigned NOT NULL auto_increment,
   `revision` int(10) unsigned NOT NULL default '0',
   `filenameid` int(10) unsigned NOT NULL default '0',
   `pathid` int(10) unsigned NOT NULL default '0',
   `extensionid` int(10) unsigned NOT NULL default '0',
   `isDeleted` enum('0','1') NOT NULL default '0',
   `filepathname` tinytext NOT NULL,
   `contentsGz` longblob NOT NULL,
   PRIMARY KEY  (`id`),
   UNIQUE KEY `revision` (`revision`,`filepathname`(255)),
   KEY `fpeindex` (`filenameid`,`pathid`,`extensionid`,`revision`),
   KEY `filepathname` (`filepathname`(255)),
   CONSTRAINT `0_3570` FOREIGN KEY (`filenameid`) REFERENCES `filenames`
 (`id`),
   CONSTRAINT `0_3571` FOREIGN KEY (`extensionid`) REFERENCES
 `fileextensions` (`id`),
   CONSTRAINT `0_3572` FOREIGN KEY (`pathid`) REFERENCES
 `filepaths` (`id`)
 ) TYPE=InnoDB;
 
 The 'contentsGz' column will have the contents of the file and will
 typically be a couple of hundred kilobytes, but in some rare
 cases as large
 as 20 Megabytes.
 
 Selects on this table always go very slowly.  I've used EXPLAIN
 to look at
 what is going on, and carefully added a couple of multi-column
 indexes that
 have improved SELECT performance (this table is updated rarely,
 so I'm not
 too worried about INSERT performance).  However, the performance is still
 really bad.
 
 I tried creating an identical table with the exception that it
 doesn't have
 the 'contentsGz' column:
 
 CREATE TABLE `filemetadata` (
   `id` int(10) unsigned NOT NULL auto_increment,
   `revision` int(10) unsigned NOT NULL default '0',
   `filenameid` int(10) unsigned NOT NULL default '0',
   `pathid` int(10) unsigned NOT NULL default '0',
   `extensionid` int(10) unsigned NOT NULL default '0',
   `isDeleted` enum('0','1') NOT NULL default '0',
   `filepathname` tinytext NOT NULL,
   PRIMARY KEY  (`id`),
   UNIQUE KEY `revision` (`revision`,`filepathname`(255)),
   KEY `fpeindex` (`filenameid`,`pathid`,`extensionid`,`revision`),
   KEY `filepathname` (`filepathname`(255)),
   CONSTRAINT `0_3651` FOREIGN KEY (`filenameid`) REFERENCES `filenames`
 (`id`),
   CONSTRAINT `0_3652` FOREIGN KEY (`extensionid`) REFERENCES
 `fileextensions` (`id`),
   CONSTRAINT `0_3653` FOREIGN KEY (`pathid`) REFERENCES
 `filepaths` (`id`)
 ) TYPE=InnoDB;
 
 I used INSERT ... SELECT to copy all data from the 'files' table to
 'filemetadata'.
 
 Here is something I found surprising:
 
 mysql> SELECT COUNT(1) FROM files;
 +----------+
 | COUNT(1) |
 +----------+
 |   101013 |
 +----------+
 1 row in set (32.42 sec)

 mysql> SELECT COUNT(1) FROM filemetadata;
 +----------+
 | COUNT(1) |
 +----------+
 |   101013 |
 +----------+
 1 row in set (0.29 sec)
 
 

 SELECT COUNT(*) for InnoDB tables is a known problem...  The table
 handler (for InnoDB) has to do a table scan to count all rows...  This
 particular case is optimized with MyISAM ...

 
 
 
 
 














RE: InnoDb Table Performance problem

2004-02-12 Thread Michael McTernan
Hi,

 Well, two possible explanations...  BLOBS are either stored in a
 separate area thus forcing MySQL to perform an extra seek to retrieve
 the full row or the table scan takes a lot more time to execute because
 of the BLOB data you have to retrieve...

I'm not often retrieving the BLOB value in my queries, so I wouldn't expect
to pay the second-seek performance hit, and hence would expect good
performance.

 Reading 10 rows of 1K is
 way faster than retrieving 10 rows of  800K!  By your result, I am
 assuming that your table with the BLOB is 100 times bigger than the one
 without it...

I'm not sure how to get the size of a table from InnoDb, so I could guess
that this is true, but I'm not certain.
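
If it helps, SHOW TABLE STATUS reports approximate per-table sizes under
InnoDB - the Data_length and Index_length columns are estimates there:

SHOW TABLE STATUS LIKE 'files';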

I'm guessing that with InnoDb, the BLOB data is being stored inline with the
other column data :(

Thanks,

Mike


 -Original Message-
 From: Benoit St-Jean [mailto:[EMAIL PROTECTED]
 Sent: 12 February 2004 14:04
 To: Michael McTernan
 Cc: Mysql
 Subject: Re: InnoDb Table Performance problem


 Michael McTernan wrote:

 Hi,
 
 
 
 SELECT COUNT(*) for InnoDB tables is a known problem...  The table
 handler (for InnoDB) has to do a table scan to count all rows...  This
 particular case is optimized with MyISAM ...
 
 
 
 Sure.  But why is the tablescan ~100 times faster for the table
 without the
 BLOB column?
 
 
 Well, two possible explanations...  BLOBS are either stored in a
 separate area thus forcing MySQL to perform an extra seek to retrieve
 the full row or the table scan takes a lot more time to execute because
 of the BLOB data you have to retrieve...  Reading 10 rows of 1K is
 way faster than retrieving 10 rows of  800K!  By your result, I am
 assuming that your table with the BLOB is 100 times bigger than the one
 without it...










RE: InnoDb Table Performance problem

2004-02-12 Thread Michael McTernan
Hi there,

 I would try to sum it up like this:

 1. Discs work best when used as purely sequential devices.
 2. Inserting those BLOBs takes up space somewhere. Strictly speaking, on
 the disc they may very well be in between certain rows.

Good points - I guess there is no way for the database to control where on
the file system the data ends up.  I wonder if using a raw partition for the
database under InnoDB would fix this?  I guess that would give the storage
engine the opportunity to put the BLOB data in a heap at one end of a disk
while keeping the table data together at the other, although this sounds
like a lot of work for the storage engine.  What would be good would be a
tool to show how 'fragmented' a database table is on disk?

 3. As you may need to read over them, the disc has to seek much further
 and depending on the way BLOBs are placed inside the tablespace, seeking
 may be all over the place.

Another good point.  It may be that by creating the 'filemetadata' table in
one go I've just ensured that most of the rows are laid out linearly on the
disc.  In which case re-creating the 'files' table and then adding the BLOB
data back as a second step might fix this.  My experiments show this is not
the case though...

 Have you tried the following to see the results:

 ALTER TABLE files TYPE = MyISAM;
 (Remove all indexes)
 SELECT SUM(revision) FROM files;

Okay.  I've made two new tables, one called 'myfiles' and one called
'inofiles' which are both duplicates of 'files' without any indices except
for the primary key.  Here are the tables I'm comparing:

CREATE TABLE `myfiles` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `revision` int(10) unsigned NOT NULL default '0',
  `filenameid` int(10) unsigned NOT NULL default '0',
  `pathid` int(10) unsigned NOT NULL default '0',
  `extensionid` int(10) unsigned NOT NULL default '0',
  `isDeleted` enum('0','1') NOT NULL default '0',
  `filepathname` tinytext NOT NULL,
  `contentsGz` longblob NOT NULL,
  PRIMARY KEY  (`id`)
) TYPE=MyISAM;

CREATE TABLE `inofiles` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `revision` int(10) unsigned NOT NULL default '0',
  `filenameid` int(10) unsigned NOT NULL default '0',
  `pathid` int(10) unsigned NOT NULL default '0',
  `extensionid` int(10) unsigned NOT NULL default '0',
  `isDeleted` enum('0','1') NOT NULL default '0',
  `filepathname` tinytext NOT NULL,
  `contentsGz` longblob NOT NULL,
  PRIMARY KEY  (`id`)
) TYPE=InnoDB;


Here are the results for the select statement you suggest:

[Restart MySQL]
mysql> SELECT SUM(revision) FROM inofiles;
+---------------+
| SUM(revision) |
+---------------+
|        561401 |
+---------------+
1 row in set (33.61 sec)

mysql> SELECT SUM(revision) from inofiles;
+---------------+
| SUM(revision) |
+---------------+
|        561401 |
+---------------+
1 row in set (31.79 sec)

[Restart MySQL]
mysql> SELECT SUM(revision) FROM myfiles;
+---------------+
| SUM(revision) |
+---------------+
|        561401 |
+---------------+
1 row in set (1 min 7.86 sec)

mysql> SELECT SUM(revision) from myfiles;
+---------------+
| SUM(revision) |
+---------------+
|        561401 |
+---------------+
1 row in set (1 min 7.83 sec)

I restarted MySQL before running each test, and then ran the test twice,
being careful to change the case of the SELECT statements to avoid hitting
the query cache.  Strangely InnoDb is more than twice as fast...

Now, if I drop the BLOB column 'contentsGz' from both 'inofiles' and
'myfiles' and repeat:

[Restart database]
mysql> SELECT SUM(revision) FROM myfiles;
+---------------+
| SUM(revision) |
+---------------+
|        561401 |
+---------------+
1 row in set (0.34 sec)

mysql> SELECT SUM(revision) from myfiles;
+---------------+
| SUM(revision) |
+---------------+
|        561401 |
+---------------+
1 row in set (0.18 sec)

[Restart database]
mysql> SELECT SUM(revision) FROM inofiles;
+---------------+
| SUM(revision) |
+---------------+
|        561401 |
+---------------+
1 row in set (0.29 sec)

mysql> SELECT SUM(revision) from inofiles;
+---------------+
| SUM(revision) |
+---------------+
|        561401 |
+---------------+
1 row in set (0.10 sec)

So I guess the conclusion is that BLOBs are *really* bad for table scans in
MyISAM _and_ InnoDB, although perhaps slightly less so for InnoDB.  I guess
the BLOB data must get interleaved with the non-BLOB data on disk, and that
putting the BLOBs into a different table may cause some sort of partitioning
between the tables on disk.

I'm guessing that the real problem here is that some of my queries are
secretly doing table scans when they shouldn't, and that is causing a huge
slowdown, although I might split the BLOB column into a different table such
that table scans don't give me such a bad hit if they do happen.
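
The split I have in mind would be something like this (a sketch only -
'filecontents' is a name I've just made up):

CREATE TABLE `filecontents` (
  `fileid` int(10) unsigned NOT NULL,
  `contentsGz` longblob NOT NULL,
  PRIMARY KEY  (`fileid`),
  FOREIGN KEY (`fileid`) REFERENCES `files` (`id`)
) TYPE=InnoDB;

Then 'files' keeps just the metadata, and a table scan over it never has to
wade through the BLOBs.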

Thanks,

Mike

 -Original Message-
 From: Chris Nolan [mailto:[EMAIL PROTECTED]
 Sent: 12 February 2004 12:23
 To: Michael McTernan
 Cc: Benoit St-Jean; Mysql
 Subject: Re: InnoDb Table

RE: InnoDb Table Performance problem

2004-02-12 Thread Michael McTernan
Hi Chris,

 A HEX editor? :-) Finding out what the developers at MySQL AB like to
 drink and sending them a few megalitres of it? :-)

Heh heh :-)

 In all seriousness, you haven't got a great many options (although I
 could be completely wrong, in which case some kind person on this list
 will enlighten / flame me), but using OPTIMIZE TABLE on MyISAM tables
 and ALTER TABLE TYPE = InnoDB on InnoDB tables will defrag your table
 for you, possibly helping matters.

I'm pretty sure that the new tables I created for the test would have been
defragged, since I ALTER'd them a couple of times to drop indices.

 InnoDB looks at tablespace files and raw partitions in the same way and
 manages space within them in a method related to that used by the
 Berkeley Fast File System.

Sounds like it is already very efficient then.

 It *might* have some more
 options to play with if you wrap a few INSERTs in a transaction block.

That's a good idea - I could certainly try that.  A future version of InnoDb
may also take advantage of this I guess :)

 That may help. One thing you may want to do is re-run the above with an
 index on the revision column for both table types with and without
 BLOBs.

I did try adding indices, and you are correct, it brings the times right
down, so it is only table scan performance that is at fault here.

 You have to be impressed with InnoDB though - it's a full-on
 multiversioned storage engine and it manages to keep pace with MyISAM
 (and outperform it on occassion).

Completely.  I'm impressed with the whole lot of MySQL - it's brilliant.

 Using the FORCE INDEX(date) predicate, the query time drops to about 2
seconds.

I've had to do this in the past too.  I'll be looking out for slow queries
in the logs and then EXPLAINing them to see where I can make this
improvement.
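
For example, something along these lines (a sketch against the 'files'
table above, with made-up values):

EXPLAIN SELECT id FROM files
FORCE INDEX (fpeindex)
WHERE filenameid = 42 AND pathid = 7;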

Thank you for all your help.  While there is no simple solution, I'm a lot
more sure of my options now :)

Cheers,

Mike

 -Original Message-
 From: Chris Nolan [mailto:[EMAIL PROTECTED]
 Sent: 12 February 2004 17:28
 To: Michael McTernan
 Cc: Mysql; Benoit St-Jean
 Subject: Re: InnoDb Table Performance problem


 Michael McTernan wrote:

 Hi there,
 
 
 
 I would try to sum it up like this:
 
 1. Discs work best when used as purely sequential devices.
 2. Inserting those BLOBs takes up space somewhere. Strictly speaking, on
 the disc they may very well be in between certain rows.
 
 
 
 Good points - I guess there is no way for the database to
 control where on
 the file system the data ends up.  I wonder if using a raw
 parition for the
 database under InnoDB would fix this?  I guess that would give
 the storage
 engine the opportunity to put the BLOB data in a heap at one end
 of a disk
 while keeping the table data together at the other, although this sounds
 like a lot of work for the storage engine.  What would be good would be a
 tool to show how 'fragmented' a database table is on disk?
 
 
 
 A HEX editor? :-) Finding out what the developers at MySQL AB like to
 drink and sending them a few megalitres of it? :-)

 In all seriousness, you haven't got a great many options (although I
 could be completely wrong, in which case some kind person on this list
 will enlighten / flame me), but using OPTIMIZE TABLE on MyISAM tables
 and ALTER TABLE TYPE = InnoDB on InnoDB tables will defrag your table
 for you, possibly helping matters. In each case, the table gets a SHARE
 MODE lock (read only), all existing rows are copied to a new table such
 that they are positioned optimally, the old table is dropped and the new
 table is renamed to the original name.

 InnoDB looks at tablespace files and raw partitions in the same way and
 manages space within them in a method related to that used by the
 Berkeley Fast File System. The only advantage of raw partition usage is
 bypassing filesystem logic which is very beneficial on some OSes (not
 Linux or any of the BSDs to my knowledge though - their FS layers are
 damned quick!).

 3. As you may need to read over them, the disc has to seek much further
 and depending on the way BLOBs are placed inside the tablespace, seeking
 may be all over the place.
 
 
 
 Another good point.  It maybe that by creating the
 'filemetadata' table in
 one go I've just ensured that most of the columns are linear on the disc.
 In which case re-creating the 'files' table and then adding the BLOB data
 back as a second step might fix this.  My experiments show this
 is not the
 case though...
 
 
 If you're doing INSERTs in AUTO_COMMIT mode, then InnoDB will have no
 choice but to put the BLOBs where it can. It *might* have some more
 options to play with if you wrap a few INSERTs in a transaction block.

 
 
 Have you tried the following to see the results:
 
 ALTER TABLE files TYPE = MyISAM;
 (Remove all indexes)
 SELECT SUM(revision) FROM files;
 
 
 
 Okay.  I've made two new tables, one called 'myfiles' and one called
 'inofiles' which are both duplicates of 'files' without any
 indices except

RE: best-practices backups

2004-02-11 Thread Michael McTernan
Hi,

I'd love to see this too.  Even if it was a book that cost £40 to buy, I'd
get a copy.

Hey, maybe someone can recommend a book - I've looked hard and not really
come up with anything better than the MySQL manual, which, while great, is
missing the 'best practices' :(

Thanks,

Mike

 -Original Message-
 From: Michael Collins [mailto:[EMAIL PROTECTED]
 Sent: 05 February 2004 22:56
 To: [EMAIL PROTECTED]
 Subject: best-practices backups


 Is there any best-practices wisdom on what is the most preferable
 method of backing up moderately (~10-20,000 record) MySQL 4
 databases? A mysql dump to store records as text, the format provided
 by the BACKUP sql command, or some other method? I am not asking
 about replication, rotating backups, or remote storage, and I am not
 concerned about the size of the backup files.  Replication might be
 the best scenario for some sites but this case is not high finance.

 --
 Michael
 __
 ||| Michael Collins
 ||| Kuwago Inc  mailto:[EMAIL PROTECTED]
 ||| Seattle, WA, USA   http://michaelcollins.net











RE: best-practices backups

2004-02-11 Thread Michael McTernan
Hi,

I do just this at the moment - I have a cron job that runs mysqldump, gzips
the output, and will then ftp the important files to a machine that gets
backed up to a tape drive.  I also time the dump, and it currently takes
just over 3 minutes, which is quite acceptable for what I'm doing.  I'm
thinking about piping the output of mysqldump straight through gzip and then
ftp'ing it away to ease the disk access too, but that may come later.

I would still like a best-practices guide though, so that if everything does
go wrong I'm sure that I've got everything I need to reconstruct the system
as swiftly as possible.  I've done some dry runs, but still feel that this
isn't the same as learning from the experience of others who have actually
been faced with disaster in the past!

Thanks,

Mike

 -Original Message-
 From: David Brodbeck [mailto:[EMAIL PROTECTED]
 Sent: 11 February 2004 19:27
 To: 'Michael McTernan'; Michael Collins
 Cc: [EMAIL PROTECTED]
 Subject: RE: best-practices backups


   -Original Message-
   From: Michael Collins [mailto:[EMAIL PROTECTED]

   Is there any best-practices wisdom on what is the most preferable
   method of backing up moderately (~10-20,000 record) MySQL 4
   databases? A mysql dump to store records as text, the
  format provided
   by the BACKUP sql command, or some other method?

 I think it depends on how long a backup window you can tolerate.

 On our databases, I use mysqldump to dump to a text file.  The backup is
 piped through gzip to reduce the size on disk.  This has the advantage of
 being portable between architectures, robust, and human-readable.

 I also run with update logging turned on, and turn over the logfiles after
 each backup.  This way I can restore the database to any point in time by
 restoring the next earliest backup, then running as much of the update log
 as necessary to get to the desired point.  I use a script with the 'find'
 command to weed out old backups and update logs older than a
 certain number
 of weeks.  All of this is run nightly by cron entries.

 The disadvantage of this method is that mysqldump can take a while to dump
 large databases, and the tables are locked during the backup process.  If
 this is a problem, you should probably investigate mysqlhotcopy.











InnoDb Table Performance problem

2004-02-11 Thread Michael McTernan
Hi there,

I'm making something similar to a file revision control system, and using
MySQL on Linux as the database to drive it.  Almost all my tables are
InnoDB, and generally it is going very well, with the exception of one table
that is always very slow.

This table holds the files within the database.  It is defined as follows:

CREATE TABLE `files` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `revision` int(10) unsigned NOT NULL default '0',
  `filenameid` int(10) unsigned NOT NULL default '0',
  `pathid` int(10) unsigned NOT NULL default '0',
  `extensionid` int(10) unsigned NOT NULL default '0',
  `isDeleted` enum('0','1') NOT NULL default '0',
  `filepathname` tinytext NOT NULL,
  `contentsGz` longblob NOT NULL,
  PRIMARY KEY  (`id`),
  UNIQUE KEY `revision` (`revision`,`filepathname`(255)),
  KEY `fpeindex` (`filenameid`,`pathid`,`extensionid`,`revision`),
  KEY `filepathname` (`filepathname`(255)),
  CONSTRAINT `0_3570` FOREIGN KEY (`filenameid`) REFERENCES `filenames`
(`id`),
  CONSTRAINT `0_3571` FOREIGN KEY (`extensionid`) REFERENCES
`fileextensions` (`id`),
  CONSTRAINT `0_3572` FOREIGN KEY (`pathid`) REFERENCES `filepaths` (`id`)
) TYPE=InnoDB;

The 'contentsGz' column will have the contents of the file and will
typically be a couple of hundred kilobytes, but in some rare cases as large
as 20 Megabytes.

Selects on this table always go very slowly.  I've used EXPLAIN to look at
what is going on, and carefully added a couple of multi-column indexes that
have improved SELECT performance (this table is updated rarely, so I'm not
too worried about INSERT performance).  However, the performance is still
really bad.

I tried creating an identical table with the exception that it doesn't have
the 'contentsGz' column:

CREATE TABLE `filemetadata` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `revision` int(10) unsigned NOT NULL default '0',
  `filenameid` int(10) unsigned NOT NULL default '0',
  `pathid` int(10) unsigned NOT NULL default '0',
  `extensionid` int(10) unsigned NOT NULL default '0',
  `isDeleted` enum('0','1') NOT NULL default '0',
  `filepathname` tinytext NOT NULL,
  PRIMARY KEY  (`id`),
  UNIQUE KEY `revision` (`revision`,`filepathname`(255)),
  KEY `fpeindex` (`filenameid`,`pathid`,`extensionid`,`revision`),
  KEY `filepathname` (`filepathname`(255)),
  CONSTRAINT `0_3651` FOREIGN KEY (`filenameid`) REFERENCES `filenames`
(`id`),
  CONSTRAINT `0_3652` FOREIGN KEY (`extensionid`) REFERENCES
`fileextensions` (`id`),
  CONSTRAINT `0_3653` FOREIGN KEY (`pathid`) REFERENCES `filepaths` (`id`)
) TYPE=InnoDB;

I used INSERT ... SELECT to copy all data from the 'files' table to
'filemetadata'.

Here is something I found surprising:

mysql> SELECT COUNT(1) FROM files;
+--+
| COUNT(1) |
+--+
|   101013 |
+--+
1 row in set (32.42 sec)

mysql> SELECT COUNT(1) FROM filemetadata;
+--+
| COUNT(1) |
+--+
|   101013 |
+--+
1 row in set (0.29 sec)

mysql>

Note that I was careful to ensure that the query cache was not active for
these queries.  I'm fully aware that a smaller table should go faster, but
the MySQL manual says this about BLOB columns, at the end of section
11.3.2:

Note that each BLOB or TEXT value is represented internally by a separately
allocated object. This is in contrast to all other column types, for which
storage is allocated once per column when the table is opened.

I took this to mean that the BLOB data is stored outside the table, meaning
that the table should still be 'small', but that you get a double seek hit
when accessing the BLOB contents (which would be absolutely fine for what I
am doing, since I access file contents much less often than I examine the
metadata).

I'm wondering if I should change my schema to have the BLOBs in a new table
and just keep the metadata in a table of its own, but I thought that using
a BLOB would effectively do this for me?

Are my assumptions wrong, and is separating the BLOB contents into a
different table a good solution?
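
For concreteness, this is roughly the split I have in mind; the
'filecontents' name and the 1:1 link on the metadata id are just a sketch
that I haven't benchmarked:

CREATE TABLE filecontents (
  fileid int(10) unsigned NOT NULL,
  contentsGz longblob NOT NULL,
  PRIMARY KEY (fileid),
  FOREIGN KEY (fileid) REFERENCES filemetadata (id)
) TYPE=InnoDB;

-- Metadata queries then never touch the blob pages, and the contents
-- are fetched only when a file is actually requested:
SELECT contents.contentsGz
FROM filemetadata AS meta
JOIN filecontents AS contents ON contents.fileid = meta.id
WHERE meta.id = 42;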

Any help appreciated,

Mike



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Removing sub-query

2004-01-23 Thread Michael McTernan
Hi there,

I'm using MySQL 4.0.15 on RedHat 9.0.

I've got a problem which I can't figure out how to solve efficiently
without using a subquery.  4.1.0 isn't really an option, since this is going
into a stable environment.

The table is like this.

+--+--++
| revision | id   | name   |
+--+--++
|4 |   80 | filea  |
|5 |   85 | filea  |
|8 |   77 | fileb  |
|9 |   73 | filec  |
+--+--++

I want to group by name, and get the id of the highest revision for each
file.  So the query should return the following results:

+--+
| id   |
+--+
|   85 |
|   77 |
|   73 |
+--+

Is there any way of doing this efficiently without needing a subquery and
an upgrade to MySQL 4.1.0?
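
The closest I've found so far is the usual self-join trick for the
group-wise maximum, which at least runs on 4.0; a sketch, assuming the
table is called 'files':

SELECT t1.id
FROM files AS t1
LEFT JOIN files AS t2
  ON t2.name = t1.name AND t2.revision > t1.revision
WHERE t2.revision IS NULL;

but I'm not convinced it will stay fast as the table grows, hence the
question.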

Thanks,

Mike




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



LOCK TABLES and multi table UPDATE

2004-01-22 Thread Michael McTernan
Hi there,

I've got a small issue which looks a little like a bug.  I'm using MySQL
4.0.15-standard-log, on RedHat 9.0 with InnoDb tables.

Essentially I have two tables: one is a list of integers, while the
second is a table of integer pairs, e.g.

+---+   +-+-+
| A |   | B.x | B.y |
+---+   +-+-+
| 1 |   |  1  |  5  |
| 2 |   |  3  |  6  |
| 3 |   +-+-+
| 4 |
+---+

What I want to do is form a temporary table containing A, but with each
value in A that matches a B.x replaced by the corresponding B.y.  e.g. the
desired result from the example would be a new table:

+---+
| T |
+---+
| 5 |
| 2 |
| 6 |
| 4 |
+---+

Here is what I try executing to get this, from my live database:

LOCK TABLES
  labelfiles AS labelfile READ,
  branchfiles AS bfile READ;

DROP TEMPORARY TABLE IF EXISTS tmpLabelFiles;

-- This creates table 'A' from some other table
CREATE TEMPORARY TABLE tmpLabelFiles
(
   PRIMARY KEY (id)
)
SELECT
  labelfile.fileid AS id
FROM
  labelfiles AS labelfile
WHERE
  labelfile.labelid=18;

-- This performs the substitution 'B.x' -> 'B.y'
UPDATE
  tmpLabelFiles AS tfile,
  branchfiles AS bfile
SET
  tfile.id=bfile.replacementfileid
WHERE
  tfile.id=bfile.branchfileid;

UNLOCK TABLES;

So far so good.  Except that I get the following error when trying to
execute the UPDATE:

ERROR 1099: Table 'bfile' was locked with a READ lock and can't be updated

If I lock 'bfile' with a WRITE lock it succeeds, but I'd prefer not to use a
WRITE lock since other accesses to the table might be needed and this table
can get quite large.  Really, I'm not updating 'bfile', so I should only
need a READ lock, right?

Has anyone else seen this, and does anyone know if there is an efficient
workaround?  I've checked bugs.mysql.com and found nothing; is this a new
bug?

Thanks,

Mike




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: LOCK TABLES and multi table UPDATE

2004-01-22 Thread Michael McTernan
Hi there,

Thanks for your quick response!

 Why all the locks, temp tables and updates? You can just do:

 SELECT
   CASE WHEN B.y IS NULL THEN A.x ELSE B.y END AS newX
 FROM
   A LEFT JOIN B ON A.x = B.x

Spot on - many thanks!  I wasn't aware of the CASE function, but I've
certainly learnt something now :)
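
For the archive: IFNULL() seems to say the same thing a little more
compactly.  Mapped onto my actual tables it would presumably look like
this (untested):

SELECT
  IFNULL(bfile.replacementfileid, labelfile.fileid) AS id
FROM
  labelfiles AS labelfile
LEFT JOIN
  branchfiles AS bfile
ON
  bfile.branchfileid = labelfile.fileid
WHERE
  labelfile.labelid = 18;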

Thanks,

Mike

 -Original Message-
 From: Jochem van Dieten [mailto:[EMAIL PROTECTED]
 Sent: 22 January 2004 16:38
 To: [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: Re: LOCK TABLES and multi table UPDATE


 Michael McTernan said:
 
  I'm using MySQL 4.0.15-standard-log, on RedHat 9.0 with InnoDb tables.
 
  Essentially I have two tables, one table is a list of integers,
  while the second is a table of integer pairs. e.g.
 
  +---+   +-+-+
  | A |   | B.x | B.y |
  +---+   +-+-+
  | 1 |   |  1  |  5  |
  | 2 |   |  3  |  6  |
  | 3 |   +-+-+
  | 4 |
  +---+
 
  What I want to do is form a temporary table containing A, but then
  with each value in A that matches B.x substituted for B.y.  e.g. the
  desired result from the example would be a new table:
 
  +---+
  | T |
  +---+
  | 5 |
  | 2 |
  | 6 |
  | 4 |
  +---+

 Why all the locks, temp tables and updates? You can just do:

 SELECT
   CASE WHEN B.y IS NULL THEN A.x ELSE B.y END AS newX
 FROM
   A LEFT JOIN B ON A.x = B.x

 Jochem





 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Access rules for users from specific hosts

2003-11-24 Thread Michael McTernan
Hi there,

I'm seeing some problems with the way I've granted access to different
users.  I've set up the server to allow access only from certain clients.
For example, the backup user is only allowed access from some machines:

GRANT SELECT ON testdb.* TO [EMAIL PROTECTED]

The problem is that when connecting I must supply a -h option to match the
grant, e.g.:

[EMAIL PROTECTED] mm1]$ mysql -u backup
ERROR 1045: Access denied for user: '[EMAIL PROTECTED]' (Using password: NO)
[EMAIL PROTECTED] mm1]$ mysql -u backup -h 127.0.0.1
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 32 to server version: 4.0.15-standard-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql>

Is there an option to get the MySQL server to resolve localhost to 127.0.0.1
automatically?  Or do I just have to use the -h option everywhere?  It's not
a big problem for me, but somewhat of an annoyance :(
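
The obvious workaround I can see is to duplicate each rule for the socket
host name, e.g.:

GRANT SELECT ON testdb.* TO backup@localhost;

but I'd rather not maintain every grant twice.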

Many Thanks,

Mike



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Backslash and full text searches

2003-11-16 Thread Michael McTernan
Hi there,

I had a similar problem wanting to match 'words' like 42.3.1.1 as
technical references.  I couldn't get them to match using a fulltext index;
I assume this is because the full stops are used by the parser in MySQL to
break the text into words for indexing - maybe this is the same for the
slashes.  I've considered changing the full stops to underscores as I add
the text to the database, or re-coding the function that does the
splitting - I think I found the function, but it is quite hard to
penetrate :-/
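
In case it helps anyone, the substitution idea would look something like
this; the 'docs' table and its shadow search column are hypothetical, and
I haven't tested it:

-- Underscores count as word characters to the fulltext parser, so the
-- reference survives as a single indexed word.
-- (body_search needs its own FULLTEXT index.)
UPDATE docs SET body_search = REPLACE(body, '.', '_');

SELECT * FROM docs
WHERE MATCH (body_search) AGAINST ('42_3_1_1');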

Thanks,

Mike

 -Original Message-
 From: Jason Ramsey [mailto:[EMAIL PROTECTED]
 Sent: 13 November 2003 22:11
 To: [EMAIL PROTECTED]
 Subject: Backslash and full text searches


 We make extensive use of full text searches, but have run into some
 problems with backslashes.  If a word like "somebody's" is entered into
 our database, we escape the string using mysql_escape_string in PHP.  So,
 mysql_escape_string("somebody's") becomes "somebody\'s" when it is saved
 in the database.  The problem is, we don't seem to be able to match
 against this in the database.

 Let's say we saved "somebody's" in the database.  The following will match
 fine and pull up the results expected...

 SELECT * FROM Table WHERE MATCH (Field) AGAINST ("somebody's")

  But if "somebody\'s" is stored in the database, there seems to be no way
 to match the "\".  We've tried all of the following...

 SELECT * FROM Table WHERE MATCH (Field) AGAINST (somebody's)
 SELECT * FROM Table WHERE MATCH (Field) AGAINST ('somebody's')
 SELECT * FROM Table WHERE MATCH (Field) AGAINST ('somebody's')
 SELECT * FROM Table WHERE MATCH (Field) AGAINST (somebody\'s)
 SELECT * FROM Table WHERE MATCH (Field) AGAINST ('somebody\'s')
 SELECT * FROM Table WHERE MATCH (Field) AGAINST (somebody\\\'s)
 SELECT * FROM Table WHERE MATCH (Field) AGAINST ('somebody\\\'s')
 SELECT * FROM Table WHERE MATCH (Field) AGAINST ('somebody\'s')
 SELECT * FROM Table WHERE MATCH (Field) AGAINST ('somebody\\\'s')

  Any ideas?


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: maintaining size of a db

2003-11-12 Thread Michael McTernan
From the manual:

1.8.4.1 Subqueries

Subqueries are supported in MySQL version 4.1. See section 1.6.1 Features
Available in MySQL 4.1.



Hope that helps,

Mike

 -Original Message-
 From: Scott H [mailto:[EMAIL PROTECTED]
 Sent: 12 November 2003 17:45
 To: Dan Greene; MySQL Mailing List
 Subject: RE: maintaining size of a db


 OK, I *THINK* I follow you here.  Couple of
 questions.  I'm reading an online tutorial trying
 to figure this out, and I am led to believe mysql
 can't do nested queries, aka sub-queries. But you
 say it can? Is this recent?  And I don't have a
 timestamp field, I have an autoincrement field,
 but what do you mean by the (@aa:=id) thing?  I
 don't follow that.  thanks.

 --- Dan Greene [EMAIL PROTECTED]
 wrote:
  What I would do is a classical guesstimate
 
  find the average size per record ((data file
  size + index file(s) size) / # records in table)
 
  using that, find the data used per day
 
  using that, figure out how many days, on
  average it takes to hit 20GB
 
  let's say it's 89 days.
 
  right off the top, take 10% off for safety, now
  we're at 80 days
 
  presuming your table has a timestamp field:
 
  delete from log_table WHERE TO_DAYS(NOW()) -
  TO_DAYS(date_col) > 80
 
  if you don't have a timestamp field, but you do
  have an autoincrement id field:
 
  figure out the number of records that on average adds up to 20GB
  (say it's 2M)
  again, use 10% for safety (1.8M)
 
  select (@aa:=id) as low_id from logtable order
  by id limit 1800,1
  delete from logtable where id < @aa
 
  (do subqueries work with a limit clause?)
 
 
   -Original Message-
   From: Scott H
  [mailto:[EMAIL PROTECTED]
   Sent: Wednesday, November 12, 2003 11:19 AM
   To: Dan Greene; MySQL Mailing List
   Subject: RE: maintaining size of a db
  
  
   Yes sir, exactly.  It's just that's what I'm
   looking for, and can't figure out.  I can set
  up
   a cron job, but what exactly would the SQL
  delete
   statement be that would allow me to delete
  old
   records in such a way that the db maintains
  an
   approximately constant size on disk?
  (Failing
   that perhaps a delete statement that would
  just
   have it maintain a constant # of records?
   ...maybe this would be much simpler?)
  
   --- Dan Greene wrote:
cronjob a sql script that runs a delete
statement for old jobs daily
   
 --- Egor Egorov wrote:
  Scott H wrote:
  Can't seem to find this one in the
  manual
or
  archives - how do I control a db to
maintain
  its size to an arbitrary value, say 20
  GB?
I
  want to just rotate records, deleting
those
  that are oldest.
 
  You can't restrict size of the database
only
  with MySQL, use disk quotas.

 No!  That would just stop mysql right in
  its
 tracks (so to speak...) when it got too
large.
 But I want old records sloughed off and
  the
db to
 continue running.  (This is for a central
syslog
 box.)
  
  
  
   .
  
   __
   Do you Yahoo!?
   Protect your identity with Yahoo! Mail
  AddressGuard
   http://antispam.yahoo.com/whatsnewfree
  
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:
 
 http://lists.mysql.com/[EMAIL PROTECTED]
 


 =
 --

 To announce that there must be no criticism of the President, or
 that we are to stand by the President, right or wrong, is not
 only unpatriotic and servile, but is morally treasonable to the
 American public.
   -- Theodore Roosevelt, 1918






 ..

 __
 Do you Yahoo!?
 Protect your identity with Yahoo! Mail AddressGuard
 http://antispam.yahoo.com/whatsnewfree

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: max_user_connections problem after upgrading

2003-11-10 Thread Michael McTernan
Hi,

Have you tried netstat -a on the box with the MySQL server?  This command
(Linux) will show what is connected to where, and will help you double-check
that there really aren't any open connections to the server.

Thanks,

Mike

 -Original Message-
 From: Henrik Skotth [mailto:[EMAIL PROTECTED]
 Sent: 10 November 2003 18:54
 To: [EMAIL PROTECTED]
 Subject: Re: max_user_connections problem after upgrading


 That's new for 4.0? Hadn't noticed it... Anyway, it's set to 0 so there is
 no limit to exceed.
 Also, we aren't getting the error messages ALL the time; they start to
 appear after a day or two and get more and more frequent until I restart
 mysql.  Any other ideas?

 -- Henrik

 gerald_clark skrev:
  Are you sure you are not exceeding the setting for
  maximum connections per hour for that user?
 
  Henrik Skotth wrote:
 
 Hi!
 
 What I meant was that even if there are currently only two user
 connections being used, and the limit is 300, we still get the "already
 more than max_user_connections" error...
 
 -- Henrik
 
 gerald_clark skrev:
 
 
 Henrik Skotth wrote:
 
 
 
 Hello all,
 
 We recently upgraded to 4.0; it went very well and the performance gains
 have been great.  But now the server has started to act strangely.  Every
 few days, the server starts to refuse connections, saying that there is
 already more than max_user_connections, but there are really only one or
 two active connections and our max_user_connections is 300.  I have to
 take down and restart the server to solve the problem, and it keeps
 happening over and over again every few days...
 
 
 Am I the only one having this problem? Any suggestions?
 
 Regards,
 -- Henrik Skotth, Hogwarts.nu
 
 
 
 
 
 Are there 298 or 299 inactive connections?
 If so, why are they not being closed?
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]













 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]






-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Viruses from the list

2003-11-05 Thread Michael McTernan
Hi,

Since joining the list I'm getting about 6 virus emails a day.  Fortunately
the firewall is stripping them for me and just sending me alerts telling me
the virus and the sender of the mail, e.g.:

The message senders were
[EMAIL PROTECTED]
[EMAIL PROTECTED]

The message title was "latest internet security pack"
The message date was 4 Nov 2003 09:58:03 +0100
The virus or unauthorised code identified in the email is
/var/qmail/queue/split/0/attach/422738_7X_PM10_EMS_MA-X=2DMSDOWNLOAD__instal
ler355.exe
Found the W32/[EMAIL PROTECTED] virus !!!

I think it is unfortunate, and people should be aware of this, but it's just
one of those things that happen...  oh well!

Thanks,

Mike

 -Original Message-
 From: Leo Huang [mailto:[EMAIL PROTECTED]
 Sent: 05 November 2003 12:39
 To: Al Bogner
 Cc: [EMAIL PROTECTED]
 Subject: Re: Viruses from the list


 Al Bogner,

 Thanks for you info.

 Yes, I got quite a few as well. About Microsoft update stuff etc.

 But I think emails with viruses are quite common; my mail server
 captures around 2,000 emails with viruses every day.  Also this is an old
 virus (relatively speaking), so it should be fine, I think.

 Leo


 Al Bogner wrote:

 I use an email-address for this list only, and since my first posting a
 few days ago I have been getting viruses, while I didn't before.
 
 VIRUS FROM mamo @ hvd.healthnet.lu (W32/[EMAIL PROTECTED])
 VIRUS FROM kawamoto @ wave.tky.plala.or.jp ([EMAIL PROTECTED])
 VIRUS FROM sales @ vulcanford.ca (W32/[EMAIL PROTECTED])/[EMAIL PROTECTED])
 
 Wake up people, it was time enough to update virus-definitions.
 Clean your
 pcs.
 
 I will disable my email-adress soon.
 
 Al
 
 
 



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Multiuser Programming

2003-10-30 Thread Michael McTernan
Hi,

 -Original Message-
 From: William R. Mussatto [mailto:[EMAIL PROTECTED]
 Sent: 30 October 2003 01:14
 To: [EMAIL PROTECTED]
 Subject: RE: Multiuser Programming


 Chris said:
  I'm not familiar with that function in MS SQL, and you're a bit unclear,
  but it looks like you might be able to use temporary tables.
 
  http://www.mysql.com/doc/en/CREATE_TABLE.html
 
  Temporary tables are on a per-connection basis, and are unique per
  connection (each connection can have a temporary table that won't
  conflict with others). Temporary tables are dropped as soon as the
  connection is dropped.
 
  Chris
 
  -Original Message-
  From: William IT [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, October 29, 2003 5:01 PM
  To: [EMAIL PROTECTED]
  Subject: Multiuser Programming
 
 
  I am doing a transfer from MS SQL.
  In MS SQL, while I connect as USER1 my table will be created as
  User1.TableName.  This will avoid conflict with other users.
 
  Now, how do I do that in Mysql? Or, maybe there is such system in Mysql.
  Is there any Mysql reference talk about this matter?

 I thought this list wanted bottom posting?  Sorry if I've got it
 backwards.

 How does connection pooling figure into this?  I would assume a drop table
 command would be needed since from MySQL's point of view, the connection
 never gets dropped.

Yes, I believe that a drop is needed in such cases.

Thanks,

Mike





 William R. Mussatto, Senior Systems Engineer
 Ph. 909-920-9154 ext. 27
 FAX. 909-608-7061



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: php temp table question

2003-10-28 Thread Michael McTernan
Hi Larry,

I was wondering a similar thing, but for JDBC using connection pooling.  In
your example you say that 2 people will hit the script at the same time; if
they are using independent connections this should cause no problems since
TEMPORARY tables are local to the connection so don't interfere.

My worry is that when using connection pooling, if I create temporary tables
without dropping them and then return the connection to the pool, the
temporary table will still exist and cause problems when the connection gets
reused.  Clearly dropping the temporary table is needed, although I don't
want to give the user in question the DROP TABLES privilege, and there is no
such privilege just for temporary tables :(  I'm not sure how the database
connection gets 'reset' without being terminated and then reconnecting.

Thanks,

Mike

 -Original Message-
 From: Larry Brown [mailto:[EMAIL PROTECTED]
 Sent: 27 October 2003 21:44
 To: MySQL List
 Subject: RE: php temp table question


 Thanks, I got the answer from a php developer.  In case anyone is
 wondering,
 according to him the table is dropped at the end of the script execution
 regardless of whether you use persistent connections or not.

 -Original Message-
 From: Larry Brown [mailto:[EMAIL PROTECTED]
 Sent: Monday, October 27, 2003 4:04 PM
 To: MySQL List
 Subject: php temp table question


 Does anyone know whether the use of persistent connections with php will
 allow a temp table created by a script to linger around and cause
 a problem
 with the next execution of the script when it tries to create the
 temp table
 again?  Also if it does present a problem with the next script execution
 trying to create the temp table again, if I drop the temp table at the end
 of the script will I still have problems if the script is run by
 two clients
 in tandem?  For instance two people connect, both hit the script at about
 the same time.  One script creates the temp table and before it
 can drop the
 table the second script tries to create the table.  Will it see the table
 created by the other script?  Again the use of persistent
 connections would
 be at the heart of this I would think.



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]





 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: DROP TEMORARY TABLE

2003-10-28 Thread Michael McTernan
Hi Steve,

Thanks for your reply.  I agree that TEMPORARY TABLES are really great, and
I'm using them in a similar way to your description, and I think I
understand their purpose.

My problem, that so far no one has been able to answer, is that I'm using
connection pooling with the Tomcat server.  As far as I understand, this
means that you don't create a connection to the server to do some processing
then close it.  Instead you get a connection from a pool, use it, then
return it to the pool, and all connections appear as the same user to MySQL
(i.e. same user, password and host).

So my worry is this scenario:

TransactionA gets connection A from the pool.
TransactionA creates a temporary table for some query.
TransactionA is done, and returns the connection to the pool.
TransactionB gets a connection from the pool, which just so happens to be
connection A.
TransactionB tries to create a temporary table with the same name as the one
that already exists.
** BANG!! **

TransactionA and TransactionB were trying to use the same temporary table on
the same connection.  This is why I think there should be a DROP TEMPORARY
TABLE privilege, so that I can let the Tomcat server drop the temporary
tables without having to give it the DROP TABLE privilege, which is a lot
more worrying.
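
In the meantime the safest pattern I can see is for every transaction to
clean up defensively, both before and after using the table; a sketch,
with a made-up table name:

DROP TEMPORARY TABLE IF EXISTS tmpResults;

CREATE TEMPORARY TABLE tmpResults
(
  id INT UNSIGNED NOT NULL,
  PRIMARY KEY (id)
);

-- ... queries using tmpResults ...

DROP TEMPORARY TABLE IF EXISTS tmpResults;

Though of course those DROPs still need the very privilege I'm trying to
avoid granting.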

Note I'm cross-posting this to the Java list too, in case I'm wrong about
my understanding of pooling and someone there can correct me :)

Many Thanks,

Mike

 -Original Message-
 From: Steve Buehler [mailto:[EMAIL PROTECTED]
 Sent: 28 October 2003 11:37
 To: Michael McTernan; mysql
 Subject: Re: DROP TEMORARY TABLE


 TEMPORARY TABLES are supposed to work this way.  They are only
 there during
 a connection and then dropped when the connection to the database is
 closed. One thing they are great for is when you have temporary
 information
 that needs to be kept and worked with.  One of my programs has to pull
 things from other tables and I put them into a temporary table
 where I can
 do things with this information more easily than I could by putting them into
 variables.  It makes sorts much easier and the information is
 only used to
 generate the pages that a client is viewing.  Instead of having a
 table for
 every client that has to be deleted afterwards, I use a temporary
 table.  You also don't have to have the table be a different name
 for each
 client that is connecting because only the connection that created it can
 see it.  So if 5 people access my page at the same time and each
 have their
 own population table, it is not shared so the information can/will be
 different for each of them.
 Before I started using temporary tables, I would have my script come up
 with a random name for the table.  Problem was that if the client killed
 the connection before the table could be dropped, it would stay in the
 database until I manually dropped it.  In my opinion, this was
 the biggest
 benefit of TEMPORARY TABLES.

 Steve

 At 02:58 PM 10/26/2003, you wrote:
 Hi there,
 
 I'm trying to restrict the access that a Tomcat server has to mysql
 4.0.15-standard-log database server.  Unfortunately some of the
 queries use
 temporary tables, so I've had to GRANT CREATE TEMPORARY TABLES
 to the user.
 
 However, since the server is using connection pooling, it also DROPS the
 temporary tables after the queries have completed.
 
 I'd really like to be able to GRANT DROP TEMPORARY TABLES, but
 this doesn't
 seem to exist, so I've got to GRANT DROP - which feels less
 safe to me :(
 
 Is there a better way of doing this that someone has found before, or is
 there a way to grant DROPS of the temporary table?
 
 Many Thanks,
 
 Mike
 
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]








-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Foreigner keys in MySQL?

2003-10-28 Thread Michael McTernan
Hi,

If you mean foreign key constraints, they are available with InnoDB tables
under MySQL, as are atomic transactions (COMMIT and ROLLBACK).  I'm using
them with MySQL 4.0 and they work a treat :)
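
A minimal example of the syntax, in case it helps; 'parent' and 'child'
are obviously just placeholder names.  Note that InnoDB needs an index on
the referencing column:

CREATE TABLE parent (
  id INT UNSIGNED NOT NULL,
  PRIMARY KEY (id)
) TYPE=InnoDB;

CREATE TABLE child (
  id INT UNSIGNED NOT NULL,
  parentid INT UNSIGNED NOT NULL,
  PRIMARY KEY (id),
  KEY (parentid),
  FOREIGN KEY (parentid) REFERENCES parent (id)
) TYPE=InnoDB;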

Thanks,

Mike

 -Original Message-
 From: Juliana Gubert Ehrensperger [mailto:[EMAIL PROTECTED]
 Sent: 28 October 2003 14:50
 To: SQL List
 Subject: Foreigner keys in MySQL?


 Hello all,
 I've joined a research project, and we now need to specify software to
 build a database.
 Although I don't have much information about this kind of software, I've
 been considering MySQL a good option, as it's an open source database and
 this project is being developed at a public university.
 Nevertheless, I received the information that MySQL doesn't support
 relational functions between tables of the same database.  I'm not sure
 about this, and I'd like to confirm it.  Does MySQL allow relational
 functions between tables, recognizing foreign keys?
 Excuse me for asking such a simple question,

 --
 _

 Juliana Gubert Ehrensperger
 Federal University of Santa Catarina
 Brazil
 _



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: DROP TEMORARY TABLE

2003-10-28 Thread Michael McTernan
Hi there,

Cool, I'm not imagining things :)

 1) creating a *unique* temporary table using the SESSIONID, in a
 separate DB where the tomcat 'user' has DROP privileges, and use
 a method implementing HttpSessionBindingListener to drop that
 table when the session terminates;

I currently drop the temporary table after use, so that the connection gets
'cleaned' up for reuse.  I've used the finally clause of Java to ensure this
gets done, like this:

Connection dbConnection = Pool.getConnection();

try
{
/* Do some funky MySQL stuff here, using temp tables x,y,z */

return true;
}
finally
{
dbConnection.prepareStatement("DROP TABLE x, y, z").executeUpdate();
/* Close method is overridden, so actually returns to the Pool */
dbConnection.close();
}

The problem for me is that I don't like giving the Tomcat user DROP
privilege for security reasons.  Seems like a feature request is needed to
ask for a DROP TEMPORARY TABLE privilege to match the CREATE TEMPORARY TABLE
privilege.

Thanks,

Mike

 -Original Message-
 From: Hassan Schroeder [mailto:[EMAIL PROTECTED]
 Sent: 28 October 2003 15:54
 To: mysql
 Cc: MySQL Java
 Subject: Re: DROP TEMORARY TABLE


 Michael McTernan wrote:

  My problem, that so far no one has been able to answer, is that
 I'm using
  connection pooling with the Tomcat server.

  TransactionA gets connection A from the pool.
  TransactionA creates a temporary table for some query.
  TransactionA is done, and returns the connection to the pool.
  TransactionB gets a connection from the pool, which just so
 happens to be
  connection A.
  TransactionB tries to create a temporary table with the same
 name as the one
  that already exists.
  ** BANG!! **

 Yes, you're right -- as long as the container-managed connection is
 open, the original TEMPORARY table will persist across the sessions
 of different individuals.

 I just ran into this issue myself, and so far I've thought of

 1) creating a *unique* temporary table using the SESSIONID, in a
 separate DB where the tomcat 'user' has DROP privileges, and use
 a method implementing HttpSessionBindingListener to drop that
 table when the session terminates;

 2) just keeping the original ResultSet in memory and manipulating
 it there...

 Neither quite as graceful as using a real temporary table, so I'm
 also open to other suggestions :-)

 --
 Hassan Schroeder - [EMAIL PROTECTED]
 Webtuitive Design ===  (+1) 408-938-0567   === http://webtuitive.com

dream.  code.




 --
 MySQL Java Mailing List
 For list archives: http://lists.mysql.com/java
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



DROP TEMORARY TABLE

2003-10-26 Thread Michael McTernan
Hi there,

I'm trying to restrict the access that a Tomcat server has to mysql
4.0.15-standard-log database server.  Unfortunately some of the queries use
temporary tables, so I've had to GRANT CREATE TEMPORARY TABLES to the user.

However, since the server is using connection pooling, it also DROPS the
temporary tables after the queries have completed.

I'd really like to be able to GRANT DROP TEMPORARY TABLES, but this doesn't
seem to exist, so I've got to GRANT DROP - which feels less safe to me :(

Is there a better way of doing this that someone has found before, or is
there a way to grant DROPS of the temporary table?

Many Thanks,

Mike



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]