MS SQL emulator for MySQL in order to support MS Project Server, Sharepoint...
I apologize for asking this question. I am somewhat confused by Microsoft's licensing, and I cannot justify to senior management why MS SQL Server should be procured instead of more urgently needed software. I had the not-so-bright idea that perhaps an MS SQL emulator or compatibility mode exists for/in MySQL. Does such a beast exist? In particular, I want to use MySQL as the backend database for MS Project Server, SharePoint, and other applications that use MS SQL Server as their backend database. Again, I apologize for asking; I could not figure out the correct search terms for such software. Thank you
How does one transpose a group of columns into rows?
I have a table:

Id | path1  | display | value
1  | ostype | os      | windows
1  | ostype | ver     | NT4
2  | ostype | os      | linux
2  | ostype | ver     | RHEL 5.4

that I would like to transform into:

Id | os      | ver
1  | windows | NT4
2  | linux   | RHEL 5.4

Thank you, Raymond
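One way to do this (a sketch; `your_table` is a placeholder, since the post does not give the table's name) is to group by Id and collapse each attribute row into its own column with MAX(CASE ...):

```sql
-- Pivot the attribute rows into columns, one output row per Id.
-- `your_table` stands in for the table described in the question.
SELECT Id,
       MAX(CASE WHEN display = 'os'  THEN value END) AS os,
       MAX(CASE WHEN display = 'ver' THEN value END) AS ver
FROM your_table
WHERE path1 = 'ostype'
GROUP BY Id;
```

Each CASE yields the value for one attribute and NULL otherwise; MAX() then picks out the single non-NULL value within each group.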
RE: Adding and Removing tables from MERGE tables dynamically
Thanks for the information. I want to make sure that I understand: do you run the ALTER TABLE command on a live table that is accepting inserts, or do you stop accepting remote connections, flush the tables, run the ALTER TABLE command, and then start accepting connections again?

Thank you, Raymond

-Original Message-
From: Brent Baisley [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 06, 2006 10:24
To: Jacob, Raymond A Jr; mysql@lists.mysql.com
Subject: Re: Adding and Removing tables from MERGE tables dynamically

I've got a similar setup; the total record count across about 8 tables hovers around 200 million. To change a merge table, just issue an ALTER TABLE with a new union:

ALTER TABLE mergetable UNION=(table1, table2, table3, ...);

- Original Message -
From: Jacob, Raymond A Jr [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Tuesday, September 05, 2006 5:29 PM
Subject: Adding and Removing tables from MERGE tables dynamically

[original post quoted in full; trimmed - it appears as the next message]
-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Adding and Removing tables from MERGE tables dynamically
Problem: I use two applications. One, snort, is the client that inserts data into eleven (11) tables. The other application, BASE, joins the tables into one table. When the tables reach 4GB in size, deleting one month of records becomes unbearably slow (20-30 days). The search (SELECT) is slow too, but that is a problem with the BASE application.

I thought that using MERGE tables would allow an administrator to create a monthly table, under the original table names, composed of daily or weekly tables, by appending the date of creation to each table name, i.e. data_table1_YYYY-MM-DD and join_table_YYYY-MM-DD.

From the documentation: creating the table with INSERT_METHOD = FIRST results in INSERTs being done to the first table in the MERGE UNION list. I will assume that the first table is the latest table. So one of the merge tables should look like:

    CREATE TABLE original_table ( ... )
    TYPE = MERGE
    UNION = (data_table1_2006-09-12, data_table1_2006-09-05)

Using cron, and depending on the interval chosen (daily, weekly, bi-weekly, monthly, quarterly, or yearly), at the start of each new interval a new table would be created with the current date in YYYY-MM-DD format. For example, if the new interval begins a week from today on 2006-09-12: at 12:00am on 2006-09-12, a script would create new tables that look like:

    CREATE TABLE data_table1_2006-09-12 ( ... )

and one of the MERGE tables should look like:

    CREATE TABLE original_table ( ... )
    TYPE = MERGE
    UNION = (data_table1_2006-09-12, data_table1_2006-09-05)

Every Tuesday (in this case) from then on, new tables are created, ending with the date in YYYY-MM-DD format, and merged into the original table, so that by 2006-09-30 one of the MERGE tables should look something like:

    ... )
    TYPE = MERGE
    UNION = (data_table1_2006-09-26, data_table1_2006-09-19, data_table1_2006-09-12, data_table1_2006-09-05)

On 2006-10-05 at 00:00hrs, the newest table, data_table1_2006-10-05, should be created and merged into original_table, and the oldest table, in this case data_table1_2006-09-05, should be removed from the merge. The resulting merge table should look something like:

    ... )
    TYPE = MERGE
    UNION = (data_table1_2006-10-05, data_table1_2006-09-26, data_table1_2006-09-19, data_table1_2006-09-12)

Question: How does one add data_table1_2006-09-12 to original_table dynamically?
Question: How does one remove data_table1_2006-09-05 from original_table dynamically?
Question: In other words, can tables be added to and removed from a MERGE table dynamically?

Benefit: I hope to archive individual tables. When I need to review old data, I will use a copy of the BASE application and merge only the tables I am interested in, in order to search smaller tables without changing the BASE application.

Question: Is this possible? Do these questions make sense?
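For what it's worth, both the add and the remove can be done with a single statement that restates the union list - a sketch based on the tables named above (note that table names containing '-' must be quoted with backticks in MySQL):

```sql
-- Redefine the merge table's member list in one step: the newest
-- table is added, and the oldest is dropped simply by not naming it.
ALTER TABLE original_table
UNION = (`data_table1_2006-10-05`, `data_table1_2006-09-26`,
         `data_table1_2006-09-19`, `data_table1_2006-09-12`);
```

The underlying MyISAM table that falls out of the union is untouched on disk, so it can be archived or dropped separately.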
I ran out of disk space running optimize on a table... Now I can not open data.MYI
Now, when I do a DESC on the data table, I get the error:

    ERROR 1016 (HY000): Can't open file: 'data.MYI' (errno: 144)

As I recall, I ran: OPTIMIZE TABLE data; The OPTIMIZE command did not complete; the error, as I recall, was that the /var filesystem was full. Running the following commands:

    ls data.*
    data.MYD  data.MYI  data.frm

    acid# ls -last snort/data.*
          2 -rw-rw  1 mysql mysql       1024 Jul 12 14:23 data.MYI
    4543712 -rw-rw  1 mysql mysql 4650467248 Jul 12 14:09 data.MYD
         10 -rw-rw  1 mysql mysql       8632 Jun 28 20:54 data.frm

Is there anything I can do to get this table back? r/Raymond
RE: I ran out of disk space running optimize on a table... Now I can not open data.MYI
Thank you, raymond

-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED]
Sent: Thursday, July 13, 2006 18:07
To: Jacob, Raymond A Jr
Cc: mysql@lists.mysql.com
Subject: Re: I ran out of disk space running optimize on table... Now I can not open data.MYI

Hmmm. Doesn't look pretty, Raymond! You've got 4.6GB of data and just 1K of indexes now.

Step ZERO - make a backup copy of the files you have NOW, in case any repair operations you try make things worse.

On the bright side, your data file (.MYD) should still be intact - it's just the index file (.MYI) that's hosed. At a high level, you're going to want to rebuild that index file. How is a little trickier. See http://dev.mysql.com/doc/refman/5.0/en/repair.html - I expect you'll want to look at the section titled "Stage 3: Difficult repair", because the first part of your index file is in fact destroyed.

Good luck! Dan

On 7/13/06, Jacob, Raymond A Jr [EMAIL PROTECTED] wrote:
[original post quoted in full; trimmed]
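If the .MYD really is intact and only the .MYI header is destroyed, one recovery path (a sketch - back up all three files first, and make sure /var has free space this time, since rebuilding the index needs room) is to rebuild the index file from the table definition:

```sql
-- Re-create data.MYI from scratch using the definition in data.frm.
-- USE_FRM exists for exactly this case: an unusable index file.
REPAIR TABLE data USE_FRM;
```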
Trying to understand why Select running out of memory if table not used
When I try to retrieve all rows in the event table that are in a particular range, and then try to determine whether the cid is in the data table, the join takes forever. So I tried just adding the table to the FROM clause, to see if the WHERE clause was slowing things down. Below is a portion of my session:

    mysql> select cid, timestamp from event where timestamp between '2006-05-01' AND '2006-05-15';
    ...
    | 7753544 | 2006-05-14 23:59:58 |
    | 7753545 | 2006-05-15 00:00:00 |
    | 7753546 | 2006-05-15 00:00:00 |
    +---------+---------------------+
    1336344 rows in set (32.55 sec)

    mysql> select event.cid, event.timestamp from event, data
        -> where event.timestamp between '2006-05-01' AND '2006-05-15';
    mysql: Out of memory (Needed 1477684 bytes)
    ERROR 2008 (HY000): MySQL client ran out of memory

Obviously, that is a bad idea. I just can not figure out how to speed the SELECT up. I was using the query:

    SELECT event.cid, event.timestamp
    FROM event, data
    WHERE (event.timestamp BETWEEN '2006-05-01' AND '2006-05-15')
      AND event.cid = data.cid;

but the query never completed, i.e. I aborted it after a few hours.

Thank you, Raymond
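For the record, the second query above has no join condition between event and data, so it produces a cartesian product - which is why the client ran out of memory. The original join is the right shape; what it likely lacks is index support on the join column. A sketch (idx_cid is a name I'm inventing; building an index on a 4.5GB MyISAM table is itself slow, but it is a one-time cost):

```sql
-- See how MySQL plans the join before and after adding the index.
EXPLAIN
SELECT event.cid, event.timestamp
FROM event JOIN data ON event.cid = data.cid
WHERE event.timestamp BETWEEN '2006-05-01' AND '2006-05-15';

-- If EXPLAIN shows a full scan of `data` (likely when cid is only a
-- trailing column of a composite primary key), index cid directly:
ALTER TABLE data ADD INDEX idx_cid (cid);
```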
Re: How does one speed up delete-Again
Could the problem be the locked data table in this case?

    mysql> show processlist;
    | Id  | User | Host      | db    | Command | Time   | State        | Info |
    |   1 | goat | xxx:62693 | snort | Query   | 424668 | Locked       | INSERT INTO data (sid,cid,data_payload) VALUES ('2','13305243','45C10FEF40002F11D162CD9E... |
    | 524 | root | localhost | snort | Query   | 424669 | Sending data | DELETE data FROM data JOIN sidtemp ON data.cid = sidtemp.cid |
    | 537 | root | localhost | snort | Query   | 242922 | Locked       | DELETE data FROM data, event WHERE data.cid = event.cid AND event.timestamp < '2006-05-01' |
    | 542 | root | localhost | NULL  | Query   |      0 | NULL         | show processlist |

I killed 524 and 537, started over, and CPU usage went down. The query is still slow. Do I need to bring the database down to single-user mode (I don't know the correct term, so I am just guessing that single-user mode exists) and then do the delete?

Thank you, Raymond
Re: How does one speed up delete-Again
It appears that every time I start a query, the event or the data table gets locked. Could this have any effect on why it takes so long to delete records? Grasping at straws.

Thank you, Raymond

    mysql> show processlist;
    | Id  | User | Host      | db            | Command | Time | State        | Info |
    |   1 | goat | xxx:62693 | snort         | Query   | 1555 | Locked       | INSERT INTO event (sid,cid,signature,timestamp) VALUES ('2', '13305568', '12', '2006-07-10 11:48:22... |
    | 542 | root | localhost | snort         | Query   | 1555 | Sending data | DELETE data FROM data, event WHERE data.cid = event.cid AND event.timestamp < '2006-04-01' |
    | 543 | goat | xxx:62680 | snort_archive | Sleep   | 1555 |              | NULL |
    | 544 | root | localhost | snort         | Query   |    0 | NULL         | show processlist |
    | 545 | root | localhost | snort         | Query   |  308 | Sending data | DELETE data FROM data, event WHERE data.cid = event.cid AND event.timestamp < '2006-03-03' |
    4 rows in set (0.00 sec)

    mysql> show processlist;
    | Id  | User | Host      | db            | Command | Time | State        | Info |
    |   1 | goat | xxx:62693 | snort         | Query   |  333 | Locked       | INSERT INTO event (sid,cid,signature,timestamp) VALUES ('2', '13305782', '12', '2006-07-10 12:16:30... |
    | 543 | goat | xxx:62680 | snort_archive | Sleep   |  333 |              | NULL |
    | 544 | root | localhost | snort         | Query   |    0 | NULL         | show processlist |
    | 545 | root | localhost | snort         | Query   |  333 | Sending data | DELETE data FROM data, event WHERE data.cid = event.cid AND event.timestamp < '2006-03-03' |
    4 rows in set (0.00 sec)

    mysql> show processlist;
    | Id  | User | Host      | db            | Command | Time | State        | Info |
    |   1 | goat | xxx:62693 | snort         | Query   |  337 | Locked       | INSERT INTO event (sid,cid,signature,timestamp) VALUES ('2', '13305782', '12', '2006-07-10 12:16:30... |
    | 543 | goat | xxx:62680 | snort_archive | Sleep   |  337 |              | NULL |
    | 544 | root | localhost | snort         | Query   |    0 | NULL         | show processlist |
    | 545 | root | localhost | snort         | Query   |  337 | Sending data | DELETE data FROM data, event WHERE data.cid = event.cid AND event.timestamp < '2006-03-03' |
    4 rows in set (0.01 sec)
How does one speed up delete-Again
I started the operation below on Friday at 1300hrs EST:

    DELETE data FROM data, event WHERE data.cid = event.cid AND event.timestamp < '2006-05-01'

It is now Sunday 22:00hrs EST and the operation is still running.

Question: Should it take this long to delete 7.5 million records from a 4.5GB table?

Question: Other than writing a script to export all the cids to a file and deleting the records one at a time, so that at least I can delete some records - is there a way to delete records one at a time or in groups, so that if I have to stop the operation the delete will not be rolled back?

Question: Does anyone on the list have experience deleting what I guess is a large number of records from a large table? i.e., how long does it take?

r/Raymond
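MyISAM has no transactions, so nothing is rolled back in any case - but the delete can be broken into bounded chunks, so each statement finishes (and can be stopped) on its own. A sketch of one chunking scheme, sliding a date window instead of deleting months of data in one statement:

```sql
-- Delete one day's worth of matching rows per statement; repeat,
-- advancing the window, until the target date range is covered.
DELETE data FROM data, event
WHERE data.cid = event.cid
  AND event.timestamp >= '2006-04-01'
  AND event.timestamp <  '2006-04-02';
```

Each statement holds its table locks for far less time, which also lets the snort inserts breathe between chunks.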
How does one speed up delete.
Env: FreeBSD 6.0, MySQL 4.1.18
Mem: 1GB(?) - can not tell without rebooting
Disk Avail: 4GB

Problem: the data table is 4.5GB. I created a temporary table sidtemp in the snort database by typing:

    CREATE TEMPORARY TABLE sidtemp SELECT cid FROM event WHERE timestamp < '2006-05-01';
    Query OK, 7501376 rows affected (36.38 sec)
    Records: 7501376  Duplicates: 0  Warnings: 0

Next I want to delete all rows from the data table where data.cid = sidtemp.cid, so I started the following command on Jul 5 at 16:44 GMT:

    DELETE data FROM data JOIN sidtemp ON data.cid = sidtemp.cid

It is now Jul 7 19:56 GMT. I had forgotten how long this delete command takes to run; as I recall it takes 15-20 days on just one database, and I have two (2) databases with the same schema. The databases are live now. Usually, without this delete executing, mysql uses between 0-10% of the CPU; the delete pushes mysql to 98-99% of the CPU. Any ideas on what I can do to speed up the delete?

Thank you, Raymond
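One likely culprit (a guess, but consistent with the timing): CREATE TEMPORARY TABLE ... SELECT copies no indexes, so the join has to scan sidtemp for every row of data. A sketch of the same job with index support (idx_cid is an invented name):

```sql
-- Give the temp table an index on the join column before deleting.
ALTER TABLE sidtemp ADD INDEX (cid);

-- If data.cid itself has no standalone index (e.g. it is only the
-- second column of a composite primary key), add one as well:
ALTER TABLE data ADD INDEX idx_cid (cid);

DELETE data FROM data JOIN sidtemp ON data.cid = sidtemp.cid;
```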
RE: How does one speed up delete.
-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED]
Sent: Friday, July 07, 2006 15:48
To: Jacob, Raymond A Jr
Cc: mysql@lists.mysql.com
Subject: Re: How does one speed up delete.

Raymond, I would expect that adding an index on the 'cid' column in your 'sidtemp' table would help quite a bit. Out of curiosity, why use a temp table in this situation? Why not:

    DELETE data FROM data, event WHERE data.cid = event.cid AND event.timestamp < '2006-05-01'

Dan: I had erroneously assumed that the delete would delete rows from both data and event. I just stopped my previous query and am running the above now. I used a temporary table because I thought I only needed a table to hold, temporarily, the event cids that I wished to delete from the data table. Can you tell me when I should use temporary tables?

I believe the commands below demonstrate that indexes exist on data and event:

    mysql> show index from data;
    | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
    | data  |          0 | PRIMARY  |            1 | sid         | A         |           3 |     NULL | NULL   |      | BTREE      |         |
    | data  |          0 | PRIMARY  |            2 | cid         | A         |     9678480 |     NULL | NULL   |      | BTREE      |         |
    2 rows in set (0.00 sec)

    mysql> show index from event;
    | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
    | event |          0 | PRIMARY  |            1 | sid         | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
    | event |          0 | PRIMARY  |            2 | cid         | A         |    14389173 |     NULL | NULL   |      | BTREE      |         |
    | event |          1 | sig      |            1 | signature   | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
    | event |          1 | time     |            1 | timestamp   | A         |        NULL |     NULL | NULL   |      | BTREE      |         |
    4 rows in set (0.00 sec)

Thank you, raymond

On 7/7/06, Jacob, Raymond A Jr [EMAIL PROTECTED] wrote:
[original post quoted in full; trimmed]
Temporary table ERROR 1109 (42S02) where are temporary tables kept?
I ran the following commands:

    USE snort;
    CREATE TEMPORARY TABLE sidtemp SELECT cid FROM event WHERE timestamp < '2006-05-01';
    ...
    SELECT count(*) FROM sidtemp;
    count(*)
    7501376

    DELETE FROM data WHERE data.cid = sidtemp.cid;
    ERROR 1109 (42S02): Unknown table 'sidtemp' in where clause

SHOW TABLES does not include sidtemp in the list of tables in the snort database, nor would I expect it to.

Question: What database is the table sidtemp in?

r/Raymond
RE: Temporary table ERROR 1109 (42S02) where are temporary tables kept?
Thank you, I was definitely on the wrong track on this one. I annotated your commands to make sure that I understood what they were doing. Are my comments correct?

--- You have the wrong syntax. You can't mention a table in the WHERE clause that wasn't in the FROM clause. Try

    DELETE data FROM data JOIN sidtemp ON data.cid = sidtemp.cid;
    /* deletes all records in data with a cid equal to a cid in sidtemp, but leaves sidtemp unchanged */

or

    DELETE data FROM data, sidtemp WHERE data.cid = sidtemp.cid;
    /* deletes all records in data and sidtemp where the cids are equal */

-- /* Will the USING clause work also? */

or /* looking at the link below: */

    DELETE data FROM data USING data, sidtemp WHERE data.cid = sidtemp.cid

or

    DELETE data FROM data USING data, sidtemp JOIN sidtemp ON data.cid = sidtemp.cid

Is the above correct also?

Thank you, raymond

-- See the manual for details: http://dev.mysql.com/doc/refman/4.1/en/delete.html. Michael
RE: Temporary table ERROR 1109 (42S02) where are temporary tables kept?
Thanks again, raymond

-Original Message-
From: Michael Stassen [mailto:[EMAIL PROTECTED]
Sent: Wednesday, July 05, 2006 14:54
To: Jacob, Raymond A Jr
Cc: mysql@lists.mysql.com
Subject: Re: Temporary table ERROR 1109 (42S02) where are temporary tables kept?

Jacob, Raymond A Jr wrote:
> Thank you, I was definitely on the wrong track on this one. I annotated your commands to make sure that I understood what they were doing. Are my comments correct?
>
>     DELETE data FROM data JOIN sidtemp ON data.cid = sidtemp.cid;
>     /* deletes all records in data with a cid equal to a cid in sidtemp, but leaves sidtemp unchanged */

Correct.

>     DELETE data FROM data, sidtemp WHERE data.cid = sidtemp.cid;
>     /* deletes all records in data and sidtemp where the cids are equal */

No, this only deletes from data. These 2 are supposed to be equivalent. Using this version of the syntax deletes rows from the tables named *before* the FROM. Tables used to determine the matching rows come after the FROM. The first version I gave uses an explicit JOIN, the second uses the implicit comma join. I prefer explicit joins, but I included the implicit join because it seemed to be what you were trying.

> /* Will the USING clause work also? */

It should.

>     DELETE data FROM data USING data, sidtemp WHERE data.cid = sidtemp.cid
> or
>     DELETE data FROM data USING data, sidtemp JOIN sidtemp ON data.cid = sidtemp.cid
>
> Is the above correct also?

Almost. In the USING form, the tables which should lose rows go after FROM, while the tables used to make the selection go after USING. Hence, the query would be

    DELETE FROM data USING data JOIN sidtemp ON data.cid = sidtemp.cid;
or
    DELETE FROM data USING data, sidtemp WHERE data.cid = sidtemp.cid;

In general, the implicit join syntax (comma between tables, with the join condition(s) in the WHERE clause) seems easy to use but frequently leads to trouble. The explicit join syntax (table JOIN table ON condition) is much clearer, which should help avoid mistakes. You should probably read the manual page describing JOIN syntax: http://dev.mysql.com/doc/refman/4.1/en/join.html.

Michael
Solved: Client still reports table full
I forgot I have two databases, snortdb and archive_snortdb, with the same schema. Syslog did not distinguish between the two (2). Both were suffering from the 4GB limit. Once I increased max_rows on both, the error stopped on the client. Brent, thanks again, Raymond

-Original Message-
From: Brent Baisley [mailto:[EMAIL PROTECTED]
Sent: Friday, June 30, 2006 8:49
To: Jacob, Raymond A Jr; mysql@lists.mysql.com
Subject: Re: Client still reports table full

Wow, I'm really sorry about that. Left out a zero. I should stop answering questions before the holiday weekend. I was suggesting a minor change, to 500, to see if that would work. Everything I've read about adjusting for "table full" errors always specifies both. Since only one was changed, MySQL might not have realized things were different(?). Unfortunately, there's not a whole lot of info on this topic, at least that I can find.

I learned a while ago to set these parameters when I create the table, if I think the table will be huge. I've also switched to using merge tables, which makes it a lot easier to archive parts of the table. You might try running the ALTER TABLE command again with both variables specified.

- Original Message -
From: Jacob, Raymond A Jr [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Thursday, June 29, 2006 5:21 PM
Subject: RE: Client still reports table full

[earlier messages in this thread quoted in full; trimmed]
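The fix Brent describes - both options specified in one ALTER - looks like this as a sketch (the numbers are illustrative; for a dynamic-row MyISAM table, MySQL sizes the data file pointer from roughly MAX_ROWS x AVG_ROW_LENGTH):

```sql
-- Raise the table's internal size ceiling by specifying both options
-- together; 500 matches the observed Avg_row_length of 497.
ALTER TABLE data MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 500;
```

On a 4GB-plus table this ALTER rewrites the whole table, so expect it to take a while and to need free disk space.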
Client still reports table full
Yesterday I ran the following command:

    ALTER TABLE data max_rows=1100

Today the client still reported "table is full". I rebooted the client and stopped and started the mysql server. I still get the "table is full" error on the data table. I ran the command:

    echo "SHOW TABLE STATUS LIKE 'data';" | mysql -u user -p snortdb

Below is the output:

    Name: data
    Engine: MyISAM
    Version: 9
    Row_format: Dynamic
    Rows: 8721565
    Avg_row_length: 497
    Data_length: 4335220336
    Max_data_length: 1099511627775
    Index_length: 127599616
    Data_free: 0
    Auto_increment: NULL
    Create_time: 2006-06-28 20:54:55
    Update_time: 2006-06-29 18:02:32
    Check_time: NULL
    Collation: latin1_swedish_ci
    Checksum: NULL
    Create_options: max_rows=1100

df shows that /var, the partition with the database, has enough room:

    Filesystem     1K-blocks     Used    Avail Capacity  Mounted on
    /dev/amrd0s1f   27792614 18449326  7119880    72%    /var
RE: Client still reports table full
If I understand the results from SHOW TABLE STATUS LIKE 'data', my Avg_row_length = 497. Why would decreasing it to 50 have a positive effect? I would assume I should increase it?

Thank you/Raymond

-Original Message-
From: Brent Baisley [mailto:[EMAIL PROTECTED]
Sent: Thursday, June 29, 2006 15:53
To: Jacob, Raymond A Jr; mysql@lists.mysql.com
Subject: Re: Client still reports table full

Oops, left out an important part. You should change the Avg_row_length also:

    ALTER TABLE AVG_ROW_LENGTH = 50

You need to specify an average row length if you have dynamic-length fields in the table (blob, text, etc.). Also, perhaps a silly question which you may have answered earlier, but does your file system allow files larger than 4GB? Sometimes you have to specifically enable that feature in a file system. If that doesn't work, or you're limited to 4GB files, you may need to switch to using a merge table.

- Original Message -
From: Jacob, Raymond A Jr [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Thursday, June 29, 2006 1:37 PM
Subject: Client still reports table full

[original post quoted in full; trimmed]
RE: Client still reports table full
I will try REPAIR TABLE data. data.MYD is 4.1G so I am over the 4GB limit and growing. I also tried: CHECK TABLE data QUICK; CHECK TABLE data MEDIUM; CHECK TABLE data EXTENDED; They all showed OK. Thank you again, raymond -Original Message- From: Dan Buettner [mailto:[EMAIL PROTECTED] Sent: Thursday, June 29, 2006 17:55 To: Jacob, Raymond A Jr Cc: mysql@lists.mysql.com Subject: Re: Client still reports table full I'm not sure that avg_row_length has a bearing on your problem right now ... the output of show table status you posted earlier shows that you have: current data length: 4335220336 maximum data length: 1099511627775 data_free: 0 (oddly) data_free should be something like 1099511627775 - 4335220336 = 1095176407439 I wonder if the table was marked as being in an error status when it filled up earlier and now needs a REPAIR operation (see my earlier post). Alternatively, Brent's question about a filesystem limit seems pertinent. What OS and filesystem are you using? Though I would think your error would change from a MySQL 'table is full' error to some kind of OS-related error... Dan On 6/29/06, Jacob, Raymond A Jr [EMAIL PROTECTED] wrote: If I understand the results from SHOW TABLE STATUS LIKE 'data'; My avg_row_length = 497 Why would descreasing it to 50 have a positive Effect. I would assume I should increase it? Thank you/Raymond -Original Message- From: Brent Baisley [mailto:[EMAIL PROTECTED] Sent: Thursday, June 29, 2006 15:53 To: Jacob, Raymond A Jr; mysql@lists.mysql.com Subject: Re: Client still reports table full Oops, left out an important part. You should change the Avg_row_length also. ALTER TABLE AVG_ROW_LENGTH = 50 You need to specify an average row length if you have dynamic length fields in the table (blob, text, etc.). Also, perhaps a silly question which you may have answered earlier, but does you file system allow files larger than 4GB? Sometimes you have to specifically enable that feature in a file system. 
If that doesn't work, or you're limited to 4GB files, you may need to switch to using a merge table.

- Original Message - From: Jacob, Raymond A Jr [EMAIL PROTECTED] To: mysql@lists.mysql.com Sent: Thursday, June 29, 2006 1:37 PM Subject: Client still reports table full

Yesterday I ran the following command: ALTER TABLE data max_rows=1100. Today the client still reported table is full. I rebooted the client and stopped and started the mysql server. I still get the table is full error on the data table. I ran the command:

echo "SHOW TABLE STATUS LIKE 'data';" | mysql -u user -p snortdb

Below is the output:

Name: data
Engine: MyISAM
Version: 9
Row_format: Dynamic
Rows: 8721565
Avg_row_length: 497
Data_length: 4335220336
Max_data_length: 1099511627775
Index_length: 127599616
Data_free: 0
Auto_increment: NULL
Create_time: 2006-06-28 20:54:55
Update_time: 2006-06-29 18:02:32
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options: max_rows=1100
Comment:

df shows /var, the partition with the database, has enough room:

Filesystem     1K-blocks      Used    Avail  Capacity  Mounted on
/dev/amrd0s1f   27792614  18449326  7119880       72%  /var

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
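The advice in this thread combines into a short statement sequence. A sketch, assuming the table is named `data` as above; the MAX_ROWS and AVG_ROW_LENGTH values here are illustrative, not from the original posts:

```sql
-- MyISAM sizes its internal row pointer from MAX_ROWS * AVG_ROW_LENGTH.
-- Raising them rebuilds the table with a wider pointer, lifting the 4GB cap.
ALTER TABLE data
    MAX_ROWS = 1000000000,
    AVG_ROW_LENGTH = 500;

-- If the table was flagged as crashed when it filled up, rebuild it:
REPAIR TABLE data;
```

Note that the ALTER TABLE rewrites the whole .MYD file, so it needs free disk space roughly equal to the table's current size and can take a long time on a 4GB table.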
Sorry for the dumb question how do I fix table is full?
Environment: FreeBSD 6.0. MySQL: mysql Ver 14.7 Distrib 4.1.18, for portbld-freebsd6.0 (i386) using 5.0. On the client, in /var/log/messages, I get the errors: kernel: 9643D22706C and database: mysql_error: The table 'data' is full SQL=INSERT INTO data (sid,cid,data_payload) VALUES. I have 7Gig free on the database server. I assumed tables became full when I ran out of disk space. I guess I was wrong. On the server, /var/log/messages is clear except for entries from when I ran out of disk space on June 3, 2006. Please tell me what I need to do to stop the client from generating error messages and sending data to the server? Thank you, raymond
RE: Sorry for the dumb question how do I fix table is full?
mysql> SHOW TABLE STATUS LIKE 'data';

Name: data
Engine: MyISAM
Version: 9
Row_format: Dynamic
Rows: 8593198
Avg_row_length: 499
Data_length: 4294967280
Max_data_length: 4294967295
Index_length: 115791872
Data_free: 0
Auto_increment: NULL
Create_time: 2006-03-02 18:47:02
Update_time: 2006-06-09 21:08:48
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment:

1 row in set (0.07 sec)

-Original Message- From: Dan Buettner [mailto:[EMAIL PROTECTED] Sent: Wednesday, June 28, 2006 12:43 To: Jacob, Raymond A Jr Cc: mysql@lists.mysql.com Subject: Re: Sorry for the dumb question how do I fix table is full?

Raymond, can you post the output of SHOW TABLE STATUS LIKE 'data'; ? That should show how big your table is and how big it can be... Dan

On 6/28/06, Jacob, Raymond A Jr [EMAIL PROTECTED] wrote: Environment: FreeBSD 6.0. MySQL: mysql Ver 14.7 Distrib 4.1.18, for portbld-freebsd6.0 (i386) using 5.0. On the client, in /var/log/messages, I get the errors: kernel: 9643D22706C and database: mysql_error: The table 'data' is full SQL=INSERT INTO data (sid,cid,data_payload) VALUES. I have 7Gig free on the database server. I assumed tables became full when I ran out of disk space. I guess I was wrong. On the server, /var/log/messages is clear except for entries from when I ran out of disk space on June 3, 2006. Please tell me what I need to do to stop the client from generating error messages and sending data to the server? Thank you, raymond
RE: (thank you) Sorry for the dumb question how do I fix table is full?
thank you, raymond

-Original Message- From: Dan Buettner [mailto:[EMAIL PROTECTED] Sent: Wednesday, June 28, 2006 16:07 To: Jacob, Raymond A Jr; mysql@lists.mysql.com Subject: Re: Sorry for the dumb question how do I fix table is full?

I agree it's not very clear. I think Brent's example is perfect: ALTER TABLE data max_rows=1 (or whatever number you believe is appropriate for your table) Dan

On 6/28/06, Jacob, Raymond A Jr [EMAIL PROTECTED] wrote: The documentation does not seem very clear, at least to me, on how to increase max_rows. Could you give me an example? Thank you raymond

-Original Message- From: Dan Buettner [mailto:[EMAIL PROTECTED] Sent: Wednesday, June 28, 2006 15:38 To: Jacob, Raymond A Jr Cc: mysql@lists.mysql.com Subject: Re: Sorry for the dumb question how do I fix table is full?

Your table has just about maxed out - you're using 4,294,967,280 bytes out of a maximum of 4,294,967,295 bytes (15 bytes free). You need to tell mysql to expand this table, with an alter table command to increase max rows: http://dev.mysql.com/doc/refman/5.0/en/table-size.html Dan

On 6/28/06, Jacob, Raymond A Jr [EMAIL PROTECTED] wrote:

mysql> SHOW TABLE STATUS LIKE 'data';

Name: data
Engine: MyISAM
Version: 9
Row_format: Dynamic
Rows: 8593198
Avg_row_length: 499
Data_length: 4294967280
Max_data_length: 4294967295
Index_length: 115791872
Data_free: 0
Auto_increment: NULL
Create_time: 2006-03-02 18:47:02
Update_time: 2006-06-09 21:08:48
Check_time: NULL
Collation: latin1_swedish_ci
Checksum: NULL
Create_options:
Comment:

1 row in set (0.07 sec)

-Original Message-
From: Dan Buettner [mailto:[EMAIL PROTECTED] Sent: Wednesday, June 28, 2006 12:43 To: Jacob, Raymond A Jr Cc: mysql@lists.mysql.com Subject: Re: Sorry for the dumb question how do I fix table is full?

Raymond, can you post the output of SHOW TABLE STATUS LIKE 'data'; ? That should show how big your table is and how big it can be... Dan

On 6/28/06, Jacob, Raymond A Jr [EMAIL PROTECTED] wrote: Environment: FreeBSD 6.0. MySQL: mysql Ver 14.7 Distrib 4.1.18, for portbld-freebsd6.0 (i386) using 5.0. On the client, in /var/log/messages, I get the errors: kernel: 9643D22706C and database: mysql_error: The table 'data' is full SQL=INSERT INTO data (sid,cid,data_payload) VALUES. I have 7Gig free on the database server. I assumed tables became full when I ran out of disk space. I guess I was wrong. On the server, /var/log/messages is clear except for entries from when I ran out of disk space on June 3, 2006. Please tell me what I need to do to stop the client from generating error messages
Can I delete old host-bin.xxxxx files in order to free up some space?
I have not been monitoring my databases, and now I am at 100% disk usage. thank you, Raymond
RE: Can I delete old host-bin.xxxxx files in order to free up some space?
-Original Message- From: Gary Richardson [mailto:[EMAIL PROTECTED] Sent: Saturday, June 03, 2006 14:10 To: Jacob, Raymond A Jr Cc: mysql@lists.mysql.com Subject: Re: Can I delete old host-bin.x files in order to free up some space?

Yes you can. Be sure not to delete the one the database is currently writing to. Are you replicating your database? [Jacob, Raymond A Jr] No. Are you using them for point in time restores? [Jacob, Raymond A Jr] No. If you're replicating, make sure your replicas have all caught up on the files you're deleting.

[Jacob, Raymond A Jr] To remove the excess binary update logs and start again, should I run the SQL command RESET MASTER? Or, if I don't need to reconstruct or restore a table, may I just delete them or comment log-bin out of my.cnf and restart mysql? thank you, Raymond
RE: Can I delete old host-bin.xxxxx files in order to free up some space?
Thank you, that did the trick. Raymond

-Original Message- From: Gary Richardson [mailto:[EMAIL PROTECTED] Sent: Saturday, June 03, 2006 14:39 To: Jacob, Raymond A Jr Cc: mysql@lists.mysql.com Subject: Re: Can I delete old host-bin.x files in order to free up some space?

I believe the command is PURGE MASTER LOGS TO 'host-bin.xx' where x is between 0-9 or something like that.

On 6/3/06, Jacob, Raymond A Jr [EMAIL PROTECTED] wrote: -Original Message- From: Gary Richardson [mailto: [EMAIL PROTECTED] Sent: Saturday, June 03, 2006 14:10 To: Jacob, Raymond A Jr Cc: mysql@lists.mysql.com Subject: Re: Can I delete old host-bin.x files in order to free up some space?

Yes you can. Be sure not to delete the one the database is currently writing to. Are you replicating your database? [Jacob, Raymond A Jr] No. Are you using them for point in time restores? [Jacob, Raymond A Jr] No. If you're replicating, make sure your replicas have all caught up on the files you're deleting. [Jacob, Raymond A Jr] To remove the excess binary update logs and start again, should I run the SQL command RESET MASTER? Or, if I don't need to reconstruct or restore a table, may I just delete them or comment log-bin out of my.cnf and restart mysql? thank you, Raymond
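The two approaches discussed in this thread look like the following. A sketch; the log file name is hypothetical (the numeric suffix format varies by MySQL version), and RESET MASTER is only safe when, as here, no replica or point-in-time restore depends on the logs:

```sql
-- See which binary logs exist and which one the server is writing to:
SHOW MASTER LOGS;

-- Remove all logs up to, but not including, the named file:
PURGE MASTER LOGS TO 'host-bin.000123';

-- Or discard every binary log and restart numbering from scratch:
RESET MASTER;
```

PURGE MASTER LOGS is the safer habit, since it refuses to touch the log currently in use; deleting the files by hand leaves the server's log index out of sync.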
mysql performance problems.
After 23 days of running mysql, I have a 3GB database. When I use an application called base (v.1.2.2), a web-based intrusion detection analysis console, the mysqld utilization shoots up to over 90% and stays there until the application times out or is terminated. Question: Have I made some error in configuration? When I don't run the application base, mysqld utilization is between 30-50%. Question: What hardware do I need to speed up queries? Question: How do I determine if the query is the problem?

Data: I used my-large.cnf as the basis of my.cnf.

Hardware and OS info: ... FreeBSD 6.0-RELEASE-p5 #0: ... CPU: Intel Pentium III (997.46-MHz 686-class CPU) Origin = GenuineIntel Id = 0x68a Stepping = 10 Features=0x383fbff FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE real memory = 1073676288 (1023 MB) avail memory = 1041784832 (993 MB)

Observations: Disk space used: du -am /var/db/mysql | sort -nr | head -20

5259  mysql/
3055  mysql/snort
2184  mysql/snort_archive
1546  mysql/snort_archive/data.MYD
1546  mysql/snort/data.MYD
560   mysql/snort/acid_event.MYI
311   mysql/snort/acid_event.MYD
132   mysql/snort_archive/event.MYI
132   mysql/snort/event.MYI
116   mysql/snort_archive/iphdr.MYI
116   mysql/snort/iphdr.MYI
112   mysql/snort_archive/iphdr.MYD
112   mysql/snort/iphdr.MYD
74    mysql/snort_archive/event.MYD
74    mysql/snort/event.MYD
42    mysql/snort_archive/data.MYI
42    mysql/snort/data.MYI
40    mysql/snort_archive/icmphdr.MYI
40    mysql/snort/icmphdr.MYI
35    mysql/snort_archive/icmphdr.MYD
...

snort is 3GB; snort_archive is 2GB (the snort_archive acid and base tables have not been built; that is why snort_archive is smaller). When the application searches the database, the mysqld utilization goes up to over 90% until the application times out.
top

last pid: 44263;  load averages: 0.95, 0.89, 0.76    up 25+23:49:44  16:07:17
49 processes: 2 running, 47 sleeping
Mem: 173M Active, 608M Inact, 186M Wired, 29M Cache, 111M Buf, 1660K Free
Swap: 2048M Total, 156K Used, 2048M Free

  PID USERNAME  THR PRI NICE   SIZE     RES  STATE     TIME   WCPU  COMMAND
31890 mysql      15  20    0   103M  79032K  kserel  768:38  93.46% mysqld
49138 www         1   4    0 17432K  12848K  accept    0:23   0.00% httpd
46759 www         1  20    0 16584K  12084K  lockf     0:21   0.00% httpd
46764 www         1   4    0 16632K  12072K  accept    0:21   0.00% httpd
46763 www         1   4    0 16580K  12012K  accept    0:20   0.00% httpd
46760 www         1   4    0 17452K  12872K  accept    0:19   0.00% httpd
46762 www         1   4    0 16568K  12000K  accept    0:19   0.00% httpd
46761 www         1   4    0 16608K  12088K  sbwait    0:17   0.00% httpd
68456 www         1   4    0 16572K  11980K  accept    0:17   0.00% httpd
68457 www         1   4    0 16724K  11824K  accept    0:17   0.00% httpd
68458 www         1   4    0 16980K  11920K  accept    0:17   0.00% httpd

Processes that run in the background: I run an update process in the background in the hope that, if I process the alerts from the snort table on a regular basis, I won't have to process a large number (44,000) of alerts first thing in the morning. The update process inserts records into the acid table that result from the join of certain fields from the snort tables.
(Schema at http://www.andrew.cmu.edu/user/rdanyliw/snort/acid_db_er_v102.html )

rabid# cat /var/log/base-update.2006-03-28.log
2006-03-28, 00:05:00, Added 3808 alert(s) to the Alert cache
2006-03-28, 01:05:00, Added 5855 alert(s) to the Alert cache
2006-03-28, 02:05:00, Added 4096 alert(s) to the Alert cache
2006-03-28, 03:05:00, Added 4473 alert(s) to the Alert cache
2006-03-28, 04:05:00, Added 4378 alert(s) to the Alert cache
2006-03-28, 05:05:00, Added 4087 alert(s) to the Alert cache
2006-03-28, 06:05:00, Added 5163 alert(s) to the Alert cache
2006-03-28, 07:05:00, Added 4789 alert(s) to the Alert cache
2006-03-28, 08:05:00, Added 4411 alert(s) to the Alert cache
2006-03-28, 09:05:00, Added 4830 alert(s) to the Alert cache
2006-03-28, 10:05:00, Added 4739 alert(s) to the Alert cache
2006-03-28, 11:05:00, Added 5360 alert(s) to the Alert cache
2006-03-28, 12:05:00, Added 7305 alert(s) to the Alert cache
2006-03-28, 13:05:00, Added 8481 alert(s) to the Alert cache
2006-03-28, 14:05:00, Added 60731 alert(s) to the Alert cache
2006-03-28, 15:05:00, Added 44328 alert(s) to the Alert cache
2006-03-28, 15:50:00, Added 13742 alert(s) to the Alert cache
2006-03-28, 15:55:00, Added 607 alert(s) to the Alert cache
2006-03-28, 16:00:00, Added 938 alert(s) to the Alert cache
2006-03-28, 16:05:00, Added 1348 alert(s) to the Alert cache
2006-03-28, 16:10:02, Added 687 alert(s) to the Alert cache
2006-03-28, 16:15:00, Added 1168 alert(s) to the Alert cache
2006-03-28, 16:20:00, Added 1760 alert(s) to the Alert cache
2006-03-28, 16:25:00, Added 814 alert(s) to the Alert cache
2006-03-28, 16:30:01, Added 617 alert(s) to the Alert cache
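On the question "How do I determine if the query is the problem": the usual first steps are to look at what the server is executing while utilization is high and to ask the optimizer how it runs a suspect query. A sketch; the column names follow the ACID schema linked above, and the example WHERE clause is illustrative:

```sql
-- See the statements mysqld is busy with while BASE has it pinned at 90%:
SHOW FULL PROCESSLIST;

-- Ask the optimizer how a suspect BASE query executes.
-- "type: ALL" with a large row estimate means a full table scan,
-- i.e. a missing or unused index:
EXPLAIN SELECT sid, cid, timestamp
FROM acid_event
WHERE timestamp >= '2006-03-28';
```

Enabling the slow query log (log-slow-queries and long_query_time in my.cnf on a 4.1 server) will also collect the offending queries over time instead of requiring you to catch them live.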
Can I change the password of a user on the server without interupting the user's existing connections?
I have three (3) instances of an application inserting data into a MySQL (4.x) database. I have to change the password. I would like to know if I can change the application's password on the server without disrupting existing connections, and then modify the password in the application's startup file, so that when the application is restarted automatically at night it logs in to the database with the new password, with minimal disruption of service. Thank you, Raymond
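MySQL checks credentials only when a connection is established, so changing a password leaves established sessions untouched; only new logins see the new value. A sketch, using a hypothetical account name 'app'@'%':

```sql
-- Existing connections keep running; the new password applies
-- only to connections made after this statement:
SET PASSWORD FOR 'app'@'%' = PASSWORD('new_secret');
```

So the plan in the question works: change the password on the server during the day, update the startup file, and the nightly restart picks up the new credential with no window where the running instances are cut off.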
syntax errors with ENGINE=MYISAM... and NOT NULL default CURRENT_TIMESTAMP ...
running: mysql Ver 12.22 Distrib 4.0.23, for Win95/Win98 (i32), downloaded from www.devside.net/web/servers/free/download. When I run a script with the following commands I get SQL syntax errors: Create table 'test' ( 'helper' varchar(22) ) ENGINE=MYISAM DEFAULT CHARSET=latin1; I get an SQL error telling me to check the syntax for this version. I modified the line to ENGINE=MYISAM CHARSET=latin1; and got no SQL errors. The next syntax error I get is: Create table 'testime' ( 'teatime' timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP, 'val' char(22) NOT Null); I modified the line to Create table 'testime' ( 'teatime' timestamp NOT NULL default 'CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP', 'val' char(22) NOT Null); and got no SQL errors. Question 1: How do I check my syntax against the version and distribution that I am running? Question 2: I thought ON UPDATE was a reference definition and required a reference to a parent table, and I did think CURRENT_TIMESTAMP was a reference action? Thank you, Raymond
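On the version question: the server can report its own version, and both failing constructs postdate 4.0 (DEFAULT CHARSET in CREATE TABLE and TIMESTAMP ... ON UPDATE CURRENT_TIMESTAMP were introduced in the 4.1 series, so they fail on the 4.0.23 build above). A sketch; note that identifiers are quoted with backticks, not the single quotes used in the script:

```sql
-- Check which server version you are actually talking to:
SELECT VERSION();

-- Valid from MySQL 4.1 on; a syntax error on 4.0:
CREATE TABLE `testime` (
  `teatime` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
            ON UPDATE CURRENT_TIMESTAMP,
  `val` CHAR(22) NOT NULL
);
```

ON UPDATE here is a column attribute (auto-set the timestamp on row update), not the ON UPDATE referential action of a FOREIGN KEY clause; the two just share a keyword.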
Does such a JOIN exist that can create a pivot table?
Question: I frequently would like to summarize the results of my query in a hierarchical layout, also known as a pivot table. Here is an example of the output I would like; NULL will be printed as a space when output.

sum of broken | source of    | qty       | reseller of  | qty       | customer with | qty by
bolts         | broken bolts | regional  | broken bolts | reseller  | broken bolts  | customer
----------------------------------------------------------------------------------------------
100           | NULL         | NULL      | NULL         | NULL      | NULL          | NULL
(100/NULL)    | US           | 75        | NULL         | NULL      | NULL          | NULL
(100/NULL)    | (US/NULL)    | (75/NULL) | ACME         | 35        | NULL          | NULL
(100/NULL)    | (US/NULL)    | (75/NULL) | (ACME/NULL)  | (35/NULL) | Barney Rubble | 20
(100/NULL)    | US           | 75        | NULL         | NULL      | NULL          | NULL
(100/NULL)    | (US/NULL)    | (75/NULL) | ACME         | 35        | NULL          | NULL
(100/NULL)    | (US/NULL)    | (75/NULL) | (ACME/NULL)  | (35/NULL) | Betty Rubble  | 5
(100/NULL)    | US           | 75        | NULL         | NULL      | NULL          | NULL
(100/NULL)    | (US/NULL)    | (75/NULL) | ABLE         | 25        | NULL          | NULL
(100/NULL)    | (US/NULL)    | (75/NULL) | (ABLE/NULL)  | (25/NULL) | Sponge Bobble | 20
(100/NULL)    | MEXICO       | 15        | NULL         | NULL      | NULL          | NULL
(100/NULL)    | (MEX/NULL)   | (15/NULL) | TIPPY        | 12        | NULL          | NULL
(100/NULL)    | (MEX/NULL)   | (15/NULL) | (ACME/NULL)  | (12/NULL) | Juan Pepe     | 7

Or, collapsed:

100  *   *   *     *   *              *
*    US  75  *     *   *              *
*    *   *   ACME  35  *              *
*    *   *   *     *   Barney Rubble  20

Where * represents NULL or a primary key. How does one build a pivot table?
from tables such as:

factory_parts table :: { part_no, plant, qty_manufactured, plant_name }
reseller_parts table :: { part_no, plant, qty_received, cost, reseller_name, reseller_id }
customer_parts table :: { reseller_id, part_no, plant, qty_sold, qty_recvd, customer_id, customer_name }

Oops, now the light bulb comes on. I would do:

select factory_parts.plant_name, reseller_parts.reseller_name, customer_parts.customer_name, customer_parts.qty_recvd
from factory_parts, reseller_parts, customer_parts
where customer_parts.part_no = 'broken_bolt'
AND ( customer_parts.part_no = reseller_parts.part_no
AND customer_parts.part_no = factory_parts.part_no );

Now the question becomes: how does one construct the aggregate columns representing the sum of bolts produced by the company, made at the plant, shipped to the reseller, and sold to the customer, and then join those aggregate columns? Any suggestions? Thank you, Raymond
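The aggregate columns asked for here are usually built with grouped aggregation rather than a special join: GROUP BY ... WITH ROLLUP (available from MySQL 4.1.1) produces exactly the extra subtotal rows with NULLs in the lower-level columns that the pivot sketch above shows. A sketch against the poster's tables; the join conditions and column names are taken from the query above and are illustrative:

```sql
-- One row per plant/reseller/customer with the summed quantity;
-- WITH ROLLUP appends per-reseller, per-plant, and grand-total rows,
-- with NULL in the columns that have been rolled up.
SELECT f.plant_name,
       r.reseller_name,
       c.customer_name,
       SUM(c.qty_recvd) AS broken_bolts
FROM factory_parts f
JOIN reseller_parts r ON r.part_no = f.part_no
JOIN customer_parts c ON c.part_no = r.part_no
                     AND c.reseller_id = r.reseller_id
WHERE c.part_no = 'broken_bolt'
GROUP BY f.plant_name, r.reseller_name, c.customer_name
WITH ROLLUP;
```

Turning those rows into side-by-side pivot columns (one column per plant or reseller) is then done with conditional aggregation, e.g. SUM(IF(plant_name = 'US', qty_recvd, 0)), one such expression per wanted column.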
Thank you-regarding: Does such a JOIN exist that can create a pivot table?
Does such a JOIN exist that can create a pivot table? Thank you: 182361 by: Dan Bolser 182362 by: Peter Brawley Now I must go into my cave and meditate on these queries: Ommm,Ommm,...Ommm :-) raymond
Merge tables,trigger support, and table rotation
How does one insert records, based on some kind of metadata or key, into a particular table belonging to a merge table? I have a network logging program and would like to partition the table so that analysts can query certain tables belonging to the merge table, instead of the whole table, to correlate events.

/* merge table section of email */ Based on my limited research, I thought that I would create a merged table such as: /* from the web site */

mysql> CREATE TABLE t1 (
    ->   a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    ->   message CHAR(20));
mysql> CREATE TABLE t2 (
    ->   a INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    ->   message CHAR(20));
mysql> INSERT INTO t1 (message) VALUES ('Testing'),('table'),('t1');
mysql> INSERT INTO t2 (message) VALUES ('Testing'),('table'),('t2');
mysql> CREATE TABLE total (
    ->   a INT NOT NULL AUTO_INCREMENT,
    ->   message CHAR(20), INDEX(a))
    ->   TYPE=MERGE UNION=(t1,t2) INSERT_METHOD=LAST;

/* trigger section of email */ Question(s): I will assume that when 5.02 becomes stable and triggers are available, then:

mysql> CREATE TRIGGER ins_sum INSERT ON total
         IF message='happy ending' THEN
           INSERT INTO t1 (message) VALUES ('goldie locks'),('mama bear'),('eat her up')
         ELSE
           INSERT INTO t2 (message) VALUES ('KILLED Mama bear'),('papa bear and baby bear'),('in circus');

Will triggers support insert, update, and delete verbs (actions) on a merged table?

/* table rotation */ Second question: As tables grow, can a new table be created and automatically added to the merged table at a particular date or time, or when the size of the table reaches a certain limit? Which leads to the third question: if MyISAM tables can be created dynamically based on size of table or time of day, can the oldest table be compressed with myisampack automatically? Thank you, Raymond
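On the rotation question: the table list of a MERGE table can be changed at any time with ALTER TABLE, so a scheduled job (cron plus the mysql client, since the server itself has no scheduler in this era) can create the new period's table and splice it in. A sketch with hypothetical daily table names; CREATE TABLE ... LIKE needs MySQL 4.1 or later:

```sql
-- Create the next day's table with the same structure as an existing one:
CREATE TABLE t_2006_09_07 LIKE t_2006_09_06;

-- Repoint the merge table. With INSERT_METHOD=FIRST, new rows land in
-- the first table of the UNION list, so putting the newest table first
-- gives the "insert into the latest table" behavior asked about:
ALTER TABLE total
    UNION = (t_2006_09_07, t_2006_09_06, t_2006_09_05)
    INSERT_METHOD = FIRST;
```

Compressing a table rotated out of the UNION has to happen outside SQL: run myisampack on its .MYI file and then myisamchk -rq to rebuild the index, from the same cron job.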
Is there still any reason why Stored Procedure calls are not supported in MySql?
Firstly, I do apologize for my ignorance in advance. I read the message regarding PRODUCT() and thought to myself: a perl interpreter can be run in a C program, so someone must have written a user-defined function that can execute perl code. Googling, I found http://software.tangent.org/download/myperl-0.8.tar.gz at http://tangent.org. Looking at the readme file, myperl is a mysql user-defined function that executes perl code contained in a table in the select statement. The command line version of mysql also has the ability to start an editor. Theoretically, one could:

1. create a table: Create table user.sparky.functions ( name TEXT NOT NULL, code TEXT NOT NULL );
2. \e myfunc.mypl
3. LOAD DATA INFILE myfunc.mypl INTO TABLE user.sparky.functions;
4. /* assume no errors */ Select myfunc(code, column_data) from user.sparky.functions, data_table where (code = myperl_code) and (column_data = what I am looking for);

If, and I stress if, my assumptions are valid, then stored procedure calls could be written in any interpreted language with an interpreter that can be linked into mysql. Of course, from a security standpoint this could be dangerous, but a chrooted, brain-dead interpreter with limited functionality and limits on the amount of disk space and memory that can be used should solve those problems. One interesting consequence of the development of infrastructure to support stored procedure calls is that IDE developers that support mysql would have a new market to potentially exploit in a corporate environment, i.e. run code and get metrics such as how long it took to process data (Hi, Lo, Avg). In conclusion, I apologize for my ignorance again; however, I must still ask: Is there still any reason why stored procedure calls are not supported in MySQL?
Thank you, Raymond -- Date: Mon, 30 Aug 2004 11:29:35 -0400 To: Thomas Schager [EMAIL PROTECTED], [EMAIL PROTECTED] From: Sapenov [EMAIL PROTECTED] Subject: Re: PRODUCT() function - calculating the product of grouped numeric values Message-ID: [EMAIL PROTECTED] Hello Thomas, You probably may consider to write a UDF to do that. Here is a list of available UDF extensions for 4.0.20 - http://mysql-udf.sourceforge.net/ Regards, Khazret Sapenov - http://tangent.org myperl (default) 0.8 2004-01-12 07:58:51 About myperl allows you to execute Perl from inside of MySQL. It can either be stored in a row, or it can be specified in your SQL statement. You also can select other columns which will be passed to Perl. Changes * Added support for the perl interpreter to persist for requests * re-aranged package to meet CPAN spec. * Fix for myperlgroup (it was calling itself too frequently) TAR/GZ http://software.tangent.org/download/myperl-0.8.tar.gz -- Date: Mon, 30 Aug 2004 09:48:55 -0400 To: Per Lonnborg [EMAIL PROTECTED] From: Michael Stassen [EMAIL PROTECTED] CC: [EMAIL PROTECTED] Subject: Re: SELECT, ORDER one DISTINCT kolumn? Message-ID: [EMAIL PROTECTED] Per Lonnborg wrote: Hi, Here´s a question from a newbie: I have a database containing ET-times for drivers and their cars. On the web startpage (http://www.lanechoice.net/~pelle/br/) i have made a Top-10 list containing the 10 fastest times. I'll guess column names based on the web page. I have 2 problems: 1. How to select just ONE (the fastest of course) time per uniqe drivercar? SELECT namm, fabrikat, MIN(tid) AS fastest_tid FROM ettan GROUP BY namm, fabrikat ORDER BY fastest_tid DESC LIMIT 10; 2.IF the driver/car has run some race faster than 7,5 secs, he/she is disqualified and should not be on the Top-10 at all. 
SELECT namm, fabrikat, MIN(tid) AS fastest_tid FROM ettan GROUP BY namm, fabrikat HAVING fastest_tid >= 7.5 ORDER BY fastest_tid LIMIT 10; The query I use right now is: Select * from ettan where tid >= 7.5 ORDER BY tid LIMIT 10 /Per Stockholm, Sweden Note that I only selected aggregate values and grouped columns. If you need the values of non-grouped columns (ort, datum, 60fot, hast) which correspond to the minimum times, you'll need to do a bit more. The manual describes 3 methods: http://dev.mysql.com/doc/mysql/en/example-Maximum-column-group-row.html. Michael
RE: Is there still any reason why Stored Procedure calls are not supported in MySql?
My bad. I was all into 4.xx, and since most of www.mysql.com references 4.xx, I never thought about 5.0. I apologize for disturbing the list. thank you, raymond

-Original Message- From: V. M. Brasseur [mailto:[EMAIL PROTECTED] Sent: Tuesday, August 31, 2004 18:54 To: Jacob, Raymond A Jr Cc: [EMAIL PROTECTED] Subject: Re: Is there still any reason why Stored Procedure calls are not supported in MySql?

Stored procedures are in the works for MySQL 5.0: http://dev.mysql.com/doc/mysql/en/TODO_MySQL_5.0.html The manual is your friend. Cheers, --V

Jacob, Raymond A Jr wrote: Firstly, I do apologize for my ignorance in advance. I read the message regarding PRODUCT() and thought to myself: a perl interpreter can be run in a C program, so someone must have written a user-defined function that can execute perl code. Googling, I found http://software.tangent.org/download/myperl-0.8.tar.gz at http://tangent.org. Looking at the readme file, myperl is a mysql user-defined function that executes perl code contained in a table in the select statement. The command line version of mysql also has the ability to start an editor. Theoretically, one could: 1. create a table: Create table user.sparky.functions ( name TEXT NOT NULL, code TEXT NOT NULL ); 2. \e myfunc.mypl 3. LOAD DATA INFILE myfunc.mypl INTO TABLE user.sparky.functions; 4. /* assume no errors */ Select myfunc(code, column_data) from user.sparky.functions, data_table where (code = myperl_code) and (column_data = what I am looking for); If, and I stress if, my assumptions are valid, then stored procedure calls could be written in any interpreted language with an interpreter that can be linked into mysql. Of course, from a security standpoint this could be dangerous, but a chrooted, brain-dead interpreter with limited functionality and limits on the amount of disk space and memory that can be used should solve those problems.
archive db daily- Rough Draft
-Original Message- From: Jacob, Raymond A Jr Sent: Monday, August 02, 2004 21:17 To: [EMAIL PROTECTED] Subject: archive db daily - Rough Draft

I am new to databases and would appreciate a second set of eyes looking over my SQL script. The script takes the existing snort database and copies it to a database named snort-2004-08-02, then copies over a day's worth of information to the database snort-2004-08-02. Once the database has been copied over, mysqlhotcopy is used to copy the database to another directory. The database is then archived and compressed to a file with a .tgz extension. After a week the database is removed. I assume that I can find the age of the snort-2004-08-02 database from mysql without checking a table in the snort-2004-08-02 database. When I determine that the database is over 7 days old, I will drop the database. thank you, Raymond

---copySnort.sql---
# copySnort.sql: copy snortdb database to snort-yyyy-mm-dd
CREATE DATABASE snort-2004-08-02;
USE snort-2004-08-02;
# Not sure if this is even necessary. Database may use my default information.
GRANT INSERT,SELECT ON `snort-2004-08-02`.* TO [EMAIL PROTECTED];

CREATE TABLE acid_event (
  sid          INT UNSIGNED NOT NULL,
  cid          INT UNSIGNED NOT NULL,
  signature    INT UNSIGNED NOT NULL,
  sig_name     VARCHAR(255),
  sig_class_id INT UNSIGNED,
  sig_priority INT UNSIGNED,
  timestamp    DATETIME NOT NULL,
  ip_src       INT UNSIGNED,
  ip_dst       INT UNSIGNED,
  ip_proto     INT,
  layer4_sport INT UNSIGNED,
  layer4_dport INT UNSIGNED,
  PRIMARY KEY (sid, cid),
  INDEX (signature), INDEX (sig_name), INDEX (sig_class_id),
  INDEX (sig_priority), INDEX (timestamp), INDEX (ip_src),
  INDEX (ip_dst), INDEX (ip_proto), INDEX (layer4_sport),
  INDEX (layer4_dport)
);

INSERT INTO `snort-2004-08-02`.acid_event
  SELECT snort.acid_event.*
  FROM snort.acid_event, snort.event
  WHERE (snort.event.timestamp BETWEEN DATE_ADD('2004-08-01', INTERVAL -1 DAY) AND '2004-08-01')
    AND (snort.event.cid = snort.acid_event.cid);

CREATE TABLE acid_ag (
  ag_id    INT UNSIGNED NOT NULL AUTO_INCREMENT,
  ag_name  VARCHAR(40),
  ag_desc  TEXT,
  ag_ctime DATETIME,
  ag_ltime DATETIME,
  PRIMARY KEY (ag_id),
  INDEX (ag_id)
);

# acid_ag has no cid column, so the event join has to go through
# acid_ag_alert (ag_cid), and the SELECT list must match acid_ag's columns.
INSERT INTO `snort-2004-08-02`.acid_ag
  SELECT snort.acid_ag.*
  FROM snort.acid_ag, snort.acid_ag_alert, snort.event
  WHERE (snort.event.timestamp BETWEEN DATE_ADD('2004-08-01', INTERVAL -1 DAY) AND '2004-08-01')
    AND (snort.event.cid = snort.acid_ag_alert.ag_cid)
    AND (snort.acid_ag_alert.ag_id = snort.acid_ag.ag_id);

CREATE TABLE acid_ag_alert (
  ag_id  INT UNSIGNED NOT NULL,
  ag_sid INT UNSIGNED NOT NULL,
  ag_cid INT UNSIGNED NOT NULL,
  PRIMARY KEY (ag_id, ag_sid, ag_cid),
  INDEX (ag_id),
  INDEX (ag_sid, ag_cid)
);

INSERT INTO `snort-2004-08-02`.acid_ag_alert
  SELECT snort.acid_ag_alert.*
  FROM snort.acid_ag_alert, snort.event
  WHERE (snort.event.timestamp BETWEEN DATE_ADD('2004-08-01', INTERVAL -1 DAY) AND '2004-08-01')
    AND (snort.event.cid = snort.acid_ag_alert.ag_cid);

CREATE TABLE acid_ip_cache (
  ipc_ip              INT UNSIGNED NOT NULL,
  ipc_fqdn            VARCHAR(50),
  ipc_dns_timestamp   DATETIME,
  ipc_whois           TEXT,
  ipc_whois_timestamp DATETIME,
  PRIMARY KEY (ipc_ip),
  INDEX (ipc_ip)
);

INSERT INTO `snort-2004-08-02`.acid_ip_cache
  SELECT snort.acid_ip_cache.*
  FROM snort.acid_ip_cache, snort.event
  WHERE (snort.event.timestamp BETWEEN DATE_ADD('2004-08-01', INTERVAL -1 DAY) AND '2004-08-01'
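On the question of finding the archive database's age without a bookkeeping table: a minimal sketch, assuming MyISAM tables (SHOW TABLE STATUS is a standard MySQL statement; the 7-day cutoff logic itself would live in the wrapper script).

```sql
-- SHOW TABLE STATUS reports a Create_time for each table in the database;
-- the oldest Create_time approximates when the archive was built.
SHOW TABLE STATUS FROM `snort-2004-08-02`;

-- Once the wrapper script decides the archive is more than 7 days old:
DROP DATABASE `snort-2004-08-02`;
```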
RE: How do you archive db daily?
Shawn: Thank you for your reply. I did not know you could reference a database in a SELECT clause; I thought you could only reference tables. I do have a follow-up question. I assume that once I have created dbArchYesterday, mysqld has the database in memory. Is there a way to: 1. write the database to disk, and 2. unload the database from memory (if it is in memory), in order to use file utilities such as tar and gzip to archive and compress the database into a .tgz file? Thank you, raymond

PS: Is there a document that describes how mysql loads and unloads tables and databases in memory?

-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: Friday, July 30, 2004 9:03 To: Jacob, Raymond A Jr Cc: [EMAIL PROTECTED] Subject: Re: How do you archive db daily?

Why not keep your server running and just use scripted SQL to migrate the records from one DB to the other?

CREATE DATABASE dbArchYesterday;
USE dbArchYesterday;
CREATE TABLE table1 LIKE dbActive.table1;
CREATE TABLE table2 LIKE dbActive.table2;
CREATE TABLE table3 LIKE dbActive.table3;
CREATE TABLE table4 LIKE dbActive.table4;

INSERT table1 (list of columns to load)
  SELECT list of columns to read FROM dbActive.table1
  WHERE alertdate BETWEEN 'yesterday 00:00:00' AND 'yesterday 23:59:59';
INSERT table2 (list of columns to load)
  SELECT list of columns to read FROM dbActive.table2
  WHERE alertdate BETWEEN 'yesterday 00:00:00' AND 'yesterday 23:59:59';
INSERT table3 (list of columns to load)
  SELECT list of columns to read FROM dbActive.table3
  WHERE alertdate BETWEEN 'yesterday 00:00:00' AND 'yesterday 23:59:59';
INSERT table4 (list of columns to load)
  SELECT list of columns to read FROM dbActive.table4
  WHERE alertdate BETWEEN 'yesterday 00:00:00' AND 'yesterday 23:59:59';

DELETE FROM dbActive.table1 WHERE alertdate BETWEEN 'yesterday 00:00:00' AND 'yesterday 23:59:59';
DELETE FROM dbActive.table2 WHERE alertdate BETWEEN 'yesterday 00:00:00' AND 'yesterday 23:59:59';
DELETE FROM dbActive.table3 WHERE alertdate BETWEEN 'yesterday 00:00:00' AND 'yesterday 23:59:59';
DELETE FROM dbActive.table4 WHERE alertdate BETWEEN 'yesterday 00:00:00' AND 'yesterday 23:59:59';

That way you never have to stop your server, and you can transform any data you need to change during the move. This process also allows you to build additional summary tables during the move, if you want them. By scripting the entire thing and replacing my (hopefully) obvious placeholders with live names and dates, this should perform rather well with little or no intervention on your part. Another advantage of scripting this process is that you can monitor each stage and abort the deletes at the end if things did not go well.

Shawn Green Database Administrator Unimin Corporation - Spruce Pine

Jacob, Raymond A Jr [EMAIL PROTECTED] wrote on 07/29/2004 06:26:26 PM:
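On the follow-up question about forcing the archive database onto disk before running tar and gzip: a minimal sketch, assuming MyISAM tables. FLUSH TABLES WITH READ LOCK and UNLOCK TABLES are standard MySQL statements; the tar/gzip step itself happens in the shell.

```sql
-- FLUSH TABLES WITH READ LOCK closes open table handles, flushes buffered
-- changes to disk, and blocks writes until the lock is released, so the
-- files under the data directory can be archived safely in the meantime.
FLUSH TABLES WITH READ LOCK;
-- ... run tar/gzip on the database directory from another shell ...
UNLOCK TABLES;
```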
How do you archive db daily?
I am running MySql 3.23.58-1 on a snort database containing IDS alerts. At 12:00am I would like to move the previous day's alerts from four tables to a backup database named for the previous day. Has anyone implemented such a backup schedule? If so, can someone send me a script? I had the following ideas on the subject:

1. a. Shut down mysql.
   b. cp -r database //2004-07-29
   c. mysqlstart /* I need a single-user mode for the delete to work */
   d. echo "DELETE FROM iphdr; DELETE FROM tcphdr; DELETE FROM acid_event; DELETE FROM event" | mysql -p xxx -u
   e. Go back to multiuser mode.

2. a. Assuming logging is turned on:
      mysqlhotcopy snortdb
      ( echo "DELETE FROM iphdr; DELETE FROM tcphdr; DELETE FROM acid_event; DELETE FROM event"; mysqlbinlog snort.log ) | mysql -p xxx -u yyy

3. a. $ mysql -p xxx -u yyy
      1. If a week has passed, purge the tables:
         mysql> DELETE FROM iphdr; (repeat for the rest of the tables)
      2. mysql> SELECT iphdr.* INTO OUTFILE '/.../backup/2004-07-29/iphdr.txt'
                FROM iphdr, event WHERE event.timestamp < '2004-07-29';
         mysql> DELETE FROM iphdr; (repeat for the rest of the tables)
         mysql> USE backsnort_db;
         mysql> LOAD DATA INFILE '/.../backup/2004-07-29/iphdr.txt' INTO TABLE iphdr; (repeat for the rest of the tables)
         mysql> exit
   b. tar cvf backsnort_db

That is my best shot; if anyone has a more elegant solution, I would appreciate hearing about it. Thank you, Raymond
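For the "backup database named for the previous day" part, the date can come from MySQL itself rather than the shell: a small sketch using DATE_SUB and CURDATE (both standard MySQL functions); splicing the value into the database name is left to the wrapper script.

```sql
-- Yesterday's date in YYYY-MM-DD form, e.g. for building a name like
-- snort-2004-07-29; a cron wrapper would capture this value and
-- interpolate it into the CREATE DATABASE / mysqlhotcopy commands.
SELECT DATE_SUB(CURDATE(), INTERVAL 1 DAY);
```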
help query analysis
I am trying to analyze a query that is taking forever. I am new to this, or rather, desperate. I would assume that my query would be faster if the event id (*.cid) primary key were used to search for rows in the iphdr table with the same event id, but I don't think that is happening. Could someone tell me if my assumption is correct, and how I can get the query to use the primary *.cid keys? It appears that the developer created a table (acid_event) that joins the signature, iphdr, and event tables together to solve the speed problem. I would still like to know how I can improve my query. thank you, Raymond

-Original Message- From: Raymond Jacob Sent: Monday, July 26, 2004 19:50 To: Jacob, Raymond A Jr Subject: query analysis

version: MySQL-Max-3.23.58-1

DESC iphdr; DESC event; DESC signature; DESC acid_event;

EXPLAIN SELECT COUNT(ip_dst) AS ip_dst_count, INET_NTOA(ip_dst)
FROM iphdr, event, signature
WHERE (event.timestamp > NOW() - INTERVAL 24 HOUR)
  AND (event.cid = iphdr.cid
       AND event.signature = signature.sig_id
       AND signature.sig_sid = 1432)
GROUP BY ip_dst
ORDER BY ip_dst_count DESC
LIMIT 100;

iphdr:
Field | Type | Null | Key | Default | Extra
sid | int(10) unsigned | | PRI | 0 |
cid | int(10) unsigned | | PRI | 0 |
ip_src | int(10) unsigned | | MUL | 0 |
ip_dst | int(10) unsigned | | MUL | 0 |
ip_ver | tinyint(3) unsigned | YES | | NULL |
ip_hlen | tinyint(3) unsigned | YES | | NULL |
ip_tos | tinyint(3) unsigned | YES | | NULL |
ip_len | smallint(5) unsigned | YES | | NULL |
ip_id | smallint(5) unsigned | YES | | NULL |
ip_flags | tinyint(3) unsigned | YES | | NULL |
ip_off | smallint(5) unsigned | YES | | NULL |
ip_ttl | tinyint(3) unsigned | YES | | NULL |
ip_proto | tinyint(3) unsigned | | | 0 |
ip_csum | smallint(5) unsigned | YES | | NULL |

event:
Field | Type | Null | Key | Default | Extra
sid | int(10) unsigned | | PRI | 0 |
cid | int(10) unsigned | | PRI | 0 |
signature | int(10) unsigned | | MUL | 0 |
timestamp | datetime | | MUL | 0000-00-00 00:00:00 |

signature:
Field | Type | Null | Key | Default | Extra
sig_id | int(10) unsigned | | PRI | NULL | auto_increment
sig_name | varchar(255) | | MUL | |
sig_class_id | int(10) unsigned | | MUL | 0 |
sig_priority | int(10) unsigned | YES | | NULL |
sig_rev | int(10) unsigned | YES | | NULL |
sig_sid | int(10) unsigned | YES | | NULL |

acid_event:
Field | Type | Null | Key | Default | Extra
sid | int(10) unsigned | | PRI | 0 |
cid | int(10) unsigned | | PRI | 0 |
signature | int(10) unsigned | | MUL | 0 |
sig_name | varchar(255) | YES | MUL | NULL |
sig_class_id | int(10) unsigned | YES | MUL | NULL |
sig_priority | int(10) unsigned | YES | MUL | NULL |
timestamp | datetime | | MUL | 0000-00-00 00:00:00 |
ip_src | int(10) unsigned | YES | MUL | NULL |
ip_dst | int(10) unsigned | YES | MUL | NULL |
ip_proto | int(11) | YES | MUL | NULL |
layer4_sport | int(10) unsigned | YES | MUL | NULL |
layer4_dport | int(10) unsigned | YES | MUL | NULL |

Query analysis (EXPLAIN output):
table | type | possible_keys | key | key_len | ref | rows | Extra
event | range | sig,time | time | 8 | NULL | 39382 | where used; Using temporary; Using filesort
iphdr | ALL | NULL | NULL | NULL | NULL | 375383 | where used
signature | eq_ref | PRIMARY | PRIMARY | 4 | event.signature | 1 | where used

cartesian product = 14,783,333,306

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
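One plausible reason for the full scan on iphdr in the EXPLAIN output (type ALL, no usable key): iphdr's PRIMARY KEY is the composite (sid, cid), and joining on cid alone does not match a prefix of that key. A hedged sketch of a rewrite that joins on both columns so the primary key becomes usable, assuming event and iphdr share the same sid per alert (which the matching (sid, cid) primary keys suggest):

```sql
EXPLAIN SELECT COUNT(ip_dst) AS ip_dst_count, INET_NTOA(ip_dst)
FROM event, iphdr, signature
WHERE event.timestamp > NOW() - INTERVAL 24 HOUR
  -- join on both halves of iphdr's composite PRIMARY KEY (sid, cid)
  AND event.sid = iphdr.sid
  AND event.cid = iphdr.cid
  AND event.signature = signature.sig_id
  AND signature.sig_sid = 1432
GROUP BY ip_dst
ORDER BY ip_dst_count DESC
LIMIT 100;
```

If the plan still shows type ALL for iphdr, a separate single-column INDEX (cid) on iphdr would give the optimizer a direct lookup path for the cid join.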