Re: Question about MySQL 4.0.20 and make test failure on Linux
Jeff Smelser wrote: On Friday 18 June 2004 11:48 pm, Tom Williams wrote: Hi! I'm trying to build MySQL 4.0.20 on a RedHat 5.2 (I think) Linux system with glibc-2.2.5 and gcc-3.4.0 (which I recently upgraded to). The compile runs smoothly, but make test fails. Here is my configure command: Whoa... 2.2.5? RedHat 5.2? You realize RedHat 5.2 was released back in the late 90's? Anyway:

$ ./configure --prefix=/usr/local/mysql-4.0.20 --enable-assembler --enable-thread-safe-client --with-mysqld-user=mysql

Here is the output from make test: snip ERROR: /home/tom/mysql-4.0.20/mysql-test/var/run/master.pid was not created in 30 seconds; Aborting make: *** [test] Error 1 [EMAIL PROTECTED] mysql-4.0.20]$

Are you trying to install this as a normal user? That's what you're doing. What do you mean? He's not installing, he's running `make test`, which should run as a normal user. (It will log warnings about being unable to switch to user mysql, since he's not running it as root, but that's OK.) I don't think I have ever run make test.. :) You should have rights since it's your home directory.

I always run `make test` when I build from source. It can catch problems. For example, mysql 4.0.17 built with gcc on Mac OS X 10.2 passed all tests and ran flawlessly for me. On the other hand, mysql 4.0.18 and 4.1.1 built on Mac OS X 10.3 seemed to build fine but failed the test suite. Sure enough, there were problems with those combinations. Now I've built 4.0.20 and 4.1.2 on 10.3.4 and once again all tests pass. A failing test is an indication that something isn't behaving as expected; that would probably point to a problem with the build or the OS.

Michael -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
Re: Clustered Index - Where is data inserted ? Not Answered
Paul, - Original Message - From: Paul Chu [EMAIL PROTECTED] Newsgroups: mailing.database.myodbc Sent: Saturday, June 19, 2004 5:07 AM Subject: RE: Clustered Index - Where is data inserted ? Not Answered Appreciate any help at all Thanks, Paul -Original Message- From: Paul Chu [mailto:[EMAIL PROTECTED] Sent: Friday, June 18, 2004 10:16 AM To: [EMAIL PROTECTED] Subject: Clustered Index - Where is data inserted ? Hi, Can someone explain how the data is stored in a table using a clustered index. Does this mean that the data is inserted in the .myd file in sorted index order ? MyISAM does not have clustered indexes. InnoDB has a clustered index on every table, and normally it is on the PRIMARY KEY. If so, how is space made to insert the new records ? Does this make inserting records slow because data is being inserted in physical sort order in the table ? For large tables, people usually insert new rows at the high end of the PRIMARY KEY, since the key is usually a generated id. Splitting B-tree pages at the index end is very efficient. In this case having a clustered index poses no overhead. But if you insert records randomly on the PRIMARY KEY, then B-tree page splits waste disk space, and that will reduce performance somewhat. Thanks, Paul Best regards, Heikki Tuuri Innobase Oy Foreign keys, transactions, and row level locking for MySQL InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM tables http://www.innodb.com/order.php Order MySQL technical support from https://order.mysql.com/ -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: AUTO_INCREMENT problem... ER_DUP_ENTRY? (No, it's not a one byte index :)
Just a suggestion, Kevin, but how about changing from INT to BIGINT? Terry --Original Message- We have a table with a primary index which is INT NOT NULL AUTO_INCREMENT. After inserting ~87,000,000 entries, we started seeing error 1062, ER_DUP_ENTRY. We can get going again after doing an ALTER TABLE to reset the auto_increment starting point, but this takes about an hour... I've seen a couple of places where how to get around this problem was discussed, but nobody seems to discuss *why* this occurs in the first place. Does anyone know why MySQL would start failing to increment an auto_increment index properly when it's nowhere near the upper limit? Does anyone know a way to get things functioning again without a couple of hours downtime? Hoping there's an answer out there somewhere... Kevin Brock [EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
problem
OK, this is what I want to do. I have 2 tables: one is information about a position, and the second is org info, e.g.

table 1
| Continent | Country | Postion | Name | Ward | ORG_ID |
| 1         | 2       | 3       | Name | Ward | 1      |

table 2
| ORG_ID | ORG_Tag |
| 1      | Org 1   |
| 2      | Org 2   |
| 3      | Org 3   |

Now, what I want is: when a user enters 2 3 4 Name Ward "Org 2", I want to use a single INSERT command to enter this into the database as 2 3 4 Name Ward 2, by cross-referencing the ORG_Tag with the ORG_ID.
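One common way to do this kind of lookup-and-insert in a single statement is INSERT ... SELECT: the SELECT supplies the user's literal values plus the ORG_ID looked up by its tag. A minimal sketch of the idea (using Python's built-in sqlite3 so it runs self-contained; the table and column names are adapted from the post and are only illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Illustrative schemas modeled on the two tables in the question.
cur.execute("CREATE TABLE positions (continent, country, position, name, ward, org_id)")
cur.execute("CREATE TABLE orgs (org_id INTEGER, org_tag TEXT)")
cur.executemany("INSERT INTO orgs VALUES (?, ?)",
                [(1, "Org 1"), (2, "Org 2"), (3, "Org 3")])

# A single INSERT ... SELECT: the user supplies the tag ("Org 2"),
# and the matching org_id is looked up from the orgs table.
cur.execute("""
    INSERT INTO positions (continent, country, position, name, ward, org_id)
    SELECT ?, ?, ?, ?, ?, org_id FROM orgs WHERE org_tag = ?
""", (2, 3, 4, "Name", "Ward", "Org 2"))

row = cur.execute("SELECT * FROM positions").fetchone()
print(row)  # (2, 3, 4, 'Name', 'Ward', 2)
```

The same INSERT ... SELECT shape works in MySQL; if the tag doesn't exist, the SELECT returns no rows and nothing is inserted, which is usually the behavior you want.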
Re: Question about MySQL 4.0.20 and make test failure on Linux
Michael Stassen wrote: I always run `make test` when I build from source. It can catch problems. For example, mysql 4.0.17 built with gcc on Mac OS X 10.2 passed all tests and ran flawlessly for me. On the other hand, mysql 4.0.18 and 4.1.1 built on Mac OS X 10.3 seemed to build fine but failed the test suite. Sure enough, there were problems with those combinations. Now I've built 4.0.20 and 4.1.2 on 10.3.4 and once again all tests pass. Failing tests is an indication that something isn't behaving as expected. That would probably indicate a problem with the build or the OS. Michael

Thanks for the info. :) When you mentioned logging warnings above, I started looking more closely at the log files I could find and found this:

CURRENT_TEST: ctype_latin1_de
/home/tom/mysql-4.0.20/sql/mysqld: File '/home/tom/mysql-4.0.20/sql/share/charsets/latin1_de.conf' not found (Errcode: 2)
/home/tom/mysql-4.0.20/sql/mysqld: Character set 'latin1_de' is not a compiled character set and is not specified in the '/home/tom/mysql-4.0.20/sql/share/charsets/Index' file

I think this is my problem even though the master.pid message is what gets displayed. I'll research this and see what I come up with. Thanks! Peace... Tom
Re: AUTO_INCREMENT problem... ER_DUP_ENTRY? (No, it's not a one byte index :)
On Jun 18, 2004, at 5:31 PM, Scott Haneda wrote: on 06/18/2004 05:16 PM, Kevin Brock at [EMAIL PROTECTED] wrote: We have a table with a primary index which is INT NOT NULL AUTO_INCREMENT. After inserting ~87,000,000 entries, we started seeing error 1062, ER_DUP_ENTRY. While I do not know why, I would suggest you simply drop the PK and recreate it, this should be a whole lot faster than the alter. Thanks, I'll give it a try next time the problem shows up. My main worry is that the AUTO_INCREMENT problem is merely a symptom of a more serious problem with the table. I've got a couple of potential band-aids, but I want to find out how it got hurt so we can avoid it in the future... Kevin -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: AUTO_INCREMENT problem... ER_DUP_ENTRY? (No, it's not a one byte index :)
On Jun 19, 2004, at 6:03 AM, Terry Riley wrote: Just a suggestion, Kevin, but how about changing from INT to BIGINT? I thought of trying that, but since we're nowhere near the limit even for an INT, I think changing to BIGINT is premature. I want to find out a bit more about what's happening first. The fact that it stops at such a low number makes me think it's not related to the size of the field. I saw a reference in a post (to a different mailing list) to a table's auto_increment limit, as though that was something different from the max value of an INT, but I haven't run across it in any official documentation. Kevin

--Original Message- We have a table with a primary index which is INT NOT NULL AUTO_INCREMENT. After inserting ~87,000,000 entries, we started seeing error 1062, ER_DUP_ENTRY. We can get going again after doing an ALTER TABLE to reset the auto_increment starting point, but this takes about an hour... I've seen a couple of places where how to get around this problem was discussed, but nobody seems to discuss *why* this occurs in the first place. Does anyone know why MySQL would start failing to increment an auto_increment index properly when it's nowhere near the upper limit? Does anyone know a way to get things functioning again without a couple of hours of downtime? Hoping there's an answer out there somewhere... Kevin Brock [EMAIL PROTECTED]
Re: ERROR 1045: Access denied for user: 'foo@host' (Using password: YES)
Marc Tardif [EMAIL PROTECTED] wrote: I'm using mysql-3.23.58-1.9 installed from RPM on Red Hat Linux release 9 (Shrike). First, I create a database and a user to connect to this database using the following commands:

mysql> CREATE DATABASE foo;
mysql> GRANT ALL PRIVILEGES ON foo.* TO foo@'%' IDENTIFIED BY 'password' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON foo.* TO [EMAIL PROTECTED] IDENTIFIED BY 'password' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;

Then, I try to connect to the database using the host parameter, but I get the following error message:

# mysql --user=foo --password=password -h host foo
ERROR 1045: Access denied for user: '[EMAIL PROTECTED]' (Using password: YES)

I've only managed to find a single thread about this issue, which was a bug in the installation of mysql on RedHat 7. Many months have gone by since then, so I wonder if this is still the same issue.

Delete the entry for the anonymous user ''@'localhost' from the user table, and then FLUSH PRIVILEGES.

-- For technical support contracts, go to https://order.mysql.com/?ref=ensita This email is sponsored by Ensita.net http://www.ensita.net/ Egor Egorov, [EMAIL PROTECTED], MySQL AB / Ensita.net, www.mysql.com
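Egor's fix refers to the anonymous-user rows that a default MySQL installation creates: when a client connects, the server matches the most specific user/host entry first, and the anonymous ''@'localhost' row can shadow foo@'%'. A sketch of the cleanup he describes (run as a user with privileges on the mysql database; in 3.23/4.0 you edit the grant tables directly and then reload them):

```sql
-- Remove the anonymous-user entry that shadows foo@'%', then reload grants.
USE mysql;
DELETE FROM user WHERE User = '' AND Host = 'localhost';
FLUSH PRIVILEGES;
```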
Solaris install of 4.0.20 - Only 20 connections allowed
All, Hardware: Solaris, 4 CPU, 16G RAM, 900MHz, ver 5.8. MySQL: 4.0.20 binary install, 32-bit version. Admin install; I maintain the my.cnf and do db admin. Installed a production db (10 million rows) with no problems. However, I can only open 20 or so connections to mysqld. The error is:

ERROR 1135: Can't create a new thread (error 11). If you are not out of available memory, you can consult the manual for a possible OS-dependent bug

The command `perror 11` returns "Resource temporarily unavailable". The my.cnf config is based on my-huge.cnf with the key buffer set to 4000M. The max number of connections is set to 100. That's standard. I looked at file descriptors using the command `pfiles <pid>`, and it has over 1k on that process. I get the error while using the mysql client, or using the normal client (Java JDBC). I have an older development Solaris box with 4.0.18 installed (64-bit version), and it can do connections up to 100 with no problems. This was confirmed by creating a Java program and doing connections in a loop. Does anyone have any ideas as to what to check or what else to do? Kinda desperate. David
what is the appropriate forum to discuss test errors?
Hello, I'm trying to build the current source (fetched via BitKeeper) to have access to ndb/clusters. I am building successfully but get errors with 'make test'. I have been cycling through this exercise for 4 days, fetching the daily patches and getting a clean build every time (various RedHat Linux, gcc 3.2/3) but falling down on the tests. For the first 2 days I was failing on test #2 (alter table) when the server spontaneously aborted, but yesterday the server stabilized to the point where I got through a pile of tests, only to fail on 'grant_cache', which seemed to be a subtler error. As of this morning's patch, I'm back to failing on test #2. Can anyone suggest the best place to get some advice on this? - michael dykman - [EMAIL PROTECTED]
Re: AUTO_INCREMENT problem... ER_DUP_ENTRY? (No, it's not a one byte index :)
To the best of my knowledge, AUTO_INCREMENT columns are limited only by the size of the int, so an INT NOT NULL AUTO_INCREMENT should go to 2,147,483,647. Something about your description doesn't quite fit, however. You say that you are nowhere near the limit, but you say that resetting the auto_increment starting point fixes the problem. Those seem contradictory to me. Perhaps if you explained further, someone might be able to figure out what is happening. To that end, here are some questions:

- What kind of table is it (MyISAM, InnoDB, ...)?

- Have you tried SHOW TABLE STATUS LIKE 'yourtable' when this happens? What is the next auto_increment value (in the Auto_increment column) according to the output?

- Have you looked at the data in the table? Are the values in the auto_increment column consecutive, as expected? What's the max value in that column?

- Are you inserting 87 million rows into an empty or existing table? If the latter, how many rows are already there? In either case, is this a large bulk insert that fails before it's done, or are you saying that in normal operations it fails every 87 million or so inserts?

- How are you resetting the AUTO_INCREMENT starting point, exactly? How do you choose the starting value? What is the value you choose? Do you verify it worked with SHOW TABLE STATUS? Note that if you try to set the next AUTO_INCREMENT to a value less than the largest value currently in the column, the ALTER silently fails.

- Normally, auto_increment columns do not reuse values, so if you insert then delete a row, that number is still gone. Are you doing just inserts, or are there deletions mixed in? If, for example, you insert 80 million rows a day and delete rows that were inserted yesterday, you'll run out of INT keys in under a month, even though there are only 80 million rows in the table.

- Finally, just to cover all the bases, that really is 87 million inserts, not 8.7 million, right?
I only ask because a MEDIUMINT column runs out a little past 8.3 million. Michael Kevin Brock wrote: On Jun 19, 2004, at 6:03 AM, Terry Riley wrote: Just a suggestion, Kevin, but how about changing from INT to BIGINT? I thought of trying that, but since we're nowhere near the limit even for an INT I think changing to BIGINT is premature. I want to find out a bit more about what's happening first. The fact that it stops at such a low number makes me think it's not related to the size of the field. I saw a reference in a post (to a different mailing list) to a tables auto_increment limit, as though that was something different from the max value of an INT, but I haven't run across it in any official documentation. Kevin --Original Message- We have a table with a primary index which is INT NOT NULL AUTO_INCREMENT. After inserting ~87,000,000 entries, we started seeing error 1062, ER_DUP_ENTRY. We can get going again after doing an ALTER TABLE to reset the auto_increment starting point, but this takes about an hour... I've seen a couple of places where how to get around this problem was discussed, but nobody seems to discuss *why* this occurs in the first place. Does anyone know why MySQL would start failing to increment an auto_increment index properly when it's nowhere near the upper limit? Does anyone know a way to get things functioning again without a couple of hours downtime? Hoping there's an answer out there somewhere... Kevin Brock [EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
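Michael's point that auto_increment values are never reused after a delete is easy to demonstrate. The sketch below uses Python's built-in sqlite3, whose AUTOINCREMENT keyword has the same no-reuse property as MySQL's AUTO_INCREMENT (SQLite stands in for MySQL here purely so the example is self-contained):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# With AUTOINCREMENT, SQLite -- like MySQL's AUTO_INCREMENT -- never hands
# out a value lower than the highest one ever used: deleted ids stay gone.
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")
cur.executemany("INSERT INTO t (v) VALUES (?)", [("a",), ("b",), ("c",)])
cur.execute("DELETE FROM t WHERE id = 3")      # delete the newest row
cur.execute("INSERT INTO t (v) VALUES ('d')")  # does NOT reuse id 3
new_id = cur.execute("SELECT id FROM t WHERE v = 'd'").fetchone()[0]
print(new_id)  # 4
```

So a table can exhaust its key space with far fewer live rows than the column maximum if inserts and deletes are mixed, which is exactly the scenario Michael asks about.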
Re: AUTO_INCREMENT problem... ER_DUP_ENTRY? (No, it's not a one byte index :)
On Jun 19, 2004, at 10:37 AM, Michael Stassen wrote: Something about your description doesn't quite fit, however. You say that you are nowhere near the limit, but you say that resetting the auto_increment starting point fixes the problem. Those seem contradictory to me. To me as well, that's why I posted. What kind of table is it (MyISAM, InnoDB,...)? MyISAM. Have you tried SHOW TABLE STATUS LIKE 'yourtable' Nope. I'm sure we'll be able to reproduce the problem shortly though, and I'll try it then. when this happens? What is the next auto_increment value (in the Auto_increment column) according to the output? Well, since I didn't try it I don't know for sure :-) When I check LAST_INSERT_ID, it's correct. I.e., the ID that fails would be the next ID after LAST_INSERT_ID. Have you looked at the data in the table? Are the values in the auto_increment column consecutive, as expected? What's the max value in that column? The max value is one less than the value that failed, the number of rows is correct, and the values in the column are consecutive. Are you inserting 87 million rows in an empty or existing table? If the latter, how many rows are already there? In either case, is this a large bulk insert that fails before it's done, or are you saying that in normal operations it fails every 87 million or so inserts? The table has been accumulating data for about a month, starting with an empty table. Each insert is on the order of 10-20,000, inserting using INSERT INTO doing multiple rows at a time. How are you resetting the AUTO_INCREMENT starting point, exactly? How do you choose the starting value? What is the value you choose? Do you verify it worked with SHOW TABLE STATUS? Note that if you try to set the next AUTO_INCREMENT to a value less than the largest value currently in the column, the ALTER silently fails. I did an ALTER TABLE to set AUTO_INCREMENT to one greater than the value that failed. 
We were able to insert more data after that, and the data inserted had the expected values for the ID column... Normally, auto_increment columns do not reuse values, so if you insert then delete a row, that number is still gone. Are you doing just inserts, or are there deletions mixed in? If, for example, you insert 80 million rows a day and delete rows that were inserted yesterday, you'll run out of INT keys in under a month, even though there are only 80 million rows in the table. We hadn't done any deletes on the table. The first ID value is one. Finally, just to cover all the bases, that really is 87 million inserts, not 8.7 million, right? I only ask because a MEDIUMINT column runs out a little past 8.3 million. Right. 87 million rows in the table. Kevin -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Question about MySQL 4.0.20 and make test failure on Linux
Tom Williams wrote: Hi! I'm trying to build MySQL 4.0.20 on a RedHat 5.2 (I think) Linux system with glibc-2.2.5 and gcc-3.4.0 (which I recently upgraded to). The compile runs smoothly, but make test fails. Here is my configure command:

$ ./configure --prefix=/usr/local/mysql-4.0.20 --enable-assembler --enable-thread-safe-client --with-mysqld-user=mysql

Here is the output from make test:

[EMAIL PROTECTED] mysql-4.0.20]$ make test
cd mysql-test ; ./mysql-test-run
Installing Test Databases
Removing Stale Files
Installing Master Databases
running ../sql/mysqld --no-defaults --bootstrap --skip-grant-tables --basedir=. --datadir=./var/master-data --skip-innodb --skip-bdb --skip-warnings --language=../sql/share/english/
040619 7:39:33 ../sql/mysqld: Shutdown Complete
Installing Slave Databases
running ../sql/mysqld --no-defaults --bootstrap --skip-grant-tables --basedir=. --datadir=./var/slave-data --skip-innodb --skip-bdb --skip-warnings --language=../sql/share/english/
040619 7:39:33 ../sql/mysqld: Shutdown Complete
Manager disabled, skipping manager start.
Loading Standard Test Databases
Starting Tests

TEST                   RESULT
--
alias                  [ pass ]
alter_table            [ pass ]
analyse                [ pass ]
ansi                   [ pass ]
auto_increment         [ pass ]
backup                 [ pass ]
bdb-alter-table-1      [ skipped ]
bdb-alter-table-2      [ skipped ]
bdb-crash              [ skipped ]
bdb-deadlock           [ skipped ]
bdb                    [ skipped ]
bdb_cache              [ skipped ]
bench_count_distinct   [ pass ]
bigint                 [ pass ]
binary                 [ pass ]
bool                   [ pass ]
bulk_replace           [ pass ]
case                   [ pass ]
cast                   [ pass ]
check                  [ pass ]
comments               [ pass ]
compare                [ pass ]
constraints            [ pass ]
convert                [ pass ]
count_distinct         [ pass ]
count_distinct2        [ pass ]
create                 [ pass ]
ctype_cp1251           [ pass ]
ERROR: /home/tom/mysql-4.0.20/mysql-test/var/run/master.pid was not created in 30 seconds; Aborting
make: *** [test] Error 1
[EMAIL PROTECTED] mysql-4.0.20]$

I've been trying to find info on the test suite in the online doc, and I decided to post this question here while I continue poring over the doc.
If make test is failing, does that mean I shouldn't use the server I just built? Thanks in advance for your time and assistance! :) Peace... Tom Well, I found the solution to this problem: http://bugs.mysql.com/bug.php?id=1838 Recompiling with --with-extra-charsets=all seems to have worked! Peace... Tom -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Clustered Index - Where is data inserted ? Not Answered
Excuse me for jumping in, but isn't a primary key likely to be a poor choice for a clustering index? Wouldn't it be better to choose a non-unique key of some kind?

I've used clustering indexes for years with DB2 and, assuming InnoDB uses the term in the same way, a clustering index has only one effect over and above the effect of any other index: it influences the placement of new rows when they are inserted, in such a way that the database manager *tries* to put a new row on the same physical page as another row with the same key value. This is potentially advantageous because it improves performance in the case where a user asks for the rows in the clustering sequence, assuming that a large majority of rows *were* actually stored in clustering sequence.

To illustrate with an example, consider an ORDERS table which contains orders placed with a manufacturing company. The table is likely to have a primary key and, more often than not, the primary key will be ORDER_NUMBER. The table is also likely to have a column like ORDER_DATE, which will normally be non-unique. If I cluster the table on ORDER_NUMBER, the clustering index is only going to be useful if I write a query that displays all my orders in order number sequence. Although that is certainly something we do often enough when messing around with test tables, just to be sure the data is inserted correctly, that doesn't seem particularly likely in the real world for real reports. A report we're much more likely to do in the real world, with real production data, is to list the orders for a given day or week. In that case, that query would work very well if the table had ORDER_DATE as the clustering index, assuming that most of the rows are actually in clustering sequence.
Again, in the DB2 world, we have to make sure that we leave sufficient freespace so that new rows *can* go on the same page as rows with the same or adjacent keys and we have to reorganize our tables occasionally to get all the rows back into clustering order and re-establish the necessary margin of freespace. Does InnoDB clustering work the same way? I'm afraid I haven't had any need for clustering indexes in my project so far so I haven't had to look for the information. Rhino - Original Message - From: Heikki Tuuri [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Saturday, June 19, 2004 6:12 AM Subject: Re: Clustered Index - Where is data inserted ? Not Answered Paul, - Original Message - From: Paul Chu [EMAIL PROTECTED] Newsgroups: mailing.database.myodbc Sent: Saturday, June 19, 2004 5:07 AM Subject: RE: Clustered Index - Where is data inserted ? Not Answered Appreciate any help at all Thanks, Paul -Original Message- From: Paul Chu [mailto:[EMAIL PROTECTED] Sent: Friday, June 18, 2004 10:16 AM To: [EMAIL PROTECTED] Subject: Clustered Index - Where is data inserted ? Hi, Can someone explain how the data is stored in a table using a clustered index. Does this mean that the data is inserted in the .myd file in sorted index order ? MyISAM does not have clustered indexes. InnoDB has a clustered index on every table, and normally it is on the PRIMARY KEY. If so, how is space made to insert the new records ? Does this make inserting records slow because data is being inserted in physical sort order in the table ? For large tables, people usually insert new rows at the high end of the PRIMARY KEY, since the key is usually a generated id. Splitting B-tree pages at the index end is very efficient. In this case having a clustered index poses no overhead. But if you insert records randomly on the PRIMARY KEY, then B-tree page splits waste disk space, and that will reduce performance somewhat. 
Thanks, Paul Best regards, Heikki Tuuri Innobase Oy Foreign keys, transactions, and row level locking for MySQL InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM tables http://www.innodb.com/order.php Order MySQL technical support from https://order.mysql.com/ -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
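For reference, Heikki's advice about inserting at the high end of the PRIMARY KEY corresponds to a schema along these lines (a sketch only; the table and column names are illustrative, and TYPE= was the 4.0-era spelling of what later became ENGINE=):

```sql
CREATE TABLE orders (
    order_number INT NOT NULL AUTO_INCREMENT,  -- generated id: new rows land at the index end
    order_date   DATE NOT NULL,
    customer_id  INT NOT NULL,
    PRIMARY KEY (order_number),                -- InnoDB clusters the table on this key
    INDEX idx_order_date (order_date)          -- ordinary secondary index for date-range reports
) TYPE=InnoDB;
```

With this layout, bulk inserts append to the clustered index rather than splitting pages in the middle, while date-range reports use the secondary index.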
Server Startup
Hello, I installed the software. I would like to start up the server, but there is no icon. What is the name of the executable so I can make an icon? I am running under Windows. Thanks, Andrew
Re: Server Startup
Go into the bin directory and start up winmysqladmin.exe. That will give you graphical control over the server. - Original Message - From: Andrew McHorney To: [EMAIL PROTECTED] Sent: Saturday, June 19, 2004 4:20 PM Subject: Server Startup Hello, I installed the software. I would like to start up the server, but there is no icon. What is the name of the executable so I can make an icon? I am running under Windows. Thanks, Andrew
Returning updated rows
Is it possible to return the actual rows which were affected by an UPDATE statement? What I want to do: I have n rows in a table which represent some work that several servers have to do. For this, the table has a column called INPROCESS. So that multiple servers can work on the transactions, I'd like each of them to request 1 row which is not currently being processed, and set INPROCESS to YES. I have to avoid that two servers grab the same row... My idea was something like UPDATE ... INPROCESS=YES WHERE INPROCESS=NO ... LIMIT 1, but of course the application would also have to know which item it should process now. Thank you in advance for your help. Thomas
RE: Professional certification
-Original Message- From: Brian Mansell [mailto:[EMAIL PROTECTED] I took the certification exam this morning and passed. When should I expect to receive the certificate (and other items) in the mail? I'm thinking of studying for the exam. How much time did you put into studying before you sat for the exam? Which books did you use?
RE: Returning updated rows
There are a number of approaches which you could take to this. One option would be to 'pre-assign' rows to a server, so that each server has a distinct set of rows to work through. Another would be to use transactions to handle this, so that only one set of updates actually occurs at a time (locking should prevent the processes from getting the same data, since the objects/rows held in the transaction would be locked out). Difficult to say without a better idea of what you are trying to achieve, though! Cheers, Matt

-Original Message- From: Thomas Schwanhaeuser [mailto:[EMAIL PROTECTED] Sent: 19 June 2004 23:40 To: [EMAIL PROTECTED] Subject: Returning updated rows Is it possible to return the actual rows which were affected by an UPDATE statement? What I want to do: I have n rows in a table which represent some work that several servers have to do. For this, the table has a column called INPROCESS. So that multiple servers can work on the transactions, I'd like each of them to request 1 row which is not currently being processed, and set INPROCESS to YES. I have to avoid that two servers grab the same row... My idea was something like UPDATE ... INPROCESS=YES WHERE INPROCESS=NO ... LIMIT 1, but of course the application would also have to know which item it should process now. Thank you in advance for your help. Thomas
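A third approach, closer to what Thomas sketched: tag the row at claim time with a token unique to the claiming server, then SELECT it back by that token, so the UPDATE itself decides who wins the race. A sketch using Python's built-in sqlite3 as a stand-in for MySQL (all names are illustrative; on MySQL 4.x the UPDATE could instead use ... WHERE INPROCESS='NO' LIMIT 1 together with the same token trick):

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE work (id INTEGER PRIMARY KEY, payload TEXT, "
             "inprocess TEXT DEFAULT 'NO', claimed_by TEXT)")
conn.executemany("INSERT INTO work (payload) VALUES (?)",
                 [("job-a",), ("job-b",), ("job-c",)])

def claim_one(conn):
    """Atomically claim one unclaimed row and return (id, payload), or None."""
    token = uuid.uuid4().hex  # unique per claim attempt, identifies the winner
    cur = conn.execute(
        "UPDATE work SET inprocess = 'YES', claimed_by = ? "
        "WHERE id = (SELECT MIN(id) FROM work WHERE inprocess = 'NO') "
        "AND inprocess = 'NO'", (token,))
    if cur.rowcount == 0:
        return None  # nothing left to claim
    # Read back the row this worker just claimed, by its private token.
    return conn.execute(
        "SELECT id, payload FROM work WHERE claimed_by = ?", (token,)).fetchone()

first = claim_one(conn)
second = claim_one(conn)
print(first, second)  # (1, 'job-a') (2, 'job-b')
```

Because the UPDATE only succeeds on a row still marked 'NO', two workers can never claim the same row, and each can recover exactly which row it got without any table-level lock.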
RE: Clustered Index - Where is data inserted ? Not Answered
Hi Rhino, Thanks for that explanation of how and why to use a clustered index ! :) I am just a complete beginner with MySql. I have extensive experience with Sql Server, some Oracle and some DB2. It seems that if I want to create a very large master table with 300 million rows using MYISAM I could use an autoincrement as the PK and create indexes on the columns I'm searching by: e.g. order date Accordingly, new records are inserted at the end of the table and this would make MASS inserts ( millions of rows ) very quick. I hope this assessment is correct ? Regards, Paul -Original Message- From: Rhino [mailto:[EMAIL PROTECTED] Sent: Saturday, June 19, 2004 2:19 PM To: Heikki Tuuri; [EMAIL PROTECTED] Subject: Re: Clustered Index - Where is data inserted ? Not Answered Excuse me for jumping in but isn't a primary key likely to be a poor choice for a clustering index? Wouldn't it be better to choose a non-unique key of some kind? I've used clustering indexes for years with DB2 and, assuming InnoDB uses the term in the same way, a clustering index has only one effect over and above the effect of any other index: it influences the placement of new rows when they are inserted in such a way that the database manager *tries* to put a new row on the same physical page as another row with the same key value. This is potentially advantageous because it improves performance in the case where a user asks for the rows in the clustering sequence, assuming that a large majority of rows *were* actually stored in clustering sequence. To illustrate with an example, consider an ORDERS table which contains orders placed with a manufacturing company. The table is likely to have a primary key and, more often than not, the primary key will be ORDER_NUMBER. The table is also likely to have a column like ORDER_DATE, which will normally be non-unique. 
If I cluster the table on ORDER_NUMBER, the clustering index is only going to be useful if I write a query that displays all my orders in order number sequence. Although that is certainly something we do often enough when messing around with test tables, just to be sure the data is inserted correctly, that doesn't seem particularly likely in the real world for real reports. A report we're much more likely to do in the real world, with real production data, is to list the orders for a given day or week. In that case, that query would work very well if the table had the ORDER_DATE as the clustering index, assuming that most of the rows are actually in clustering sequence. Again, in the DB2 world, we have to make sure that we leave sufficient freespace so that new rows *can* go on the same page as rows with the same or adjacent keys and we have to reorganize our tables occasionally to get all the rows back into clustering order and re-establish the necessary margin of freespace. Does InnoDB clustering work the same way? I'm afraid I haven't had any need for clustering indexes in my project so far so I haven't had to look for the information. Rhino - Original Message - From: Heikki Tuuri [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Saturday, June 19, 2004 6:12 AM Subject: Re: Clustered Index - Where is data inserted ? Not Answered Paul, - Original Message - From: Paul Chu [EMAIL PROTECTED] Newsgroups: mailing.database.myodbc Sent: Saturday, June 19, 2004 5:07 AM Subject: RE: Clustered Index - Where is data inserted ? Not Answered Appreciate any help at all Thanks, Paul -Original Message- From: Paul Chu [mailto:[EMAIL PROTECTED] Sent: Friday, June 18, 2004 10:16 AM To: [EMAIL PROTECTED] Subject: Clustered Index - Where is data inserted ? Hi, Can someone explain how the data is stored in a table using a clustered index. Does this mean that the data is inserted in the .myd file in sorted index order ? MyISAM does not have clustered indexes. 
InnoDB has a clustered index on every table, and normally it is on the PRIMARY KEY. If so, how is space made to insert the new records ? Does this make inserting records slow because data is being inserted in physical sort order in the table ? For large tables, people usually insert new rows at the high end of the PRIMARY KEY, since the key is usually a generated id. Splitting B-tree pages at the index end is very efficient. In this case having a clustered index poses no overhead. But if you insert records randomly on the PRIMARY KEY, then B-tree page splits waste disk space, and that will reduce performance somewhat. Thanks, Paul Best regards, Heikki Tuuri Innobase Oy Foreign keys, transactions, and row level locking for MySQL InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM tables http://www.innodb.com/order.php Order MySQL technical support from https://order.mysql.com/ -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:
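Heikki's point about inserting at the high end of the PRIMARY KEY versus inserting randomly can be illustrated with a toy page-split model. This is my own crude sketch, not InnoDB's actual split algorithm: pages hold a fixed number of keys, an overflowing rightmost page is left completely full when the new key is the largest so far, and an overflowing interior page is split in half (leaving both halves about 50% full, which is the wasted space Heikki describes):

```python
import bisect
import random

PAGE_CAPACITY = 100  # keys per page in this toy model

def insert(pages, key):
    """Insert key into a list of sorted pages, splitting on overflow."""
    # Find the page whose key range should hold this key.
    target = len(pages) - 1
    for i in range(len(pages) - 1):
        if key < pages[i + 1][0]:
            target = i
            break
    page = pages[target]
    bisect.insort(page, key)
    if len(page) > PAGE_CAPACITY:
        if target == len(pages) - 1 and page[-1] == key:
            # Insert at the very end of the index: leave the old page
            # completely full and start a fresh rightmost page.
            pages.append([page.pop()])
        else:
            # Interior split: halve the page, leaving both halves ~50% full.
            half = len(page) // 2
            pages.insert(target + 1, page[half:])
            del page[half:]

def fill_factor(pages):
    """Fraction of allocated page space actually holding keys."""
    used = sum(len(p) for p in pages)
    return used / (len(pages) * PAGE_CAPACITY)

random.seed(0)
n = 10_000

seq_pages = [[]]
for k in range(n):
    insert(seq_pages, k)              # monotonically increasing id

rand_pages = [[]]
for k in random.sample(range(10 * n), n):
    insert(rand_pages, k)             # random primary key values

print(f"sequential fill factor: {fill_factor(seq_pages):.2f}")  # ~1.00
print(f"random fill factor:     {fill_factor(rand_pages):.2f}")  # noticeably lower
```

The sequential run packs pages essentially full, while the random run leaves a substantial fraction of each page empty after splits, matching the "page splits waste disk space" observation above.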
Re: Clustered Index - Where is data inserted ? Not Answered
- Original Message - From: Paul Chu [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Saturday, June 19, 2004 10:55 PM Subject: RE: Clustered Index - Where is data inserted ? Not Answered Hi Rhino, Thanks for that explanation of how and why to use a clustered index ! :) Again, I really haven't looked into how MySQL/InnoDB handles clustered indexes. I told you how they work in DB2, which I know much better than MySQL/InnoDB. Before using anything I say, I strongly advise you to check the MySQL/InnoDB documentation to see how *they* handle clustered indexes; it might not be the same. I am just a complete beginner with MySQL. I have extensive experience with SQL Server, some Oracle and some DB2. It seems that if I want to create a very large master table with 300 million rows using MyISAM I could use an AUTO_INCREMENT column as the PK and create indexes on the columns I'm searching by: e.g. order date Remember, clustering indexes have nothing to do with your PK or other indexes. Again, assuming MySQL/InnoDB works the same as DB2, a clustering index has all the same attributes as any other index PLUS one other attribute: it controls the placement of new rows in a table. Which columns should comprise your primary key and what other non-clustering indexes you should create still need to be decided. The primary key is probably the most important decision you need to make: remember, it could be single or multi-column but it must be unique and there can't be nulls in any part of the primary key. (Again, those are the rules for DB2; things may differ in MySQL for all I know.) The choice of primary key normally results from normalization; typically, you put your data into Third (or Fourth) Normal Form and then denormalize on a case-by-case basis if it seems appropriate. I've already explained the key thinking behind the selection of the clustering index. There's one other thing to remember about clustering indexes: you can only have one of them on a given table! 
That should be obvious if you think about it for a second: since the clustering index influences the physical placement of data and you can only store the data in one order, naturally there can only be one clustering index. Other non-clustering indexes are often chosen on the basis of normalization or simply familiarity with the data. For instance, most people would intuitively put an index on Social Security Number because they know it is unique and it could easily be something that we know about an employee so that we can look up his records. Other non-clustering indexes will be less obvious. They may or may not be unique. They may be single-column or multi-column. Their main job though will be to help you find things faster. In other words, they are a performance technique. In the case of DB2 (and hopefully MySQL/InnoDB), any query will return the right answer, even if you don't have any indexes. However, having an appropriate index should get you the answer (considerably) more quickly, assuming the optimizer actually uses the index. (That is not always a sure thing in DB2 - usually for good reasons - but I *think* that MySQL lets you specify that you want to use a specific index in a query.) Since I have no idea of how the MySQL optimizer works, I won't make any further remarks about it at this time for fear of putting my foot really deep into my mouth ;-) Accordingly, new records are inserted at the end of the table and this would make MASS inserts ( millions of rows ) very quick. I hope this assessment is correct ? It *should* work the way you say, again assuming that clustering works the same in MySQL as in DB2. But, as always with databases, you should definitely test your design with a test database first and then monitor carefully as you scale up the amount of data to be sure that you don't get any rude surprises. And be sure to get someone like Heikki to verify that what I've said is applicable to MySQL/InnoDB. 
I don't want to lead you astray with inappropriate extrapolations from DB2! Rhino -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
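Rhino's point that "any query will return the right answer even without indexes, but an appropriate index gets it more quickly" is easy to see with the ORDER_DATE report he described earlier. A small sketch, again using Python's sqlite3 as a stand-in (the table, index name, and sample rows are hypothetical, and SQLite has no clustering, but the date-range report shape is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders ("
    " order_number INTEGER PRIMARY KEY,"
    " order_date TEXT NOT NULL,"
    " amount REAL)"
)
# The non-unique index that a date-clustered table would exploit.
conn.execute("CREATE INDEX idx_order_date ON orders (order_date)")
conn.executemany(
    "INSERT INTO orders (order_date, amount) VALUES (?, ?)",
    [("2004-06-18", 10.0), ("2004-06-19", 25.0),
     ("2004-06-19", 5.0), ("2004-06-21", 40.0)],
)

# The real-world report: all orders in a given date range.  With rows
# clustered on order_date, these rows would sit on adjacent pages.
rows = conn.execute(
    "SELECT order_number, amount FROM orders "
    "WHERE order_date BETWEEN ? AND ? ORDER BY order_date",
    ("2004-06-19", "2004-06-20"),
).fetchall()
print(rows)  # the two orders placed on 2004-06-19
```

The query returns the same rows with or without `idx_order_date`; the index (and, in DB2 or InnoDB, physical clustering on the date) only changes how much I/O it takes to find them.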
Re: AUTO_INCREMENT problem... ER_DUP_ENTRY? (No, it's not a one byte index :)
At 17:16 -0700 on 06/18/2004, Kevin Brock wrote about AUTO_INCREMENT problem... ER_DUP_ENTRY? (No, it's not a one byte index :): We have a table with a primary index which is INT NOT NULL AUTO_INCREMENT. After inserting ~87,000,000 entries, we started seeing error 1062, ER_DUP_ENTRY. You are wasting half your range by not declaring the column UNSIGNED. Not your problem (I think), but still an error. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
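The "half your range" remark is just arithmetic on MySQL's 4-byte INT type, and it also shows why ~87 million rows can't be range exhaustion:

```python
# Ranges of MySQL's 4-byte INT column type.
SIGNED_INT_MAX = 2**31 - 1      # 2,147,483,647
UNSIGNED_INT_MAX = 2**32 - 1    # 4,294,967,295

rows_inserted = 87_000_000

# Declaring the column UNSIGNED roughly doubles the usable
# AUTO_INCREMENT range, since negative ids are never generated.
print(SIGNED_INT_MAX)    # 2147483647
print(UNSIGNED_INT_MAX)  # 4294967295

# ~87 million is nowhere near either limit, so the ER_DUP_ENTRY
# must have some other cause -- as the poster says, "not your problem".
print(rows_inserted < SIGNED_INT_MAX)  # True
```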