Re: What is ROWNUM of oracle in mysql
Not the exact equivalent, but if you are looking at a query like:

select * from mytable where rownum = 8;

you can do:

select * from mytable limit 7,1;

The general form is SELECT ... [LIMIT [offset,] rows] -- see http://www.mysql.com/doc/S/E/SELECT.html

- Original Message -
From: Ramaraju.R.V [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, February 05, 2002 5:17 AM
Subject: What is ROWNUM of oracle in mysql

Hi, What is the alternative in MySQL for ROWNUM of Oracle? Thanks, Rama Raju

- Before posting, please check: http://www.mysql.com/manual.php (the manual) http://lists.mysql.com/ (the list archive) To request this thread, e-mail [EMAIL PROTECTED] To unsubscribe, e-mail [EMAIL PROTECTED] Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
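The offset arithmetic above (row 8 means offset 7, count 1) can be sketched quickly. This uses Python's sqlite3 purely as a stand-in interpreter -- SQLite happens to accept the same LIMIT offset, count form as MySQL -- and the table and values are made up for illustration:

```python
import sqlite3

# Toy stand-in for "mytable"; SQLite accepts MySQL's LIMIT offset, count form.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO mytable (val) VALUES (?)",
                 [(f"row{i}",) for i in range(1, 11)])

# Oracle's "WHERE rownum = 8" (the 8th row) becomes offset 7, count 1.
# An ORDER BY is added because LIMIT without one has no guaranteed order.
row = conn.execute("SELECT * FROM mytable ORDER BY id LIMIT 7,1").fetchone()
print(row)  # (8, 'row8')
```

Note that unlike Oracle's ROWNUM, LIMIT says nothing about row order by itself, so in practice an ORDER BY clause is needed to make "the 8th row" well-defined.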
Re: INSERTing duplicate values to a UNIQUE-indexed table
I want to thank everyone for their contributions on this thread, and especially Erik for posting the question. I'm in the middle of the same problem, and was about to write excessive code using Erik's suggestion No. 1. Thank you, thank you to all.

- Original Message -
From: Erik Price [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, February 04, 2002 1:10 PM
Subject: INSERTing duplicate values to a UNIQUE-indexed table

Hello, everyone. I have a slight dilemma, and was wondering what the standard workaround is. I have three tables: owners (auto_increment primary key is owners_id), objects (auto_increment primary key is objects_id), and owners_objects, a foreign key table that I created under advice from someone on this list a while back whose email address has changed. There are two columns in owners_objects, owners_id and objects_id, and there are two unique indexes on the table: owners_id / objects_id and objects_id / owners_id. This is to keep duplicate combinations out of this table, since they would only take up extra disk space.

I am designing an application in PHP which stores the relationship between an Owner and an Object using the owners_objects table in a many-to-many relationship. When someone adds a new owner, they can choose from an HTML listbox any number of objects to associate with that owner. The PHP code creates an INSERT statement that inserts the data into owners, then takes the auto_incremented primary key of the last insert (the insert into owners) and uses that as the value for the second INSERT statement, the insert into owners_objects.owners_id. In this second INSERT statement, the objects_id of the Object(s) selected from the listbox go into the second column of owners_objects.

I am sure that many people have done this sort of setup. But what do you do to get around the problem of INSERTing a pair of values that already exists?
Because the combinations in owners_objects are UNIQUE (the UNIQUE indexes), MySQL won't accept a pair that is already present. I see two possible options:

1) Check to see whether the combination is already present, and if so, do not run the INSERT query.
2) Run the INSERT query regardless and suppress the error message.

The disadvantage of the first is that it adds an extra SQL query to the process. The disadvantage of the second is that I think it is somewhat tasteless to execute code that will knowingly error -- or should I just stop trying to be such a perfectionist? I would post code, but this is all pseudocode right now because I haven't solved this dilemma yet -- all experimentation with this has been done from the mysql client. Thanks for your advice!

Erik

- Before posting, please check: http://www.mysql.com/manual.php (the manual) http://lists.mysql.com/ (the list archive) To request this thread, e-mail [EMAIL PROTECTED] To unsubscribe, e-mail [EMAIL PROTECTED] Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
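Erik's option 2 can also be done without surfacing an error at all: MySQL's INSERT IGNORE downgrades the duplicate-key error to a warning, so the statement simply skips pairs that already exist. A minimal sketch of that behaviour, using Python's sqlite3 as a stand-in (SQLite spells it INSERT OR IGNORE; the table definition is simplified from the one described above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simplified owners_objects with the unique pair constraint described above.
conn.execute("""CREATE TABLE owners_objects (
    owners_id  INTEGER,
    objects_id INTEGER,
    UNIQUE (owners_id, objects_id))""")

# The first insert succeeds; the second is a duplicate pair and is
# silently skipped instead of raising a duplicate-key error.
# MySQL equivalent: INSERT IGNORE INTO owners_objects VALUES (1, 42);
conn.execute("INSERT OR IGNORE INTO owners_objects VALUES (1, 42)")
conn.execute("INSERT OR IGNORE INTO owners_objects VALUES (1, 42)")

count = conn.execute("SELECT COUNT(*) FROM owners_objects").fetchone()[0]
print(count)  # 1
```

This gives the effect of option 2 (one query, no pre-check) without knowingly executing a statement that errors.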
Re: Problem with query
What kind of code? I typically send each of those in as separate queries.

- Original Message -
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, February 01, 2002 8:36 AM
Subject: Problem with query

Hi, I have a problem with transaction controls. When I run the query begin; select * from table; commit; it runs perfectly in the shell, but in my code it returns a syntax error. Can somebody help me?

- Before posting, please check: http://www.mysql.com/manual.php (the manual) http://lists.mysql.com/ (the list archive) To request this thread, e-mail [EMAIL PROTECTED] To unsubscribe, e-mail [EMAIL PROTECTED] Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
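The "send each statement separately" advice reflects the fact that most client APIs accept only one statement per call, while the shell happily splits on semicolons. A sketch of the split version using Python's sqlite3 as a generic DB-API stand-in (with isolation_level=None the driver passes transaction-control statements through untouched; the same splitting applies to a MySQL client library):

```python
import sqlite3

# isolation_level=None stops the driver from managing transactions itself,
# so BEGIN and COMMIT can be sent explicitly, one statement per execute().
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (x INTEGER)")

conn.execute("BEGIN")                               # first query
conn.execute("INSERT INTO t VALUES (1)")            # second query
rows = conn.execute("SELECT * FROM t").fetchall()   # third query
conn.execute("COMMIT")                              # fourth query

print(rows)  # [(1,)]
```

Sending the whole string "begin; select * from table; commit;" in one call is exactly what produces the syntax error the original poster saw.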
BDB Transactions and Deleted Data
Has anyone had the following problem or anything related to it? I've had some problems where my data gets deleted after I add or drop a column. The scenario is as follows:

1) mysql daemon is running.
2) Make a structure change to the table (add or drop a column).
3) Stop and restart the mysql daemon.
4) Look at the table: the data is gone.
5) Stop and restart the mysql daemon again.
6) Look at the table: the data is back.

I'm also running safe_mysqld as recommended. Is there some sequence of events that must happen for a proper shutdown and restart of the mysql daemon? Or is there some process I need to run before I call an ALTER TABLE to change a table structure? I thought maybe this is associated with the flush log/checkpoint process that happens with BDB tables? Can anyone confirm this?

--Terribly confused.

- Before posting, please check: http://www.mysql.com/manual.php (the manual) http://lists.mysql.com/ (the list archive) To request this thread, e-mail [EMAIL PROTECTED] To unsubscribe, e-mail [EMAIL PROTECTED] Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
Re: bdb table crashes?
I've had many problems with BDB table crashes. Up until now I haven't really cared: running safe_mysqld, the database just restarts and things proceed as normal. But recently I've had some strange behaviour where table data is deleted after an ALTER TABLE statement. Looking through the error logs, I found many cases much like yours, but NOT due to delete queries -- mine were just select queries. I don't know if the two problems (the crashes and the data deletion) are related at all. I'm running MySQL server version 3.23.36. Here is one sample of what I'm getting:

--
mysqld got signal 11;
The manual section 'Debugging a MySQL server' tells you how to use a stack trace and/or the core file to produce a readable backtrace that may help in finding out why mysqld died. Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
Stack range sanity check OK, backtrace follows:
0x4007d0fe 0x810afb0 0x81554ca 0x819bc49 0x819108a 0x818e9ea 0x81699d6 0x81659f0 0x8109119 0x80e2776 0x80e252b 0x80d9bcf 0x80c0cd3 0x80c3d8b 0x80c008f 0x80bf599
Stack trace successful, trying to get some variables. Some pointers may be invalid and cause the dump to abort...
thd->query at 0x82a9338 = Select l.name l_name , s.name s_name, description, shopa_notes, not_allowed, year from list l, school s, grade g where list_id = 50 and s.school_id=l.school_id and g.grade_id=l.grade_id
thd->thread_id = 10283
Successfully dumped variables. If you ran with --log, take a look at the details of what thread 10283 did to cause the crash. In some cases of really bad corruption, this value may be invalid. Please use the information above to create a repeatable test case for the crash, and send it to [EMAIL PROTECTED]
Number of processes running now: 0
011217 13:17:06 mysqld restarted

- Original Message -
From: Geoffrey Soh [EMAIL PROTECTED]
To: Mysql@Lists.Mysql.Com [EMAIL PROTECTED]
Sent: Monday, January 21, 2002 10:27 PM
Subject: bdb table crashes?

Hi, Having some problems with a particular BDB table, which crashes once in a while. Just wondering if anyone has experienced such BDB table crashes? The relevant info is included below for troubleshooting. Let me know if I need to send anything else. Appreciate any help. Thanks!

Description: BDB table crashes on a delete query. Such delete queries execute flawlessly, but once in a while the table crashes on a delete and causes a restart of MySQL.
How-To-Repeat: Not very repeatable; happens intermittently and at random.
Fix: No fix yet.
Submitter-Id: submitter ID
Originator:
Organization: Ufinity Pte Ltd
MySQL support: none
Synopsis: bdb table crashes
Severity: serious
Priority: medium
Category: mysql
Class: sw-bug
Release: mysql-3.23.42 (Source distribution)
Environment: machine, os, target, libraries (multiple lines)
System: Linux axle 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown
Architecture: i686
Some paths: /usr/local/bin/perl /usr/bin/make /usr/bin/gmake /usr/bin/gcc /usr/bin/cc
GCC: Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/2.96/specs gcc version 2.96 2731 (Red Hat Linux 7.1 2.96-81)
Compilation info: CC='gcc' CFLAGS='' CXX='c++' CXXFLAGS='' LDFLAGS=''
LIBC:
lrwxrwxrwx 1 root root 13 Sep 12 02:51 /lib/libc.so.6 -> libc-2.2.2.so
-rwxr-xr-x 2 root root 1236396 Apr 7 2001 /lib/libc-2.2.2.so
-rw-r--r-- 1 root root 26350254 Apr 7 2001 /usr/lib/libc.a
-rw-r--r-- 1 root root 178 Apr 7 2001 /usr/lib/libc.so
Configure command: ./configure --prefix=/home/mysql/mysql-lite --localstatedir=/home/mysql/data/mysql-lite --with-berkeley-db --with-raid --enable-assembler --with-mysqld-ldflags=-all-static

Stack Trace:
0x806c666 handle_segfault__Fi + 258
0x818d3ee btr_cur_pessimistic_update + 386
0x80b4c6e delete_row__12ha_myisammrgPCc + 14
0x80feb28 txn_abort + 184
0x81429c2 __bam_repl_recover + 566
0x8138859 __bam_pgout + 41
0x8135204 __qam_c_get + 1008
0x8111a66 __db_rename + 926
0x80b6b65 update_row__11ha_berkeleyPCcPc + 1153
0x80b6d5a update_row__11ha_berkeleyPCcPc + 1654
0x809b2e1 generate_table__FP3THDP13st_table_listP8st_table + 673
0x8074115 mysql_execute_command__Fv + 5705
0x8075a73 my_yyoverflow__FPPsPP7YYSTYPEPi + 31
0x8072082 do_command__FP3THD + 862
0x80715a4 handle_one_connection__FPv + 180

Schema from table:

mysql> desc user_status;
+----------+--------------+------+-----+---------+-------+
| Field    | Type         | Null | Key | Default | Extra |
+----------+--------------+------+-----+---------+-------+
| userid   | varchar(255) |      | PRI |         |       |
| domain   | varchar(150) |      | PRI |         |       |
| password | varchar(100) | YES  |     | NULL    |
Database Maintenance Routines?
Hi, I'm new to the database maintenance role. Can anyone tell me the basic things that need to be done to keep my database healthy? For example, I've heard that it's good to regularly rebuild indexes. How frequently should that be done, and what is the best way to go about it?

- Original Message -
From: Kalok Lo [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; Egor Egorov [EMAIL PROTECTED]
Sent: Thursday, January 24, 2002 11:51 AM
Subject: Re: how to repair bdb tables?

I'm having the same need. It looks like the following is intended for MyISAM tables only: http://www.mysql.com/doc/U/s/Using_mysqlcheck.html Has anyone else had any luck?

- Original Message -
From: Egor Egorov [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, January 23, 2002 9:41 AM
Subject: how to repair bdb tables?

Hello Christian,

Wednesday, January 23, 2002, 2:03:18 PM, you wrote:
CH> Hello
CH> I just wondered how to repair/check a berkeley db (bdb) table. I cannot
CH> use (my)isamchk and neither the check table/repair table commands.
CH> At least without a check facility the transaction support for MySQL is
CH> not very usable for production servers (innodb seems to be too buggy yet,
CH> I saw several entries in the change log in every one of the last few releases)

Also look at the output of: mysqlcheck --help

--
For technical support contracts, goto https://order.mysql.com/
This email is sponsored by Ensita.net http://www.ensita.net/
Egor Egorov [EMAIL PROTECTED]
MySQL AB / Ensita.net
www.mysql.com

- Before posting, please check: http://www.mysql.com/manual.php (the manual) http://lists.mysql.com/ (the list archive) To request this thread, e-mail [EMAIL PROTECTED] To unsubscribe, e-mail [EMAIL PROTECTED] Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
Re: Why does DISTINCT take so long time ??
Anvar had some very good explanations about the time it takes to run the queries. Here are some workarounds.

If you need to have these columns (mot, date, numreponse) in the GROUP BY clause, try putting an index on each of them to speed things up:

mysql> alter table searchhardwarefr3 add index idx_mot(mot);
(... and so on for the other columns.)

This should speed up the first query for sure. If the second query is still slow (I'm not sure about the exact details of MySQL, so this might or might not make a difference), put the results from your first query into a temporary table (mytemp):

mysql> create temporary table mytemp SELECT COUNT(*) as count, numreponse FROM searchhardwarefr3 GROUP BY mot,date,numreponse HAVING count > 1 LIMIT 100;

Then explicitly index both columns:

mysql> alter table mytemp add index idx_count(count);
mysql> alter table mytemp add index idx_numreponse(numreponse);

and then run the following query:

mysql> select distinct count, numreponse from mytemp;

By the way, I don't think the HAVING clause is redundant. Good luck.

- Original Message -
From: Anvar Hussain K.M. [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, January 19, 2002 2:27 AM
Subject: Re: Why does DISTINCT take so long time ??

Hi, Yes, the reason for the time difference is that for the DISTINCT query, as Sinisa noted, it has to reiterate. For the output to be generated, the rows first have to be ordered (in this case, since COUNT(*) is given, every column should be present in the comparison) using a temp table (or any other mechanism to keep rows ordered). For the first query there should also be an intermediate temp table to order rows, but there it is only necessary to consider the columns in the GROUP BY clause. Considering these, a rough and primitive estimate of the time taken to execute the query can be found. Suppose the average length of a row is 300 bytes and the three columns in the GROUP BY clause take 30 bytes on average.
Then if the first query takes 15 minutes, the second query will take 150 minutes. This may not be the real scenario with MySQL, but something similar. The HAVING clause, I feel, is redundant. The 15 minutes for the first query seems too much; perhaps indexing might help. Hope somebody else has a better explanation.

Anvar.

At 02:41 AM 18/01/2002 +0100, you wrote:
Hi, I've noticed that sometimes the DISTINCT clause takes a really large amount of time to remove duplicates, whereas it should be really quick (I assume it should be ;)). My first query is:

mysql> SELECT COUNT(*) as count, numreponse FROM searchhardwarefr3 GROUP BY mot,date,numreponse HAVING count > 1 LIMIT 100;

It returns:

+-------+------------+
| count | numreponse |
+-------+------------+
|     2 |     111239 |
|     2 |     108183 |
|     2 |         73 |
|     2 |     111383 |
(cut)
|     2 |     111239 |
|     2 |     111760 |
|     3 |     109166 |
|     2 |         09 |
|     3 |     109166 |
+-------+------------+
58 rows in set (14 min 51.15 sec)

My second query is:

mysql> SELECT DISTINCT COUNT(*) as count, numreponse FROM searchhardwarefr3 GROUP BY mot,date,numreponse HAVING count > 1 LIMIT 100;

Well, I'm not patient enough to wait, but when I stopped the query it had been running for more than 3500 seconds (and more than 45 minutes in the 'Removing duplicates' state)...
mysql> EXPLAIN SELECT DISTINCT COUNT(*) as count, numreponse FROM searchhardwarefr3 GROUP BY mot,date,numreponse HAVING count > 1 LIMIT 100;
+-------------------+-------+---------------+---------+---------+------+---------+------------------------------+
| table             | type  | possible_keys | key     | key_len | ref  | rows    | Extra                        |
+-------------------+-------+---------------+---------+---------+------+---------+------------------------------+
| searchhardwarefr3 | index | NULL          | PRIMARY | 75      | NULL | 2026032 | Using index; Using temporary |
+-------------------+-------+---------------+---------+---------+------+---------+------------------------------+
1 row in set (0.00 sec)

mysql> EXPLAIN SELECT COUNT(*) as count, numreponse FROM searchhardwarefr3 GROUP BY mot,date,numreponse HAVING count > 1 LIMIT 100;
+-------------------+-------+---------------+---------+---------+------+---------+------------------------------+
| table             | type  | possible_keys | key     | key_len | ref  | rows    | Extra                        |
+-------------------+-------+---------------+---------+---------+------+---------+------------------------------+
| searchhardwarefr3 | index | NULL          | PRIMARY | 75      | NULL | 2026032 | Using index; Using temporary |
+-------------------+-------+---------------+---------+---------+------+---------+------------------------------+
1 row in set (0.00 sec)

Why does it take so much time to remove duplicates in only 58 rows?

Thank you :) Regards,
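The query being tuned in this thread counts duplicate (mot, date, numreponse) combinations with GROUP BY ... HAVING. A tiny sketch of the same pattern, with Python's sqlite3 standing in for MySQL and a few made-up rows so the grouping behaviour is visible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE searchhardwarefr3 (mot TEXT, date TEXT, numreponse INTEGER)")
conn.executemany("INSERT INTO searchhardwarefr3 VALUES (?,?,?)", [
    ("cpu", "2002-01-18", 111239),
    ("cpu", "2002-01-18", 111239),   # a duplicate combination
    ("ram", "2002-01-18", 108183),
])

# Same shape as the query in the thread: group on all three columns,
# keep only combinations that occur more than once.
dups = conn.execute("""
    SELECT COUNT(*) AS count, numreponse
    FROM searchhardwarefr3
    GROUP BY mot, date, numreponse
    HAVING COUNT(*) > 1
""").fetchall()
print(dups)  # [(2, 111239)]
```

This also illustrates why DISTINCT on top of this result is (usually) extra work: the grouping has already collapsed each (mot, date, numreponse) combination to one output row, so DISTINCT can only merge rows that happen to agree on both output columns.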
Database locks up on certain queries. Is this intentional ?
Hello, I'm experiencing a problem with my database locking up on some queries. Any explanation or solutions anyone can provide will be much appreciated.

Scenario: three tables: company, co_type_assoc, co_type, with data pertaining to co_id in (0,1) as follows:

mysql> select * from co_type;
+------------+------------------+------+-----------------------+
| co_type_id | co_super_type_id | code | description           |
+------------+------------------+------+-----------------------+
|          1 |                4 | AC   | Associate Consultant  |
|          2 |                4 | AL   | Associate Life Member |
|          3 |                4 | AM   | Associate Media       |
+------------+------------------+------+-----------------------+
3 rows in set (0.00 sec)

mysql> select * from company where co_id = 0;
Empty set (0.00 sec)

mysql> select * from company where co_id = 1;
+-------+-------+-------+-----------+-------+---------+----------------+
| co_id | name1 | name2 | sort_name | email | website | pri_co_type_id |
+-------+-------+-------+-----------+-------+---------+----------------+
|     1 | SHOPA |       |           |       |         |              1 |
+-------+-------+-------+-----------+-------+---------+----------------+
1 row in set (0.00 sec)

mysql> select * from co_type_assoc where co_id in (0,1);
Empty set (0.00 sec)

When I run the following query, there is no problem:

mysql> Select cta.co_type_id , ct.description from company c, co_type_assoc cta, co_type ct where c.co_id = cta.co_id and ct.co_type_id = cta.co_type_id and c.co_id = 1;

But if I run this query, the entire database locks up:

mysql> Select cta.co_type_id , ct.description
    -> from company c, co_type_assoc cta, co_type ct
    -> where c.co_id = cta.co_id
    -> and ct.co_type_id = cta.co_type_id
    -> and c.co_id = 0;
ERROR 1015: Can't lock file (errno: -30989)

On examining the queries using EXPLAIN, I get the following:

mysql> explain
    -> Select cta.co_type_id , ct.description
    -> from company c, co_type_assoc cta, co_type ct
    -> where c.co_id = cta.co_id
    -> and ct.co_type_id = cta.co_type_id
    -> and c.co_id = 0;
+-----------------------------------------------------+
| Comment                                             |
+-----------------------------------------------------+
| Impossible WHERE noticed after reading const tables |
+-----------------------------------------------------+

mysql> explain
    -> Select cta.co_type_id , ct.description
    -> from company c, co_type_assoc cta, co_type ct
    -> where c.co_id = cta.co_id
    -> and ct.co_type_id = cta.co_type_id
    -> and c.co_id = 1;
+-------+--------+---------------+---------+---------+----------------+------+-------------------------+
| table | type   | possible_keys | key     | key_len | ref            | rows | Extra                   |
+-------+--------+---------------+---------+---------+----------------+------+-------------------------+
| c     | const  | PRIMARY       | PRIMARY | 4       | const          |    1 |                         |
| cta   | ref    | PRIMARY       | PRIMARY | 4       | const          |    1 | where used; Using index |
| ct    | eq_ref | PRIMARY       | PRIMARY | 1       | cta.co_type_id |    1 |                         |
+-------+--------+---------------+---------+---------+----------------+------+-------------------------+
3 rows in set (0.00 sec)

So my questions are:
1) Why does the database lock up?
2) Is this intentional, i.e. is the optimizer smarter than I want?
3) How can I overcome this problem?

Any help anyone can provide will be very much appreciated. Thank you.

Kalok

- Before posting, please check: http://www.mysql.com/manual.php (the manual) http://lists.mysql.com/ (the list archive) To request this thread, e-mail [EMAIL PROTECTED] To unsubscribe, e-mail mysql-unsubscribe-##L=##[EMAIL PROTECTED] Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
Re: joining a string and a value to a variable
There's a concat(string1, string2) function, and also concat_ws(',', string1, string2) as newstring -- concat with a comma separator, aliased as newstring. See http://www.mysql.com/doc/S/t/String_functions.html for more useful string functions.

- Original Message -
From: sonicstate [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, October 26, 2001 9:00 AM
Subject: joining a string and a value to a variable

Hi, I'm new here, so I may as well jump right in with a question. Anyone know the equivalent of 's' + convert(varchar,id) as dkey? The result I get in MSSQL 7 is s1, s2, s3 etc. I have tried 's'+id as dkey but just get 1, 2, 3 etc. I need to concatenate the two values ('s' and id) inside a query to populate a Verity collection.

Nick Batt - CTO Sonic State Ltd. http://www.sonicstate.com [EMAIL PROTECTED] tel:+44 (0)1225 442546

- Before posting, please check: http://www.mysql.com/manual.php (the manual) http://lists.mysql.com/ (the list archive) To request this thread, e-mail [EMAIL PROTECTED] To unsubscribe, e-mail [EMAIL PROTECTED] Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
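So the MSSQL expression 's' + convert(varchar, id) as dkey becomes CONCAT('s', id) AS dkey in MySQL, with the numeric id coerced to a string automatically. A quick sketch of the same coercion, using Python's sqlite3 as a stand-in (SQLite spells concatenation with the || operator; the items table is made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO items VALUES (?)", [(1,), (2,), (3,)])

# MySQL equivalent: SELECT CONCAT('s', id) AS dkey FROM items;
# SQLite's || operator coerces the integer id to text, just as CONCAT does.
keys = [r[0] for r in conn.execute("SELECT 's' || id AS dkey FROM items ORDER BY id")]
print(keys)  # ['s1', 's2', 's3']
```

This is exactly the s1, s2, s3 series the original poster was after; plain 's'+id fails because + in MySQL is numeric addition, which converts 's' to 0.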
mysql_errno: 1015, can't lock file(errno: -30989)
I've encountered this problem several times on complicated queries that return no rows:

mysql_errno: 1015
mysql_error: Can't lock file (errno: -30989)

After this error, I can't run queries against any associated tables, and the only way I know to get around it is to restart mysqld. Does anyone know why this is?

- Before posting, please check: http://www.mysql.com/manual.php (the manual) http://lists.mysql.com/ (the list archive) To request this thread, e-mail [EMAIL PROTECTED] To unsubscribe, e-mail [EMAIL PROTECTED] Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
Re: Comma delimited files (CSV files) automatically inserted into database?
Check out this page. You still have to create the table in MySQL first; then you can run the LOAD DATA INFILE command: http://www.mysql.com/doc/L/O/LOAD_DATA.html

- Original Message -
From: Todd Williamsen [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, October 24, 2001 7:14 PM
Subject: Comma delimited files (CSV files) automatically inserted into database?

Is this possible? I tried it, and the format is all garbled... I have an application that outputs data into a CSV file, and I was wondering about a way to have it automatically inserted into a database, formatted so the results can be queried.

Thank you, Todd Williamsen, MCSE home: 847.265.4692 Cell: 847.867.9427

- Before posting, please check: http://www.mysql.com/manual.php (the manual) http://lists.mysql.com/ (the list archive) To request this thread, e-mail [EMAIL PROTECTED] To unsubscribe, e-mail [EMAIL PROTECTED] Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
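If LOAD DATA INFILE keeps garbling the file (often a field- or line-terminator mismatch), a client-side loader is a workable fallback: parse the CSV in the application and INSERT the rows. A sketch using Python's csv module, with sqlite3 standing in for MySQL and made-up inline CSV content (in practice this would read from the real file with open("...", newline="")):

```python
import csv
import io
import sqlite3

# Hypothetical CSV content; a real loader would open the file instead.
csv_text = "name,qty\nwidget,3\ngadget,7\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (name TEXT, qty INTEGER)")

reader = csv.reader(io.StringIO(csv_text))
next(reader)  # skip the header row
# The csv module handles quoting and embedded commas, which is usually
# what goes wrong with hand-rolled delimiter handling.
conn.executemany("INSERT INTO inventory VALUES (?, ?)", reader)

rows = conn.execute("SELECT name, qty FROM inventory ORDER BY name").fetchall()
print(rows)  # [('gadget', 7), ('widget', 3)]
```

The same loop works against MySQL by swapping the connection for a MySQL client library and keeping parameterized INSERTs, though LOAD DATA INFILE will be much faster for large files once the terminators are specified correctly.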