Re: Proper parens in OR searches
on 8/11/04 9:48 PM, Michael Stassen at [EMAIL PROTECTED] wrote:

  With parens:
  SELECT id, name FROM listmail
  WHERE date_time > DATE_SUB(NOW(), INTERVAL 30 DAY)
  AND (subject = 'semaphore' OR subject = 'Re: semaphore' OR subject = 'Re:semaphore')
  ORDER BY id ASC LIMIT 60

  Without parens:
  SELECT id, name FROM listmail
  WHERE date_time > DATE_SUB(NOW(), INTERVAL 30 DAY)
  AND subject IN ('semaphore', 'Re: semaphore', 'Re:semaphore')
  ORDER BY id ASC LIMIT 60

  These two are perfectly equivalent. The latter is, of course, the same query as in your post. Why do you expect these 2 queries to be different with respect to SQL injection? It seems to me you need to validate your input either way. Perhaps if you reminded us what language you're using, showed us the insert, and told us what you mean by making MySQL cranky, someone could help you solve that problem.

Thanks! In regards to the injection, I was just having a hard time escaping the strings the way I wanted, and MySQL was not liking what I was up to:

  IN ('dadas', 'wewew')
  IN (\dadas\, \wewew\)

Etc., etc.; you get the idea of the path I was down :-) I finally figured out to simply quote and escape each inner string. I think I will stick with the IN style, since it is a little cleaner looking; not sure about performance. I tend not to mention the language I am using since I am pretty sure I am the only one on this list using it. It is called WebSiphon; it is a little like PHP, a little like BASIC, a little like C, my favorite, but not well known... Yet :-)
--
- Scott Haneda  Tel: 415.898.2602  Fax: 313.557.5052
http://www.newgeo.com  [EMAIL PROTECTED]  Novato, CA U.S.A.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
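The "quote and escape each inner string" approach can be sketched like this (a minimal illustration with made-up helper names, not WebSiphon code; a real driver's quoting function is preferable where available):

```python
# Sketch: build a quoted, escaped SQL IN (...) list from raw strings.
# Escapes backslashes and single quotes as MySQL's string syntax expects.
def escape(value: str) -> str:
    return value.replace("\\", "\\\\").replace("'", "\\'")

def in_clause(values):
    return "IN (%s)" % ", ".join("'%s'" % escape(v) for v in values)

print(in_clause(["semaphore", "Re: semaphore", "Re:semaphore"]))
# -> IN ('semaphore', 'Re: semaphore', 'Re:semaphore')
```

Each inner string is escaped first and only then wrapped in quotes, which is exactly the step order that avoids the `\dadas\` dead end above.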
Order by with one exception
MySQL 4; this has been bothering me for some time now... I made a mailing list archiver; I thread discussions by subject. I chose not to use message-ids since so many people hijack threads. In most cases, I ORDER BY id, which is simply an auto-inc that is set as messages come in, in order, via email. Occasionally, messages will be out of order due to email delivery issues. I tried date stamping them, but users' clocks are so messed up that never panned out :-)

Given this case:

  Subject                          ID
  RE: Order by with one exception   1
  RE: Order by with one exception   2
  RE: Order by with one exception   3
  Order by with one exception       4
  RE: Order by with one exception   5
  RE: Order by with one exception   6
  RE: Order by with one exception   7

As you can see, these are in correct order, but in this case, I want to push the one without the Re: to the top. I can not just order by subject, id, since a subject could start with a letter after R. Suggestions? Thanks
--
- Scott Haneda  Tel: 415.898.2602  Fax: 313.557.5052
http://www.newgeo.com  [EMAIL PROTECTED]  Novato, CA U.S.A.
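One common trick for this is to order by an expression before id, e.g. ORDER BY (subject LIKE 'Re%'), id: assuming MySQL evaluates the comparison to 0 or 1, the bare subject (0) sorts ahead of the replies (1). A Python sketch of the same sort key, using the rows from the example:

```python
# Mirrors ORDER BY (subject LIKE 'Re%'), id: rows whose subject does not
# start with "Re" sort first; ties are broken by the auto-increment id.
rows = [
    ("RE: Order by with one exception", 1),
    ("RE: Order by with one exception", 2),
    ("Order by with one exception", 4),
    ("RE: Order by with one exception", 5),
]

def key(row):
    subject, msg_id = row
    is_reply = subject.lower().startswith("re")  # like subject LIKE 'Re%'
    return (is_reply, msg_id)

ordered = sorted(rows, key=key)
print(ordered[0])  # -> ('Order by with one exception', 4)
```

The thread starter lands on top regardless of what letter the subject begins with.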
RE: Re: [OT] PostgreSQL / MySQL Data Dictionary
mysqldump --no-data --all-databases SNIP Eamon Daly

Yep Eamon, as mentioned, MyRun is not the only utility on earth with this functionality. The difference between mysqldump and MyRun is that while MyRun includes all the mysqldump functionality, MyRun can take ANY source script. Let's make an example: mysqldump is great for backing up complete database(s) with or without data. This is of course nice, except when you have 50M records in a table, because then you get a resulting script which is huge. So essentially they both do something like this to generate the insert-record SQL for data backup purposes:

  select * from accounts; -- as an example

but because you can customize the source SQL script for MyRun, you can go like:

  select * from accounts where AccountDate > YEAR(CURDATE());
  -- i.e. limit the inserts you are going to get to what is really important.

Also, because it takes a source script, you can limit the tables in a specific database to those with the important stuff you want to back up:

  ---
  use this-db;
  select * from accounts;     -- Yes, important
  select * from orders;       -- Yes, important
  -- select * from sessions;  -- No, skip this table completely
  select * from logs limit 0; -- Data not important, only capture schema
  ..
  -- Maybe do a little maintenance while we are busy?
  update accountpasswords set AccPassword=encrypt(AccPassword) where AccOpenDate < CURDATE();
  select * from accountpasswords;
  ..
  use that-db;
  select * from ...etc etc
  ---

The logs table is a good example of such tables: it contains temporary kinds of data and potentially a huge amount, e.g. millions of records. This would unnecessarily bloat the destination script file, so we limit it. Honestly, your backups are only limited by your imagination.

Kind Regards
SciBit MySQL Team
http://www.scibit.com
Re: [OT] PostgreSQL / MySQL Data Dictionary
Great, MyCon produces SQL statements ready to recreate just your schema and/or all data as well. Now, did I miss something, or does MyCon actually write the SQL one needs to create and populate a set of system tables for the schema? PB

Nope Peter, you didn't miss a beat ;) Just to be very clear, MyCon is the front-end GUI which automatically sets up scripts for MyRun to use for backups, or it can also optionally schedule MyRun to auto-execute these source scripts. The end-user hardly ever sees MyRun (except when the OS task scheduler fires it up). All the end-user ever does is click on a database (or a specific table) and click Backup; the rest just happens. MyRun is the command-line utility which actually does the hard yards. And just to state it again so there is no confusion: MyRun's target script is fully capable of recreating the full schema as well as populating it with data (insert statements) from whatever was selected in the source script.

NOTES:
* MyRun can also do this from a remote MySQL server.
* It can also execute the target script against another MySQL server/database instead of writing to a script file.
* MySQL V3.23 and higher.
* The target script can optionally contain USE db;, DROP table if exists tb1;, CREATE table if not exists tb1 ..., and your data using INSERT into tb1 (..columns..) values (...record1...),(...record2...),..; in 100-record batches (so 1,000 records will be contained in only 10 insert statements). These batch inserts are used because, from the MySQL Manual: "This is much faster (many times faster in some cases) than using separate single-row INSERT statements. If you are adding data to a non-empty table, you may tune the bulk_insert_buffer_size variable to make it even faster."

EXAMPLE: Every morning we have a scheduled MyRun which backs up our company's MySQL webdata from our remote webhost/ISP's MySQL server to our local network over an ADSL connection.
It contains tens of tables with thousands of records each, and from start to finish takes approx. 10 seconds (our webhost and we are on different continents). And to set all this up initially took a massive... one click! If our ISP drops our database by mistake, it will take us all of two seconds to recreate a complete snapshot of the database using the latest target script.

Kind Regards
SciBit MySQL Team
http://www.scibit.com
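The 100-record batching mentioned in the notes (1,000 records collapsing into 10 INSERT statements) can be sketched as follows; table and column names here are made up for illustration, and this is not MyRun's actual implementation:

```python
# Sketch: group rows into multi-row INSERT statements, 100 records per batch.
def batches(rows, size=100):
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

rows = [(n,) for n in range(1000)]  # 1,000 single-column records
stmts = ["INSERT INTO tb1 (id) VALUES %s;" % ",".join("(%d)" % r for (r,) in b)
         for b in batches(rows)]
print(len(stmts))  # -> 10
```

Each statement carries 100 value tuples, which is where the "many times faster" bulk-insert speedup quoted from the manual comes from.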
Re: On the licensing once again
What if a company hires me to build an application for them? They already have MySQL downloaded and installed. Do I/they need a license? What if the software is open-source, but not free?

----- Original Message -----
From: Lachlan Mulcahy [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, August 11, 2004 2:14 PM
Subject: RE: On the licensing once again

Essentially the spirit of the license is: if a company builds a system of some kind where the database facilities are provided by MySQL, and wishes to sell that system as a whole without a GPL or other accepted open license, then they will be required to purchase a license for each copy of the server they distribute. The company distributing the system would generally pay this to MySQL and include it in the costing/pricing of their product. Does that clarify things any further for you?

Regards,
Lachlan

-----Original Message-----
From: DebugasRu [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 11 August 2004 5:22 PM
To: [EMAIL PROTECTED]
Subject: On the licensing once again

1) LM> If your software is licensed under either the GPL-compatible Free Software
   LM> License as defined by the Free Software Foundation or approved by OSI, then
   LM> use our GPL licensed version.
2) LM> If you distribute a proprietary application in any way, and you are not
   LM> licensing and distributing your source code under GPL, you need to purchase
   LM> a commercial license of MySQL

To be honest I don't understand the dual licensing issues at all. Does the second part (2) apply to the developer of the proprietary software only, or does it apply to its users too? Why can't a user install and use MySQL under the GPL, and then install and use proprietary software under whatever licence he got it under? Or do you want to say that the end user can in general use MySQL under the GPL, but as soon as he tries to use it with that particular proprietary software then he can no longer use MySQL under the GPL?
This seems to contradict the GPL license terms.
MySQL won't start... was running fine
Hi, I have been running 4.1.3 on RedHat 9, installed from the MySQL RPM package. It had been running great until the server crashed for an unknown reason 2 days ago. Now, whenever I try to start MySQL, I get the following error in the log:

  040812 09:52:34 mysqld started
  040812  9:52:34 Warning: Asked for 196608 thread stack, but got 126976
  040812  9:52:34 InnoDB: Error: unable to create temporary file
  040812  9:52:34 InnoDB: Operating system error number 13 in a file operation.
  InnoDB: The error means mysqld does not have the access rights to
  InnoDB: the directory.
  InnoDB: File operation call: 'tmpfile'.
  InnoDB: Cannot continue operation.
  040812 09:52:34 mysqld ended

I have chown -R'd the /var/lib/mysql directory:

  -rw-rw      1 mysql mysql    0 Aug 12 09:59 innodb.status.4891
  drwx--x--x  2 mysql mysql 4096 Aug 11 09:57 mysql
  srwxrwxrwx  1 mysql mysql    0 Aug 12 09:59 mysql.sock
  -rw-rw      1 mysql root   442 Aug 12 09:59 slim.err
  drwxr-xr-x  2 mysql mysql 4096 Aug 11 09:57 test

I had changed nothing in the server; apart from the unexplained crash, everything was normal. Now I'm stuck. Anyone seen this error before?

Regards
Chris Blackwell
Lead Web Developer, UK City Directory Ltd
Telephone: +44 (0)117 373 1465
Email: [EMAIL PROTECTED]

This email is intended for the use of the named recipient(s) only. Any information contained within this message or any of its attachments may be confidential and privileged information. Any unauthorized disclosure, reproduction, distribution or other dissemination or use of this communication is strictly prohibited.
Re: [OT] PostgreSQL / MySQL Data Dictionary
SciBit MySQL Team [EMAIL PROTECTED] wrote:

  > select * from accounts; -- as an example
  > but because you can customize the source sql script for MyRun, you can go like:
  > select * from accounts where AccountDate > YEAR(CURDATE());
  > -- i.e. limit the inserts you are going to get to what is really important.

mysqldump also supports it:

  -w, --where=name  Dump only selected records; QUOTES mandatory!

:)
--
For technical support contracts, go to https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
Egor Egorov, [EMAIL PROTECTED]
MySQL AB / Ensita.net, www.mysql.com
Re: Concat fields
Egor Egorov wrote:

  >> If I am not running 4, is there another way to achieve the same result as GROUP_CONCAT(myField)?
  > I don't think so. :(

Except if you are ready to write a user-defined function to make a simple group_concat...
http://dev.mysql.com/doc/mysql/en/UDF_aggr._calling.html
--
Philippe Poelvoorde
COS Trading Ltd.
Re: Corrupt table, mysqld crashes, server crashes. Looking for a solution
Feedback [EMAIL PROTECTED] wrote:

  > Besides that, the server also has a lot of trouble with the fact that sometimes the MySQLd just stops functioning somehow, and the only way to kill the process (all child processes broke down, so I mean the parent) is 'killall -9 mysqld'. 'kill pid' doesn't work, and '/etc/init.d/mysql stop' doesn't either. Sometimes the whole server crashes, no SSH/ping, and the only option to get it back is through a reboot.

That's the point! I don't think it's a MySQL issue. A piece of software cannot hang the whole OS (with the exception of a well-known OS ;).
--
Egor Egorov, MySQL AB / Ensita.net
Re: MySQL won't start... was runnnig fine
Chris Blackwell [EMAIL PROTECTED] wrote:

  > now whenever i try to start mysql i get the following error in the log
  > 040812 09:52:34 mysqld started
  > 040812  9:52:34 Warning: Asked for 196608 thread stack, but got 126976
  > 040812  9:52:34 InnoDB: Error: unable to create temporary file
  > 040812  9:52:34 InnoDB: Operating system error number 13 in a file operation.
  > InnoDB: The error means mysqld does not have the access rights to
  > InnoDB: the directory.
  > InnoDB: File operation call: 'tmpfile'.
  > InnoDB: Cannot continue operation.
  > 040812 09:52:34 mysqld ended

[EMAIL PROTECTED] egor]$ perror 13
Error code 13: Permission denied

  > i have chown -R the /var/lib/mysql directory

So anyway: permission denied. Check the rights on /var/lib/mysql itself.
--
Egor Egorov, MySQL AB / Ensita.net
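For reference, perror just maps the OS error number to its message; the same lookup can be done in Python (a sketch, assuming a POSIX system where errno 13 is EACCES):

```python
import os

# InnoDB reported "Operating system error number 13 in a file operation".
# os.strerror performs the same errno-to-message lookup as `perror 13`.
print(os.strerror(13))  # -> Permission denied
```

Since InnoDB's failing call is tmpfile(), the permissions to check include the server's temporary directory as well, not only the data directory.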
Re: Concat fields
Philippe Poelvoorde [EMAIL PROTECTED] wrote:

  >> If I am not running 4, is there another way to achieve the same result as GROUP_CONCAT(myField)?
  >> I don't think so. :(
  > Except if you are ready to write a user-defined function to make a simple group_concat...
  > http://dev.mysql.com/doc/mysql/en/UDF_aggr._calling.html

Which is probably overkill for most cases. :)
--
Egor Egorov, MySQL AB / Ensita.net
Re: On the licensing once again
On Aug 12, 2004, at 2:38, Issac Goldstand wrote: What if a company hires me to build an application for them. They already have MySQL downloaded and installed. Do I/they need a license? What if the software is open-source, but not free? Hi Issac, We always recommend that proprietary applications that use MySQL use the proprietary version of MySQL. This recommendation helps us fund development of the database and is always accurate. However, if you are working for a company, you are likely not distributing anything to them and can likely use MySQL under the terms of the GPL - it should not matter if they download and install MySQL before or after the work starts. Also, there is no problem selling software that is based on GPL-licensed MySQL and other GPL-licensed software (or some other Free Software*/Open Source** licenses. See http://www.mysql.com/products/licensing/foss-exception.html for a list of licenses that are accepted in addition to the GPL.) * See http://www.gnu.org/philosophy/free-sw.html for more information ** See http://www.opensource.org/docs/definition.php for more information Cheers! -- Zak Greant MySQL AB Community Advocate
RE: [OT] PostgreSQL / MySQL Data Dictionary
  -w, --where=name  Dump only selected records; QUOTES mandatory!

:) The more options the merrier for MySQL and the end-users :)
Problem after crash
Hello, please help me. I'm a newbie with databases and I got a crash after an outage. Now my mysql.server can't start. In my log there is nothing, and in the err log in the database directory is:

**
040811 20:26:59 mysqld started
InnoDB: The first specified data file ./ibdata1 did not exist:
InnoDB: a new database to be created!
040811 20:27:02 InnoDB: Setting file ./ibdata1 size to 10 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Error: all log files must be created at the same time.
InnoDB: All log files must be created also in database creation.
InnoDB: If you want bigger or smaller log files, shut down the
InnoDB: database and make sure there were no errors in shutdown.
InnoDB: Then delete the existing log files. Edit the .cnf file
InnoDB: and start the database again.
040811 20:27:03 Can't init databases
040811 20:27:03 Aborting
040811 20:27:04 InnoDB: Warning: shutting down a not properly started
InnoDB: or created database!
040811 20:27:04 /usr/local/mysql/libexec/mysqld: Shutdown Complete
040811 20:27:04 mysqld ended
040812 12:09:53 mysqld started
040812 12:09:53 InnoDB: Database was not shut down normally.
InnoDB: Starting recovery from log files...
InnoDB: Starting log scan based on checkpoint at
InnoDB: log sequence number 0 50170
InnoDB: Doing recovery: scanned up to log sequence number 0 50170
InnoDB: Page directory corruption: supremum not pointed to
040812 12:09:53 InnoDB: Page dump in ascii and hex (16384 bytes):
040812 12:09:53 InnoDB: Page checksum 1575996416, prior-to-4.0.14-form checksum 1371122432
InnoDB: stored checksum 0, prior-to-4.0.14-form stored checksum 0
InnoDB: Page lsn 0 0, low 4 bytes of lsn at page end 0
InnoDB: Error: trying to access a stray pointer c0cbbff8
InnoDB: buf pool start is at 404c8000, number of pages 512
040812 12:09:53 InnoDB: Assertion failure in thread 16384 in file ../../innobase/include/buf0buf.ic line 284
InnoDB: Failing assertion: 0
InnoDB: We intentionally generate a memory trap.
InnoDB: Send a detailed bug report to [EMAIL PROTECTED]
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail.

key_buffer_size=8388600
read_buffer_size=131072
max_used_connections=0
max_connections=100
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 225791 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd=0x838c740
Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
Bogus stack limit or frame pointer, fp=0xbfffdcf8, stack_bottom=0x20, thread_stack=196608, aborting backtrace.
Trying to get some variables. Some pointers may be invalid and cause the dump to abort...
thd->query at (nil) is invalid pointer
thd->thread_id=-488513313
The manual page at http://www.mysql.com/doc/en/Crashing.html contains information that should help you find out what is causing the crash.
040812 12:09:53 mysqld ended
***

I looked at this page but I cannot find an answer to my problem. What can I do? Thanks a lot for any advice.
pet
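The "could use up to ... 225791 K bytes" line in the crash report is plain arithmetic over the listed variables. A sketch of the calculation, assuming a sort_buffer_size of 2097144 bytes (the MySQL 4.0 default; the report does not print this value, so it is an assumption here):

```python
# Values taken from the crash report above.
key_buffer_size = 8388600
read_buffer_size = 131072
max_connections = 100
sort_buffer_size = 2097144  # assumed: MySQL 4.0 default, not shown in the log

# key_buffer_size + (read_buffer_size + sort_buffer_size) * max_connections
bound_bytes = key_buffer_size + (read_buffer_size + sort_buffer_size) * max_connections
print(bound_bytes // 1024)  # -> 225791, matching the report's "225791 K bytes"
```

With these defaults the bound is about 220 MB, which is nowhere near exhaustion here (max_used_connections=0); the assertion failure points at page corruption, not memory pressure.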
Auto Increment Column
Hi, does anyone know how to extract only the Auto_increment column from the results of SHOW TABLE STATUS?

Regards
Naresh
--
This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the originator of the message. This footer also confirms that this email message has been scanned for the presence of computer viruses. Any views expressed in this message are those of the individual sender, except where the sender specifies and with authority, states them to be the views of DA Group.
Re: Corrupt table, mysqld crashes, server crashes. Looking for a solution
Well... I had the same issue 2 weeks ago with another server from my DC. I told them it was their hardware; they didn't believe it, so I had to order a new server, and... the same problem occurred... That's like one in a million that it's the hardware...

----- Original Message -----
From: Egor Egorov [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, August 12, 2004 10:23 AM
Subject: Re: Corrupt table, mysqld crashes, server crashes. Looking for a solution

SNIP
select data in one order, print in another order
Hi list. Environment:

  host:/ # uname -a
  Linux host 2.4.19-4GB #1 Fri Sep 13 13:14:56 UTC 2002 i686 unknown
  host:/ # /usr/local/mysql/bin/mysqld -v
  /usr/local/mysql/bin/mysqld Ver 4.0.12-max for pc-linux on i686

A PHP application of ours prepares customer data on the screen in a normal way:

  select name, b, c from tabx left outer join taby on tabx.cx = taby.cy
  where tabx.name like '%username%' order by tabx.cx;

This works fine and incredibly fast, and the output suits the needs of our customers. Now, customers keep telling me that they would like a printed output of their screen lists. They would very much prefer the printed output ordered by tabx.name, and, of course, to have exactly the same data on paper as on the screen. If I just change the order by clause, I will usually not have the same data on the screen as on the paper. I can do it with a temporary table:

  mysql> drop temporary table if exists temptab;
  mysql> create temporary table temptab
         select name, b, c, d from tabx where name like '%username%' order by tabx.cx;
  mysql> select * from temptab order by name;

That gives the customers the same data on paper as on the screen, and the paper data are ordered by name. ONLY THAT I must give all customers create-temporary-table rights, and this is not exactly what I want. Can I do this another way round, i.e. without going through a temporary table? I thought about using a table alias and doing a union, but the table alias is not known any more in the second statement of the union. I also tried to google, but the situation cannot be explained in the two words which would probably be needed for a google success. You might tell me that I could do the same select and afterwards sort the result using a PHP sort; that is not what I want either, because our customers are used to the incredible speed of MySQL. Any suggestion is very much appreciated.

thanks in advance
suomi
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
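Since both lists must show exactly the same rows, one option is to run the query once and re-sort the already-fetched rows in the application for the printed version; the in-memory re-sort of a result set this size is far cheaper than a second query. A sketch with made-up rows (column names follow the example query; assumes the result fits in memory):

```python
# Rows as fetched once, already ordered by tabx.cx for the screen view.
screen_rows = [
    {"name": "Meier", "b": 2, "cx": 10},
    {"name": "Abel",  "b": 7, "cx": 11},
    {"name": "Zuse",  "b": 1, "cx": 12},
]

# The printed list re-sorts the SAME rows by name: identical data, new order.
print_rows = sorted(screen_rows, key=lambda r: r["name"])
print([r["name"] for r in print_rows])  # -> ['Abel', 'Meier', 'Zuse']
```

Because no second query runs, the screen and the paper can never disagree, even if the table changes between viewing and printing.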
Re: Auto Increment Column
| Subject: Auto Increment Column
|
| Hi,
|
| Does anyone know how to extract only the Auto_Increment column from the
| command results of SHOW TABLE STATUS

Do you mean that you want a command that will run on the command line that will show the value of the Auto_Increment column? Or are you trying to determine how to read the value via programming statements in an application? If you prefer the latter, what programming language are you using?

Rhino
RE: Auto Increment Column
Well, actually I want it in a stored procedure. However, due to a bug in version 5.0.0 alpha (posted on the bugs database by me, ID 5017), I cannot get LAST_INSERT_ID() in stored procedures. So I want an alternate way of getting LAST_INSERT_ID(). I thought the Auto_increment column of SHOW TABLE STATUS should get me that... however it is proving difficult as well. I even tried using MAX on the table, which also has a bug when called from a stored procedure (I have not reported this as yet). So any ideas?

Cheers
Naresh

-----Original Message-----
From: Rhino [mailto:[EMAIL PROTECTED]
Sent: 12 August 2004 13:25
To: Naresh Sadhnani; [EMAIL PROTECTED]
Subject: Re: Auto Increment Column

| Subject: Auto Increment Column
|
| Hi,
|
| Does anyone know how to extract only the Auto_Increment column from
| the command results of SHOW TABLE STATUS

Do you mean that you want a command that will run on the command line that will show the value of the Auto_Increment column? Or are you trying to determine how to read the value via programming statements in an application? If you prefer the latter, what programming language are you using?

Rhino
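Outside of the buggy stored-procedure path, client code can read the counter from the SHOW TABLE STATUS LIKE 'tbl' result row, which has one row per table with an Auto_increment field. A sketch against a mocked result row (the dict below stands in for a fetched row; it is not from a live query, and the surrounding field names are illustrative):

```python
# Mocked result row, as if fetched from: SHOW TABLE STATUS LIKE 'product'
row = {"Name": "product", "Rows": 3237981, "Auto_increment": 3237982}

def next_auto_increment(status_row):
    # Extract only the Auto_increment field from the full status row.
    return status_row["Auto_increment"]

print(next_auto_increment(row))  # -> 3237982
```

Note this reads the table's next counter value, not "the id my session just inserted"; under concurrent inserts it is not a safe substitute for LAST_INSERT_ID().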
RE: Fulltext Search takes 17 sec.
Hmmm... it looks like it may be impossible to do fulltext searching on 3 million rows, as few have chimed in on this one. Just to add some background to my setup: I am currently running MySQL 4.0.17 (plan to upgrade soon... will this help??) and I created the index, which took 9 minutes to process, using phpMyAdmin. I did notice that the fulltext index has a field size of 1 when looking at the keys in phpMyAdmin. More specifically, it looks like this:

  Keyname      Type      Cardinality  Field
  PRIMARY      PRIMARY   3237981      product_id
  search_text  FULLTEXT  3237981      search_text (1)

Also, would tuning MySQL settings get the processing time down from 5-40 seconds to something more reasonable like 0.3-1.0 sec? My current system variables are:

  ft_min_word_len: 3
  ft_max_word_len: 100
  ft_max_word_len_for_sort: 20
  max_heap_table_size: 16777216
  max_join_size: 4294967295
  max_seeks_for_key: 4294967295
  max_sort_length: 1024
  myisam_max_extra_sort_file_size: 268435456
  myisam_max_sort_file_size: 2147483647
  myisam_sort_buffer_size: 8388608
  query_alloc_block_size: 8192
  read_buffer_size: 131072
  sort_buffer_size: 89128952
  tmp_table_size: 33554432

I think I've heard of people getting good results with fulltext in under a second with 3 million+ rows, so I think it's possible...

Help greatly appreciated,
- John

----- Original Message -----
I am trying to do a fulltext search on my database of 3.3 million rows (~1 GB). I attempted to make the searching faster by concatenating all the columns I wanted to search and placing them in a table:

  product_fulltext (product_id, search_text)

Then I added the fulltext index:

  ALTER TABLE product_fulltext ADD FULLTEXT search_ft (search_text)

The index was created in about 9 minutes. Next, I need the results grouped into categories. For instance, if I search for 'Mountain' I would like to see the top 3 results for each category.
For instance 'mountain' might return:

  Apparel
    Mountainsmith Backpack
    Mountain Hardware Sub Zero Jacket
    Mountain Horse Stable Loafer
  Food & Beverage
    Green Mountain Coffee
    Mountain Grown Folgers
    Mountain Berry Tart
  Video Games
    No Fear Downhill Mountain Biking
    Mountain Climber - Xbox
    Cliff Hanger
  Books
    Mountain High, Mountain Rescue
    Mountain Bike Magic
    Go Tell It on the Mountain
  .. etc ...

Obviously doing a fulltext search on each category individually wouldn't be fast since there are about 20 categories. I decided instead to make a TEMPORARY TABLE, insert all the matching product_ids into it, then group by category and limit to the top 3. Unfortunately, the temporary table insert combined with the fulltext query takes much too long:

  CREATE TEMPORARY TABLE temp_table_326
  SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('Mountain')
  Query took: 17.46 seconds

So I tried a plain-jane fulltext select query to see if the temp table was the issue:

  SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('United States')
  Executed in: 13.52 seconds

At this point, I haven't even grouped by the top 3 results per category (not sure how I will do that yet) or joined the products to any tables for images etc. I didn't set any LIMIT parameters because if I said LIMIT 20, all 20 matches could be Apparel products and the other categories would not show up. The BEST performance I can get is about 5 seconds on a single search term (as opposed to the 2 terms in 'United States'). I'm at a loss here...
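The top-3-per-category grouping, which the poster has not solved yet, can also be done in the application once the matching rows are fetched with their categories. A sketch over mocked rows (the category and relevance columns are assumptions for illustration, not the poster's actual schema):

```python
from collections import defaultdict

# Mocked (category, title, relevance) rows, as if returned by the
# MATCH ... AGAINST query joined to a category table.
results = [
    ("Apparel", "Mountainsmith Backpack", 9.1),
    ("Apparel", "Mountain Hardware Sub Zero Jacket", 8.7),
    ("Apparel", "Mountain Horse Stable Loafer", 8.2),
    ("Apparel", "Mountain Fleece", 7.9),
    ("Books", "Mountain High, Mountain Rescue", 9.5),
    ("Books", "Go Tell It on the Mountain", 8.8),
]

def top_n_per_category(rows, n=3):
    grouped = defaultdict(list)
    for category, title, score in rows:
        grouped[category].append((score, title))
    # Keep the n highest-scoring titles per category.
    return {c: [t for _, t in sorted(v, reverse=True)[:n]]
            for c, v in grouped.items()}

top = top_n_per_category(results)
print(len(top["Apparel"]))  # -> 3 (the fourth Apparel hit is dropped)
```

This sidesteps the LIMIT 20 problem entirely: the single fulltext query returns all matches, and the per-category cap is applied client-side.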
Re: MySQL know
I found what appears to be a good MySQL course for beginners at http://www.programatium.com/. Browse to Manuales - Bases de Datos - MySQL and you will see it.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Yusdaniel Rodriguez Espinosa [EMAIL PROTECTED] wrote on 08/11/2004 05:21:52 PM: I installed MySQL Server on my computer; now how can I run the program to write code? In what program do I write the code? Thanks, Yusdaniel. In case anyone speaks Spanish: I installed MySQL Server on my computer and I want to start learning to program with it, but I can't find any executable to open the program. Do I have to use it from another program? How do I create the tables? etc. If anyone knows a Spanish-language site that covers MySQL in detail, I would be very grateful if you sent me the link. Thanks, Yusdaniel
RE: Fulltext Search takes 17 sec.
We fulltext index multi-million-row tables with very good response time. The cardinality of `1` is okay. Did you fulltext index the rows as is, or did you create some sort of combined field?

-Original Message-
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: 8/12/04 7:43 AM
Subject: RE: Fulltext Search takes 17 sec.

Hmmm... looks like it may be impossible to do fulltext searching on 3 million rows, as few have chimed in on this one. Just to add some background to my setup: I am currently running MySQL 4.0.17 (plan to upgrade soon.. will this help??) and I created the index that took 9 minutes to process using phpMyAdmin. I did notice that when I created the fulltext index, the fulltext key has a field size of 1 when looking at the keys in phpMyAdmin. More specifically, it looks like this:

Keyname      Type      Cardinality  Field
PRIMARY      PRIMARY   3237981      product_id
search_text  FULLTEXT  3237981      search_text (1)

Also, would tuning MySQL settings get the processing time down from 5-40 seconds to something more reasonable, like 0.3 - 1.0 sec? My current system variables are:

ft_min_word_len: 3
ft_max_word_len: 100
ft_max_word_len_for_sort: 20
max_heap_table_size: 16777216
max_join_size: 4294967295
max_seeks_for_key: 4294967295
max_sort_length: 1024
myisam_max_extra_sort_file_size: 268435456
myisam_max_sort_file_size: 2147483647
myisam_sort_buffer_size: 8388608
query_alloc_block_size: 8192
read_buffer_size: 131072
sort_buffer_size: 89128952
tmp_table_size: 33554432

I think I've heard of people getting good results with fulltext in under a second on 3 million+ rows, so I think it's possible...

Help greatly appreciated,
- John

- Original Message -
I am trying to do a fulltext search on my database of 3.3 million rows (~1 GB).
I attempted to make the searching faster by concatenating all the columns I wanted to search and placing them in one table:

product_fulltext (product_id, search_text)

Then I added the fulltext index:

ALTER TABLE product_fulltext ADD FULLTEXT search_ft (search_text)

The index was created in about 9 minutes. Next, I need the results grouped into categories. For instance, if I search for 'Mountain' I would like to see the top 3 results for each category, so 'mountain' might return:

Apparel
  Mountainsmith Backpack
  Mountain Hardware Sub Zero Jacket
  Mountain Horse Stable Loafer
Food & Beverage
  Green Mountain Coffee
  Mountain Grown Folgers
  Mountain Berry Tart
Video Games
  No Fear Downhill Mountain Biking
  Mountain Climber - Xbox
  Cliff Hanger
Books
  Mountain High, Mountain Rescue
  Mountain Bike Magic
  Go Tell It on the Mountain
... etc ...

Obviously doing a fulltext search on each category individually wouldn't be fast, since there are about 20 categories. Instead I decided to make a TEMPORARY TABLE, insert all the matching product_ids into it, then group by category and limit to the top 3. Unfortunately, the temporary-table insert combined with the fulltext query takes much too long:

CREATE TEMPORARY TABLE temp_table_326
SELECT product_id FROM product_fulltext
WHERE MATCH (search_text) AGAINST ('Mountain')

Query took: 17.46 seconds

So I tried a plain-jane fulltext SELECT to see if the temp table was the issue:

SELECT product_id FROM product_fulltext
WHERE MATCH (search_text) AGAINST ('United States')

Executed in: 13.52 seconds

At this point I haven't even grouped the top 3 results per category (not sure how I will do that yet) or joined the products to any other tables for images etc. I didn't set any LIMIT, because if I said LIMIT 20, all 20 matches could be Apparel products and the other categories would not show up. The BEST performance I can get is about 5 seconds on a single search term (as opposed to the 2 terms in 'United States'). I'm at a loss here...
Re: Auto Increment Column
Keep in mind that it is not a good idea to get the last id with a statement like SHOW TABLE STATUS, because it will always return the same value for different connections, which might create problems for you. My point is: LAST_INSERT_ID() returns the ID generated last on the connection you used. For example, if you make two different connections and insert two different records, you will NOT get the same LAST_INSERT_ID() value on both. But in your case they will probably be the same..

Best Regards,
Cemal Dalar a.k.a Jimmy
System Administrator & Web Developer
http://www.dalar.net

- Original Message -
From: Naresh Sadhnani [EMAIL PROTECTED]
To: Rhino [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Thursday, August 12, 2004 3:35 PM
Subject: RE: Auto Increment Column

Well, actually I want it in a stored procedure. However, due to a bug in version 5.0.0 alpha (posted on the bugs database by me, ID 5017) I cannot get LAST_INSERT_ID() in stored procedures. So I want an alternate way of getting LAST_INSERT_ID(). I thought the Auto_increment column of SHOW TABLE STATUS should get me that... however, it is proving difficult as well. I even tried using MAX on the table, which also has a bug when called from a stored procedure (I have not reported this as yet). So, any ideas?

Cheers
Naresh

-Original Message-
From: Rhino [mailto:[EMAIL PROTECTED]
Sent: 12 August 2004 13:25
To: Naresh Sadhnani; [EMAIL PROTECTED]
Subject: Re: Auto Increment Column

| Subject: Auto Increment Column
|
| Hi,
|
| Does anyone know how to extract only the Auto_increment column from
| the command results of SHOW TABLE STATUS?

Do you mean that you want a command that will run on the command line and show the value of the Auto_increment column? Or are you trying to determine how to read the value via programming statements in an application? If you prefer the latter, what programming language are you using?
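Cemal's point, that the last generated id is tracked per connection rather than per table, holds in most database APIs. A minimal sketch using Python's sqlite3 module as a stand-in for MySQL (`cursor.lastrowid` plays the role of LAST_INSERT_ID(); the table and value names are made up):

```python
import sqlite3, tempfile, os

# Two independent connections to the same database file, each with its
# own notion of "the id of my last insert".
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn_a = sqlite3.connect(path, isolation_level=None)  # autocommit
conn_b = sqlite3.connect(path, isolation_level=None)

conn_a.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")

cur_a = conn_a.execute("INSERT INTO t (v) VALUES ('from A')")
cur_b = conn_b.execute("INSERT INTO t (v) VALUES ('from B')")

# Each connection reports the id of *its own* insert, exactly like
# LAST_INSERT_ID(); a global MAX(id) or SHOW TABLE STATUS could not
# tell the two connections apart.
print(cur_a.lastrowid, cur_b.lastrowid)  # 1 2
```

This is why Cemal's advice stands: the per-connection value is safe under concurrency, while a table-wide counter is not.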
Rhino -- This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the originator of the message. This footer also confirms that this email message has been scanned for the presence of computer viruses. Any views expressed in this message are those of the individual sender, except where the sender specifies and with authority, states them to be the views of DA Group. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
recommended books for web app.
I am looking to port an app from an existing web environment to MySQL. The requirement is relatively easy. The tables are read-only and the data is to be accessed only via login & password. The login will give a limited view of records based on a master/detail table relationship. I need to be aware of securing the database, and have been told by others that I should look at using PHP. I know my local bookstore has the following publications (based on asking about MySQL & PHP):

Beginning PHP, Apache, MySQL Web Development, published by Wrox
PHP and MySQL, written by Larry Ullman
PHP and MySQL Web Development, written by Luke Welling & Laura Thomson

Has anyone seen these books and can possibly recommend one of them?

Thanks
Kerry
Re: Concat fields
Philippe, If he is pre-4 (I assume that as he does not seem to have the GROUP_CONCAT() function) and UDFs aren't available until 5+, how exactly would he do this as a UDF? :-D Paul, I believe you are going to have to combine those fields during some form of post-query processing (macro, script, program, etc). Sorry! Shawn Green Database Administrator Unimin Corporation - Spruce Pine Philippe Poelvoorde [EMAIL PROTECTED] wrote on 08/12/2004 05:20:49 AM: Egor Egorov wrote: Paul McNeil [EMAIL PROTECTED] wrote: If I am not running 4, is there another way to achieve the same result as GROUP_CONCAT(myField) ? I don't think so. :( Except if you are ready to make a User-defined function to make a simple group_concat... http://dev.mysql.com/doc/mysql/en/UDF_aggr._calling.html -- Philippe Poelvoorde COS Trading Ltd. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
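For the post-query processing Shawn suggests, the client-side equivalent of GROUP_CONCAT() is just "group the rows, then join each group's values". A sketch in Python, with illustrative column names not taken from the thread:

```python
def group_concat(rows, sep=","):
    """Emulate SELECT key, GROUP_CONCAT(value) ... GROUP BY key
    over (key, value) pairs, client-side."""
    groups = {}
    for key, value in rows:
        groups.setdefault(key, []).append(str(value))
    return {key: sep.join(vals) for key, vals in groups.items()}

# Rows as they might come back from "SELECT dept, name FROM staff ORDER BY dept"
rows = [("sales", "ann"), ("sales", "bob"), ("ops", "cho")]
print(group_concat(rows))  # {'sales': 'ann,bob', 'ops': 'cho'}
```

The same few lines translate directly to PHP, Perl, or whatever the front end is written in.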
Re: recommended books for web app.
Welling Thomson is terrific. - Original Message - From: Kerry Frater To: MySQL List Sent: Thursday, August 12, 2004 8:09 AM Subject: recommended books for web app. I am looking to port an app from an existing web environment to MySQL. The requirement is relatively easy. The Tables are read only and the data is to be only accessed via login password. The login will give a limited view of records based on a master/detail table relationship. I need to be aware of securing the database and have been told by others that I should look to use PHP. I know my local bookstore has the following publications (based on asking about MySQL PHP) Beginning PHP, Apache MySQL Web Development published by Wrox PHP MySQL written by Larry Ullmen PHP MySQL Web Development written by Luke Welling Laura Thomsan Has anyone seen these books and possibly recommend one of them? Thanks Kerry
RE: Problem after crash
Are you using InnoDB tables? If so, have you properly configured the directories?

-Original Message-
From: Jevos, Peter
To: '[EMAIL PROTECTED]'
Sent: 8/12/04 6:02 AM
Subject: Problem after crash

Hello

Please help me. I'm a newbie with the database and I got a crash after an outage. Now my mysql.server can't start. In my log there is nothing, and in the .err log in the database directory is:

**
040811 20:26:59 mysqld started
InnoDB: The first specified data file ./ibdata1 did not exist:
InnoDB: a new database to be created!
040811 20:27:02 InnoDB: Setting file ./ibdata1 size to 10 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Error: all log files must be created at the same time.
InnoDB: All log files must be created also in database creation.
InnoDB: If you want bigger or smaller log files, shut down the
InnoDB: database and make sure there were no errors in shutdown.
InnoDB: Then delete the existing log files. Edit the .cnf file
InnoDB: and start the database again.
040811 20:27:03 Can't init databases
040811 20:27:03 Aborting
040811 20:27:04 InnoDB: Warning: shutting down a not properly started
InnoDB: or created database!
040811 20:27:04 /usr/local/mysql/libexec/mysqld: Shutdown Complete
040811 20:27:04 mysqld ended
040812 12:09:53 mysqld started
040812 12:09:53 InnoDB: Database was not shut down normally.
InnoDB: Starting recovery from log files...
InnoDB: Starting log scan based on checkpoint at InnoDB: log sequence number 0 50170 InnoDB: Doing recovery: scanned up to log sequence number 0 50170 InnoDB: Page directory corruption: supremum not pointed to 040812 12:09:53 InnoDB: Page dump in ascii and hex (16384 bytes): 040812 12:09:53 InnoDB: Page checksum 1575996416, prior-to-4.0.14-form checksum 1371122432 InnoDB: stored checksum 0, prior-to-4.0.14-form stored checksum 0 InnoDB: Page lsn 0 0, low 4 bytes of lsn at page end 0 InnoDB: Error: trying to access a stray pointer c0cbbff8 InnoDB: buf pool start is at 404c8000, number of pages 512 040812 12:09:53 InnoDB: Assertion failure in thread 16384 in file ../../innobase/include/buf0buf.ic line 284 InnoDB: Failing assertion: 0 InnoDB: We intentionally generate a memory trap. InnoDB: Send a detailed bug report to [EMAIL PROTECTED] mysqld got signal 11; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=8388600 read_buffer_size=131072 max_used_connections=0 max_connections=100 threads_connected=0 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 225791 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. thd=0x838c740 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... Bogus stack limit or frame pointer, fp=0xbfffdcf8, stack_bottom=0x20, thread_stack=196608, aborting backtrace. Trying to get some variables. Some pointers may be invalid and cause the dump to abort... 
thd->query at (nil) is invalid pointer
thd->thread_id=-488513313

The manual page at http://www.mysql.com/doc/en/Crashing.html contains information that should help you find out what is causing the crash.

040812 12:09:53 mysqld ended
***

I looked at this page but I cannot find an answer to my problem. What can I do?

Thanx a lot for any advice
pet
Re: Fulltext Search takes 17 sec.
Missed your original message. What do you mean you concatenated the columns you wanted to search and placed them in a table? You should just create the fulltext index on multiple columns. For instance, in a contacts table you may have firstname and lastname, so you would create a fulltext index like:

FULLTEXT namesearch (firstname, lastname)

You don't show the value of the key_buffer_size variable. This is very important for taking advantage of your indexes. Here is a link to the documentation: http://dev.mysql.com/doc/mysql/en/Server_system_variables.html

As for getting the top three in each category, that's a tough one. You may try using a GROUP BY with the GROUP_CONCAT() function. This would give you one line per group, and you can limit what each group returns by setting a max length for the concatenated value, although this is a kludge, since you are limiting the amount of text returned rather than the number of values concatenated. The other way would be to just get all the matched rows, sorted and/or grouped appropriately, and then filter the list with your front end (i.e. PHP). If the physical size of the text returned is not that large, it should process fairly quickly.

On Aug 12, 2004, at 8:43 AM, [EMAIL PROTECTED] wrote:

Hmmm... looks like it may be impossible to do fulltext searching on 3 million rows, as few have chimed in on this one. Just to add some background to my setup: I am currently running MySQL 4.0.17 (plan to upgrade soon.. will this help??) and I created the index that took 9 minutes to process using phpMyAdmin. I did notice that when I created the fulltext index, the fulltext key has a field size of 1 when looking at the keys in phpMyAdmin. More specifically, it looks like this:

Keyname      Type      Cardinality  Field
PRIMARY      PRIMARY   3237981      product_id
search_text  FULLTEXT  3237981      search_text (1)

Also, would tuning MySQL settings get the processing time down from 5-40 seconds to something more reasonable, like 0.3 - 1.0 sec?
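Brent's last suggestion, fetching all matched rows and filtering in the front end, amounts to a single pass that keeps at most N rows per category. A sketch in Python, under the assumption that rows arrive already sorted best-first by relevance (the column names here are hypothetical, not from the original schema):

```python
def top_n_per_category(rows, n=3):
    """rows: (category, product_id) pairs, already sorted best-first.
    Returns up to n product_ids per category, preserving order."""
    kept = {}
    for category, product_id in rows:
        bucket = kept.setdefault(category, [])
        if len(bucket) < n:           # only the first n per category survive
            bucket.append(product_id)
    return kept

rows = [("Apparel", 11), ("Books", 42), ("Apparel", 12),
        ("Apparel", 13), ("Apparel", 14), ("Books", 43)]
print(top_n_per_category(rows))
# {'Apparel': [11, 12, 13], 'Books': [42, 43]}
```

The pass is O(number of matches), so for this approach the expensive part remains the fulltext query itself, not the grouping.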
My current system variables are: ft_min_word_len: 3 ft_max_word_len: 100 ft_max_word_len_for_sort: 20 max_heap_table_size: 16777216 max_join_size: 4294967295 max_seeks_for_key: 4294967295 max_sort_length: 1024 myisam_max_extra_sort_file_size: 268435456 myisam_max_sort_file_size: 2147483647 myisam_sort_buffer_size: 8388608 query_alloc_block_size: 8192 read_buffer_size: 131072 sort_buffer_size: 89128952 tmp_table_size: 33554432 I think I've heard of people getting good results will fulltext in under a second with 3 million rows+ so I think its possible... Help greatly appreciated, - John - Original Message I am trying to do a fulltext search on my database of 3.3 million rows (~ 1 Gb). I attempted to make the searching faster by concatenating all columns I wanted to search and placed them in a table: product_fulltext (product_id, search_text) Then I added the fulltext index: ALTER TABLE product_fulltext ADD FULLTEXT search_ft (search_text) The index was created in about 9 minutes. Next, I need the results grouped into categories. For instance, if I search for 'Mountain' I would like to see the top 3 results for each category. For instance 'mountain' might return: Apparel Mountainsmith Backpack Mountain Hardware Sub Zero Jacket Mountain Horse Stable Loafer Food Beverage Green Mountain Coffee Mountain Grown Fodgers Mountain Berry Tart Video Games No Fear Downhill Mountain Biking Mountain Climber - Xbox Cliff Hanger Books Mountain High, Mountain Rescue Mountain Bike Magic Go Tell It on the Mountain .. etc ... Obviously doing a fulltext search on each category individually wouldn't be fast since there are about 20 categories. I decided instead to make a TEMPORARY TABLE, insert all the matching product_ids into it, then group by category and limit for the top 3. 
Unfortunately, the temporary table insert combined with the fulltext query takes much too long: CREATE TEMPORARY TABLE temp_table_326 SELECT product_id FROM product_fulltext WHERE MATCH ( search_text ) AGAINST ('Mountain') Query took: 17.46 seconds So I tried a plain-jane fulltext select query to see if the temp table was the issue: SELECT product_id FROM product_fulltext WHERE MATCH ( search_text ) AGAINST ('United States') Executed in: 13.52 seconds At this point, I haven't even grouped by the top 3 results per category (not sure how I will do that yet) or joined the products on any tables for images etc. I didn't set any LIMIT parameters because if I said LIMIT 20, all 20 matches could be Apparel products and the other categories would not show up. The BEST performance I can get is about 5 seconds on a single search term (as opposed to the 2 terms in 'United States') I'm at a loss here... -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To
RE: Fulltext Search takes 17 sec.
Hi Victor,

The fulltext index was created on 1 column only, which is of type TEXT. The benchmarks I get are very inconsistent...

SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('China')
Query took 20.17 seconds

SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('Mongolia')
Query took 0.43 seconds

SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('russia')
Query took 6.18 seconds

SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('United States')
Query took 35.57 seconds

SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('Chicago')
Query took 11.81 seconds

SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('New York')
Query took 43.14 seconds

SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('heserfretzel')
Query took 0.04 seconds

The last word I made up. The time seems to be directly proportional to the number of results it pulls up. :(

- John
RE: recommended books for web app.
Thanks for your recommendation Peter Kerry -Original Message- From: Peter Brawley [mailto:[EMAIL PROTECTED] Sent: 12 August 2004 14:23 To: Kerry Frater; MySQL List Subject: Re: recommended books for web app. Welling Thomson is terrific. - Original Message - From: Kerry Frater To: MySQL List Sent: Thursday, August 12, 2004 8:09 AM Subject: recommended books for web app. I am looking to port an app from an existing web environment to MySQL. The requirement is relatively easy. The Tables are read only and the data is to be only accessed via login password. The login will give a limited view of records based on a master/detail table relationship. I need to be aware of securing the database and have been told by others that I should look to use PHP. I know my local bookstore has the following publications (based on asking about MySQL PHP) Beginning PHP, Apache MySQL Web Development published by Wrox PHP MySQL written by Larry Ullmen PHP MySQL Web Development written by Luke Welling Laura Thomsan Has anyone seen these books and possibly recommend one of them? Thanks Kerry
AW: recommended books for web app.
It's not one from your list, but I can recommend this one: http://www.oreilly.com/catalog/webdbapps2/index.html

It gives you a nice overview of PHP/MySQL and a nice case study which explains how to create an online winestore.

Regards
Olivier

-Original Message-
From: Peter Brawley [mailto:[EMAIL PROTECTED]
Sent: Thursday, 12 August 2004 15:23
To: Kerry Frater; MySQL List
Subject: Re: recommended books for web app.

Welling & Thomson is terrific.

- Original Message -
From: Kerry Frater
To: MySQL List
Sent: Thursday, August 12, 2004 8:09 AM
Subject: recommended books for web app.

I am looking to port an app from an existing web environment to MySQL. The requirement is relatively easy. The tables are read-only and the data is to be accessed only via login & password. The login will give a limited view of records based on a master/detail table relationship. I need to be aware of securing the database, and have been told by others that I should look at using PHP. I know my local bookstore has the following publications (based on asking about MySQL & PHP):

Beginning PHP, Apache, MySQL Web Development, published by Wrox
PHP and MySQL, written by Larry Ullman
PHP and MySQL Web Development, written by Luke Welling & Laura Thomson

Has anyone seen these books and can possibly recommend one of them?

Thanks
Kerry
RE: Fulltext Search takes 17 sec.
On your two-word-plus searches, 'New York' for example, have you tried using IN BOOLEAN MODE to reduce the number of false positives?

AGAINST ('New York' IN BOOLEAN MODE)

-Original Message-
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: 8/12/04 8:33 AM
Subject: RE: Fulltext Search takes 17 sec.

Hi Victor,

The fulltext index was created on 1 column only, which is of type TEXT. The benchmarks I get are very inconsistent...

SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('China')
Query took 20.17 seconds

SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('Mongolia')
Query took 0.43 seconds

SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('russia')
Query took 6.18 seconds

SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('United States')
Query took 35.57 seconds

SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('Chicago')
Query took 11.81 seconds

SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('New York')
Query took 43.14 seconds

SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('heserfretzel')
Query took 0.04 seconds

The last word I made up. The time seems to be directly proportional to the number of results it pulls up. :(

- John
RE: MySQL won't start... was running fine
I have resolved this issue now. It was actually the permissions on /tmp that had become corrupted, I can only assume as a result of the crash.

chmod 777 /tmp
chmod +t /tmp

This fixed the problem, but it could have been fixed within minutes instead of days if the error message had told me which directory was causing the problem :/

chris

-Original Message-
From: Egor Egorov [mailto:[EMAIL PROTECTED]
Sent: 12 August 2004 10:26
To: [EMAIL PROTECTED]
Subject: Re: MySQL won't start... was running fine

Chris Blackwell [EMAIL PROTECTED] wrote:

now when ever i try to start mysql i get the following error in log

040812 09:52:34 mysqld started
040812 9:52:34 Warning: Asked for 196608 thread stack, but got 126976
040812 9:52:34 InnoDB: Error: unable to create temporary file
040812 9:52:34 InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
InnoDB: File operation call: 'tmpfile'.
InnoDB: Cannot continue operation.
040812 09:52:34 mysqld ended

[EMAIL PROTECTED] egor]$ perror 13
Error code 13: Permission denied

i have chown -R the /var/lib/mysql directory

So anyway: permission denied. Check the rights on /var/lib/mysql itself.

--
For technical support contracts, go to https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.net http://www.ensita.net/
Egor Egorov
[EMAIL PROTECTED]
MySQL AB / Ensita.net
www.mysql.com
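For anyone hitting the same InnoDB "Operating system error number 13" on the 'tmpfile' call: the two chmod commands above restore the standard mode 1777 on /tmp (world-writable plus the sticky bit, which stops users deleting each other's temp files). The same fix, sketched in Python against a scratch directory rather than the real /tmp, so the demo is safe to run anywhere:

```python
import os, stat, tempfile

# Scratch directory standing in for /tmp; mkdtemp creates it mode 0700.
d = tempfile.mkdtemp()

# chmod 777 followed by chmod +t, combined into one call:
os.chmod(d, 0o777 | stat.S_ISVTX)

print(oct(os.stat(d).st_mode & 0o7777))  # 0o1777
```

On the real server the target would be /tmp itself, and the shell commands Chris used are the usual way to do it.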
RE: Fulltext Search takes 17 sec.
Hi Brent,

Oops, sorry about forgetting to post the key buffer size. It's:

key_buffer_size 262144000

As for the combined column I'm using: I did it to make the fulltext setup as simple as possible and keep the index at only 1 column. I am aware that I can create an index over multiple columns, but basically I just selected all the columns that may contain keywords for each product and tossed them into 1 table with product_id as the PK. Just thought this might help performance... maybe I'm wrong, I don't know.

If I can ever get the MATCH ... AGAINST query down to a faster speed, then I will really focus on getting the top 3 for each category. Sorting into categories in PHP would work, but I'd need a much larger dataset to ensure that I got all the possible matches for each category... what if there was only 1 match in the 'Movies' category and it was at the bottom of the results...? I'm thinking that we will have to use some other method, such as listing by relevance (fulltext ordering) and then showing "Search within: Category X, Category Y, Category Z..." links on the side (like eBay, I guess). Still, the problem arises as to which categories qualify. It would be nice to order them by the category with the most matches, but I doubt that can be accomplished.

Previously I had not been using a LIMIT because I was going to do processing for category grouping etc. However, if I do use a LIMIT, the query speeds are almost totally dependent on the number of rows returned:

WITH LIMIT:
SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('san francisco') LIMIT 1000
Query done in 2.01 seconds
Num Rows: 1000

WITHOUT LIMIT:
SELECT product_id FROM product_fulltext WHERE MATCH (search_text) AGAINST ('san francisco')
Query done in 13.45 seconds
Num Rows: 9287

Strange. Is this typical, or do I need to tweak my system variables?

- John
Re: representing a sequence of events in a database
The problem you point out is one of the reasons this isn't the best way to organize your data. A better way would be to keep your events table:

events
  event_id
  event_name
  event_description

but properly normalize the sequences table by putting one event per row, like this:

table sequences
  sequence_id
  position
  event_id

Make (sequence_id, position) a primary or unique key. Then if sequence 1 is e_1, e_5, e_3 and sequence 2 is e_7, e_4, you would have

events
1 'e_1' 'description of event 1'
2 'e_2' 'description of event 2'
3 'e_3' 'description of event 3'
...

sequences
1 1 1
1 2 5
1 3 3
2 1 7
2 2 4

and so on. Now it is easy to answer your question, "In what sequence has event e_5 happened as the third event?":

SELECT s.sequence_id
FROM sequences s JOIN events e ON s.event_id = e.event_id
WHERE s.position = 3 AND e.event_name = 'e_5';

Michael

Alexander Hannemann wrote:

Hi, I have the following problem: I want to store a list of events e_1 to e_n in one table, and in a second table I would like to store sequences of these events, so that later I can ask questions like: "In what sequence has event e_5 happened as the third event?" My more or less trivial solution would be an events table

table events
  event_key
  description of event

and a sequence table

table sequences
  sequenceid
  numberofevents
  event_key1
  event_key2
  event_key3
  ...
  event_keyN

with some predefined maximum number of events, let's say N_max = 10. Now if a sequence happens to contain only 3 events, how do I calculate the 'correct', normalizable sequenceID? One way would be a string concatenation of the event keys to define as the primary key. Any hints?

Thanks
Alex
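Michael's normalized layout and query translate directly to any SQL engine. A runnable sketch using Python's sqlite3 in place of MySQL, with his exact sample data; note that in that data the third event of sequence 1 is e_3 (e_5 is the second), so the demo asks for e_3 at position 3 to get a hit:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (
    event_id   INTEGER PRIMARY KEY,
    event_name TEXT
);
CREATE TABLE sequences (
    sequence_id INTEGER,
    position    INTEGER,
    event_id    INTEGER,
    PRIMARY KEY (sequence_id, position)
);
INSERT INTO events VALUES (1,'e_1'),(3,'e_3'),(4,'e_4'),(5,'e_5'),(7,'e_7');
-- sequence 1 is e_1, e_5, e_3; sequence 2 is e_7, e_4
INSERT INTO sequences VALUES (1,1,1),(1,2,5),(1,3,3),(2,1,7),(2,2,4);
""")

# Same shape as Michael's query: which sequence has e_3 as its third event?
hits = conn.execute("""
    SELECT s.sequence_id
    FROM sequences s JOIN events e ON s.event_id = e.event_id
    WHERE s.position = 3 AND e.event_name = 'e_3'
""").fetchall()
print(hits)  # [(1,)]
```

The (sequence_id, position) primary key is what enforces "one event per slot", which the wide event_key1..event_keyN layout could not.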
Re: Keyword Search
It would probably be better to post this to the Lasso discussion list at Blueworld.com, as for Lasso this will require you to use tags, but it's been ages since I coded with Lasso (3.6 and FileMaker :) ) and I can't recall, or never knew, the tag modifications. Good luck.

--- maggie chloe [EMAIL PROTECTED] wrote:

I am new to Lasso and MySQL, but I have a pretty basic question on searching a text field using keywords. I think I am missing something easy. I have a MySQL database and am using Lasso 7. The user enters search information in a form on a search.lasso page, and an inline search is then run on a results.lasso page. I want to parse out what the user enters, search the text field, and retrieve only those records that contain each word of the search. For example, where the field is 'NAME':

Record 1 contains 'John Michael Smith'
Record 2 contains 'John Adams'
Record 3 contains 'John Smith'
Record 4 contains 'Michael Smith'

If the user searches the field by typing 'John Smith', his found set should contain Record 1 and Record 3 only. How do I accomplish this? This is the inline that I'm using:

[Inline:(Action_Params),-Search,-database='DATABASE',-Layout='CONTACTS',-Operator='ft','NAME'=(Action_Param:'NAME')]

Thanks for your help,
David

=====
Robert Reed
512-869-0063 home
512-818-2460 cell
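Independent of Lasso, the matching rule David wants, return only records containing every word of the query, is: split the query on whitespace and require each word to appear. A sketch of that test in Python (in SQL it would be one AND-ed condition per word); the sample records are the ones from his message:

```python
def matches_all_words(query, text):
    """True if every whitespace-separated word of query appears in text
    (case-insensitive, whole-word match)."""
    words = text.lower().split()
    return all(w.lower() in words for w in query.split())

records = ["John Michael Smith", "John Adams", "John Smith", "Michael Smith"]
found = [r for r in records if matches_all_words("John Smith", r)]
print(found)  # ['John Michael Smith', 'John Smith']
```

This reproduces the expected found set (records 1 and 3): 'John Adams' lacks 'Smith' and 'Michael Smith' lacks 'John', so both are excluded.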
Re: C API: mysql_options and mysql_real_connect
Paul DuBois wrote: At 13:03 -0700 8/11/04, V. M. Brasseur wrote: Assuming a my.cnf file which looks like this: [client] port=3306 socket=/path/to/mysql.sock [app] user=appuser password=apppwd host=my.host.com Ignore for now the insecurity of putting a password in the my.cnf file. This is mostly a hypothetical question at the moment. Calling mysql_options(MYSQL, MYSQL_READ_DEFAULT_FILE, /path/to/my.cnf); and mysql_options(MYSQL, MYSQL_READ_DEFAULT_GROUP, app); in the client will read the options in these two groups. How, if at all, would something like this be useful to mysql_real_connect? From my research it appears that you still need to specify the user, host, pwd and port (assuming TCP/IP connection) when calling mysql_real_connect(), so setting these parms in the my.cnf file does not really help for this scenario. Something (a non-API function, most likely) would still need to parse the file separately and grab the parms for passing to mysql_real_connect(). Is this an accurate assessment? No. If you pass NULL in the mysql_real_connect() params, the values from the option file(s) are used. Even for the password param? The mysql_real_connect() write-up in your MySQL book says that a NULL passed for password results in allowing connections only if there is no password in the mysql.user.password column for the current user. Perhaps having the password defined via a mysql_options() call trumps this NULL behavior? --V -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: MySQL won't start... was running fine
This line probably should have been a good hint: InnoDB: File operation call: 'tmpfile'. Any idea why the server crashed in the first place? That's not the sort of thing you want just randomly happening, I'd imagine. Although it might make life a bit more interesting... Cheers. --V Chris Blackwell wrote: I have resolved this issue now, it was actually the permission on /tmp that had become corrupted as a result of the crash I can only assume chmod 777 /tmp chmod +t /tmp this fixed the problem but could have been fixed within minutes not days if the error message had told me which directory was causing the problem :/ chris -Original Message- From: Egor Egorov [mailto:[EMAIL PROTECTED] Sent: 12 August 2004 10:26 To: [EMAIL PROTECTED] Subject: Re: MySQL won't start... was runnnig fine Chris Blackwell [EMAIL PROTECTED] wrote: now when ever i try to start mysql i get the following error in log 040812 09:52:34 mysqld started 040812 9:52:34 Warning: Asked for 196608 thread stack, but got 126976 040812 9:52:34 InnoDB: Error: unable to create temporary file 040812 9:52:34 InnoDB: Operating system error number 13 in a file operation. InnoDB: The error means mysqld does not have the access rights to InnoDB: the directory. InnoDB: File operation call: 'tmpfile'. InnoDB: Cannot continue operation. 040812 09:52:34 mysqld ended [EMAIL PROTECTED] egor]$ perror 13 Error code 13: Permission denied i have chown -R the /var/lib/mysql directory So anyway permission denied. Check the rights on /var/lib/mysql itself -- For technical support contracts, goto https://order.mysql.com/?ref=ensita This email is sponsored by Ensita.net http://www.ensita.net/ __ ___ ___ __ / |/ /_ __/ __/ __ \/ /Egor Egorov / /|_/ / // /\ \/ /_/ / /__ [EMAIL PROTECTED] /_/ /_/\_, /___/\___\_\___/ MySQL AB / Ensita.net ___/ www.mysql.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Help, slave won't stay running!
I can't keep the slave up for more than 10 minutes; I'm constantly getting these errors: 040812 10:32:25 Error reading packet from server: binlog truncated in the middle of event (server_errno=1236) 040812 10:32:25 Got fatal error 1236: 'binlog truncated in the middle of event' from master when reading data from binary log 040812 10:32:25 Slave I/O thread exiting, read up to log 'FINANCE-bin.185', position 284963878 Both servers have plenty of free space. Here is the master setup: skip-locking set-variable= key_buffer_size=1000M set-variable=bulk_insert_buffer_size=256M set-variable=delay_key_write=ALL set-variable=join_buffer_size=256M set-variable= max_allowed_packet=256M set-variable= table_cache=512 set-variable= sort_buffer_size=256M set-variable=tmp_table_size=256M set-variable= record_buffer=256M set-variable= thread_cache=8 set-variable=myisam_sort_buffer_size=256M myisam-recover=BACKUP,FORCE set-variable=read_buffer_size=256M set-variable=interactive_timeout=7200 set-variable=wait_timeout=7200 Here is the slave setup: skip-locking set-variable= key_buffer_size=1500M set-variable=bulk_insert_buffer_size=512M set-variable=delay_key_write=ALL set-variable=join_buffer_size=512M set-variable= max_allowed_packet=384M set-variable= table_cache=512 set-variable= sort_buffer_size=256M set-variable=tmp_table_size=400M set-variable= record_buffer=384M set-variable= thread_cache=8 set-variable=myisam_sort_buffer_size=256M myisam-recover=BACKUP,FORCE set-variable=read_buffer_size=384M set-variable=interactive_timeout=7200 set-variable=wait_timeout=7200 I'm out of ideas; I've played with buffer sizes and packet sizes, but still get the same error. My other master/slave has no problems at all, and the slave is the same server (one box that's a slave for two sites) -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: removing duplicates and giving average of other field.
Jeroen, One minor caveat here. You are grouping by gene_name, but selecting non-aggregate values from the chr and strand columns. Since they aren't part of the list of grouped columns, you'll get effectively randomly chosen values (probably the first found per group) for each from among the possible values per gene_name. (This is why other systems won't allow this.) That is fine so long as the chr and strand values are unique per gene_name. I expect there is one chromosome per gene, so that should be no problem for the chr column. If there is a unique value for strand per gene_name, then you'll have no problem there, either. If there isn't, then the strand column in the new table won't be much use. Sorry if that was obvious to you. My wife's the geneticist in the family. I haven't absorbed enough to have an expectation about uniqueness of strand values, and I wasn't sure if you had absorbed enough SQL to have an expectation about the non-grouped columns. Michael Lachlan Mulcahy wrote: Hi Jeroen, Have you tried creating a new table by selecting the data that you want and then removing the old table and replacing it with the new one. Eg. CREATE TABLE new_g2d ( the precise definition for your g2d table goes here i imagine something like the following chr char(10) NOT NULL DEFAULT '', start int NOT NULL DEFAULT 0, stop int NOT NULL DEFAULT 0, gene_name char(64) NOT NULL DEFAULT '', score decimal(6,3) NOT NULL DEFAULT 000.000, strand char(5) NOT NULL DEFAULT '-' ) SELECT chr, MIN(start), MAX(stop), gene_name, AVG(score), strand FROM g2d GROUP BY gene_name; ALTER TABLE g2d RENAME TO g2d_old; ALTER TABLE new_g2d RENAME TO g2d; When you are satisfied that your new g2d table is what you want you can safely drop your g2d_old table. HTH, Lachlan -Original Message- From: Jeroen Van Goey [mailto:[EMAIL PROTECTED] Sent: Wednesday, 11 August 2004 9:16 PM To: [EMAIL PROTECTED] Subject: removing duplicates and giving average of other field. 
Hi again, thanks for the quick reply on my previous question, the solution given by Michael Stassen worked like a charm. But now I've again run into a situation where I'm stuck with my limited knowledge of MySQL. Given the table: mysql select * from g2d; +---+--+--+-+-++ | chr | start| stop | gene_name | score | strand | +---+--+--+-+-++ | chr13 | 58214248 | 58214733 | ENSG0139734 | 774.518 | - | | chr13 | 58214732 | 58215034 | ENSG0139734 | 774.518 | - | | chr13 | 58215238 | 58215432 | ENSG0139734 | 774.518 | - | | chr13 | 58215021 | 58215182 | ENSG0139734 | 774.518 | - | | chr13 | 58214732 | 58215031 | ENSG0139734 | 755.930 | - | | chr13 | 58214296 | 58214727 | ENSG0139734 | 755.930 | - | | chr13 | 58215045 | 58215098 | ENSG0139734 | 755.930 | - | | chr13 | 65481104 | 65481424 | ENSG0184226 | 896.699 | - | | chr13 | 65481119 | 65481424 | ENSG0184226 | 809.759 | - | | chr13 | 65481116 | 65481355 | ENSG0184226 | 809.759 | - | | chr13 | 65481113 | 65481421 | ENSG0184226 | 808.113 | - | | chr13 | 65481092 | 65481424 | ENSG0184226 | 796.108 | - | | chr13 | 65481110 | 65481424 | ENSG0184226 | 796.011 | - | | chr13 | 65481098 | 65481358 | ENSG0184226 | 796.011 | - | I would like to reduce the table, by combining all the records with the same gene_name. The score field should then give the average of all the scores for that gene_name. The table above would become then: mysql select * from g2d; +---+--+--+-+-++ | chr | start| stop | gene_name | score | strand | +---+--+--+-+-++ | chr13 | 58214248 | 58215432 | ENSG0139734 | 766.551 | - | | chr13 | 65481092 | 65481424 | ENSG0184226 | 819.408 | - | where 766.551 = (((774.518 * 4) + (755.930 *3)) / 7) and 819.408 = ((896.699 + (809.759 *2) + 808.113 + 796.108 + 796.011) / 6) If there's also an easy way to make the start field the lowest value found for that gene_name, and the stop field the highest value, that would be nice, but it isn't that important for my application, so don't bother to much with it. 
Thanks in advance, I appreciate your input very much. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
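[Editor's note: a quick sanity check of the aggregates Lachlan's CREATE ... SELECT would produce, run with Python's bundled SQLite on the ENSG0139734 rows from the post. 766.552 here is the same value the poster truncated to 766.551.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE g2d
    (chr TEXT, start INT, stop INT, gene_name TEXT, score REAL, strand TEXT)""")
conn.executemany("INSERT INTO g2d VALUES (?, ?, ?, ?, ?, ?)", [
    ("chr13", 58214248, 58214733, "ENSG0139734", 774.518, "-"),
    ("chr13", 58214732, 58215034, "ENSG0139734", 774.518, "-"),
    ("chr13", 58215238, 58215432, "ENSG0139734", 774.518, "-"),
    ("chr13", 58215021, 58215182, "ENSG0139734", 774.518, "-"),
    ("chr13", 58214732, 58215031, "ENSG0139734", 755.930, "-"),
    ("chr13", 58214296, 58214727, "ENSG0139734", 755.930, "-"),
    ("chr13", 58215045, 58215098, "ENSG0139734", 755.930, "-"),
])

# One row per gene: lowest start, highest stop, average score.
chr_, start, stop, gene, avg_score = conn.execute(
    """SELECT chr, MIN(start), MAX(stop), gene_name, AVG(score)
       FROM g2d GROUP BY gene_name""").fetchone()
print(start, stop, round(avg_score, 3))  # 58214248 58215432 766.552
```

The MIN(start)/MAX(stop) columns give the "lowest start, highest stop" behavior the poster asked about for free, in the same GROUP BY.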
Re: recommended books for web app.
I'd go along with that recommendation. Terry - Original Message - Welling & Thomson is terrific. - Original Message - From: Kerry Frater To: MySQL List Sent: Thursday, August 12, 2004 8:09 AM Subject: recommended books for web app. I am looking to port an app from an existing web environment to MySQL. The requirement is relatively easy. The tables are read-only and the data is to be accessed only via login password. The login will give a limited view of records based on a master/detail table relationship. I need to be aware of securing the database and have been told by others that I should look to use PHP. I know my local bookstore has the following publications (based on asking about MySQL & PHP): Beginning PHP, Apache, MySQL Web Development, published by Wrox; PHP & MySQL, written by Larry Ullman; PHP & MySQL Web Development, written by Luke Welling & Laura Thomson. Has anyone seen these books and could you possibly recommend one of them? -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: Help, slave won't stay running!
Can you reset the slave to read the next event its relay log? If this is not possible, is refreshing the data from the master a viable option? -Original Message- From: matt ryan To: [EMAIL PROTECTED] Sent: 8/12/04 10:22 AM Subject: Help, slave wont stay running! I cant keep the slave up for more than 10 minutes constantly getting these errors 040812 10:32:25 Error reading packet from server: binlog truncated in the middle of event (server_errno=1236) 040812 10:32:25 Got fatal error 1236: 'binlog truncated in the middle of event' from master when reading data from binary log 040812 10:32:25 Slave I/O thread exiting, read up to log 'FINANCE-bin.185', position 284963878 both servers have plenty of free space Here is the master setup.. skip-locking set-variable= key_buffer_size=1000M set-variable=bulk_insert_buffer_size=256M set-variable=delay_key_write=ALL set-variable=join_buffer_size=256M set-variable= max_allowed_packet=256M set-variable= table_cache=512 set-variable= sort_buffer_size=256M set-variable=tmp_table_size=256M set-variable= record_buffer=256M set-variable= thread_cache=8 set-variable=myisam_sort_buffer_size=256M myisam-recover=BACKUP,FORCE set-variable=read_buffer_size=256M set-variable=interactive_timeout=7200 set-variable=wait_timeout=7200 here is the slave setup skip-locking set-variable= key_buffer_size=1500M set-variable=bulk_insert_buffer_size=512M set-variable=delay_key_write=ALL set-variable=join_buffer_size=512M set-variable= max_allowed_packet=384M set-variable= table_cache=512 set-variable= sort_buffer_size=256M set-variable=tmp_table_size=400M set-variable= record_buffer=384M set-variable= thread_cache=8 set-variable=myisam_sort_buffer_size=256M myisam-recover=BACKUP,FORCE set-variable=read_buffer_size=384M set-variable=interactive_timeout=7200 set-variable=wait_timeout=7200 I'm out of ideas, I've played with buffer sizes, packet sizes, but still get the same error my other master/slave has no problems at all, the slave is the same server 
(one box that's slave for two sites) -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: MySQL won't start... was running fine
Nope, no idea why it crashed, and I don't know enough about redhat to bother investigating. It seemed to hang so it got a kick, when it came backup was bit screwy... so far this has been only problem. It's only a dev box so I'll keep my fingers crossed it's a freak occurrence :) chris -Original Message- From: V. M. Brasseur [mailto:[EMAIL PROTECTED] Sent: 12 August 2004 16:21 To: [EMAIL PROTECTED] Subject: Re: MySQL won't start... was runnnig fine This line probably should have been a good hint: InnoDB: File operation call: 'tmpfile'. Any idea why the server crashed in the first place? That's not the sort of thing you want just randomly happening, I'd imagine. Although it might make life a bit more interesting... Cheers. --V Chris Blackwell wrote: I have resolved this issue now, it was actually the permission on /tmp that had become corrupted as a result of the crash I can only assume chmod 777 /tmp chmod +t /tmp this fixed the problem but could have been fixed within minutes not days if the error message had told me which directory was causing the problem :/ chris -Original Message- From: Egor Egorov [mailto:[EMAIL PROTECTED] Sent: 12 August 2004 10:26 To: [EMAIL PROTECTED] Subject: Re: MySQL won't start... was runnnig fine Chris Blackwell [EMAIL PROTECTED] wrote: now when ever i try to start mysql i get the following error in log 040812 09:52:34 mysqld started 040812 9:52:34 Warning: Asked for 196608 thread stack, but got 126976 040812 9:52:34 InnoDB: Error: unable to create temporary file 040812 9:52:34 InnoDB: Operating system error number 13 in a file operation. InnoDB: The error means mysqld does not have the access rights to InnoDB: the directory. InnoDB: File operation call: 'tmpfile'. InnoDB: Cannot continue operation. 040812 09:52:34 mysqld ended [EMAIL PROTECTED] egor]$ perror 13 Error code 13: Permission denied i have chown -R the /var/lib/mysql directory So anyway permission denied. 
Check the rights on /var/lib/mysql itself -- For technical support contracts, goto https://order.mysql.com/?ref=ensita This email is sponsored by Ensita.net http://www.ensita.net/ __ ___ ___ __ / |/ /_ __/ __/ __ \/ /Egor Egorov / /|_/ / // /\ \/ /_/ / /__ [EMAIL PROTECTED] /_/ /_/\_, /___/\___\_\___/ MySQL AB / Ensita.net ___/ www.mysql.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Help, slave won't stay running!
Victor Pendleton wrote: Can you reset the slave to read the next event in its relay log? If this is not possible, is refreshing the data from the master a viable option? I can start the slave, and it runs a little while, then stops again. I can refresh the data from the master; I've done it 25 times at least, 3 times a week. The database is 90 gig, so it's not very fun! Every time I resync them, I'll reset the master first, resync, set the slave to start on the new master info, and then start the slave. Boom, it fails in 15 minutes. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Fulltext Search takes 17 sec.
If you really want to test the raw query speed, then do a SELECT COUNT(*). That will return just a count of the number of records found, eliminating any speed issues caused by display processing or transfer speeds from the database to the interface. Adding a limit usually speeds things up since you limit the amount of data being transferred out of the database. If your front end and database are not on the same machine, then the amount of data being transferred over the network can have a huge impact. What does your EXPLAIN look like for these queries? On Aug 12, 2004, at 10:23 AM, [EMAIL PROTECTED] wrote: Hi Brent, Oops, sorry about forgetting to post the key buffer size, its: key_buffer_size 262144000 As far as the combined column I'm using, I did it to make the fulltext as simple as possible and keep the index at only 1 column. I am aware that I can create an index for multiple columns but basically I just selected all the columns that may contain keywords for each product and tossed them into 1 table with the PK as product_id. Just thought this might help performance... maybe I'm wrong, I don't know. If I can ever get the MATCH, AGAINST query down to a faster speed then I will really focus on getting the top 3 for each category. Sorting into categories in PHP would work but I'd need a much larger dataset in order to insure that I got all the possible matches for each category... what if there was only 1 match in the 'Movies' category and it was at the bottom of the results...? I'm thinking that we will have to use some other method such as listing by relavence (fulltext ordering) then showing Search within: Category X, Category Y, Category Z... links on the side (like Ebay I guess). Still the problem arises as to which categories qualify. It would be nice to order them by the category with the most matches but I doubt that can be accomplished. Previously I had not been using a LIMIT because I was going to do processing for category grouping etc. 
However, if I do use a LIMIT, the query speeds are almost totally dependent on the number of rows returned: WITH LIMIT: SELECT product_id FROM product_fulltext WHERE MATCH ( search_text ) AGAINST ('san francisco') LIMIT 1000 Query done in 2.01 seconds Num Rows: 1000 WITHOUT LIMIT: SELECT product_id FROM product_fulltext WHERE MATCH ( search_text ) AGAINST ('san francisco') Query done in 13.45 seconds Num Rows: 9287 Strange. Is this typical or do I need to tweak my system variables? - John -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- Brent Baisley Systems Architect Landover Associates, Inc. Search Advisory Services for Advanced Technology Environments p: 212.759.6400/800.759.0577 -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Help, slave won't stay running!
El Jueves, 12 de Agosto de 2004 09:22, matt ryan escribió: I cant keep the slave up for more than 10 minutes constantly getting these errors 040812 10:32:25 Error reading packet from server: binlog truncated in the middle of event (server_errno=1236) 040812 10:32:25 Got fatal error 1236: 'binlog truncated in the middle of event' from master when reading data from binary log 040812 10:32:25 Slave I/O thread exiting, read up to log 'FINANCE-bin.185', position 284963878 I had a similar situation one week ago. Found one of the tables (MyISAM) had a corrupt index. After fixing it, everything was fine again. Regards. -- Alfredo Cole Grupo ACyC www.acyc.com - www.clshonduras.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
active new data with inactive old data
I have some large tables (hundreds of millions of rows) that are extremely active. They have heavy read and heavy modify activity. But, all modifications are only on recent records (data added within the last month is very volatile, but data earlier than that never changes). We use the InnoDB engine for these tables because of the high concurrency (as well as the desire for transactions [potential rollback] on data changes and inserts). But, for all the data that's older than a month and never changes, having it in huge InnoDB tables is very cumbersome (constantly re-backing up all the old data that never changes, etc.). Plus, the data from the past six months is heavily accessed, but older data gets little use and is only necessary as historic information, so it would be nice to pack it away using MyISAM compressed tables. Ideally, what I'd like to do is create packed MyISAM tables of older data, possibly separated by quarter years, keep the last couple months in an InnoDB table, and use a single Merge table to access them. Obviously I can't do that because Merge tables only span MyISAM tables. My current plan is to put old data in packed MyISAM tables made accessible with a single Merge table, put the new data in an InnoDB table, and put my own frontend on it in my code to do UNIONs between the InnoDB and Merge tables as necessary. This is going to be a significant change in my code (plus a bunch of work to setup the job of transferring data from InnoDB to next MyISAM table at quarter-rollover time, etc.), so I'd like to know if anyone else has done something similar and has some suggestions from their experience. Or, if there is simply a better approach in general, then I'd love to hear your suggestions. Better yet, is there a plan to be able to create Merge tables spanning different engine types in the future? -keith -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
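[Editor's note: the application-level UNION frontend the poster describes might look roughly like this. SQLite stands in for both engines, and the table and column names are made up for the sketch; the point is just that one predicate is run against both the hot and archive tables and the results merged.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "current" stands in for the volatile InnoDB table,
# "archive" for the packed-MyISAM merge table of old quarters.
conn.execute("CREATE TABLE current (id INT, created TEXT)")
conn.execute("CREATE TABLE archive (id INT, created TEXT)")
conn.executemany("INSERT INTO archive VALUES (?, ?)",
                 [(1, "2004-01-15"), (2, "2004-04-02")])
conn.execute("INSERT INTO current VALUES (3, '2004-08-01')")

def query_all(where, params=()):
    """Run one predicate against both tables and merge the results,
    as the application-level frontend would."""
    sql = ("SELECT id, created FROM current WHERE " + where +
           " UNION ALL SELECT id, created FROM archive WHERE " + where +
           " ORDER BY id")
    return conn.execute(sql, params * 2).fetchall()

print(query_all("created >= ?", ("2004-04-01",)))
# [(2, '2004-04-02'), (3, '2004-08-01')]
```

If the archive tables are partitioned by date, the frontend can also skip the archive side entirely when the predicate only touches recent data, which is most of the win of this layout.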
Re: Fulltext Search takes 17 sec.
Thanks for all your help guys, Using COUNT(*) I get the following: mysql SELECT COUNT(*) FROM product_fulltext WHERE MATCH(search_text) AGAINST('blue jeans'); +--+ | COUNT(*) | +--+ |51513 | +--+ 1 row in set (48.58 sec) The EXPLAIN for the fulltext queries look like this: mysql EXPLAIN SELECT COUNT(*) FROM product_fulltext WHERE MATCH(search_text) AGAINST('blue jeans'); +--+--+---+---+-+--+--+-+ | table| type | possible_keys | key | key_len | ref | rows | Extra | +--+--+---+---+-+--+--+-+ | product_fulltext | fulltext | search_ft | search_ft | 0 | |1 | Using where | +--+--+---+---+-+--+--+-+ 1 row in set (0.00 sec) Seems like the explain is working properly. But even using COUNT and not pulling any data from the db, mysql still seems to lag significantly depending on the number of fulltext matches. mysql SELECT COUNT(*) FROM product_fulltext WHERE MATCH(search_text) AGAINST('Oregon'); +--+ | COUNT(*) | +--+ | 1876 | +--+ 1 row in set (3.14 sec) mysql SELECT COUNT(*) FROM product_fulltext WHERE MATCH(search_text) AGAINST('Brown'); +--+ | COUNT(*) | +--+ |18510 | +--+ 1 row in set (21.19 sec) mysql SELECT COUNT(*) FROM product_fulltext WHERE MATCH(search_text) AGAINST('Black') LIMIT 10; +--+ | COUNT(*) | +--+ | 120309 | +--+ 1 row in set (1 min 25.00 sec) Ouch, that last one hurt. - John -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Innodb table definitions
From the MySQL docs: Each |MyISAM| table is stored on disk in three files. The files have names that begin with the table name and have an extension to indicate the file type. An `.frm' file stores the table definition. The data file has an `.MYD' (MYData) extension. The index file has an `.MYI' (MYIndex) extension. My question is, why does MySQL create a .frm (table definition) file for a table if that table is of type InnoDB? For example, consider the following: use test; CREATE TABLE mytest ( id INT NOT NULL PRIMARY KEY AUTO_INCREMENT, name VARCHAR(255) NOT NULL ) TYPE=InnoDB; This created inside the 'test' directory: mytest.frm Can anyone provide a reason for this? thanks, Mayuran -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Using Access as front-end for Blob.
Anyone know how to use Access as a front end for accessing BLOBs in MySQL? I have found some sample code for storing BLOBs in MySQL, but for whatever reason, when I export the tables to MySQL and link them, it stops working, even though the BLOBs remain and can be accessed without corruption from other front ends. If I import back into Access without linking, it works. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Help, slave won't stay running!
I deleted every table off the slave and reloaded them. I do this twice a week because it won't replicate. The master server has a check & optimize every Sunday. I had a similar situation one week ago. Found one of the tables (MyISAM) had a corrupt index. After fixing it, everything was fine again. Regards. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Escaped BLOB data in XML
Hello, I have a table with a LONGBLOB column. We store some small images in it. I want to export them in XML format with a schema like: <c>data</c> <c>data</c> ... ... Now the problem is, even if I mysql_real_escape() the data and change entities like <, > to &lt;, &gt;, some of the characters are of ASCII value 12, 13, etc. None of the XML parsers are able to recognise them and they throw an error. I googled but couldn't find a reference on how to handle such characters in XML. Any pointers? I am using Expat as my app's XML parser. Regards, Karam __ Do you Yahoo!? New and Improved Yahoo! Mail - Send 10MB messages! http://promotions.yahoo.com/new_mail -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
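[Editor's note: characters like ASCII 12 are simply not legal in XML 1.0 at all, even escaped or inside CDATA, so no conforming parser will ever accept them. The usual fix is to base64-encode the blob before embedding it. A small Python sketch of the round trip; ElementTree uses Expat underneath, and the element name `c` follows the poster's schema.]

```python
import base64
import xml.etree.ElementTree as ET

# Fake image bytes, including form feed (12), CR (13) and NUL.
blob = bytes([0x89, 0x50, 0x0C, 0x0D, 0x00, 0xFF])

# Encode to base64 text instead of trying to escape raw bytes: the base64
# alphabet contains no control characters, so it is always XML-safe.
encoded = base64.b64encode(blob).decode("ascii")
xml_doc = "<images><c encoding='base64'>%s</c></images>" % encoded

# Expat (which ElementTree wraps) parses it, and the blob round-trips.
root = ET.fromstring(xml_doc)
decoded = base64.b64decode(root.find("c").text)
assert decoded == blob
print(encoded)
```

The consumer of the XML then base64-decodes each element's text to recover the original image bytes.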
Replace delayed locks table
The SQL is: REPLACE DELAYED INTO table (c1, c2, c3) SELECT c1, c2, c3 FROM the temp table. This takes 2 hours to complete; the temp table is rather large. The table being updated is locked the whole time, all web requests are locked, and the pages time out. Is there any way to get this to run without locking the whole table? I thought that with MyISAM it would only lock a table if you have deleted records and then insert records, since it locks to fill the gaps. If I need to switch to another table type, it's an option; having locked tables is NOT an option. Thanks in advance Matt -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
CSV Output
I'm looking for a way to output a file into true CSV format (with quoted fields) using the Perl DBI. Does anyone know of a way to do this (in either native MySQL or Perl)? Thanks as always! -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Help, slave won't stay running!
Check it out: mysql> start slave; Query OK, 0 rows affected (0.00 sec) mysql> start slave; ERROR 1198: This operation cannot be performed with a running slave, run SLAVE STOP first mysql> start slave; ERROR 1198: This operation cannot be performed with a running slave, run SLAVE STOP first mysql> start slave; Query OK, 0 rows affected (0.00 sec) mysql> start slave; Query OK, 0 rows affected (0.00 sec) I can start the slave over and over; it does one event, stops, I start it and it does one event, then stops, over and over and over. I just keep running START SLAVE really fast to get through the updates. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Cannot connect via TCP (Lost connection to MySQL server during query)
Hi, I'm running into troubles trying to connect to MySQL (version 4.0.18) via TCP/IP. Connecting on the actual machine via Unix sockets works just fine--it's solid as a rock. But as soon as I attempt to connect via TCP (from either the local machine or a remote machine), mysqld crashes and I get the ERROR 2013: Lost connection to MySQL server during query error. This happens to me using both version 4.0.18 and 4.0.20 (I was running .20 and downgraded to .18 to see if that fixed the problem. It didn't, but I haven't upgraded back to .20 again yet.) So for instance, after starting mysqld, this works fine: $ mysql -h localhost But the following command does not: $ mysql -h 127.0.0.1 ERROR 2013: Lost connection to MySQL server during query Running mysqladmin version immediately after getting the error confirms that the server did crash and come back up. If I telnet to 127.0.0.1 port 3306, it just immediately closes the connection. (Connection closed by foreign host.) Again, this is due to the server crashing. I haven't been able to get any useful (to me, anyway) information out of mysqld.err. I get a backtrace, but the stack trace ends in New values of fp=0x81bdea4 failed sanity check, terminating stack trace! (which doesn't look normal to me). When I try to follow the instructions at http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html, nm tells me there are no symbols for mysqld. I'm not familiar with resolving stack traces, so treat me as a newbie in that regard. For that matter, it's entirely possible that I'm making a newbie mistake somewhere else. If that's the case, please point me to a FAQ and flame away. :) But I've read the docs and Googled this one pretty thoroughly, and although I've found people that seem to have the same problem, I haven't yet found the answer to that problem. Let me know if I can provide anything to make the problem more clear. Thanks for your help! 
Matt Winckler -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: CSV Output
As is the case for all things Perl, CPAN is your answer: http://search.cpan.org/~alancitt/Text-CSV-0.01/CSV.pm NAME Text::CSV - comma-separated values manipulation routines SYNOPSIS use Text::CSV; $version = Text::CSV-version(); # get the module version $csv = Text::CSV-new(); # create a new object $status = $csv-combine(@columns);# combine columns into a string $line = $csv-string(); # get the combined string $status = $csv-parse($line); # parse a CSV string into fields @columns = $csv-fields();# get the parsed fields $status = $csv-status(); # get the most recent status $bad_argument = $csv-error_input(); # get the most recent bad argument DESCRIPTION Text::CSV provides facilities for the composition and decomposition of comma-separated values. An instance of the Text::CSV class can combine fields into a CSV string and parse a CSV string into fields. Eamon Daly NextWave Media Group LLC Tel: 1 773 975-1115 Fax: 1 773 913-0970 - Original Message - From: David Perron [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Thursday, August 12, 2004 1:07 PM Subject: CSV Output Im looking for a way to output a file into true CSV format (with quoted fields) using the Perl DBI. Does anyone know of a way to do this (in either native MySQL or Perl)? Thanks as always! -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: CSV Output
You can use mysqldump with the --tab option together with --fields-terminated-by=',' and --fields-enclosed-by='"' to get quoted, comma-separated output. -Original Message- From: David Perron To: [EMAIL PROTECTED] Sent: 8/12/04 1:07 PM Subject: CSV Output Im looking for a way to output a file into true CSV format (with quoted fields) using the Perl DBI. Does anyone know of a way to do this (in either native MySQL or Perl)? Thanks as always! -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Fulltext Search takes 17 sec.
Those times shouldn't be that far apart. What is the hardware (RAM, CPU, etc.) and OS you are running this on? Is there anything else running on it that might be causing memory to page out? If you are running Unix, try running: vm_stat 1 That will show you memory stats every second. Watch the pagein/pageout columns when you have a query running. Ideally, they should be zero. Otherwise you are basically using your hard drive as RAM, which would be the cause of your slow down. On Aug 12, 2004, at 12:53 PM, [EMAIL PROTECTED] wrote: Thanks for all your help guys, Using COUNT(*) I get the following: mysql SELECT COUNT(*) FROM product_fulltext WHERE MATCH(search_text) AGAINST('blue jeans'); +--+ | COUNT(*) | +--+ |51513 | +--+ 1 row in set (48.58 sec) The EXPLAIN for the fulltext queries look like this: mysql EXPLAIN SELECT COUNT(*) FROM product_fulltext WHERE MATCH(search_text) AGAINST('blue jeans'); +--+--+---+---+- +--+--+-+ | table| type | possible_keys | key | key_len | ref | rows | Extra | +--+--+---+---+- +--+--+-+ | product_fulltext | fulltext | search_ft | search_ft | 0 | |1 | Using where | +--+--+---+---+- +--+--+-+ 1 row in set (0.00 sec) Seems like the explain is working properly. But even using COUNT and not pulling any data from the db, mysql still seems to lag significantly depending on the number of fulltext matches. mysql SELECT COUNT(*) FROM product_fulltext WHERE MATCH(search_text) AGAINST('Oregon'); +--+ | COUNT(*) | +--+ | 1876 | +--+ 1 row in set (3.14 sec) mysql SELECT COUNT(*) FROM product_fulltext WHERE MATCH(search_text) AGAINST('Brown'); +--+ | COUNT(*) | +--+ |18510 | +--+ 1 row in set (21.19 sec) mysql SELECT COUNT(*) FROM product_fulltext WHERE MATCH(search_text) AGAINST('Black') LIMIT 10; +--+ | COUNT(*) | +--+ | 120309 | +--+ 1 row in set (1 min 25.00 sec) Ouch, that last one hurt. 
- John -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] -- Brent Baisley Systems Architect Landover Associates, Inc. Search Advisory Services for Advanced Technology Environments p: 212.759.6400/800.759.0577 -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: CSV Output
At 11:07 AM 8/12/2004, David Perron wrote: I'm looking for a way to output a file into true CSV format (with quoted fields) using the Perl DBI. Does anyone know of a way to do this (in either native MySQL or Perl)? Thanks as always! You could try DBD::CSV: http://search.cpan.org/search?query=DBD%3A%3ACSV It calls itself alpha software, but states that that's due to the interface possibly changing, not the code quality. YMMV. -jc
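For the Perl route, the Text::CSV module handles the quoting. The same idea in Python's standard csv module — shown here purely as an illustration of fully quoted output, not as the poster's DBI code — looks like this (the sample rows are made up):

```python
import csv
import io

# Pretend these rows came back from a database fetchall().
rows = [(1, "Smith, John", 'note with "quotes"'),
        (2, "Jones", "plain")]

buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)  # quote every field
writer.writerow(["id", "name", "note"])
writer.writerows(rows)
print(buf.getvalue())
```

Embedded commas stay inside the quotes, and embedded double quotes are doubled per the CSV convention, so the file round-trips cleanly.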
GUI for MySQL
This is my first attempt to design and test MySQL. I have used MS SQL for a number of years. I would appreciate it if members of this list could recommend a good GUI application for MySQL. I want the GUI application to design DBs, design queries, etc. Thanks. Kirti
AW: GUI for MySQL
Kirti Have a look at the free DBManager Pro from DBTools: http://www.dbtools.com.br/EN/dbmanagerpro.php Freddie -----Original Message----- From: Kirti S. Bajwa [mailto:[EMAIL PROTECTED] Sent: Thursday, 12 August 2004 20:09 To: '[EMAIL PROTECTED]' Subject: GUI for MySQL [original message snipped]
RE: GUI for MySQL
I would suggest DBDesigner, available at http://www.fabforce.net/ -----Original Message----- From: Kirti S. Bajwa [mailto:[EMAIL PROTECTED] Sent: Thursday, 12 August 2004 12:09 p.m. To: '[EMAIL PROTECTED]' Subject: GUI for MySQL [original message snipped]
Help with sql without using INTERSECT
I have the following table:

+--------+----------+
| ID_AGE | ID_ENTRY |
+--------+----------+
|      1 |        1 |
|      1 |        4 |
|      1 |        5 |
|      2 |        1 |
|      2 |        2 |
|      2 |        3 |
|      2 |        4 |
|      2 |        6 |
|      2 |        7 |
|      2 |        8 |
|      2 |       10 |
|      2 |       11 |
|      2 |       13 |
|      2 |       14 |
|      2 |       15 |
|      2 |       19 |
|      2 |       20 |
|      2 |       21 |
|      2 |       22 |
|      2 |       24 |
|      3 |       14 |
|      3 |       16 |
|      3 |       17 |
|      3 |       18 |
|      3 |       19 |
|      3 |       22 |
+--------+----------+

And since INTERSECT is not currently supported, how do I select the ID_ENTRY values that have both 1 and 2 for ID_AGE?

SELECT ID_ENTRY WHERE ID_AGE = 1 INTERSECT SELECT ID_ENTRY WHERE ID_AGE = 2;

The results should be 1 and 4. Thanks in advance.
RE: Fulltext Search takes 17 sec.
Thanks for the vmstat tip. I ran vmstat 1 on the query on a slightly quicker query so I wouldn't have a ton of numbers to post from the vmstat. mysql SELECT COUNT(*) FROM product_fulltext WHERE MATCH(search_text) AGAINST('Pink'); +--+ | COUNT(*) | +--+ |12231 | +--+ 1 row in set (8.05 sec) procs memoryswap io system cpu r b w swpd free buff cache si sobibo incs us sy id 0 0 0 200888 29572 14820 356696 0 0 0 0 53140 0 0 100 0 1 0 200888 28356 14828 357772 0 0 1084 0 614 128 1 0 99 0 1 0 200888 25932 14836 360204 0 0 243228 835 360 0 0 99 0 1 0 200888 24412 14836 362944 0 0 2740 0 867 398 0 1 99 0 1 0 200888 21792 14840 365560 0 0 261612 859 374 0 0 100 0 1 0 200888 18664 14840 368688 0 0 3128 0 903 400 0 1 99 0 1 0 200888 16016 14840 371336 0 0 2648 0 870 390 0 0 100 0 1 0 200888 13500 14844 373848 0 0 251224 870 372 0 0 99 0 1 0 200888 11368 14840 375984 0 0 2132 0 822 392 0 0 99 0 0 0 200888 10100 14848 377244 0 0 126012 717 289 0 0 100 0 0 0 200888 10100 14828 377264 0 0 0 0 52420 0 0 100 I really don't know what I'm looking for here, is bi and bo the page-in and page-out variables you were talking about, or is it si and so. The system shouldn't be taxed at all since I am the only user on the machine and its not running anything else at the moment. The server is a Dual Xeon 2GHz, 512 MB RAM, 72 GB hard drive running on Linux RedHat 7.3 I did notice that the key_buffer_size and sort_buffer_size were a little high (I guess I though the machine had 1Gb of RAM instead of 512) so I decreased the key_buffer_size to 125Mb and the sort_buffer_size to 45 Mb. Restarted mysql and ran an identical query, the result was about 20% slower after I lowered the buffer sizes.(Guess it didn't help) - John - John -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Documentation problem?
hi all. In my struggles to build MySQL 4.1.3-beta, I read a lot of online documentation, and in the process I found a lot of info about various configure flags in writeups all over the web. One problem I encountered, and finally figured out, was that some documentation names the flag --with-extra-charsets as --with-extra-charset. I eventually got it right, but most of what I read omitted the (s), and the configure script (quietly) accepts the incorrect flag. I don't know what can be done; with so many contributors to the documentation, reviewing everything in that much detail doesn't seem wholly reasonable, and I'd guess there are a myriad of similar little issues. Any thoughts on how to minimize this problem? Just wanted to bring this out. thanks sean peters [EMAIL PROTECTED]
Re: Help with sql without using INTERSECT
Jeff Meyer wrote: [sample table snipped] And since INTERSECT is not currently supported, how do I select the ID_ENTRY that has both 1 and 2 for ID_AGE? The results should be 1 and 4.

select a.id_entry
from idtable a, idtable b
where a.id_entry = b.id_entry
  and a.id_age = 1
  and b.id_age = 2;
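The self-join above can be sanity-checked outside MySQL. A minimal sketch using Python's built-in sqlite3 module, with the table name idtable borrowed from the reply (the original post never names the table) and a subset of the sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE idtable (id_age INTEGER, id_entry INTEGER)")
rows = [(1, 1), (1, 4), (1, 5),
        (2, 1), (2, 2), (2, 3), (2, 4), (2, 6), (2, 7)]
conn.executemany("INSERT INTO idtable VALUES (?, ?)", rows)

# Emulate INTERSECT with a self-join: an id_entry survives only if it
# appears under id_age = 1 (alias a) AND under id_age = 2 (alias b).
result = conn.execute(
    "SELECT a.id_entry FROM idtable a, idtable b "
    "WHERE a.id_entry = b.id_entry AND a.id_age = 1 AND b.id_age = 2 "
    "ORDER BY a.id_entry").fetchall()
print(result)  # [(1,), (4,)]
```

Entries 1 and 4 are the only ones present under both ages, matching the result the poster expected.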
Re: Replace delayed locks table
matt ryan wrote: The SQL is: REPLACE DELAYED INTO table (c1, c2, c3) SELECT c1, c2, c3 FROM temp_table. This takes 2 hours to complete; the temp table is rather large. The table being updated is locked the whole time, all web requests are locked, and the pages time out. Is there any way to get this to run without locking the whole table? I thought with MyISAM it would only lock a table when you delete records; when you insert records, it locks it to fill the gaps. REPLACE deletes and inserts. If I need to switch to another table type, it's an option; having locked tables is NOT an option. Thanks in advance Matt
Re: Innodb table definitions
Mayuran Yogarajah wrote: From the MySQL docs: Each |MyISAM| table is stored on disk in three files. The files have names that begin with the table name and have an extension to indicate the file type. An `.frm' file stores the table definition. The data file has an `.MYD' (MYData) extension. The index file has an `.MYI' (MYIndex) extension. My question is, why does MySQL create a .frm (table definition) for a table if that table is of type InnoDB. For example, consider the following: use test; CREATE TABLE mytest ( id INT NOT NULL PRIMARY KEY AUTO_INCREMENT, name VARCHAR(255) NOT NULL ) TYPE=InnoDB; This created inside the 'test' directory: mytest.frm Can anyone provide a reason for this. To share code. Show create table, etc. thanks, Mayuran -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Cannot connect via TCP (Lost connection to MySQL server during query)
What hardware and OS? How did you get and install mysql? MySQL supplied binary? 3rd party binary? Built from source? If the answer is not MySQL supplied binary, my first suggestion would be to try that to see if the problem goes away. Michael Matt Winckler wrote: Hi, I'm running into troubles trying to connect to MySQL (version 4.0.18) via TCP/IP. Connecting on the actual machine via Unix sockets works just fine--it's solid as a rock. But as soon as I attempt to connect via TCP (from either the local machine or a remote machine), mysqld crashes and I get the ERROR 2013: Lost connection to MySQL server during query error. This happens to me using both version 4.0.18 and 4.0.20 (I was running .20 and downgraded to .18 to see if that fixed the problem. It didn't, but I haven't upgraded back to .20 again yet.) So for instance, after starting mysqld, this works fine: $ mysql -h localhost But the following command does not: $ mysql -h 127.0.0.1 ERROR 2013: Lost connection to MySQL server during query Running mysqladmin version immediately after getting the error confirms that the server did crash and come back up. If I telnet to 127.0.0.1 port 3306, it just immediately closes the connection. (Connection closed by foreign host.) Again, this is due to the server crashing. I haven't been able to get any useful (to me, anyway) information out of mysqld.err. I get a backtrace, but the stack trace ends in New values of fp=0x81bdea4 failed sanity check, terminating stack trace! (which doesn't look normal to me). When I try to follow the instructions at http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html, nm tells me there are no symbols for mysqld. I'm not familiar with resolving stack traces, so treat me as a newbie in that regard. For that matter, it's entirely possible that I'm making a newbie mistake somewhere else. If that's the case, please point me to a FAQ and flame away. 
:) But I've read the docs and Googled this one pretty thoroughly, and although I've found people that seem to have the same problem, I haven't yet found the answer to that problem. Let me know if I can provide anything to make the problem more clear. Thanks for your help! Matt Winckler -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Cannot connect via TCP (Lost connection to MySQL server during query)
Michael Stassen wrote: What hardware and OS? Pentium II 300, 192 MB RAM, almost-brand-new 80GB hard drive, running Gentoo Linux (kernel 2.4.25-gentoo). How did you get and install mysql? MySQL supplied binary? 3rd party binary? Built from source? If the answer is not MySQL supplied binary, my first suggestion would be to try that to see if the problem goes away. I installed it from an ebuild, via Gentoo's portage system, compiled from source. Thanks for the suggestion; I'll give the MySQL binaries a shot. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Escaped BLOB data in XML
Karam Chand wrote: I have a table with a LONGBLOB column. We store some small images in it. I want to export them in XML format with a schema like:

<c>data</c>
<c>data</c>
...

Now the problem is, even if I mysql_real_escape() the data and change entities like < and > to &lt; and &gt;, some of the characters are of ASCII value 12, 13, etc. None of the XML parsers are able to recognize them, and they throw an error. I googled but couldn't find a reference on how to handle such characters in XML.

This doesn't have anything to do with MySQL. XML isn't really designed for directly containing binary data, so people generally use Base64 encoding (or occasionally some other method of encoding binary data in ASCII). The XML parser isn't going to be able to return the raw binary data -- you'll have to decode it. -- Keith Ivey [EMAIL PROTECTED] Washington, DC
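As a sketch of the Base64 approach (Python standard library, purely illustrative; the element name c follows the poster's schema, and the blob contents are made up): control bytes like ASCII 12 and 13 survive the round trip because only characters from [A-Za-z0-9+/=] ever appear in the XML text node.

```python
import base64
import xml.etree.ElementTree as ET

# Raw BLOB data, including bytes that are illegal in XML text.
blob = bytes([12, 13, 0, 255]) + b"...image bytes..."

# Encode to Base64 before placing it in the XML text node.
root = ET.Element("images")
ET.SubElement(root, "c").text = base64.b64encode(blob).decode("ascii")
doc = ET.tostring(root)

# Any XML parser can read it back; decode to recover the raw bytes.
restored = base64.b64decode(ET.fromstring(doc).find("c").text)
print(restored == blob)  # True
```

The consumer of the XML has to know the content is Base64, so it is worth recording that in the schema or an attribute.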
my.cnf not read in 4.1.3
Hi all: I am running MySQL 4.1.3 under Debian. I have the problem that my options in my.cnf are not going into effect. I want to increase max_heap_table_size from the default of 16M to 400M (I have 4GB of memory), but putting that line into my.cnf doesn't do anything. (Yes it is in the mysqld section of the file.) Also I have tried doing a set global max_heap_table_size=4; in the mysql command line but it doesn't change the value either, according to SHOW VARIABLES. What am I doing wrong? Regards, Ryan -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Replace delayed locks table
If you are using a MyISAM table type, the entire table is either locked or it isn't. Normally this isn't a problem because most statements happen quickly. However, yours takes a couple of hours to finish, so the entire table has to stay locked until your statement completes. The way I see it, you have only a few options:

1) Redesign your update to happen in smaller batches (scripting, maybe?). This gives the engine a chance to handle other requests between the updates/replaces.

2) Change your update/replace so that it doesn't need to work on so many records. That will help it to finish sooner and not lock you down for so long.

3) Change to the InnoDB table type. It supports row-level locking.

Shawn Green Database Administrator Unimin Corporation - Spruce Pine matt ryan [EMAIL PROTECTED] wrote on 08/12/2004 01:59:42 PM: [original message snipped]
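Option 1 (smaller batches) can be sketched as follows. This is a hedged illustration only: sqlite3 stands in for MySQL, and the table and column names (target, temp_table, c1, c2) are placeholders for the poster's real schema. The point is that on a real MyISAM server, each short REPLACE releases the table lock before the next chunk starts, so web queries can interleave instead of timing out behind one two-hour statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (c1 INTEGER PRIMARY KEY, c2 TEXT)")
conn.execute("CREATE TABLE temp_table (c1 INTEGER PRIMARY KEY, c2 TEXT)")
conn.executemany("INSERT INTO temp_table VALUES (?, ?)",
                 [(i, "row%d" % i) for i in range(1000)])

BATCH = 100  # tune so each chunk finishes in a second or two
max_c1 = conn.execute("SELECT MAX(c1) FROM temp_table").fetchone()[0]
for lo in range(0, max_c1 + 1, BATCH):
    # One short statement per chunk instead of one huge statement.
    conn.execute("REPLACE INTO target "
                 "SELECT c1, c2 FROM temp_table WHERE c1 >= ? AND c1 < ?",
                 (lo, lo + BATCH))
    conn.commit()  # on the real server, the lock drops between chunks

copied = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(copied)  # 1000
```

Chunking on the primary key keeps each REPLACE ... SELECT range scan cheap; any other batching key works as long as the chunks partition the temp table.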
Re: Fulltext Search takes 17 sec.
Grumble, grumble. I was hoping vmstat would tell us more. In a nutshell, every system has a bottleneck, and the bottleneck is always CPU, RAM, I/O, or network. Nutshells are great since they make things seem so simple: you've got a 1 in 4 chance of picking the right piece without doing any analysis. What vmstat does tell us is that the bottleneck is not the CPU, since it is idle almost the entire time (the last column). Your memory swpd, si, and so columns don't change, so the OS is fine with the RAM. So we've ruled out CPU and RAM, at least as far as the OS is concerned. That leaves disk and network. Well, it's not the network, since we are not using it. So that leaves I/O, which is almost always disk. Your bi (blocks in) column climbs, as do your interrupts (in) and context switches (cs). A process can be kicked off the CPU (interrupt and/or context switch) because it lost priority or because it is blocked (i.e. waiting for data). Since I'm guessing I/O is your problem, you may try running iostat now. That will show you the disk and CPU activity. Now, all these numbers are from the OS point of view, not MySQL's. So if MySQL has a low priority (nice value), it won't be given enough resources to tax any part of the system. That's an extreme example; you won't likely need to change the nice value of MySQL. But for fun, you could try changing it on your system to see how it affects things. This isn't a technique you should rely on, though. On Aug 12, 2004, at 2:50 PM, [EMAIL PROTECTED] wrote: [quoted message and vmstat output snipped] -- Brent Baisley Systems Architect Landover Associates, Inc. Search Advisory Services for Advanced Technology Environments p: 212.759.6400/800.759.0577
Re: Help with sql without using INTERSECT
INTERSECT sounds very much like UNION DISTINCT (http://dev.mysql.com/doc/mysql/en/UNION.html), but beware: UNION DISTINCT merges the rows found by either SELECT, so a query like

(SELECT ID_ENTRY FROM table WHERE ID_AGE = 1) UNION DISTINCT (SELECT ID_ENTRY FROM table WHERE ID_AGE = 2)

returns entries with either age, not the intersection you asked for. However, I can think of other ways of answering this same question:

SELECT ID_ENTRY, COUNT(1)
FROM table
WHERE ID_AGE IN (1, 2)
GROUP BY ID_ENTRY
HAVING COUNT(1) = 2

SELECT t1.ID_ENTRY
FROM table t1
INNER JOIN table t2
   ON t1.ID_ENTRY = t2.ID_ENTRY
  AND t1.ID_AGE = 1
  AND t2.ID_AGE = 2

Use whichever seems best for your circumstances. Shawn Green Database Administrator Unimin Corporation - Spruce Pine news [EMAIL PROTECTED] wrote on 08/12/2004 02:48:21 PM: [original question and sample table snipped]
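The GROUP BY ... HAVING variant can likewise be checked with a quick sketch (Python's sqlite3 standing in for MySQL, table name idtable assumed). Note the caveat it carries: COUNT reaching 2 proves both ages are present only if each (id_age, id_entry) pair occurs at most once in the table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE idtable (id_age INTEGER, id_entry INTEGER)")
conn.executemany("INSERT INTO idtable VALUES (?, ?)",
                 [(1, 1), (1, 4), (1, 5),
                  (2, 1), (2, 2), (2, 4), (2, 6),
                  (3, 4)])  # age 3 row is filtered out by the WHERE

# Rows matching either age are grouped per entry; keeping only groups
# of size 2 leaves the entries present under BOTH id_age = 1 and 2.
hits = conn.execute(
    "SELECT id_entry FROM idtable WHERE id_age IN (1, 2) "
    "GROUP BY id_entry HAVING COUNT(1) = 2 ORDER BY id_entry").fetchall()
print(hits)  # [(1,), (4,)]
```

If duplicate pairs are possible, COUNT(DISTINCT id_age) = 2 is the safer predicate.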
Re: Fulltext Search takes 17 sec.
Wow, thanks Brent, nice to learn something new (I'm not much of a 'server' guy). Tried iostat and got some results while performing a general query:

mysql> SELECT count(*) FROM product_fulltext WHERE MATCH(search_text) AGAINST('DISK');
+----------+
| count(*) |
+----------+
|     4975 |
+----------+
1 row in set (11.28 sec)

IOStat Output --

avg-cpu:  %user  %nice  %sys  %idle
           0.25   0.00  0.00  99.75
Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dev8-0    0.00        0.00        0.00         0         0
dev8-1    0.00        0.00        0.00         0         0

avg-cpu:  %user  %nice  %sys  %idle
           0.25   0.00  0.00  99.75
Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dev8-0  110.00     1872.00       88.00      1872        88
dev8-1    0.00        0.00        0.00         0         0

avg-cpu:  %user  %nice  %sys  %idle
           0.25   0.00  0.25  99.50
Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dev8-0  347.00     6200.00      240.00      6200       240
dev8-1    0.00        0.00        0.00         0         0

avg-cpu:  %user  %nice  %sys  %idle
           1.25   0.00  0.75  98.00
Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dev8-0  140.00     2024.00     1408.00      2024      1408
dev8-1    0.00        0.00        0.00         0         0

avg-cpu:  %user  %nice  %sys  %idle
           0.00   0.00  0.50  99.50
Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dev8-0  342.00     6568.00      112.00      6568       112
dev8-1    0.00        0.00        0.00         0         0

avg-cpu:  %user  %nice  %sys  %idle
           0.00   0.00  0.50  99.50
Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dev8-0  368.00     7600.00        0.00      7600         0
dev8-1    0.00        0.00        0.00         0         0

avg-cpu:  %user  %nice  %sys  %idle
           0.25   0.00  0.50  99.25
Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dev8-0  363.00     7160.00       88.00      7160        88
dev8-1    0.00        0.00        0.00         0         0

avg-cpu:  %user  %nice  %sys  %idle
           0.25   0.00  0.50  99.25
Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dev8-0  371.00     7104.00        0.00      7104         0
dev8-1    0.00        0.00        0.00         0         0

avg-cpu:  %user  %nice  %sys  %idle
           0.00   0.00  0.25  99.75
Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dev8-0  343.00     6360.00        0.00      6360         0
dev8-1    0.00        0.00        0.00         0         0

avg-cpu:  %user  %nice  %sys  %idle
           0.25   0.00  0.50  99.25
Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dev8-0  360.00     6696.00       64.00      6696        64
dev8-1    0.00        0.00        0.00         0         0

avg-cpu:  %user  %nice  %sys  %idle
           0.00   0.00  0.25  99.75
Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dev8-0  367.00     7184.00        0.00      7184         0
dev8-1    0.00        0.00        0.00         0         0

avg-cpu:  %user  %nice  %sys  %idle
           0.25   0.00  0.00  99.75
Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dev8-0  330.00     6448.00       56.00      6448        56
dev8-1    0.00        0.00        0.00         0         0

avg-cpu:  %user  %nice  %sys  %idle
           0.25   0.00  1.00  98.75
Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dev8-0  380.00     7176.00        0.00      7176         0
dev8-1    0.00        0.00        0.00         0         0

avg-cpu:  %user  %nice  %sys  %idle
           0.00   0.00  0.00 100.00
Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dev8-0    0.00        0.00        0.00         0         0
dev8-1    0.00        0.00        0.00         0         0

Not knowing much about this, it looks like the write speed is pretty slow (I guess it should take longer than reads though).
Re: my.cnf not read in 4.1.3
Are you sure that the server is using the my.cnf file which you are changing? After making the change, what is the output of --print-defaults for mysqld? If it's not as expected, odds are good that the server is getting its defaults from a different location. --V [EMAIL PROTECTED] wrote: Hi all: I am running MySQL 4.1.3 under Debian. I have the problem that my options in my.cnf are not going into effect. I want to increase max_heap_table_size from the default of 16M to 400M (I have 4GB of memory), but putting that line into my.cnf doesn't do anything. (Yes it is in the mysqld section of the file.) Also I have tried doing a set global max_heap_table_size=4; in the mysql command line but it doesn't change the value either, according to SHOW VARIABLES. What am I doing wrong? Regards, Ryan -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: my.cnf not read in 4.1.3
What else is in the my.cnf file? For instance, when I run `mysqld --print-defaults` on my system I see this: mysqld would have been started with the following arguments: --basedir=/path/to/mysql-4.0 --datadir=/path/to/mysql-4.0/data --port=1032 --socket=/path/to/mysql-4.0/mysql.sock --user=username --log=/path/to/mysql-4.0/logs/this.log --log-slow-queries=/path/to/mysql-4.0/logs/this-slow.log --log-error=/path/to/mysql-4.0/logs/this-err.log --log-long-format --tmpdir=/path/to/mysql-4.0/tmp Each of those parameters is an option set in the my.cnf file. So I'd imagine that if you have anything at all in the [mysqld] group in the file you'd be seeing them in the --print-defaults output. If there are no objections from your sysadmin, I suggest moving your my.cnf file from /etc/mysql straight into /etc. Either that or into the default datadir for your installation. Both should work better for you (but the /etc/my.cnf location would be a sure-thing). Cheers, --V [EMAIL PROTECTED] wrote: The output of mysqld --print-defaults is blank, so that means it should be looking for my.cnf in the default location, which is /etc/mysql/my.cnf, right? I am indeed editing the file in that location. Ryan Are you sure that the server is using the my.cnf file which you are changing? After making the change, what is the output of --print-defaults for mysqld? If it's not as expected, odds are good that the server is getting its defaults from a different location. --V [EMAIL PROTECTED] wrote: Hi all: I am running MySQL 4.1.3 under Debian. I have the problem that my options in my.cnf are not going into effect. I want to increase max_heap_table_size from the default of 16M to 400M (I have 4GB of memory), but putting that line into my.cnf doesn't do anything. (Yes it is in the mysqld section of the file.) Also I have tried doing a set global max_heap_table_size=4; in the mysql command line but it doesn't change the value either, according to SHOW VARIABLES. What am I doing wrong? 
Regards, Ryan -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
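For reference, a minimal option file along the lines suggested above, placed in the sure-thing /etc/my.cnf location (the 400M figure is simply the value the original poster wanted; size suffixes like M are accepted in option files):

```ini
# /etc/my.cnf -- a location mysqld reads by default
[mysqld]
max_heap_table_size = 400M
```

After restarting the server, `mysqld --print-defaults` should echo the option back, and SHOW VARIABLES LIKE 'max_heap_table_size' should report 419430400 (400 * 1024 * 1024).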
Re: C API: mysql_options and mysql_real_connect
At 8:06 -0700 8/12/04, V. M. Brasseur wrote: Paul DuBois wrote: At 13:03 -0700 8/11/04, V. M. Brasseur wrote: Assuming a my.cnf file which looks like this: [client] port=3306 socket=/path/to/mysql.sock [app] user=appuser password=apppwd host=my.host.com Ignore for now the insecurity of putting a password in the my.cnf file. This is mostly a hypothetical question at the moment. Calling mysql_options(MYSQL, MYSQL_READ_DEFAULT_FILE, /path/to/my.cnf); and mysql_options(MYSQL, MYSQL_READ_DEFAULT_GROUP, app); in the client will read the options in these two groups. How, if at all, would something like this be useful to mysql_real_connect? From my research it appears that you still need to specify the user, host, pwd and port (assuming TCP/IP connection) when calling mysql_real_connect(), so setting these parms in the my.cnf file does not really help for this scenario. Something (a non-API function, most likely) would still need to parse the file separately and grab the parms for passing to mysql_real_connect(). Is this an accurate assessment? No. If you pass NULL in the mysql_real_connect() params, the values from the option file(s) are used. Even for the password param? The mysql_real_connect() write-up in your MySQL book says that a NULL passed for password results in allowing connections only if there is no password in the mysql.user.password column for the current user. Perhaps having the password defined via a mysql_options() call trumps this NULL behavior? Yes, that's correct. I take it that you're not finding this to be true? -- Paul DuBois, MySQL Documentation Team Madison, Wisconsin, USA MySQL AB, www.mysql.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: C API: mysql_options and mysql_real_connect
Paul DuBois wrote: At 8:06 -0700 8/12/04, V. M. Brasseur wrote: Paul DuBois wrote: At 13:03 -0700 8/11/04, V. M. Brasseur wrote: Assuming a my.cnf file which looks like this: [client] port=3306 socket=/path/to/mysql.sock [app] user=appuser password=apppwd host=my.host.com Ignore for now the insecurity of putting a password in the my.cnf file. This is mostly a hypothetical question at the moment. Calling mysql_options(MYSQL, MYSQL_READ_DEFAULT_FILE, /path/to/my.cnf); and mysql_options(MYSQL, MYSQL_READ_DEFAULT_GROUP, app); in the client will read the options in these two groups. How, if at all, would something like this be useful to mysql_real_connect? From my research it appears that you still need to specify the user, host, pwd and port (assuming TCP/IP connection) when calling mysql_real_connect(), so setting these parms in the my.cnf file does not really help for this scenario. Something (a non-API function, most likely) would still need to parse the file separately and grab the parms for passing to mysql_real_connect(). Is this an accurate assessment? No. If you pass NULL in the mysql_real_connect() params, the values from the option file(s) are used. Even for the password param? The mysql_real_connect() write-up in your MySQL book says that a NULL passed for password results in allowing connections only if there is no password in the mysql.user.password column for the current user. Perhaps having the password defined via a mysql_options() call trumps this NULL behavior? Yes, that's correct. I take it that you're not finding this to be true? I can't tell yet, as I haven't gotten the coding done. This was mostly a fact-finding excursion, setting up expectations for when I've finally finished with my changes. Many thanks for the assist. You've cleared up a lot for me. --V -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: C API: mysql_options and mysql_real_connect
At 14:40 -0700 8/12/04, V. M. Brasseur wrote: Paul DuBois wrote: At 8:06 -0700 8/12/04, V. M. Brasseur wrote: Paul DuBois wrote: At 13:03 -0700 8/11/04, V. M. Brasseur wrote: Assuming a my.cnf file which looks like this: [client] port=3306 socket=/path/to/mysql.sock [app] user=appuser password=apppwd host=my.host.com Ignore for now the insecurity of putting a password in the my.cnf file. This is mostly a hypothetical question at the moment. Calling mysql_options(MYSQL, MYSQL_READ_DEFAULT_FILE, /path/to/my.cnf); and mysql_options(MYSQL, MYSQL_READ_DEFAULT_GROUP, app); in the client will read the options in these two groups. How, if at all, would something like this be useful to mysql_real_connect? From my research it appears that you still need to specify the user, host, pwd and port (assuming TCP/IP connection) when calling mysql_real_connect(), so setting these parms in the my.cnf file does not really help for this scenario. Something (a non-API function, most likely) would still need to parse the file separately and grab the parms for passing to mysql_real_connect(). Is this an accurate assessment? No. If you pass NULL in the mysql_real_connect() params, the values from the option file(s) are used. Even for the password param? The mysql_real_connect() write-up in your MySQL book says that a NULL passed for password results in allowing connections only if there is no password in the mysql.user.password column for the current user. Perhaps having the password defined via a mysql_options() call trumps this NULL behavior? Yes, that's correct. I take it that you're not finding this to be true? I can't tell yet, as I haven't gotten the coding done. This was mostly a fact-finding excursion, setting up expectations for when I've finally finished with my changes. Many thanks for the assist. You've cleared up a lot for me. 
Here's a short test program:

#include <my_global.h>
#include <my_sys.h>
#include <mysql.h>

static MYSQL *conn;    /* pointer to connection handler */

void print_error (MYSQL *conn, char *message)
{
  fprintf (stderr, "%s\n", message);
  if (conn != NULL)
  {
    fprintf (stderr, "Error %u (%s)\n", mysql_errno (conn), mysql_error (conn));
  }
}

int main (int argc, char *argv[])
{
  my_init ();
  /* initialize connection handler */
  conn = mysql_init (NULL);
  if (conn == NULL)
  {
    print_error (NULL, "mysql_init() failed (probably out of memory)");
    exit (1);
  }
  /* set options */
  mysql_options (conn, MYSQL_READ_DEFAULT_FILE, "./my-opts");
  mysql_options (conn, MYSQL_READ_DEFAULT_GROUP, "my-option-group");
  /* connect to server */
  if (mysql_real_connect (conn, NULL, NULL, NULL, NULL, 0, NULL, 0) == NULL)
  {
    print_error (conn, "mysql_real_connect() failed");
    mysql_close (conn);
    exit (1);
  }
  /* disconnect from server */
  mysql_close (conn);
  exit (0);
}

Compile the program and run it. It will likely fail to connect. Then create a file named my-opts in the same directory and put a [my-option-group] group in it:

[my-option-group]
user=your-user-name
password=your-password

Run the program again. This time it should work. Change the group name to be [mysql]. The program should still work. -- Paul DuBois, MySQL Documentation Team Madison, Wisconsin, USA MySQL AB, www.mysql.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Query using join is not using index to sort the rows
Hi, I am having trouble trying to figure out the reason for this. The query (explained) is:

EXPLAIN SELECT * FROM news,users WHERE news.user_id=users.user_id ORDER BY date DESC

I divided the result table in two parts to improve readability:

+----+-------------+-------+------+---------------+
| id | select_type | table | type | possible_keys |
+----+-------------+-------+------+---------------+
|  1 | SIMPLE      | news  | ALL  | [NULL]        |
|  1 | SIMPLE      | users | ALL  | PRIMARY       |
+----+-------------+-------+------+---------------+

+--------+---------+--------+------+---------------------------------+
| key    | key_len | ref    | rows | Extra                           |
+--------+---------+--------+------+---------------------------------+
| [NULL] | [NULL]  | [NULL] | 5000 | Using temporary; Using filesort |
| [NULL] | [NULL]  | [NULL] |    3 | Using where                     |
+--------+---------+--------+------+---------------------------------+

I have an index on the date column (which is on the news table), but it looks like MySQL is not using it to get the rows sorted. I tried the query using LEFT JOIN, and this way MySQL uses the index:

EXPLAIN SELECT * FROM news LEFT JOIN users USING (user_id) ORDER BY date DESC

+----+-------------+-------+--------+---------------+
| id | select_type | table | type   | possible_keys |
+----+-------------+-------+--------+---------------+
|  1 | SIMPLE      | news  | index  | [NULL]        |
|  1 | SIMPLE      | users | eq_ref | PRIMARY       |
+----+-------------+-------+--------+---------------+

+---------+---------+-------------------+------+-------+
| key     | key_len | ref               | rows | Extra |
+---------+---------+-------------------+------+-------+
| date    |       4 | [NULL]            | 5000 |       |
| PRIMARY |       2 | mydb.news.user_id |    1 |       |
+---------+---------+-------------------+------+-------+

Also, the query time drops by 1/4 in a 5000-row table. I am not sure I want to use LEFT JOIN only to optimize the query (in my case it doesn't matter if it is an INNER or LEFT JOIN, but an INNER JOIN looks more *natural*). I would like to see an explanation of this behaviour, and advice about it, because I have been trying to find the answer for a long time without luck. Thanks, IrYoKu __ We've revamped Yahoo! Mail: 100 MB FREE! New services, more security http://correo.yahoo.es -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
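Whether swapping in LEFT JOIN is safe depends on the data: if every news.user_id has a matching row in users (as a foreign key would guarantee), INNER and LEFT JOIN return exactly the same rows, so the substitution only changes the execution plan. A minimal sketch of that equivalence, using Python's sqlite3 as a stand-in for MySQL; the table and column names follow the post, the sample data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE news (id INTEGER PRIMARY KEY, user_id INTEGER, date TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "ann"), (2, "bob"), (3, "cho")])
# Every news row references an existing user, as a FK constraint would enforce.
conn.executemany("INSERT INTO news VALUES (?, ?, ?)",
                 [(10, 1, "2004-08-01"), (11, 3, "2004-08-05"), (12, 2, "2004-08-03")])

inner = conn.execute("""SELECT news.id, users.name FROM news, users
                        WHERE news.user_id = users.user_id
                        ORDER BY date DESC""").fetchall()
left = conn.execute("""SELECT news.id, users.name FROM news
                       LEFT JOIN users USING (user_id)
                       ORDER BY date DESC""").fetchall()
assert inner == left  # identical result sets; only the plan differs
```

If some news rows could have an orphaned user_id, the LEFT JOIN would additionally return those rows with NULL user columns, and the rewrite would no longer be a pure optimization.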
multiple server versions on 1 box startup error#
I am trying to run two different MySQL server versions on the same Solaris machine. I already have 4.0.2 running, and I have successfully installed 4.1.3 beta. I ran mysql_install_db successfully (after setting LD_LIBRARY_PATH), but when I try to run mysqld_safe --user=mysql I get the error:

A mysqld process already exists
[1]+  Exit 1  ./mysqld_safe --user=mysql

This is certainly true, because my 4.0.2 is running, but I don't understand why this is a problem. I am running the mysqld_safe for version 4.1.3, and when I built 4.1.3 I used these configure options (among others):

--prefix=/usr/local/mysql-4.1.3
--with-tcp-port=3306
--with-unix-socket-path=/tmp/mysql-4.1.3.sock
--datadir=/var/mysql-4.1.3

And these options are all different from the 4.0.2 configuration, so I assumed that things would run OK. The only thing I can think of is that /var/mysql-4.1.3/my.cnf is not being read correctly. Perhaps I have an error in it, but I'd think that mysqld_safe would let me know. If it's not being read, then some options from my default /etc/my.cnf are not being overridden, and that could be the problem. Here is the info on the /var/mysql-4.1.3/my.cnf file:

-rw-r--r--  1 root  other  2042 Aug 12 18:08 my.cnf

and the permissions and ownership are identical to those for /etc/my.cnf. I am at a loss. thanks much. sean peters [EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: multiple server versions on 1 box startup error#
Hi Sean, Have you checked the pid file? Have you given it a unique path for this server? Regards David Logan Database Administrator HP Managed Services 139 Frome Street, Adelaide 5000 Australia +61 8 8408 4273 - Work +61 417 268 665 - Mobile +61 8 8408 4259 - Fax -Original Message- From: sean c peters [mailto:[EMAIL PROTECTED] Sent: Friday, 13 August 2004 8:03 AM To: [EMAIL PROTECTED] Subject: multiple server versions on 1 box startup error# I am trying to run two different MySQL server versions on the same solaris machine. I already have a 4.0.2 running, and i have successfully installed 4.1.3 beta. I ran mysql_install_db successfully (after setting LD_LIBRARY_PATH), but when i try to run mysqld_safe --user=mysql I get the error: A mysqld process already exists [1]+ Exit 1 ./mysqld_safe --user=mysql This is certainly true, because my 4.0.2 is running, but i dont understand why this is a problem. I am running the mysqld_safe for version 4.1.3, and when i built 4.1.3 i used the configure options: (among others) --prefix=/usr/loca/mysql-4.1.3 --with-tcp-port=3306 --with-unix-socket-path=/tmp/mysql-4.1.3.sock --datadir=/var/mysql-4.1.3 And these options are all different from the 4.0.2 configuration. So i assumed that things would run ok. The only thing i can think of is that the /vaar/mysql-4.1.3/my.cnf is not being read correctly. Perhaps i have an error in it, but i'd think that mysqld_safe would let me know. If its not being read, then some options from my default /etc/my.cnf are not being overridden, and that could be the problem. Here is the info on the /var/mysql-4.1.3/my.cnf file: -rw-r--r-- 1 root other 2042 Aug 12 18:08 my.cnf and the permissions ownership are identical to that for /etc/my.cnf I am at a loss thanks much. 
sean peters [EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
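Following up on David's pid-file question: each instance needs its own pid-file (along with its own port, socket, and datadir), since mysqld_safe's "process already exists" check works from the pid file it expects to manage. A sketch of what the 4.1.3 instance's option file might contain — the pid-file path here is illustrative, not taken from the post, and every value must differ from the 4.0.2 server's settings:

```ini
# Hypothetical option-file fragment for the second (4.1.3) server.
[mysqld]
port     = 3307
socket   = /tmp/mysql-4.1.3.sock
datadir  = /var/mysql-4.1.3/
pid-file = /var/mysql-4.1.3/mccoy-4.1.3.pid   ; unique to this instance
```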
datadir specification, etc
I am 100% convinced that MySQL 4.1.3 beta is not properly reading the my.cnf configuration files. If I remove the /etc/my.cnf file and try to start MySQL 4.1.3 with ./mysqld_safe (I'm working from /usr/local/mysql-4.1.3/bin), I get the following output (mccoy is the name of the machine I'm on):

touch: /usr/local/mysql-4.1.3/var/mccoy.err cannot create
chown: /usr/local/mysql-4.1.3/var/mccoy.err: No such file or directory
Starting mysqld daemon with databases from /usr/local/mysql-4.1.3/var
./mysqld_safe: /usr/local/mysql-4.1.3/var/mccoy.err: cannot create

If I remove the /var/mysql-4.1.3/my.cnf file, I get the same output as above, so it's not being read either way. And I did specify /var/mysql-4.1.3/ as my datadir with ./configure when building 4.1.3. If I put the /etc/my.cnf file back, I get the following:

A mysqld process already exists

So clearly, /etc/my.cnf is being read, but /var/mysql-4.1.3/my.cnf is not, so I guess it doesn't matter what I specify in there at this point. One strange thing is that ./mysqld_safe tries to use the databases in /usr/local/mysql-4.1.3/var/, but I specified a different datadir with configure! My configure used --prefix=/usr/local/mysql-4.1.3, but why should that matter? In fact, when I installed 4.1.3 (make install), the directory /usr/local/mysql-4.1.3/var/ was NOT created. I don't think most of the info I've given matters, because my run-time configuration doesn't appear to be the problem. I don't believe that my build configuration took effect properly. Does any of this make sense? Still completely lost.
thanks sean peters [EMAIL PROTECTED]

*** Here's some my.cnf data, if it really matters ***

Here is part of the /var/mysql-4.1.3/my.cnf file:

[client]
port     = 3307
socket   = /tmp/mysql-4.1.3.sock
pid-file = /usr/local/mysql-4.1.3/mysql-4.1.3.pid
datadir  = /var/mysql-4.1.3/

[mysqld]
port     = 3307
socket   = /tmp/mysql-4.1.3.sock
pid-file = /usr/local/mysql-4.1.3/mysql-4.1.3.pid
datadir  = /var/mysql-4.1.3/

And here is info from the /etc/my.cnf file:

[client]
port   = 3306
socket = /tmp/mysql.sock

[mysqld]
port   = 3306
socket = /tmp/mysql.sock

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
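For what it's worth, a my.cnf sitting in an arbitrary directory is silently ignored: on Unix the server and mysqld_safe only search a fixed list of locations (/etc/my.cnf, the compiled-in data directory, ~/.my.cnf), and the compiled-in datadir here appears to have ended up as the default PREFIX/var rather than /var/mysql-4.1.3. One way to take the search path out of the equation entirely is to name the file explicitly — a sketch, assuming this mysqld_safe accepts --defaults-file (which must come before any other option); the paths are the poster's own:

```
cd /usr/local/mysql-4.1.3/bin
./mysqld_safe --defaults-file=/var/mysql-4.1.3/my.cnf --user=mysql &
```

With --defaults-file, /etc/my.cnf is not read at all, so the two instances cannot inherit each other's settings.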
Re: Order by with one exception
Bump. MySQL 4; this has been bothering me for some time now... I made a mailing list archiver; I thread discussions by subject. I chose not to use message-ids since so many people hijack threads. In most cases, I ORDER BY id, which is simply an auto-inc that is set as messages come in, in order, via email. Occasionally, messages will be out of order due to email delivery issues. I tried date stamping them, but users' clocks are so messed up that never panned out :-) Given this case:

Subject                              ID
RE: Order by with one exception       1
RE: Order by with one exception       2
RE: Order by with one exception       3
Order by with one exception           4
RE: Order by with one exception       5
RE: Order by with one exception       6
RE: Order by with one exception       7

As you can see, these are in correct order, but in this case, I want to push the one without the Re: to the top. I can not just order by subject, id, since a subject could start with a letter after R. Suggestions? -- - Scott HanedaTel: 415.898.2602 http://www.newgeo.com Fax: 313.557.5052 [EMAIL PROTECTED]Novato, CA U.S.A. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
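One common trick (a sketch, not tested against the poster's actual schema): sort first on a boolean expression that is 0 for the thread starter and 1 for replies, then on id — in MySQL something like ORDER BY (subject LIKE 'Re:%'), id, which with a case-insensitive collation also matches 'RE:'. Illustrated below with Python's sqlite3, whose LIKE is likewise case-insensitive for ASCII; the table and column names follow the post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE listmail (id INTEGER PRIMARY KEY, subject TEXT)")
rows = [(1, "RE: Order by with one exception"),
        (2, "RE: Order by with one exception"),
        (3, "RE: Order by with one exception"),
        (4, "Order by with one exception"),   # the thread starter
        (5, "RE: Order by with one exception"),
        (6, "RE: Order by with one exception"),
        (7, "RE: Order by with one exception")]
conn.executemany("INSERT INTO listmail VALUES (?, ?)", rows)

# (subject LIKE 'Re:%') evaluates to 0 for the original post and 1 for
# replies, so the starter sorts to the top and replies keep id order.
ids = [r[0] for r in conn.execute(
    "SELECT id FROM listmail ORDER BY (subject LIKE 'Re:%'), id")]
assert ids == [4, 1, 2, 3, 5, 6, 7]
```

When several threads are listed at once, you would first order by the subject with any leading "Re:" stripped (to keep each thread's messages together), then apply the expression above within the thread.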
Re: Cannot connect via TCP (Lost connection to MySQL server during query)
Michael Stassen wrote: What hardware and OS? How did you get and install mysql? MySQL supplied binary? 3rd party binary? Built from source? If the answer is not MySQL supplied binary, my first suggestion would be to try that to see if the problem goes away. Michael That did end up being the problem...MySQL's binaries seem to work fine. Thanks again for your rapid insight! Matt -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: GUI for MySQL
Hello, You can try SQLyog at http://www.webyog.com/. It's not free, but it has some really powerful features. Regards, Karam --- Kirti S. Bajwa [EMAIL PROTECTED] wrote: This is my first attempt to design and test MySQL. I have used MS SQL for a number of years. I would appreciate it if members of this list could recommend a good GUI application for MySQL. I want the GUI application to design the DB, design queries, etc. Thanks. Kirti -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] __ Do you Yahoo!? New and Improved Yahoo! Mail - 100MB free storage! http://promotions.yahoo.com/new_mail -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: OpenSSL
On Wed, 11 Aug 2004, Jeremy Rice wrote:

./configure --prefix=/usr/local --sysconfdir=/etc/mysql --localstatedir=/var --enable-shared --enable-static --with-libwrap --with-berkeley-db --enable-thread-safe-client --with-mysqld-user=mysql --with-openssl --with-openssl-includes=/usr/local/ssl/include --with-openssl-libs=/usr/local/ssl/lib

...I believe you need to enable vio here (--with-vio), as well. Thanks. I'll add that one. opensslv.h exists, although I don't know where exactly it's looking for it. Any ideas spring into anyone's mind? The OpenSSL installation is just fine. Of the dozens of other packages I've compiled against this OpenSSL install (and other identical installs on other servers), everything worked without a hitch except for MySQL. I'm stumped as to why it's not working, though. Is there some trick to getting MySQL to compile with OpenSSL installed in its default location? This piece of the make process is BROKEN in MySQL. (Though MySQL themselves will not admit as much.) My sysadmin spent two days wrestling with this recently. He can't reply to your message at the moment, but he did scribble down some notes that may help. :) First, you have to add -L /usr/local/ssl/lib to ALL your MySQL makefiles. Fun. I'm overjoyed. :-) It's a fix, though, and I'll try it. Second, you will run into some problem somewhere with lex_hash. In that case, you'll need to:

cd sql
rm lex_hash.h gen_lex_hash
make lex_hash.h

Hmmm... That sounds entertaining. If that's all it takes, though, it's worth a shot. ...And there may have been some other craziness as well. There *are* messages about this out there... it just takes some digging to find them. Keep at it. It *is* possible. ;) I wonder if RH patched MySQL up when they built it. Maybe their SRPM would have a few clues as to what's jacked up and how to fix it. I think I'll try dissecting that too. Thanks for all the info. I'll report back with a HOWTO if I get a clean fix.
Justin -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Escaped BLOB data in XML
Hello, Hmmm. I was figuring that out. mysql_escape_string() only escapes characters like \r, \n, \\, and NUL; it keeps other non-character data the same, e.g. it keeps ASCII 15 as ASCII 15, which no parser is able to handle. Isn't there any better way than base64 to handle this, just like replacing < with &lt; solves the problem in the data? Regards, Karam --- Keith Ivey [EMAIL PROTECTED] wrote: Karam Chand wrote: I have a table with a LONGBLOB column. We store some small images in it. I want to export them in XML format with a schema like:

<c>data</c>
<c>data</c>
...

Now the problem is that even if I mysql_real_escape() the data and change characters like < and > to &lt; and &gt;, some of the characters are of ASCII value 12, 13, etc. None of the XML parsers are able to recognise them, and they throw an error. I googled but couldn't find a reference on how to handle such characters in XML. This doesn't have anything to do with MySQL. XML isn't really designed for directly containing binary data, so people generally use Base64 encoding (or occasionally some other method of encoding binary data in ASCII). The XML parser isn't going to be able to return the raw binary data -- you'll have to decode it. -- Keith Ivey [EMAIL PROTECTED] Washington, DC -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] __ Do you Yahoo!? Yahoo! Mail - Helps protect you from nasty viruses. http://promotions.yahoo.com/new_mail -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
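As Keith says, the usual route really is Base64: it maps arbitrary bytes onto the characters [A-Za-z0-9+/=], all of which are safe inside XML character data, and the consumer decodes them back to the original bytes. A minimal sketch in Python; the element name c is taken from the post's schema:

```python
import base64

blob = bytes(range(256))  # stand-in for image data from the LONGBLOB column

encoded = base64.b64encode(blob).decode("ascii")
xml_fragment = "<c>%s</c>" % encoded  # safe: no markup or control bytes inside

# The XML consumer must decode to recover the original bytes.
assert base64.b64decode(encoded) == blob
```

This is also why entity escaping alone cannot work: entities can only represent characters that are legal in XML in the first place, and control bytes like 0x0C are not.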
Re: Escaped BLOB data in XML
This leads me to another question: what are the valid ASCII characters that an XML parser understands? Are they only a-z, A-Z, 0-9, punctuation, etc., or some other characters as well? Regards, Karam --- Karam Chand [EMAIL PROTECTED] wrote: Hello, Hmmm. I was figuring that out. mysql_escape_string() only escapes characters like \r, \n, \\, and NUL; it keeps other non-character data the same, e.g. it keeps ASCII 15 as ASCII 15, which no parser is able to handle. Isn't there any better way than base64 to handle this, just like replacing < with &lt; solves the problem in the data? Regards, Karam --- Keith Ivey [EMAIL PROTECTED] wrote: Karam Chand wrote: I have a table with a LONGBLOB column. We store some small images in it. I want to export them in XML format with a schema like:

<c>data</c>
<c>data</c>
...

Now the problem is that even if I mysql_real_escape() the data and change characters like < and > to &lt; and &gt;, some of the characters are of ASCII value 12, 13, etc. None of the XML parsers are able to recognise them, and they throw an error. I googled but couldn't find a reference on how to handle such characters in XML. This doesn't have anything to do with MySQL. XML isn't really designed for directly containing binary data, so people generally use Base64 encoding (or occasionally some other method of encoding binary data in ASCII). The XML parser isn't going to be able to return the raw binary data -- you'll have to decode it. -- Keith Ivey [EMAIL PROTECTED] Washington, DC -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] __ Do you Yahoo!? Yahoo! Mail - Helps protect you from nasty viruses. http://promotions.yahoo.com/new_mail -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED] __ Do you Yahoo!? Yahoo! Mail is new and improved - Check it out!
http://promotions.yahoo.com/new_mail -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
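To the follow-up question: XML allows far more than alphanumerics. Per the XML 1.0 specification's Char production, the legal code points are #x9, #xA, #xD, plus the ranges #x20-#xD7FF, #xE000-#xFFFD, and #x10000-#x10FFFF. So ASCII 13 (CR) is actually legal, while ASCII 12 (form feed) and ASCII 15 are not — which is exactly why raw blob bytes break parsers. A small checker sketch in Python:

```python
def is_xml10_char(cp):
    """True if code point cp may appear literally in XML 1.0 character data."""
    return (cp in (0x9, 0xA, 0xD)          # tab, LF, CR
            or 0x20 <= cp <= 0xD7FF        # most of the BMP
            or 0xE000 <= cp <= 0xFFFD      # after the surrogate block
            or 0x10000 <= cp <= 0x10FFFF)  # supplementary planes

assert is_xml10_char(ord("a")) and is_xml10_char(0x0D)      # letters, CR: fine
assert not is_xml10_char(0x0C) and not is_xml10_char(0x00)  # FF, NUL: illegal
```

Note that even legal characters like < and & still need entity escaping; the ranges above only say which characters may appear at all.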
Replication blocked
Hi, I have 2 DB servers, running as master and slave. I just added an index to one of my tables on the master; it took about 12 minutes. While the index was being added, I had insert/update queries to other tables. On the master this was no problem at all. The problem is, on the slave these queries were blocked by the previous 12-minute query. Does this mean that there is only 1 thread to run the SQL from the master? Can this be set to multithread? Thank you very much. Regards, bk -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]