Re: myisampack
Jacob Friis wrote:
> Is it possible to append rows to a table packed by myisampack?

No, RTFM: the compressed storage format is a read-only format that is generated with the myisampack tool.
http://dev.mysql.com/doc/mysql/en/Compressed_format.html

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
COMPRESS() vs myisampack
I have a table with lots of text data and would like to compress it to save space. I could either compress the whole table with myisampack, or just the text data with COMPRESS(). When I SELECT from this table, would using UNCOMPRESS() on the text data be as fast as using a table packed by myisampack?

Thanks,
Jacob
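For estimating the space savings of the COMPRESS() route before choosing either approach, the documented layout of COMPRESS() output (a four-byte uncompressed length followed by the zlib-compressed string) can be reproduced client-side. A minimal Python sketch of that layout, not MySQL's actual implementation:

```python
import struct
import zlib

def mysql_compress(data: bytes) -> bytes:
    """Mimic MySQL COMPRESS(): a 4-byte little-endian uncompressed
    length, followed by the zlib-compressed payload."""
    if not data:
        return b""
    return struct.pack("<I", len(data)) + zlib.compress(data)

def mysql_uncompress(blob: bytes) -> bytes:
    """Mimic UNCOMPRESS(): read the length prefix, inflate the rest."""
    if not blob:
        return b""
    (length,) = struct.unpack("<I", blob[:4])
    out = zlib.decompress(blob[4:])
    assert len(out) == length
    return out

text = b"lots of repetitive text " * 100
packed = mysql_compress(text)
assert mysql_uncompress(packed) == text
assert len(packed) < len(text)  # repetitive text compresses well
```

Running this over representative rows gives a rough per-row size estimate for the COMPRESS() option; myisampack compresses whole rows and indexes, so its savings have to be measured on the packed table itself.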
Re: Replication question...
Jason Lixfeld wrote on 03/12/2004 03:32:32:
> I'm very new to mysql and replication. I've got a case where I have 2 servers, each with database A and database B. I want server 1 to be master for database A and slave for database B, and I would like server 2 to be slave for database A and master for database B. From what I've read, if a server is a master, it's a master, and likewise for a slave; they can't share roles depending on the requirements of the user. Can someone enlighten me?

No, I think it is more flexible than that, and will probably fulfil your needs. You can set up circular replication, of which your two-server case is the simplest example. It is then up to you to ensure at application level that no single table is updated on more than one machine. If you observe that rule then, so far as I understand, replication will do what you want. If you break that rule, behaviour will be unpredictable. You might want to use the replicate_do_db variable to provide a partial enforcement of the rule.

Alec
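A sketch of the my.cnf settings such a two-server setup might use (the database names dbA/dbB are placeholders, and the master host/user credentials still have to be configured separately on each slave side):

```ini
# Server 1: master for dbA, slave for dbB
[mysqld]
server-id       = 1
log-bin         = mysql-bin
binlog-do-db    = dbA
replicate-do-db = dbB

# Server 2 (its own my.cnf): master for dbB, slave for dbA
[mysqld]
server-id       = 2
log-bin         = mysql-bin
binlog-do-db    = dbB
replicate-do-db = dbA
```

As the reply notes, replicate-do-db is only a partial guard; the application must still ensure each table is written on only one side.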
Different password() function ?
I just upgraded my MySQL from 4.0.20 to 4.1.7, and found something new: the password() function has changed.

In 4.0.20, password(xxx) results in 5336eb751494bdb1
In 4.1.7, password(xxx) results in *3E5287812B7D1F947439AC45E739353

How do I get backward compatibility for this function, since I use password() to encrypt user passwords?

Thanks
Re: Different password() function ?
Thanks. However, why doesn't this mailing list have a Reply-To field? :-(

SciBit MySQL Team wrote:
> Hi Ady,
> See your OLD_PASSWORD MySQL option in your MySQL Docs. Or the following articles:
> http://dev.mysql.com/doc/mysql/en/MySQL_4.1_Nutshell.html
> http://forum.scibit.com/viewtopic.php?t=195
RE: Different password() function ?
Hi Ady,

See your OLD_PASSWORD MySQL option in your MySQL Docs. Or the following articles:
http://dev.mysql.com/doc/mysql/en/MySQL_4.1_Nutshell.html
http://forum.scibit.com/viewtopic.php?t=195

Kind Regards
SciBit MySQL Team
http://www.scibit.com
MySQL Products: http://www.scibit.com/products/mycon http://www.scibit.com/products/mysqlcomponents http://www.scibit.com/products/mysqlx http://www.scibit.com/products/mascon

-----Original Message-----
From: Ady Wicaksono
Subject: Different password() function ?
Sent: Fri, 03 Dec 2004 10:01:06 GMT

> I just upgraded my MySQL from 4.0.20 to 4.1.7, and found something new: the password() function has changed. In 4.0.20, password(xxx) results in 5336eb751494bdb1; in 4.1.7, password(xxx) results in *3E5287812B7D1F947439AC45E739353. How do I get backward compatibility for this function, since I use password() to encrypt user passwords?
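The longer 4.1 hash follows a documented scheme: an asterisk followed by the uppercase hex of a double SHA-1 of the password. A Python sketch of that scheme, for illustration only; for real compatibility, use the server's OLD_PASSWORD() function or the old-passwords option as described above:

```python
import hashlib

def mysql41_password(password: str) -> str:
    """Sketch of the MySQL 4.1+ PASSWORD() scheme:
    '*' + uppercase hex of SHA1(SHA1(password))."""
    stage1 = hashlib.sha1(password.encode("utf-8")).digest()
    return "*" + hashlib.sha1(stage1).hexdigest().upper()

h = mysql41_password("xxx")
# 4.1-style hashes are 41 characters and start with '*';
# the old 4.0 format was a bare 16-character hex string.
assert h.startswith("*") and len(h) == 41
```

This is why clients and applications built for the 16-character 4.0 format stop matching after the upgrade unless the server is told to keep producing old-style hashes.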
RE: Serious error in update Mysql 4.1.7
Hi Luciano,

Not that this reply will solve your problem, but let it serve as a notice: it is NEVER a good idea to use a FLOAT/BLOB column in your WHERE clause, as MySQL cannot uniquely identify the record. Especially not with floats, because of the inherent floating point error between machines after a certain number of decimals (which depends on the hardware MySQL is running on).

To clarify:

select MyFloat from MyTable;

Machine A might result in: 0.123456789012345[987345765]
Machine B, using exactly the same MySQL version with exactly the same table and data: 0.123456789012345[765365423]

Because of floating point precision errors (in the sample, after the 15th decimal), the values in the square brackets will differ and effectively be random numbers. You can thus see the problem in asking MySQL to match floating point data using a WHERE clause. In fact, you can run the same query twice on the same machine and MySQL won't be able to locate the record, as the ultimate float value can differ twice in a row. Always remember computers are binary machines which love integers: after filling the internal 8 bytes with a floating value, the rest of any floating value's precision becomes a toss-up.

Another sample (MySQL 4.1.7):

mysql> select pi();
+----------+
| PI()     |
+----------+
| 3.141593 |
+----------+
1 row in set (0.00 sec)

mysql> select pi()=3.141593;
+---------------+
| pi()=3.141593 |
+---------------+
|             0 |
+---------------+
1 row in set (0.00 sec)

If the sample you gave was auto-generated by the MyODBC driver, it most likely compiled the WHERE clause that way because you don't have a unique primary key in your table. Best advice is to always add a primary key AUTOINC column to all tables. This will not only mean all your queries can always find the exact record, but will also reduce the traffic your current queries cause. The need for an AUTOINC column is mainly due to MySQL's lack of server-side cursors; it seems this will be corrected in MySQL 5, after which everyone will always be able to find their records independent of the data contained in the table.

Kind Regards
SciBit MySQL Team
http://www.scibit.com

-----Original Message-----
From: Luciano Pulvirenti
Subject: Serious error in update Mysql 4.1.7
Sent: Fri, 03 Dec 2004 08:18:05 GMT

> I am trying Mysql 4.1.7 before putting it into production as a substitute for 4.0.16 on Windows NT. I have found an anomaly that for me is serious. I use Visual Basic 6 with the latest ADO and the MyODBC 3.51.10 driver. The program produces the following query:
>
> UPDATE `paghe`.`anagpaghe` SET `giorni_congedo_mp`=1.25000e+001, `giorni_congedo_anno_prec_mp`=0.0e+000, `giorni_permessi_retrib_mp`=2.0e+000, `giorni_congedo`=1.15000e+001, `giorni_congedo_anno_prec`=0.0e+000, `giorni_permessi_retribuiti`=2.0e+000, `swnuovo`=0 WHERE `matricola`=43258 AND `giorni_congedo_mp`=1.25000e+001 AND `giorni_congedo_anno_prec_mp`=0.0e+000 AND `giorni_permessi_retrib_mp`=2.0e+000 AND `giorni_congedo`=1.15000e+001 AND `giorni_congedo_anno_prec`=0.0e+000 AND `giorni_permessi_retribuiti`=2.0e+000 AND `swnuovo`=1
>
> MySQL fails to update the record because it cannot find the record matching the WHERE clause. I have made some tests and discovered that the cause is: AND `giorni_congedo`=1.15000e+001. Version 4.0.16 works correctly. The giorni... fields are declared in the structure as double(5,1). Thank you
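The pitfall generalizes to any binary floating-point system, and so does the standard remedy: compare with a tolerance rather than exact equality (in SQL, something along the lines of ABS(col - 11.5) < 0.0001, or better, a DECIMAL column or a primary-key WHERE clause). A Python illustration of the same discipline:

```python
def float_eq(a: float, b: float, eps: float = 1e-9) -> bool:
    """Tolerant comparison: true when a and b differ by less than a
    small epsilon, scaled by the magnitude of the operands."""
    return abs(a - b) <= eps * max(1.0, abs(a), abs(b))

# Exact equality fails even for values that print identically in
# decimal -- the same effect as pi() = 3.141593 evaluating to 0:
assert 0.1 + 0.2 != 0.3
assert float_eq(0.1 + 0.2, 0.3)
```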
Re: myisamchk sort buffer too small, check table has ran 1 week and no end in sight
Hello.

I've taken this information from the documentation at http://dev.mysql.com/doc/mysql/en/myisamchk_syntax.html. Did it solve your problem? If it didn't, send me your my.cnf file and information about your version of MySQL and operating system.

matt_lists wrote:
> Gleb Paharenko wrote:
>> Hello. There is a variable sort_buffer, which you can set with -O sort_buffer=xxxM or in my.cnf.
>
> myisamchk -o -p -f --sort_key_blocks=512M --key_cache_block_size=512M
>
> it's no longer -O sort_buffer=xxxM, it's now --sort_key_blocks, and I set it to 512M and it still said there was not enough

--
For technical support contracts, go to https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
Gleb Paharenko, MySQL AB / Ensita.NET
www.mysql.com
Re: Illegal mix of collations with 4.1.7
Hello.

Use Perl, DBI :)

Frederic Wenzel wrote:
> On Tue, 30 Nov 2004 19:24:05 +0200, Gleb Paharenko wrote:
>> The first impression is that you forgot to convert character columns. See:
>> http://dev.mysql.com/doc/mysql/en/Upgrading-from-4.0.html
>> http://dev.mysql.com/doc/mysql/en/Charset-conversion.html
>
> Once the character sets are set up (everything is utf8 now on my installation), how can it be achieved to convert *ALL* columns in *ALL* tables to the same, new COLLATION value? Changing them by hand would lead to admin's fun for, say, weeks ;)
>
> Thanks in advance, Fred
Re: 4.1.7 character set conversions
Hello.

In most cases, if A, B, ... are different charsets, you'll get an error. Use SET NAMES, or the CONVERT() or CAST() functions, which were introduced to convert everything to the same single charset. Otherwise you'll get results converted to character_set_results, unless that was set to NULL. Maybe you should think about Unicode for your project.

Dömsödi Gergely wrote:
> Hi! I've been digging through the web for a few days, but I didn't get any answers for my little question, so I try to ask it here. For example, I set the following character sets:
>
> character_set_client = A
> character_set_connection = B
> character_set_database = C
> character_set_results = D
> character_set_server = E
> character_set_system = F
> a table named `table` = G
> a column named `column` = H
>
> The question is: exactly what character conversions will be done by mysqld when I issue a statement like:
>
> SELECT column FROM table WHERE column = USER() AND column LIKE 'some text in A encoding%';
>
> Which character sets must be equal (or convertible to each other) for this query to run properly? Thanks in advance!
Re: unicode urgent
Hello.

You may use the _utf8 0xXX... form of input, where 0xXX... is the hexadecimal representation of your Japanese string (each Unicode character has its own hexadecimal value). For an example see:
http://dev.mysql.com/doc/mysql/en/Charset-literal.html

Deepankar Das wrote:
> hi, i want to do an insert query which contains utf-8 (Japanese) characters in the database, but surprisingly when i do this it stores ??? and also when i retrieve it, it fails to give the actual picture. i am using mysql 4.1.b alpha, with mysqlcc as my editor, and the dos prompt. looking for your kind help.
>
> DD
> TATA INFOTECH LTD
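Building such a literal by hand means hex-encoding the UTF-8 bytes of the string; the SQL-side syntax is the _utf8 0x... literal from the linked manual page. A small Python sketch of the encoding step:

```python
def utf8_hex_literal(s: str) -> str:
    """Hex-encode a string's UTF-8 bytes in the shape of a
    MySQL _utf8 0x... character set literal."""
    return "_utf8 0x" + s.encode("utf-8").hex().upper()

lit = utf8_hex_literal("日本語")
# the hex payload round-trips back to the original string
payload = lit.split("0x", 1)[1]
assert bytes.fromhex(payload).decode("utf-8") == "日本語"
```

The generated literal can then be pasted into an INSERT statement, sidestepping any charset mangling between the console and the server.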
Re: MySQL Newbie: Running in UNIX
Hello.

Try to connect as root (use -u root) or as another user which has the privilege to create new databases.

Hossain, Ashfaq wrote:
> I would like to try out MySQL on my UNIX account (Solaris). I DO NOT HAVE root privileges in UNIX, only normal user privileges. I have installed MySQL on my UNIX user account. I can get the server going and can also get the status from the server, but I get an error message when I try to create a database. I have attached the unix screen dump below. Any help will be greatly appreciated.
>
> Thanks, -Ashfaq Hossain
>
> /* UNIX screen dump */
> unix> uname -a
> SunOS 5.8 Generic_108528-24 sun4u sparc SUNW,Sun-Fire-880
>
> /* Running MySQL processes */
> unix> ps -ef | grep mysql
> ashfaq 22040 22015 0 11:13:23 pts/63 0:00 /home/ashfaq/MySQL/mysql-standard-4.1.7-sun-solaris2.9-sparc-64bit/bin/mysqld -
> ashfaq 14026 9830 0 11:18:08 pts/63 0:00 grep mysql
> ashfaq 22015 9830 0 11:13:23 pts/63 0:00 /bin/sh ./bin/mysqld_safe
>
> /* Starting the client */
> unix> bin/mysql
> Welcome to the MySQL monitor. Commands end with ; or \g.
> Your MySQL connection id is 7 to server version: 4.1.7-standard
> Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
>
> /* Checking status */
> mysql> status
> bin/mysql Ver 14.7 Distrib 4.1.7, for sun-solaris2.9 (sparc)
> Connection id: 7
> Current database:
> Current user: [EMAIL PROTECTED]
> SSL: Not in use
> Current pager: stdout
> Using outfile: ''
> Using delimiter: ;
> Server version: 4.1.7-standard
> Protocol version: 10
> Connection: Localhost via UNIX socket
> Server characterset: latin1
> Db characterset: latin1
> Client characterset: latin1
> Conn. characterset: latin1
> UNIX socket: /tmp/mysql.sock
> Uptime: 8 min 0 sec
> Threads: 1 Questions: 24 Slow queries: 0 Opens: 12 Flush tables: 1 Open tables: 1 Queries per second avg: 0.050
>
> /* Error message: please HELP! */
> mysql> create database info;
> ERROR 1044 (42000): Access denied for user ''@'localhost' to database 'info'
Re: Possible bug with wait_timeout
Hello.

The session value of wait_timeout is initialized from the global wait_timeout variable or from the global interactive_timeout variable, depending on the type of client. Put interactive_timeout=10 in your config file as well. See:
http://dev.mysql.com/doc/mysql/en/Server_system_variables.html

Andrew Braithwaite wrote:
> Hi all, in version 4.0.18, when setting the wait_timeout variable to 10 in my.cnf, it seems to work when looking at 'mysqladmin variables', as it is indeed showing up as 10. However, when I do a 'show variables' in the mysql client, it shows up with the default value of 28800. I'm certain that I've connected to the same server and was using all the tools from /usr/bin/mysql, specifying paths like bin/safe_mysqld, bin/mysqladmin, bin/mysql etc. Has anyone seen this before, or am I going crazy?
>
> Cheers, Andrew
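A my.cnf sketch of the fix the reply suggests: set both variables, because interactive clients such as the mysql command-line tool initialize their session wait_timeout from interactive_timeout, which is why the client shows the 28800 default:

```ini
[mysqld]
wait_timeout        = 10   # applies to non-interactive clients
interactive_timeout = 10   # applies to interactive clients (mysql CLI etc.)
```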
Re: Update optimization...
Hello.

Did you read all the suggestions at http://dev.mysql.com/doc/mysql/en/Full_table.html ?

Manish wrote:
> I am trying to execute this query and it is failing with a "Table is full" error (I know I can make temp tables bigger):
>
> update t1, t2 set t1.XXX=1 where t1.YYY=t2. and t2. like '%X%';
>
> My t1 has 10,00,000+ records and t2 has about 70,000 records. I would like to know how I can optimize this query. What are the parameters for this optimization? Can someone give me links where I can read up about such optimizations for update queries.
>
> TIA, - Manish
Re: myisampack
No. The compressed storage format is a read-only format that is generated with the myisampack tool. See:
http://dev.mysql.com/doc/mysql/en/Compressed_format.html

Jacob Friis wrote:
> Is it possible to append rows to a table packed by myisampack?
> Thanks, Jacob
Re: Best learning path to DBA?
Hello.

Install MySQL, read lists.mysql.com, and try to find answers by yourself, using books, Google, everything you want! And, of course, feel free to ask here - we are glad to provide an answer. :)

Eve Atley wrote:
> I am a web designer / PHP programmer / unofficial network administrator with a well-rounded technical background - but, as they say, a jack of all trades (expert in nothing). If I were to pursue a DBA path, what would be a good way of going about it? School, books, etc...?
>
> - Eve
characterset problem 4.1.7
Dear all,

I was using MySQL 4.1.0, with all tables using utf8. I could use PHP to store and display utf8 characters without any problem. Yesterday I upgraded 4.1.0 to 4.1.7. Using mysql-control-center I can see that the data in 4.1.7 are utf8 characters. However, when I use my PHP program to display them, everything becomes ???

I suspect it is due to the client connection? What do I need to configure in order to make PHP display correctly? Please help, and thank you very much.

Francis Mak
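A plausible explanation (an assumption about this particular setup, but a common cause on 4.1): the PHP connection defaults to latin1, so the server converts the stored utf8 text to latin1 on the way out, and every character latin1 cannot represent becomes a literal question mark; issuing SET NAMES utf8 right after connecting usually cures it. The substitution effect itself, sketched in Python:

```python
# Characters that do not exist in latin1 are replaced one-for-one
# with '?', which is exactly the '???' symptom in the browser.
s = "日本語"  # any non-latin1 text stored as utf8
mangled = s.encode("latin-1", errors="replace")
assert mangled == b"???"

# The round trip through utf-8 is lossless, by contrast.
assert s.encode("utf-8").decode("utf-8") == s
```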
Re: repair table priv
Does this silence mean that nobody knows?!? :)

> I've been trying to find what privilege is needed for 'REPAIR TABLE'. I couldn't find any useful hint on the net or in the archives. Could anyone help me out?
Re: Is this the best/fastest solution?
In article [EMAIL PROTECTED], Jigal van Hemert writes:

> Two tables (simplified, because other fields are not used in the query; indexes other than the primary key removed):
>
> CREATE TABLE `msg_content` (
>   `msg_id` int(14) NOT NULL auto_increment,
>   `subject` varchar(255) NOT NULL default '',
>   `content` mediumtext NOT NULL,
>   PRIMARY KEY (`msg_id`)
> ) TYPE=InnoDB COMMENT='contains actual content of messages';
>
> CREATE TABLE `msg_addressee` (
>   `id` int(14) NOT NULL auto_increment,
>   `account_id` int(14) NOT NULL default '0',
>   `msg_id` int(14) NOT NULL default '0',
>   `status` set('deleted','replied','forwarded','admin') default NULL,
>   PRIMARY KEY (`id`)
> ) TYPE=InnoDB COMMENT='link table to link accounts to messages';
>
> A message is inserted once in the `msg_content` table, and for each from/to/cc/... an entry is inserted in the `msg_addressee` table. If someone deletes the message from his/her mailbox, the entry in the `msg_addressee` table is marked 'deleted' by setting the `status` field accordingly. I want to do some garbage collection and find the messages for which *all* entries in the msg_addressee table have the status field set to 'deleted'. This is what I came up with:
>
> SELECT t1.`msg_id`
> FROM `msg_content` AS t1
> JOIN `msg_addressee` AS t2
>   ON t1.`msg_id` = t2.`msg_id` AND FIND_IN_SET( t2.`status`, 'deleted' ) > 0
> LEFT JOIN `msg_addressee` t3
>   ON t1.`msg_id` = t3.`msg_id` AND FIND_IN_SET( t3.`status`, 'deleted' ) = 0
> GROUP BY t2.`msg_id`, t3.`msg_id`
> HAVING COUNT( t3.`msg_id` ) = 0

Try the following:

SELECT t1.msg_id
FROM msg_content t1
LEFT JOIN msg_addressee t2
  ON t1.msg_id = t2.msg_id AND FIND_IN_SET('deleted', t2.status) = 0
WHERE t2.id IS NULL
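The suggested query uses the LEFT JOIN / IS NULL idiom to express "no addressee row without the 'deleted' flag exists for this message". The same predicate modelled in Python on hypothetical sample rows, to make the logic easy to check:

```python
from collections import defaultdict

# (msg_id, status flags) rows mirroring msg_addressee; made-up sample data
rows = [
    (1, {"deleted"}),
    (1, {"deleted", "replied"}),
    (2, {"deleted"}),
    (2, {"forwarded"}),   # message 2 is still live for one recipient
]

by_msg = defaultdict(list)
for msg_id, status in rows:
    by_msg[msg_id].append(status)

# garbage = messages where *every* addressee carries the 'deleted' flag
garbage = sorted(m for m, statuses in by_msg.items()
                 if all("deleted" in s for s in statuses))
assert garbage == [1]
```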
Re: Column type question ?
TAG wrote:
> I have an application that reads files, converts them, and then inserts them into a database. It has 2 columns that I need help with. First is the OFFSET column - this stores the datafile offset. In C it is an UNSIGNED LONG and looks like: 0x2528. The second column is a CRC for the file chunk and looks like 0x2e04b273.

The equivalent column type in MySQL would be an UNSIGNED INTEGER for both of these.

--
Roger
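Both values fit MySQL's INT UNSIGNED range (0 to 4294967295) as long as C's unsigned long is 32 bits on the platform in question; on an LP64 system a 64-bit file offset would need BIGINT UNSIGNED instead. A quick Python sanity check of the ranges (CRC32 via zlib, as an illustration):

```python
import zlib

# CRC32 of any byte string fits an unsigned 32-bit value
crc = zlib.crc32(b"some file chunk contents")
assert 0 <= crc <= 0xFFFFFFFF  # INT UNSIGNED range

# the sample values from the post fit as well
assert 0 <= 0x2528 <= 0xFFFFFFFF       # OFFSET
assert 0 <= 0x2e04b273 <= 0xFFFFFFFF   # CRC
```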
Cannot GRANT REPLICATION SLAVE
Hi, I have this weird thing happening:

kaspia:/var/lib/mysql# mysql --version
mysql Ver 12.22 Distrib 4.0.21, for pc-linux-gnu (i386)

mysql> show grants for root@localhost;
+---------------------------------------------------------------------+
| Grants for root@localhost                                           |
+---------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION |
+---------------------------------------------------------------------+

mysql> grant replication slave on *.* to backup@192.168.1.32;
Query OK, 0 rows affected (0.00 sec)

mysql> show grants for backup@192.168.1.32;
+-----------------------------------------------+
| Grants for backup@192.168.1.32                |
+-----------------------------------------------+
| GRANT USAGE ON *.* TO 'backup'@'192.168.1.32' |
+-----------------------------------------------+

I can grant other privileges, but not replication slave and replication client. How do I fix this? Thank you in advance.

--bk
Re: Column type question ?
thanks ;)

On Fri, 03 Dec 2004 13:18:53 +0100, Roger Baklund wrote:
> The equivalent column type in MySQL would be an UNSIGNED INTEGER for both of these.
Re: MySQL/InnoDB-5.0.2 is released
Walt,

----- Original Message -----
From: kernel
Newsgroups: mailing.database.myodbc
Sent: Thursday, December 02, 2004 11:01 PM
Subject: Re: MySQL/InnoDB-5.0.2 is released

>>> MySQL to return wrong results if a SELECT uses two indexes at the same time
>
> Does mysql 5.0.x have the ability to use more than one index per table on a select?

In certain cases, yes. It is the Row Ordered Retrieval code, which takes the intersection of row IDs (or primary key values).

> We had to rewrite a simple
>
> select id from table_a where last_name like 'smith%' and first_name like 'john%'
>
> to
>
> select id from table_a
> left join (
>   select id from table_a where last_name like 'smith%' group by id
> ) as t2 on t2.id = table_a.id
> where table_a.first_name like 'john%' limit 201;
>
> We had tried an index on last_name, an index on first_name, and a combo index of (last_name, first_name). We cut the run time from 1 min 57 sec to 3 seconds.
>
> walt

I do not know if ROR works for that query. You have to test.

Best regards,
Heikki

Heikki Tuuri wrote:
> Hi! InnoDB is the MySQL table type that supports FOREIGN KEY constraints, row-level locking, Oracle-style consistent non-locking SELECTs, multiple tablespaces, and a non-free online hot backup tool. Release 5.0.2 is a snapshot of the 5.0 development branch of MySQL. Unfortunately, this snapshot still contains some critical bugs, like http://bugs.mysql.com/bug.php?id=5401, which causes MySQL to return wrong results if a SELECT uses two indexes at the same time. InnoDB in MySQL-5.0.2 is almost the same as in the upcoming MySQL-4.1.8 release. Marko's new compact InnoDB table format did not make it into 5.0.2; it will be pushed to the 5.0 BitKeeper tree today and included in 5.0.3. The biggest downside of InnoDB compared to MyISAM has been that InnoDB tables take a lot more space than MyISAM tables. The new compact InnoDB table format will make InnoDB tables substantially smaller. You can look at the InnoDB roadmap at http://www.innodb.com/todo.php
>
> InnoDB functionality changed from 4.1:
>
> * If you specify the option innodb_locks_unsafe_for_binlog in my.cnf, InnoDB no longer locks rows in an UPDATE or a DELETE that do not get updated or deleted. This greatly reduces the probability of deadlocks. If you do not specify the option, InnoDB locks all rows that the UPDATE or DELETE scans, to ensure serializability.
>
> Upgrading to 5.0.2:
>
> * If you have created or used InnoDB tables with TIMESTAMP columns in MySQL versions 4.1.0-4.1.3, you have to rebuild those tables when you upgrade to MySQL-4.1.4 or later. The storage format in those MySQL versions for a TIMESTAMP column was wrong. If you upgrade from 4.0 to 4.1.4 or later, then no rebuild of TIMESTAMP tables is needed.
>
> * If you have stored characters < ASCII(32) to non-latin1 non-BINARY indexed columns in MySQL versions <= 4.1.2, then you have to rebuild those tables after you upgrade to >= 4.1.3. The reason is that the sorting order of those characters and the space character changes for some character sets in 4.1.3. See the MySQL/InnoDB-4.1.3 changelog for a precise description of the cases where you need to rebuild the table. MyISAM tables also have to be rebuilt or repaired in these cases.
>
> * If you have used column prefix indexes on UTF-8 columns or other multibyte character set columns in 4.1.0 - 4.1.5, you have to rebuild the tables when you upgrade to 4.1.6 or later.
>
> * If you have used accent characters (ASCII codes >= 128) in database names, table names, constraint names, or column names in versions < 4.1, you cannot upgrade to >= 4.1 directly, because 4.1 uses UTF-8 to store metadata names. Use RENAME TABLE to overcome this if the accent character is in the table name or the database name, or rebuild the table.
>
> Best regards, Heikki Tuuri, Innobase Oy, http://www.innodb.com
RE: Cannot GRANT REPLICATION SLAVE
I had the same stuff going on with 4.0.7g on Windows.

Olivier

> I can grant other privileges, but not replication slave and replication client. How do I fix this?
Use of limit with IN on subquery
Hi all-

I'm using MySQL 4.1.7, trying to do a subquery that's apparently unsupported - I'm hoping someone can provide a quick alternative for me. I have a simple table Z with 3 columns, k1, k2, and total_amt, where k1 and k2 make up the primary key. I want to create a report that lists the top 10 total_amt for each k1. My original query was going to be of the form:

select a.k1, a.k2, a.total_amt
from Z a
where a.total_amt in
  (select b.total_amt from Z b
   where b.k1 = a.k1
   order by b.total_amt desc
   limit 10)
order by a.k1, a.total_amt desc;

But I get the dreaded error:

ERROR 1235: This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'

Is there a better way to do this query? Thanks for your help.

Regards,
R
Full Text Wild Card Searches
I've been trying to implement full text searches. Generally I use either LIKE or REGEXP for searches, but wanted to try some full text searches. It's wonderfully fast and is working well for full words; however, I have not found a way to add wildcards to a search. For example, if my text fields contain the word "residential", I would like the key "resident" to match it. Is there any way to do this in full text searches?

Thanks,
Mike
Re: Use of limit with IN on subquery
Rick Robinson wrote:
> Hi all- I'm using MySQL 4.1.7, trying to do a subquery that's apparently unsupported - I'm hoping someone can provide a quick alternative for me. I have a simple table Z with 3 columns, k1, k2, and total_amt, where k1 and k2 make up the primary key. I want to create a report that lists the top 10 total_amt for each k1.

Sounds like a group-wise maximum type of question; take a look at this:
http://dev.mysql.com/doc/mysql/en/example-Maximum-column-group-row.html

--
Roger
sum queries
I'm trying to create a table, and ultimately a graph, out of some syslog data. I have a table with the following info in it: Time (unix timestamp), bytes rcvd and bytes sent. I want to create a sql statement that groups the data over a certain period of time, and produces the following table:

Time         Sent                                   Rcvd
From x to y  Total bytes sent during period x to y  Total bytes rcvd during period x to y
From y to z  Total bytes sent during period y to z  Total bytes rcvd during period y to z
From z to a  Total bytes sent during period z to a  Total bytes rcvd during period z to a
From a to b  Total bytes sent during period a to b  Total bytes rcvd during period a to b

To create this I've tried a horrible method of using php to query the sql table and then walk through each row, adding it to each period of time as appropriate. It's messy and slow! What I want is to have mysql do this. Is it possible? Cheers Nunners
Re: Use of limit with IN on subquery
Rick Robinson wrote: I'm using MySQL 4.1.7, trying to do a subquery that's apparently unsupported - I'm hoping someone can provide a quick alternative for me. I have a simple table Z with 3 columns, k1, k2, and total_amt, where k1 and k2 make up the primary key. I want to create a report that lists the top 10 total_amt for each k1.

Hm... top 10... group-wise maximum is maybe not the best approach. Maybe a self-join is better in this case. A small experiment:

use test;
create table tt2 (
  k1 char(1) not null,
  k2 int not null,
  total_amt int,
  primary key(k1,k2));
insert into tt2 values('a',1,412),('a',2,142),('a',3,123),('a',4,312);
insert into tt2 values('b',1,441),('b',2,251),('b',3,421),('b',4,331);

select * from tt2;
+----+----+-----------+
| k1 | k2 | total_amt |
+----+----+-----------+
| a  |  1 |       412 |
| a  |  2 |       142 |
| a  |  3 |       123 |
| a  |  4 |       312 |
| b  |  1 |       441 |
| b  |  2 |       251 |
| b  |  3 |       421 |
| b  |  4 |       331 |
+----+----+-----------+
8 rows in set (0.02 sec)

select t1.k1,
       max(t1.total_amt) first,
       max(t2.total_amt) second,
       max(t3.total_amt) third
from tt2 t1
left join tt2 t2 on t2.k1 = t1.k1 and t2.total_amt < t1.total_amt
left join tt2 t3 on t3.k1 = t1.k1 and t3.total_amt < t2.total_amt
group by t1.k1;
+----+-------+--------+-------+
| k1 | first | second | third |
+----+-------+--------+-------+
| a  |   412 |    312 |   142 |
| b  |   441 |    421 |   331 |
+----+-------+--------+-------+
2 rows in set (0.05 sec)

-- Roger -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
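Roger's self-join can be sanity-checked outside MySQL. A minimal sketch, assuming an in-memory SQLite database driven from Python (an assumption for illustration; the SQL itself is portable). Each additional LEFT JOIN restricts to strictly smaller amounts within the same group, so max() over it yields the next-highest value per group:

```python
import sqlite3

# Rebuild Roger's tt2 experiment in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
create table tt2 (
  k1 char(1) not null,
  k2 int not null,
  total_amt int,
  primary key (k1, k2));
insert into tt2 values ('a',1,412),('a',2,142),('a',3,123),('a',4,312);
insert into tt2 values ('b',1,441),('b',2,251),('b',3,421),('b',4,331);
""")

# One extra LEFT JOIN per rank: t2 holds amounts below t1's, t3 below t2's.
rows = conn.execute("""
select t1.k1,
       max(t1.total_amt) as first,
       max(t2.total_amt) as second,
       max(t3.total_amt) as third
from tt2 t1
left join tt2 t2 on t2.k1 = t1.k1 and t2.total_amt < t1.total_amt
left join tt2 t3 on t3.k1 = t1.k1 and t3.total_amt < t2.total_amt
group by t1.k1
order by t1.k1
""").fetchall()
print(rows)  # [('a', 412, 312, 142), ('b', 441, 421, 331)]
```

Note the cost: each rank adds one more join over the full group, which is why this works for a top-3 but degrades quickly for a top-10.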
RE: Use of limit with IN on subquery
Hi Roger- Thanks for responding. It's what I was afraid of - I essentially have to write what amounts to a mini-batch process to get the info I want. In my specific case, it looks like I'd have to: 1) Create a temp result table 2) Get a list of distinct k1 3) For each distinct k1, select the top ten into the temp result table 4) Use the temp table to produce the result. And since I can't do 2) and 3) together because of the IN/LIMIT restriction, I'll have to craft a script/job/other to do it. Does anyone know if/when LIMIT inside an IN subquery will be supported? Thanks, R -Original Message- From: Roger Baklund [mailto:[EMAIL PROTECTED] Sent: Friday, December 03, 2004 10:32 AM To: Mysql Cc: [EMAIL PROTECTED] Subject: Re: Use of limit with IN on subquery Rick Robinson wrote: Hi all- I'm using MySQL 4.1.7, trying to do a subquery that's apparently unsupported - I'm hoping someone can provide a quick alternative for me. I have a simple table Z with 3 columns, k1, k2, and total_amt, where k1 and k2 make up the primary key. I want to create a report that lists the top 10 total_amt for each k1. Sounds like a group-wise maximum type of question, take a look at this: http://dev.mysql.com/doc/mysql/en/example-Maximum-column-group-row.html -- Roger -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Full Text Wild Card Searches
On Fri, Dec 03, 2004 at 10:30:25AM -0500, Michael J. Pawlowsky wrote: I've been trying to implement full text searches. Generally I use either LIKE or REGEXP for searches but wanted to try some Full Text Searches. It's wonderfully fast and is working well for full words, however I have not found a way to add wildcards in a search. For example if in my text fields I have the word residential I would like the key resident to match it. Is there any way to do this in Full Text Searches? If you are doing the search in Boolean mode, you can add '*' to the end of a word to match all words beginning with that prefix. For example: SELECT record FROM table WHERE MATCH (record) AGAINST ('resident*' IN BOOLEAN MODE) (Using 'IN BOOLEAN MODE' requires MySQL 4.0 or later.) -- Jim Winstead MySQL Inc. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Full Text Wild Card Searches
Jim Winstead wrote: For example: SELECT record FROM table WHERE MATCH (record) AGAINST ('resident*' IN BOOLEAN MODE) (Using 'IN BOOLEAN MODE' requires MySQL 4.0 or later.) I guess it is time to upgrade from 3.23, I'm using RedHat Linux and have upgraded on some machines but it is a pain to do since many dependencies need to be recompiled as well. Thanks, Mike -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: sum queries
James Nunnerley wrote: I'm trying to create a table, and ultimately a graph, out of some syslog data. I have a table with the following info in it: Time (unixtime stamp), bytes rcvd and bytes sent I want to create a sql statement that group the data over a certain period of time, and produces the following table: You don't tell us what this certain period is; in general you use date and time functions to manipulate time. If your period was one hour, you could use something like this:

select date_format(ts,'%Y-%m-%d %H') period, sum(rcvd), sum(sent)
from mytable
group by period;

URL: http://dev.mysql.com/doc/mysql/en/Date_and_time_functions.html -- Roger -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
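The hour-bucketing above can be sketched with SQLite's strftime() standing in for MySQL's DATE_FORMAT() (an assumption for illustration; these particular format codes happen to agree between the two):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table mytable (ts text, rcvd integer, sent integer)")
conn.executemany("insert into mytable values (?, ?, ?)", [
    ("2004-12-03 10:05:00", 100, 10),  # two rows fall in the 10:00 hour
    ("2004-12-03 10:45:00", 200, 20),
    ("2004-12-03 11:10:00", 300, 30),  # one row in the 11:00 hour
])

# Truncating the timestamp to year-month-day-hour makes it a group key.
rows = conn.execute("""
select strftime('%Y-%m-%d %H', ts) as period, sum(rcvd), sum(sent)
from mytable
group by period
order by period
""").fetchall()
print(rows)  # [('2004-12-03 10', 300, 30), ('2004-12-03 11', 300, 30)]
```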
RE: sum queries
-Original Message- From: Roger Baklund [mailto:[EMAIL PROTECTED] Sent: 03 December 2004 16:03 To: [EMAIL PROTECTED] Cc: James Nunnerley Subject: Re: sum queries James Nunnerley wrote: I'm trying to create a table, and ultimately a graph, out of some syslog data. I have a table with the following info in it: Time (unixtime stamp), bytes rcvd and bytes sent I want to create a sql statement that group the data over a certain period of time, and produces the following table: You don't tell us what this certain period is, in general you use date and time functions to manipulate time. If your period was one hour, you could use something like this: Apologies, I was hoping that the time period would be dynamic, as it's being called from a php script, which might choose an overall period of 7 days, split into 3 hour periods, or 24 hours, split into half hour periods. select date_format(ts,'%Y-%m-%d %H') period,sum(rcvd),sum(sent) from mytable group by period; So the query above will allow me to group by hour - which is quite useful - is there any way of grouping by say 3 hour periods? Thanks James URL: http://dev.mysql.com/doc/mysql/en/Date_and_time_functions.html -- Roger -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: Use of limit with IN on subquery
Hi Roger- Thanks for responding so quickly. Hmm. I like it. It would get a bit hairy if I wanted top 50 or top 100. And if I wanted the top # to be dynamic, I'll need to construct the query on the fly...but that may be very workable. I need to play with it a bit to see how it will perform. I have a table with about 500,000 rows (not really too big), so I'm hopeful. Thanks again. Best regards, Rick -Original Message- From: Roger Baklund [mailto:[EMAIL PROTECTED] Sent: Friday, December 03, 2004 10:49 AM To: Mysql Cc: [EMAIL PROTECTED] Subject: Re: Use of limit with IN on subquery Rick Robinson wrote: I'm using MySQL 4.1.7, trying to do a subquery that's apparently unsupported - I'm hoping someone can provide a quick alternative for me. I have a simple table Z with 3 columns, k1, k2, and total_amt, where k1 and k2 make up the primary key. I want to create a report that lists the top 10 total_amt for each k1. Hm... top 10... group-wise maximum is maybe not the best approach. Maybe a self-join is better in this case.
RE: sum queries
-Original Message- From: James Nunnerley [mailto:[EMAIL PROTECTED] Sent: 03 December 2004 16:13 To: 'Roger Baklund'; '[EMAIL PROTECTED]' Subject: RE: sum queries -Original Message- From: Roger Baklund [mailto:[EMAIL PROTECTED] Sent: 03 December 2004 16:03 To: [EMAIL PROTECTED] Cc: James Nunnerley Subject: Re: sum queries James Nunnerley wrote: I'm trying to create a table, and ultimately a graph, out of some syslog data. I have a table with the following info in it: Time (unixtime stamp), bytes rcvd and bytes sent I want to create a sql statement that group the data over a certain period of time, and produces the following table: You don't tell us what this certain period is, in general you use date and time functions to manipulate time. If your period was one hour, you could use something like this: Apologies, I was hoping that the time period would be dynamic, as it's being called from a php script, which might choose an overall period of 7 days, split into 3 hour periods, or 24 hours, split into half hour periods. select date_format(ts,'%Y-%m-%d %H') period,sum(rcvd),sum(sent) from mytable group by period; So the query above will allow me to group by hour - which is quite useful - is there any way of grouping by say 3 hour periods? Also I've spotted another problem, in the fact that I only have the date/time as a unixtime stamp! Is there a way within the MySQL query of changing a unixtime stamp to something meaningful? Thanks James URL: http://dev.mysql.com/doc/mysql/en/Date_and_time_functions.html -- Roger -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Use of limit with IN on subquery
Don't bother. This is a very expensive solution. You get nearly a Cartesian product on each JOIN. I've got a 40 row test table with 20 values in each of 2 groups. The top 3 version of this examines 2302 rows to produce the 3 values for each of the 2 groups. The top 10 version has been running for several minutes... Michael Rick Robinson wrote: Hi Roger- Thanks for responding so quickly. Hmm. I like it. It would get a bit hairy if I wanted top 50 or top 100. And if I wanted the top # to be dynamic, I'll need to construct the query on the fly...but that may be very workable. I need to play with it a bit to see how it will perform. I have a table with about 500,000 rows (not really too big), so I'm hopeful. Thanks again. Best regards, Rick -Original Message- From: Roger Baklund [mailto:[EMAIL PROTECTED] Sent: Friday, December 03, 2004 10:49 AM To: Mysql Cc: [EMAIL PROTECTED] Subject: Re: Use of limit with IN on subquery Rick Robinson wrote: I'm using MySQL 4.1.7, trying to do a subquery that's apparently unsupported - I'm hoping someone can provide a quick alternative for me. I have a simple table Z with 3 columns, k1, k2, and total_amt, where k1 and k2 make up the primary key. I want to create a report that lists the top 10 total_amt for each k1. Hm... top 10... group-wise maximum is maybe not the best approach. Maybe a self-join is better in this case.
Re: sum queries
James Nunnerley wrote: * Roger Baklund: select date_format(ts,'%Y-%m-%d %H') period,sum(rcvd),sum(sent) from mytable group by period; So the query above will allow me to group by hour - which is quite useful - is there any way of grouping by say 3 hour periods? Not using the date_format() function, as far as I can tell. But you could convert your time to seconds, divide by 3*60*60=10800, and group by the result. Something like this:

select from_unixtime(floor(unix_timestamp(ts) / 10800) * 10800, '%Y-%m-%d %H+3h') period,
       sum(rcvd), sum(sent)
from mytable
group by period;

To get the actual first and last timestamp within each period, you could just add min(ts) and max(ts) to the field list. -- Roger -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
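The divide-and-truncate trick above is worth seeing on concrete numbers. A minimal sketch, assuming SQLite via Python and a hypothetical base timestamp (MySQL would wrap the bucket start in FROM_UNIXTIME() to make it readable; integer division alone does the grouping):

```python
import sqlite3

BUCKET = 3 * 60 * 60  # 10800 seconds = 3-hour periods

conn = sqlite3.connect(":memory:")
conn.execute("create table mytable (ts integer, rcvd integer, sent integer)")
base = 1102032000  # 2004-12-03 00:00:00 UTC, which is a 3-hour boundary
conn.executemany("insert into mytable values (?, ?, ?)", [
    (base +  30 * 60, 100, 10),   # 00:30 -> bucket starting 00:00
    (base +  90 * 60, 200, 20),   # 01:30 -> same bucket
    (base + 210 * 60, 300, 30),   # 03:30 -> bucket starting 03:00
])

# floor-divide the unix time by the bucket width, multiply back up,
# and the result is the bucket's starting timestamp -- a group key.
rows = conn.execute("""
select (ts / ?) * ? as period_start,
       sum(rcvd), sum(sent), min(ts), max(ts)
from mytable
group by period_start
order by period_start
""", (BUCKET, BUCKET)).fetchall()
for period_start, rcvd, sent, first_ts, last_ts in rows:
    print(period_start, rcvd, sent, first_ts, last_ts)
```

Changing BUCKET to 1800 gives the half-hour split James mentioned; the SQL stays the same, which is what makes the period dynamic from php.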
Re: Use of limit with IN on subquery
Michael Stassen wrote: Don't bother. This is a very expensive solution. You get nearly a Cartesian product on each JOIN. I've got a 40 row test table with 20 values in each of 2 groups. The top 3 version of this examines 2302 rows to produce the 3 values for each of the 2 groups. The top 10 version has been running for several minutes... It just finished:

+---+-------+--------+-------+--------+-------+-------+---------+--------+-------+-------+
| g | first | second | third | fourth | fifth | sixth | seventh | eighth | ninth | tenth |
+---+-------+--------+-------+--------+-------+-------+---------+--------+-------+-------+
| a |   392 |    339 |   332 |    330 |   304 |   279 |     271 |    250 |   183 |   179 |
| b |   390 |    338 |   302 |    273 |   272 |   268 |     245 |    215 |   211 |   189 |
+---+-------+--------+-------+--------+-------+-------+---------+--------+-------+-------+
2 rows in set (7 min 41.06 sec)

Nearly 8 minutes to get the top 10 for two 20-row groups. This definitely doesn't scale. Michael -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Bug? Can't create/write to file '/root/tmp/ibu6vdue' (Errcode: 13)
After installing MySQL 5.0.2 on MDK10.0 mysqld doesn't start with error: Can't create/write to file '/root/tmp/ibu6vdue' (Errcode: 13) I think it's a problem with innodb; if I add skip-innodb to my.cnf mysqld starts up: 041203 14:04:01 mysqld started 041203 14:04:01 [Warning] Asked for 196608 thread stack, but got 126976 /usr/sbin/mysqld: ready for connections. Version: '5.0.2-alpha-standard-log' socket: '/var/lib/mysql/mysql.sock' port: But when I enable innodb (#skip-innodb in my.cnf): 041203 14:04:56 mysqld started 041203 14:04:56 [Warning] Asked for 196608 thread stack, but got 126976 ./usr/sbin/mysqld: Can't create/write to file '/root/tmp/ibu6vdue' (Errcode: 13) 041203 14:04:56 InnoDB: Error: unable to create temporary file; errno: 13 041203 14:04:56 [ERROR] Can't init databases 041203 14:04:56 [ERROR] Aborting 041203 14:04:56 [Note] /usr/sbin/mysqld: Shutdown complete 041203 14:04:56 mysqld ended The only way to get InnoDB working is to chmod 777 /root and /root/tmp, =( Alejandro -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Use of limit with IN on subquery
I think Roger was actually on the right track with his initial suggestion that this is a groupwise maximum problem as described in the manual page he referenced. Try this:

CREATE TEMPORARY TABLE topten (k1 CHAR(1), total_amt INT);

LOCK TABLES Z AS x READ, Z AS y READ;

INSERT INTO topten
SELECT x.k1, x.total_amt
FROM Z x JOIN Z y ON x.k1 = y.k1
GROUP BY x.k1, x.total_amt
HAVING SUM(y.total_amt > x.total_amt) < 10
ORDER BY x.k1, x.total_amt DESC;

SELECT x.* FROM Z AS x
JOIN topten ON x.k1 = topten.k1 AND x.total_amt = topten.total_amt
ORDER BY x.k1, x.total_amt DESC;

UNLOCK TABLES;
DROP TABLE topten;

If you want the top N, you only have to change the 10 to N in the HAVING clause. It's still a bit expensive, but you only join the table to itself once regardless of N, so it should scale a bit better. Michael

Michael Stassen wrote: Michael Stassen wrote: Don't bother. This is a very expensive solution. You get nearly a Cartesian product on each JOIN. I've got a 40 row test table with 20 values in each of 2 groups. The top 3 version of this examines 2302 rows to produce the 3 values for each of the 2 groups. The top 10 version has been running for several minutes... It just finished:

+---+-------+--------+-------+--------+-------+-------+---------+--------+-------+-------+
| g | first | second | third | fourth | fifth | sixth | seventh | eighth | ninth | tenth |
+---+-------+--------+-------+--------+-------+-------+---------+--------+-------+-------+
| a |   392 |    339 |   332 |    330 |   304 |   279 |     271 |    250 |   183 |   179 |
| b |   390 |    338 |   302 |    273 |   272 |   268 |     245 |    215 |   211 |   189 |
+---+-------+--------+-------+--------+-------+-------+---------+--------+-------+-------+
2 rows in set (7 min 41.06 sec)

Nearly 8 minutes to get the top 10 for two 20-row groups. This definitely doesn't scale. Michael -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
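The core of Michael's suggestion is the HAVING clause: a value survives if fewer than N rows in its group beat it. A minimal sketch of just that step, assuming SQLite via Python and Roger's sample data, with N=2 to keep the toy output short (the post uses 10):

```python
import sqlite3

N = 2  # top-N per group

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table Z (k1 char(1), k2 int, total_amt int);
insert into Z values ('a',1,412),('a',2,142),('a',3,123),('a',4,312);
insert into Z values ('b',1,441),('b',2,251),('b',3,421),('b',4,331);
""")

# sum(y.total_amt > x.total_amt) counts, per (k1, total_amt) group, how
# many rows in the same k1 group have a strictly larger amount; keeping
# rows where that count is below N selects the top N per group.
rows = conn.execute("""
select x.k1, x.total_amt
from Z x join Z y on x.k1 = y.k1
group by x.k1, x.total_amt
having sum(y.total_amt > x.total_amt) < ?
order by x.k1, x.total_amt desc
""", (N,)).fetchall()
print(rows)  # [('a', 412), ('a', 312), ('b', 441), ('b', 421)]
```

As Michael says, there is only one self-join no matter how large N is, which is the scaling win over the chained-LEFT-JOIN approach.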
RE: Use of limit with IN on subquery
You could do something like this; not sure what your intent is if among the top total_amt there is a single exact amount that occurred 30 or 40 times... are you implying the top 10 items or the top 10 distinct items?

select * from table_z a
where 10 >= (select count(*) from table_z b
             where b.k1 = a.k1 and b.total_amt >= a.total_amt)

Ed -Original Message- From: Rick Robinson [mailto:[EMAIL PROTECTED] Sent: Friday, December 03, 2004 9:14 AM To: 'Roger Baklund'; 'Mysql' Subject: RE: Use of limit with IN on subquery Hi Roger- Thanks for responding so quickly. Hmm. I like it. It would get a bit hairy if I wanted top 50 or top 100. And if I wanted the top # to be dynamic, I'll need to construct the query on the fly...but that may be very workable. I need to play with it a bit to see how it will perform. I have a table with about 500,000 rows (not really too big), so I'm hopeful. Thanks again. Best regards, Rick -Original Message- From: Roger Baklund [mailto:[EMAIL PROTECTED] Sent: Friday, December 03, 2004 10:49 AM To: Mysql Cc: [EMAIL PROTECTED] Subject: Re: Use of limit with IN on subquery Rick Robinson wrote: I'm using MySQL 4.1.7, trying to do a subquery that's apparently unsupported - I'm hoping someone can provide a quick alternative for me. I have a simple table Z with 3 columns, k1, k2, and total_amt, where k1 and k2 make up the primary key. I want to create a report that lists the top 10 total_amt for each k1. Hm... top 10... group-wise maximum is maybe not the best approach. Maybe a self-join is better in this case.
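Ed's correlated-subquery variant keeps a row when at most N rows in its group (itself included) have a total_amt at least as large. A minimal sketch, assuming SQLite via Python and Roger's earlier sample data, with a hypothetical N=2 instead of the post's 10:

```python
import sqlite3

N = 2  # top-N per group

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table table_z (k1 char(1), k2 int, total_amt int);
insert into table_z values ('a',1,412),('a',2,142),('a',3,123),('a',4,312);
insert into table_z values ('b',1,441),('b',2,251),('b',3,421),('b',4,331);
""")

# For each row a, the subquery counts group members with an amount >= a's
# own (including a itself); rows ranked within the top N pass the filter.
rows = conn.execute("""
select k1, k2, total_amt
from table_z a
where ? >= (select count(*) from table_z b
            where b.k1 = a.k1 and b.total_amt >= a.total_amt)
order by k1, total_amt desc
""", (N,)).fetchall()
print(rows)  # [('a', 1, 412), ('a', 4, 312), ('b', 1, 441), ('b', 3, 421)]
```

Ed's tie caveat shows up directly here: duplicated amounts inflate the count, so a value repeated more than N times is excluded entirely.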
mysql 4.1.7 make error manager.c: In function `mysql_manager_connect':
Hello, Sorry for the cross post, I didn't get any response from the internals list... I'm trying to compile mysql 4.1.7 from source on SLES 8, running on os390. I'm having a problem that others have seemed to have: http://lists.mysql.com/internals/17577 http://www.linuxquestions.org/questions/history/225560 http://forums.mysql.com/read.php?11,6459,6459 http://lists.mysql.com/internals/3838 But the fixes that were suggested were not clear, and in some cases, like for 17577, I tried adding the references to the other header files, but it did not fix the problem. If anyone can be of assistance, it would greatly be appreciated. -Thanks, RYAN

Configure options:

#!/bin/sh
./configure \
  --prefix=/usr/local/mysql-4.1.7 \
  --localstatedir=/usr/local/mysql/data \
  --without-debug

Error:

gcc -DDEFAULT_CHARSET_HOME=\"/usr/local/mysql-4.1.7\" -DDATADIR=\"/usr/local/mysql/data\" -DSHAREDIR=\"/usr/local/mysql-4.1.7/share/mysql\" -DUNDEF_THREADS_HACK -DDONT_USE_RAID -I. -I. -I.. -I../include -O3 -DDBUG_OFF -MT manager.lo -MD -MP -MF .deps/manager.Tpo -c manager.c -fPIC -DPIC -o .libs/manager.o
manager.c: In function `mysql_manager_connect':
manager.c:136: warning: passing arg 5 of `gethostbyname_r' from incompatible pointer type
manager.c:136: too few arguments to function `gethostbyname_r'
manager.c:136: warning: assignment makes pointer from integer without a cast
make[2]: *** [manager.lo] Error 1
make[2]: Leaving directory `/admin/packages/cbtap/mysql-4.1.7/libmysql'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/admin/packages/cbtap/mysql-4.1.7'
make: *** [all] Error 2

Excerpts from manager.c:

 24 #include <mysql.h>
 25 #include <mysql_version.h>
 26 #include <mysqld_error.h>
 27 #include <my_sys.h>
 28 #include <mysys_err.h>
 29 #include <m_string.h>
 30 #include <m_ctype.h>
 31 #include <my_net.h>
 32 #include <errmsg.h>
 33 #include <violite.h>
 34 #include <sys/stat.h>
 35 #include <signal.h>
 36 #include <errno.h>
 37 #include <sys/types.h>

130   else
131   {
132     int tmp_errno;
133     struct hostent tmp_hostent,*hp;
134     char buff2[GETHOSTBYNAME_BUFF_SIZE];
135     hp = my_gethostbyname_r(host,&tmp_hostent,buff2,sizeof(buff2),
136                             &tmp_errno);
137     if (!hp)
138     {
139       con->last_errno=tmp_errno;
140       sprintf(con->last_error,"Could not resolve host '%-.64s'",host);
141       my_gethostbyname_r_free();
142       goto err;
143     }
144     memcpy(&sock_addr.sin_addr,hp->h_addr, (size_t) hp->h_length);
145     my_gethostbyname_r_free();
146   }

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Use of limit with IN on subquery
Michael Stassen wrote: Michael Stassen wrote: Don't bother. This is a very expensive solution. You get nearly a Cartesian product on each JOIN. I've got a 40 row test table with 20 values in each of 2 groups. The top 3 version of this examines 2302 rows to produce the 3 values for each of the 2 groups. The top 10 version has been running for several minutes... It just finished: [...] 2 rows in set (7 min 41.06 sec) Nearly 8 minutes to get the top 10 for two 20-row groups. This definitely doesn't scale. My (new) computer used 3 minutes; top 5 values took 3 seconds. I have used similar code in production, but it was not top 10, it was more like top 4, and it worked just fine with millions of rows. You say nearly Cartesian product on each join; I would say nearly Cartesian within each group. The size of the groups very much matters in this case. In my case the groups were typically 1-5 in size, sometimes 10 or even 100, but mostly (99%) small groups. Top 50 and definitely top 100 would never work with this approach, as MySQL can only join 31 (or 63 on a 64 bit platform) tables. -- Roger -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
General Sql question
I am kinda new to SQL, and am having a problem trying to get something done. I'm trying to search for usernames in one of my tables. The name is stored as firstname lastname. I wrote my query as select Cust_ID, Cust_Name from mailings where ucase(Name) = ucase(Cust_Name) When it runs, I get an error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'Grosz = Cust_Name' at line 3 It seems that it's only searching for the name after the space between the first and last name. How can I correct this in the query? -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
RE: MySQL support for AMD64
Hi there, I have had good success with gentoo (full source install/compile) and MySQL's binaries on Opteron systems. We're running 2 opteron boxes (1 with 16 gigs of RAM, 1 with 32 gigs of RAM) with no problems. We are running at about 1000-2000 queries per second (mostly inserts). Cheers, Mark Steele Implementation Director CDT Inc. -Original Message- From: Lynn Bender [mailto:[EMAIL PROTECTED] Sent: November 30, 2004 2:23 PM To: [EMAIL PROTECTED] Subject: MySQL support for AMD64 I just received a box with the following specs: Dual AMD64 8G ram Two 3ware 2.4 terabyte RAID 5 arrays. My company has been using Redhat for most of its production machines. 1. Does anyone have any success/horror stories running MySQL 4.0.x on RHES 3/ AMD64? 2. Does anyone have alternate recommendations for running MySQL databases in the terabyte range on AMD64? Thanks Lynn Bender UnsubCentral Secure Email List Suppression Management Neutral. Bonded. Trusted. You are receiving this commercial email from a representative of UnsubCentral, Inc. 13171 Pond Springs Road, Austin, TX 78729 Toll Free: 800.589.0445 To cease all communication with UnsubCentral, visit http://www.unsubcentral.com/unsubscribe or send an email to [EMAIL PROTECTED] smime.p7s Description: S/MIME cryptographic signature
if last binary byte is space (ascii 32) mysql drops it
This all started when one of the 16 byte binary primary keys kicked out a duplicate key error. It seems mysql does not store the last byte of the binary value if it is a space. That is, ascii 32 or hex 20. How do I force it to store the space? Thanks!

create table testtable (
  id binary(16) NOT NULL PRIMARY KEY
) ENGINE=MyISAM;

insert into testtable ( id ) values (0x3b3331105ee3f0779ad5f041e75f9420);

select hex(id) from testtable;
# HEX value retrieved is 3B3331105EE3F0779AD5F041E75F94 and has length 30

select hex(id) from testtable where id=0x3b3331105ee3f0779ad5f041e75f9420;
# nothing found

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Use of limit with IN on subquery
What if we used the MySQL-specific feature group-wise auto_increment? http://dev.mysql.com/doc/mysql/en/example-AUTO_INCREMENT.html For MyISAM and BDB tables you can specify AUTO_INCREMENT on a secondary column in a multiple-column index. In this case, the generated value for the AUTO_INCREMENT column is calculated as MAX(auto_increment_column)+1 WHERE prefix=given-prefix. This is useful when you want to put data into ordered groups. So to group his records by k1:

CREATE TABLE groupwizeCounter (
    id int auto_increment
  , k1 datatype here
  , k2 datatype here
  , total_amt datatype here
  , PRIMARY KEY (k1, id)
) type=INNODB;

INSERT groupwizeCounter (k1, k2, total_amt)
SELECT k1, k2, total_amt FROM Z ORDER BY total_amt DESC;

SELECT k1, k2, total_amt
FROM groupwizeCounter
WHERE id <= 10
ORDER BY k1;

What do y'all think? Shawn Green Database Administrator Unimin Corporation - Spruce Pine Roger Baklund [EMAIL PROTECTED] wrote on 12/03/2004 12:40:24 PM: Michael Stassen wrote: Michael Stassen wrote: Don't bother. This is a very expensive solution. You get nearly a Cartesian product on each JOIN. I've got a 40 row test table with 20 values in each of 2 groups. The top 3 version of this examines 2302 rows to produce the 3 values for each of the 2 groups. The top 10 version has been running for several minutes... It just finished: [...] 2 rows in set (7 min 41.06 sec) Nearly 8 minutes to get the top 10 for two 20-row groups. This definitely doesn't scale. My (new) computer used 3 minutes, top 5 values took 3 seconds. I have used similar code in production, but it was not top 10, it was more like top 4, and it worked just fine, millions of rows. You say nearly cartesian product on each join, I would say nearly cartesian within each group. The size of the groups very much matters in this case. In my case the groups was typically 1-5 in size, sometimes 10 or even 100, but mostly (99%) small groups.
top 50 and definitely top 100 would never work with this approach, as MySQL can only join 31 (or 63 on 64 bit platform) tables. -- Roger -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: General Sql question
Your sample query is not valid SQL. What tool/language are you using to run this query? There must be something interpreting what you entered and mis-representing your query to the MySQL server. Without that piece of the puzzle I am completely in the dark. It would also help to know what version MySQL server you are using and if you are connecting to your MySQL server through any kind of library (connector/J, ODBC, etc.) which one and what version is it? Shawn Green Database Administrator Unimin Corporation - Spruce Pine Steve Grosz [EMAIL PROTECTED] wrote on 12/03/2004 12:58:30 PM: I am kinda new to SQL, and am having a problem trying to get something done. I'm trying to search for usernames in one of my tables. The name is stored as firstname lastname. I wrote my query as select Cust_ID, Cust_Name from mailings where ucase(Name) = ucase(Cust_Name) When it runs, I get a error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'Grosz = Cust_Name' at line 3 It seems that its only searching for the name after the space between the first and last name. How can I correct this in the query? -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Table Names Begin with '#'
Hello, As you may know, the mambo (open source CMS) table names begin with a # character. How does one describe a table that begins with #? -DK Daniel Kemper Lead Programmer Intellisys, Inc.
Re: General Sql question
I am writing this by hand, and it is being used within ColdFusion. MySQL is v4.1.7 and I am connecting via ODBC. Steve [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Your sample query is not valid SQL. What tool/language are you using to run this query? There must be something interpreting what you entered and mis-representing your query to the MySQL server. Without that piece of the puzzle I am completely in the dark. It would also help to know what version MySQL server you are using and if you are connecting to your MySQL server through any kind of library (connector/J, ODBC, etc.) which one and what version is it? Shawn Green Database Administrator Unimin Corporation - Spruce Pine Steve Grosz [EMAIL PROTECTED] wrote on 12/03/2004 12:58:30 PM: I am kinda new to SQL, and am having a problem trying to get something done. I'm trying to search for usernames in one of my tables. The name is stored as firstname lastname. I wrote my query as select Cust_ID, Cust_Name from mailings where ucase(Name) = ucase(Cust_Name) When it runs, I get an error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'Grosz = Cust_Name' at line 3 It seems that it's only searching for the name after the space between the first and last name. How can I correct this in the query? -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Bug? Can't create/write to file '/root/tmp/ibu6vdue' (Errcode: 13)
Alejandro, - Original Message - From: Alejandro D. Burne [EMAIL PROTECTED] Newsgroups: mailing.database.myodbc Sent: Friday, December 03, 2004 7:20 PM Subject: Bug? Can't create/write to file '/root/tmp/ibu6vdue' (Errcode: 13) After installing MySQL 5.0.2 on MDK10.0 mysqld doesn't start with error: Can't create/write to file '/root/tmp/ibu6vdue' (Errcode: 13) InnoDB, and mysqld in general, must be able to create temporary files. http://dev.mysql.com/doc/mysql/en/Temporary_files.html MySQL uses the value of the TMPDIR environment variable as the pathname of the directory in which to store temporary files. If you don't have TMPDIR set, MySQL uses the system default, which is normally `/tmp', `/var/tmp', or `/usr/tmp'. If the filesystem containing your temporary file directory is too small, you can use the --tmpdir option to mysqld to specify a directory in a filesystem where you have enough space. I think it's a problem with innodb, if I add skip-innodb on my.cnf mysqld starts up: 041203 14:04:01 mysqld started 041203 14:04:01 [Warning] Asked for 196608 thread stack, but got 126976 /usr/sbin/mysqld: ready for connections. 
Version: '5.0.2-alpha-standard-log' socket: '/var/lib/mysql/mysql.sock' port: But when I enable innodb (#skip-innodb on my.cnf); 041203 14:04:56 mysqld started 041203 14:04:56 [Warning] Asked for 196608 thread stack, but got 126976 ./usr/sbin/mysqld: Can't create/write to file '/root/tmp/ibu6vdue' (Errcode: 13) 041203 14:04:56 InnoDB: Error: unable to create temporary file; errno: 13 041203 14:04:56 [ERROR] Can't init databases 041203 14:04:56 [ERROR] Aborting 041203 14:04:56 [Note] /usr/sbin/mysqld: Shutdown complete 041203 14:04:56 mysqld ended The only way to do work innodb is chmod 777 /root and /root/tmp, =( Alejandro Best regards, Heikki Tuuri Innobase Oy Foreign keys, transactions, and row level locking for MySQL InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM tables http://www.innodb.com/order.php Order MySQL technical support from https://order.mysql.com/ -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
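Following Heikki's pointer to the temporary-file documentation, the usual fix here (rather than chmod 777 on /root) is to point mysqld at a temporary directory that the account it runs as can actually write to. A sketch, with hypothetical paths -- adjust to your layout, create the directory, and chown it to the mysql user:

```ini
# /etc/my.cnf (fragment) -- sketch, paths are illustrative
[mysqld]
# mysqld and InnoDB create temporary files here; the directory must be
# writable by the user mysqld runs as (typically 'mysql'), which
# /root/tmp is not
tmpdir = /var/lib/mysql/tmp
```

It is also worth checking that a `TMPDIR=/root/tmp` setting is not being inherited from root's environment by the startup script, since that environment variable is what points mysqld at /root/tmp in the first place.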
Re: if last binary byte is space (ascii 32) mysql drops it
In the last episode (Dec 03), Mark Maunder said: This all started when one of the 16 byte binary primary keys kicked out a duplicate key error. It seems mysql does not store the last byte of the binary value if it is a space. That is, ascii 32 or hex 20. How do I force it to store the space? Thanks! create table testtable ( id binary(16) NOT NULL PRIMARY KEY ) ENGINE=MyISAM; There is no native MySQL BINARY type; it's mapped to CHAR(n) BINARY. CHARs and VARCHARs trim trailing blanks (a known issue, but low priority, I think). Try using a TINYBLOB column type instead. -- Dan Nelson [EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: if last binary byte is space (ascii 32) mysql drops it
Thing is, I don't want a dynamic table for performance reasons. I'm storing an MD5 hash, which is 16 bytes. As a workaround I'm only using 8 bytes of the hash and storing it in a bigint(20) column for now. So I guess roughly 1.8 x 10^19 (2^64) possible values will have to be unique enough for now. This turned out to be a very hard-to-debug little issue for me. Perhaps others will be luckier. I'd like to see it fixed asap. On Fri, 2004-12-03 at 12:10, Dan Nelson wrote: In the last episode (Dec 03), Mark Maunder said: This all started when one of the 16 byte binary primary keys kicked out a duplicate key error. It seems mysql does not store the last byte of the binary value if it is a space. That is, ascii 32 or hex 20. How do I force it to store the space? Thanks! create table testtable ( id binary(16) NOT NULL PRIMARY KEY ) ENGINE=MyISAM; There is no native mysql BINARY type; it's mapped to CHAR(n) BINARY. CHARS and VARCHARS trim trailing blanks (A known issue, but low priority I think). Try using a TINYBLOB column type instead. -- Mark D. Maunder [EMAIL PROTECTED] http://www.workzoo.com/ The Best jobs from the Best Job Sites. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: if last binary byte is space (ascii 32) mysql drops it
At 14:10 -0600 12/3/04, Dan Nelson wrote: In the last episode (Dec 03), Mark Maunder said: This all started when one of the 16 byte binary primary keys kicked out a duplicate key error. It seems mysql does not store the last byte of the binary value if it is a space. That is, ascii 32 or hex 20. How do I force it to store the space? Thanks! create table testtable ( id binary(16) NOT NULL PRIMARY KEY ) ENGINE=MyISAM; There is no native mysql BINARY type; it's mapped to CHAR(n) BINARY. CHARS and VARCHARS trim trailing blanks (A known issue, but low priority I think). Try using a TINYBLOB column type instead. I agree about using the TINYBLOB to avoid trailing space truncation, but BINARY and VARBINARY are MySQL data types now. http://dev.mysql.com/doc/mysql/en/BINARY_VARBINARY.html -- Paul DuBois, MySQL Documentation Team Madison, Wisconsin, USA MySQL AB, www.mysql.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
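To make Dan's workaround concrete, here is a sketch (table names are made up, and this is untested against the versions in the thread -- BINARY semantics changed over the 4.x/5.x series):

```sql
-- A BINARY(16) key behaves like CHAR(16) BINARY here: trailing 0x20
-- bytes are trimmed, so two MD5 digests differing only in a final
-- space byte collide on the primary key.
CREATE TABLE digests_char (
  id BINARY(16) NOT NULL,
  PRIMARY KEY (id)
) ENGINE=MyISAM;

-- TINYBLOB preserves every byte, at the cost of a dynamic row format.
-- BLOB columns in a key need an explicit prefix length, so index the
-- full 16 bytes:
CREATE TABLE digests_blob (
  id TINYBLOB NOT NULL,
  PRIMARY KEY (id(16))
) ENGINE=MyISAM;
```

The prefix length in `PRIMARY KEY (id(16))` is required because MySQL cannot index a BLOB column without being told how many leading bytes to use.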
Re: Use of limit with IN on subquery
[EMAIL PROTECTED] wrote: What if we used the MySQL-specific feature group-wise auto_increment ? I was thinking of a similar idea, with user variables, also MySQL-specific. What do y'all think? I think it should work, but only Rick can tell... :) -- Roger -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: if last binary byte is space (ascii 32) mysql drops it
So what you're saying is that BINARY isn't binary because it chomps spaces off the end, thereby corrupting the binary data. Sounds like a bug. Should I report it? On Fri, 2004-12-03 at 12:30, Paul DuBois wrote: I agree about using the TINYBLOB to avoid trailing space truncation, but BINARY and VARBINARY are MySQL data types now. http://dev.mysql.com/doc/mysql/en/BINARY_VARBINARY.html -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: if last binary byte is space (ascii 32) mysql drops it
Mark, - Original Message - From: Mark Maunder [EMAIL PROTECTED] Newsgroups: mailing.database.myodbc Sent: Friday, December 03, 2004 10:52 PM Subject: Re: if last binary byte is space (ascii 32) mysql drops it So what you're saying is that BINARY isn't binary because it chomps spaces off the end, thereby corrupting the binary data. Sounds like a bug. Should I report it? It is in the TODO to implement a real VARCHAR in MySQL. That fix may also fix the annoying problem that MySQL removes trailing ASCII(32) characters from the end of all strings, including BINARY strings. It is really counterintuitive, I agree. It does not hurt to report this as a bug at bugs.mysql.com; it could speed up the fixing of this problem. On Fri, 2004-12-03 at 12:30, Paul DuBois wrote: I agree about using the TINYBLOB to avoid trailing space truncation, but BINARY and VARBINARY are MySQL data types now. http://dev.mysql.com/doc/mysql/en/BINARY_VARBINARY.html Best regards, Heikki Tuuri Innobase Oy Foreign keys, transactions, and row level locking for MySQL InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM tables http://www.innodb.com/order.php Order MySQL technical support from https://order.mysql.com/ -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
mysql and large integers
It looks like when mysql coerces character strings into integers, it turns them into signed ints. Obviously, if the column is unsigned, this is a problem. Don't use quotes, you say. Problem is that the Perl DBI API seems to put quotes around everything. So when I grab a really, really large integer from the db using the Perl API, and then try to get a child record referencing the same large integer ID, the DB doesn't give me anything, because it's coercing a large chunk of text into a signed integer and truncating it. Another not-really-a-bug, but definitely a pitfall. And it sucks because, after not being able to use MD5 hashes to index my records using BINARY(16) (because binary isn't really binary, because it cuts off spaces), I'm losing a digit of my next-best-thing thanks to unsigned integers which are actually signed. Don't make me go spend my life savings on Oracle! Here's an example in case you're really bored. The problem below exists because 9358082631434058695 > 2^63.

## First with no quotes around the large integer:
mysql> select job_id from wordbarrel_9a where job_id=9358082631434058695;
+---------------------+
| job_id              |
+---------------------+
| 9358082631434058695 |
+---------------------+
1 row in set (0.00 sec)

## Then with quotes:
mysql> select job_id from wordbarrel_9a where job_id='9358082631434058695';
Empty set (0.00 sec)

mysql> desc wordbarrel_9a;
+--------+---------------------+------+-----+---------+-------+
| Field  | Type                | Null | Key | Default | Extra |
+--------+---------------------+------+-----+---------+-------+
| job_id | bigint(20) unsigned |      | PRI | 0       |       |
+--------+---------------------+------+-----+---------+-------+

-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: mysql and large integers
At 16:34 -0800 12/3/04, Mark Maunder wrote: It looks like when mysql coerces character strings into integers, it turns them into signed int's. Obviously if the column is unsigned, this is a problem. Don't use quotes you say. Problem is that the perl DBI API seems to put quotes around everything. So when I grab a really really large integer from the db using the perl api, and then try to get a child record referencing the same large integer ID, the DB doesn't give me anything because it's coercing a large chunk of text into a signed integer and truncating it. You don't indicate when it is that DBI is putting quotes around everything, but if what you mean is that values bound to placeholders get quoted, you can suppress that. perldoc DBI shows this information:

Data Types for Placeholders

The \%attr parameter can be used to hint at the data type the placeholder should have. Typically, the driver is only interested in knowing if the placeholder should be bound as a number or a string.

  $sth->bind_param(1, $value, { TYPE => SQL_INTEGER });

As a short-cut for the common case, the data type can be passed directly, in place of the \%attr hash reference. This example is equivalent to the one above:

  $sth->bind_param(1, $value, SQL_INTEGER);

The TYPE value indicates the standard (non-driver-specific) type for this parameter. To specify the driver-specific type, the driver may support a driver-specific attribute, such as { ora_type => 97 }. The SQL_INTEGER and other related constants can be imported using

  use DBI qw(:sql_types);

See DBI Constants for more information.

Another not-really-a-bug but definitely a pitfall. And it sucks because after not being able to use md5 hashes to index my records using BINARY(16) because binary isn't really binary because it cuts off spaces, I'm losing a digit of my next-best-thing thanks to unsigned integers which are actually signed. Don't make me go spend my life savings on Oracle! Here's an example in case you're really bored. 
The problem below exists because 9358082631434058695 > 2^63.

## First with no quotes around the large integer:
mysql> select job_id from wordbarrel_9a where job_id=9358082631434058695;
+---------------------+
| job_id              |
+---------------------+
| 9358082631434058695 |
+---------------------+
1 row in set (0.00 sec)

## Then with quotes:
mysql> select job_id from wordbarrel_9a where job_id='9358082631434058695';
Empty set (0.00 sec)

mysql> desc wordbarrel_9a;
+--------+---------------------+------+-----+---------+-------+
| Field  | Type                | Null | Key | Default | Extra |
+--------+---------------------+------+-----+---------+-------+
| job_id | bigint(20) unsigned |      | PRI | 0       |       |
+--------+---------------------+------+-----+---------+-------+

-- Paul DuBois, MySQL Documentation Team Madison, Wisconsin, USA MySQL AB, www.mysql.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
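On the SQL side there is also a workaround that leaves the client's quoting alone: cast the quoted value back to an unsigned integer in the query itself. A sketch (not tested against the 4.x server in the thread; `CAST ... AS UNSIGNED` has been available since MySQL 4.0.2):

```sql
-- Force an unsigned-integer comparison even when the client library
-- has wrapped the value in quotes:
SELECT job_id
FROM wordbarrel_9a
WHERE job_id = CAST('9358082631434058695' AS UNSIGNED);
```

This keeps the fix in one place when you cannot easily change how every placeholder is bound.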
Re: mysql and large integers
Thanks very much Paul. My day has just improved. On Fri, 2004-12-03 at 16:53, Paul DuBois wrote: At 16:34 -0800 12/3/04, Mark Maunder wrote: It looks like when mysql coerces character strings into integers, it turns them into signed int's. Obviously if the column is unsigned, this is a problem. Don't use quotes you say. Problem is that the perl DBI API seems to put quotes around everything. So when I grab a really really large integer from the db using the perl api, and then try to get a child record referencing the same large integer ID, the DB doesn't give me anything because it's coercing a large chunk of text into a signed integer and truncating it. You don't indicate when it is that DBI is putting quotes around everything, but if what you mean is that values bound to placeholders get quoted, you can suppress that. perldoc DBI shows this information: Data Types for Placeholders The \%attr parameter can be used to hint at the data type the placeholder should have. Typically, the driver is only interested in knowing if the placeholder should be bound as a number or a string. $sth-bind_param(1, $value, { TYPE = SQL_INTEGER }); As a short-cut for the common case, the data type can be passed directly, in place of the \%attr hash reference. This example is equivalent to the one above: $sth-bind_param(1, $value, SQL_INTEGER); The TYPE value indicates the standard (non-driver-specific) type for this parameter. To specify the driver-specific type, the driver may support a driver-specific attribute, such as { ora_type = 97 }. The SQL_INTEGER and other related constants can be imported using use DBI qw(:sql_types); See DBI Constants for more information. Another not-really-a-bug but definitely a pitfall. And it sucks because after not being able to use md5 hashes to index my records using BINARY(16) because binary isn't really binary because it cuts off spaces, I'm losing a digit of my next-best-thing thanks to unsigned integers which are actually signed. 
Don't make me go spend my life savings on Oracle! Here's an example in case you're really bored. The problem below exists because 9358082631434058695 2^63 ##First with no quotes around the large integer: mysql select job_id from wordbarrel_9a where job_id=9358082631434058695; +-+ | job_id | +-+ | 9358082631434058695 | +-+ 1 row in set (0.00 sec) ##Then with quotes: mysql select job_id from wordbarrel_9a where job_id='9358082631434058695'; Empty set (0.00 sec) mysql desc wordbarrel_9a; +--+-+--+-+-+---+ | Field| Type| Null | Key | Default | Extra | +--+-+--+-+-+---+ | job_id | bigint(20) unsigned | | PRI | 0 | | +--+-+--+-+-+---+ -- Mark D. Maunder [EMAIL PROTECTED] http://www.workzoo.com/ The Best jobs from the Best Job Sites. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Performance and indexing for time intervals...
I'm running into some performance problems with a table of time intervals. I'd like to look up the record that covers/overlaps a given instant, and I was hoping that someone might help me out. Consider these tables:

create table items (
  item_id integer auto_increment not null,
  item_name varchar(40),
  primary key (item_id)
);

create table price_intervals (
  item_id integer not null,
  beginning datetime not null,
  end datetime not null,
  price decimal(6,2) not null,
  primary key (item_id, beginning, end)
);

The items table contains a list of 100,000 items. price_intervals contains a list of 1,000,000 item prices and the intervals during which they are valid. The intervals over which they are valid are contiguous and of varying lengths. Given a point in time, I'd like to be able to look up a price for each of the items at that moment. To do this, I'll need to know the record in the price_intervals table that begins most recently before my sample point. Since I need to do this efficiently, I'd ideally like to do it using an index in O(1) time. My initial attempt was the following:

SELECT item_name, price
FROM items
JOIN price_intervals ON (items.item_id = price_intervals.item_id)
WHERE price_intervals.beginning <= '11-01-01T00:00:00'
  AND price_intervals.end > '11-01-01T00:00:00';

This query works, but it only uses the item_id portion of the price_intervals primary key, and it ends up scanning through all of the price_intervals rows for each journey (this sort of makes sense, since the 'less than' and 'greater than' can't be combined on the same index).

(explain plan)
*** 1. row ***
           id: 1
  select_type: SIMPLE
        table: price_intervals
         type: ALL
possible_keys: PRIMARY
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 100
        Extra: Using where
*** 2. row ***
           id: 1
  select_type: SIMPLE
        table: items
         type: eq_ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 4
          ref: test.price_intervals.item_id
         rows: 1
        Extra:

I thought I could optimize it with this query:

SELECT item_name, price
FROM items
JOIN price_intervals ON (items.item_id = price_intervals.item_id)
WHERE price_intervals.beginning = (
  SELECT max(beginning) FROM price_intervals
  WHERE price_intervals.item_id = items.item_id
    AND price_intervals.beginning <= '11-01-01T00:00:00');

It turns out, however, that mysql doesn't seem to use the index on (item_id, beginning) for this query -- I would expect it to roll backward through this index to get the MAX() value -- instead, the subquery visits all of the price_intervals records for the item; this makes it significantly slower than the first.

(explain plan)
*** 1. row ***
           id: 1
  select_type: PRIMARY
        table: items
         type: ALL
possible_keys: PRIMARY
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 10
        Extra:
*** 2. row ***
           id: 1
  select_type: PRIMARY
        table: price_intervals
         type: eq_ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 12
          ref: test.items.item_id,func
         rows: 1
        Extra: Using where
*** 3. row ***
           id: 2
  select_type: DEPENDENT SUBQUERY
        table: price_intervals
         type: ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 4
          ref: test.items.item_id
         rows: 1
        Extra: Using where; Using index

I'm curious how others have handled problems like this -- I know mysql doesn't have much in the way of time interval support, but are there performant workarounds people have developed? In particular, is there a way to get the predecessor of a record in an indexed column in O(1) time? I think this would be what I need to locate the band of a continuous time series that corresponds to a sample. Thanks very much for any insights, Dylan -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: COMPRESS() vs myisampack
myisampack will result in a smaller table. The only downside is that the table becomes read only. On Fri, 03 Dec 2004 09:31:22 +0100, Jacob Friis [EMAIL PROTECTED] wrote: I have a table with lots of text data, and would like to compress this in order to save space. I would either compress the whole table with myisampack or just the text data with COMPRESS(). When I SELECT from this table, would it be equally fast to use UNCOMPRESS() for text data or use a table packed by myisampack? Thanks, Jacob -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- Eric Bergen [EMAIL PROTECTED] bleated.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
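For reference, packing happens offline on the table's files, and the indexes must be rebuilt afterwards. A sketch with a hypothetical data-directory path; make sure the server is not using the table while you run this (FLUSH TABLES WITH READ LOCK, or stop mysqld):

```shell
# Compress the table's data file (the .MYD shrinks; the table becomes read-only)
myisampack /var/lib/mysql/mydb/articles.MYI

# Rebuild the indexes for the packed table
myisamchk -rq /var/lib/mysql/mydb/articles.MYI
```

By contrast, COMPRESS()/UNCOMPRESS() keeps the table writable but pays a per-row function-call cost on every SELECT that touches the text, so the choice is largely write access versus CPU on reads.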
Re: Table Names Begin with '#'
Try enclosing it in backticks like this. `#table` Here is the url for reference: http://dev.mysql.com/doc/mysql/en/Legal_names.html On Fri, 3 Dec 2004 13:37:38 -0600, Daniel Kemper [EMAIL PROTECTED] wrote: Hello, As you may know, the mambo (open source CMS) table names begin with a # character. How does one describe a table that begins with #? -DK Daniel Kemper Lead Programmer Intellisys, Inc. -- Eric Bergen [EMAIL PROTECTED] bleated.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
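A short sketch of the backtick quoting (the table name here is made up for illustration; substitute whatever prefix your Mambo install actually uses):

```sql
-- Backticks let otherwise-illegal characters, such as a leading '#',
-- appear in an identifier:
CREATE TABLE `#mos_example` (id INT NOT NULL);
SELECT * FROM `#mos_example`;
```

Without the backticks the parser treats `#` as the start of a comment, which is why the bare name fails.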
Re: Performance and indexing for time intervals...
In the last episode (Dec 03), FFF FFF said: I'm running into some performance problems with a table of time intervals. I'd like to look up the record that covers/overlaps a given instant, and I was hoping that someone might help me out. Consider these tables: create table items ( item_id integer auto_increment not null, item_name varchar(40), primary key(item_id) ); create table price_intervals ( item_id integer not null, beginning datetime not null, end datetime not null, price decimal(6,2) not null, primary key(item_id, beginning, end) ); Items contains a list of 100,000 items. Price_intervals contains a list of 1,000,000 item prices and the intervals during which they are valid. The intervals over which they are valid are continuous and of varying lengths. Given a point in time, I'd like to be able to look up a price for each of the items at that moment. To do this, I'll need to know the record in the price_intervals table that begins most recently before my sample point. Since I need to do this performantly, I'd ideally like to do it using an index in O(1) time. My initial attempt at this was the following: SELECT item_name, price from items join price_intervals on (items.item_id = price_intervals.item_id) WHERE price_intervals.beginning = '11-01-01T00:00:00' AND price_intervals.end '11-01-01T00:00:00'; This query works, but it only uses the item_id portion of the price_interval primary key, and it ends up scanning through all of the 1,000,000 price_intervals for each journey (this sort of makes sense since the 'less than' and 'greater than' can't be combined on the same index). (explain plan) A regular horizontal explain is easier to read imho. *** 1. row *** id: 1 select_type: SIMPLE table: price_intervals Note that mysql picked price_intervals as the driving table. This means it is going to find the records that match your WHERE clause, than find the items that match as a 2nd step. Try creating an index on (beginning, end). 
Or, try changing your JOIN to a STRAIGHT_JOIN, which will force mysql to use the tables in the order you list them. -- Dan Nelson [EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
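Putting Dan's two suggestions together, a sketch (untested here -- whether the optimizer actually uses the new index still needs to be checked with EXPLAIN on the real data; the index name is made up):

```sql
-- Extra index so the range predicates are not limited to the
-- primary key's item_id prefix:
ALTER TABLE price_intervals ADD INDEX idx_begin_end (beginning, end);

-- STRAIGHT_JOIN forces the listed join order: drive from items,
-- probe price_intervals per item instead of scanning it first.
SELECT item_name, price
FROM items
STRAIGHT_JOIN price_intervals
  ON items.item_id = price_intervals.item_id
WHERE price_intervals.beginning <= '11-01-01T00:00:00'
  AND price_intervals.end       >  '11-01-01T00:00:00';
```

With items driving, each probe can use the (item_id, beginning) prefix of the primary key, which is much closer to the per-item lookup the original poster wanted.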
Re: a query to insert values into two different tables using mySQL Server
Isn't that technically multiple queries? In mysql an insert inserts into one table. That's it. -Eric On Wed, 1 Dec 2004 15:50:15 +0100, ***ADI*** [EMAIL PROTECTED] wrote: in MS SQL u can do it by the following query: declare @transool varchar(20) set @transool = 'opcofficelink' begin transaction @transool insert into Table1 (Sine_20_Sec, Sine_30_Sec) values (@Sine_20_Sec, @Sine_30_Sec) insert into Table2 (Sine_20_Sec, Sine_30_Sec) values (@Sine_20_Sec, @Sine_30_Sec) commit transaction @transool is there any similar query that can be used in mySQL? what i want to get is insert into Table1 and update row in Table2 the same values. i can't use two querries =/ thanx for ANY help, ADI -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- Eric Bergen [EMAIL PROTECTED] bleated.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
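As Eric says, a single INSERT targets one table; the MS SQL batch translates to two statements wrapped in one transaction. A sketch (table and column names are taken from the original post; atomicity requires a transactional engine such as InnoDB, and the literal values stand in for the @-variables):

```sql
START TRANSACTION;
INSERT INTO Table1 (Sine_20_Sec, Sine_30_Sec) VALUES (20.0, 30.0);
INSERT INTO Table2 (Sine_20_Sec, Sine_30_Sec) VALUES (20.0, 30.0);
COMMIT;  -- both rows become visible together; ROLLBACK undoes both
```

It is still two statements on the wire, but the transaction gives the all-or-nothing behaviour the MS SQL version was providing.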
Re: Help me optimize this query
If you change the update to a select, you can use explain to see its execution path just like you would with a normal select. The rows returned are the rows that would be matched by the update query. Example: explain select * from t1, t2 where t1.YYY=t2. and t2. like '%X%'; Optimize the query, then turn it back into an update. -Eric On Wed, 1 Dec 2004 11:38:29 +0530, Manish [EMAIL PROTECTED] wrote: I am trying to execute this query and it is failing with Table is full error (I know I can make temp tables big). update t1, t2 set t1.XXX=1 where t1.YYY=t2. and t2. like '%X%'; My t1 has 10,00,000+ records and t2 has about 70,000 recorsds. I would like to know how can I optimize this query? What are the parmeters for this optimization? Can someone give me links where I can read up about such optimizations for update query. TIA, - Manish -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- Eric Bergen [EMAIL PROTECTED] bleated.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: Cannot GRANT REPLICATION SLAVE
I can grant other privileges, but not REPLICATION SLAVE and REPLICATION CLIENT. How do I fix this? Thank you in advance. I checked my mysql.user table, and I just found out that it didn't have Repl_slave_priv and Repl_client_priv columns. How is this possible?

mysql> select * from mysql.user\G
*** 14. row ***
           Host: 192.168.1.32
           User: backup1
       Password: 2cd93c3e746362cf
    Select_priv: N
    Insert_priv: N
    Update_priv: N
    Delete_priv: N
    Create_priv: N
      Drop_priv: N
    Reload_priv: N
  Shutdown_priv: N
   Process_priv: N
      File_priv: N
     Grant_priv: N
References_priv: N
     Index_priv: N
     Alter_priv: N

Those are all the columns in mysql.user. Please help, and thank you in advance. -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
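The Repl_slave_priv and Repl_client_priv columns were added in the 4.0 grant tables, so a mysql.user table without them usually means the privilege tables date from an older server that was upgraded in place. The standard fix is the upgrade script that ships with the server; a sketch (exact invocation varies by version and distribution, so check the script before running it, and back up the mysql database first):

```shell
# Back up the grant tables
mysqldump -u root -p mysql > mysql_grant_tables_backup.sql

# Add the missing columns to the privilege tables
mysql_fix_privilege_tables root_password

# Make the server reload the updated grant tables
mysqladmin -u root -p flush-privileges
```

After this, the GRANT REPLICATION SLAVE / REPLICATION CLIENT statements should succeed.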
Re: Backup problems
Steve Grosz wrote: I am using the MySql Administrator tool to schedule weekly backups on my databases. I have defined the databases I want backed up and how often, plus where to store the data. I ran a sample, but it appears that just the structure is being backed up, not the data in the tables as well. How do you define this? What am I missing? Maybe there is a checkbox somewhere that you have to check i.e. something labelled export data . Why are you not using mysqldump (http://dev.mysql.com/doc/mysql/en/mysqldump.html ) ? -- Raj Shekhar, System Administrator Media Web India http://www.netphotograph.com -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
Re: General Sql question
Does Name exist as a column in your table, or is it a ColdFusion variable? I know very little about how ColdFusion works, but it does parse the query, and alter it, before it gets sent to ODBC. Just looks like it's using ucase(Name) as a coldfusion function, then replacing it in the query. Just something to look into. You would probably have much better luck with a ColdFusion list Chris Steve Grosz wrote: I am writing this by hand, and is being used within Coldfusion. MySql is v 4.1.7 and I am connecting via ODBC. Steve [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Your sample query is not valid SQL. What tool/language are you using to run this query? There must be something interpreting what you entered and mis-representing your query to the MySQL server. Without that piece of the puzzle I am completely in the dark. It would also help to know what version MySQL server you are using and if you are connecting to your MySQL server through any kind of library (connector/J, ODBC, etc.) which one and what version is it? Shawn Green Database Administrator Unimin Corporation - Spruce Pine Steve Grosz [EMAIL PROTECTED] wrote on 12/03/2004 12:58:30 PM: I am kinda new to SQL, and am having a problem trying to get something done. I'm trying to search for usernames in one of my tables. The name is stored as firstname lastname. I wrote my query as select Cust_ID, Cust_Name from mailings where ucase(Name) = ucase(Cust_Name) When it runs, I get a error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'Grosz = Cust_Name' at line 3 It seems that its only searching for the name after the space between the first and last name. How can I correct this in the query? 
-- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED] -- MySQL General Mailing List For list archives: http://lists.mysql.com/mysql To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
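For completeness, the query works once the name reaches MySQL as a quoted string literal; since `Name` is evidently a ColdFusion variable being interpolated unquoted, the SQL the server should receive looks like this (a sketch -- the name value is the one from the thread's error message):

```sql
-- Compare case-insensitively against a quoted string literal;
-- without the quotes the space splits the name into two tokens,
-- producing the syntax error near 'Grosz = Cust_Name'.
SELECT Cust_ID, Cust_Name
FROM mailings
WHERE UCASE(Cust_Name) = UCASE('Steve Grosz');
```

On the ColdFusion side the equivalent fix is to make sure the variable is emitted inside single quotes (or passed as a bound parameter) rather than spliced into the statement bare.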