Re: Another replication question
OK, I won't post any more! Just reading!

David Yeung, In China, Beijing.
My First Blog: http://yueliangdao0608.cublog.cn
My Second Blog: http://yueliangdao0608.blog.51cto.com
My Msn: yueliangdao0...@gmail.com

On 23 December 2010, 8:14 PM, Johan De Meersman (vegiv...@tuxera.be) wrote:
Glad to hear I'm not the only one annoyed :-) I've plonked him in the meantime.

2010/12/23 Jorg W Young (jorgwyoung...@gmail.com):
This guy has been saying nothing meaningful on this list, but advertises his blog everywhere. It's shameful. He should be kicked off the list. Jorg.

2010/12/23 杨涛涛 (david.y...@actionsky.com):
This approach works well, but it requires a lot of manual work.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=vegiv...@tuxera.be

--
"Beer with grenadine is like mustard with wine: she who drinks it is a prude, he who drinks it is soon a donkey." (signature, translated from Dutch)
Re: MySQL upgrade from 5.0.51a to 5.1.53
Hi all, I did an upgrade from 5.0 to 5.1 yesterday. Here are some of my steps:
1. Use mysqldump to dump all the data to disk.
2. Back up all the data, including the mysql system files, to a safe place.
3. Remove all files related to MySQL.
4. Install the new 5.1 version.
5. Import the existing data into the new version.
I don't know whether the above fits your case or not.

David Yeung, In China, Beijing.

2010/11/30 Machiel Richards (machi...@rdc.co.za):
Hi, thank you for the response. After reading a bit more about this, I suspect you are right. From what I can gather, I need to do the following in order to use this installation method:
- create a full backup of the old database and put it in a safe place.
- create a backup of the my.cnf file in a separate directory (in order to have access to the configuration to replicate details to the new database config).
- use the package manager (apt-get in this case) to remove the current version of MySQL that is running, as well as all config files and the /etc/mysql dir.
- set up the new MySQL version using the tar file.
- start the new server and restore the backup.
- run mysql_upgrade in order to ensure all tables, including the privilege tables, are in order.
Does this seem right to you guys?
Machiel

-----Original Message-----
From: Johan De Meersman (vegiv...@tuxera.be)
To: Machiel Richards (machi...@rdc.co.za)
Cc: mysql mailing list (mysql@lists.mysql.com)
Subject: Re: MySQL upgrade from 5.0.51a to 5.1.53
Date: Tue, 30 Nov 2010 11:29:29 +0100

I suspect you need to have the new version running instead of the old one for the mysql_upgrade script to work.

On Tue, Nov 30, 2010 at 11:23 AM, Machiel Richards (machi...@rdc.co.za) wrote:
Hi guys, I found some info regarding a method to upgrade MySQL databases. Currently the version is mysql-5.0.51a (as installed via the Ubuntu 8.04 repository). I downloaded the only version available from the web (5.1.53) that will work on Ubuntu [except for source].
This is the binary package that needs to be untarred into a directory, after which the database is initialized. The option I was looking at was to do the following (according to some recommendations):
- untar the new version into a separate directory.
- keep the current version running.
- cd to /path/to/new/directory/bin
- ./mysql_upgrade -u root -p
Running this I get a couple of errors/warnings though, and I am hoping that someone can provide me with some answers or otherwise a better way to do the upgrade. Errors received:

Enter password:
Looking for 'mysql' as: ./mysql
Looking for 'mysqlcheck' as: ./mysqlcheck
Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock'
Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock'
machiel.test1                      OK
machiel.test2                      OK
mysql.columns_priv                 OK
mysql.db                           OK
mysql.event                        OK
mysql.func                         OK
mysql.help_category                OK
mysql.help_keyword                 OK
mysql.help_relation                OK
mysql.help_topic                   OK
mysql.host                         OK
mysql.ndb_binlog_index             OK
mysql.plugin                       OK
mysql.proc                         OK
mysql.procs_priv                   OK
mysql.servers                      OK
mysql.tables_priv                  OK
mysql.time_zone                    OK
mysql.time_zone_leap_second        OK
mysql.time_zone_name               OK
mysql.time_zone_transition         OK
mysql.time_zone_transition_type    OK
mysql.user                         OK
Running 'mysql_fix_privilege_tables'...
ERROR 1193 (HY000) at line 69: Unknown system variable 'have_csv'
ERROR 1064 (42000) at line 71: You have an error in your SQL syntax; check the manual
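The dump/reinstall/reload sequence described in the replies above can be sketched as a short shell session. This is a hedged illustration only: the paths, package names, and backup locations are assumptions, not the posters' exact commands.

```sh
# Hedged sketch of the dump / remove / reinstall / reload upgrade path.
mysqldump -u root -p --all-databases > /backup/all-databases.sql   # dump everything
cp -a /var/lib/mysql /backup/datadir-5.0                           # raw datadir backup
cp /etc/mysql/my.cnf /backup/                                      # keep the old config
sudo apt-get remove --purge mysql-server-5.0                       # remove old version
tar xzf mysql-5.1.53-linux-i686-glibc23.tar.gz -C /opt             # unpack new binaries
# ...initialize and start the NEW 5.1 server, then:
mysql -u root -p < /backup/all-databases.sql                       # reload the data
mysql_upgrade -u root -p        # must run against the new, running server
```

Note the last line matches Johan's point in the thread: mysql_upgrade is meant to run against the new server, not the old one.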
Re: 4 days to drop Index
Hi. I think you should upgrade your hardware and adjust your mysqld parameters; then your job should finish in a reasonable time.

David Yeung, In China, Beijing.

2010/12/23 Adarsh Sharma (adarsh.sha...@orkash.com):
Dear all, things become painful when they take too much time. I am suffering from this problem while dropping an index of size 17.7 GB on a table of size 24.7 GB. This table has some more indexes of different sizes. It takes nearly 4 days to drop the index. Can anyone please tell me whether this is normal in MySQL, or whether something needs to be tuned?
Thanks & Regards, Adarsh Sharma
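One thing the thread does not mention: on InnoDB of that era, each DROP INDEX typically rebuilds the whole table, so several separate drops pay the rebuild cost several times. A hedged sketch of combining the changes into a single rebuild (the table and index names are hypothetical):

```sql
-- Combine every index change into ONE table rebuild instead of one
-- rebuild per DROP INDEX (names below are placeholders):
ALTER TABLE big_table
  DROP INDEX idx_huge,
  DROP INDEX idx_other;
```

This does not make a single 17.7 GB index drop fast, but it avoids multiplying the cost when more than one index must go.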
Re: Table cache not being updated
Yeah, just enter the mysql command-line client and type the command 'status'; you'll get the current values.

David Yeung, In China, Beijing.

2010/12/22 Wagner Bianchi (wagnerbianch...@gmail.com):
The response to a new table_cache value is not immediate. MySQL will put new objects into the cache on demand instead of replacing the existing ones all at once. Configure the new table_cache value in my.cnf (or your configuration file), restart mysqld, and keep monitoring. Let some time pass and see what happens.
Best regards.
--
Wagner Bianchi

2010/12/22 杨涛涛 (david.y...@actionsky.com):
How did you adjust this variable? Did you put it in my.cnf or just SET it?

2010/11/24 Machiel Richards (machi...@rdc.co.za):
Hi all, maybe someone can help me with this one. We have set the table_cache to 1024, however the open tables value stays at 64 of 64. Everything I checked stated that the open tables value is related to the table_cache variable. Can someone please explain why the value isn't being updated? The MySQL version is 5.0.51a.
Regards, Machiel
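Besides the `status` command, the variable and the related counters can be checked directly. A hedged sketch using the 5.0/5.1 variable names (in later versions the variable was renamed table_open_cache):

```sql
SHOW VARIABLES LIKE 'table_cache';          -- the configured limit
SHOW GLOBAL STATUS LIKE 'Open_tables';      -- tables currently held open
SHOW GLOBAL STATUS LIKE 'Opened_tables';    -- cumulative opens; if this
                                            -- climbs fast, the cache is small
SET GLOBAL table_cache = 1024;              -- dynamic: takes effect without
                                            -- a restart (but not persistent;
                                            -- also put it in my.cnf)
```

If Open_tables stays at 64, it may simply mean the workload has never needed more than 64 open tables at once; the value grows on demand up to the limit.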
Re: sub query to daily usage subtraction
Hi, I'm confused by your issue. Can you show us in detail what you want?

David Yeung, In China, Beijing.

2010/12/1 Mohan L (l.mohan...@gmail.com):
Dear all, my MySQL table contains 5 columns: id, date, storage, bandwidth_sent, bandwidth_received, like this: http://pastebin.com/ghXWMZ7V. The storage and bandwidth information is accumulated (cumulative). I need to fetch my daily usage from the database, i.e. subtract yesterday's storage from today's storage based on the date; the id stays the same. I am new to writing MySQL queries (learning). I think this can be done using a subquery. Any help will be really appreciated.
Thanks & Rg, Mohan L
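One common way to turn cumulative counters into daily deltas is a self-join on the previous day. A hedged sketch; the table name (usage_log) is an assumption, and it presumes one row per id per day with no gaps:

```sql
-- Daily usage = today's cumulative value minus yesterday's, per id.
SELECT t.id,
       t.`date`,
       t.storage            - y.storage            AS storage_used,
       t.bandwidth_sent     - y.bandwidth_sent     AS sent_used,
       t.bandwidth_received - y.bandwidth_received AS received_used
FROM usage_log AS t
JOIN usage_log AS y
  ON y.id = t.id
 AND y.`date` = t.`date` - INTERVAL 1 DAY;
```

Rows for a day whose previous day is missing simply drop out of the result; a LEFT JOIN would keep them with NULL deltas instead.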
Re: explain shows type = ALL for indexed column
Yeah. A subquery sometimes creates temporary tables, so its performance is worse than a join's. Just avoid it.

David Yeung, In China, Beijing.

2010/12/8 Aaron Turner (synfina...@gmail.com):
Thanks Gavin. Rewriting the query to not use the subselect solved the problem!

On Tue, Dec 7, 2010 at 11:33 AM, Gavin Towey (gto...@ffn.com) wrote:
MySQL often handles subqueries poorly. It's best to rewrite that as a JOIN instead: http://dev.mysql.com/doc/refman/5.1/en/rewriting-subqueries.html
If you have further questions after doing that, show the table structures, the query, and the EXPLAIN output.

--
Aaron Turner
http://synfin.net/  Twitter: @synfinatic
http://tcpreplay.synfin.net/ - Pcap editing and replay tools for Unix & Windows
"Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety." -- Benjamin Franklin
"carpe diem quam minimum credula postero"
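The kind of rewrite Gavin links to can be illustrated like this. The table and column names are hypothetical, not Aaron's actual schema:

```sql
-- Often slow on MySQL 5.0/5.1: an IN subquery, which the optimizer
-- may execute once per outer row.
SELECT * FROM orders
WHERE customer_id IN (SELECT id FROM customers WHERE country = 'US');

-- The equivalent JOIN, which the optimizer handles much better:
SELECT o.*
FROM orders AS o
JOIN customers AS c ON c.id = o.customer_id
WHERE c.country = 'US';
```

The two forms return the same rows as long as customers.id is unique; otherwise the JOIN can duplicate order rows and needs DISTINCT.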
Re: unfortunately truncate tables
I'm sorry to hear that.

David Yeung, In China, Beijing.

2010/12/4 kranthi kiran (kranthikiran@gmail.com):
Hi all, unfortunately somebody truncated 2 tables, and we did not have the binary log (log-bin) enabled for this data. How can we recover that data? I am using the InnoDB engine. Please help, it's urgent. Thanks in advance.
Re: Database procedures
Hi Richards. Here is some of my code; you can change it however you want: http://blog.chinaunix.net/u/29134/showart_1002486.html

David Yeung, In China, Beijing.

2010/12/14 Machiel Richards (machi...@rdc.co.za):
Hi all, just to give you some idea of what I have tried thus far:

mysql> delimiter //
mysql> create procedure select_delete_id (in dt date, out id bigint)
       begin
         select max(id) into id from archive_collections where utc < dt;
       end//
Query OK, 0 rows affected (0.00 sec)
mysql> delimiter ;
mysql> call select_delete_id('2010-12-13 00:00', @delete_id);
Query OK, 0 rows affected (0.00 sec)
mysql> select @delete_id;
+------------+
| @delete_id |
+------------+
| NULL       |
+------------+
1 row in set (0.00 sec)

The @delete_id should however return the value 3823054 and not NULL.
Regards, Machiel

-----Original Message-----
From: Machiel Richards (machi...@rdc.co.za)
To: mysql mailing list (mysql@lists.mysql.com)
Subject: Database procedures
Date: Tue, 14 Dec 2010 08:37:10 +0200

Hi all, I am hoping that someone with more experience than me can assist here. I am trying to find out how to write a database procedure within MySQL, however I have never worked with procedures in my life and the resources I have found on the net thus far seem Greek to me... What we are trying to achieve is the following: there are currently 2 specific tables which contain archived data that is being deleted manually each day. We would like to put the deletion of the data inside a procedure and then have an automated process execute the procedure. What we currently use is the following:
1. select max(id) from table_1 where utc < Date;
2. delete from table_2 where id <= (the id from step 1);
3. delete from table_1 where id <= (the id from step 1);
Basically we currently delete everything older than the start of yesterday, but this might be changing at some point to be done hourly as the system grows.
Does anybody have a good resource which explains exactly how I would be able to create this procedure or can otherwise assist or guide me? I would really appreciate the help as I would love to learn how to write procedures. Regards Machiel
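For what it's worth, the NULL in the test above is consistent with the OUT parameter being named `id`: inside a routine, MySQL resolves an identifier that matches a parameter name to the parameter, so `MAX(id)` takes the MAX of the (initially NULL) parameter rather than of the column. A hedged sketch of a corrected version; only the table name archive_collections and the parameter types come from the thread, the rest is illustrative:

```sql
DELIMITER //
CREATE PROCEDURE select_delete_id (IN dt DATETIME, OUT p_max_id BIGINT)
BEGIN
  -- Renaming the OUT parameter so it no longer shadows the `id` column.
  SELECT MAX(id) INTO p_max_id
  FROM archive_collections
  WHERE utc < dt;
END//
DELIMITER ;

CALL select_delete_id('2010-12-13 00:00:00', @delete_id);
SELECT @delete_id;   -- should now hold the real MAX(id), not NULL
```

Declaring the parameter DATETIME rather than DATE also avoids the silent truncation of the '00:00' time portion in the original call.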
Re: Parameter(s) need to take care when setting replication over WAN
This depends on your production environment.

David Yeung, In China, Beijing.

2010/12/4 Ryan Chan (ryanchan...@gmail.com):
Hello, I am setting up MySQL (5.0) replication over a WAN for backup purposes (with SSL enabled). I have read the document http://dev.mysql.com/doc/refman/5.0/en/replication.html and wonder if any special tuning is needed when using replication over a WAN. Are there any configuration parameters that need special handling? What I know of so far:
- --master-connect-retry=60
- --master-retry-count=86400
Any others? Thanks.
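A hedged sketch of slave-side settings that are often looked at for WAN links, using MySQL 5.0 option names; the values are purely illustrative, not recommendations for any particular environment:

```ini
[mysqld]
master-connect-retry      = 60      # seconds between reconnect attempts
master-retry-count        = 86400   # give up after this many attempts
slave-net-timeout         = 60      # detect a silently dropped link sooner
slave_compressed_protocol = 1       # compress binlog traffic over the WAN
```

slave_compressed_protocol trades CPU for bandwidth, which is usually the right trade on a slow or metered WAN link.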
Re: MyQuery 3.4.1 Released
It's great!

David Yeung, In China, Beijing.

2010/11/29 Anders Karlsson (and...@recordedfuture.com):
I have released MyQuery 3.4.1 today. MyQuery is a Windows-based ad-hoc query tool with some interesting features:
- Colour-coded syntax, based on Scintilla
- Code folding, based on Scintilla
- Ability to start a script run inside the script
- Support for error/stop/continue script editing
- Highly configurable, with user-defined tools and many other features
- Powerful plugin API
And a bunch more things. MyQuery is completely free and open source. Version 3.4.1 is a minor bugfix version; the main thing fixed is a bug that caused issues running MyQuery without admin rights on Windows 7. Read more on the blog here: http://karlssonondatabases.blogspot.com/2010/11/announcement-myquery-341-released.html
Or just go ahead and download it from here: http://sourceforge.net/projects/myquery/
Best regards, Anders Karlsson
Re: SQLStats 1.1 available
I cannot visit this page in China: http://karlssonondatabases.blogspot.com/2010/11/monitoring-mysql-sql-statement-way-it.html

David Yeung, In China, Beijing.

2010/11/24 Anders Karlsson (and...@recordedfuture.com):
If you downloaded this earlier today, note that there is now a 1.2 version available that also shows ROWS_EXAMINED in the INFORMATION_SCHEMA tables. /Karlsson

On 2010-11-24 09:02, Claudio Nanni wrote:
Cool!

On Nov 24, 2010 8:46 AM, Anders Karlsson (and...@recordedfuture.com) wrote:
SQLStats is a MySQL 5.5 plugin that allows monitoring MySQL SQL statements in real time, without any proxies, source code modifications, different connectors or anything. Read more on my blog here: http://karlssonondatabases.blogspot.com/2010/11/monitoring-mysql-sql-statement-way-it.html
The plugin is GPL and is downloadable from SourceForge here: https://sourceforge.net/projects/sqlstats/
Best regards, Anders Karlsson
Re: INSERT DELAYED and logging
Hi. I think if you don't have many concurrent writers, you should not use it. Otherwise, go ahead.

David Yeung, In China, Beijing.

2010/12/1 Wagner Bianchi (wagnerbianch...@gmail.com):
I'll provide it too, bear with me, please... Best regards. -- WB

2010/11/30 Johan De Meersman (vegiv...@tuxera.be):
Interesting, but I feel the difference is rather small; could you rerun with, say, 50,000 queries? Also, different concurrency levels (1, 100) might be interesting to see. Yes, I'm too lazy to do it myself, what did you think :-p

On Tue, Nov 30, 2010 at 4:01 PM, Wagner Bianchi wrote:
Friends, I did a benchmark regarding this subject. Please, I'd welcome your comments: http://wbianchi.wordpress.com/2010/11/30/insert-x-insert-delayed/
Best regards. -- WB

2010/11/30 Wagner Bianchi:
Maybe the relevant question is whether the table is currently in the cache (SHOW OPEN TABLES, controlled by table_cache, I mean). Well, if the amount of data per transaction is as small as a simple INSERT, I suggest you don't need to worry. If you partition the table, we'd need a benchmark to compare inserting into compressed Archive storage against inserting into a partitioned table.
Best regards. -- WB

2010/11/30 Johan De Meersman:
I would assume that it's slower because it gets put on the delay thread anyway, and thus executes only whenever that thread gets some attention. I'm not sure whether there are other influencing factors. I should also think that "not in use" in this context means "not locked against inserts", so the MyISAM insert-while-selecting at the end of a contiguous table may well apply. No guarantees, though; I'm not that hot on this depth.
On Tue, Nov 30, 2010 at 8:46 AM, WLGades (wlga...@gmail.com) wrote:
What I'm confused by, though, is this line: "Note that INSERT DELAYED is slower than a normal INSERT if the table is not otherwise in use." What's the definition of "in use"? Does a logging table meet it, given that it's pretty much append-only/write-only? Waynn

On Mon, Nov 29, 2010 at 10:19 PM, Johan De Meersman (vegiv...@tuxera.be) wrote:
No, I think it's a good idea to do INSERT DELAYED here; it's only a logging application, and it's generally more important not to slow the application down for that. It's only ever into a single table, so there's only going to be a single delay thread for it anyway. Archive tables are a good idea, agreed, but I suspect that inserts into them are going to be slower than into regular MyISAM because of the compression, so why not use that overhead to (slightly) speed up your end-user experience instead? You can always partition the table based on the log date or whatever, if your table risks getting too big.

On Tue, Nov 30, 2010 at 1:03 AM, Wagner Bianchi wrote:
Well, consider whether you need to create extra overhead in the MySQL server for a simple INSERT. What you must look at is:
- How much data is this connection delivering to MySQL's handlers?
- Is the word DELAYED in this case making MySQL suffer?
Perhaps you are over-engineering something that does not need it. Besides that, looking at your log table, I imagine this table could be an Archive table instead of MyISAM. Log tables or history tables can be handled by the Archive storage engine for better-compressed data, although the Archive storage engine only supports SELECT and INSERT. Maybe a good option for you, to get rid of your INSERT DELAYED:
- ALTER TABLE tbl_name ENGINE = ARCHIVE;
Best regards. -- WB

2010/11/29 WLGades (wlga...@gmail.com):
I'm adding a table to our site that logs all page loads. In the past, when I built this, I used MyISAM and INSERT DELAYED.
I went back to look at the documentation to see if I should still do this, and saw this (taken from http://dev.mysql.com/doc/refman/5.1/en/insert-delayed.html): Note that INSERT DELAYED is slower than a normal INSERT if the table is not otherwise in use. There is also the additional overhead for the server to handle a separate thread for each table for which there are delayed rows. This means that you should use INSERT DELAYED only when you are really sure that you need it. Does that mean that I shouldn't use it if all I'm doing is INSERT (essentially an append-only table), with only very occasional SELECTs? In addition, the last time I took this approach for logging, it worked well until the table got to
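The two routes discussed in this thread can be sketched side by side. A hedged illustration; the table name page_log and its columns are assumptions:

```sql
-- 1) MyISAM log table with delayed inserts (MySQL 5.1 syntax; note
--    that INSERT DELAYED was deprecated in later versions and removed
--    in 5.7):
INSERT DELAYED INTO page_log (url, hit_time) VALUES ('/index', NOW());

-- 2) Or convert the append-only log to the Archive engine, which
--    compresses rows and supports only INSERT and SELECT:
ALTER TABLE page_log ENGINE = ARCHIVE;
```

With option 1 the client returns immediately and a delay thread batches the writes; with option 2 ordinary INSERTs are used and the engine trades some insert speed for much smaller on-disk size.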
Re: Another replication question
This approach works well, but it requires a lot of manual work.

David Yeung, In China, Beijing.

2010/11/24 Rolando Edwards (redwa...@logicworks.net):
MySQL, by design, cannot do that. A DB server can be master to multiple slaves, but a slave cannot read from multiple masters. Think of the CHANGE MASTER TO command: its internal design cannot accommodate reading from more than one set of relay logs. You could attempt something convoluted, like:
1) STOP SLAVE;
2) CHANGE MASTER TO Master 1
3) START SLAVE;
4) Read and process some binary log transactions; wait till you are zero seconds behind the master
5) STOP SLAVE;
6) CHANGE MASTER TO Master 2
7) START SLAVE;
8) Read and process some binary log transactions; wait till you are zero seconds behind the master
9) Repeat from step 1
Make sure each master is updating only one specific set of databases, mutually exclusive from the other masters. Make sure you properly record the log file and log position from each master. I would never try this under normal circumstances. I think this was described in the High Performance MySQL book: http://www.amazon.com/dp/0596101716?tag=xaprb-20

Rolando A. Edwards
MySQL DBA (CMDBA)
155 Avenue of the Americas, Fifth Floor, New York, NY 10013
212-625-5307 (Work) / 201-660-3221 (Cell)
AIM & Skype: RolandoLogicWorx
redwa...@logicworks.net
http://www.linkedin.com/in/rolandoedwards

-----Original Message-----
From: Machiel Richards [mailto:machi...@rdc.co.za]
Sent: Wednesday, November 24, 2010 7:20 AM
To: mysql mailing list
Subject: Another replication question

Hi all, I am back once again with another replication question (maybe this can also be handled by MMM, but I'm not sure), this time for a different client. We are trying to find out how to set up 3 different masters replicating to a single slave server (without the need to run 3 different instances on the slave machine). Does anybody have any ideas? Any ideas will be greatly appreciated.
Regards Machiel
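Rolando's round-robin workaround can be sketched in SQL. This is a hedged illustration only: the host name, log file, and position are placeholders, and the recorded coordinates for each master must be substituted on every rotation.

```sql
-- One rotation step of the round-robin "multi-source" workaround:
STOP SLAVE;
CHANGE MASTER TO
    MASTER_HOST     = 'master1.example.com',     -- placeholder host
    MASTER_LOG_FILE = 'mysql-bin.000042',        -- last recorded file
    MASTER_LOG_POS  = 107;                       -- last recorded position
START SLAVE;
-- ...wait until SHOW SLAVE STATUS reports Seconds_Behind_Master = 0,
-- record the new Relay_Master_Log_File / Exec_Master_Log_Pos, then
-- STOP SLAVE and repeat with master2's coordinates.
```

As the thread warns, this only stays consistent if each master writes a mutually exclusive set of databases and the coordinates are recorded faithfully at every switch.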
Re: Discontinued AUTO_INCREMENT problem....
Hi. Can you show us your SHOW CREATE TABLE statement as well?

杨涛 (David Yeung)
My Blog 1: http://yueliangdao0608.cublog.cn
My Blog 2: http://yueliangdao0608.blog.51cto.com

2010/12/20 Xavier Correyeur (x.correy...@free.fr):
Hi everybody! I have a discontinued AUTO_INCREMENT sequence when I insert data into a table with a SELECT request of 100 (or more) items. The problem (or situation) is reproducible; you can see an example below. Could anybody explain this to me? Cheers, XC
My MySQL version: Ver 14.14 Distrib 5.1.41, for debian-linux-gnu (i486) using readline 6.1

== Example ==
-- CREATE test table
mysql> create table test(`id` int(10) unsigned NOT NULL AUTO_INCREMENT, `name` VARCHAR(255), `test` int(10), KEY `keyid`(`id`)) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.00 sec)

-- INSERT DATA FROM ANOTHER TABLE
mysql> insert into test(name) select `name` from user limit 100;
Query OK, 100 rows affected (0.01 sec)
Records: 100  Duplicates: 0  Warnings: 0

-- AUTO_INCREMENT ID CHECK => OK
mysql> select max(`id`) from test;
+-----------+
| max(`id`) |
+-----------+
|       100 |
+-----------+
1 row in set (0.00 sec)

-- INSERT DATA WITH THE SAME SELECT REQUEST => DATA INSERT OK
mysql> insert into test(name) select `name` from user limit 100;
Query OK, 100 rows affected (0.01 sec)
Records: 100  Duplicates: 0  Warnings: 0

-- AUTO_INCREMENT ID CHECK => should be 100 + 100 = 200
-- => 27 IDs are unset; the first ID of the 2nd insert is 128 instead of 101
-- No row between 100 and 128
mysql> select max(`id`) from test;
+-----------+
| max(`id`) |
+-----------+
|       227 |
+-----------+
1 row in set (0.00 sec)
== End Example ==
Re: mysqldumpslow
Yeah, it can only parse a local file. If you need to analyze a remote one, just download it first.

杨涛 (David Yeung)

2010/12/19 Eric Bergen (eric.ber...@gmail.com):
I don't think so. mysqldumpslow parses the log file on the local server.

On Fri, Dec 17, 2010 at 3:01 AM, partha sarathy (par...@mafiree.com) wrote:
Is it possible to run mysqldumpslow on a remote host? -Partha

--
Eric Bergen
eric.ber...@gmail.com
http://www.ebergen.net
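The "download it first" workflow can be sketched in two commands. The host name and log path are placeholders; the mysqldumpslow options shown (-s for sort order, -t for top N) are standard:

```sh
# Copy the remote slow log locally, then summarize it.
scp dbhost:/var/log/mysql/mysql-slow.log /tmp/
mysqldumpslow -s t -t 10 /tmp/mysql-slow.log   # top 10 queries by time
```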
Re: max allowed packets on Mac
I agree with Michael Dykman.

杨涛 (David Yeung)

2010/12/18 Michael Dykman (mdyk...@gmail.com):
I'm not nuts about deploying MySQL on Macs, but I think you need to become root. As your Mac admin user, try:
$ sudo su -
and give your admin password when asked. You should now be root and can expect permissions to behave as you would expect.
- michael dykman

On Fri, Dec 17, 2010 at 12:16 PM, g...@noiseunit.com wrote:
Hi, I found this page on the wiki regarding max_allowed_packet: http://dev.mysql.com/doc/refman/5.0/en/packet-too-large.html
However, on my Mac OS X, when I try to run the following from a terminal window, I get "Permission denied":
shell> mysqld --max_allowed_packet=16M
And if I try to run it with sudo, I again get "Permission denied", even though I am logged in as an admin on my machine. So I tried to chmod 777 mysqld, but again received an error: "chmod: Unable to change file mode on mysqld: Operation not permitted". Any help on how I can rectify this would be appreciated.
Thanks, Greg
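Rather than launching mysqld by hand, the usual way to make this setting stick is in the server configuration file. A hedged sketch; the config file location varies by installation (commonly /etc/my.cnf on OS X):

```ini
[mysqld]
max_allowed_packet = 16M
```

After adding this, restart the MySQL server through whatever mechanism started it (launchd, the preference pane, or mysqld_safe), instead of invoking the mysqld binary directly.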
Re: Kill DELETE Query
Yeah. DELETE FROM tablename removes the records one by one, so you can cancel it whenever you no longer need it to continue.

David Yeung, In China, Beijing.

2010/12/17 Willy Mularto (sangpr...@gmail.com):
Thanks for the confirmation.
sangprabv (sangpr...@gmail.com)
http://www.petitiononline.com/froyo/

On Dec 17, 2010, at 1:31 PM, Ananda Kumar wrote:
No, it will not.

On Fri, Dec 17, 2010 at 11:26 AM, Willy Mularto wrote:
Thanks for the reply. I used the non-stored-procedure approach. Another question: if I kill the process, will it crash the table? Thanks.

On Dec 17, 2010, at 12:06 PM, Ananda Kumar wrote:
If you have used a stored procedure to delete the rows, committing frequently, then the kill will happen faster. If you have just used DELETE FROM table_name WHERE condition, then it can take too much time to roll back all the deleted-but-not-committed rows.
Regards, anandkl

On Fri, Dec 17, 2010 at 8:37 AM, Willy Mularto wrote:
Hi list, I ran a delete query to delete around 1 million rows in an InnoDB table. It's been hours and it still hasn't finished. Is it safe to kill that delete query process while the table is also inserting and updating other rows? Thanks.
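Ananda's point about frequent commits is often applied by deleting in small batches, so that each transaction (and any rollback after a kill) stays short. A hedged sketch; the table name and condition are placeholders:

```sql
-- Delete in chunks instead of one giant transaction:
DELETE FROM big_table
WHERE created < '2010-01-01'
LIMIT 10000;
-- Re-issue this statement (e.g. from a loop in a script) until it
-- affects 0 rows. Killing the process mid-run then loses at most
-- one 10,000-row batch of work, not hours of it.
```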
Re: MySQL Parallel Inserts
Hi. MySQL parses each SQL statement in a single thread per connection, so parallel INSERT statements will not necessarily be faster than before.

David Yeung, In China, Beijing.

2010/12/17 Andy (listan...@gmail.com):
Shawn/Krishna, thank you. I will try this. -Andy

On Tue, Dec 14, 2010 at 8:13 AM, Krishna Chandra Prajapati (prajapat...@gmail.com) wrote:
Hi Andy, I agree; prefer the LOAD DATA INFILE ... command for bulk loading. It is 30% faster than normal inserts. Krishna

On Tue, Dec 14, 2010 at 8:02 AM, Andy wrote:
Greetings everyone. I am in a situation where I need to do parallel inserts into a MySQL database from inside my Perl program. Basically, I have several million records to insert into the database, and hence I would rather do them in parallel than one at a time. I looked around but did not find any information on doing this. Does MySQL not support parallel reads/writes? Thanks in advance. Andy
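The bulk-load route Krishna suggests looks like this. A hedged sketch: the file path, table name, and column list are assumptions; LOCAL makes the client (here, the Perl program's host) read the file rather than the server:

```sql
LOAD DATA LOCAL INFILE '/tmp/records.csv'
INTO TABLE records
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(id, name, value);
```

For millions of rows this single statement generally beats issuing millions of individual INSERTs, parallel or not, because parsing and index maintenance are amortized over the whole batch.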
Re: PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
Maybe you should give MySQL's root user a password; then I think this notice will disappear.

David Yeung, In China, Beijing.

2010/12/17 Yves Goergen (nospam.l...@unclassified.de):
On 14.12.2010 20:36 CE(S)T, Alejandro Bednarik wrote:
> Are you using the Ubuntu debs or the MySQL binaries? Do you get that message when the service starts? Look at the init script to see what it does.

I get the message when MySQL is started. I've installed the standard Ubuntu package mysql-server-5.1. It already asked for a root password during package configuration, and I did enter a password there. I just tried to grep my whole filesystem for parts of this message, but either grep failed allocating memory (there's plenty left!) or it didn't find the text in a file I could do anything with. So I still don't know where the message comes from!
--
Yves Goergen "LonelyPixel" (nospam.l...@unclassified.de)
Visit my web laboratory at http://beta.unclassified.de
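For completeness, setting the root password can be done either from SQL or from the shell. A hedged sketch using MySQL 5.1 syntax (the password string is obviously a placeholder):

```sql
-- From a mysql session:
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('new-secret');
-- Or equivalently, from the shell:
--   mysqladmin -u root password 'new-secret'
```

Note that 5.x commonly creates several root accounts ('root'@'localhost', 'root'@'127.0.0.1', 'root'@'<hostname>'); the startup warning may persist until all of them have passwords.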
Re: Discontinued AUTO_INCREMENT problem....
Hi. This is a good pointer for this issue.

David Yeung, In China, Beijing.

2010/12/21 partha sarathy (par...@mafiree.com):
Hi, there is a variable called innodb_autoinc_lock_mode. If its value is 0, this issue won't occur. You might have it set to 1 or 2.
-Partha
www.mafiree.com

----- Original Message -----
From: Wagner Bianchi (wagnerbianch...@gmail.com)
To: 杨涛涛 (david.y...@actionsky.com)
Cc: Xavier Correyeur (x.correy...@free.fr); mysql@lists.mysql.com
Sent: Tue, 21 December, 2010 3:28:00 PM
Subject: Re: Discontinued AUTO_INCREMENT problem

Too curious... could you share a SHOW CREATE TABLE from this table, as requested before?
Best regards.
--
Wagner Bianchi

2010/12/21 杨涛涛 (david.y...@actionsky.com):
Hi. Can you show us your SHOW CREATE TABLE statement as well?

2010/12/20 Xavier Correyeur (x.correy...@free.fr):
Hi everybody! I have a discontinued AUTO_INCREMENT sequence when I insert data into a table with a SELECT request of 100 (or more) items. The problem is reproducible; you can see the example in my original post. Could anybody explain this to me?
Cheers, XC
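Partha's explanation fits the reported gaps: in the default "consecutive" lock mode, InnoDB cannot know in advance how many rows an INSERT ... SELECT will produce, so it reserves auto-increment values in power-of-two chunks and discards the unused ones. A hedged sketch of checking and changing the mode (MySQL 5.1.22 and later):

```sql
SHOW VARIABLES LIKE 'innodb_autoinc_lock_mode';
-- 0 = traditional: table-level AUTO-INC lock, no gaps from
--     INSERT ... SELECT
-- 1 = consecutive (the default): bulk inserts may leave gaps
-- 2 = interleaved: fastest, but IDs may interleave across statements
```

The variable is read-only at runtime, so to get mode 0 it must be set in my.cnf (innodb_autoinc_lock_mode = 0) and mysqld restarted. For most applications the gaps are harmless: AUTO_INCREMENT guarantees uniqueness, not a gap-free sequence.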
Re: Negative connection/thread IDs in mysqld.log?
Hi, The negative IDs are fine. You should read the documentation's section on data types, especially INT (note the signed range). David Yeung, In China, Beijing. My First Blog:http://yueliangdao0608.cublog.cn My Second Blog:http://yueliangdao0608.blog.51cto.com My Msn: yueliangdao0...@gmail.com 2010/12/15 Atle Veka at...@flyingcroc.net Hi, I tried searching for an answer to this to no avail. I noticed my mysqld.log contains negative connection/thread IDs. mysqld itself shows positive IDs as expected. Some sort of wrap? /usr/sbin/mysqld, Version: 5.0.32. started with: Tcp port: 3306 Unix socket: /tmp/mysql.sock Time Id Command Argument 101212 8:00:01 -2049127301 Connect ... Regards, Atle
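David's pointer to the INT data type is the likely explanation: the connection ID is an unsigned 32-bit value, and the log writer appears to print it through a signed 32-bit format, so IDs above 2^31-1 come out negative. Reinterpreting Atle's logged value shows the wrap (a small illustration; that the log uses a signed 32-bit format is my assumption):

```python
import struct

def as_unsigned32(signed_id):
    """Reinterpret a signed 32-bit value as the unsigned value it wrapped from."""
    return struct.unpack("<I", struct.pack("<i", signed_id))[0]

# The ID from Atle's mysqld.log:
print(as_unsigned32(-2049127301))  # -> 2245839995, the actual connection ID
```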
Re: MySql - crashes daily
Hi. Can you show me some of your critical system logs? David Yeung, In China, Beijing. My First Blog:http://yueliangdao0608.cublog.cn My Second Blog:http://yueliangdao0608.blog.51cto.com My Msn: yueliangdao0...@gmail.com 2010/12/7 Raj Shekhar rajl...@rajshekhar.net In infinite wisdom Basil Daoust bdao...@lemonfree.com wrote: Any help greatly appreciated. Our mysql is restarting a LOT! Here is what is in the log; it doesn't seem to give much help. Do you have the core dump from this crash? If yes, can you load the core dump into gdb and run bt (backtrace) and see what it produces? Check this link to see what steps are required: http://ronaldbradford.com/blog/how-to-crash-mysqld-intentionally-2010-03-05/ -- Raj Shekhar - If there's anything more important than my ego around, I want it caught and shot now. - Read the latest at my blog: Humor in the bookstore http://rajshekhar.net/blog/archives/391-Humor-in-the-bookstore.html
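Raj's suggestion, spelled out: once you have a core file, the backtrace takes only a couple of commands. The core file path below is illustrative; yours depends on your datadir and core_pattern, and mysqld must have been allowed to dump core in the first place:

```shell
# Allow mysqld to dump core (in my.cnf, under [mysqld]: core-file),
# then after a crash, load the core against the exact binary that crashed:
gdb /usr/sbin/mysqld /var/lib/mysql/core.12345   # core path is an example

# At the (gdb) prompt:
#   bt                    # backtrace of the crashing thread
#   thread apply all bt   # backtraces for every thread, often more useful for mysqld
```

Posting the bt output along with the matching error-log excerpt will make it much easier for the list to identify the crash.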
Re: MySQL restore failing
Hi, I think you didn't run the mysql_upgrade script. David Yeung, In China, Beijing. My First Blog:http://yueliangdao0608.cublog.cn My Second Blog:http://yueliangdao0608.blog.51cto.com My Msn: yueliangdao0...@gmail.com 2010/12/3 Shawn Green (MySQL) shawn.l.gr...@oracle.com On 12/3/2010 03:40, Machiel Richards wrote: I checked now and saw that they had already attempted a restore previously, and hence the original table was dropped and recreated. I found some links on the internet stating that after restoring the dump file to the new version, the proc table should be dumped using the new version and reloaded. ... snip ... Any ideas? The easiest way to migrate between major versions is to dump the logical contents of the system data tables (the entire `mysql` database) separately from the rest of the data. That data changes very slowly, so there is no risk of it being out of sync with the rest of the data. For example, instead of dumping the user tables (user, db, privs-table, etc...) and restoring them as raw data on the new system, you should capture the SHOW GRANTS report for each of your users. http://dev.mysql.com/doc/refman/5.1/en/show-grants.html Instead of dumping the raw data in the `proc` table, use the --routines option of mysqldump to write out the stored procedures as SQL statements. http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_routines Did you remember to dump your triggers (which are database-specific, and included via the --triggers option, enabled by default) and your scheduled events (the --events option) so that they were recreated with your production data tables? The tables in the `mysql` database can and usually do change sizes and definitions between major versions. The mysql_upgrade utility will modify the table definitions to match the current version after you restore your old-version tables, but if you want to avoid that step, you can use the techniques above. -- Shawn Green MySQL Principal Technical Support Engineer Oracle USA, Inc. 
Office: Blountville, TN
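Shawn's approach, condensed into commands (run with the old server still up, before the upgrade; the database names and accounts below are placeholders for your own):

```shell
# Dump application schemas as SQL, including stored routines; triggers are
# included by default. Skip the mysql system database entirely.
# (--events applies only from 5.1 on; a 5.0 server has no scheduled events.)
mysqldump -u root -p --routines --databases app_db1 app_db2 > appdata.sql

# Capture privileges as SHOW GRANTS statements instead of raw mysql.* rows:
mysql -u root -p -N -B -e \
  "SELECT CONCAT('SHOW GRANTS FOR ''', user, '''@''', host, ''';') FROM mysql.user" \
  | mysql -u root -p -N -B > grants.sql
```

After importing appdata.sql into the new 5.1 server, replay grants.sql (note that SHOW GRANTS output lacks trailing semicolons, so add them before replaying), and the system tables never need to be carried across versions at all.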