Re: best way to copy an InnoDB table

2013-07-02 Thread Arjun
Well, the easy way to chunk the inserts is to use LIMIT. Here is what I used for one of my projects: INSERT IGNORE INTO t1 (f1, f2, f3) SELECT f1, f2, f3 FROM t2 LIMIT 1000000, 1000000. This inserts 1M records at a time starting from the 1M-th record in t2, and you can keep incrementing this offset as
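The pattern described can be sketched as follows (table and column names taken from the post; whether INSERT IGNORE is appropriate depends on your key layout):

```sql
-- Chunked copy using LIMIT offset, row_count.
-- First pass: rows 1 .. 1M
INSERT IGNORE INTO t1 (f1, f2, f3)
SELECT f1, f2, f3 FROM t2 LIMIT 0, 1000000;

-- Second pass: rows 1M+1 .. 2M; keep advancing the offset each pass.
INSERT IGNORE INTO t1 (f1, f2, f3)
SELECT f1, f2, f3 FROM t2 LIMIT 1000000, 1000000;
```

Note that without an ORDER BY, MySQL does not guarantee a stable row order between passes, so the chunks are not guaranteed to be distinct.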

Get Affected Rows after Stored Procedure COMMIT

2013-07-02 Thread Neil Tompkins
Hi, I have a number of INSERT and UPDATE statements in a MySQL Stored Procedure, which run in the form of START TRANSACTION followed by COMMIT. I am also handling any EXCEPTION. However, after calling COMMIT, how can I get the number of rows that were affected, either INSERTed or UPDATEd?
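One common approach, sketched below with invented table and variable names: ROW_COUNT() reports only the most recent statement (and COMMIT itself resets it), so accumulate it after each INSERT/UPDATE inside the procedure.

```sql
-- Hedged sketch: t, c, and out_affected are placeholders.
CREATE PROCEDURE demo(OUT out_affected INT)
BEGIN
  DECLARE total INT DEFAULT 0;
  START TRANSACTION;
  INSERT INTO t (c) VALUES (1);
  SET total = total + ROW_COUNT();   -- rows inserted by the statement above
  UPDATE t SET c = c + 1;
  SET total = total + ROW_COUNT();   -- rows changed by the UPDATE
  COMMIT;
  SET out_affected = total;          -- read this after COMMIT
END
```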

Full text search and & sign as a part of the keyword

2013-07-02 Thread lamp
Hi to all, I have this full text search query: SELECT name, org_id, address_id FROM organization WHERE org_active='Y' AND MATCH(name) AGAINST('ABC*' IN BOOLEAN MODE) and I'm not getting any results. And there IS an org ABC, Inc. My assumption is the ampersand sign as a part of the

Re: best way to copy an InnoDB table

2013-07-02 Thread Michael Dykman
Another technique to avoid impact on the source database is to create your target as MyISAM, pump your records into that (no ACID overhead), and at the end: ALTER TABLE mytable ENGINE=InnoDB. The ALTER can take a while, but it will impose no strain on the source server at all. On Tue, Jul 2, 2013
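The staging technique described above, as a sketch (table name is a placeholder):

```sql
-- Stage into MyISAM, then convert; avoids InnoDB transactional
-- overhead during the bulk load.
CREATE TABLE mytable_copy LIKE mytable;
ALTER TABLE mytable_copy ENGINE=MyISAM;       -- cheap writes while loading
INSERT INTO mytable_copy SELECT * FROM mytable;
ALTER TABLE mytable_copy ENGINE=InnoDB;       -- one-time conversion at the end
```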

MySQL resource limits.

2013-07-02 Thread Rafał Radecki
Hi All. I would like to limit resources available to a given user in MySQL. I know that there is https://dev.mysql.com/doc/refman/5.5/en/user-resources.html, and I also know that cgroups can be used at the operating system level. What are your experiences in limiting resources in MySQL? I've user
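For per-account limits, the manual page linked above documents GRANT ... WITH resource options; a sketch (account name, schema, and numbers are illustrative):

```sql
-- MySQL 5.5-era per-account resource limits.
GRANT ALL ON app.* TO 'reporting'@'localhost'
  WITH MAX_QUERIES_PER_HOUR 1000
       MAX_UPDATES_PER_HOUR 100
       MAX_CONNECTIONS_PER_HOUR 50
       MAX_USER_CONNECTIONS 5;
```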

Re: Full text search and & sign as a part of the keyword

2013-07-02 Thread shawn green
Hello, (my response is not top-posted) On 7/2/2013 12:50 PM, l...@afan.net wrote: Another correction: Searching for Com*, the test org is NOT gonna be listed but all others will. Searching for Com& no results at all. Actually, looks like I'm wrong. For testing purpose I made an

Re: Full text search and & sign as a part of the keyword

2013-07-02 Thread lamp
Another correction: Searching for Com*, the test org is NOT gonna be listed but all others will. Searching for Com& no results at all. Actually, looks like I'm wrong. For testing purpose I made an org Comp&Me. When search for Comp it's gonna be shown on the list. When search

Re: best way to copy an InnoDB table

2013-07-02 Thread Perrin Harkins
On Thu, Dec 1, 2011 at 5:54 PM, Miguel Angel Nieto miguel.ni...@percona.com wrote: You should check pt-archiver. +1. It works very well for this type of job. - Perrin
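A minimal pt-archiver invocation for this kind of table copy might look like the following (hosts, schema, and table are placeholders; check the Percona Toolkit documentation for your version):

```sh
# Copy rows in small committed batches without deleting from the source.
pt-archiver \
  --source h=src_host,D=mydb,t=mytable \
  --dest   h=dst_host,D=mydb,t=mytable \
  --where  '1=1' \
  --limit 1000 --commit-each --no-delete --statistics
```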

RE: MySQL resource limits.

2013-07-02 Thread Rick James
cgroups won't work for individual MySQL users, only for mysqld as a whole. Monitor the slow log and help the naughty users fix their naughty queries.
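Capturing those "naughty queries" is a matter of enabling the slow query log; a my.cnf fragment using 5.5-era option names (thresholds and path are illustrative):

```ini
[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 2                   # seconds; queries slower than this are logged
log_queries_not_using_indexes = 1     # also catch full scans
```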

RE: Full text search and & sign as a part of the keyword

2013-07-02 Thread Rick James
FULLTEXT (at least the MyISAM version) has 3 gotchas: ft_min_word_len=4, stopwords, and the 50% rule.
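A hedged illustration of working around these gotchas, using the organization table from the original question: ft_min_word_len is a server variable (my.cnf: ft_min_word_len=3), and lowering it requires a server restart plus a FULLTEXT index rebuild; the 50% rule applies only to natural-language mode.

```sql
-- After changing ft_min_word_len and restarting, rebuild the MyISAM
-- FULLTEXT index:
REPAIR TABLE organization QUICK;

-- BOOLEAN MODE is exempt from the 50% rule; the truncation operator
-- lets a short prefix match longer indexed words:
SELECT name FROM organization
WHERE MATCH(name) AGAINST('+Com*' IN BOOLEAN MODE);
```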

Re: Get Affected Rows after Stored Procedure COMMIT

2013-07-02 Thread hsv
2013/07/02 12:29 +0100, Neil Tompkins I have a number of INSERT and UPDATE statements in a MySQL Stored Procedure, that works in the form of START TRANSACTION followed by COMMIT. Also I am handling any EXCEPTION. However, after calling COMMIT, how can I get the number of Rows that were

Re: Full text search and & sign as a part of the keyword

2013-07-02 Thread lamp
Actually, looks like I'm wrong. For testing purpose I made an org Comp&Me. When search for Comp it's gonna be shown on the list. When search for Comp* it's also gonna be shown. But Construction Company as well. Then I changed the name of the test org to Com&Me. Searching for Com*, the test org is

Re: database performance worries

2013-07-02 Thread Reindl Harald
On 02.07.2013 23:28, Andy Wallace wrote: mysql update agent set number_of_emails = 5 where acnt = 'AR287416'; Query OK, 1 row affected (36.35 sec) Rows matched: 1 Changed: 1 Warnings: 0 36 seconds to update one table? The primary key is `acnt`. If I run the same (basic)
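A single-row primary-key UPDATE taking 36 seconds usually means the statement was waiting on a lock rather than doing work. On MySQL 5.5+, the InnoDB transaction tables in information_schema show who is blocking whom (a diagnostic sketch; not something suggested in the thread itself):

```sql
-- Which transaction is waiting, and which one is blocking it?
SELECT r.trx_id           AS waiting_trx,
       r.trx_query        AS waiting_query,
       b.trx_id           AS blocking_trx,
       b.trx_query        AS blocking_query
FROM information_schema.INNODB_LOCK_WAITS w
JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id;
```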

RE: best way to copy an InnoDB table

2013-07-02 Thread Rick James
The particular example given here is unsafe and slow. * Without an ORDER BY, you are not guaranteed that the chunks will be distinct. * If there are any INSERTs/DELETEs between chunk copies, you will get dups/missing rows for two reasons: the inserted/deleted rows, and the OFFSET is not quite
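A safer chunking pattern that addresses both objections is to walk the primary key instead of using an offset. A sketch, assuming an integer primary key named id (not named in the thread): repeat the statement, feeding the MAX(id) of each copied chunk back in as the new lower bound, until no rows are copied.

```sql
-- Keyset-based chunk copy: deterministic order, no OFFSET drift.
INSERT IGNORE INTO t1 (f1, f2, f3)
SELECT f1, f2, f3 FROM t2
WHERE id > @last_id        -- MAX(id) from the previous chunk; 0 to start
ORDER BY id
LIMIT 100000;
```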

Re: database performance worries

2013-07-02 Thread Andy Wallace
Thanks for the response: how large is your database? about 33GB, and growing how large is innodb_buffer_pool_size? from my.cnf: # You can set .._buffer_pool_size up to 50 - 80 % # of RAM but beware of setting memory usage too high innodb_buffer_pool_size=2048M innodb_additional_mem_pool_size
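To compare the 2G buffer pool above with the ~33GB of data, the on-disk footprint per storage engine can be read from information_schema (a generic sizing query, not from the thread):

```sql
-- Approximate data + index size per engine, in GB.
SELECT engine,
       ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 1) AS gb
FROM information_schema.TABLES
GROUP BY engine;
```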

Re: database performance worries

2013-07-02 Thread Singer Wang
On 2013-07-02 5:31 PM, Andy Wallace awall...@ihouseweb.com wrote: We are on a quest to improve the overall performance of our database. It's generally working pretty well, but we periodically get big slowdowns for no apparent reason. A prime example today - in the command line interface to

Re: database performance worries

2013-07-02 Thread Reindl Harald
On 03.07.2013 01:25, Andy Wallace wrote: Thanks for the response: how large is your database? about 33GB, and growing how large is innodb_buffer_pool_size? from my.cnf: # You can set .._buffer_pool_size up to 50 - 80 % # of RAM but beware of setting memory usage too high

Re: database performance worries

2013-07-02 Thread spameden
We are on a quest to improve the overall performance of our database. It's generally working pretty well, but we periodically get big slowdowns for no apparent reason. A prime example today - in the command line interface to the DB, I tried to update one record, and got: