Fetch ROW_COUNT() after each INSERT/UPDATE. Tally the values in @variables, if
you like. The information is not (I think) available after COMMIT.
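A minimal sketch of that tally, done from a session (table and column names are hypothetical):

```sql
SET @ins := 0, @upd := 0;

INSERT INTO t1 (f1) VALUES (1), (2);
SET @ins := @ins + ROW_COUNT();   -- rows inserted by the last statement

UPDATE t1 SET f1 = 3 WHERE f1 = 2;
SET @upd := @upd + ROW_COUNT();   -- rows actually changed by the last statement

COMMIT;
SELECT @ins, @upd;                -- the tallies survive; ROW_COUNT() itself does not
```

ROW_COUNT() reflects only the most recent statement, which is why the tally has to happen before COMMIT rather than after it.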
> -Original Message-
> From: Neil Tompkins [mailto:neil.tompk...@googlemail.com]
> Sent: Tuesday, July 02, 2013 4:30 AM
> To: [MySQL]
> Subject:
>
> > We are on a quest to improve the overall performance of our database. It's
> > generally working pretty well, but we periodically get big slowdowns for no
> > apparent reason. A prime example today - in the command line interface to
> > the DB, I tried to update one record, an
On 03.07.2013 01:25, Andy Wallace wrote:
> Thanks for the response:
>
>> how large is your database?
> about 33GB, and growing
>
>> how large is innodb_buffer_pool_size?
> from my.cnf:
># You can set .._buffer_pool_size up to 50 - 80 %
># of RAM but beware of setting memory usage too high
On 2013-07-02 5:31 PM, "Andy Wallace" wrote:
> We are on a quest to improve the overall performance of our database. It's
> generally working pretty well, but we periodically get big slowdowns for no
> apparent reason. A prime example today - in the command line interface to
> the DB, I tried
Thanks for the response:
> how large is your database?
about 33GB, and growing
> how large is innodb_buffer_pool_size?
from my.cnf:
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size=2048M
innodb_additional_mem_pool_siz
The particular example given here is unsafe and slow.
* Without an ORDER BY, you are not guaranteed that the chunks will be distinct.
* If there are any INSERTs/DELETEs between chunk copies, you will get
dups/missing rows for two reasons: the inserted/deleted rows themselves, and
the OFFSET no longer lining up with the rows it pointed at before.
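A safer pattern is to chunk on the primary key rather than on an OFFSET; a sketch, with hypothetical table and column names:

```sql
-- Each chunk is bounded by key values, so concurrent INSERTs/DELETEs
-- cannot shift the window the way they shift a LIMIT/OFFSET.
SET @last_id := 0;

INSERT INTO t1_copy (id, f1, f2)
SELECT id, f1, f2
FROM t1
WHERE id > @last_id
ORDER BY id
LIMIT 10000;

-- Remember where this chunk ended before running the next one:
SELECT @last_id := MAX(id) FROM t1_copy;
```

The ORDER BY on the key also makes each chunk deterministic, which the plain LIMIT/OFFSET form does not guarantee.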
On 02.07.2013 23:28, Andy Wallace wrote:
> mysql> update agent set number_of_emails = 5 where acnt = 'AR287416';
> Query OK, 1 row affected (36.35 sec)
> Rows matched: 1 Changed: 1 Warnings: 0
>
> 36 seconds to update one table? The primary key is `acnt`. If I run the same
> (ba
Actually, looks like I'm wrong.
For testing purposes I made an org "Comp&Me".
When searching for "Comp", it's shown on the list.
When searching for "Comp&", it's also shown, but so is
"Construction Company".
Then I changed the name of the test org to "Com&Me".
Searching for "Com", th
2013/07/02 12:29 +0100, Neil Tompkins
I have a number of INSERT and UPDATE statements in a MySQL Stored
Procedure, that works in the form of START TRANSACTION followed by COMMIT.
Also I am handling any EXCEPTION.
However, after calling COMMIT, how can I get the number of Rows that were
FULLTEXT (at least the MyISAM version) has 3 gotchas:
ft_min_word_len=4, stopwords, and the 50% rule
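A quick way to check each gotcha from a session (the table name is taken from the query earlier in the thread; the rest is a sketch):

```sql
-- 1) ft_min_word_len: words shorter than 4 characters are not indexed
--    by default, so MATCH against "AB" or "Com" can silently find nothing.
SHOW VARIABLES LIKE 'ft_min_word_len';

-- 2) Stopwords: very common words are excluded from the index entirely.

-- 3) The 50% rule: in natural-language mode, a term occurring in more than
--    half the rows matches nothing. BOOLEAN MODE is not subject to it:
SELECT name
FROM organization
WHERE MATCH(name) AGAINST('+comp*' IN BOOLEAN MODE);
```

Note that changing ft_min_word_len takes effect only after a server restart and a rebuild of the FULLTEXT index.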
> -Original Message-
> From: shawn green [mailto:shawn.l.gr...@oracle.com]
> Sent: Tuesday, July 02, 2013 10:21 AM
> To: mysql@lists.mysql.com
> Subject: Re: Full text search and & sign as
cgroups won't work for individual MySQL "users", only for mysqld as a whole.
Monitor the slowlog and help the naughty users fix their naughty queries.
> -Original Message-
> From: Rafał Radecki [mailto:radecki.ra...@gmail.com]
> Sent: Tuesday, July 02, 2013 3:07 AM
> To: mysql@lists.mysql
On Thu, Dec 1, 2011 at 5:54 PM, Miguel Angel Nieto wrote:
> You should check pt-archiver.
+1. It works very well for this type of job.
- Perrin
Another correction: searching for "Com", the test org is NOT
listed, but all others are.
Searching for "Com&", no results at all.
>
> Actually, looks like I'm wrong.
> For testing purpose I made an org
> "Comp&Me"
> When search for "Comp" it's gonna
> be shown on the list.
Hello,
(my response is not top-posted)
On 7/2/2013 12:50 PM, l...@afan.net wrote:
Another correction: searching for "Com", the test org is NOT
listed, but all others are.
Searching for "Com&", no results at all.
Actually, looks like I'm wrong.
For testing purpose I mad
Hi All.
I would like to limit resources available to a given user in mysql. I know
that there is https://dev.mysql.com/doc/refman/5.5/en/user-resources.html,
I also know that cgroups can be used at operating system level.
What are your experiences in limiting resources in mysql? I've used Percona
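For the per-account limits from that manual page, a sketch (the account name and the numbers are made up):

```sql
GRANT USAGE ON *.* TO 'reporting'@'localhost'
  WITH MAX_QUERIES_PER_HOUR 500
       MAX_UPDATES_PER_HOUR 100
       MAX_CONNECTIONS_PER_HOUR 60
       MAX_USER_CONNECTIONS 5;
```

These are per-account counters, and they say nothing about CPU or I/O consumption, which is why OS-level cgroups (per mysqld only) and slow-log policing come up as the alternatives.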
Another technique to avoid impact to the source database is to create your
target as MyISAM, pump your records into that (no ACID overhead), and at the
end:
ALTER TABLE mytable ENGINE=InnoDB;
The ALTER can take a while, but it will impose no strain on the source server
at all.
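A sketch of the whole sequence (the table name and columns are hypothetical):

```sql
-- Build the copy without transactional overhead.
CREATE TABLE mytable_copy (
  id INT NOT NULL PRIMARY KEY,
  f1 VARCHAR(100)
) ENGINE=MyISAM;

INSERT INTO mytable_copy (id, f1)
SELECT id, f1 FROM mytable;            -- plain reads against the source

-- Convert in place once the data is loaded; the rebuild happens entirely
-- on the target server and reads nothing further from the source.
ALTER TABLE mytable_copy ENGINE=InnoDB;
```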
On Tue, Jul 2, 2013 at
Hi to all,
I have this full text search query
SELECT name, org_id, address_id
FROM organization
WHERE org_active='Y'
  AND MATCH(name) AGAINST('AB&C*' IN BOOLEAN MODE)
and I'm not getting any results. And there IS an org "AB&C,
Inc."
My assumption is the ampersand sign as a part of the
Hi,
I have a number of INSERT and UPDATE statements in a MySQL Stored
Procedure, which works in the form of START TRANSACTION followed by COMMIT.
Also I am handling any EXCEPTION.
However, after calling COMMIT, how can I get the number of rows that were
affected, either INSERTED or UPDATED?
Thanks
Well, the easy way to chunk the inserts is by use of LIMIT. Here is what I
used for one of my projects:
INSERT IGNORE INTO t1 (f1, f2, f3)
SELECT f1, f2, f3 FROM t2 LIMIT 1000000, 1000000;
This inserts 1M records at a time, starting from the 1M-th record in t2, and
you can keep incrementing this offset as